Uncommon Descent Serving The Intelligent Design Community
Category: Logic and Reason › Logic and First Principles of right reason

L&FP, 71: The island of function, fitness peak trap

We have been using a 3-D printer-constructor formalism, and now we can use it to see how hill climbing leads to local trapping. Again, the core formalism: Now, let us modify by allowing some sort of local random mutation to d(E) case by case within an n-run, now seen as a generation, so E1 to En are all incrementally different and in effect form a ring around E in a fitness landscape. From this, we can see a survival filter that on average selects for superior performance. This leads, naturally, to hill-climbing, perhaps even to several related peaks in a chain on an island of function. But now we see that hill climbing leads to peak trapping, Read More ›
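The trapping effect described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the two-peak fitness landscape, the step size, and the "ring" of mutants per generation are all assumed values chosen for the demonstration, not anything from the original formalism.

```python
import math
import random

def fitness(x):
    # Hypothetical two-peak "island of function": a low local peak near
    # x = 1 and a much higher global peak near x = 4, separated by a valley.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

def hill_climb(x, fitness, step=0.1, generations=200, ring=8, seed=0):
    # Each generation, a "ring" of slightly mutated variants E1..En is
    # produced around the current design x; the survival filter keeps the
    # fittest variant (or the parent, if no mutant beats it).
    rng = random.Random(seed)
    for _ in range(generations):
        candidates = [x + rng.uniform(-step, step) for _ in range(ring)] + [x]
        x = max(candidates, key=fitness)
    return x

# Start near the lower peak: selection climbs it, then stalls, because every
# small mutation toward the higher peak at x = 4 first goes downhill in
# fitness and so is filtered out.
trapped = hill_climb(0.0, fitness)
```

After 200 generations the climber sits on the local peak near x = 1, never reaching the far higher peak at x = 4 — the valley between them filters out every intermediate step.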

L&FP, 70: Exploring cosmological fine tuning using the idea of a 3-D, universal printer and constructor (also, islands of function)

Last time, we looked at how Kolmogorov Complexity can be used to quantify the information in functionally specific complex organisation, by using the formal idea of a 3-D universal printer and constructor, 3-DP/C: . . . it is but a short step to imagine a universal constructor device which, fed a compact description in a suitable language, will construct and present the [obviously, finite] object. Let us call this the universal 3-D printer/constructor, 3-DP/C. Thus, in principle, reduction of an organised entity to a description in a suitably compact language is formally equivalent in information terms to the object, once 3-DP/C is present as a conceptual entity. So, WLOG, reduction to compact description in a compact language d(E) is readily Read More ›
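The equivalence claimed above — that a compact description d(E), plus a constructor, stands in for the object E itself — can be sketched with a toy "constructor". The run-length description language here is purely illustrative; a real 3-DP/C would need a far richer language, but the information-theoretic point is the same.

```python
def construct(description):
    # A toy "universal constructor": expand a compact description d(E)
    # (here, a list of run-length pairs) back into the full object E.
    # Hypothetical illustration of the 3-DP/C idea, not a real
    # fabrication language.
    return "".join(ch * count for ch, count in description)

# d(E): a 2-token compact description of a 40-character object
d_E = [("A", 30), ("B", 10)]
E = construct(d_E)

# Given the constructor, holding d(E) is informationally equivalent to
# holding E itself -- and d(E) is far shorter than E.
assert E == "A" * 30 + "B" * 10
```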

L&FP, 69: A way to understand Functionally Specific Complex Organisation and/or associated Information [FSCO/I] i/l/o Kolmogorov-Chaitin Complexity

It seems that it is exceedingly hard for some to understand what FSCO/I is about. In responding to an objector, I wrote as follows just now, and think it is worth headlining for reference: Where, K-Complexity is summarised by Wikipedia, as a first level point of reference that would have been immediately accessible all along: <<In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size Read More ›
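True Kolmogorov complexity is uncomputable, but a standard, computable stand-in for the "shortest program" idea is compressed length, which gives an upper bound. A minimal sketch, assuming nothing beyond the Python standard library:

```python
import os
import zlib

def compressed_len(s: bytes) -> int:
    # zlib-compressed length is a crude, computable *upper bound* on the
    # Kolmogorov complexity of s (the true K-complexity is uncomputable).
    return len(zlib.compress(s, level=9))

ordered = b"AB" * 500       # highly ordered: a very short "program" suffices
random_ = os.urandom(1000)  # random noise: essentially incompressible

# The ordered string compresses to a tiny fraction of its 1000 bytes; the
# random string barely compresses at all -- matching the K-complexity
# intuition that order = short description, randomness = long description.
```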

L&FP, 68: Cognitive Dissonance and fallacies of projection etc

It is a sub-study of logic, to address fallacies. Accordingly, as it has come up, it seems helpful to highlight cognitive dissonance and certain associated fallacies. First, [HT: Montecinos et al, fair use] here is a recent framework for cognitive dissonance: This is a simple but powerful model. As an example, it has been argued that certain free trial period software, by involving the user in considerable effort to register and use the product, then shifts attitudes towards reluctance to give up the product. Of course, many attitude, thought, belief or behaviour changes can be influenced by the need to reduce inner pain, and some of these are more justifiable in the cold light of day than others. However, some Read More ›

L&FP, 67: So-called “critical rationalism” and the blunder of denying [defeat-able] warrant for knowledge

IEP summarises: “Critical Rationalism” is the name Karl Popper (1902-1994) gave to a modest and self-critical rationalism. He contrasted this view with “uncritical or comprehensive rationalism,” the received justificationist view that only what can be proved by reason and/or experience should be accepted. Popper argued that comprehensive rationalism cannot explain how proof is possible and that it leads to inconsistencies. Critical rationalism today is the project of extending Popper’s approach to all areas of thought and action. In each field the central task of critical rationalism is to replace allegedly justificatory methods with critical ones. A common summary of this is that it replaces knowledge as justified, true belief, with “knowledge is unjustified untrue unbelief.” That is, we see here Read More ›

L&FP, 66: String — yes, s-t-r-i-n-g — data structures as key information storage arrays (thus the significance of DNA and mRNA)

One of the more peculiar objections to the design inference is the strident, often repeated claim that the genetic code is not a code, and that DNA and mRNA are not storing algorithmic, coded information used in protein synthesis. These are tied to the string (yes, s-t-r-i-n-g) data structure, a key foundational array for information storage, transfer and application. So, it seems useful to address the string as a key first principles issue, with the onward point being that strings of course can and do store coded information. Let us begin with, what a string — yes, s-t-r-i-n-g — is (though that should already be obvious from even the headline): Geeks for Geeks: A string is a sequence of characters, Read More ›
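The point that a string can carry algorithmic, coded information is easy to show concretely: read an mRNA string three characters (one codon) at a time and map each codon to an amino acid. The table below is a tiny subset of the standard genetic code (AUG = Met/start, UUU = Phe, GGC = Gly, UAA = stop), kept short purely for illustration.

```python
# A string data structure carrying coded information: stepping through an
# mRNA string codon by codon, as the ribosome effectively does.
CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # stop codon
}

def translate(mrna: str) -> list:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):  # read the string 3 chars at a time
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCUAA"))  # → ['Met', 'Phe', 'Gly']
```

The linear, discrete, positionally indexed character sequence is exactly what makes the string structure (and DNA/mRNA) suited to storing coded, algorithmically processed information.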

L&FP, 65g: Quantum vs classical digital computing — hope or hype? (Or, superposition?)

Quantum computing, of course, has been a hot sci-tech topic in recent years, what with stories as to how it will obsolete the large prime number product encryption schemes that give us some of our strongest codes, and stories of vast computing power exponentially beyond our hottest supercomputers today. With hot money being poured in by the wheelbarrow load. (Well, maybe buckets of bits as most serious transactions are digital nowadays. Itself already a problem . . . security is an issue.) What are we to make of this? (My bet is, superposition. Itself, a core quantum issue.) Reader and commenter, Relatd, has given us a useful, first level video: (A good place to begin, a useful survey with some Read More ›

L&FP, 65f: It’s all tangled up — quantum entanglement (vs how we tend to talk loosely)

Arvin Ash poses a macro scale parallel to entanglement (while using a Stern-Gerlach apparatus): Vid: Ash highlights, of course, that once entangled, particles have superposed wave functions leading to inherent non locality. So, spooky action at a distance overlooks that non locality. And as with the gloves, Alice needs to know her particle is part of an entangled pair to freely infer Bob got the other one, so to speak. Information has not evaded the speed of light limit. Translation,* our concept of space needs to be er, ah, uh, quantum adjusted. That was already lurking in low intensity beam interference and superposition. KF *PS, added to show certain objectors that “translated” need not be pernicious. PPS, DV, quantum computing Read More ›
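The gloves point — perfect correlation without usable signalling — can be sketched numerically. The sketch below samples joint outcomes of the Bell state (|00⟩ + |11⟩)/√2 measured in the computational basis, where P(0,0) = P(1,1) = 1/2; the sample size and seed are arbitrary choices for the demonstration.

```python
import random

def measure_bell_pair(rng):
    # Sample one joint measurement outcome from (|00> + |11>)/sqrt(2) in
    # the computational basis: the two outcomes are always equal.
    bit = rng.randint(0, 1)
    return bit, bit  # (Alice's result, Bob's result)

rng = random.Random(1)
pairs = [measure_bell_pair(rng) for _ in range(10_000)]

correlated = all(a == b for a, b in pairs)          # perfect correlation
alice_ones = sum(a for a, _ in pairs) / len(pairs)  # Alice's local marginal

# As with the gloves: the correlation is perfect, yet Alice's own local
# statistics are just a fair coin (~0.5) no matter what Bob does, so no
# usable signal reaches her -- she learns of Bob's result only by later,
# light-speed-limited classical comparison.
```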

L&FP, 65e: Imaging light as a “wavicle” — both wave and particle

. . . using standing waves of light, vid: Here is a snapshot: By setting up standing waves and using an electron beam to interact with them, a map of photon locations and the wave pattern could be imaged. As an article explains: Until now [–> c 2015], scientists have only ever been able to capture an image of light as either a particle or a wave, and never both at the same time. But a team from the École Polytechnique Fédérale de Lausanne in Switzerland have managed to overcome the obstacles that stood in the way of previous experiments by using electrons to image light in this very strange state. The key to their success is their unusual experiment design. Read More ›

L&FP, 65: So, you think you understand the double slit experiment? (HT, Q & BA77)

So, here we go: And, the rise of solid state laser pointers makes this sort of exercise so much easier, BUT YOU MUST BE CAREFUL NOT TO GET SUCH A BRIGHT SOURCE INTO YOUR EYE AS THIS MAY CAUSE RETINAL BURNS THUS BLIND SPOTS. (I recall, buying and assembling a kit He-Ne laser to have this exercise for my High School students. We had a ball, using metre sticks stuck to a screen with blu-tack, to observe and measure effects from several metres away.) So, now, what about, electrons: Notice, the pattern here builds up statistically, one spot at a time. Then, HT BA77 way back, here is Dr Quantum: Now, if you think you have it all figured out, Read More ›
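For a classroom setup like the one described — a laser pointer and metre sticks — the small-angle bright-fringe spacing is Δy = λL/d. The numbers below are hypothetical but typical, chosen to show the fringes land comfortably in the millimetre range a metre stick can resolve.

```python
# Small-angle double-slit fringe spacing: delta_y = wavelength * L / d.
# All three values are assumed, illustrative numbers.
wavelength = 650e-9  # m, typical red laser-pointer wavelength
d = 0.25e-3          # m, slit separation
L = 3.0              # m, slit-to-screen distance

delta_y = wavelength * L / d
print(f"{delta_y * 1000:.1f} mm")  # → 7.8 mm, easily read off a metre stick
```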

L&FP, 64: The challenge of self-referentiality on hard questions (thus, of self-defeating arguments)

One way to define Philosophy, is to note that it is that department of thought that addresses hard, core questions. Known to be hard as there are no easy answers. Where, core topics include metaphysics [critical analysis of worldviews on what reality is, what exists etc], epistemology [core questions on “knowledge”], logic [what are the principles of right reason], ethics/morals [virtue, the good, evil, duty, justice etc], aesthetics [what is beauty], and of course meta issues emerging from other subjects such as politics, history, Mathematics, Theology/Religion, Science, Psychology, Medicine, Education etc. As we look at such a list, we can see that one reason why these are difficult is that it is very hard to avoid self-referentiality on such topics, Read More ›

Origenes on the self-defeating incoherence of the [hyper-]skeptic

Origenes is on fire these days, so let’s headline: [Origenes, emergence play thread, 57:] The skeptic wants to criticize, but he doesn’t want to be criticized himself. We all make statements of belief, skeptics included. But the skeptic posits a closed circle in which no beliefs are justified. Yet at the same time, he arrogates to himself a position outside of this circle by which he can judge the beliefs of others, a move he denies to his opponents. Since the raison d’être of his thesis is that there is no outside of the circle, he does not have the epistemic right to assume a position independent of it, and so his belief about the unjustifiability of beliefs or reasoning Read More ›

L&FP, 63: Do design thinkers, theists and the like “always” make bad arguments because they are “all” ignorant, stupid, insane or wicked?

Dawkins’ barbed blanket dismissiveness comes up far too often in discussions of the design inference and related themes. Rarely explicitly, most often by implication of a far too commonly seen no-concessions, selectively hyperskeptical policy that objectors to design too often manifest. It is time to set this straight. First, we need to highlight fallacious, crooked yardstick thinking (as exposed by naturally straight and upright plumb-lines). And yes, that classical era work, the Bible, is telling: Notice, a pivotal point here is self-evident truths. Things similar to 2 + 3 = 5: Notoriously, Winston Smith in 1984 is put on the rack to break his mind to conform to The Party’s double-think. He is expected to think 2 + 2 Read More ›

At Reasons.org: “I Think, Therefore It Must Be True,” Part 2: The Science of Certainty

winsome argument towards others for its validity. One aspect of humility would be a willingness to honestly consider all the evidence, including evidence that is contrary to our presuppositions. Read More ›