Uncommon Descent Serving The Intelligent Design Community
Category

specified complexity

Functionally Specified Complex Information and Organization

Illustrating embedded specification and specified improbability with specially labeled coins

The reason the 500-fair-coins-heads illustration has been devastating to the materialists is a fact that had somewhat escaped everyone until Neil Rickert (perhaps unwittingly) pointed it out: the sides of the coin are distinguishable, but not in a way that biases the probability. This fact guarantees that chance cannot construct recognizable symbolic organization; it can only destroy it. In essence, the world of symbols (heads and tails) has become somewhat decoupled from the world of materials, and the world of specified information (in the form of recognizable configurations like all-coins-heads) can thus transcend material causes. If the coins were perfectly symmetric and had no markings letting you tell one side from the other, Read More ›

To recognize design is to recognize products of a like-minded process, identifying the real probability in question, Part I

“Take the coins and dice and arrange them in a way that is evidently designed.” That was my instruction to groups of college science students who voluntarily attended my extra-curricular ID classes sponsored by Campus Crusade for Christ at James Madison University (even Jason Rosenhouse dropped in a few times). Many of the students were biology and other science majors hoping to learn truths that are forbidden topics in their regular classes… They would each have two boxes, and each box contained dice and coins. They were instructed to randomly shake one box and then put designs in the other box. While they did their work, another volunteer and I would leave the room or turn our backs. After the students Read More ›

The paradox of almost definite knowledge in the face of maximum uncertainty — the basis of ID

When facing maximum uncertainty, it seems paradoxical that one can have great assurance about certain things. This has enormous relevance to ID because Darwinists will argue, “how can you be so certain of something when it is apparent there is great uncertainty in the system?” I will respond by saying, “when we have maximum uncertainty about which specific configuration 500 fair coins are in (by randomizing the coins in some vigorous fashion), we simultaneously have near certainty about which configurations they cannot plausibly be in — such as all-coins-heads or a pre-specified sequence….” When a process like a biotic soup maximizes uncertainty about the possible polymer sequences that can evolve, it gives us near certainty life will not evolve by Read More ›
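A minimal sketch of the arithmetic behind that near-certainty (my own illustration, not from the post): any single prespecified configuration of 500 fair coins, all-heads included, has probability 2^-500.

```python
from fractions import Fraction

# Probability of one prespecified configuration (e.g. all heads)
# for 500 independent fair coins.
p = Fraction(1, 2) ** 500

print(len(str(p.denominator)))  # 151 -- the denominator 2**500 has 151 digits
print(float(1 - p))             # 1.0 -- near certainty the tray is NOT in it
```

So maximum uncertainty about which configuration the tray is in coexists with effective certainty that it is not in any particular prespecified one.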

The Fundamental Law of Intelligent Design

After being in the ID movement for 10 years, and suffering through many debates, if someone were to ask me for the most fundamental law upon which the ID case rests, I would have to say it is the law of large numbers (LLN). It is the law that tells us that a set of fair coins randomly shaken will converge on 50% heads and not 100% heads. It is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental Read More ›
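The LLN claim about coins is easy to check empirically (a sketch of my own, not from the post):

```python
import random

random.seed(0)

def heads_fraction(n_coins: int) -> float:
    """Shake n fair coins once; return the fraction that land heads."""
    return sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins

# As the number of coins grows, the heads fraction converges on 0.5,
# not on 1.0 (all heads) -- the law of large numbers at work.
for n in (10, 100, 10_000, 1_000_000):
    print(n, heads_fraction(n))
```

With a million coins the observed fraction sits within a fraction of a percent of 0.5.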

Thermodynamics, Coin Illustrations and Design

The second law says when a cold object is in contact with a hot object, the two objects will eventually arrive at the same temperature, and once in equilibrium, one object will not spontaneously become colder again without an external agent. This illustrates that undirected natural forces will favor certain configurations of matter and energy, and that those configurations cannot be undone without an external agent. Here is another, simpler illustration. Start out with a tray of fair coins in the all-heads configuration. Shake the tray or do something so as to get the coins flipping. You’ll notice it never reverts to all heads. In fact for a large set of fair coins, the law of large numbers says the Read More ›
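A quick simulation (my own sketch, taking “shaking” to mean re-randomizing every coin) shows why the all-heads state never recurs:

```python
import random

random.seed(1)
N_COINS = 500
TRIALS = 1_000_000
ALL_HEADS = (1 << N_COINS) - 1  # bit i set means coin i landed heads

# "Shake" the tray a million times: each shake re-randomizes all 500
# coins, modeled as one 500-bit random word.
hits = sum(random.getrandbits(N_COINS) == ALL_HEADS for _ in range(TRIALS))

# Expected hits: TRIALS * 2**-500, indistinguishable from zero.
print(hits)  # 0
```

A million shakes recover all-heads exactly zero times; the expected count is about 10^-145.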

CSI and Maxwell’s Demon

“It is CSI that enables Maxwell’s demon to outsmart a thermodynamic system tending toward thermal equilibrium” (Intelligent Design, p. 159) HT: niwrad For those wanting to understand Maxwell’s demon, here is a great video! [youtube tqgvqeLybik] How does this apply to No Free Lunch? In my essay “simplified illustration of no free lunch”, I describe how a Darwinist could get a free lunch if the Darwinian mechanism could create necessary information out of thin air. Instead of the free energy that Maxwell’s demon could supposedly make, I invited a Darwinist to show he could get free information with his Darwinian demon, and thus a free lunch worth $100 from me. Of course, he failed. 🙂 The problem for Darwinism, like Read More ›

Specified Entropy — a suggested convention for discussion of ID concepts

In terms of textbook thermodynamics, a functioning Lamborghini has more thermal entropy than that same Lamborghini with its engine and other vital parts removed. But intuitively we view such a destruction of a functioning Lamborghini as an increase in entropy, not a decrease. Something about this example seems downright wrong… To fix this enigma, and to make the notion of entropy line up with our intuitions, I’m suggesting that the notion of “specified entropy” be used to describe the increase in disorganization. I derive this coined phrase from Bill Dembski’s notion of specified information. In the case of the Lamborghini getting its vital parts removed, the specified entropy goes up by exactly the amount that the specified Read More ›

Forgotten Creationist/ID Book endorsed by Nobel Prize Winner in Physics

There is a forgotten creationist book by engineer and physicist Robert Gange, PhD: Origins and Destiny, published in 1986. It is available for free online, but for how long, I do not know. It was pioneering, and anticipated arguments that would be found in ID for the next 27 years, and likely beyond. Gange worked in the field of cryophysics, so it is no surprise he writes with incredible insight regarding thermodynamics. His book is the only book written by a creationist that I agree with on the subject of thermodynamics, and he uses the so-called “New Generalized 2nd Law” to make his case. [the Kelvin-Planck version of the 2nd Law is a special case of the “New Read More ›

The Tragedy of Two CSIs

CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation. CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts. CSI, as used by others, is something more along the lines of the appearance of design. It’s typically along the same lines as the notion of “complicated” developed by Richard Dawkins in The Blind Watchmaker: complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random Read More ›

Ranking the information content of platonic forms and ideas

Consider the following numbers and concepts (dare I say platonic forms):

1. 1
2. 1/2
3. 1/9, or 0.111111…
4. Pi
5. Pi squared
6. The book War and Peace by Tolstoy
7. An approximate self-replicating von Neumann automaton (i.e. living cells)
8. The Omega Number, Chaitin’s Constant
9. Chaitin’s Super Omega Numbers

I listed the above concepts in terms of which I estimate are more information rich than others, going from lower to higher. The curious thing is that even though we can’t really say exactly how many bits each concept has, we can rank the concepts in terms of estimated complexity. Pi can be represented by an infinite number of digits, and thus by a far greater number of bits than are contained in Tolstoy’s War and Peace, but Read More ›
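One way to make such a ranking concrete (my own sketch, using compressed size as a rough, computable stand-in for descriptive complexity): the digits of 1/9 follow a trivial rule and compress enormously, while typical random digits barely compress at all.

```python
import random
import zlib

random.seed(2)

repeating = "1" * 10_000  # the digits of 1/9 = 0.111...
random_digits = "".join(random.choice("0123456789") for _ in range(10_000))

# Compressed size is a crude proxy for information content: a short
# rule ("print 1 forever") beats ten thousand literal digits.
print(len(zlib.compress(repeating.encode())))      # a few dozen bytes
print(len(zlib.compress(random_digits.encode())))  # thousands of bytes
```

The same ten thousand characters land at opposite ends of the compressibility scale, which is the intuition behind ranking concepts by complexity even when exact bit counts are out of reach.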

CSI Confusion: Remember the Mechanism!

A number of posts on Uncommon Descent have discussed issues surrounding specified complexity. Unfortunately, the posts and discussions that resulted have exhibited some confusion over the nature of specified complexity. Specified complexity was developed by William Dembski, and deviation from his formulation has led to much confusion and trouble. I’m picking a random number between 2 and 12. What is the probability that it will be 7? You might say it was 1 in 11, but you’d be wrong, because I chose that number by rolling two dice, and the probability was 1 in 6. The probability of an outcome depends on how that outcome was produced. In order to calculate a probability, you must always consider a mechanism and an Read More ›
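The dice arithmetic is easy to verify by enumerating all 36 equally likely outcomes (a sketch of mine, not from the post):

```python
from collections import Counter
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

p_seven = Fraction(sums[7], 36)
print(p_seven)  # 1/6 -- six of the 36 outcomes total 7
# The naive 1/11 would only hold if the eleven sums 2..12 were
# equally likely, which the dice mechanism does not deliver.
```

The probability attaches to the mechanism (two dice), not to the bare list of possible outcomes.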

Dawkins Weasel vs. Blind Search — simplified illustration of No Free Lunch theorems

I once offered to donate $100 to Darwinist Dave Thomas’ favorite Darwinist organization if he could write a genetic algorithm to solve a password. I wrote a 40-character password on paper and stored it in a safe place. To get the $100, his genetic algorithm would have to figure out what the password was. I was even willing to let him have more than a few shots at it. That is, he could write an algorithm which would propose a password, it would connect to my computer, and my computer that had a copy of the password would simply say “pass or fail”. My computer wouldn’t say “you’re getting closer to or farther from the solution”; it would merely say “pass or Read More ›
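To see why a bare pass/fail oracle dooms such a search, consider the size of the space (my own back-of-envelope; the 64-symbol alphabet is an assumption, since the post only specifies 40 characters):

```python
ALPHABET = 64   # assumed alphabet size; the post doesn't state one
LENGTH = 40

search_space = ALPHABET ** LENGTH  # 64**40 == 2**240 candidate passwords

# A pass/fail oracle gives no gradient for selection to climb: each
# guess succeeds with probability 1/search_space, so the expected
# number of guesses is on the order of search_space itself.
print(f"search space is 2**{search_space.bit_length() - 1}")
```

Without a “warmer/colder” signal, a genetic algorithm is no better off than blind search over those 2^240 candidates, which is the point of the wager.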

The paradox in calculating CSI numbers for 2000 coins

Having participated at UD for 8 years now, I find that criticizing Darwinism and OOL over and over again is like beating a dead horse. We only dream up more clever, effective, and creative ways to beat the dead horse of Darwinism, but it’s still beating a dead horse. It’s amazing we still have a readership that enjoys seeing the debates play out given we know which side will win the debates about Darwin… Given this fact, I’ve turned to some other questions that have been of interest to me and readers. One question that remains outstanding (and may not ever have an answer) is how much information is in an artifact. This may not be Read More ›

Nuances in understanding NFL Theorems — some pathological “counterexamples”

NFL theorems are stated in terms of the average performance of evolutionary algorithms, but ID proponents must be mindful whenever the word AVERAGE is used, because it implies there may be above-average performers, and I’m surprised Darwinists have been slow to seek refuge in the possibility of above-average outcomes. To illustrate, the house edge (casino edge) in the dice game craps is a mere 1.41% for the “passline” wager. So on average we expect the casino to win, but not immutably. I asked one pit boss, “what was the longest winning streak by the players?” He said something on the order of 15 wins in a row, and the casino lost over $140,000 in a few hours Read More ›
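The quoted 1.41% passline edge can be derived exactly from the rules of craps (my own sketch: a come-out 7 or 11 wins, a 2, 3, or 12 loses, and any other sum becomes a “point” that must repeat before a 7):

```python
from fractions import Fraction

# Ways to roll each sum with two fair dice (out of 36).
ways = {s: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s)
        for s in range(2, 13)}

# Come-out naturals: 7 or 11 win immediately.
p_win = Fraction(ways[7] + ways[11], 36)
for point in (4, 5, 6, 8, 9, 10):
    # Point established; it must then appear before a 7 does.
    p_win += Fraction(ways[point], 36) * Fraction(ways[point], ways[point] + ways[7])

house_edge = 1 - 2 * p_win      # expected loss per unit on an even-money bet
print(p_win)                    # 244/495
print(float(house_edge))        # ~0.01414, the familiar 1.41%
```

The player wins with probability 244/495, just under half, so each wager leaks 7/495 of its stake to the house on average; streaks of above-average luck, like the pit boss’s story, ride on the variance around that mean.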

sRNA for Quorum Sensing: Evidence for CSI?

Bacteria demonstrate intra-species communication that is species specific, pairing a partner receptor with a communication molecule. Bacteria are also “multilingual,” with a generic trade language for interspecies communication. Bacteria control tasks via signal-producing and signal-receiving receptors with a signal carrier. The tasks bacteria conduct depend on the concentration they sense of their own species versus the generic concentration of other species; e.g., bacteria control pathogenicity with quorum sensing. The small RNAs (sRNA) required for these control mechanisms are now beginning to be deciphered. See below. Question:
Did bacteria “invent” their communication and control methods via evolutionary stochastic processes?
Or do these constitute Complex Specified Information and thus evidence design? Read More ›