Uncommon Descent Serving The Intelligent Design Community

Complexity, Specification, Design Inference, and Designers

I often see misunderstanding of what ID is about. It’s about inferring design by critical analysis of a pattern and the ways that pattern could have come to exist. I find a comparison with a lottery to be the easiest way to understand this.

Suppose there is a state lottery in which, each month for 12 consecutive months, 10 million tickets are sold and one winning ticket is drawn at random. Obviously there must be 12 winners at the end of the year. While each winner beats odds of 10 million to 1, there's nothing unusual about that, as someone must beat the odds each time.

Now suppose that the 12 winners are all siblings in order from oldest to youngest.

This lottery result constitutes a pattern.

First of all, we have complexity in the pattern. The odds of any particular sequence of 12 winners are 1 in 10^84 (that's a 1 followed by 84 zeroes). Any single pattern drawn from a space of trillions upon trillions of possible patterns is complex. But complex things like this happen all the time, because the result must be one of those many sequences. A sequence of 10 coin flips, no matter the result, is not complex, as there are only 2^10 = 1024 possible results. This is roughly how we define complexity. Complex results happen all the time and in themselves are no indication of design.
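For readers who want to check the arithmetic, here is a minimal Python sketch (my own illustration, not part of the original argument) that recomputes both figures:

    # 12 monthly drawings, 10 million tickets each: any particular
    # sequence of 12 winners has probability (1/10^7)^12 = 1/10^84.
    tickets_per_month = 10_000_000
    months = 12
    possible_sequences = tickets_per_month ** months
    print(f"possible winner sequences: 10^{len(str(possible_sequences)) - 1}")  # 10^84

    # 10 coin flips: only 2^10 = 1024 possible outcomes, so no single
    # outcome is complex in the sense used here.
    print(2 ** 10)  # 1024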

Next, the pattern has specification. The pattern conforms to an independently given specification. In this case, "siblings from the same family, in order from oldest to youngest" is the independently given specification.

Now we have identified the lottery result as a complex specified pattern (or complex specified information, if you will). This is a reliable indicator of design. The more complex the result and the more definitive the pattern, the more reliable the design inference.

No matter how convincingly we are assured that the lottery was secure from cheating, no reasonable person will believe that no cheating was involved. So we can rest assured that the result of the lottery was almost certainly not random but was the result of design (cheating; a rigged drawing).

However, even though we know the result was rigged we have no clue how it was rigged (the mechanism) nor who did the rigging (the designer).

ID is the theory that certain patterns in nature exhibit specified complexity that can only reasonably be attributed to design. ID does not and cannot reveal how the design was accomplished nor what entity or entities did the designing. ID is nothing more or less than design inference based upon the high improbability of independently given patterns arising by chance.

Now let’s quickly look at the flagellum. There’s no room for debate about complexity. It’s a precise arrangement of millions of protein molecules from a set of dozens of different proteins, each protein itself a complex pattern. There’s little room for debate that it conforms to an independently given pattern. It’s a propulsion device. Where there is room for debate is in what Bill Dembski calls “probabilistic resources”. These are the resources that “chance” (or unintelligent cause) has to draw upon in forming the pattern. This is why ID seems to be an attack on mutation & selection. Mutation & selection are the leading known probabilistic resource that could form the specified complexity of the flagellum.

Logically, one can never prove a negative. ID proponents will never be able to prove that some unknown probabilistic resource wasn't the source of design in the flagellum. However, this is a problem with nearly every hypothesis in science, and it's why you often hear that all of science is tentative. Some bits are just more tentative than others. This is why most philosophers of science say a hypothesis has to be, at least in principle, falsifiable. Even if we can't prove something is true, as long as we can in principle prove it false, it's science. The ID hypothesis of the flagellum is falsifiable: in principle, a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory.

The greater question in my mind regarding falsifiability is whether there's any method, even in principle, of falsifying a hypothetical neoDarwinian pathway for the flagellum. The only real contender for falsification is a design inference! So you see, if ID didn't exist, neoDarwinists would have to invent it just so they would have an in-principle method of falsifying random mutation plus natural selection as the creator of things like the flagellum.

Comments
Dave Scot, your lottery analogy is similar to something that had a big impact on me. I think it was Michael Behe who explained that the odds are about 1 in 10^125 of a random "coming together" of even a simple protein made up of just 100 amino acids [with 20 different amino acids to choose from, given that they must all have peptide bonds and all be "left" shaped; I am sorry I don't know the correct terminology]. To most of us non-scientists and non-mathematicians, the number 10^125 does not convey a sense of its size. We need to 'see' how big that number is, so I "translated" the number into something regular people might make sense of.

First, create 57 stacks of cards, each stack containing three decks of cards, each deck a different color [red, yellow, and blue]. Thus, each stack contains 156 unique cards. Take a card from each stack. The odds of correctly drawing 57 straight cards in a certain pre-specified order, such as 57 straight blue aces of hearts {wouldn't that essentially be 'specified complexity'?}, are about 1 in 10^125.

Now design a supercomputer that can play this poker game. How many computers, playing the game how many times a second, for how long, will it take to generate 10^125 hands?

1. Make each computer really fast, able to play one trillion hands per second [10^12].
2. Make one computer for every neutron and proton in the universe [I understand that's about 10^85].
3. Have these computers run continuously since the universe began [10^17 seconds].

That gets us 'only' 10^114 hands total. We need more universes!! Or more time. Or faster computers. Or all three. Given that, who would bet their life on an accidentally-stumbled-together protein molecule?

JohnLiljegren
February 22, 2006, 09:32 PM PDT
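A quick check of the arithmetic in the comment above (my own sketch; the hands-per-second rate, the 10^85 particle count, and the 10^17-second age of the universe are the commenter's assumptions):

    import math

    # Odds of drawing 57 pre-specified cards, one from each 156-card stack.
    cards_per_stack = 3 * 52                       # three decks per stack = 156
    odds_exponent = 57 * math.log10(cards_per_stack)
    print(f"1 in 10^{odds_exponent:.1f}")          # ~1 in 10^125.0

    # Total hands dealt: computers x hands/second x seconds elapsed.
    hands_exponent = 85 + 12 + 17                  # = 114, i.e. 10^114 hands
    print(f"10^{hands_exponent} hands: short of 10^125 by a factor of 10^{125 - hands_exponent}")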
Irving, that's the point. The complexity of the signal depends on your chance hypotheses. Under a white noise hypothesis, any signal of sufficient length has a probability of less than 1/10^150. As for specification, even a signal of constant amplitude and frequency exhibits a recognizable pattern, kind of like flipping millions of heads in a row. So, under a white noise hypothesis, a constant signal warrants a design inference. The design inference of CSI is only as good as our chance hypotheses, which in some cases are pretty arbitrary.

secondclass
February 22, 2006, 08:50 PM PDT
Secondclass, which natural cyclical radio signal has CSI in excess of 1/10^150?

Irving
February 22, 2006, 06:29 PM PDT
Dave, thanks for your response. Your point regarding the epistemic limitations of science is a good one. I can't speak to the biology aspect, so I'll take your word for it that there are no known non-telic processes that result in biological systems. I assume that we also agree on the fact that there are no known _telic_ processes that result in biological systems. So in biology, as in the pulsar example above, we're faced with the question of how to formulate meaningful chance hypotheses from a dearth of information. The question is vital to the reliability of CSI-based design inference.

Under a uniform-noise chance hypothesis, cyclical radio signals exhibit CSI. The resulting design inference turns out to be a false positive, and not because of some freakish Gettier counterexample, but because of the unremarkable discovery of yet another non-telic pattern generator - a natural computational process, if you will. The abundance of such processes in nature, which are intractable as an aggregate, would seem to render CSI-based design theory less reliable than other scientific theories that can be negated only by extraordinary new evidence. As always, I hope someone will correct my misconceptions.

secondclass
February 22, 2006, 04:45 PM PDT

Disclosure: I am not a statistician, nor am I an expert on Dembski's approach, having read only his online articles. I hope that neither my rookie status nor the length of this post will disqualify me from discussion, as I'm eager to learn.

I submit that complexity calculations, in order to be meaningful, must be based on the aggregate of all relevant chance hypotheses, and that care should be taken to include any plausible hypotheses that posit a dependent relationship between specification and event. Furthermore, I submit that complexity calculations are meaningful only to the degree that the underlying chance hypotheses are based on knowledge rather than assumption.

The lottery scenario is a case in which we have excellent knowledge of the processes involved, since lotteries are designed and operated by humans. We know, for instance, that a uniform distribution is presumably designed into the selection procedure. We also know that information regarding family affiliation is typically far removed from the lottery operation, so the odds of such information being inadvertently incorporated into the selection procedure are negligible. (In real life, we would try to increase our knowledge by asking questions such as, "Did the whole family go to Kwik-E-Mart and buy their tickets together?" For this example, I'll assume that we're restricted to the information stated in the scenario description.) Based on our knowledge, it seems reasonable to consider only one chance hypothesis - a fair lottery. As Dave explained, this hypothesis confers very low probability on the outcome, so the complexity condition is met. Since the specification condition is also met, we should infer intentional cheating.

Now consider a case in which our knowledge is lacking. Suppose that we receive an RF signal from space that exhibits a cyclical on-off-on-off pattern, and suppose further that pulsars have not yet been discovered. It's clear that a specified pattern is manifest, and if we consider only a uniform noise hypothesis, then the long string 10101010... easily meets the complexity condition. (Note that the universal distribution gives us the opposite result, but I've never seen the universal distribution used in the CSI approach.) CSI indicates design in this case as in the case above, but here the indication is unwarranted because our set of chance hypotheses has very little informational basis.
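A minimal sketch (my own illustration, under the uniform-noise assumption just described) of how short a signal can be and still cross Dembski's 1/10^150 universal probability bound:

    import math

    # Under a uniform ("white noise") hypothesis, every n-bit string is equally
    # likely, so the alternating string 101010... is exactly as improbable as
    # any other string of the same length: P = 2^-n.
    def uniform_noise_probability(n_bits: int) -> float:
        return 2.0 ** -n_bits

    # Smallest n for which 2^-n drops below 10^-150.
    n = math.ceil(150 / math.log10(2))             # 499 bits
    print(n, uniform_noise_probability(n))         # 499 ~6.1e-151

So under that hypothesis, any specific received signal of roughly 500 bits or more "exhibits CSI" - which is exactly the arbitrariness at issue.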

Note that the correct but uncontemplated hypothesis in this case ties the specification to the event by a non-telic process, resulting in a high probability for the specified pattern. Processes like this are abundant in nature, but the CSI analyses that I've seen don't account for them unless they're already known to be part of the causal story. I hope that someone can clarify this issue for me. Thanks.

The lottery example was only intended to illustrate the concepts of specified complexity and design inference. I went on to explain probabilistic resources and the difficulties associated with fully delineating them in complex biological machinery. Given that we have no direct or indirect evidence that chance mechanisms had anything to do with the creation of novel cell types, tissue types, organs, or body plans, it is difficult to assign them any plausible chance at all. That's the basic problem with neoDarwinian evolution - it attempts to assign a mechanism for creative events which were not observed in the past and cannot be repeated via experiment. -ds

secondclass
February 22, 2006, 01:42 PM PDT

DaveScot, there is one problem I see with your lottery analogy. The analogy begins with unstated foreknowledge of what kinds of things might be able to influence a lottery, human interference being one such influence. Because that influence is ruled into your argument by the existence of people at many points in the lottery system (sales, drawing, distribution, etc.), it is a justified argument. Other influences which might cause a non-random sequence could also be present, and we could expect a different specified outcome from them. Unevenly weighted lotto balls might cause certain numbers to be selected more frequently, for example. However, the pattern you suppose points us in the direction of human interference only because it is the kind of pattern we expect from our previous experiences with human sources. Your argument for a design inference from the CSI is correct, from that point.

However, that justification does not exist for the flagellum or other IC biological systems. In order to infer design for a biological system, which came about at a time when no human agent was demonstrably present, we must first rule in the presence of a designer and know something about what kind of design would be expected. We know of the existence of naturally selective environmental pressures, and we know that there are many kinds of "designs" that evolution can bring about. Therefore, evolution is very hard to rule out as a possible designer. The existence of CSI in biological forms could, logically, be pointing us towards an evolutionary pressure in many cases. However, to differentiate the presence of a designer apart from evolution, we must first establish what kinds of designs that designer would employ. But can anyone do that?

Did everyone just stop reading when I got down to explaining there is room for debate in probabilistic resources? -ds

curtrozeboom
February 22, 2006, 07:16 AM PDT

ds:

Of course it's not a scientific argument. In my experience, most ID supporters are born-again creationists (some present company may be excepted), and that is exactly the sort of argument they make. The only lame thing about court orders is that they come about when ID supporters try to take the fast track to scientific acceptance straight into school textbooks.

Oh, give me a major break. The Cobb County case was about a textbook sticker that didn't mention ID and didn't mention religion, its only crime being that it called evolution a theory that should be carefully studied and critically considered. The courts are giving evolution the fast track to become scripture is what's happening. -ds

Joseph:
I know science is not about "proving" theories. I'm a scientist. It's about hypothesis testing. The way the point was phrased above - "a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory" - is not a test of ID. It is a test of the particular evolutionary model proposed. If the test of this alternative hypothesis rejects the hypothesis, then ds implies that this supports a null hypothesis of "intelligent design". The true null hypothesis would simply be "did not come about by this evolutionary pathway".

What I want to see is a test of ID that can possibly support Ha: this thing was intelligently designed, and rejects the Ho: this thing wasn't.

So what repeatable test was performed that said "Ha" the eukaryote nucleus was the result of random mutation and natural selection? [sound of crickets chirping] NeoDarwinian dogmatists have such a double standard. You can't possibly not see these things I've mentioned after it's brought to your attention. -ds

George
February 22, 2006, 05:58 AM PDT
George: How does the existence of an alternative hypothesis (the neoDarwinian pathway) falsify the ID hypothesis? Couldn't the designer just have made it *look* like the flagellum evolved? If it could be demonstrated that unintelligent, blind/undirected processes can account for something, then Occam's Razor slices off the requirement of an intelligent designer. Remember, science is not about "proof". It is about the best explanation/inference based on the available data.

Joseph
February 22, 2006, 04:39 AM PDT

Thanks for the explanation. The only concern I have is that with the lottery, and most other examples I have seen used, it is possible to reliably estimate the probability of the event occurring by chance. Even in the case of Mount Rushmore this may be possible by studying erosion patterns. However, in the case of biological systems I don't think our current knowledge allows accurate estimates of the probability. I may be wrong, so please feel free to correct me.

In regard to falsifiability, you may not be able to falsify a particular pathway unless you can calculate that the pathway couldn't conceivably have occurred. However, there are several cases where it may be possible to falsify evolution/conclude design. A good example is the glow-in-the-dark pigs (http://news.bbc.co.uk/2/hi/asia-pacific/4605202.stm). Here we have an example of an entirely new gene appearing in a population; not only that, but a comparative genomic analysis will show that it does not appear in other populations of pigs or other mammals, and it is in fact a jellyfish gene. In this case we would be able to reliably infer design.

In this case we would be able to reliably infer design. No Chris, you can't. Didn't you get the memo A New Paradigm for Biology? I explained that the difficulty with design inference is that there can never be proof that every probabilistic resource has been factored into the equation. I also explained that this isn't a unique problem for ID and is why all of science is tentative - we can never know everything. ID can't be held to a higher standard. Its opponents don't get to have their cake and eat it too. -ds

Chris Hyland
February 22, 2006, 03:50 AM PDT

Remember the time when the New York pick-three lotto had 9-1-1 as the three numbers? The amazing thing is that it happened on Sept. 11th, one year after the Twin Tower attack.

Very good! There's an independently given pattern there. The result, however, isn't very complex, as on any given day the odds would be 1 in 1,000 for that result. Dembski defines the universal probability bound, beyond which a design inference is warranted, as one in ten to the 150th power (1/10^150), a vastly smaller chance than 1/10^3. But you're thinking along the right lines! -ds

Fross
February 21, 2006, 11:42 AM PDT
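A trivial sketch of the comparison in the reply above (my own illustration; 1/10^150 is Dembski's published bound):

    # Any specific pick-three result on a given day has probability 1/1000,
    # nowhere near the universal probability bound for inferring design.
    pick_three_probability = 1 / 1000
    universal_probability_bound = 1e-150
    print(pick_three_probability > universal_probability_bound)  # True: far too probable to infer design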

-ds,
So what is it, am I banned?

No. If you have anything constructive to say I'll approve it. Read the moderation policy. -ds

insouciant
February 21, 2006, 09:54 AM PDT

"The ID hypothesis of the flagellum is falsifiable. In principle a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory."

How does the existence of an alternative hypothesis (the neoDarwinian pathway) falsify the ID hypothesis? Couldn't the designer just have made it *look* like the flagellum evolved?

Not in a scientific explanation. That would be a metaphysical explanation. Why do neoDarwinists always resort to metaphysical arguments in order to dispute the detectability of design in patterns found in nature? It appears you can't dispute it with science. Metaphysical arguments and court orders. How very lame. -ds

George
February 21, 2006, 08:09 AM PDT
Sorry - Mike Gene's ID 101.

Joseph
February 21, 2006, 07:00 AM PDT
Intelligent Design 101: My new blog incorporates Mike Genne's ID 101, plus adds other insights. All comments are welcome.

Joseph
February 21, 2006, 07:00 AM PDT
DS - nicely related to something that hits home for all of us - our pocketbooks. There is also an important idea here that I hope everyone catches: design is so easily detected that it actually takes deliberate effort and significant energy to hide the fact of design when its designers don't want it detected. Did that make sense?

dougmoran
February 21, 2006, 12:19 AM PDT