Uncommon Descent Serving The Intelligent Design Community

To recognize design is to recognize products of a like-minded process, identifying the real probability in question, Part I


“Take the coins and dice and arrange them in a way that is evidently designed.” That was my instruction to groups of college science students who voluntarily attended my extra-curricular ID classes sponsored by Campus Crusade for Christ at James Madison University (even Jason Rosenhouse dropped in a few times). Many of the students were biology and science students hoping to learn truths that are forbidden topics in their regular classes…

They would each have two boxes, and each box contained dice and coins. They were instructed to shake one box randomly and to put designs in the other box. While they did their work, another volunteer and I would leave the room or turn our backs. After the students were done building their designs, the volunteer and I would inspect each box and tell the students which boxes we felt contained a design, and the students would tell us whether we passed or failed to recognize their designs. We never failed!

Granted, this was not a rigorous experiment, but the point of the exercise was to show that even with token objects like coins and dice, one can communicate design.

So why were the human designs recognized in the classroom exercise? Is it because one configuration of coins and dice is inherently more improbable than any other? Let us assume for the sake of argument that no configuration is more improbable than any other. Why, then, do some configurations seem more special than others with respect to design? The answer is that some configurations suggest that a like-minded process, rather than a chance process, was involved in their assembly.

A Darwinist once remarked:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

But what is the real probability in question? It clearly isn't the probability of each possible 500-coin sequence, since each sequence is just as improbable as any other. Rather, the probability truly in question is the probability that our minds will recognize a sequence that conforms to our ideas of a non-random outcome; in other words, outcomes that look like "the products of a like-minded process, not a random process". This may be a shocking statement, so let me briefly review two scenarios.

A. 500 fair coins are discovered heads up on a table. We recognize this to be a non-random event based on the law of large numbers, as described in The fundamental law of Intelligent Design.

B. 500 fair coins are discovered on a table. The coins were not there the day before. Each coin on the table is assigned a number from 1 to 500. The pattern of heads and tails at first looks like nothing special, with 50% of the coins being heads. But then we find that the pattern of coins matches a blueprint that had been in a vault as far back as a year ago. Clearly this pattern is also non-random, but why?

The naïve and incorrect answer is "the probability of that pattern is 1 out of 2^500; therefore the event is non-random". But that is the wrong answer, since every other possible coin pattern also has a 1-in-2^500 chance of occurring.
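
The point can be checked numerically. The sketch below is my own illustration (the variable names are invented, not from the post): every specific sequence of 500 flips has probability 2^-500, yet random shaking essentially always lands near a 50/50 mix, because the "about half heads" class contains astronomically more sequences than the "all heads" class.

```python
import random
from math import comb

random.seed(42)  # fixed seed so the sketch is repeatable
N = 500

# Any *specific* sequence of 500 flips -- all heads, or any particular
# jumble -- has the same probability: 1 in 2**500.
p_any_sequence = 1 / 2**N

# But the *classes* of outcomes are wildly unequal in size: exactly one
# sequence is all heads, while C(500, 250) sequences have 250 heads.
all_heads_class = 1
half_heads_class = comb(N, N // 2)

# Random shaking lands in the big classes: simulate 1,000 shakes and
# record the head counts; none come anywhere near 500.
head_counts = [sum(random.randrange(2) for _ in range(N)) for _ in range(1000)]
print(max(head_counts))   # hovers near 250, far below 500
print(half_heads_class)   # roughly 1.17e149 sequences in the 250-heads class
```

This is why "each sequence is equally improbable" and "all heads is a suspicious outcome" are both true at once: the suspicion attaches to the class of outcomes, not to the raw sequence probability.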

The correct answer as to why the coin arrangement is non-random is "it conforms to a blueprint", or, using ID terminology, "it conforms to an independent specification". The independent specification in scenario B is the printed blueprint that had been stored away in the vault; the independent specification in scenario A is the all-coins-heads "blueprint" that is implicitly defined in our minds and math books.

The real probability at issue is the probability that the independent specification will be realized by a random process.
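
That probability can be computed directly once the specification is stated as a set of outcomes. A minimal sketch of my own (the function name `prob_spec` and the "at least 400 heads" example are invented for illustration):

```python
from math import comb

N = 500  # fair coin flips

def prob_spec(matching_sequences):
    """Probability that one random flip of N fair coins lands inside the
    specification, i.e. among the sequences the blueprint picks out."""
    return matching_sequences / 2**N

# Scenario A: the "all heads" specification picks out exactly 1 sequence.
p_all_heads = prob_spec(1)

# Scenario B: the vaulted blueprint also picks out exactly 1 sequence,
# even though that sequence has an unremarkable-looking 50/50 mix.
p_blueprint = prob_spec(1)

# A looser specification, such as "at least 400 heads", picks out many
# sequences, so it is far easier for chance to realize.
p_400_plus = prob_spec(sum(comb(N, k) for k in range(400, N + 1)))

print(p_all_heads == p_blueprint)  # True: both specs are equally narrow
print(p_400_plus > p_all_heads)    # True: looser spec, higher probability
```

The design inference in both scenarios rests on the same number: the specification is so narrow that a random process realizing it is negligible.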

We could end the story of scenario B by saying that a relative or friend put the design together as a surprise present to would-be observers who had access to the blueprint. But such a detail would only confirm what we already knew: that the coin configuration on the table was not the product of a random process, but rather of a human-like, like-minded process.

I had an exchange with Graham2, where I said:

But what is it about that particular pattern [all fair coins heads] versus any other? Is it because the pattern is not consistent with the expectation of a random pattern? If so, then the pattern is special by its very nature.

to which Graham2 responded:

No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely. They only become suspicious if we have specified them in advance.

Comment, Fundamental Law of ID

Whether Graham2 is right or wrong is a moot point. Statistical tests can be used to reject chance as the explanation for artifacts that look like the products of a like-minded process. The test is valid provided the blueprint wasn't drawn up after the fact (a postdictive blueprint).
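
Such a test can be made concrete. The following is a sketch of my own (the helper `match_pvalue` is an invented name, not from the post): count how many positions of an observed coin pattern match a blueprint fixed in advance, and compute the exact probability that chance would match at least that well, since under chance the match count follows Binomial(n, 1/2).

```python
import random
from math import comb

random.seed(1)  # fixed seed so the sketch is repeatable

def match_pvalue(observed, blueprint):
    """Exact p-value: probability that pure chance matches a blueprint,
    fixed in advance, in at least as many positions as observed.
    Under chance, the match count follows Binomial(n, 1/2)."""
    n = len(blueprint)
    k = sum(o == b for o, b in zip(observed, blueprint))
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# The "vaulted blueprint": any fixed pattern written down beforehand.
blueprint = [random.randrange(2) for _ in range(500)]

# A random shake matches roughly half the positions, so the p-value is
# unremarkable (typically somewhere near 0.5) and chance is not rejected.
shaken = [random.randrange(2) for _ in range(500)]
print(match_pvalue(shaken, blueprint))

# A table copied from the blueprint matches all 500 positions:
# p = 2**-500, and chance is decisively rejected.
print(match_pvalue(blueprint, blueprint))
```

The pre-commitment matters: the blueprint must exist before the observation, otherwise the test degenerates into the postdictive fallacy the paragraph above warns about.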

A Darwinist will object and say, "That's all well and fine, but we don't have such blueprints for life. Give me a sheet of paper that has the blueprint of life and proof that the blueprint was written before life began." But the "blueprint" in question is already somewhat hard-wired into the human brain; that's why, in the exercise for the ID class, we never failed to detect design. Humans are like-minded, and they make like-minded constructs that other humans recognize as designed.

The problem for Darwinism is that biological designs resemble human designs. Biological organisms look like like-minded designs, except that they look like they were crafted by a Mind far greater than any human mind. That's why Dawkins said:

it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming.

Richard Dawkins

Dawkins erred by saying "illusion of living design"; we know he should have said "reality of living design". 🙂

How, then, can we reconstruct the blueprints embedded in the human mind in a sufficiently rigorous way that we can use those "blueprints", or independent specifications, to perform statistical tests? How can we do it in a way that is unassailable to complaints of after-the-fact (postdictive) specifications?

That is the subject of Part II of this series. But briefly, I hinted at a couple of methods in previous discussions:

The fundamental law of Intelligent Design

Coordinated Complexity, the key to refuting single target and postdiction objections.

And there will be more to come, God willing.

NOTES

1. I mentioned "independent specification". This obviously corresponds to Bill Dembski's notion of independent specification from The Design Inference and No Free Lunch. I use the word "blueprint" to help illustrate the concept.

2. Physical coin patterns that conform to independent specifications can then be said to evidence specified improbability. I highly recommend that the term "specified improbability" (SI) be used instead of "complex specified information" (CSI). The term "specified improbability" is now being offered by Bill Dembski himself. I feel it more accurately describes what is being observed when identifying design, and the phrase is less confusing. See: Specified Improbability and Bill's letter to me from way back.

3. I carefully avoided using CSI, information, or entropy to describe the design inference in the bulk of this essay. Those terms could have been used, but I avoided them to show that the problem of identifying design can be addressed with simpler, more accessible arguments, and thus, hopefully, to make the points more unassailable. This essay actually describes the detection of CSI, but CSI has become such a loaded term in ID debates that I refrained from using it. The phrase "specified improbability" conveys the idea better. The objects in the students' boxes that were recognized as designed were improbable configurations that conformed to independent specifications; therefore they evidenced specified improbability; therefore they were designed.

Comments
I meant @10, it's less likely than coin tosses coming up all heads.

Mapou
December 20, 2013 at 07:22 PM PDT
Graham2:
But all sequences have the same probability, so what's the difference?
Aw, come on. All sequences of coin tosses do not have the same probability. That's the fallacy that you're having a hard time understanding. The reason is that coins only have two faces, and therefore the probability of either heads or tails is always 50%. This is true no matter how often you flip the coin. Having an all-heads outcome after many flips is extremely unlikely. Any deviation over the long run from the 50% expectation is less likely, and the more the result deviates from 50%, the less likely it is.

As an aside, I was thinking about this within the context of our finding that some (supposedly) non-functional DNA sequences are repeated many times in the genome. How likely is that if DNA sequences are strictly the result of random mutations? That's even less likely than coin tosses.

Mapou
December 20, 2013 at 07:19 PM PDT
Graham2, all sequences have the same probability; however, a set of 100% heads has a very low probability :)

Box
December 20, 2013 at 06:48 PM PDT
Graham2, you already gave the game away in your comment at 5. You recognized one was random and the other was not, and you let your Darwinist "never give an inch" resolve drop just a split second, and that allowed you to state the obvious. No good trying to take it back now. You can't unring that bell.

Barry Arrington
December 20, 2013 at 06:42 PM PDT
But all sequences have the same probability, so what's the difference? I agree the 500-head one is suspicious, but I'm asking you to explain your position.

Graham2
December 20, 2013 at 06:37 PM PDT
Put this one under amusing things Darwinists say. Graham2:

EA #2: You clearly agree with Sal that 500 heads is suspicious, yet a random pattern is not, so what's the difference?

Uh, one is random and the other isn't.

Barry Arrington
December 20, 2013 at 06:09 PM PDT
EA #2: You clearly agree with Sal that 500 heads is suspicious, yet a random pattern is not, so what's the difference?

Graham2
December 20, 2013 at 06:06 PM PDT
Although I like the 'made in God's image' inference oozing out of this comment:
But the “blueprint” in question is already somewhat hard-wired into the human brain, that’s why in the exercise for the ID class, we never failed to detect design.
As does Michael Behe like the inference:
Michael Behe - Life Reeks Of Design - video https://www.youtube.com/watch?v=Hdh-YcNYThY
I would like to expand a bit on this following comment instead:
except they look like they were crafted by a Mind far greater than any human mind.
But what gives us the impression that life was 'crafted by a Mind far greater than any human mind'? Well, for starters, even the simplest life ever found on earth is far, far more complex than any machine, or integrated circuit, devised by man:
To Model the Simplest Microbe in the World, You Need 128 Computers - July 2012 Excerpt: Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That's a fraction of the size of even another bacterium like E. coli, which has 4,288 genes.,,, The bioengineers, led by Stanford's Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What's fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell's lifecycle processes.,,, ,,the depth and breadth of cellular complexity has turned out to be nearly unbelievable, and difficult to manage, even given Moore's Law. The M. genitalium model required 28 subsystems to be individually modeled and integrated, and many critics of the work have been complaining on Twitter that's only a fraction of what will eventually be required to consider the simulation realistic.,,, http://www.theatlantic.com/technology/archive/2012/07/to-model-the-simplest-microbe-in-the-world-you-need-128-computers/260198/
But perhaps the best way to get this 'crafted by a Mind far greater than any human mind' inference across more effectively is to highlight the overlapping coding in DNA. It recently made headlines in major news outlets that there is dual coding in DNA:
Time mag: (Another) Second Code Uncovered Inside the DNA -- Scientists have discovered a second code hidden within the DNA, written on top of the other. - December 2013 http://science.time.com/2013/12/13/second-code-uncovered-inside-the-dna/
Which is astonishing enough 'since our best computer programmers can't even conceive of overlapping codes.',,,
'It's becoming extremely problematic to explain how the genome could arise and how these multiple levels of overlapping information could arise, since our best computer programmers can't even conceive of overlapping codes. The genome dwarfs all of the computer information technology that man has developed. So I think that it is very problematic to imagine how you can achieve that through random changes in the code.,,, and there is no Junk DNA in these codes. More and more the genome looks likes a super-super set of programs.,, More and more it looks like top down design and not just bottom up chance discovery of making complex systems.' - Dr. John Sanford http://www.youtube.com/watch?feature=player_detailpage&v=YemLbrCdM_s#t=31s
But the News release for dual coding did not tell the whole story. They have been discovering overlapping coding in DNA for years. In fact it is shown that DNA 'can carry abundant parallel codes'.
The genetic code is nearly optimal for allowing additional information within protein-coding sequences - Shalev Itzkovitz and Uri Alon - 2006 Excerpt: Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes.... the present findings support the view that protein-coding regions can carry abundant parallel codes. http://genome.cshlp.org/content/17/4/405.full
Moreover, in the following video, Edward N. Trifonov humorously reflects how they have been 'RE'-discovering 'second' codes in the DNA for years, all the while forgetting to count above the number two for the previous code that was discovered:
Second, third, fourth… genetic codes - One spectacular case of code crowding - Edward N. Trifonov - video https://vimeo.com/81930637
In the preceding video, Trifonov also talks about 13 different codes that can be encoded in parallel along the DNA sequence. As well, he elucidates 4 different codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states,
"Reading only one message, one gets three more, practically GRATIS!".
And please note that this was just an introductory lecture in which Trifonov covered only the very basics and left many of the other codes out of the lecture. Codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 different codes listed on a PowerPoint slide, although the writing was too small for me to read. The concluding PowerPoint slide of the lecture (at the 1 hour mark) states:
"Not only are there many different codes in the sequences, but they overlap, so that the same letters in a sequence may take part simultaneously in several different messages." Edward N. Trifonov - 2010
As well, according to Trifonov, other codes, on top of the 13 he listed, are yet to be discovered. In a paper, that was in a recent book that Darwinists tried to censor from ever getting published, Robert Marks, John Sanford, and company, mathematically dotted the i's and crossed the t's in what is intuitively obvious to the rest of us about finding multiple overlapping codes in DNA:
Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 - published online May 2013 Excerpt: In the last decade, we have discovered still another aspect of the multi- dimensional genome. We now know that DNA sequences are typically “ poly-functional” [38]. Trifanov previously had described at least 12 genetic codes that any given nucleotide can contribute to [39,40], and showed that a given base-pair can contribute to multiple overlapping codes simultaneously. The first evidence of overlapping protein-coding sequences in viruses caused quite a stir, but since then it has become recognized as typical. According to Kapronov et al., “it is not unusual that a single base-pair can be part of an intricate network of multiple isoforms of overlapping sense and antisense transcripts, the majority of which are unannotated” [41]. The ENCODE project [42] has confirmed that this phenomenon is ubiquitous in higher genomes, wherein a given DNA sequence routinely encodes multiple overlapping messages, meaning that a single nucleotide can contribute to two or more genetic codes. Most recently, Itzkovitz et al. analyzed protein coding regions of 700 species, and showed that virtually all forms of life have extensive overlapping information in their genomes [43]. 38. Sanford J (2008) Genetic Entropy and the Mystery of the Genome. FMS Publications, NY. Pages 131–142. 39. Trifonov EN (1989) Multiple codes of nucleotide sequences. Bull of Mathematical Biology 51:417–432. 40. Trifanov EN (1997) Genetic sequences as products of compression by inclusive superposition of many codes. Mol Biol 31:647–654. 41. Kapranov P, et al (2005) Examples of complex architecture of the human transcriptome revealed by RACE and high density tiling arrays. Genome Res 15:987–997. 42. 
Birney E, et al (2007) Encode Project Consortium: Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 447:799–816. 43. Itzkovitz S, Hodis E, Sega E (2010) Overlapping codes within protein-coding sequences. Genome Res. 20:1582–1589. Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 - May 2013 Conclusions: Our analysis confirms mathematically what would seem intuitively obvious - multiple overlapping codes within the genome must radically change our expectations regarding the rate of beneficial mutations. As the number of overlapping codes increases, the rate of potential beneficial mutation decreases exponentially, quickly approaching zero. Therefore the new evidence for ubiquitous overlapping codes in higher genomes strongly indicates that beneficial mutations should be extremely rare. This evidence combined with increasing evidence that biological systems are highly optimized, and evidence that only relatively high-impact beneficial mutations can be effectively amplified by natural selection, lead us to conclude that mutations which are both selectable and unambiguously beneficial must be vanishingly rare. This conclusion raises serious questions. How might such vanishingly rare beneficial mutations ever be sufficient for genome building? How might genetic degeneration ever be averted, given the continuous accumulation of low impact deleterious mutations? http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006
Of course, multiple overlapping coding that man cannot even fathom building is not all there is to drawing the inference that life was 'crafted by a Mind far greater than any human mind', but it is a very good start. To briefly touch on what else is in the cell: quantum computation, in which man has barely taken his first baby steps, is now heavily implicated in DNA repair mechanisms (3D fractal). As well, biophotonic communication (think laser light) between all the molecules of the cell, DNA and proteins, is now also heavily implicated to be within the cell. As well, 'reversible computation' is heavily implicated to be involved in cellular processes. All in all, given the unfathomable complexity being dealt with in the 'simple' cell, I think the following quote is quite fitting for expressing the awe of what is being found in life:
Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html
Verse and Music:
John 1:1-5 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of men. The light shines in the darkness, but the darkness has not understood it.
Lindsey Stirling & Kuha'o Case - Oh Come, Emmanuel - video http://www.youtube.com/watch?v=ozVmO5LHJ2k
bornagain77
December 20, 2013 at 05:12 PM PDT
Sal:
But the “blueprint” in question is already somewhat hard-wired into the human brain, that’s why in the exercise for the ID class, we never failed to detect design. Humans are like-minded and they make like-minded constructs that other humans recognize as designed.
It may be true that there is some "like-mindedness" going on, but it doesn't necessarily come from the fact that the designer and the observer are both human. The like-mindedness can be described more generally: functional complexity or meaningful communication, to name a couple of examples.

The reason SETI believes it will be able to detect a signal from an intelligent alien is not because the alien is human, or even hominid, or has much of any similarity with humans. It is the mere recognition of a meaningful communication that allows the inference. Same with function. Were we ever to capture a confirmed UFO, it would be immediately recognized as designed, not so much because the designer is "like-minded" in any sense of being human or even similar to human, but because the existence of a functional complex object is adequate to infer design (a la Behe's original, if perhaps simple, description of [functional] irreducible complexity).

You are quite right, however, that many things in biology appear to be designed (as even ardent materialists admit). So the onus should be firmly on those disputing that obvious and reasonable inference to provide a decent alternative explanation.

Eric Anderson
December 20, 2013 at 04:49 PM PDT
Graham (quoted by Sal):
They only become suspicious if we have specified them in advance.
This is clearly, obviously wrong. There are many, many cases in which the specification is discovered after the fact: new civilizations discovered, the Rosetta Stone, the effort to unlock Egyptian, and on and on. Indeed, every major "surprise" discovery in archaeology is a surprise precisely because the specification was not known, was not identified, was not expected beforehand. The same is true in living systems as new things are discovered. SETI relies on the same concept and certainly hasn't specified in advance the precise message, or even the type of message, it must receive. So, no, it is clearly false that the specification has to be identified, known, and agreed to in advance.

Eric Anderson
December 20, 2013 at 04:26 PM PDT
Sal:
They would each have two boxes, and each box contained dice and coins. They were instructed to randomly shake one box and then put designs in the other box. While they did their work, I and another volunteer would leave the room or turn our backs. After the students were done building their designs, I and the volunteer would inspect each box, and tell the students which boxes we felt contained a design, and the students would tell us if we passed or failed to recognize their designs. We never failed!
Excellent. I have long thought of doing an experiment of essentially this type with volunteers to get the exact same point across. I was thinking of some kind of Lincoln Logs or similar blocks, but I like the dice/coins approach.

I have also contemplated having the students themselves go around the room and make the inference, the primary caveat being that you would have to control for false negatives (i.e., blocks purposely arranged to appear random). With your approach of having two sets, each with clear instructions, this should resolve that problem. Great pedagogical exercise. I'm jealous!

Eric Anderson
December 20, 2013 at 04:19 PM PDT