Uncommon Descent Serving The Intelligent Design Community

To recognize design is to recognize products of a like-minded process: identifying the real probability in question, Part I


“Take the coins and dice and arrange them in a way that is evidently designed.” That was my instruction to groups of college science students who voluntarily attended my extra-curricular ID classes sponsored by Campus Crusade for Christ at James Madison University (even Jason Rosenhouse dropped in a few times). Many of them were biology and science majors hoping to learn truths that are forbidden topics in their regular classes…

Each student had two boxes, and each box contained dice and coins. They were instructed to shake one box to randomize its contents and to arrange a design in the other. While they did their work, another volunteer and I would leave the room or turn our backs. After the students were done building their designs, the volunteer and I would inspect each box and tell the students which boxes we felt contained a design, and the students would tell us whether we had passed or failed to recognize their designs. We never failed!

Granted, this was not a rigorous experiment, but the exercise was meant to get the point across that even with token objects like coins and dice, one can communicate design.

So why were the human designs recognized in the classroom exercise? Is it because one configuration of coins and dice is inherently more improbable than any other? Let us assume, for the sake of argument, that no configuration is more improbable than any other. Why, then, do some configurations seem more special than others with respect to design? The answer is that some configurations suggest that a like-minded process, rather than a chance process, was involved in their assembly.

A Darwinist once remarked:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

But what is the real probability in question? It clearly isn’t the probability of each particular 500-coin sequence, since each sequence is just as improbable as any other. Rather, the probability truly in question is the probability that our minds will recognize a sequence that conforms to our idea of a non-random outcome: in other words, an outcome that looks like “the product of a like-minded process, not a random process.” This may be a shocking statement, so let me briefly review two scenarios.

A. 500 fair coins are discovered heads up on a table. We recognize this to be a non-random event based on the law of large numbers, as described in The fundamental law of Intelligent Design.

B. 500 fair coins are discovered on a table. The coins were not there the day before. Each coin on the table is assigned a number from 1 to 500. The pattern of heads and tails looks at first like nothing special, with about 50% of the coins showing heads. But then we find that the pattern of coins matches a blueprint that had been locked in a vault as far back as a year ago. Clearly this pattern is also non-random, but why?

The naïve and incorrect answer is “the probability of that pattern is 1 out of 2^500, therefore the event is non-random.” But that is the wrong answer, since every other possible coin pattern also has a 1-in-2^500 chance of occurring.

The correct answer as to why the coin arrangement is non-random is “it conforms to a blueprint” or, using ID terminology, “it conforms to an independent specification.” The independent specification in scenario B is the printed blueprint that had been stored away in the vault; the independent specification in scenario A is the all-coins-heads “blueprint” that is implicitly defined in our minds and math books.

The real probability at issue is the probability that the independent specification will be realized by a random process.
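To put numbers on it, consider the coin example: every particular 500-coin sequence has the same probability, 1 in 2^500, but what matters is the chance that a random process lands inside the independent specification at all. A minimal sketch in Python (the 240–260 heads band is only an illustrative “unremarkable” region, not part of the scenarios above):

```python
from math import comb

n = 500                       # 500 fair coins
total = 2**n                  # number of equally likely head/tail sequences

# Scenario A's specification is the single sequence "all heads".
p_all_heads = 1 / total       # ~3.05e-151

# Any other *particular* sequence is exactly as improbable...
p_some_other_sequence = 1 / total

# ...but a typical-looking outcome (say 240-260 heads) belongs to a huge,
# specification-free region, so chance produces something like it most of the time.
p_near_half_heads = sum(comb(n, k) for k in range(240, 261)) / total

print(p_all_heads)            # ~3.05e-151
print(p_near_half_heads)      # ~0.65
```

The design inference in scenarios A and B turns on the first kind of number, not on the triviality that every individual sequence is rare.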

We could end the story of scenario B by saying that a relative or friend put the design together as a surprise present for would-be observers who had access to the blueprint. But such a detail would only confirm what we already knew: that the coin configuration on the table was not the product of a random process, but rather of a human-like, like-minded process.

I had an exchange with Graham2, where I said:

But what is it about that particular pattern [all fair coins heads] versus any other? Is it because the pattern is not consistent with the expectation of a random pattern? If so, then the pattern is special by its very nature.

to which Graham2 responded:

No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely. They only become suspicious if we have specified them in advance.

Comment, Fundamental Law of ID

Whether Graham2 is right or wrong is a moot point. Statistical tests can be used to reject chance as the explanation for artifacts that look like the products of a like-minded process. The test is valid provided the blueprint wasn’t drawn up after the fact (a postdictive blueprint).

A Darwinist will object and say, “That’s all well and good, but we don’t have such blueprints for life. Give me a sheet of paper that has the blueprint of life, and proof that the blueprint was written before life began.” But the “blueprint” in question is already somewhat hard-wired into the human brain; that’s why, in the exercise for the ID class, we never failed to detect design. Humans are like-minded, and they make like-minded constructs that other humans recognize as designed.

The problem for Darwinism is that biological designs resemble human designs. Biological organisms look like like-minded designs, except that they look as though they were crafted by a Mind far greater than any human mind. That’s why Dawkins said:

it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming.

Richard Dawkins

Dawkins erred by saying “illusion of living design”; we know he should have said “reality of living design.” 🙂

How, then, can we reconstruct the blueprints embedded in the human mind in a sufficiently rigorous way that we can use those “blueprints,” or independent specifications, to perform statistical tests? How can we do it in a way that is unassailable to complaints of after-the-fact (postdictive) specifications?

That is the subject of Part II of this series. But briefly, I hinted at a couple of methods in previous discussions:

The fundamental law of Intelligent Design

Coordinated Complexity, the key to refuting single target and postdiction objections.

And there will be more to come, God willing.

NOTES

1. I mentioned “independent specification.” This obviously corresponds to Bill Dembski’s notion of independent specification from The Design Inference and No Free Lunch. I use the word blueprint to help illustrate the concept.

2. The physical coin patterns that conform to independent specifications can then be said to evidence specified improbability. I highly recommend that the term “specified improbability” (SI) be used instead of “Complex Specified Information” (CSI). The term “specified improbability” is now being offered by Bill Dembski himself. I feel it more accurately describes what is being observed when identifying design, and the phrase is less confusing. See: Specified Improbability and Bill’s letter to me from way back.

3. I carefully avoided using CSI, information, or entropy to describe the design inference in the bulk of this essay. Those terms could have been used, but I avoided them to show that the problem of identifying design can be addressed with simpler, more accessible arguments, and thus, hopefully, to make the points more unassailable. This essay actually describes the detection of CSI, but CSI has become such a loaded term in ID debates that I refrained from using it. The phrase “specified improbability” conveys the idea better. The objects in the students’ boxes that were recognized as designed were improbable configurations that conformed to independent specifications; therefore they evidenced specified improbability; therefore they were designed.

Comments
[…] Here I described this exercise: […]

Intelligent Design and Creation Science
April 16, 2014, 07:20 PM PDT
[…] (where Heads = 1, Tails = 0), and he ought to recognize it as designed. As pointed out in the essay To recognize design is to recognize products of a like-minded process, Part I, the real probability in the question of design recognition isn’t the probability of a given […]

Design recognition is possible in part because of finite human memory and limited human information | Uncommon Descent
January 3, 2014, 01:29 AM PDT
Sal (and JDH, Chance Ratcliff, et al.), in #35 I made several objections to design detection that should sound familiar.

1. Imperfect design is a non-starter. Even though on closer inspection I put a 0 instead of a 1 on the lower end of the first C, making the two C's different, the design is still clearly recognizable. We will come back to this.

2. Bad or malevolent design is likewise a non-starter. Even if one can prove malevolent design, a bad designer is still a designer. That would just mean that a theology of "God's in Heaven, and all's right with the world" is poor theology. But then, that's not either Job's theology or Jesus' theology.

3. The idea that any one sequence is just as likely as any other sequence, and therefore Sequence B is just as likely as Sequence A. But if Sequence B can be shown to be in a small subset of sequences which are very unlikely to be chance arrangements, then that argument fails.

How many different ways are there to create essentially a 210-bit bitmap of the word CHANCE? Exactly, there are probably about 32 different ways to do so. Let's say I am wrong and it is really closer to 1024 different ways. If one allows for a single error, one can have 210 different errors on each of these 1024 ways, which means we are talking about roughly 2^18 different bitmaps for CHANCE. If we have perhaps a million, or 2^20, words to choose from in English, and perhaps a million languages, then we have perhaps 2^58 sequences that would look this good in bitmap. Let's say that we have 1,000 times as many nearly perfect pictures as we have words (a word is equivalent to 1000 pictures :) ); then we are looking at 2^68 special sequences. All heads is just one of those sequences.

That sounds like a lot, until you consider that there are 2^210 total sequences, so that the ratio of special sequences to more usual sequences is 1 to 2^142, or roughly 1 to 6 x 10^42. That's why the impression is so strong that the other arguments about this not being designed seem so totally irrelevant. Notice that if I underestimated the obviously special sequences by a factor of a million, it makes virtually no difference in the final argument. Dembski's universal probability bound is not a sacred number. It gives up way more ground than necessary. Dawkins' 10^50 is even being too generous for events on earth proper.

It is important to note that this argument completely destroys the argument about not pre-specifying the word CHANCE. If the sequence is functional in painting a picture, especially a word-picture, with minimal errors, one can be virtually certain that it is not a random sequence, regardless of whether the particular word is specified.

It is also interesting that knowing that the design was intelligently made tells us nothing about the method of producing the pattern. I could have typed it out by hand, cut and pasted some of it (the first and last row are identical), written a computer program to print out the sequence, or had my secretary (if I had a secretary) write the letters, and scanned them into a bitmap that was then transcribed into 1's and 0's. You can say a little more about the designer of the sequence. Obviously the designer knows about Roman letters, and probably knows English, or at least knows someone who knows. But that's more than you can say about how he did it. This sequence is more obviously designed than the all-heads sequence, because that sequence can be made by a simple law, whereas this one cannot.

But Sal, I think that you should not give up yet on the idea that there are objective reasons for our subjective identification of design. The other side has not properly done the math. Cantor (#76), you did get the right answer without seeing the pattern, as did Sal. But seeing the pattern vastly increases the strength of the conviction that the pattern is not due to chance (pun intended).

Paul Giem
December 23, 2013, 11:39 PM PDT
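Paul Giem's back-of-the-envelope numbers are easy to reproduce. A minimal sketch in Python, taking his estimated counts as given (they are his assumptions, not computed facts):

```python
import math

# Paul Giem's estimates (assumptions, not derived values):
bitmaps_for_chance = 2**18   # ~1024 renderings of CHANCE x ~210 one-bit errors
english_words      = 2**20   # "perhaps a million" words
languages          = 2**20   # "perhaps a million" languages
pictures_factor    = 1000    # 1,000 nearly perfect pictures per word

special = bitmaps_for_chance * english_words * languages * pictures_factor
total   = 2**210             # all possible 210-bit bitmaps

print(f"special sequences ~ 2^{math.log2(special):.0f}")     # ~2^68
print(f"usual-to-special ratio ~ {total / special:.1e}")      # ~5.7e+42, roughly 6 x 10^42
```

Even if the estimates are off by a factor of a million, the ratio stays astronomically lopsided, which is the point of the comment.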
Sorry for taking this down the rabbit hole.
Please, no need to apologize; this was fun, a lot more fun than some of the ugly debates that sometimes transpire on the internet. I hope to see you some more. Take care.

Sal

scordova
December 23, 2013, 12:26 PM PDT
Sorry for taking this down the rabbit hole.

cantor
December 23, 2013, 12:24 PM PDT
Ignore the above post. My aging eyes were looking at the wrong column in the table. The table values do agree with the error function: (1/2)*erf(Z/sqrt(2)).

cantor
December 23, 2013, 12:22 PM PDT
But the question remains: Z = (x-mu)/sigma = (333000-333333)/471.4045207910317 = -0.70639967440536. If I use a table to look up the probability for the tail of the standard normal distribution for this Z score, I get a value of 0.2224. Why is the value so different?

I plugged the following numbers into this Standard Normal Distribution Z-Score Calculator: http://www.danielsoper.com/statcalc3/calc.aspx?id=19

Cumulative probability level: .2399

I got a Z score of -0.70662427, which looks right. Are you sure you're interpreting the tables correctly? I must confess, I'm not a statistician; this is starting to get beyond my level of knowledge.

scordova
December 23, 2013, 11:51 AM PDT
Scordova @153 wrote:

Try this: [1 - ERF(x / sqrt(2))] / 2 = 0.239969812 where x = 0.706399674405361. That tells you, given the deviation, the one-sided area of the population that lies outside the deviation. That looks better.

B(n,p) ~= N(mu,sigma^2) if both np and n(1-p) are large

sigma of B(n,p) = sqrt(n*p*(1-p)) = sqrt(1000000*(1/3)*(2/3)) = 471.4045207910317

CDF of B(n,p) ~= CDF of N(mu,sigma^2)
= (1/2)*(1+erf((value-mean)/sqrt(2*sigma^2)))
= (1/2)*(1+erf((333000-333333)/sqrt(2*471.4045207910317^2)))
= 0.23996981165296

Or do it this way. The standard normal distribution probability density function (PDF) is:

PDF(x) := exp((-x^2)/2)/sqrt(2*%pi)

Integrate the PDF from x=0 to x=Z to get the CDF:

CDF = erf(Z/sqrt(2))/2

Z = (mu-x)/sigma = (333333-333000)/sigma = 0.70639967440536

Plug the Z score into the CDF and crunch the numbers:

CDF = erf(0.70639967440536/sqrt(2))/2 = 0.26003018834704

Subtract from 0.5:

0.5 - 0.26003018834704 = 0.23996981165296

But the question remains: Z = (x-mu)/sigma = (333000-333333)/471.4045207910317 = -0.70639967440536. If I use a table to look up the probability for the tail of the standard normal distribution for this Z score, I get a value of 0.2224. Why is the value so different?

cantor
December 23, 2013, 11:08 AM PDT
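The numbers in this sub-thread check out. A minimal sketch in Python (SciPy assumed for the exact binomial) reproduces both the erf-based approximation and the Octave/Scilab value, consistent with cantor's correction higher up that the 0.2224 figure came from misreading the table rather than from the approximation itself:

```python
from math import erf, sqrt
from scipy.stats import binom   # SciPy assumed to be installed

n, p = 1_000_000, 1/3
sigma = sqrt(n * p * (1 - p))            # ~471.40

# Normal approximation via erf, with the rounded mean 333,333 used in the thread.
z = (333_000 - 333_333) / sigma          # ~ -0.7064
approx = 0.5 * (1 + erf(z / sqrt(2)))    # ~0.23997

# Exact binomial CDF, the same quantity binocdf()/cdfbin() report.
exact = binom.cdf(333_000, n, p)         # ~0.24010

print(approx, exact)
```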
Try this:

[1 - ERF(x / sqrt(2))] / 2 = 0.239969812, where x = 0.706399674405361

That tells you, given the deviation, the one-sided area of the population that lies outside the deviation. That looks better. :-)

scordova
December 22, 2013, 09:38 PM PDT
What am I missing here?
Good question, I don't know. Where are the mathematicians and statisticians like Neil, DiEb, and Mark Frank when you need them? :-) It would be news to me that the normal approximation of the binomial distribution is that far off for 1,000,000 trials, but maybe that's the way it is.

Sal

scordova
December 22, 2013, 09:33 PM PDT
Sal,

If I want the probability of 333,000 or fewer heads, using your method:

n = 1000000
p = 1/3
sigma = sqrt(n*p*(1-p)) = 471.4045207910317
Z = (333333-333000)/sigma = 0.706399674405361

The area under the standard normal distribution between its peak and that Z score is 0.2776, so the area of the tail is 0.5 - 0.2776 = 0.2224.

But if I use Octave's binocdf() function I get a different result:

Octave 3.6.4> binocdf(333000,1000000,1/3) = 0.24010

Scilab agrees with Octave:

Scilab 5.4.1> cdfbin("PQ",333000,1000000,1/3,2/3) = 0.2400981

What am I missing here?

cantor
December 22, 2013, 08:33 PM PDT
OK. I'm on board now. I was figuring it with 333,000 instead of 330,000.

cantor
December 22, 2013, 05:50 PM PDT
I plugged the numbers into R, which calculates the values by making use of the Beta function.

wd400
December 22, 2013, 05:17 PM PDT
wd400 @146: It’s about 7×10^-13
How did you arrive at that answer?

cantor
December 22, 2013, 05:11 PM PDT
I suspect the normal distribution approximation gives a wildly inaccurate answer. Can someone compute an answer accurate to 2 significant digits?
Actually the approximation gets better with a larger number of trials, not worse, so I'm inclined to think this is good enough. I don't know that many computers can handle a binomial distribution with 1,000,000 trials. You'd have to be plugging in n = 1,000,000 trials, and to accurately calculate the binomial distribution you'd have to be processing numbers like 1,000,000 factorial. Even at 100 factorial, lots of calculators will be going to some sort of approximation anyway.

scordova
December 22, 2013, 05:10 PM PDT
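Exact binomial calculations at n = 1,000,000 are in fact routine; statistical packages evaluate the CDF through the regularized incomplete Beta function (as wd400 notes above for R) and individual terms through the log-gamma function, so nothing like 1,000,000! is ever formed. A minimal sketch of the idea using only the Python standard library:

```python
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    # log P(X = k) computed via log-gamma instead of raw factorials
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_choose + k * log(p) + (n - k) * log(1 - p)

# The single most likely outcome for n = 1,000,000 and p = 1/3:
print(exp(log_binom_pmf(333_333, 1_000_000, 1/3)))   # ~8.5e-4, perfectly tractable
```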
It's about 7x10^-13.

wd400
December 22, 2013, 05:06 PM PDT
0% effectively of getting 330,000 or less. It is several sigma from the expectation of 333,333 heads. 100% of getting 330,000 or more ... sigma here is normal distribution sigma approximation of the binomial sigma
I suspect the normal distribution approximation gives a wildly inaccurate answer. Can someone compute an answer accurate to 2 significant digits?

cantor
December 22, 2013, 04:59 PM PDT
The question was, What’s the probability you’ll get 330,000 or fewer heads
Effectively a 0% chance of getting 330,000 or fewer; it is several sigma from the expectation of 333,333 heads. And effectively a 100% chance of getting 330,000 or more.

A 1-sigma deviation is 471 heads:

(333,333 - 330,000)/471 = 3,333/471 ≈ 7 sigma deviation, thus it is effectively 0%

sigma = sqrt( np(1-p) )
n = number of trials
p = probability of success

Sigma here is the normal-distribution approximation of the binomial sigma.

scordova
December 22, 2013, 04:49 PM PDT
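The same machinery answers cantor's original question directly and matches wd400's figure of about 7 x 10^-13 given above. A minimal sketch in Python (SciPy assumed):

```python
from math import sqrt
from scipy.stats import binom, norm   # SciPy assumed to be installed

n, p = 1_000_000, 1/3                  # heads comes up one time in three
mu = n * p                             # 333,333.3 expected heads
sigma = sqrt(n * p * (1 - p))          # ~471.40, i.e. sqrt(np(1-p))

z = (330_000 - mu) / sigma             # ~ -7.07: about 7 sigma below expectation
print(norm.cdf(z))                     # normal approximation: ~7.7e-13
print(binom.cdf(330_000, n, p))        # exact binomial: ~7e-13, wd400's figure
```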
The question was: What's the probability you'll get 330,000 or fewer heads?

cantor
December 22, 2013, 04:38 PM PDT
sal @41: the chance of success is 100% or close to it.
Sal, I don't think that answer is anywhere near correct.

cantor
December 22, 2013, 04:36 PM PDT
Cantor,

As the number of trials goes up, the standard deviation as a percentage of the number of trials goes down, due to the law of large numbers. Try http://www.stattrek.com/online-calculator/binomial.aspx

First try it with these parameters:

P success = .333333
trials = 1000000
number of successes = 333333

and you see that the probability is 50%, which means the number of successes is right at expectation. Change that slightly to what you were aiming for:

P success = .333333
trials = 1000000
number of successes = 330000

and the chance of success is 100% or close to it. Why? It's way outside 3 sigma from the expectation of 333,333 heads. The website cautions:
When number of trials is large (> 1000), the calculator uses the normal approximation to the binomial.
scordova
December 22, 2013, 04:25 PM PDT
A biased coin turns up tails exactly twice as often as heads. What's the probability you'll get 330,000 or fewer heads if this coin is flipped one million times? I'm wondering if anyone participating in or reading this thread has the knowledge and resources to find the answer.

cantor
December 22, 2013, 04:05 PM PDT
But all this only advances the ID cause if natural selection doesn't operate, in which case there don't seem to be many options left. If natural selection doesn't operate, that is, a fictitious world you have invented.

1. Natural selection can't operate if you don't have a population of living organisms to begin with. Natural selection cannot solve the OOL problem, and Darwinists themselves insist on not using selection as a solution to OOL. Thus the arguments I laid out as pertaining to OOL cannot be solved by Darwinian mechanisms. So my point holds.

2. "Natural selection" as defined by Dawkins and Darwin isn't how nature really works; the fictitious world is Darwin's, Dawkins', and Dennett's (D+D+D = 3D), not mine. See: NS is double speak for DFFM.

You are so convinced that Dawkins is right that you think his proposed solution of natural selection will actually work as advertised. Lab and field observations plus analysis by population geneticists prove otherwise. Dawkins is wrong; if he debated us at UD like Nick Matzke did, we would dispose of Dawkins in a week.

For selection to work as you suppose, it has to select for precursors of systems that are not even in existence. The problem with that is outlined in: Selection after something exists is not the same as selection before something exists.

scordova
December 22, 2013, 04:03 PM PDT
I asked you, Sal, because your name is on the top of the thread. I presume the point of the whole thread is that if we see some non-random pattern in life/genome etc., then we assume some agent is responsible, ergo design. But all this only advances the ID cause if natural selection doesn't operate, in which case there don't seem to be many options left. If natural selection doesn't operate, that is, a fictitious world you have invented.

Graham2
December 22, 2013, 03:51 PM PDT
fifthmonarchyman, I have no clue why the spam filter held your comment up in moderation. I hope you visit again.

Sal

scordova
December 22, 2013, 03:19 PM PDT
scordova said: Long time no see! Thanks for dropping in.

I say: You're welcome. I come by from time to time to see how the debate is progressing. ID has the potential to be a very fruitful exercise, but it's hard to see how we will ever get there as long as:

1) Our side thinks it can be used as a tool to prove God's existence. As if God's existence was not already patently obvious to everyone.

2) The other side is unwilling to give even an inch of ground for fear that they will be tricked into acknowledging God's existence.

Until one or other of those factors changes, we will continue to butt heads in long threads about whether or not we can rule out chance if we discover 500 fair coins on a table heads up. It is comical if you think about it.

peace

fifthmonarchyman
December 22, 2013, 03:06 PM PDT
What all this has to do with ID (or evolution) you will have to ask Sal.

Why me? But anyway, I did write up something to that effect just now: https://uncommondescent.com/chemistry/relevance-of-coin-analogies-to-homochirality-and-symbolic-organization-in-biology/

Thanks for being such a good sport, Graham. Not all of us are out to humiliate you. I hope you might learn something from these discussions. Thanks for participating. The questions you raise, I know, may be on the minds of some of our ID-friendly readers who are too shy to ask.

Sal

scordova
December 22, 2013, 02:52 PM PDT
So you don't have any point you've been trying to make in relation to any specific ID argument. Okay.

Upright BiPed
December 22, 2013, 02:35 PM PDT
You have already asked the question, why. There was a brief diversion into std deviations, but Sal, above, seems to be saying that we suspect something if we see a recognizable pattern in the result, which is what I've been saying all along. What all this has to do with ID (or evolution) you will have to ask Sal.

Graham2
December 22, 2013, 02:02 PM PDT
Graham, lol, you've been trying to make this point for so long. Why don't you just make it directly? I would suspect that you and everyone else on the surface of the planet would be surprised to see a person flip 500 coins that perfectly responded to the value of pi. So what? What are you specifically saying in regard to a specific ID argument?

Upright BiPed
December 22, 2013, 01:49 PM PDT