Uncommon Descent Serving The Intelligent Design Community

To recognize design is to recognize products of a like-minded process, identifying the real probability in question, Part I


“Take the coins and dice and arrange them in a way that is evidently designed.” That was my instruction to groups of college science students who voluntarily attended my extra-curricular ID classes sponsored by Campus Crusade for Christ at James Madison University (even Jason Rosenhouse dropped in a few times). Many of the students were biology and other science majors hoping to learn truths that are forbidden topics in their regular classes…

They would each have two boxes, and each box contained dice and coins. They were instructed to randomly shake one box and to put designs in the other box. While they did their work, another volunteer and I would leave the room or turn our backs. After the students were done building their designs, the volunteer and I would inspect each box and tell the students which boxes we felt contained a design, and the students would tell us whether we had passed or failed to recognize their designs. We never failed!

Granted, this was not a rigorous experiment, but the exercise was to get the point across that even with token objects like coins and dice, one can communicate design.

So why were the human designs recognized in the classroom exercise? Is it because one configuration of coins and dice is inherently more improbable than any other? Let us assume, for the sake of argument, that no configuration is more improbable than any other. Why then do some configurations seem more special than others with respect to design? The answer is that some configurations suggest a like-minded process was involved in their assembly rather than a chance process.

A Darwinist once remarked:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

But what is the real probability in question? It clearly isn’t the probability of each possible 500-coin sequence, since each sequence is just as improbable as any other. Rather, the probability truly in question is the probability that our minds will recognize a sequence that conforms to our ideas of a non-random outcome. In other words, an outcome that looks like “the product of a like-minded process, not a random process”. This may be a shocking statement, so let me briefly review two scenarios.

A. 500 fair coins are discovered heads up on a table. We recognize this to be a non-random event based on the law of large numbers, as described in The fundamental law of Intelligent Design.
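The law-of-large-numbers point in scenario A can be illustrated numerically. Here is a minimal sketch (my own illustration, not from the original post): with 500 fair coins, the heads count is overwhelmingly likely to sit near 250, while the all-heads outcome has probability 2^-500.

```python
import math

n = 500
total = 2 ** n  # number of equally likely head/tail sequences

# The all-heads outcome is a single sequence out of 2^500.
p_all_heads = 1 / total

# The law of large numbers in action: the heads count almost always
# lands within 10% of the expected 250 (i.e., between 225 and 275).
p_near_half = sum(math.comb(n, k) for k in range(225, 276)) / total

print(f"P(all 500 heads)       = {p_all_heads:.2e}")
print(f"P(225 <= heads <= 275) = {p_near_half:.4f}")
```

Under these stipulations, finding all 500 coins heads is roughly a 1-in-10^150 event, which is the force of the law-of-large-numbers argument here.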

B. 500 fair coins are discovered on a table. The coins were not there the day before. Each coin on the table is assigned a number from 1 to 500. The pattern of heads and tails looks at first to be nothing special, with 50% of the coins being heads. But then we find that the pattern of coins matches a blueprint that had been in a vault as far back as a year ago. Clearly this pattern also is non-random, but why?

The naïve and incorrect answer is “the probability of that pattern is 1 out of 2^500, therefore the event is non-random”. But that is the wrong answer, since every other possible coin pattern also has a 1 in 2^500 chance of occurring.

The correct answer as to why the coin arrangement is non-random is “it conforms to a blueprint”, or, using ID terminology, “it conforms to an independent specification”. The independent specification in scenario B is the printed blueprint that had been stored away in the vault; the independent specification in scenario A is the all-coins-heads “blueprint” implicitly defined in our minds and math books.

The real probability at issue is the probability that the independent specification will be realized by a random process.
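To make that concrete, here is a minimal sketch of my own (the seed is arbitrary, fixed only for reproducibility): fix two blueprints in advance, all heads and an arbitrary-looking pattern, and note that a random shake has the same 2^-500 chance of realizing either one.

```python
import random

random.seed(1)  # arbitrary fixed seed so the sketch is reproducible

n = 500
# Two independent specifications fixed *before* any shaking:
blueprint_a = (1,) * n                                      # all heads
blueprint_b = tuple(random.randrange(2) for _ in range(n))  # arbitrary-looking blueprint

# Each blueprint is a single point in a space of 2^500 outcomes,
# so the chance a random shake realizes either one is the same: 2^-500.
p_match = 0.5 ** n

# A modest simulation: many random shakes, none expected to hit either blueprint.
hits = 0
for _ in range(10_000):
    shake = tuple(random.randrange(2) for _ in range(n))
    if shake == blueprint_a or shake == blueprint_b:
        hits += 1

print(p_match, hits)  # hits stays 0 for all practical purposes
```

The point of the sketch is that the improbability is symmetric between the two blueprints; what makes either one significant is that it was specified before the shake, not anything about its shape.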

We could end the story of scenario B by saying that a relative or friend put the design together as a surprise present for would-be observers who had access to the blueprint. But such a detail would only confirm what we already knew: that the coin configuration on the table was not the product of a random process, but rather of a human-like, like-minded process.

I had an exchange with Graham2, where I said:

But what is it about that particular pattern [all fair coins heads] versus any other? Is it because the pattern is not consistent with the expectation of a random pattern? If so, then the pattern is special by its very nature.

to which Graham2 responded:

No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely. They only become suspicious if we have specified them in advance.

Comment, Fundamental Law of ID

Whether Graham2 is right or wrong is a moot point. Statistical tests can be used to reject chance as the explanation for artifacts that look like the products of a like-minded process. The test is valid provided the blueprint wasn’t drawn up after the fact (a postdictive blueprint).

A Darwinist will object and say, “That’s all well and fine, but we don’t have such blueprints for life. Give me a sheet of paper that has the blueprint of life, and proof the blueprint was written before life began.” But the “blueprint” in question is already somewhat hard-wired into the human brain; that’s why, in the exercise for the ID class, we never failed to detect design. Humans are like-minded, and they make like-minded constructs that other humans recognize as designed.

The problem for Darwinism is that biological designs resemble human designs. Biological organisms look like like-minded designs, except that they appear to have been crafted by a Mind far greater than any human mind. That’s why Dawkins said:

it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming.

Richard Dawkins

Dawkins erred by saying “illusion of living design”; we know he should have said “reality of living design”. 🙂

How then can we reconstruct the blueprints embedded in the human mind in a sufficiently rigorous way that we can use the “blueprints”, or independent specifications, to perform statistical tests? How can we do it in a way that is unassailable to complaints of after-the-fact (postdictive) specifications?

That is the subject of Part II of this series. But briefly, I hinted at a couple of methods in previous discussions:

The fundamental law of Intelligent Design

Coordinated Complexity, the key to refuting single target and postdiction objections.

And there will be more to come, God willing.

NOTES

1. I mentioned “independent specification”. This corresponds to Bill Dembski’s notion of independent specification from The Design Inference and No Free Lunch. I use the word blueprint to help illustrate the concept.

2. The physical coin patterns that conform to independent specifications can then be said to evidence specified improbability. I highly recommend the term “specified improbability” (SI) be used instead of Complex Specified Information (CSI). The term “Specified Improbability” is now being offered by Bill Dembski himself. I feel it more accurately describes what is being observed when identifying design, and the phrase is less confusing. See: Specified Improbability and Bill’s letter to me from way back.

3. I carefully avoided using CSI, information, or entropy to describe the design inference in the bulk of this essay. Those terms could have been used, but I avoided them to show that the case for identifying design can be made with simpler, more accessible arguments, and thus, hopefully, to make the points more unassailable. This essay actually describes the detection of CSI, but CSI has become such a loaded term in ID debates that I refrained from using it. The phrase “Specified Improbability” conveys the idea better. The objects in the students’ boxes that were recognized as designed were improbable configurations that conformed to independent specifications; therefore they evidenced specified improbability; therefore they were designed.

Comments
Sal (and JDH, Chance Ratcliff, et al.), in #35 I made several objections to design detection that should sound familiar.

Imperfect design is a non-starter. Even though on closer inspection I put a 0 instead of a 1 on the lower end of the first C, making the two C's different, the design is still clearly recognizable. We will come back to this.

Bad or malevolent design is likewise a non-starter. Even if one can prove malevolent design, a bad designer is still a designer. That would just mean that a theology of "God's in Heaven, and all's right with the world" is poor theology. But then, that's neither Job's theology nor Jesus' theology.

Then there is the idea that any one sequence is just as likely as any other, and therefore Sequence B is just as likely as Sequence A. But if Sequence B can be shown to be in a small subset of sequences which are very unlikely to be chance arrangements, then that argument fails.

How many different ways are there to create essentially a 210-bit bitmap of the word CHANCE? There are probably about 32 different ways to do so. Let's say I am wrong and it is really closer to 1,024 different ways. If one allows for a single error, one can have 210 different errors on each of these 1,024 ways, which means we are talking about roughly 2^18 different bitmaps for CHANCE. If we have perhaps a million, or 2^20, words to choose from in English, and perhaps a million languages, then we have perhaps 2^58 sequences that would look this good in bitmap. Let's say that we have 1,000 times as many nearly perfect pictures as we have words (a word is equivalent to 1,000 pictures :) ); then we are looking at 2^68 special sequences. All heads is just one of those sequences. That sounds like a lot, until you consider that there are 2^210 total sequences, so that the ratio of special sequences to more usual sequences is 1 to 2^142, or roughly 1 in 6 x 10^42.

That's why the impression is so strong that the other arguments about this not being designed seem so totally irrelevant. Notice that if I underestimated the obviously special sequences by a factor of a million, it makes virtually no difference in the final argument. Dembski's universal probability bound is not a sacred number; it gives up way more ground than necessary. Dawkins's 10^50 is even too generous for events on earth proper.

It is important to note that this argument completely destroys the argument about not pre-specifying the word CHANCE. If the sequence is functional in painting a picture, especially a word-picture, with minimal errors, one can be virtually certain that it is not a random sequence, regardless of whether the particular word is specified.

It is also interesting that knowing the design was intelligently made tells us nothing about the method of producing the pattern. I could have typed it out by hand, cut and pasted some of it (the first and last rows are identical), written a computer program to print out the sequence, or had my secretary (if I had a secretary) write the letters, and scanned them into a bitmap that was then transcribed into 1's and 0's. You can say a little more about the designer of the sequence: obviously the designer knows about Roman letters, and probably knows English, or at least knows someone who knows. But that's more than you can say about how he did it. This sequence is more obviously designed than the all-heads sequence, because that sequence can be made by a simple law, whereas this one cannot.

But Sal, I think that you should not give up yet on the idea that there are objective reasons for our subjective identification of design. The other side has not properly done the math.

Cantor (#76), you did get the right answer without seeing the pattern, as did Sal. But seeing the pattern vastly increases the strength of the conviction that the pattern is not due to chance (pun intended). Paul Giem
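Paul Giem's back-of-envelope count can be checked mechanically. A short sketch using his stated round numbers (which are his assumptions, not measurements):

```python
# Giem's rough counts, expressed as powers of two:
bitmap_variants = 2 ** 18   # ~1,024 renderings of CHANCE x ~210 single-bit errors
words = 2 ** 20             # ~a million English words
languages = 2 ** 20         # ~a million languages
picture_factor = 2 ** 10    # ~1,000 "nearly perfect pictures" per word

special = bitmap_variants * words * languages * picture_factor  # 2^68 special sequences
total = 2 ** 210                                                # all 210-bit bitmaps

ratio = total / special  # 2^142
print(f"special = 2^68, ratio = 1 to {ratio:.2e}")  # ~5.6e42, i.e. roughly 6 x 10^42
```

As he notes, the headline ratio survives even if each sub-estimate is off by several orders of magnitude.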
Sorry for taking this down the rabbit hole.
Please, no need to apologize, this was fun, a lot more fun than some of the ugly debates that sometimes transpire on the internet. I hope to see you some more. Take care. Sal scordova
Sorry for taking this down the rabbit hole. cantor
Ignore the above post. My aging eyes were looking at the wrong column in the Table. The Table values do agree with the error function: (1/2)*erf(Z/sqrt(2)). cantor
But the question remains: Z = (x-mu)/sigma = (333000-333333)/471.4045207910317 = -0.70639967440536. If I use a table to look up the probability for the tail of the standard normal distribution for this Z score, I get a value of 0.2224. Why is the value so different?
I plugged in the following number in this Standard Normal Distribution Z-Score Calculator : http://www.danielsoper.com/statcalc3/calc.aspx?id=19 Cumulative probability level : .2399 I got a Z score of : -0.70662427 Which looks right. Are you sure you're interpreting the tables correctly? I must confess, I'm not a statistician, this is starting to get beyond my level of knowledge. scordova
Scordova @153 wrote: Try this: [1 - ERF(x/sqrt(2))] / 2 = 0.239969812, where x = 0.706399674405361. That tells you, given the deviation, the one-sided area of the population that lies outside the deviation. That looks better.
B(n,p) ≈ N(mu, sigma^2) if both np and n(1-p) are large.

sigma of B(n,p) = sqrt(n*p*(1-p)) = sqrt(1000000*(1/3)*(2/3)) = 471.4045207910317

CDF of B(n,p) ≈ CDF of N(mu, sigma^2)
= (1/2)*(1+erf((value-mean)/sqrt(2*sigma^2)))
= (1/2)*(1+erf((333000-333333)/sqrt(2*471.4045207910317^2)))
= 0.23996981165296

Or do it this way. The standard normal distribution probability density function is:

PDF(x) := exp(-x^2/2)/sqrt(2*pi)

Integrate the PDF from x=0 to x=Z to get the central area:

erf(Z/sqrt(2))/2

Z = (mu-x)/sigma = (333333-333000)/sigma = 0.70639967440536

Plug the Z score in and crunch the numbers:

erf(0.70639967440536/sqrt(2))/2 = 0.26003018834704

Subtract from 0.5:

0.5 - 0.26003018834704 = 0.23996981165296

But the question remains: Z = (x-mu)/sigma = (333000-333333)/471.4045207910317 = -0.70639967440536. If I use a table to look up the probability for the tail of the standard normal distribution for this Z score, I get a value of 0.2224. Why is the value so different?

cantor
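cantor's arithmetic can be reproduced with Python's standard library (a sketch; `normal_cdf` is my own helper name):

```python
import math

n, p = 1_000_000, 1/3
mu = n * p                           # 333333.33... expected heads
sigma = math.sqrt(n * p * (1 - p))   # 471.4045...

def normal_cdf(x, mean, sd):
    """CDF of N(mean, sd^2) via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# cantor rounds the mean to 333333, giving his Z of -0.70640
# and the quoted CDF value 0.23996981165296.
approx = normal_cdf(333000, 333333, sigma)
print(round(approx, 8))
```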
Try this [1 / (1 - ERF (x / sqrt(2)) ] / 2 = 0.239969812 where x = 0.706399674405361 That tells you given the deviation, the one-sided area of the population that lies outside the deviation. That looks better. :-) scordova
What am I missing here?
Good question, I don't know. Where are the mathematicians and statisticians like Neil, DiEb, and Mark Frank when you need them? :-) It would be news to me that the normal approximation of the binomial distribution is that far off for 1,000,000 trials, but maybe that's the way it is. Sal scordova
Sal, if I want the probability of 333,000 or fewer heads, using your method:

n = 1000000
p = 1/3
sigma = sqrt(n*p*(1-p)) = 471.4045207910317
Z = (333333-333000)/sigma = 0.706399674405361

The area under the standard normal distribution between its peak and that Z score is 0.2776, so the area of the tail is 0.5 - 0.2776 = 0.2224.

But if I use Octave's binocdf() function I get a different result:

Octave 3.6.4> binocdf(333000,1000000,1/3) = 0.24010

Scilab agrees with Octave:

Scilab 5.4.1> cdfbin("PQ",333000,1000000,1/3,2/3) = 0.2400981

What am I missing here? cantor
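For what it's worth, the large 0.2224 discrepancy was a table misread, as cantor notes elsewhere in the thread; the small residual gap between the erf-based 0.23997 and Octave's 0.24010 comes mostly from rounding the mean to 333,333 and skipping the continuity correction. A sketch using only the standard library:

```python
import math

n, p = 1_000_000, 1/3
mu = n * p                           # exact mean, 333333.33...
sigma = math.sqrt(n * p * (1 - p))

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

plain = phi((333000 - mu) / sigma)            # exact mean, no continuity correction
corrected = phi((333000 + 0.5 - mu) / sigma)  # continuity-corrected

print(f"plain     = {plain:.5f}")
print(f"corrected = {corrected:.5f}")  # ~0.2401, close to binocdf's 0.24010
```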
OK. I'm on board now. I was figuring it with 333,000 instead of 330,000. cantor
I plugged the numbers into R, which calculates the values by making use of the Beta function. wd400
wd400 @146: It’s about 7×10^-13
How did you arrive at that answer? cantor
I suspect the normal distribution approximation gives a wildly inaccurate answer. Can someone compute an answer accurate to 2 significant digits?
Actually, the approximation gets better with a larger number of trials, not fewer, so I'm inclined to think this is good enough. I don't know that many computers can handle a binomial distribution with 1,000,000 trials. You'd have to be plugging in n = 1,000,000 trials, and to accurately calculate the binomial distribution you'd have to be processing numbers like 1,000,000 factorial. Even at 100 factorial, lots of calculators will be going to some sort of approximation anyway. scordova
It's about 7x10^-13. wd400
0% effectively of getting 330,000 or less. It is several sigma from the expectation of 333,333 heads. 100% of getting 330,000 or more ... sigma here is normal distribution sigma approximation of the binomial sigma
I suspect the normal distribution approximation gives a wildly inaccurate answer. Can someone compute an answer accurate to 2 significant digits? cantor
The question was, What’s the probability you’ll get 330,000 or fewer heads
Effectively a 0% chance of getting 330,000 or fewer; it is several sigma from the expectation of 333,333 heads. And effectively a 100% chance of getting 330,000 or more.

A 1 sigma deviation is 471 heads. (333,333 - 330,000)/471 = 3,333/471 ≈ 7 sigma deviation, thus it is effectively 0%.

sigma = sqrt(np(1-p))
n = number of trials
p = probability of success

sigma here is the normal-distribution approximation of the binomial sigma. scordova
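scordova's 7-sigma estimate can be checked directly. A sketch using the normal approximation (the exact binomial, per wd400's R computation, lands close by):

```python
import math

n, p = 1_000_000, 1/3
mu = n * p                                 # 333333.33... expected heads
sigma = math.sqrt(n * p * (1 - p))         # ~471.4

z = (330000 - mu) / sigma                  # ~ -7.07 sigma
tail = 0.5 * math.erfc(-z / math.sqrt(2))  # P(heads <= 330000), normal approximation

print(f"z = {z:.2f}, tail = {tail:.1e}")   # on the order of wd400's ~7e-13
```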
The question was, What’s the probability you’ll get 330,000 or fewer heads cantor
sal @41: the chance of success is 100% or close to it.
Sal, I don't think that answer is anywhere near correct. cantor
Cantor, as the number of trials goes up, the standard deviation as a percentage of the number of trials goes down, due to the law of large numbers. Try http://www.stattrek.com/online-calculator/binomial.aspx

First try these parameters:

P success = .333333
trials = 1000000
number of successes = 333333

and you see that the probability is 50%, which means the number of successes is right at expectation. Change that slightly to what you were aiming for:

P success = .333333
trials = 1000000
number of successes = 330000

and the chance of success is 100% or close to it. Why? It's way outside 3 sigma from the expectation of 333,333 heads. The website cautions:
When number of trials is large (> 1000), the calculator uses the normal approximation to the binomial.
scordova
A biased coin turns up tails exactly twice as often as heads. What's the probability you'll get 330,000 or fewer heads if this coin is flipped one million times? I'm wondering if anyone participating in or reading this thread has the knowledge and resources to find the answer. cantor
But all this only advances the ID cause if natural selection doesn't operate, in which case there don't seem to be many options left. If natural selection doesn't operate, that is, a fictitious world you have invented.
1. Natural selection can't operate if you don't have a population of living organisms to begin with. Natural selection cannot solve the OOL problem, and Darwinists themselves insist on not using selection as a solution to OOL. Thus the arguments I laid out as pertaining to OOL cannot be solved by Darwinian mechanisms. So my point holds. 2. "Natural Selection" as defined by Dawkins and Darwin isn't how nature really works; the fictitious world is Darwin's, Dawkins', and Dennett's (D+D+D = 3D), not mine. See: NS is double speak for DFFM. You are so convinced that Dawkins is right that you think his proposed solution of Natural Selection will actually work as advertised. Lab and field observations, plus analysis by population geneticists, prove otherwise. Dawkins is wrong; if he debated us at UD like Nick Matzke did, we would dispose of Dawkins in a week. For selection to work as you suppose, it has to select for precursors of systems that are not even in existence. The problem with that is outlined in: Selection after something exists is not the same as selection before something exists. scordova
I asked you sal, because your name is on the top of the thread. I presume the point of the whole thread is that if we see some non-random pattern in life/genome etc., then we assume some agent is responsible, ergo design. But all this only advances the ID cause if natural selection doesn't operate, in which case there don't seem to be many options left. If natural selection doesn't operate, that is, a fictitious world you have invented. Graham2
fifthmonarchyman, I have no clue why the spam filter held your comment up in moderation. I hope you visit again. Sal scordova
scordova said, Long time no see! Thanks for dropping in. I say, You're welcome. I come by from time to time to see how the debate is progressing. ID has the potential to be a very fruitful exercise, but it's hard to see how we will ever get there as long as: 1) Our side thinks it can be used as a tool to prove God's existence. As if God's existence were not already patently obvious to everyone. 2) The other side is unwilling to give even an inch of ground for fear that they will be tricked into acknowledging God's existence. Until one or other of those factors changes, we will continue to butt heads in long threads about whether or not we can rule out chance if we discover 500 fair coins on a table heads up. It is comical if you think about it. peace fifthmonarchyman
What all this has to do with ID (or evolution) you will have to ask sal.
Why me? But anyway, I did write up something to that effect just now: https://uncommondescent.com/chemistry/relevance-of-coin-analogies-to-homochirality-and-symbolic-organization-in-biology/ Thanks for being such a good sport, Graham. Not all of us are out to humiliate you. I hope you might learn something from these discussions. Thanks for participating. The questions you raise, I know, may be on the minds of some of our ID-friendly readers who are too shy to ask. Sal scordova
So you don't have any point you've been trying to make in relation to any specific ID argument. Okay. Upright BiPed
You have already asked the question, why. There was a brief diversion into std deviations, but sal, above, seems to be saying that we suspect something if we see a recognizable pattern in the result, which is what I've been saying all along. What all this has to do with ID (or evolution) you will have to ask sal. Graham2
Graham, lol, you've been trying to make this point for so long. Why don't you just make it directly? I would suspect that you and everyone else on the surface of the planet would be surprised to see a person flip 500 coins that perfectly corresponded to the value of pi. So what? What are you specifically saying in regards to a specific ID argument? Upright BiPed
Sal: So we both agree that the PI case is suspicious, but why? It satisfies your std dev test (I presume), so why is it suspicious? You seem to be suggesting that it matches a recognizable pattern, but that's exactly what I've been agreeing with all along. Graham2
SC: Pi in binary seems hard to come by to 10^6 or so listed digits. Best I came up with is 32k+, here. But of course, without claiming a proof, I note that again we have a clash between uncorrelated deterministic entities, so we should expect to get effective randomness . . . especially as pi goes on forever. KF PS: Onlookers may get a kick out of a discussion of that here, which does not bring out the little problem that searching out the relevant items is a solar-system and observed-cosmos scale supertask. kairosfocus
fifthmonarchyman, Wow. Long time no see! Thanks for dropping in. Sal scordova
So could you answer the question about the PI case … would you regard this with any suspicion ? Sal ?
Yes. To illustrate why, just use the procedure outlined in https://uncommondescent.com/computer-science/illustrating-embedded-specification-and-specified-improbability-with-specially-labeled-coins/ So use red numbered labels to specify PI. If "all red labels are up" after a random process, then imagine the first binary digits of PI generated by a random shaking of coins. :shock: Here are some of the first digits:
11. 00100100 00111111 01101010 10001000 10000101 10100011 00001000 11010011 00010011 00011001 10001010 00101110 00000011 01110000 01110011 01000100 10100100 00001001 00111000 00100010 00101001 10011111 00110001 11010000 00001000 00101110 11111010 10011000 11101100 01001110 01101100 10001001 01000101 00101000 00100001 11100110 00111000 11010000 00010011 01110111 10111110 01010100 01100110 11001111 00110100 11101001 00001100 01101100 11000000 10101100 00101001 10110111 11001001 01111100 01010000 11011101 00111111 10000100 11010101 10110101 10110101 01000111 00001001 00010111
The reason this works is that humans can only in principle write down or conceive of so many highly specific specifications. Try writing down as many specific 500-binary-digit sequences as you can; you'll be hard pressed to come anywhere near 2^500. On earth there are only 2^149 atoms, so you won't even be able to take all the 500-bit sequences from the printed books of history and fiction, and all the sentences every person has ever spoken that were recorded, and then locate them in a random sea of 2^500 sequences. 500 seems like a tiny number. Agreed, but 2^500 is big, and if we go to 1000 coins, then 2^1000 is astronomical relative to 2^500. That's why Bill Dembski went to a lot of trouble to estimate how likely it was we'd be able to use the following metaphors to describe biology: code, control, error correction, language, interpreter, feedback, sensor, redundancy, translation, transcription, wing, gear, wheel, copy, blueprint, etc. How difficult is it to project engineering metaphors onto biology? You can't do that with a rock, but biological organisms seem so amenable to these metaphors. Compare then the class exercise in the OP: it was relatively easy to project my hard-wired and learned patterns onto the students' designs and recognize them as designs. Detecting designs in biology is detecting patterns that conform to engineering designs. It's no coincidence ID seems to be over-represented by engineers. They find it outrageous that a chance hypothesis in a pre-biotic soup could even synthesize the first DNA/protein system in the ancestral cell at the nano-level, where there is tons of thermal and quantum noise to destroy any would-be precursor of a cell rather quickly. scordova
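As a side check, the binary expansion of pi quoted above can be reproduced with plain integer arithmetic. A sketch using Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239) (function names and guard-bit count are my own choices):

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 2^prec, via the alternating power series."""
    power = (1 << prec) // x   # first term: 1/x
    total = power
    x2 = x * x
    divisor = 3
    sign = -1
    while power:
        power //= x2
        total += sign * (power // divisor)
        sign = -sign
        divisor += 2
    return total

def pi_binary(frac_bits, guard=16):
    """Binary string of pi with frac_bits fractional bits (Machin's formula)."""
    prec = frac_bits + guard
    pi_scaled = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    pi_scaled >>= guard  # drop the guard bits that absorbed truncation error
    int_part, frac_part = divmod(pi_scaled, 1 << frac_bits)
    return f"{int_part:b}.{frac_part:0{frac_bits}b}"

print(pi_binary(16))  # 11.0010010000111111 -- matching the quoted expansion
```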
G2: Have you done basic statistical mechanics? Try this class slide show, paying particular attention to the pattern of dominant clusters explained in slides 1 - 6, esp. the diag in 4. The links to the statistical principles behind the 2nd law of thermodynamics should be clear. KF PS: For tossed coins that dominant cluster tends to be near 50-50 in no particular order. kairosfocus
SC, 95:
You said it’s a psychological effect, and I actually agreed with you. Some IDists find that uncomfortable, btw. The problem however, is that with respect to all coins heads, the target has been well known throughout human history in as much as humans like simple repetitive patterns, and all coins heads is only an extension of a pre-existing fixed target.
With CSI as a broad thing, yes that is often true. But you are very close to why I have focussed our attention on functionally specific complex organisation and/or associated information [FSCO/I]. The isolated target zone in the space of configs is there, but now there is an objective test: does this thing work in a way that depends on configs? Scrambled text does not work, beyond a certain threshold. Scrambled genes, too. Scrambled car parts, scrambled electronic parts, scrambled programs etc etc etc. Hence the ideological rage to refuse to acknowledge this obvious reality. It is increasingly evident that we are up against the ideologised, closed, hostile mind, and that beyond a certain point we can only ring fence, and put up warning labels. The patent absurdities will in the end tell. But so long as entrenched power backs up absurdity, "it's dangerous to be right when city hall is wrong." Thus, the sadly revealing expelled phenomenon. KF kairosfocus
G2, 77:
The probability of any [--> INDIVIDUAL] sequence is (1/2)^n. Look it up.
True, but a strawman, as I pointed out by highlighting INDIVIDUAL. In short, it is maximally improbable to get any arbitrary specific 500-character sequence. But as the difference between garbage hands and valuable ones in card games shows, there are CLUSTERS of sequences that are of interest, that form isolated target zones in the config space of all sequences. By contrast there is an overwhelmingly dominant cluster of sequences that are near 50-50 and which hold no particularly interesting pattern or order or organisation. It is unsurprising to obtain one of these by a chance process. But it IS highly unexpected to obtain one of the special sequences by such a process, though we know that patterns -- simple ones -- can be triggered by lawlike mechanisms [all H, all T, alternating H-T and the like, similar to crystals] and/or by design. The pattern 500-H is an example of the simple repetitive pattern, which can be necessity or design mimicking necessity. And frankly, this fairly obvious distinction has been well known for a long time, so the plain point is this is a concept and perception gap triggered by ideological bias in a context of polarisation over the design inference. But when you are a reasonably educated person and the matter has been pointed out to you in a reasonably clear way, then clinging to such a gap begins to look a lot like closed-mindedness. To show that you are not being closed-minded, kindly accurately put the above in your own words, and then discuss it and its implications. KF kairosfocus
G2:
Graham2, December 21, 2013: I think Neil @58 got it: If we were to label coins with many different symbols, not just H/T, then ALL outcomes would look random and we would be surprised at none of them. What used to be all H would now appear random, just like the rest. Yep, I will buy that.
Nope. For instance, X and Y would add informational features; they would not subtract the underlying ones. And if you were to write an alphabet's worth of characters over and over on coins scattered H vs T, it still would not change the fact that 500 coins all H would be maximally unlikely on a chance process. KF kairosfocus
oops, your browser! kairosfocus
PS: If you link the page, you can then use the in-page search feature of your blog to see if interesting digit strings crop up. I find that consistently, you may find 5 - 6 digits that strike us, but 7 up begins to get no hits. That looks like a threshold of 1 in a million . . . and that happens to be precisely the number of digits we have! kairosfocus
G2: The digits of pi are an example of the uncorrelated clash of two deterministic systems giving rise to effective, evident -- as opposed to proved -- randomness. The ratio of the circumference to the diameter of a circle has no necessary correlation with the decimal place-value notation system, and so it is no surprise that we can use tables of pi -- cf here, 1 mn digits -- as random number tables for practical purposes. Of course, even pseudorandom numbers can be used as random numbers for many purposes. Here's a block:
84666104665392171482080130502298052637836426959733 70705392278915351056888393811324975707133102950443 03467159894487868471164383280506925077662745001220 03526203709466023414648998390252588830148678162196 77519458316771876275720050543979441245990077115205
Similarly, it is possible to use the local line loop codes -- phone numbers -- of telephones in a book as a poor man's random number table, based on the same root of chance. If you want effectively guaranteed chance digits, get a Zener diode and drive a circuit that flattens out the distribution: quantum noise. Sky noise may work as well. Good old-fashioned Johnson noise from a high-value resistor would also work. KF kairosfocus
Apparently, no one knows if the digits of PI are randomly distributed; it's an unsolved problem, but it certainly appears that way. Graham2
And PI doesn't have that? Graham2
PI: The single largest excursion from a 50/50 distribution would be about 9 straight tosses of either heads or tails out of 500 tosses. That would not make me suspicious of the sequence in terms of deviation. goodnight Upright BiPed
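The figure of about 9 straight heads or tails in 500 tosses agrees with the known result that the longest run in n fair flips is typically near log2(n). A small Monte Carlo sketch (illustrative only, not from the thread):

```python
import random
from math import log2

random.seed(0)

def longest_run(n):
    """Length of the longest run of identical faces in n fair flips."""
    flips = [random.randint(0, 1) for _ in range(n)]
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

trials = [longest_run(500) for _ in range(2000)]
avg = sum(trials) / len(trials)
print(f"average longest run: {avg:.1f} (log2(500) = {log2(500):.1f})")
```

So a run of 9 or so is exactly what chance predicts for 500 flips; it is the all-heads extreme, not an ordinary run, that sticks out.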
So could you answer the question about the PI case ... would you regard this with any suspicion? Sal? Graham2
You are probably right about the H/T distribution test. I don't mind that, but in general we are suspicious of outcomes that don't 'look' random. The case of 500 heads would be suspicious on 2 counts: the unexpected distribution, and the fact that it matches what we regard as 'unnatural'.
We regard them as "unnatural" only because we have studied them enough to know what "natural" is. This particular practice has served humanity very well. We look for both regularities and their counterparts. The incessant attempt to paint ID proponents as seeing "patterns everywhere" is cheap BS offered as a rhetorical placemat in lieu of engaging the arguments that ID proponents actually make. The "500 fair coins" conversation has been a great testament to that attempt. You should not have stood on the trivial fact that a coin has two faces which are equally likely to appear - while ignoring the fact that coins have two faces that must equally appear as a regularity of the physical event known as a "fair coin toss". Upright BiPed
You are probably right about the H/T distribution test. I don't mind that, but in general we are suspicious of outcomes that don't 'look' random. The case of 500 heads would be suspicious on 2 counts: the unexpected distribution, and the fact that it matches what we regard as 'unnatural'. My example of PI is a better test. The H/T distribution is (I presume) close to 50/50. Would you regard such an outcome as suspicious? Graham2
If you tossed 250 heads straight, it would still deviate wildly from the known value of a fair coin toss. For some reason you seem to have trouble understanding that. Following it with 250 straight tails would not make it an even distribution. good grief. Upright BiPed
Again, you've lost your place in the conversation. The question you posed was: why would I be suspicious if 500 fair coin tosses came up all heads? And I gave my answer. The result of a fair coin toss is either one of two values, heads or tails, at roughly a 50/50 distribution. That is a known value of a physical event controlled by inexorable law. If the result deviates from that value by some wild factor, then I would have every right to be suspicious of that result. Do you disagree? The question that remains is why *you* would be suspicious of it ... setting aside the ridiculous answer that it's a psychological thing you carry around, having nothing whatsoever to do with the simple fact that it's a physical event with a known random distribution. Upright BiPed
If I tossed 250 heads then 250 tails, it also conforms exactly to the expected distribution (50% heads), but you can probably see through that one. Graham2
So if I tossed 500 coins and the result represented the value of PI (correct to 500 bits) you wouldn't see any problem. None at all. A perfectly reasonable result. I see. Graham2
Is there a follow-on comment you'd like to make Graham? Upright BiPed
It deviates wildly from a random distribution of fair coin tosses ... Is that your explanation? Graham2
Answer: no, it deviates from the expectation of 50% heads by a wide margin (on the order of 22-sigma or whatever)
The reason I said "whatever" is that when the term sigma is used, it implies a normal distribution. The binomial distribution can be approximated by the normal distribution, and thus I can borrow some language, but it is inexact in extreme cases. In this case, the probability of all heads is 1 out of 2^500, or about 1 out of 3.3 x 10^150. When I put 26 sigma into Wolfram Alpha to get the expected frequency of 1 out of some huge number, I got 1/(1-erf(26/sqrt(2))) = 2 x 10^148, so 22 sigma actually understates the severity of the deviation if we are borrowing terminology from the normal distribution. Something like 26 sigma would be more accurate. As I said in the original discussion, the numbers involved are so extreme for the normal approximation to the binomial distribution that "22-sigma" becomes a figure of speech. So I actually understated my case. scordova
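These figures can be reproduced directly with the standard library (a Python sketch added for illustration; the 26-sigma odds use the same erf expression quoted from Wolfram Alpha):

```python
import math

n, p = 500, 0.5
mean = n * p                          # expected heads: 250
sd = math.sqrt(n * p * (1 - p))       # binomial standard deviation: ~11.18

sigma = (n - mean) / sd               # "all heads" deviation: ~22.4 sigma
p_all_heads = 0.5 ** n                # exact: 1 / 2^500, about 3e-151

# Normal-tail odds at 26 sigma, as computed in the comment:
# 1 / (1 - erf(26 / sqrt(2))) is about 2 x 10^148
odds_26_sigma = 1 / math.erfc(26 / math.sqrt(2))
print(sigma, p_all_heads, odds_26_sigma)
```

Note that 1 - erf(x) is erfc(x); this far out in the tail the normal approximation really is a figure of speech, which is why the exact 1/2^500 is the number that matters.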
I already gave my answer, and compared it to yours - but here again, my answer has nothing whatsoever to do with why you would find 500 heads suspect. That is the question at hand, why did you say that *you* would find 500 heads suspect? Apparently, it's all about a psychological thing you carry around. I think the weakness of that answer is rather ironic for someone who operates around here with such self-certainty, but it is what it is, and I am prepared to leave it at that. Upright BiPed
OK, then we are all waiting. Your explanation is ... Graham2
typo phychological -> psychological Upright BiPed
You've lost your place Graham. I accepted 'no answer' from you in #96, and left it alone. You then came back to say that I needed to "come clean". As crazy as it sounds, I actually have nothing whatsoever to do with why you would see 500 heads as suspect. I can tell you that I would see 500 heads as suspect, not because (as you suggest) it's a psychological thing we carry around, but because it deviates wildly from a random distribution of fair coin tosses. If I commissioned a research project of 500 parents, and sat down to find that everyone in my sample has a female child, then I can assure you I would have the director in my office to be "suspect" with. I would not think "Gee, all children are either girls or boys, so it must be a psychological thing I carry around". And no matter how many times the RD tells me that "having a girl is as likely as having a boy" I would not be swayed by that reasoning. I was just wondering why you are. Upright BiPed
UBP: For the umpteenth time, I have explained it at #79 as best as I can. If you have any questions about that, then ask. In the meantime please don't keep asking the same question. Graham2
Graham, there is nothing for me to "come clean" about. You said you'd be suspect of seeing 500 heads in a row. I am asking to think about it and tell me: Why? Upright BiPed
This is getting tiresome. My reply at #79 is the best I can explain it. Graham2
Sorry, here's the link again! http://m.youtube.com/watch?v=-G6UkPS9YjU He can actually move the tail! Jaceli123
Graham2 @79: Please define precisely which "random sequence" you have in mind that 500 heads would be just as probable as. ---- Then try re-reading #29. You are missing the boat. You are stuck at a simple Statistics 101 level. We've moved beyond that long ago. Everyone understands the point you are making and it (i) is entirely trivial, (ii) misses the point, and (iii) demonstrates that you are not willing to even think through your own acknowledgement as to why 500 heads in a row is suspicious. Since you can't seem to grasp our explanation, why don't you offer your own. Why does 500 heads in a row seem suspicious to you? Think about it carefully, and once you've come up with a decent answer our "incomprehensible" answers might suddenly make a lot more sense. Eric Anderson
Guys, I really hate to interrupt your discussion, but I would like you to check this video about a guy who can move his caudal appendage, or tail. Does this show common descent?! m.youtube.com/watch?v=xnxzqeT466A Jaceli123
Sal: I understand the difference perfectly, and it's really not subtle at all. My reply at #79 sums up my position as best as I can express it. I was trying to get UBP to come clean. Graham2
So, no answer. Okay. Upright BiPed
Graham2 asked: Do you think 500 heads has the same probability as other sequences or not?
It has the same probability as any other specific sequence, but that is not the question being asked. The question being asked is whether a chance process can be expected to make 500 fair coins all heads, reasonably speaking. These are two subtly different questions, and you are equivocating one with the other. Here are the questions:
Do you think 500 heads has the same probability as other sequences or not?
Answer: yes
Do you think 500 fair coins all heads (100% heads) can emerge out of a random process, practically speaking?
Answer: no, it deviates from the expectation of 50% heads by a wide margin (on the order of 22-sigma or whatever). You said it's a psychological effect, and I actually agreed with you. Some IDists find that uncomfortable, btw. The problem, however, is that with respect to all coins heads, the target has been well known throughout human history inasmuch as humans like simple repetitive patterns, and all coins heads is only an extension of a pre-existing fixed target. Thus, the objection of after-the-fact drawing of targets cannot be sustained, and thus you would suspect something was up if you saw 500 fair coins on a table all heads. Even you admitted you'd be suspicious, and all I'm trying to do is explain why you'd be suspicious. PS For what it's worth, I once arranged 50 fair coins all heads on a plate. The phenomenon isn't that improbable if intelligence is involved. scordova
That's the best I can express it. It's a bit like moving a target to fit the arrow. Do you think 500 heads has the same probability as other sequences or not? Graham2
#79 doesn't answer the question. Why would you "suspect [500 heads] wasn't a fair throw"? Upright BiPed
Then you didn't read #79. Out of interest, do you think 500 heads has the same probability as other sequences or not? Graham2
I've been here all along. Why would you "suspect [500 heads] wasn't a fair throw"? Upright BiPed
You've come in a bit late; try reading from here backwards for a bit to get up to speed. Graham2
So you are not going to say why you would "suspect [500 heads] wasn't a fair throw"? Why not? Upright BiPed
I think this has been done to death. Graham2
#79
If 500 heads were thrown, we would suspect it wasn't a fair throw.
Why? You say it has the same probability as any other outcome. Upright BiPed
jerry
So the difference is that one proportion is incredibly unlikely and the other is much more common. It is not a specific sequence but a specific proportion that is at issue.
Don't you mean this?

P(HTTT) = (0.5)^4 = 0.0625
P(HHHH) = (0.5)^4 = 0.0625

Unless you talk about permutations, proportions don't make a difference to coins:

P(exactly 1 head in 4 tosses) = [4!/(1!3!)] x (0.5)^4 = 4 x 0.0625 = 0.25 selvaRajan
I think Neil @ 58 got it: If we were to label coins with many different symbols, not just H/T, then ALL outcomes would look random and we would be surprised about none of them. What used to be all H would now appear random, just like the rest. Yep, I will buy that.
Nope. https://uncommondescent.com/computer-science/illustrating-embedded-specification-and-specified-improbability-with-specially-labeled-coins/ scordova
[...] may have escaped many, myself included, until Neil made this comment in another discussion, is that when the coin manufacturer created a heads-tails coin (instead of a 2-headed or [...] Illustrating embedded specification and specified improbability with specially labeled coins | Uncommon Descent
I have not followed this at all but haven't seen the real issue in my cursory reading. It is not that the 500 heads is a unique sequence (every sequence is unique); it is that it represents a specific proportion of heads versus tails, and that proportion is also extremely rare. If it was 499 heads and one tail, we would still be highly suspicious because there are only 500 possible sequences that give rise to this proportion, compared to other proportions. When one offers a different mix of heads and tails, say 250 heads and 250 tails, there is an astronomically large number of ways of getting this combination.

So the difference is that one proportion is incredibly unlikely and the other is much more common. It is not a specific sequence but a specific proportion that is at issue.

In DNA, it is those combinations that give rise to a folding protein versus those combinations that do not. The proportion of combinations that give rise to a folding protein is infinitesimally small compared to those that do not. So how does one stumble on one of these incredibly rare instances of a folding protein, or on the incredibly small number of instances of 500 straight heads? Not by chance or any naturalistic process known to man.

As an aside, someone once designed a machine that flipped a die onto a table so that it showed the same number each time. So 500 heads should be easy. Just note that I said it was a machine that was designed. jerry
Mapou, I think you too are confusing a specific sequence with the permutations of a proportion:

P(HHHH) = (0.5)^4 = 0.0625
P(exactly 1 head in 4) = [4!/(1!3!)] x 0.0625 = 0.25
P(exactly 2 heads in 4) = [4!/(2!2!)] x 0.0625 = 0.375 selvaRajan
Box, I will make this clear: every sequence has the same probability. I was talking about permutations of the sequence, because Sal talked about students arranging the sequence, building their design (and Penney's Game, in respect of pitting the first sequence obtained against another):

P(HHHH) = (0.5)^4 = 0.0625
P(exactly 1 head in 4) = [4!/(1!3!)] x 0.0625 = 0.25
P(exactly 2 heads in 4) = [4!/(2!2!)] x 0.0625 = 0.375

So yes, Paul, Graham2 and cantor are right in saying every sequence has the same probability, unless Sal meant something else. selvaRajan
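The distinction between a specific sequence and a proportion can be verified by brute-force enumeration (an illustrative Python sketch, not from the thread):

```python
from itertools import product

# All 2^4 = 16 equally likely outcomes of 4 fair coin tosses
outcomes = list(product("HT", repeat=4))

p_sequence = 1 / len(outcomes)   # any one specific sequence: 0.0625

def p_heads(k):
    """Probability of exactly k heads, counting all orderings."""
    return sum(1 for o in outcomes if o.count("H") == k) / len(outcomes)

print(p_sequence)   # 0.0625
print(p_heads(1))   # 0.25  (4 orderings: HTTT, THTT, TTHT, TTTH)
print(p_heads(2))   # 0.375 (6 orderings)
```

Every individual sequence is equally likely; it is the count of orderings sharing a proportion that makes near-50/50 mixes common and all-heads vanishingly rare.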
Graham2, you make no sense whatsoever. There is no use in arguing with you. Mapou
Box: If 500 heads were thrown, we would suspect it wasn't a fair throw. If an (apparently) random sequence was thrown, we wouldn't be concerned, yet both sequences have exactly the same probability. It appears counter-intuitive, but it's not. It's a psychological effect, nothing to do with mathematics (the universe doesn't care). We are suspicious of the 500 heads because it matches a small pool of sequences that we carry around with us, that we regard as 'special'. Graham2
Graham2, What is your comment on this article by Scordova? excerpt:
500 coins heads is (500-250)/11 = 22 standard deviations (22 sigma) from expectation! (...) Bottom line, the critic at skeptical zone is incorrect. His statement symbolizes the determination to disagree with my reasonable claim that 500 fair coins heads is inconsistent with a random physical outcome.
Box
EA #75: Your post at #29 was sort of incomprehensible. The probability of any sequence is (1/2)^n. Look it up. If you don't agree, then use just 3 coins as an example and tell us the probability of HHH and something else (e.g., HTH). In other words, put your money where your mouth is. A number now, not all that waffle about 'specified' etc. An actual number. (Before you post it, just do a quick check that all 8 values add up to 1: this is a dead giveaway.) Graham2
Post 24, Paul Giem, December 21, 2013 at 12:09 am: Look not only at the number of ones and zeroes, but also at the two-dimensional pattern. Then get ready to defend your answer.
I got the right answer (B) without seeing the pattern. B has 85 heads. The probability of getting 85 or fewer heads in one random trial of 210 bits is about 0.3%. A has 104 heads. The probability of getting more than 85 and fewer than 105 heads in one random trial of 210 bits is about 47%. Assuming you didn't: 1) design A and then purposely add (or subtract) heads to make the number 104, and 2) randomize B until you got an outlier, then ... is it not valid to infer (with some greater than 50/50 probability) that B is the designed pattern? cantor
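The two tail probabilities quoted above can be checked with an exact binomial CDF (a Python sketch added for illustration; the 0.3% and 47% figures are from the comment):

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 210
p_B = binom_cdf(85, n)                       # P(85 or fewer heads): ~0.003
p_A = binom_cdf(104, n) - binom_cdf(85, n)   # P(86..104 heads): ~0.47
print(p_B, p_A)
```

So pattern B, with a head count almost 3 sigma below the mean, is the statistical outlier, which is the basis of the inference in the comment.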
Graham2 @70: C'mon. Neil's example doesn't change a single thing. This is all pretty simple. We are still dealing with a binary coin. And if you add more characters, we just end up with more possible combinations. Doesn't change a thing. In Neil's example are you saying that the probability of getting all the coins to fall with X mark upside is the same as the probability of getting a sequence that doesn't have all the X marks upside? Or are we going to be more careful with our use of "any other sequence" kind of language? Re-read my #29. That is the key. Eric Anderson
Once we agree that humans can differentiate between random noise and patterns, the next step is to understand that, absent a reasonable explanation, humans are hardwired to infer design from nonrandom patterns. It's just what we do. From here: http://www.dailymail.co.uk/sciencetech/article-1136482/Brains-hardwired-believe-God-imaginary-friends.html So when we see something that is obviously not random, like the rapid emergence of body forms in the Cambrian explosion, the default explanation is Design. There is just no getting around it; it's in our genes. It might not be fair, but the burden of proof in such cases will always fall to the person denying design. It's human nature. Until a convincing explanation can be given that does not rely on chance, people without an axe to grind will always assume design once we rule out randomness. That is why ID will not go away no matter the efforts of the critics. peace fifthmonarchyman
No doubt cantor, but what's really inexplicable -- or maybe just weird -- print up Pedro Giem's (yes, in that universe he's an illegal alien -- WHOOPS! I'll get in trouble in both universes talking that way -- I mean undocumented worker) on an 8 1/2 x 11 sheet of paper, default margins and fonts in MF Word (the meanings of "soft" and "fuzz" being flipped over there) and punch holes where the zeros are, staple it to a scroll, and run it through a 1923 Wrigley Player Piano, and it plays six measures of "Ode To Joy" backwards. But -- and here's the really weird part -- play it backwards to hear that piece forward and it plays the chorus of "Sgt. Pepper" instead; the same tune as that hit in this universe, except the Walrus is Ringo (how do you like those cucumbers? yes, over there cucumbers, not apples grow on trees, got to Eve, hit Newton, inspired Jobs, etc., etc., etc.). Which, you know, that's just like science, right? The multiverse makes duck soup of all the vexing OOL questions, but at the same time poses even more vexing questions. Go figure. jstanley01
it spells “supercalifragilisticexpialidocious.” Using the same number of digits, no less. Go figure. 210 bits is more than adequate to code that word, with lots of bits left over. cantor
I think Neil @ 58 got it: If we were to label coins with many different symbols, not just H/T, then ALL outcomes would look random and we would be surprised about none of them. What used to be all H would now appear random, just like the rest. Yep, I will buy that. Graham2
Hey all, I find this subject fascinating. Humans are indeed pattern-recognizing creatures. A couple of years ago it was shown that humans can tell actual financial data from random permutations of the same numbers. From here: http://arxiv.org/abs/1002.4592 I think everyone would agree that the charts with the actual data are not the product of intelligent design; they are just not the product of random chance, so they stick out to us. Perhaps it's not that we always recognize design when we see it; it's just that we know what random noise looks like, and nonrandom configurations stand out like a sore thumb against that background. As has been pointed out here, ruling out chance is only the first step in a design inference. It's sad that the "never give an inch" crowd won't let us get past that first step; if they did we could have some very interesting discussions. Peace fifthmonarchyman
I was noticing, in the adjacent multiverse it spells "supercalifragilisticexpialidocious." Using the same number of digits, no less. Go figure. jstanley01
Paul Giem at #60, I repent. That the sequence spells "chance" is no less likely than if it were to spell "design", I confess. And with such a relatively small sample space (around 10^63 permutations) it had to happen eventually. Chance Ratcliff
Instead of using heads and tail, take a marker and put an X on one side of each coin and a Y on the other side. You can do this in a mixed up way, so that the X is on the heads side of some coins and on the tails side of the others. The mathematics works just as well with the X and Y as it does with the heads and tails. And, in a sense, that’s the whole point of mathematics being abstract. The mathematics doesn’t make the X special, and it doesn’t make heads special. Maybe you are making them special.
Yes indeed, and you have just stated more clearly than I ever could why the LLN will work just as well for a variety of coin sequences, not just all coins heads, but for independently specified sequences. You've given me the means to make the arguments more forceful. Thank you. This has been a fruitful exchange. scordova
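The point that the Law of Large Numbers is label-agnostic is easy to see in simulation: relabel the faces however you like, and the proportion of either face still converges to 0.5, while extreme counts like 500-of-500 never show up. A quick illustrative sketch (assumptions: fair two-sided coin, pseudorandom flips):

```python
import random

random.seed(42)

def proportion_of_face(n):
    """Fraction of flips showing one designated face (call it H, X, whatever)."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# LLN: convergence toward 0.5 as n grows, whatever the faces are labeled
for n in (100, 10_000, 1_000_000):
    print(n, proportion_of_face(n))

# In many 500-flip trials, the head count stays near 250; all-heads never occurs
counts = [sum(random.randint(0, 1) for _ in range(500)) for _ in range(1000)]
print(min(counts), max(counts))   # typically within roughly 250 +/- 40
```

Renaming H to X changes nothing in the code or the mathematics, which is exactly why an independently specified sequence like all-X is just as suspicious as all-heads.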
In math or science disciplines, there can be multiple paths to arrive at the same conclusion. The route I'm taking in this line of inquiry is probability and statistics rather than information. Why? The simplicity of the argument. Many of the students in my ID class would not be able to appreciate arguments in favor of ID using information ideas. As some have guessed, part of my offerings at UD and TSZ are meant to clean up teaching materials for the ID and Creationist underground matriculating through university. Developing simple, succinct, accessible, unassailable arguments is my goal. I'd like to publicly thank Nick Matzke for empirically proving it is possible to develop unassailable arguments for the students in the ID underground. :-) scordova
It is a universally immutable truth. But it is a truth about abstractions, and its universality extends only to those who share the same abstractions. The mathematics says nothing about heads. We make a mathematical model, and we might represent heads in that model. Then the mathematics talks about the model. We have to interpret that back to reality. The mathematics won’t do that interpretation for us. Instead of using heads and tail, take a marker and put an X on one side of each coin and a Y on the other side. You can do this in a mixed up way, so that the X is on the heads side of some coins and on the tails side of the others. The mathematics works just as well with the X and Y as it does with the heads and tails. And, in a sense, that’s the whole point of mathematics being abstract. The mathematics doesn’t make the X special, and it doesn’t make heads special. Maybe you are making them special.
Granting for the sake of argument that you are right: if we have made "all heads" special in our minds, and we then find 500 fair coins all heads, we can reject chance as a mechanism for creating a pattern that looks like the product of a like-minded process. My point was, it's not ultimately important whether "all fair coins heads" is somehow special in a philosophical sense, any more than in my example using blueprints. My point is, the real probability in question is not the probability of any one coin configuration, but the probability that it lines up with patterns hard-wired or learned by us.

For example, Paul Giem showed how a learned pattern is particularly special to others. I actually didn't see the word "chance", but I did notice: 1st line identical to 6th line, 2nd line identical to 5th line. I came up with the design inference via a different route than JDH. I couldn't figure out why Paul made the 3rd and 4th lines different from each other, since the symmetry was destroyed, but there was enough to make the design inference. JDH was able to see why the symmetry was destroyed because he was like-minded with Paul; I wasn't.

What's interesting here is that the design inference is valid even if only one person is able to see a like-minded pattern. Chance can be rejected as a hypothesis. Bill Dembski illustrates this with the Champernowne sequence. Some people will recognize the Champernowne sequence in a binary string; other people won't. Thus some will fail to see the string is designed. But the fact that only a few people can recognize the string does not invalidate the design inference. Now that the cat is out of the bag thanks to JDH and Mr. Ratcliff, we all see the design inference which some of us didn't see at first. I see it more clearly now than I did earlier with my primitive analysis. It doesn't matter why we have an independent specification; it just has to be independent. It will work.

Some will say this process is subjective. It doesn't matter; it's an objective fact that a subjective thought process in one mind produces products that other like-minded people can recognize as the product of a like mind, not a mindless chance process. The circumstantial case then is that biological organisms look like they were made by a like-minded process, except by a far greater mind. I know biology doesn't look designed to you. I respect that, but that doesn't invalidate the design inference. The fact that some people didn't recognize the design in Paul's example doesn't invalidate the fact that it was designed. I honestly didn't recognize the complete design by Paul myself, only parts of it.

If I framed the question: "Neil, all fair coins heads is a pattern special to some human minds (if not all). Would you, practically speaking, reject a mindless chance process as an explanation if you found such a pattern in 500 fair coins?" The question isn't about the inherent improbability of any one configuration (a mistake many ID proponents make when trying to define CSI), but the improbability that a configuration will line up with the patterns our minds would view as the product of a like mind, not some mindless process. Sal scordova
Jaceli123 @ 55, yes UB's post is a great resource. I have also created a new post on the topic you raise. Barry Arrington
Jaceli123 @ 55, I see that you've asked that question a couple of times. The answer is "both": it is information and it is a chemical reaction. All transfers of information are material events; that's how they have material effects in a material universe. You might try reading this thread for some perspective. Upright BiPed
Scordova: Our math proceeds in part based on our hardwiring — we’re hard wired to think in terms of expectation and averages and regularities.
Neil Rickert: The students that I have taught must have somehow missed that hardwiring part.
Toddlers prove that Sal is right and Neil is wrong:
Gopnik: First, the study—and a small IQ test for you. Suppose you see an experimenter put two orange blocks on a machine, and it lights up. She then puts a green one and a blue one on the same machine, but nothing happens. Two red ones work, a black and white combination doesn't. Now you have to make the machine light up yourself. You can choose two purple blocks or a yellow one and a brown one. But this simple problem actually requires some very abstract thinking. It's not that any particular block makes the machine go. It's the fact that the blocks are the same rather than different. Other animals have a very hard time understanding this. Chimpanzees can get hundreds of examples and still not get it, even with delicious bananas as a reward. As a clever (or even not so clever) reader of this newspaper, you'd surely choose the two purple blocks. The conventional wisdom has been that young children also can't learn this kind of abstract logical principle. Scientists like Jean Piaget believed that young children's thinking was concrete and superficial. And in earlier studies, preschoolers couldn't solve this sort of "same/different" problem. But in those studies, researchers asked children to say what they thought about pictures of objects. Children often look much smarter when you watch what they do instead of relying on what they say. We did the experiment I just described with 18-to-24-month-olds. And they got it right, with just two examples. The secret was showing them real blocks on a real machine and asking them to use the blocks to make the machine go. Tiny toddlers, barely walking and talking, could quickly learn abstract relationships. And they understood "different" as well as "same." If you reversed the examples so that the two different blocks made the machine go, they would choose the new, "different" pair.
Box
Seriously, one of my critiques is at least superficially valid. Which one is it, and is it really valid? To repeat from comment 12,
If I tell you (honestly, but how do you know besides that you trust me) that one was done by flipping coins, and one was done by an intelligent design, can you tell which is which? Do you believe me? And do you know how the intelligently designed pattern was made? Do you have any way to tell? (this is less than 500 bits/flips)
Anybody want to answer? Paul Giem
Chance Ratcliff (#47), You are just conditioned to see your name everywhere. Can't you listen to reason? Didn't you find any of my arguments in #35 persuasive? You're just impervious to reason, you Creationist IDiot. You are probably a Christian Reconstructionist, getting ready to take us back to the Dark Ages when they believed in a flat earth. Paul Giem
Post 53, selvaRajan, December 21, 2013 at 6:37 am: "unordered sequences" is an oxymoron. cantor
scordova:
The way we conceive of mathematics implicitly makes certain sequences special even though they are no more improbable than any other.
Perhaps that is true of the way you conceive of mathematics. It is not how I conceive of mathematics.
For example, the conception of the Law of Large numbers and averages converging on expectation has implicitly made “all coins heads” special.
No, it hasn't. You seem to be confusing your personal interpretation of the theorem with the theorem itself.
Our math proceeds in part based on our hardwiring — we’re hard wired to think in terms of expectation and averages and regularities.
The students that I have taught must have somehow missed that hardwiring part.
The way we state our math is a reflection of our hardwired intuitions.
They missed that hardwiring part, too.
So deeply hard wired is “all coins heads” that our math which resulted from our hard-wired thought processes, spits out things like expectation values and averages and thus our math automatically makes “all heads” special. As if our math were some universally immutable truth.
It is a universally immutable truth. But it is a truth about abstractions, and its universality extends only to those who share the same abstractions. The mathematics says nothing about heads. We make a mathematical model, and we might represent heads in that model. Then the mathematics talks about the model. We have to interpret that back to reality. The mathematics won't do that interpretation for us. Instead of using heads and tail, take a marker and put an X on one side of each coin and a Y on the other side. You can do this in a mixed up way, so that the X is on the heads side of some coins and on the tails side of the others. The mathematics works just as well with the X and Y as it does with the heads and tails. And, in a sense, that's the whole point of mathematics being abstract. The mathematics doesn't make the X special, and it doesn't make heads special. Maybe you are making them special. Neil Rickert
Put another way, there is no statistical test that will say that a particular sequence is not random; it will only say that it is very unlikely that it is. tkeithlu
Get back to the basics. Statistical tests do not reject an outcome as impossible, they only assign a probability to the event. 500 heads is extremely unlikely, and so is our communicating by internet rather than jumping about in a tree looking for breakfast, but neither is impossible. tkeithlu
Now I have a question for you guys: is DNA simply chemical reactions between DNA and RNA? Does it really contain information, or is it just the result of a chemical potential? From this video: http://m.youtube.com/watch?v=18ivdLtR7IA Jaceli123
Of note:
Fred Sanger, Protein Sequences and Evolution Versus Science - Are Proteins Random? - Cornelius Hunter, November 2013
Excerpt: Standard tests of randomness show that English text, and protein sequences, are not random.
http://darwins-god.blogspot.com/2013/11/fred-sanger-protein-sequences-and.html

Measuring the functional sequence complexity of proteins - Kirk K. Durston, David K.Y. Chiu, David L. Abel, Jack T. Trevors, 2007
Excerpt: In this paper, we provide a method to measure functional sequence complexity (in proteins). Conclusion: This method successfully distinguishes between order, randomness, and biological function (for proteins).
http://www.tbiomed.com/content/4/1/47/

A Scientific Method to Detect Intelligent Design in Biological Life - Kirk Durston, October 15, 2013
Excerpt: Intelligent Design in biological life:
1. If an effect requires, encodes or produces statistically significant levels of functional information or functional complexity, it requires an intelligent mind to produce. (from above hypothesis)
2. The universal protein Ribosomal S12 requires at least 359 bits of functional information to encode.
3. Therefore, Ribosomal S12 required an intelligent mind to encode.
http://p2c.com/students/blogs/truthquest/2013/10/scientific-method-detect-intelligent-design-biological-life

(A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? - Kirk Durston, Ph.D. Biophysics, 2012
Excerpt (Page 4): The Probabilities Get Worse. This measure of functional information (for the RecA protein) is good as a first-pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above, and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduces the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search.
http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf
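The Durston et al. method excerpted above measures, per alignment site, the drop from the null-state uncertainty to the Shannon entropy of the residues actually observed at that site, summed over all sites. A toy sketch of that per-site calculation follows; the four-sequence alignment here is invented purely for illustration, and the real method uses thousands of aligned sequences and a weighted ground state rather than a flat log2(20):

```python
from collections import Counter
from math import log2

def site_functional_bits(column, alphabet_size=20):
    """Functional information at one alignment site:
    log2(alphabet) minus the Shannon entropy of the residues observed there."""
    n = len(column)
    h_func = -sum((c / n) * log2(c / n) for c in Counter(column).values())
    return log2(alphabet_size) - h_func

# Hypothetical toy alignment of four protein fragments (one row per sequence):
alignment = ["MKVL", "MKIL", "MRVL", "MKVL"]
columns = zip(*alignment)  # one tuple of residues per site
total_bits = sum(site_functional_bits(col) for col in columns)
print(round(total_bits, 2))  # → 15.67 for this toy alignment
```

Fully conserved sites contribute the maximum log2(20) ≈ 4.32 bits each; variable sites contribute less, which is why real estimates like the 359 bits quoted for Ribosomal S12 require large alignments.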
Of interest regarding the universal RecA protein that Dr. Durston studied:
The World's Toughest Bacterium - 2002
Excerpt: Several recent studies of the bacterium's DNA repair pathway have focused on one protein that is now known to be essential for radiation resistance, the RecA protein. "When subjected to high levels of radiation, the Deinococcus genome is reduced to fragments," they write in Proceedings of the National Academy of Sciences. "RecA proteins may play role in finding overlapping fragments and splicing them together."
http://www.genomenewsnetwork.org/articles/07_02/deinococcus.shtml

Extreme Genome Repair - 20 March 2009
Excerpt: If its naming had followed, rather than preceded, molecular analyses of its DNA, the extremophile bacterium Deinococcus radiodurans might have been called Lazarus. After shattering of its 3.2 Mb genome into 20-30 kb pieces by desiccation or a high dose of ionizing radiation, D. radiodurans miraculously reassembles its genome such that only 3 hr later fully reconstituted nonrearranged chromosomes are present, and the cells carry on, alive as normal.
http://www.sciencedirect.com/science/article/pii/S0092867409002657
I would say that the actions of the RecA protein point to more than just a slight anomaly in the whole 'bottom up' neo-Darwinian paradigm! bornagain77
Hi Graham2, Dr. Paul Giem, Sal's post is about "comparing to blueprint," so why are you not pitting the sequences against each other as shown in comment #14, or at least checking the permutations and calculating the unordered outcomes' probabilities?
P(HHHH) = (0.5)^4 = 0.0625
P(HTTT) = [4!/(1!3!)] × 0.0625 = 0.25 (any one head)
P(HHTT) = [4!/(2!2!)] × 0.0625 = 0.375 (any two heads)
selvaRajan
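selvaRajan's arithmetic can be checked mechanically. A minimal sketch using Python's standard library; here p_heads(k, n) is the probability of a head-count, i.e. an unordered outcome, while every specific ordered 4-flip sequence keeps probability (1/2)^4:

```python
from math import comb

def p_heads(k, n):
    """Probability of exactly k heads in n fair-coin flips (unordered outcome)."""
    return comb(n, k) / 2**n

print(p_heads(4, 4))  # HHHH is the only 4-head arrangement -> 0.0625
print(p_heads(1, 4))  # 4 one-head arrangements (HTTT, THTT, ...) -> 0.25
print(p_heads(2, 4))  # 6 two-head arrangements (HHTT, HTHT, ...) -> 0.375
```

The binomial coefficient comb(n, k) counts the arrangements in each head-count class, which is exactly why the unordered classes have different probabilities even though every ordered sequence is equally likely.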
I'll make a radical statement. The way we conceive of mathematics implicitly makes certain sequences special even though they are no more improbable than any other. For example, the conception of the Law of Large Numbers, and of averages converging on expectation, has implicitly made "all coins heads" special.

But it seems so natural that all coins heads is special that we hardly give it a thought -- that's how deeply hard-wired certain specifications are in the human mind; it's like we were born to recognize them. Our math proceeds in part based on our hardwiring -- we're hard-wired to think in terms of expectation and averages and regularities. The way we state our math is a reflection of our hardwired intuitions. So deeply hard-wired is "all coins heads" that our math, which resulted from our hard-wired thought processes, spits out things like expectation values and averages, and thus our math automatically makes "all heads" special. As if our math were some universally immutable truth.

Whether "all heads" is special in the ultimate philosophical sense, I have no idea, and I don't care. The same question could be posed of math in general: is it a figment of our thought process, or is it real in some ultimate philosophical sense? scordova
Graham2: but not because it is more or less likely than any other sequence, but because (as I understand it), it matches a pre-defined sequence, one that we regard as significant
Yes, I believe that is correct. "Pre-defined sequence" is referred to as "independent specification" in ID literature. Specification is not limited to sequences. For example, a blueprint of a skyscraper is not really describing a sequence, but a skyscraper. "Independent" is used instead of pre-defined to cover situations where the specification came about independently, but after the design in question was observed by someone. For example, scale-free networks are now widely recognized as designs for the internet. It turns out such networks already existed in biological systems. Technically speaking, the "blueprint" for scale-free networks arrived long after the first cell came into existence. Thank you for your participation. scordova
Thanks Graham, I was just making sure I understood you. Just one last thing. If you were to walk into the room when I had finished tossing my coins, and I asked you to start at the beginning and work your way along the sequence to verify that my desired sequence had in fact been successful, would there be any point at which you might become suspicious, or might you say 'well, it's nothing special really Peter ... the chance of that happening is (1/2)^n. In fact do it again, me old flower, you'll see'? PeterJ
Yes. Is this going anywhere ? Graham2
Thanks Graham, So what you're saying is this: if I were to sit down with 500 coins and decide that I want the sequence '200 heads, followed by 10 tails, then 10 heads, up to 480 tosses, ending with 1 tail, 1 head repeated until 500 tosses are complete', are you saying that the probability of that happening is (1/2)^n? As I take it, that would be an ordered sequence, agreed? PeterJ
Yes. Graham2
Hi Graham, I know this stuff is easy as pie to you, and I may come across as a complete dunce, but could you better explain this to me? You say, "The probability of getting any sequence is (1/2)^n. I'm using sequence to mean an ordered sequence (I think this is the term)." So basically what you are saying is 'the probability of getting an ordered sequence is (1/2)^n', is that correct? PeterJ
CC: Your comment at #28 doesn't make a lick of sense to me. Jeez, this is school stuff: The probability of getting any sequence is (1/2)^n. I'm using sequence to mean an ordered sequence (I think this is the term). If you think the probability is something else, then tell us. A group of 3 should be enough. Just tell us what you think is the probability of HHH. A number now, not heaps of verbiage, a number. A single number. (Hint: the answer is 1/8). Graham2
The weirdness I noted at my #41 was an indexing error in my program. The missing triplets reappeared when it was corrected. All is well. Chance Ratcliff
One more time, with feeling:

 ooo  o   o   o   o   o  ooo  ooooo
o   o o   o  o o  oo  o o   o o
o     o   o o   o o o o o     o
o     ooooo ooooo o o o o     oooo
o     o   o o   o o  oo o   o o
 ooo  o   o o   o o   o  ooo  ooooo

Chance Ratcliff
Paul Giem => Graham said (and he is right) that any one sequence is just as likely as any other sequence
Me => I just gave an elaborate reason in #28 for why the sequences can't have the same probability. Can you please explain why you still believe any sequence has the same probability? coldcoffee
So WTF with the missing triplets in the B set, with adjacent triplets and sequential triplets? It just seems odd. (cf. #31). Chance Ratcliff
Definitely "bad" design. ;) Chance Ratcliff
Even better:

 ***  *   *   *   *   *  ***  *****
*   * *   *  * *  **  * *   * *
*     *   * *   * * * * *     *
*     ***** ***** * * * *     ****
*     *   * *   * *  ** *   * *
 ***  *   * *   * *   *  ***  *****

Chance Ratcliff
Squint if y'all don't see it right away. Chance Ratcliff
A better rendering:

 111  1   1   1   1   1  111  11111
1   1 1   1  1 1  11  1 1   1 1
1     1   1 1   1 1 1 1 1     1
1     11111 11111 1 1 1 1     1111
1     1   1 1   1 1  11 1   1 1
 111  1   1 1   1 1   1  111  11111

Chance Ratcliff
Paul Giem and JDH, well done.

 111  1   1   1   1   1  111  11111
1   1 1   1  1 1  11  1 1   1 1
1     1   1 1   1 1 1 1 1     1
1     11111 11111 1 1 1 1     1111
1     1   1 1   1 1  11 1   1 1
 111  1   1 1   1 1   1  111  11111

Chance Ratcliff
JDH, I find your proposal ridiculous. How can B possibly be designed? If it spells out CHANCE, isn't it incompetent design, with the middle arm of the E too low, and the A being misshapen? Isn't it bad design if it spells out CHANCE instead of DESIGN? Doesn't that disprove design?

Graham said (and he is right) that any one sequence is just as likely as any other sequence, so why should Sequence B be any more likely to be designed than Sequence A? You want to say that it spells out a word and therefore it is designed? Do you have any idea how many words can be made in a 210-bit string? Besides CHANCE and DESIGN, there could have been thousands, perhaps millions, of other words that could have been spelled, such as PURPLE, SWINGS, and HELPED, not to mention small letters, German words, Spanish words, Russian words in the Cyrillic alphabet, Greek words, Hebrew words, Arabic words, Korean words, Chinese words, Sanskrit words, and Sumerian words. Did you have a target sequence before you looked at the string? I bet you didn't. So why are you so sure that this particular string is designed? What about if you had arranged it into seven rows? The pattern you think you see entirely disappears. I think you are like Hamlet just peering into clouds and seeing familiar shapes. Why do you persist in seeing design?

Besides that, isn't 210 bits below the universal probability bound, and therefore you can't say that B is designed? Maybe, since God designs everything, both Sequence A and Sequence B were designed, in which case, how can you say that Sequence B has any more design than Sequence A? I bet you don't have any answers for this, you stupid punk! I bet you've never even taken statistics. You still wanna tell me B is designed? Paul Giem
Footnote to my #33, the triplets are more accurately described as octal, as opposed to decimal. Chance Ratcliff
Addendum to my #31: for the B group, whether adjacent triplets or sequential triplets, the base 10 numbers 0, 3, 4, and 7 are present, while 1, 2, 5, and 6 are missing. Strange. Chance Ratcliff
Paul Giem @12 Would I be literally spelling out "CHANCE" if I guessed that "B" was designed? JDH
Both A and B seem a little off.

Singles, Pairs, and Triplets histogram (adjacent pairs and triplets):

A
0: 125   1: 85
00: 34   01: 26   10: 31   11: 14
000: 15  001: 11  010: 12  011: 3   100: 8   101: 11  110: 4   111: 6

B
0: 106   1: 104
00: 26   10: 25   01: 29   11: 25
000: 17  011: 20  100: 21  111: 12

Singles, Pairs, and Triplets histogram (sequential pairs and triplets):

A
0: 125   1: 85
00: 70   01: 55   10: 54   11: 30
000: 37  001: 33  010: 44  011: 11  100: 33  101: 21  110: 10  111: 19

B
0: 106   1: 104
00: 49   01: 57   10: 57   11: 46
000: 49  011: 56  100: 57  111: 46

In string A, all pairs and triplets are represented, but there is some definite disparity between occurrences. In string B, some triplets are not represented at all, whether adjacent pairs or sequential iterations. The triplets 000, 111, 100, and 011 are represented in both B sets, but the other half of the triplet permutations are missing. Neither one looks strictly random, but if only one is actually designed, I'd go with B.

EDIT: I assume coldcoffee's patterns are found on B. Chance Ratcliff
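The two tabulations above appear to differ only in window stepping: "adjacent" n-grams read non-overlapping chunks (adjacent pairs sum to 105 for a 210-bit string) while "sequential" n-grams use a sliding window (sequential pairs sum to 209). A sketch under that assumption, applied to the first of Paul Giem's two strings:

```python
from collections import Counter

def ngram_counts(bits, n, overlapping=True):
    """Count n-grams in a bit string, either with a sliding window
    (overlapping) or in non-overlapping adjacent chunks."""
    step = 1 if overlapping else n
    return Counter(bits[i:i + n] for i in range(0, len(bits) - n + 1, step))

A = ("10010001010100111010110010000111011"
     "00101001100001010111001010110110110"
     "10101010011000001001010101010000000"
     "01101110111010001101111001100011110"
     "11011100111111010000001011110100111"
     "01001001011110001101000001000111101")

singles = ngram_counts(A, 1)
print(singles)                                # counts of 0s and 1s
print(ngram_counts(A, 2, overlapping=False))  # adjacent pairs
print(ngram_counts(A, 3))                     # sequential triplets
```

A missing triplet in a 208-window sample is itself a statistical signal, which is the observation being made about the B string.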
=> CentralScrutinizer, Graham2, cantor, Eric: The HHH vs HHT sequence odds (1/2) follow the Fibonacci series; HHH vs TTH odds (3/10) follow the Padovan series; HHT vs THT odds (5/8) follow the non-attacking-bishops series; HTT vs TTT odds (7/8) follow Narayana's cow series; HHT vs HTT odds (2/3) follow the quarter-square series... and so on. coldcoffee
Graham2 @13: You are correct that from a purely statistical standpoint any particular sequence of 500 coins is just as improbable as any other particular sequence of 500 coins. However, there are two important issues at play.

1. As Mapou points out, if we say that a sequence of 500 heads is "just as probable as any random sequence," we are tricking ourselves, because the referenced "random sequence" is inadequately defined, and in fact encompasses a hugely massive group, whereas the 500-heads group is singular. In other words, it is vastly more likely that we will run into some old "random sequence" than it is that we will run into 500 heads. As a result, if we are to avoid allowing the vagary of our identification of sequences to cloud our judgment, we have to be very careful, when arguing that 500 heads is just as probable as any other sequence, to specify precisely which other sequence we have in mind. That brings us to a related point (the one that I highlighted in my comment #3) that is important to ID: that of independent specification.

2. 500 heads is more "meaningful" than any old random sequence. Everyone knows it is more meaningful; indeed it jumps out at us, as you acknowledge. Why is that? Well, we recognize that it occupies a special set of 1, something unique in the sea of other possible sequences. Now, to be sure, 500 heads is not the only "meaningful" sequence, not the only one that communicates something, stands out based on our experience, or sets itself apart from the sea of random sequences. We could probably come up with a number of sequences that would be immediately recognizable as "special," and perhaps several more that would be recognizable as "special" with some work. But the number of meaningful (to use my term) sequences is minuscule, a drop in the bucket, compared to the number of random, meaningless sequences.

Now it could well be that something like 500 heads is caused by necessity rather than design, so we would need to consider the possibility of necessity as an explanation. The same would go for sequences like all tails, or HTHTHT repeated, and so on. A long sequence of adjacent prime numbers in binary, on the other hand, is not only immediately recognizable as meaningful; it is also recognizable as something that would not likely occur by necessity.

Anyway, it is too late and I'm perhaps not explaining this as well as I might. But I think there are two critical (though related) aspects that need to be kept in mind: (i) the question of what we are claiming is just as probable as the other*, and (ii) the independent recognition of the specification as something "meaningful," if you will allow me to use that word.

* Incidentally, when arguing that 500 heads is just as probable as any other sequence, we quickly find that if we take just a little more effort to identify what "other sequence" we have in mind, we either end up giving some other unique specification, or we have to write out an entire random sequence of 500. This is why we never find anyone actually arguing that "500 heads is just as improbable as [actual sequence]." Instead, the latter is left conveniently vague. To immediately see the other side of the coin, so to speak, try making the argument with a specific sequence in shorthand. Say, for example, "500 heads is just as improbable as 500 tails." Sure. We all agree. Or "500 heads is just as improbable as HTHTHT repeated over and over." Sure. I'm willing to agree. But notice that in each of these kinds of cases we end up referring to one of the very few other "meaningful sequences" that potentially could be found amidst the sea of meaninglessness. Eric Anderson
Look ... at the two-dimensional pattern
If I stare at it long enough, and cross my eyes, and play some Pink Floyd, will I see a face or something? cantor
Paul Giem @24: Several years ago I read some papers by Greg Chaitin. I seem to recall he said you can never know for sure if any apparently random pattern is actually random. There may be some simple algorithm that generates it. It is not possible to rule that out. I'm assuming that's *not* what you meant when you said one was designed (i.e. it was not "designed" by generating it from a simple algorithm). Is that correct? cantor
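cantor's Chaitin point can be illustrated with compression: a string that compresses well is demonstrably non-random, but failure to compress proves nothing, since a short generating program might exist that the compressor misses. A rough sketch using zlib as the stand-in compressor (the 210-bit "irregular" string is Paul Giem's first string from below):

```python
import zlib

def compressed_size(bits: str) -> int:
    """Bytes in the zlib-compressed string: a crude, one-sided randomness test.
    A small result means patterned; a large result is inconclusive."""
    return len(zlib.compress(bits.encode(), 9))

patterned = "10" * 105  # 210 bits of pure repetition
irregular = ("10010001010100111010110010000111011"
             "00101001100001010111001010110110110"
             "10101010011000001001010101010000000"
             "01101110111010001101111001100011110"
             "11011100111111010000001011110100111"
             "01001001011110001101000001000111101")

print(compressed_size(patterned), compressed_size(irregular))
# the repetitive string compresses far smaller; the other may still hide a rule
```

This is only a proxy for Kolmogorov complexity, and a weak one, but it makes the asymmetry concrete: compressibility can be demonstrated, incompressibility cannot.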
CentralScrutinizer @23: Read Post16. cantor
Cantor, Look not only at the number of ones and zeroes, but also at the two-dimensional pattern. Then get ready to defend your answer. Paul Giem
Graham2: All sequences have the same probability, and that is (1/2)^n. That is probability 101.
Only if the sequence is generated all at once. However, in a situation where 500 bits are generated sequentially, with no memory from one bit generation to the next, the sequences that contain a 50/50 distribution of ones and zeroes far outnumber the single sequence of all zeroes. CentralScrutinizer
Oops. Typo. I should have said B is designed. cantor
Paul Giem @12 Well, there's only a one-in-a-thousand chance of getting 85 heads (B) in 210 flips of a fair coin, and a one-in-20 chance of getting 104 heads (A) in 210 flips of a fair coin. So if I had to choose, since one is random and the other designed, my guess would be that A is designed. Of course, you didn't explicitly stipulate that you did only one 210-flip trial to get the random pattern, but I am assuming that's what you meant. In other words, you didn't "design" the random pattern by doing repeated 210-flip trials until you got something lopsided (like only 85 heads). cantor
cantor @19 wrote: There are 8 *permutations*. There are only 4 *combinations* Of those 4 combinations, 2 of them are 3 times more probable than the other 2. cantor
Graham2 @15 wrote: "As an example, write down all possible combinations for 3 coins, ie: 8" There are 8 *permutations*. There are only 4 *combinations* . Graham2 @15 wrote: "etc etc etc, for all 8 combinations" 8 *permutations* cantor
Graham2, Check out comment #12. Can you pick the designed sequence? Is it hard? How can you be reasonably sure you are correct? Paul Giem
Probabilities for unordered outcomes (#heads), Maxima output:

#heads   probability
500      3.0549363634996047E-151
490      7.5093570626414577E-131
480      8.1481217255391563E-116
470      4.4151760704460009E-103
460      6.8561010581851309E-92
450      7.0704144577228293E-82
440      7.9566767033071793E-73
430      1.3560904548240456E-64
420      4.4143083255676851E-57
410      3.260326580098254E-50
400      6.2372459645159932E-44
390      3.4310672035531094E-38
380      5.9030476959980622E-33
370      3.4021177134614534E-28
360      6.9513988502272299E-24
350      5.2788956350363829E-20
340      1.5499249799853283E-16
330      1.8186578531782093E-13
320      8.7680376134625957E-11
310      1.7774510269452038E-8
300      1.5442550112234925E-6
290      5.8397562360778391E-5
280      9.7308127160684876E-4
270      0.0072113402123203
260      0.023923296060922
250      0.035664645553349

cantor
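The Maxima table above can be reproduced with exact big-integer arithmetic in a few lines (a sketch; math.comb keeps the binomial coefficient exact before the final division):

```python
from math import comb

def p_exact_heads(k, n=500):
    """Exact probability of k heads in n fair flips: C(n, k) / 2**n."""
    return comb(n, k) / 2**n

for k in (500, 400, 300, 250):
    print(k, p_exact_heads(k))
```

The spread between 3e-151 at 500 heads and 0.0357 at 250 heads is the whole point of the table: unordered head-counts are wildly unequal even though every ordered sequence has probability (1/2)^500.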
A *sequence* is an ordered set. So Graham2 is correct: all *sequences* have the same probability. However, *unordered* outcomes (like "50% heads" or "100% heads") most certainly do *not* have the same probability: 50% heads probability is 0.035664646 100% heads probability is 3.0549E-151 cantor
As an example, write down all possible combinations for 3 coins, ie: 8 possible sequences. THH is 1 of the 8, so it has a probability of 1/8. HHH is 1 of 8, so it has a probability of 1/8, etc etc etc, for all 8 combinations. Graham2
For all those who claim sequences have the same probability: sequences do not have the same probability. Here's a rehash of my comments elsewhere: e.g., in a stream of coin flips, THH has odds of 7 to 1 against HHH. In fact, sequence odds have been worked out in Penney's Game. J.A. Csirik has a formula for sequences longer than 3 bits, which can be seen in the wiki reference, or you could implement John Conway's algorithm to calculate the odds of various sequences against each other. selvaRajan
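For the record, the odds selvaRajan cites are about Penney's game: which pattern appears first in an ongoing stream of flips, a different question from the probability of one fixed n-flip sequence. Conway's leading-number algorithm, mentioned in the comment, computes those odds directly (a sketch):

```python
from math import gcd

def leading_number(x: str, y: str) -> int:
    """Conway's leading number: add 2**(k-1) for each k where the last k
    characters of x equal the first k characters of y."""
    return sum(2**(k - 1) for k in range(1, len(x) + 1) if x[-k:] == y[:k])

def penney_odds(a: str, b: str):
    """Reduced odds (for : against) that b appears before a in fair flips."""
    p = leading_number(a, a) - leading_number(a, b)
    q = leading_number(b, b) - leading_number(b, a)
    g = gcd(p, q)
    return p // g, q // g

print(penney_odds("HHH", "THH"))  # → (7, 1): THH beats HHH seven to one
print(penney_odds("HTT", "HHT"))  # → (2, 1): HHT beats HTT two to one
```

Note that both results are consistent with every individual 3-flip sequence having probability 1/8; the waiting-time competition is nontransitive, which is what makes Penney's game counterintuitive.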
All sequences have the same probability, and that is (1/2)^n. That is probability 101. The question of whether we should be suspicious of a particular sequence is a different question. We are (rightly) suspicious of 500 heads, but not because it is more or less likely than any other sequence, but because (as I understand it), it matches a pre-defined sequence, one that we regard as significant. Graham2
Let's play this game. Graham2, or any Darwinist, or failing that, anyone at all, can play. Here are two different patterns of coins, stolen from my comment https://uncommondescent.com/intelligent-design/mark-frank-ok-im-with-you-fellas/#comment-484552 , with heads represented as 1 and tails as 0. If I tell you (honestly, but how do you know besides that you trust me) that one was done by flipping coins, and one was done by an intelligent design, can you tell which is which? Do you believe me? And do you know how the intelligently designed pattern was made? Do you have any way to tell? (this is less than 500 bits/flips) A 10010001010100111010110010000111011 00101001100001010111001010110110110 10101010011000001001010101010000000 01101110111010001101111001100011110 11011100111111010000001011110100111 01001001011110001101000001000111101 B 01110010001000100010001001110011111 10001010001001010011001010001010000 10000010001010001010101010000010000 10000011111011111010101010000011110 10000010001010001010011010001010000 01110010001010001010001001110011111 Paul Giem
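A hint for readers playing along with Paul Giem's game: fold each 210-bit string into six rows of 35 and blank the zeros. A sketch (this is essentially the rendering Chance Ratcliff posts in the thread):

```python
def fold(bits: str, width: int = 35) -> str:
    """Reshape a bit string into rows of `width`, rendering 0 as a space."""
    rows = (bits[i:i + width] for i in range(0, len(bits), width))
    return "\n".join(row.replace("0", " ") for row in rows)

B = ("01110010001000100010001001110011111"
     "10001010001001010011001010001010000"
     "10000010001010001010101010000010000"
     "10000011111011111010101010000011110"
     "10000010001010001010011010001010000"
     "01110010001010001010001001110011111")

print(fold(B))  # block letters emerge from one of the two strings
```

Folding the other string the same way produces no picture, which is the point of the exercise: the specification is visible only under the right independent arrangement.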
I meant @10, it's less likely than coin tosses coming up all heads. Mapou
Graham2:
But all sequences have the same probability, so what's the difference?
Aw, come on. All sequences of coin tosses do not have the same probability. That's the fallacy that you're having a hard time understanding. The reason is that coins only have two faces, and therefore the probability of either heads or tails is always 50%. This is true no matter how often you flip the coin. Having an all-heads outcome after many flips is extremely unlikely. Any deviation over the long run from the 50% expectation is less likely, and the more the result deviates from 50%, the less likely it is. As an aside, I was thinking about this within the context of our finding that some (supposedly) non-functional DNA sequences are repeated many times in the genome. How likely is that if DNA sequences are strictly the result of random mutations? That's even less likely than coin tosses. Mapou
Graham2, all sequences have the same probability; however, a set of 100% heads has a very low probability :) Box
Graham2, you already gave the game away in your comment at 5. You recognized one was random and the other was not, and you let your Darwinist "never give an inch" resolve drop just a split second, and that allowed you to state the obvious. No good trying to take it back now. You can't unring that bell. Barry Arrington
But all sequences have the same probability, so what's the difference? I agree the 500-head one is suspicious, but I'm asking you to explain your position. Graham2
Put this one under amusing things Darwinists say: Graham2:
EA #2: You clearly agree with Sal that 500 heads is suspicious, yet a random pattern is not, so what's the difference?
Uh, one is random and the other isn’t. Barry Arrington
EA #2: You clearly agree with Sal that 500 heads is suspicious, yet a random pattern is not, so what's the difference? Graham2
Although I like the 'made in God's image' inference oozing out of this comment:
But the “blueprint” in question is already somewhat hard-wired into the human brain, that’s why in the exercise for the ID class, we never failed to detect design.
As does Michael Behe like the inference:
Michael Behe - Life Reeks Of Design - video https://www.youtube.com/watch?v=Hdh-YcNYThY
I would like to expand a bit on this following comment instead:
except they look like they were crafted by a Mind far greater than any human mind.
But what gives us the impression that life was 'crafted by a Mind far greater than any human mind'? Well, for starters, even the simplest life ever found on earth is far, far more complex than any machine, or integrated circuit, devised by man:
To Model the Simplest Microbe in the World, You Need 128 Computers - July 2012
Excerpt: Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That's a fraction of the size of even another bacterium like E. coli, which has 4,288 genes... The bioengineers, led by Stanford's Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What's fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell's lifecycle processes... The depth and breadth of cellular complexity has turned out to be nearly unbelievable, and difficult to manage, even given Moore's Law. The M. genitalium model required 28 subsystems to be individually modeled and integrated, and many critics of the work have been complaining on Twitter that's only a fraction of what will eventually be required to consider the simulation realistic...
http://www.theatlantic.com/technology/archive/2012/07/to-model-the-simplest-microbe-in-the-world-you-need-128-computers/260198/
But perhaps the best way to get this life was 'crafted by a Mind far greater than any human mind' inference across more effectively is to highlight the overlapping coding on the DNA. It recently made headlines in major new outlets that there is dual coding in DNA:
Time mag: (Another) Second Code Uncovered Inside the DNA -- Scientists have discovered a second code hidden within the DNA, written on top of the other. - December 2013 http://science.time.com/2013/12/13/second-code-uncovered-inside-the-dna/
Which is astonishing enough, 'since our best computer programmers can't even conceive of overlapping codes':
'It's becoming extremely problematic to explain how the genome could arise and how these multiple levels of overlapping information could arise, since our best computer programmers can't even conceive of overlapping codes. The genome dwarfs all of the computer information technology that man has developed. So I think that it is very problematic to imagine how you can achieve that through random changes in the code.,,, and there is no Junk DNA in these codes. More and more the genome looks likes a super-super set of programs.,, More and more it looks like top down design and not just bottom up chance discovery of making complex systems.' - Dr. John Sanford http://www.youtube.com/watch?feature=player_detailpage&v=YemLbrCdM_s#t=31s
But the News release for dual coding did not tell the whole story. They have been discovering overlapping coding in DNA for years. In fact it is shown that DNA 'can carry abundant parallel codes'.
The genetic code is nearly optimal for allowing additional information within protein-coding sequences - Shalev Itzkovitz and Uri Alon - 2006 Excerpt: Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes.... the present findings support the view that protein-coding regions can carry abundant parallel codes. http://genome.cshlp.org/content/17/4/405.full
Moreover, in the following video, Edward N. Trifonov humorously reflects how they have been 'RE'-discovering 'second' codes in the DNA for years, all the while forgetting to count above the number two for the previous code that was discovered:
Second, third, fourth… genetic codes - One spectacular case of code crowding - Edward N. Trifonov - video https://vimeo.com/81930637
In the preceding video, Trifonov also talks about 13 different codes that can be encoded in parallel along the DNA sequence. As well, he elucidates 4 different codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states,
"Reading only one message, one gets three more, practically GRATIS!".
And please note that this was just an introductory lecture, in which Trifonov covered only the very basics and left many of the other codes out, codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, 13 different codes are listed on a PowerPoint slide, although the writing was too small for me to read. The concluding PowerPoint of the lecture (at the 1-hour mark) states:
"Not only are there many different codes in the sequences, but they overlap, so that the same letters in a sequence may take part simultaneously in several different messages." Edward N. Trifonov - 2010
As well, according to Trifonov, other codes, on top of the 13 he listed, are yet to be discovered. In a paper in a recent book that Darwinists tried to censor from ever getting published, Robert Marks, John Sanford, and company mathematically dotted the i's and crossed the t's on what is intuitively obvious to the rest of us about finding multiple overlapping codes in DNA:
Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation - George Montañez, Robert J. Marks II, Jorge Fernandez and John C. Sanford - published online May 2013
Excerpt: In the last decade, we have discovered still another aspect of the multi-dimensional genome. We now know that DNA sequences are typically "poly-functional" [38]. Trifonov previously had described at least 12 genetic codes that any given nucleotide can contribute to [39,40], and showed that a given base-pair can contribute to multiple overlapping codes simultaneously. The first evidence of overlapping protein-coding sequences in viruses caused quite a stir, but since then it has become recognized as typical. According to Kapranov et al., "it is not unusual that a single base-pair can be part of an intricate network of multiple isoforms of overlapping sense and antisense transcripts, the majority of which are unannotated" [41]. The ENCODE project [42] has confirmed that this phenomenon is ubiquitous in higher genomes, wherein a given DNA sequence routinely encodes multiple overlapping messages, meaning that a single nucleotide can contribute to two or more genetic codes. Most recently, Itzkovitz et al. analyzed protein coding regions of 700 species, and showed that virtually all forms of life have extensive overlapping information in their genomes [43].
38. Sanford J (2008) Genetic Entropy and the Mystery of the Genome. FMS Publications, NY. Pages 131-142.
39. Trifonov EN (1989) Multiple codes of nucleotide sequences. Bull of Mathematical Biology 51:417-432.
40. Trifonov EN (1997) Genetic sequences as products of compression by inclusive superposition of many codes. Mol Biol 31:647-654.
41. Kapranov P, et al (2005) Examples of complex architecture of the human transcriptome revealed by RACE and high density tiling arrays. Genome Res 15:987-997.
42. Birney E, et al (2007) Encode Project Consortium: Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 447:799-816.
43. Itzkovitz S, Hodis E, Segal E (2010) Overlapping codes within protein-coding sequences. Genome Res. 20:1582-1589.
Conclusions: Our analysis confirms mathematically what would seem intuitively obvious - multiple overlapping codes within the genome must radically change our expectations regarding the rate of beneficial mutations. As the number of overlapping codes increases, the rate of potential beneficial mutation decreases exponentially, quickly approaching zero. Therefore the new evidence for ubiquitous overlapping codes in higher genomes strongly indicates that beneficial mutations should be extremely rare. This evidence combined with increasing evidence that biological systems are highly optimized, and evidence that only relatively high-impact beneficial mutations can be effectively amplified by natural selection, lead us to conclude that mutations which are both selectable and unambiguously beneficial must be vanishingly rare. This conclusion raises serious questions. How might such vanishingly rare beneficial mutations ever be sufficient for genome building? How might genetic degeneration ever be averted, given the continuous accumulation of low impact deleterious mutations?
http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006
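The "decreases exponentially" claim in the quoted conclusions can be illustrated with a toy calculation. This is a hypothetical sketch, not the paper's actual model: it assumes each of k overlapping codes independently constrains a mutation, with probability p of the mutation being beneficial (or at least neutral) with respect to any single code; both the independence assumption and the value p = 0.01 are illustrative only.

```python
# Toy illustration (hypothetical, not the paper's model): assume a random
# mutation must be acceptable with respect to each of k overlapping codes,
# independently with probability p per code. The independence assumption
# and the value p = 0.01 are illustrative assumptions.

def joint_beneficial_probability(p: float, k: int) -> float:
    """Probability that one mutation is acceptable under k independent codes."""
    return p ** k

if __name__ == "__main__":
    for k in range(1, 6):
        # The joint probability shrinks exponentially with the number of codes.
        print(f"k={k}: {joint_beneficial_probability(0.01, k):.2e}")
```

Under these toy assumptions, each additional overlapping code multiplies the probability by p, which is the exponential falloff the paper describes.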
Of course, multiple overlapping coding that man cannot even fathom building is not all there is to drawing the inference that life was 'crafted by a Mind far greater than any human mind', but it is a very good start. To briefly touch on what else is in the cell: quantum computation, in which man has barely taken his first baby steps, is now heavily implicated in DNA repair mechanisms (3D fractal). Biophotonic communication (think laser light) between all the molecules of the cell, DNA and proteins, is now also heavily implicated within the cell, as is 'reversible computation' in cellular processes. All in all, given the unfathomable complexity being dealt with in the 'simple' cell, I think the following quote is quite fitting for expressing the awe of what is being found in life:
Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html
Verse and Music:
John 1:1-5 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of men. The light shines in the darkness, but the darkness has not understood it.
Lindsey Stirling & Kuha'o Case - Oh Come, Emmanuel - video
http://www.youtube.com/watch?v=ozVmO5LHJ2k
bornagain77
Sal:
But the “blueprint” in question is already somewhat hard-wired into the human brain, that’s why in the exercise for the ID class, we never failed to detect design. Humans are like-minded and they make like-minded constructs that other humans recognize as designed.
It may be true that there is some "like-mindedness" going on, but it doesn't necessarily come from the fact that the designer and the observer are both human. The like-mindedness can be described more generally, in terms of, say, functional complexity or meaningful communication.

The reason SETI believes it will be able to detect a signal from an intelligent alien is not because the alien is human, or even hominid, or has much of any similarity with humans. It is the mere recognition of a meaningful communication that allows the inference. The same goes for function. Were we ever to capture a confirmed UFO, it would immediately be recognized as designed, not so much because the designer is "like-minded" in any sense of being human or similar to human, but because the existence of a functional complex object is adequate to infer design (a la Behe's original, if perhaps simple, description of [functional] irreducible complexity).

You are quite right, however, that many things in biology appear to be designed (as even ardent materialists admit). So the onus should be firmly on those disputing that obvious and reasonable inference to provide a decent alternative explanation. Eric Anderson
Graham (quoted by Sal):
They only become suspicious if we have specified them in advance.
This is clearly, obviously wrong. There are many, many cases in which the specification is discovered after the fact: newly discovered civilizations, the Rosetta Stone and the decipherment of Egyptian hieroglyphs, and on and on. Indeed, every major "surprise" discovery in archaeology is a surprise precisely because the specification was not known, was not identified, was not expected beforehand. The same is true in living systems as new things are discovered. SETI relies on the same concept and certainly hasn't specified in advance the precise message, or even the type of message, it must receive. So, no, it is clearly false that the specification has to be identified, known, and agreed to in advance. Eric Anderson
Sal:
They would each have two boxes, and each box contained dice and coins. They were instructed to randomly shake one box and then put designs in the other box. While they did their work, I and another volunteer would leave the room or turn our backs. After the students were done building their designs, I and the volunteer would inspect each box, and tell the students which boxes we felt contained a design, and the students would tell us if we passed or failed to recognize their designs. We never failed!
Excellent. I have long thought of doing an experiment of essentially this type with volunteers to get the exact same point across. I was thinking of Lincoln Logs or similar blocks, but I like the dice/coins approach. I have also contemplated having the students themselves go around the room and make the inference, the primary caveat being that you would have to control for false negatives (i.e., blocks purposely arranged to appear random). With your approach of two boxes and clear instructions, however, that problem should be resolved. Great pedagogical exercise. I'm jealous! Eric Anderson
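One hypothetical way to make the classroom intuition quantitative (this is not the method used in the exercise, just a sketch): deliberately patterned arrangements tend to be highly compressible, while randomly shaken ones are not, so comparing compressed lengths gives a crude detector. The "H"/"T" string encoding of the coin boxes is an assumption for illustration.

```python
# Hypothetical compressibility heuristic (not the classroom method): a
# patterned coin arrangement compresses to fewer bytes than a random shake,
# so the shorter compressed length hints at design.
import random
import zlib

def compressed_size(sequence: str) -> int:
    """Length in bytes of the zlib-compressed sequence."""
    return len(zlib.compress(sequence.encode()))

designed = "HT" * 50  # alternating heads/tails: a simple patterned "design"

random.seed(0)  # fixed seed so the sketch is reproducible
shaken = "".join(random.choice("HT") for _ in range(100))

# Compare the two 100-coin configurations; the patterned one should be smaller.
print(compressed_size(designed), compressed_size(shaken))
```

This only flags pattern, not design per se, which mirrors the false-negative caveat above: a designer can deliberately produce an incompressible-looking arrangement that this heuristic would miss.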
