“Take the coins and dice and arrange them in a way that is evidently designed.” That was my instruction to groups of college science students who voluntarily attended my extra-curricular ID classes sponsored by Campus Crusade for Christ at James Madison University (even Jason Rosenhouse dropped in a few times). Many of the students were biology and other science majors hoping to learn truths that are forbidden topics in their regular classes…

They each had two boxes, and each box contained dice and coins. They were instructed to randomly shake one box and to put designs in the other. While they worked, another volunteer and I would leave the room or turn our backs. After the students were done building their designs, the volunteer and I would inspect each box and tell the students which boxes we felt contained a design, and the students would tell us whether we passed or failed to recognize their designs. We never failed!

Granted, this was not a rigorous experiment, but the exercise was to get the point across that even with token objects like coins and dice, one can communicate design.

So why were the human designs recognized in the classroom exercise? Is it because one configuration of coins and dice is inherently more improbable than any other? Let us assume for the sake of argument that no configuration is more improbable than any other; why then do some configurations seem more special than others with respect to design? The answer is that some configurations suggest a like-minded process was involved in the assembly of the configuration rather than a chance process.

A Darwinist once remarked:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins,

But what is the real probability in question? It clearly isn’t about the probability of each possible 500-coin sequence, since each sequence is just as improbable as any other. Rather the probability that is truly in question is the probability our minds will recognize a sequence that conforms to our ideas of a non-random outcome. In other words, outcomes that look like “the products of a like-minded process, not a random process”. This may be a shocking statement so let me briefly review two scenarios.

A. 500 fair coins are discovered heads up on a table. We recognize this to be a non-random event based on the law of large numbers, as described in The fundamental law of Intelligent Design.

B. 500 fair coins are discovered on a table. The coins were not there the day before. Each coin on the table is assigned a number 1-500. The pattern of heads and tails looks at first to be nothing special, with 50% of the coins being heads. But then we find that the pattern of coins matches a blueprint that had been in a vault as far back as a year ago. Clearly this pattern also is non-random, but why?

The naïve and incorrect answer is “the probability of that pattern is 1 out of 2^500, therefore the event is non-random”. But that is the wrong answer, since every other possible coin pattern also has a 1 in 2^500 chance of occurring.
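The distinction can be checked directly: every *specific* 500-flip sequence has probability (1/2)^500, but the *class* of sequences with exactly 250 heads is enormously more probable than the one-member class “all heads”. A minimal sketch in Python (standard library only; variable names are mine):

```python
from math import comb

n = 500

# Probability of any one specific ordered sequence of n fair flips.
# It is the same number for "all heads" and for any random-looking sequence.
p_specific = 0.5 ** n

# Probability of the whole *class* of outcomes with exactly 250 heads.
p_half_heads = comb(n, 250) / 2 ** n

print(p_specific)    # about 3.05e-151
print(p_half_heads)  # about 0.0357
```

The point of the two numbers: no individual sequence is more improbable than another, but the “roughly half heads” class is about 10^149 times more likely to be hit by a random process than the all-heads class.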

The correct answer as to why the coin arrangement is non-random is “it conforms to blueprints”, or using ID terminology, “it conforms to independent specifications”. The independent specification in scenario B is the printed blueprint that had been stored away in the vault; the independent specification in scenario A is the all-coins-heads “blueprint” that is implicitly defined in our minds and math books.

The real probability at issue is the probability the independent specification will be realized by a random process.

We could end the story of scenario B by saying that a relative or friend put the design together as a surprise present to would-be observers who had access to the blueprint. But such a detail would only confirm what we already knew: that the coin configuration on the table was not the product of a random process, but rather a human-like, like-minded process.

I had an exchange with Graham2, where I said:

But what is it about that particular pattern [all fair coins heads] versus any other. Is it because the pattern is not consistent with the expectation of a random pattern? If so, then the pattern is special by its very nature.

to which Graham2 responded:

No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely.

They only become suspicious if we have specified them in advance.

Whether Graham2 is right or wrong is a moot point. Statistical tests can be used to reject chance as the explanation for why certain artifacts look like the products of a like-minded process. The test is valid provided the blueprint wasn’t drawn up after the fact (a postdictive blueprint).

A Darwinist will object and say, “that’s all well and fine, but we don’t have such blueprints for life. Give me a sheet of paper that has the blueprint of life and proof that the blueprint was written before life began.” But the “blueprint” in question is already somewhat hard-wired into the human brain; that’s why in the exercise for the ID class, we *never* failed to detect design. Humans are like-minded and they make like-minded constructs that other humans recognize as designed.

The problem for Darwinism is that biological designs resemble human designs. Biological organisms look like like-minded designs except they look like they were crafted by a Mind far greater than any human mind. That’s why Dawkins said:

it was hard to be an atheist before Darwin: the illusion of living design is so overwhelming.

Richard Dawkins

Dawkins erred by saying “illusion of living design”, we know he should have said “reality of living design”. 🙂

How then can we reconstruct the blueprints embedded in the human mind in such a sufficiently rigorous way that we can then use the “blueprints” or independent specifications to perform statistical tests? How can we do it in a way that is unassailable to complaints of after-the-fact (postdictive) specifications?

That is the subject of Part II of this series. But briefly, I hinted toward at least a couple methods in previous discussions:

The fundamental law of Intelligent Design

Coordinated Complexity, the key to refuting single target and postdiction objections.

And there will be more to come, God willing.

**NOTES**

1. I mentioned “independent specification”. This obviously corresponds to Bill Dembski’s notion of independent specification from *Design Inference* and *No Free Lunch*. I use the word blueprint to help illustrate the concept.

2. The physical coin patterns that conform to independent specifications can then be said to evidence specified improbability. I highly recommend the term “specified improbability” (SI) be used instead of Complex Specified Information (CSI). The term “Specified Improbability” is now being offered by Bill Dembski himself. I feel it more accurately describes what is being observed when identifying design, and the phrase is less confusing. See: Specified Improbability and Bill’s letter to me from way back.

3. I carefully avoided using CSI, information, or entropy to describe the design inference in the bulk of this essay. Those terms could have been used, but I avoided them to show that the problem of identifying design can be made with simpler more accessible arguments, and thus hopefully make the points more unassailable. This essay actually describes detection of CSI, but CSI has become such a loaded term in ID debates I refrained from using it. The phrase “Specified Improbability” conveys the idea better. The objects in the students’ boxes that were recognized as designed were improbable configurations that conformed to independent specifications, therefore they evidenced specified improbability, therefore they were designed.

Sal:

Excellent.

I have long thought of doing an experiment of essentially this type with volunteers to get the exact same point across. I was thinking of some kind of lincoln logs or similar blocks, but I like the dice/coins approach.

I have also contemplated having the students themselves go around the room and make the inference (the primary caveat being that you would have to control for false negatives (i.e., the blocks purposely arranged to appear random). With your approach of having two sets each with the clear instructions, however, this should resolve that problem.

Great pedagogical exercise. I’m jealous!

Graham (quoted by Sal):

This is clearly, obviously wrong. There are many, many cases in which the specification is discovered after the fact. New civilizations discovered, the Rosetta Stone, the effort to unlock Egyptian, and on and on. Indeed, every major “surprise” discovery in archaeology is a surprise precisely because the specification was not known, was not identified, was not expected beforehand. Same is true in living systems as new things are discovered. SETI relies on the same concept and certainly hasn’t specified in advance the precise message, not even the type of message, it must receive. So, no, it is clearly false that the specification has to be identified, known, agreed to in advance.

Sal:

It may be true that there is some “like-mindedness” going on, but it doesn’t necessarily come from the fact that the designer and the observer are both human. The like-mindedness can be described more generally: specifically, functional complexity or meaningful communication, to name a couple.

The reason SETI believes it will be able to detect a signal from an intelligent alien is not because the alien is human, or even hominid, or has much of any similarity with humans. It is the mere recognition of a meaningful communication that allows the inference.

Same with function. Were we ever to capture a confirmed UFO, it would be immediately recognized as designed. Not so much because the designer is “like-minded” in any sense of being human or even similar to human, but because the existence of a functional complex object is adequate to infer design (a la Behe’s original, if perhaps simple, description of [functional] irreducible complexity).

You are quite right, however, that many things in biology appear to be designed (as even ardent materialists admit). So the onus should be firmly on those disputing that obvious and reasonable inference to provide a decent alternative explanation.

Although I like the ‘made in God’s image’ inference oozing out of this comment:

As does Michael Behe like the inference:

I would like to expand a bit on this following comment instead:

But what gives us the impression that life was ‘crafted by a Mind far greater than any human mind’? Well, for starters, even the simplest life ever found on earth is far, far more complex than any machine, or integrated circuit, devised by man:

But perhaps the best way to get this life was ‘crafted by a Mind far greater than any human mind’ inference across more effectively is to highlight the overlapping coding in DNA. It recently made headlines in major news outlets that there is dual coding in DNA:

Which is astonishing enough ‘since our best computer programmers can’t even conceive of overlapping codes.’,,,

But the News release for dual coding did not tell the whole story. They have been discovering overlapping coding in DNA for years. In fact it is shown that DNA ‘can carry abundant parallel codes’.

Moreover, in the following video, Edward N. Trifonov humorously reflects on how they have been ‘RE’-discovering ‘second’ codes in the DNA for years, all the while forgetting to count past two for the codes previously discovered:

In the preceding video, Trifonov also talks about 13 different codes that can be encoded in parallel along the DNA sequence. As well, he elucidates 4 different codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states,

And please note that this was just an introductory lecture, in which Trifonov covered only the very basics and left many of the other codes out of the lecture: codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 different codes listed on a PowerPoint slide, although the writing was too small for me to read.

Concluding powerpoint of the lecture (at the 1 hour mark) states:

As well, according to Trifonov, other codes, on top of the 13 he listed, are yet to be discovered.

In a paper, that was in a recent book that Darwinists tried to censor from ever getting published, Robert Marks, John Sanford, and company, mathematically dotted the i’s and crossed the t’s in what is intuitively obvious to the rest of us about finding multiple overlapping codes in DNA:

Of course, multiple overlapping coding that man cannot even fathom building is not all there is to drawing the inference that life was ‘crafted by a Mind far greater than any human mind’, but it is a very good start. But to briefly touch on what else is in the cell: Quantum Computation, in which man has barely taken his first baby steps, is now heavily implicated to be involved in DNA repair mechanisms (3D fractal). As well, biophotonic communication (think laser light) between all the molecules of the cell, DNA and proteins, is now also heavily implicated to be within the cell. As well, ‘reversible computation’ is heavily implicated to be involved in cellular processes. All in all, given the unfathomable complexity being dealt with in the ‘simple’ cell, I think the following quote is quite fitting for expressing the awe of what is being found in life:

Verse and Music:

EA #2: You clearly agree with Sal that 500 heads is suspicious, yet a random pattern is not, so what’s the difference?

Put this one under amusing things Darwinists say:

Graham2:

Uh, one is random and the other isn’t.

But all sequences have the same probability, so what’s the difference?

I agree the 500-head one is suspicious, but I’m asking you to explain your position.

Graham2, you already gave the game away in your comment at 5. You recognized one was random and the other was not, and you let your Darwinist “never give an inch” resolve drop for just a split second, and that allowed you to state the obvious. No good trying to take it back now. You can’t unring that bell.

Graham2,

all sequences have the same probability, however a set of 100% head has a very low probability 🙂

Graham2:

Aw, come on. All sequences of coin tosses do not have the same probability. That’s the fallacy that you’re having a hard time understanding. The reason is that coins only have two faces and therefore the probability of either head or tails is always 50%. This is true no matter how often you flip the coin. Having an all-heads outcome after many flips is extremely unlikely. Any deviation over the long run from the 50% expectation is less likely and the more the result deviates from 50%, the less likely it is.

As an aside, I was thinking about this within the context of our finding that some (supposedly) non-functional DNA sequences are repeated many times in the genome. How likely is that if DNA sequences are strictly the result of random mutations? That’s even less likely than coin tosses.

I meant @10, it’s less likely than coin tosses coming up all heads.

Let’s play this game. Graham2, or any Darwinist, or failing that, anyone at all, can play. Here are two different patterns of coins, stolen from my comment

http://www.uncommondescent.com.....ent-484552 ,

with heads represented as 1 and tails as 0. If I tell you (honestly, but how do you know, besides that you trust me) that one was done by flipping coins, and one was done by intelligent design, can you tell which is which? Do you believe me? And do you know how the intelligently designed pattern was made? Do you have any way to tell? (This is less than 500 bits/flips.)

A

10010001010100111010110010000111011

00101001100001010111001010110110110

10101010011000001001010101010000000

01101110111010001101111001100011110

11011100111111010000001011110100111

01001001011110001101000001000111101

B

01110010001000100010001001110011111

10001010001001010011001010001010000

10000010001010001010101010000010000

10000011111011111010101010000011110

10000010001010001010011010001010000

01110010001010001010001001110011111

All sequences have the same probability, and that is (1/2)^n. That is probability 101. The question of whether we should be suspicious of a *particular* sequence is a different question. We are (rightly) suspicious of 500 heads, but not because it is more or less likely than any other sequence, but because (as I understand it) it matches a pre-defined sequence, one that we regard as significant.

For all those who claim sequences have the same probability: sequences do *not* have the same probability. Here’s a rehash of my comments elsewhere:

E.g., in a coin sequence, THH has odds of 7 to 1 against HHH. In fact, sequence odds have been worked out in Penney’s Game.

J.A. Csirik has a formula for sequences longer than 3 bits, which can be seen in the wiki reference, or you could implement John Conway’s algorithm to calculate the odds of various sequences against each other.

As an example, write down all possible combinations for 3 coins, ie: 8 possible sequences. THH is 1 of the 8, so it has a probability of 1/8. HHH is 1 of 8, so it has a probability of 1/8, etc etc etc, for all 8 combinations.
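The Penney’s Game claim above (THH has 7-to-1 odds against HHH when both players watch the same stream of flips for the first occurrence of their pattern) can be checked by simulation. A quick Monte Carlo sketch in Python (the function name, seed, and trial count are mine):

```python
import random

def penney_winner(p1, p2, rng):
    """Flip a fair coin until pattern p1 or p2 appears; return the winner."""
    window = ""
    while True:
        window = (window + rng.choice("HT"))[-3:]  # keep the last 3 flips
        if window == p1:
            return p1
        if window == p2:
            return p2

rng = random.Random(42)
trials = 20000
wins = sum(penney_winner("THH", "HHH", rng) == "THH" for _ in range(trials))
print(wins / trials)  # should land near 7/8 = 0.875
```

The exact 7/8 value has a short argument behind it: HHH can only win if the very first three flips are HHH (probability 1/8); any earlier tail sets up THH to complete first.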

A *sequence* is an ordered set. So Graham2 is correct: all *sequences* have the same probability.

However, *unordered* outcomes (like “50% heads” or “100% heads”) most certainly do *not* have the same probability:

50% heads probability is 0.035664646

100% heads probability is 3.0549E-151

probabilities for unordered outcomes (#heads out of 500):

#heads   probability
500      3.0549363634996047E-151
490      7.5093570626414577E-131
480      8.1481217255391563E-116
470      4.4151760704460009E-103
460      6.8561010581851309E-92
450      7.0704144577228293E-82
440      7.9566767033071793E-73
430      1.3560904548240456E-64
420      4.4143083255676851E-57
410      3.260326580098254E-50
400      6.2372459645159932E-44
390      3.4310672035531094E-38
380      5.9030476959980622E-33
370      3.4021177134614534E-28
360      6.9513988502272299E-24
350      5.2788956350363829E-20
340      1.5499249799853283E-16
330      1.8186578531782093E-13
320      8.7680376134625957E-11
310      1.7774510269452038E-8
300      1.5442550112234925E-6
290      5.8397562360778391E-5
280      9.7308127160684876E-4
270      0.0072113402123203
260      0.023923296060922
250      0.035664645553349
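These binomial class probabilities (the session output above) can be reproduced in a few lines of Python (standard library only; the variable names are mine):

```python
from math import comb

n = 500
# P(exactly k heads in 500 fair flips) for k = 500, 490, ..., 250
table = {k: comb(n, k) / 2 ** n for k in range(n, 240, -10)}
for k, p in table.items():
    print(k, p)
```

Each value is C(500, k) / 2^500: the number of ordered sequences containing exactly k heads, divided by the total number of ordered sequences.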

Graham2,

Check out comment #12. Can you pick the designed sequence? Is it hard? How can you be reasonably sure you are correct?

Graham2 @15 wrote: “As an example, write down all possible combinations for 3 coins, ie: 8”

There are 8 *permutations*. There are only 4 *combinations*.

Graham2 @15 wrote: “etc etc etc, for all 8 combinations”

8 *permutations*.

cantor @19 wrote:

There are 8 *permutations*. There are only 4 *combinations*.

Of those 4 combinations, 2 of them are 3 times more probable than the other 2.

Paul Giem @12

Well, there’s only a one-in-a-thousand chance of getting 85 heads (B) in 210 flips of a fair coin, and a one-in-20 chance of getting 104 heads (A) in 210 flips of a fair coin. So if I had to choose, since one is random and the other designed, my guess would be that A is designed.

Of course, you didn’t explicitly stipulate that you did only one 210-flip trial to get the random pattern, but I am assuming that’s what you meant. In other words, you didn’t “design” the random pattern by doing repeated 210-flip trials until you got something lopsided (like only 85 heads).

Oops. Typo. I should have said B is designed.
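The two likelihoods quoted above are easy to verify with the head counts from the histograms further down (85 heads in one 210-bit string, 104 in the other). A short Python check (standard library only):

```python
from math import comb

n = 210  # flips per string

# P(exactly 85 heads) -- a lopsided count, well below the mean of 105
p85 = comb(n, 85) / 2 ** n

# P(exactly 104 heads) -- essentially at the mean
p104 = comb(n, 104) / 2 ** n

print(p85)   # on the order of 1 in 1000
print(p104)  # on the order of 1 in 20
```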

Only if the sequence is generated *all at once*. However, in a situation where 500 bits are generated sequentially, where there is no memory from one bit generation to the next, the number of sequences that contain a 50/50 distribution of ones and zeroes *far outnumbers* the sequence of all zeroes.

Cantor,

Look not only at the number of ones and zeroes, but also at the two-dimensional pattern. Then get ready to defend your answer.

CentralScrutinizer @23: Read post 16.

Paul Giem @24:

Several years ago I read some papers by Greg Chaitin.

I seem to recall he said you can never know for sure if any apparently random pattern is actually random. There may be some simple algorithm that generates it. It is not possible to rule that out.

I’m assuming that’s *not* what you meant when you said one was designed (i.e. it was not “designed” by generating it from a simple algorithm). Is that correct?
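One practical stand-in for Chaitin’s (uncomputable) notion of algorithmic randomness is ordinary data compression: a string generated by a simple rule compresses far better than a typical coin-flip string. A rough sketch in Python, using zlib purely as a crude proxy (the function name and seed are mine, and compression only gives an upper bound on algorithmic complexity):

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    # Bytes of zlib-compressed output: a crude, computable stand-in for
    # algorithmic complexity (which is uncomputable in general).
    return len(zlib.compress(bits.encode()))

rng = random.Random(0)
random_bits = "".join(rng.choice("01") for _ in range(500))
all_heads = "1" * 500  # generated by a trivially simple rule

# The rule-generated string compresses far better than the coin-flip string.
print(compressed_size(all_heads), compressed_size(random_bits))
```

As Chaitin’s point implies, the converse doesn’t hold: a string that fails to compress under zlib might still have a short generating rule we haven’t found.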

If I stare at it long enough, and cross my eyes, and play some Pink Floyd, will I see a face or something?

=> CentralScrutinizer, Graham2, cantor,

The HHH vs HHT sequence odds of 1/2 follow the Fibonacci series,

HHH vs TTH odds 3/10 follows Padovan series,

HHT vs THT odds 5/8 follows non-attacking bishops series,

HTT vs TTT odds 7/8 follows Narayana’s cow series,

HHT vs HTT odds 2/3 follows quarter square series.. and so on

Graham2 @13:

You are correct that from a purely statistical standpoint any *particular* sequence of 500 coins is just as improbable as any other *particular* sequence of 500 coins. However, there are two important issues at play.

1. As Mapou points out, if we say that a sequence of 500 heads is “just as probable as any random sequence,” we are tricking ourselves, because the referenced “random sequence” is inadequately defined, and in fact encompasses a hugely massive group, whereas the 500-heads group is singular. In other words, it is *vastly more likely* that we will run into some old “random sequence” than it is that we will run into 500 heads.

As a result, if we are to avoid allowing the vagary of our identification of sequences to cloud our judgment, we have to be very careful, when arguing that 500 heads is just as probable as any other sequence, to specify precisely which other sequence we have in mind.

That brings us to a related point (the one that I highlighted in my comment #3) that is important to ID: that of independent specification.

2. 500 heads is more “meaningful” than any old random sequence. Everyone knows it is more meaningful; indeed it jumps out at us, as you acknowledge. Why is that? Well, we recognize that it occupies a special set of 1, something unique in the sea of other possible sequences.

Now, to be sure, 500 heads is not the only “meaningful” sequence, not the only one that communicates something or that stands out based on our experience, or sets itself apart from the sea of random sequences. We could probably come up with a number of sequences that would be immediately recognizable as “special” and perhaps several more that would be recognizable as “special” with some work. But the number of meaningful (to use my term) sequences is miniscule, a drop in the bucket, compared to the number of random meaningless sequences.

Now it could well be that something like 500 heads is caused by necessity rather than design, so we would need to consider the possibility of necessity as an explanation. Same would go for sequences like all tails, or HTHTHT repeated, and so on.

A long sequence of adjacent prime numbers in binary, on the other hand, is not only immediately recognizable as meaningful, it is also recognizable as something that would not likely occur by necessity.
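The “primes in binary” specification is easy to make concrete. A small sketch in Python (the helper name is mine) building that kind of bit string, the sort that is recognizable as meaningful yet not a product of any simple physical necessity:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# Concatenate consecutive primes written in binary:
# 2 -> 10, 3 -> 11, 5 -> 101, 7 -> 111, ...
bits = "".join(format(p, "b") for p in primes_up_to(50))
print(bits)
```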

—–

Anyway, it is too late and I’m perhaps not explaining this as well as I might. But I think there are two critical aspects (though related) that need to be kept in mind: (i) the question of what we are claiming is just as probable as the other*, and (ii) the independent recognition of the specification as something “meaningful,” if you will allow me to use that word.

—–

* Incidentally, when arguing that 500 heads is just as probable as any other sequence, we quickly find that if we take just a little more effort to identify what “other sequence” we have in mind, we either end up giving some other unique specification, or we have to write out an entire random sequence of 500. This is why we never find anyone actually arguing that “500 heads is just as improbable as [actual sequence].” Instead the latter is left conveniently vague.

To immediately see the other side of the coin, so to speak, try making the argument with a specific sequence in shorthand. Say, for example, “500 heads is just as improbable as 500 tails.” Sure. We all agree. Or “500 heads is just as improbable as HTHTHT repeated over and over.” Sure. I’m willing to agree. But notice that in each of these kinds of cases we end up referring to one of the very few other “meaningful sequences” that potentially could be found amidst the sea of meaninglessness.


Both A and B seem a little off.

Singles, Pairs, and Triplets histogram (adjacent pairs and triplets)

A

0: 125

1: 85

00: 34

01: 26

10: 31

11: 14

000: 15

001: 11

010: 12

011: 3

100: 8

101: 11

110: 4

111: 6

B

0: 106

1: 104

00: 26

10: 25

01: 29

11: 25

000: 17

011: 20

100: 21

111: 12

Singles, Pairs, and Triplets histogram (sequential pairs and triplets)

A

0: 125

1: 85

00: 70

01: 55

10: 54

11: 30

000: 37

001: 33

010: 44

011: 11

100: 33

101: 21

110: 10

111: 19

B

0: 106

1: 104

00: 49

01: 57

10: 57

11: 46

000: 49

011: 56

100: 57

111: 46

In string A, all pairs and triplets are represented, but there is some definite disparity between occurrences.

In string B, some triplets are not represented at all, whether adjacent pairs or sequential iterations. The triplets 000, 111, 100, and 011 are represented in both B sets, but the other half of the triplet permutations are missing.

Neither one looks strictly random, but if only one is actually designed, I’d go with B.

EDIT: I assume coldcoffee’s patterns are found on B.
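The histograms above can be reproduced with a few lines of Python. I am assuming, consistently with the totals shown (105 adjacent pairs versus 209 sequential pairs for a 210-bit string), that “adjacent” means non-overlapping chunks and “sequential” means a sliding window; the function names are mine:

```python
from collections import Counter

def chunk_counts(bits: str, k: int) -> Counter:
    # "Adjacent" k-grams: non-overlapping chunks starting at 0, k, 2k, ...
    return Counter(bits[i:i + k] for i in range(0, len(bits) - k + 1, k))

def sliding_counts(bits: str, k: int) -> Counter:
    # "Sequential" k-grams: a sliding window over every starting position
    return Counter(bits[i:i + k] for i in range(len(bits) - k + 1))

demo = "0111001000"
print(chunk_counts(demo, 2))    # Counter({'00': 2, '01': 1, '11': 1, '10': 1})
print(sliding_counts(demo, 3))
```

Run on strings A and B from comment 12 (with newlines stripped), these should return the tallies listed above.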

Paul Giem @12

Would I be literally spelling out “CHANCE” if I guessed that “B” was designed?

Addendum to my #31: for the B group, whether adjacent triplets or sequential triplets, the base 10 numbers 0, 3, 4, and 7 are present, while 1, 2, 5, and 6 are missing. Strange.

Footnote to my #33, the triplets are more accurately described as octal, as opposed to decimal.

JDH,

I find your proposal ridiculous.

How can B possibly be designed? If it spells out CHANCE, isn’t it incompetent design, with the middle arm of the E too low, and the A being misshapen?

Isn’t it bad design if it spells out CHANCE instead of DESIGN? Doesn’t that disprove design?

Graham said (and he is right) that any one sequence is just as likely as any other sequence, so why should Sequence B be any more likely to be designed than Sequence A?

You want to say that it spells out a word and therefore it is designed? Do you have any idea how many words can be made in a 210 bit string? Besides CHANCE and DESIGN, there could have been thousands, perhaps millions, of other words that could have been spelled, such as PURPLE, SWINGS, and HELPED, not to mention small letters, German words, Spanish words, Russian words in the Cyrillic alphabet, Greek words, Hebrew words, Arabic words, Korean words, Chinese words, Sanskrit words, and Sumerian words. Did you have a target sequence before you looked at the string? I bet you didn’t. So why are you so sure that this particular string is designed? What about if you had arranged it into seven rows? The pattern you think you see entirely disappears. I think you are like Hamlet just peering into clouds and seeing familiar shapes. Why do you persist in seeing design?

Besides that, isn’t 210 bits below the universal probability bound, and therefore you can’t say that B is designed?

Maybe, since God designs everything, both Sequence A and Sequence B were designed, in which case, how can you say that Sequence B has any more design than Sequence A?

I bet you don’t have any answers for this, you stupid punk! I bet you’ve never even taken statistics. You still wanna tell me B is designed?

Paul Giem and JDH, well done.

` 111  1   1   1   1   1  111  11111`

`1   1 1   1  1 1  11  1 1   1 1    `

`1     1   1 1   1 1 1 1 1     1    `

`1     11111 11111 1 1 1 1     1111 `

`1     1   1 1   1 1  11 1   1 1    `

` 111  1   1 1   1 1   1  111  11111`

A better rendering:

` 111  1   1   1   1   1  111  11111`

`1   1 1   1  1 1  11  1 1   1 1    `

`1     1   1 1   1 1 1 1 1     1    `

`1     11111 11111 1 1 1 1     1111 `

`1     1   1 1   1 1  11 1   1 1    `

` 111  1   1 1   1 1   1  111  11111`

Squint if y’all don’t see it right away.

Even better:

` ***  *   *   *   *   *  ***  *****`

`*   * *   *  * *  **  * *   * *    `

`*     *   * *   * * * * *     *    `

`*     ***** ***** * * * *     **** `

`*     *   * *   * *  ** *   * *    `

` ***  *   * *   * *   *  ***  *****`

Definitely “bad” design. 😉

So WTF with the missing triplets in the B set, with adjacent triplets and sequential triplets? It just seems odd. (cf. #31).

Paul Giem => Graham said (and he is right) that any one sequence is just as likely as any other sequence

Me => I just gave an elaborate reason in #28 for why sequences can’t have the same probability. Can you please explain why you still believe any sequence has the same probability?

One more time, with feeling:

` ooo  o   o   o   o   o  ooo  ooooo`

`o   o o   o  o o  oo  o o   o o    `

`o     o   o o   o o o o o     o    `

`o     ooooo ooooo o o o o     oooo `

`o     o   o o   o o  oo o   o o    `

` ooo  o   o o   o o   o  ooo  ooooo`

The weirdness I noted at my #41 was an indexing error in my program. The missing triplets reappeared when it was corrected. All is well.

CC: Your comment at #28 doesn’t make a lick of sense to me.

Jeez, this is school stuff: The probability of getting *any* sequence is (1/2)^n. I’m using *sequence* to mean an *ordered* sequence (I think this is the term). If you think the probability is something else, then tell us. A group of 3 should be enough. Just tell us what you think is the probability of HHH. A number now, not heaps of verbiage, a number. A single number. (Hint: the answer is 1/8.)

Hi Graham,

I know this stuff is easy as pie to you, and I may come across as a complete dunce, but could you better explain this to me:

You say,

“The probability of getting any sequence is (1/2)^n. Im using sequence to mean an ordered sequence (I think this is the term).”

So basically what you are saying is ‘the probability of getting an ordered sequence is (1/2)^n’, is that correct?

PeterJ

Yes.

Thanks Graham,

So what you’re saying is this:

If I was to sit down with 500 coins and decide that I want the sequence ‘200 heads, followed with 10 tails then 10 heads to 480 tosses, ending with 1 tail 1 head repeated till 500 tosses is complete’. Are you saying that the probability of that happening is (1/2)^n?

As I take it that would be an ordered sequence, agreed?

PeterJ

Yes. Is this going anywhere ?

Thanks Graham,

I was just making sure I understood you.

Just one last thing.

If you were to walk into the room when I had finished tossing my coins, and I asked you to start at the beginning and work your way along the sequence to verify that my desired sequence had in fact been successful, would there be any point at which you might become suspicious, or might you say ‘well, it’s nothing special really Peter … the chance of that happening is (1/2)^n. In fact do it again, me old flower, you’ll see’?

Yes, I believe that is correct.
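The arithmetic Graham2 and PeterJ are agreeing on here can be checked by brute force: enumerate every ordered sequence of n fair flips and confirm each one has probability (1/2)^n. A minimal sketch in Python (the function name is mine, not from the thread):

```python
from itertools import product

def sequence_probability(n):
    """Each of the 2^n ordered sequences of n fair flips is equally likely."""
    outcomes = list(product("HT", repeat=n))
    assert len(outcomes) == 2 ** n
    return 1 / len(outcomes)  # probability of any one ordered sequence

# Graham2's hint: for 3 coins, P(HHH) = 1/8 -- and so is P(HTH), or any other
print(sequence_probability(3))  # 0.125
```

The point both sides accept: the formula is blind to which ordered sequence you name; HHH and HTH get exactly the same number.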

“pre-defined sequence” is referred to as “independent specification” in ID literature. Specification is not limited to sequences. For example a blueprint of a skyscraper is not really describing a sequence, but a skyscraper.

“Independent” is used instead of pre-defined, to cover situations where the specification came about independently but after the design in question was observed by someone.

For example, scale-free networks are now widely recognized as designs for the internet. It turns out such networks already existed in biological systems. Technically speaking, the “blueprint” for scale-free networks arrived long after the first cell came into existence.

Thank you for your participation.

I’ll make a radical statement.

The way we conceive of mathematics implicitly makes certain sequences special even though they are no more improbable than any other.

For example, the conception of the Law of Large Numbers and averages converging on expectation has implicitly made “all coins heads” special. But it seems so natural that all coins heads is special that we hardly give it a thought — that's how deeply hard-wired certain specifications are in the human mind; it's like we were born to recognize them.

Our math proceeds in part based on our hardwiring — we’re hard wired to think in terms of expectation and averages and regularities. The way we state our math is a reflection of our hardwired intuitions.

So deeply hard-wired is “all coins heads” that our math, which resulted from our hard-wired thought processes, spits out things like expectation values and averages, and thus our math automatically makes “all heads” special. As if our math were some universally immutable truth.

Whether “all heads” is special in the ultimate philosophical sense, I have no idea, and I don't care. The same question could be posed of math in general: is it a figment of our thought process, or is it real in some ultimate philosophical sense?

Hi Graham2, Dr. Paul Giem,

Sal's post is about “comparing to blueprint”, so why are you not pitting the sequences against each other as shown in comment #14, or at least checking out the permutations and calculating the unordered-sequence probabilities?

`P(HHHH) = (0.5)^4 = 0.0625`

`P(HTTT) = 4!/(1!·3!) × 0.0625 = 4 × 0.0625 = 0.25`

`P(HHTT) = 4!/(2!·2!) × 0.0625 = 6 × 0.0625 = 0.375`
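The figures above treat HTTT and HHTT as unordered outcomes: the chance of getting that many heads in any order. They can be verified both by the binomial formula and by raw enumeration; a sketch (my code, not from the thread):

```python
from itertools import product
from math import comb

def prob_k_heads(n, k):
    """P(exactly k heads in n fair flips) = C(n,k) / 2^n."""
    return comb(n, k) / 2 ** n

def prob_k_heads_brute(n, k):
    """Same number, by enumerating all 2^n equally likely ordered sequences."""
    hits = sum(1 for seq in product("HT", repeat=n) if seq.count("H") == k)
    return hits / 2 ** n

# The three figures quoted above:
print(prob_k_heads(4, 4))  # 0.0625 (HHHH: only one ordering)
print(prob_k_heads(4, 1))  # 0.25   (HTTT in any order: 4 orderings)
print(prob_k_heads(4, 2))  # 0.375  (HHTT in any order: 6 orderings)
```

This is exactly the distinction the thread keeps circling: every *ordered* sequence has probability 1/16, but the *counts* of heads are not equally likely.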

Of note:

Of interest to the universal RecA protein that Dr. Durston studied:

I would say that the actions of the RecA protein point to more than just a slight anomaly in the whole ‘bottom up’ neo-Darwinian paradigm!

Now I have a question for you guys: is DNA simply just chemical reactions between DNA and RNA? Does it really contain information, or is it just the result of a chemical potential? From this video: http://m.youtube.com/watch?v=18ivdLtR7IA

Get back to the basics. Statistical tests do not reject an outcome as impossible, they only assign a probability to the event. 500 heads is extremely unlikely, and so is our communicating by internet rather than jumping about in a tree looking for breakfast, but neither is impossible.

Put another way, there is no statistical test that will say that a particular sequence is not random; it will only say that it is very unlikely that it is.

scordova:

Perhaps that is true of the way you conceive of mathematics. It is not how I conceive of mathematics.

No, it hasn’t. You seem to be confusing your personal interpretation of the theorem with the theorem itself.

The students that I have taught must have somehow missed that hardwiring part.

They missed that hardwiring part, too.

It is a universally immutable truth. But it is a truth about abstractions, and its universality extends only to those who share the same abstractions.

The mathematics says nothing about heads. We make a mathematical model, and we might represent heads in that model. Then the mathematics talks about the model. We have to interpret that back to reality. The mathematics won’t do that interpretation for us.

Instead of using heads and tails, take a marker and put an X on one side of each coin and a Y on the other side. You can do this in a mixed up way, so that the X is on the heads side of some coins and on the tails side of the others.

The mathematics works just as well with the X and Y as it does with the heads and tails. And, in a sense, that’s the whole point of mathematics being abstract. The mathematics doesn’t make the X special, and it doesn’t make heads special. Maybe you are making them special.

selvaRajan (Post 53, December 21, 2013 at 6:37 am): “unordered sequences” is an oxymoron.

Chance Ratcliff (#47),

You are just conditioned to see your name everywhere. Can’t you listen to reason? Didn’t you find any of my arguments in #35 persuasive? You’re just impervious to reason, you Creationist IDiot. You are probably a Christian Reconstructionist, getting ready to take us back to the Dark Ages when they believed in a flat earth.

Seriously, one of my critiques is at least superficially valid. Which one is it, and is it really valid?

To repeat from comment 12,

Anybody want to answer?

Toddlers prove that Sal is right and Neil is wrong:

Jaceli123 @ 55,

I see that you've asked that question a couple of times. The answer is “it's both”: information and a chemical reaction. All transfers of information are material events; that's how they have material effects in a material universe.

You might try reading this thread for some perspective.

Jaceli123 @ 55, yes UB’s post is a great resource. I have also created a new post on the topic you raise.

Granting for the sake of argument that you are right, if we have made “all heads” special in our minds, and then we find 500 fair coins all heads, we can reject chance as a mechanism for creating a pattern that looks like the product of a like-minded process.

My point was, it's not ultimately important whether “all fair coins heads” is somehow special in a philosophical sense, any more than in my example using blueprints.

My point is, the real probability in question is not the probability of any one coin configuration, but the probability it lines up with patterns hard-wired or learned by us.

For example, Paul Giem showed how a learned pattern is particularly special to others. I actually didn't see the word “chance” but I did notice:

1st line identical to 6th line, 2nd line identical to 5th line. I came up with the design inference via a different route than JDH. I couldn’t figure out why Paul made the 3rd and 4th lines different from each other since the symmetry was destroyed, but there was enough to make the design inference. JDH was able to see why the symmetry was destroyed because he was like-minded with Paul, I wasn’t….

What’s interesting here is the design inference is valid even if only one person is able to see a like-minded pattern. Chance can be rejected as a hypothesis.

Bill Dembski illustrates this with the Champernowne sequence. Some people will recognize the Champernowne sequence in a binary string; other people won't. Thus some will fail to see the string is designed. But the fact that only a few people can recognize the string does not invalidate the design inference.

Now that the cat is out of the bag thanks to JDH and Mr. Ratcliff, we all see the design inference which some of us didn’t see at first. I see it more clearly now than I did earlier with my primitive analysis.

It doesn't matter why we have an independent specification; it just has to be independent. It will work.

Some will say this process is subjective. It doesn't matter; it's an objective fact that a subjective thought process in one mind produces products that other like-minded people can recognize as the product of a like mind, not a mindless chance process.

The circumstantial case then is biological organisms look like they were made by a like-minded process except by a far greater mind.

I know biology doesn’t look designed to you. I respect that, but that doesn’t invalidate the design inference. The fact that some people didn’t recognize the design in Paul’s example, doesn’t invalidate the fact it was designed.

I honestly didn’t recognize the complete design by Paul myself, only parts of it.

If I framed the question: “Neal, all fair coins heads is a pattern special to some human minds (if not all). Would you, practically speaking, reject a mindless chance process as an explanation if you found such a pattern in 500 fair coins?”

The question isn’t about the inherent improbability of any one configuration (a mistake many ID proponents make when trying to define CSI), but the improbability a configuration will line up with the patterns our minds would view as the product of a like-mind, not some mindless process.

Sal

In math or science disciplines, there can be multiple paths to arrive at the same conclusion.

The route I’m going in this line of inquiry is probability and statistics rather than information. Why? The simplicity of the argument.

Many of the students that were in my ID class would not be able to appreciate arguments in favor of ID using information ideas.

As some have guessed, part of my offerings at UD and TSZ are to clean up teaching materials for the ID and Creationist underground matriculating through university. Developing simple, succinct, accessible, unassailable arguments is my goal.

I'd like to publicly thank Nick Matzke for empirically proving it is possible to develop unassailable arguments for the students in the ID underground. 🙂

Yes indeed, and you have just stated more clearly than I ever could why the LLN will work just as well for a variety of coin sequences, not just all coins heads, but for independently specified sequences.

You’ve given me the means to make the arguments more forceful.

Thank you. This has been a fruitful exchange.

Paul Giem at #60, I repent. That the sequence spells “chance” is no less likely than if it were to spell “design”, I confess. And with such a relatively small sample space (around 10^63 permutations) it had to happen eventually.

I was noticing, in the adjacent multiverse it spells “supercalifragilisticexpialidocious.” Using the same number of digits, no less. Go figure.

Hey all,

I find this subject fascinating. Humans are indeed pattern recognizing creatures. A couple of years ago it was shown that humans can tell actual financial data from random permutations of the same numbers.

from here:

http://arxiv.org/abs/1002.4592

I think that everyone would agree that the charts with the actual data are not the product of intelligent design; they are just not the product of random chance, so they stick out to us.

Perhaps it's not that we always recognize design when we see it; it's just that we know what random noise looks like, and non-random configurations stand out like a sore thumb against that background.

As has been pointed out here ruling out chance is only the first step in a design inference.

It's sad that the “never give an inch” crowd won't let us get past that first step; if they did, we could have some very interesting discussions.

Peace

I think Neil @ 58 got it: If we were to label coins with many different symbols, not just H/T, then ALL outcomes would look random and we would be surprised about none of them. What used to be all H would now appear random, just like the rest.

Yep, I will buy that.

it spells “supercalifragilisticexpialidocious.” Using the same number of digits, no less. Go figure.

210 bits is more than adequate to code that word, with lots of bits left over.

No doubt cantor, but what's really inexplicable — or maybe just weird — print up Pedro Giem's (yes, in that universe he's an illegal alien — WHOOPS! I'll get in trouble in both universes talking that way — I mean undocumented worker) on an 8 1/2 x 11 sheet of paper, default margins and fonts in MF Word (the meanings of “soft” and “fuzz” being flipped over there) and punch holes where the zeros are, staple it to a scroll, and run it through a 1923 Wrigley Player Piano, and it plays six measures of “Ode To Joy” backwards. But — and here's the really weird part — play it backwards to hear that piece forward and it plays the chorus of “Sgt. Pepper” instead; the same tune as that hit in this universe, except the Walrus is Ringo (how do you like those cucumbers? yes, over there cucumbers, not apples, grow on trees, got to Eve, hit Newton, inspired Jobs, etc., etc., etc.). Which, you know, that's just like science, right? The multiverse makes duck soup of all the vexing OOL questions, but at the same time poses even more vexing questions. Go figure.

Once we agree that humans can differentiate between random noise and patterns, the next step is to understand that, absent a reasonable explanation, humans are hardwired to infer design from nonrandom patterns. It's just what we do.

from here:

http://www.dailymail.co.uk/sciencetech/article-1136482/Brains-hardwired-believe-God-imaginary-friends.html

So when we see something that is obviously not random, like the rapid emergence of body forms in the Cambrian explosion, the default explanation is Design. There is just no getting around it; it's in our genes.

It might not be fair but the burden of proof in such cases will always fall to the person denying design. It’s human nature.

Until a convincing explanation can be given that does not rely on chance, people without an axe to grind will always assume design once we rule out randomness.

That is why ID will not go away no matter the efforts of the critics.

peace

Graham2 @70:

C’mon. Neil’s example doesn’t change a single thing. This is all pretty simple. We are still dealing with a binary coin. And if you add more characters, we just end up with more possible combinations. Doesn’t change a thing.

In Neil's example, are you saying that the probability of getting all the coins to fall with the X mark facing up is the same as the probability of getting a sequence that doesn't have all the X marks facing up?

Or are we going to be more careful with our use of “any other sequence” kind of language?

Re-read my #29. That is the key.

I got the right answer (B) without seeing the pattern.

B has 85 heads. The probability of getting 85 or fewer heads with one random trial of 210 bits is about 0.3%

A has 104 heads. The probability of getting more than 85 and fewer than 105 heads with one random trial of 210 bits is about 47%

Assuming you didn’t:

1) design A and then purposely add (or subtract) heads to make the number 104, and

2) randomize B until you got an outlier, then

… is it not valid to infer (with some greater than 50/50 probability) that B is the designed pattern?
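The tail probabilities quoted above (about 0.3% for 85 or fewer heads, about 47% for 86 to 104 heads, out of 210 flips) follow directly from the binomial distribution. A sketch that reproduces them exactly (my code, assuming fair coins as the commenter does):

```python
from math import comb

def p_heads_at_most(n, k):
    """P(number of heads <= k) in n fair flips: sum of binomial terms."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

n = 210
p_b = p_heads_at_most(n, 85)                             # B: 85 or fewer heads
p_a = p_heads_at_most(n, 104) - p_heads_at_most(n, 85)   # A: 86..104 heads

print(p_b, p_a)  # compare the quoted ~0.3% and ~47%
```

The sums are exact integer arithmetic; only the final division is floating point, so the figures are trustworthy even this far into the tail.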

EA #75 Your post at #29 was sort of incomprehensible.

The probability of any sequence is (1/2)^n. Look it up. If you don't agree, then use just 3 coins as an example and tell us the probability of HHH and something else (e.g. HTH); in other words, put your money where your mouth is. A number now, not all that waffle about 'specified' etc. An actual number.

(Before you post it, just do a quick check that all 8 values add up to 1: this is a dead giveaway)

Graham2,

What is your comment on this article by Scordova?

excerpt:

Box: If 500 heads were thrown, we would suspect it wasn't a fair throw. If an (apparently) random sequence was thrown, we wouldn't be concerned, yet both sequences have exactly the same probability. It appears counter-intuitive, but it's not. It's a psychological effect, nothing to do with mathematics (the universe doesn't care). We are suspicious of the 500 heads because it matches a small pool of sequences that we carry round with us, that we regard as 'special'.

Graham2, you make no sense whatsoever. There is no use in arguing with you.

Box,

I will make this clear: every sequence has the same probability. I was talking about permutations of the sequence, because Sal talked about students arranging the sequence, “building their design” (and Penney's Game, in respect of pitting the first sequence obtained against another):

`P(HHHH) = (0.5)^4 = 0.0625`

`P(HTTT) = 4!/(1!·3!) × 0.0625 = 4 × 0.0625 = 0.25`

`P(HHTT) = 4!/(2!·2!) × 0.0625 = 6 × 0.0625 = 0.375`

So yes, Paul, Graham2 and cantor are right in saying every sequence has the same probability, unless Sal meant something else.

Mapou,

I think you too are confusing sequence with permutations of the sequence:

`P(HHHH) = (0.5)^4 = 0.0625`

`P(HTTT) = 4!/(1!·3!) × 0.0625 = 4 × 0.0625 = 0.25`

`P(HHTT) = 4!/(2!·2!) × 0.0625 = 6 × 0.0625 = 0.375`

I have not followed this at all, but I haven't seen the real issue in my cursory reading. It is not that the 500 heads is a unique sequence (every sequence is unique); it is that it represents a specific proportion of heads versus tails, which is extremely rare. If it were 499 heads and one tail, we would still be highly suspicious, because there are only 500 possible combinations that give rise to this proportion, compared to other proportions.

So when one offers a different sequence of heads and tails, say 250 heads and 250 tails, there could be an almost unlimited number of ways of getting this combination (well, an extremely high number).

So the difference is that one proportion is incredibly unlikely and the other is much more common. It is not a specific sequence but a specific proportion that is at issue.

In DNA, it is those combinations that give rise to a folding protein versus those combinations that do not. The proportion of combinations that give rise to a folding protein is infinitesimally small compared to those combinations that do not. So how does one stumble on one of these incredibly small instances of a folding protein or how does one stumble on the incredibly small number of instances of 500 straight heads. It is not by chance or any naturalistic process known to man.

As an aside, someone designed a machine that flipped a die so that it landed on the table showing the same number each time. So 500 heads should be easy. Note, though, that I said it was a machine that was designed.
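jerry's proportion argument is just binomial counting: there is exactly 1 sequence with 500 heads, 500 sequences with 499 heads, and an astronomical number with 250 heads. A quick check (my own illustration, not jerry's):

```python
from math import comb

# Number of distinct 500-flip sequences having exactly k heads
print(comb(500, 500))            # 1: only one way to get all heads
print(comb(500, 499))            # 500: one tail in any of 500 positions
print(len(str(comb(500, 250))))  # the 250-heads count runs to ~150 digits
```

So while each individual sequence has probability 1/2^500, the *proportion* "all heads" is matched by exactly one sequence, and "half heads" by roughly 10^149 of them, which is jerry's point.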

[…] may have escaped many, myself included, until Neil made this comment in another discussion, is that when the coin manufacturer created a heads-tails coin (instead of a 2-headed or […]

Nope.

http://www.uncommondescent.com.....led-coins/

jerry

Don’t you mean this?:

`P(HTTT) = (0.5)^4 = 0.0625`

`P(HHHH) = (0.5)^4 = 0.0625`

Unless you talk about permutations, proportions don't make a difference to coins:

`P(HTTT) = 4!/(1!·3!) × 0.0625 = 4 × 0.0625 = 0.25`

#79

Why? You say it has the same probability as any other outcome.

I think this has been done to death.

So you are not going to say why you would “suspect [500 heads] wasn't a fair throw”? Why not?

You've come in a bit late; try reading from here backwards for a bit to get up to speed.

I’ve been here all along.

Why would you “suspect [500 heads] wasn't a fair throw”?

Then you didn't read #79.

Out of interest, do you think 500 heads has the same probability as other sequences or not?

#79 doesn’t answer the question.

Why would you “suspect [500 heads] wasn't a fair throw”?

That's the best I can express it. It's a bit like moving a target to fit the arrow.

Do you think 500 heads has the same probability as other sequences or not?

It has the same probability as any other specific sequence, but that is not the question being asked. The question being asked is whether, reasonably speaking, a chance process can be expected to make 500 fair coins all heads.

They are subtly two different questions, and you are equivocating one with the other. Here are the questions:

Does 500 heads have the same probability as any other specific sequence?

Answer: yes

Can a chance process reasonably be expected to produce 500 fair coins all heads?

Answer: no, it deviates from the expectation of 50% heads by a wide margin (on the order of 22 sigma or whatever)
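The "22 sigma" figure comes straight from the binomial mean and standard deviation for 500 fair flips; a sketch of that arithmetic (my code, not Sal's):

```python
from math import sqrt

n, p = 500, 0.5
mean = n * p                 # expected heads: 250
sd = sqrt(n * p * (1 - p))   # standard deviation: about 11.18
z = (500 - mean) / sd        # how far "all heads" sits from expectation
print(z)                     # about 22.36 standard deviations
```

So "22 sigma" is not a rhetorical flourish: 250/11.18 is 22.36, the deviation of the all-heads count from the binomial expectation.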

You said it’s a psychological effect, and I actually agreed with you. Some IDists find that uncomfortable, btw.

The problem, however, is that with respect to all coins heads, the target has been well known throughout human history, inasmuch as humans like simple repetitive patterns, and all coins heads is only an extension of a pre-existing fixed target.

Thus, the objection of after-the-fact drawing of targets cannot be sustained, and thus you would suspect something was up if you saw 500 fair coins on a table all heads.

Even you admitted you’d be suspicious, and all I’m trying to do is explain why you’d be suspicious.

PS

For what it's worth, I once arranged 50 fair coins all heads on a plate. The phenomenon isn't that improbable if intelligence is involved.

so no answer

okay

Sal: I understand the difference perfectly, and it's really not subtle at all. My reply at #79 sums up my position as best as I can express it.

I was trying to get UBP to come clean.

Guys, I really hate to interrupt your discussion, but I would like you to check this video about a guy who can move his caudal appendage, or tail. Does this show common descent?!?! m.youtube.com/watch?v=xnxzqeT466A

Graham2 @79:

Please define precisely which “random sequence” you have in mind that 500 heads would be just as probable as.

—-

Then try re-reading #29.

You are missing the boat.

You are stuck at a simple Statistics 101 level. We've moved beyond that long ago. Everyone understands the point you are making, and it (i) is entirely trivial, (ii) misses the point, and (iii) demonstrates that you are not willing to even think through your own acknowledgement as to why 500 heads in a row is suspicious.

Since you can’t seem to grasp our explanation, why don’t you offer your own. Why does 500 heads in a row seem suspicious to you? Think about it carefully and once you’ve come up with a decent answer then our “incomprehensible” answers might suddenly make a lot more sense.

Sorry, here's the link again! http://m.youtube.com/watch?v=-G6UkPS9YjU He can actually move the tail!

This is getting tiresome. My reply at #79 is the best I can explain it.

Graham, there is nothing for me to “come clean” about.

You said you'd be suspicious of seeing 500 heads in a row. I am asking you to think about it and tell me: why?

UBP: For the umpteenth time, I have explained it at #79 as best as I can. If you have any questions about that, then ask. In the meantime please don't keep asking the same question.

You’ve lost your place Graham.

I accepted ‘no answer’ from you in #96, and left it alone.

You then came back to say that I needed to “come clean”.

As crazy as it sounds, I actually have nothing whatsoever to do with why you would see 500 heads as suspect.

I can tell you that I would see 500 heads as suspect, not because (as you suggest) it's a phychological thing we carry around, but because it deviates wildly from the random distribution of fair coin tosses.

If I commissioned a research project of 500 parents, and sit down to find that everyone in my sample has a female child, then I can assure you I will have the director in my office to be “suspect” with. I would not think “Gee, all children are either girls or boys, so it must be a phychological thing I carry around”. And no matter how many times the RD tells me that “having a girl is as likely as having a boy” I would not be swayed by that reasoning.

I was just wondering why you are.

typo

phychological -> psychological

OK, then we are all waiting. Your explanation is …

I already gave my answer, and compared it to yours – but here again, my answer has nothing whatsoever to do with why *you* would find 500 heads suspect. That is the question at hand: why did you say that *you* would find 500 heads suspect?

Apparently, it's all about a psychological thing you carry around. I think the weakness of that answer is rather ironic for someone who operates around here with such self-certainty, but it is what it is, and I am prepared to leave it at that.

The reason I said “whatever” is that when the term sigma is used, it implies a normal distribution. The binomial distribution can be approximated by the normal distribution, and thus I can borrow some language, but it is inexact in extreme cases. In this case, the probability of all heads is 1 out of 2^500 ≈ 3.27 x 10^150.

When I put 26 sigma into Wolfram Alpha to get an expected frequency of 1 out of some huge number, I got

1/(1 − erf(26/sqrt(2))) ≈ 2 x 10^148

so 22 sigma actually understates the severity of the deviation if we are borrowing the terminology of the normal distribution. Something like 26 sigma would be more accurate.

As I said in the original discussion, the numbers involved are so extreme for the normal approximation to the binomial distribution that “22-sigma” becomes a figure of speech.

So I actually understated my case.
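The Wolfram Alpha figure above can be reproduced with the standard complementary error function, since 1 − erf(x) = erfc(x) and the quoted expression is the reciprocal of the normal tail beyond 26 sigma. A sketch (my code, not Sal's):

```python
from math import erfc, sqrt

# 1/(1 - erf(26/sqrt(2))) = 1/erfc(26/sqrt(2)):
# the reciprocal of the normal tail mass beyond 26 sigma
freq_26_sigma = 1 / erfc(26 / sqrt(2))
print(f"{freq_26_sigma:.1e}")  # about 2.0e+148, matching the figure above

# versus the exact all-heads odds for 500 fair coins: 1 in 2^500
print(f"{2 ** 500:.2e}")       # about 3.27e+150
```

The comparison shows why 26 sigma is still a slight understatement: 2^500 is roughly a hundred times larger than the 26-sigma frequency.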

it deviates wildly from a random distribution of fair coin tosses…. Is that your explanation?

Is there a follow-on comment you'd like to make, Graham?

So if I tossed 500 coins and the result represented the value of PI (correct to 500 bits), you wouldn't see any problem. None at all. A perfectly reasonable result. I see.

If I tossed 250 heads then 250 tails, it also conforms exactly to the expected distribution (50% heads), but you can probably see through that one.

Again, you’ve lost your place in the conversation.

The question you posed was -> why I would be suspicious if 500 fair coin tosses came up with all heads, and I gave my answer.

The result of a fair coin toss is either one of two values, heads or tails, at roughly a 50/50 distribution. That is a known value of a physical event controlled by inexorable law. If the result deviates from that value by some wild factor, then I would have every right to be suspicious of that result.

Do you disagree?

The question that remains is why *you* would be suspicious of it … setting aside the ridiculous answer that it's a psychological thing you carry around, having nothing whatsoever to do with the simple fact that it's a physical event with a known random distribution.

If you tossed 250 heads straight, it would still deviate wildly from the known value of a fair coin toss. For some reason that seems to give you trouble understanding. Following that with 250 straight tails would not make it an even distribution.

good grief.

You are probably right about the H/T distribution test. I don't mind that, but in general we are suspicious of outcomes that don't 'look' random. The case of 500 heads would be suspicious on 2 counts: the unexpected distribution, and the fact that it matches what we regard as 'unnatural'. My example of PI is a better test. The H/T distribution is (I presume) close to 50/50. Would you regard such an outcome as suspicious?

We regard them as “unnatural” only because we have studied them enough to know what “natural” is. This particular practice has served humanity very well. We look for both regularities and their counterparts. The incessant attempt to paint ID proponents as seeing “patterns everywhere” is cheap BS offered as a rhetorical placemat in lieu of engaging the arguments that ID proponents actually make. The “500 fair coins” conversation has been a great testament to that attempt. You should not have stood on the trivial fact that a coin has two faces which are equally likely to appear – while ignoring the fact that coins have two faces that must appear equally as a regularity of the physical event known as a “fair coin toss”.

So could you answer the question about the PI case … would you regard this with any suspicion?

Sal ?

PI

The single largest deviation from a 50/50 distribution would be 9 straight tosses of either heads or tails out of 500 tosses. That would not make me suspicious of the sequence in terms of deviation.
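The "9 straight tosses" remark matches the standard result that the longest run of either face in n fair flips is typically about log2(n), which is roughly 9 for n = 500. The chance that 500 flips contain a run of at least k can be computed exactly with a small dynamic program; a sketch (my own code, assuming fair coins):

```python
def prob_run_at_least(n, k):
    """Exact P(at least one run of >= k identical faces in n fair flips),
    via a dynamic program over trailing-run lengths (k >= 2)."""
    state = [0.0] * k      # state[j]: P(trailing run length == j, no k-run yet)
    state[1] = 1.0         # after the first flip the trailing run has length 1
    p_hit = 0.0            # P(a run of length k has already occurred)
    for _ in range(n - 1):
        new = [0.0] * k
        new[1] = 0.5 * sum(state)        # next flip differs: run resets to 1
        for j in range(1, k - 1):
            new[j + 1] = 0.5 * state[j]  # next flip matches: run grows
        p_hit += 0.5 * state[k - 1]      # run reaches length k: absorbed
        state = new
    return p_hit

# In 500 fair flips, a run of 9 identical faces is more likely than not
print(prob_run_at_least(500, 9))
```

Sanity check on a case small enough to count by hand: in 3 flips, the only sequences with no run of 2 are HTH and THT, so the probability of some run of 2 is 6/8 = 0.75, which the DP reproduces.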

goodnight

And PI doesn't have that?

Apparently, no one knows if the digits of PI are randomly distributed; it's an unsolved problem, but it certainly appears that way.

G2:

The digits of pi are an example of the uncorrelated clash of two deterministic systems giving rise to effective, evident — as opposed to proved — randomness.

The ratio of the circumference to the diameter of a circle has no necessary correlation with the decimal number place value notation system, and so it is no surprise that we can use tables of pi — cf here 1 mn digits — as random number tables for practical purposes. Of course, even pseudorandom numbers can be used as random numbers for many purposes.

Here’s a bloc:

Similarly, it is possible to use the local line loop codes — phone numbers — of telephones in a book as a poor man’s random number table, based on the same root of chance.

If you want effectively guaranteed chance digits, get a zener and drive a circuit that flattens out the distribution. Quantum noise. Sky noise may work as well.

Good old fashioned Johnson noise from a high value resistor would also work.

KF

PS: If you link the page, you can then use the in-page search feature of your blog to see if interesting digit strings crop up. I find that consistently, you may find 5 – 6 digits that strike us, but 7 up begins to get no hits. That looks like a threshold of 1 in a million . . . and that happens to be precisely the number of digits we have!

oops, your browser!
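KF's "threshold of 1 in a million" observation is just expected-hits arithmetic for substring search: in N random decimal digits, a given k-digit string is expected to appear about N/10^k times, so with a million digits, 6-digit strings get about one hit and 7-digit strings usually get none. A sketch of that estimate (mine, not KF's):

```python
def expected_hits(num_digits, k):
    """Expected occurrences of one specific k-digit string in num_digits
    uniformly random decimal digits (ignoring overlap and edge effects)."""
    return (num_digits - k + 1) / 10 ** k

# With a million digits: ~10 hits for 5 digits, ~1 for 6, ~0.1 for 7
for k in (5, 6, 7, 8):
    print(k, expected_hits(1_000_000, k))
```

This matches the reported experience: 5-to-6-digit strings keep turning up, and 7 digits and beyond mostly don't.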

G2:

Nope.

For instance, X and Y would add informational features; they would not subtract the underlying ones. And if you were to write the alphabet's worth of characters over and over on coins scattered H vs T, it still would not change the fact that 500 coins all H will be maximally unlikely on a chance process.

KF

G2, 77:

True but a strawman, as I pointed out by highlighting INDIVIDUAL.

In short, it is maximally improbable to get any arbitrary specific 500-character sequence. But as the difference between garbage hands and valuable ones in card games shows, there are CLUSTERS of sequences that are of interest, that form isolated target zones in the config space of all sequences.

By contrast there is an overwhelmingly dominant cluster of sequences that are near 50-50 and which hold no particularly interesting pattern or order or organisation.

It is unsurprising to obtain one of these by a chance process. But it IS highly unexpected to obtain one of the special sequences by such a process; and we know that patterns — simple ones — can be triggered by lawlike mechanisms [all H, all T, alternating H-T and the like, similar to crystals] and/or by design.

The pattern 500-H is an example of the simple repetitive pattern, which can be necessity or design mimicking necessity.

And frankly, this fairly obvious distinction has been well known for a long time, so the plain point is this is a concept and perception gap triggered by ideological bias in a context of polarisation over the design inference.

But, when you are a reasonably educated person and the matter has been pointed out to you in a reasonably clear way, then clinging to such a gap begins to look a lot like closed-mindedness.

To show that you are not being closed minded, kindly accurately put the above in your own words, and then discuss it and its implications.

KF

SC, 95:

With CSI as a broad thing, yes that is often true.

But you are very close to why I have focussed our attention on functionally specific complex organisation and/or associated information [FSCO/I].

The isolated target zone in the space of configs is there, but now there is an objective test: does this thing work in a way that depends on configs?

Scrambled text does not work, beyond a certain threshold. Scrambled genes, too. Scrambled car parts, scrambled electronic parts, scrambled programs etc etc etc.

Hence the ideological rage to refuse to acknowledge this obvious reality.

It is increasingly evident that we are up against the ideologised, closed, hostile mind, and that beyond a certain point we can only ring fence, and put up warning labels.

The patent absurdities will in the end tell.

But so long as entrenched power backs up absurdity, “it’s dangerous to be right when city hall is wrong.”

Thus, the sadly revealing expelled phenomenon.

KF

G2: Have you done basic statistical mechanics? Try this class slide show, paying particular attention to the pattern of dominant clusters explained in slides 1 – 6, esp. the diag in 4. The links to the statistical principles behind the 2nd law of thermodynamics should be clear. KF

PS: For tossed coins that dominant cluster tends to be near 50-50 in no particular order.
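To illustrate the dominant-cluster point with 500 tossed coins, here is a short stdlib Python sketch; the choice of ±k-sigma windows is mine:

```python
from math import comb

# For 500 fair coins, count what fraction of all 2^500 sequences
# have a heads total within k standard deviations of the 250-heads mean.
n = 500
sigma = (n * 0.5 * 0.5) ** 0.5          # sqrt(n*p*(1-p)) ~ 11.18

def fraction_within(k_sigma):
    lo = int(n / 2 - k_sigma * sigma)
    hi = int(n / 2 + k_sigma * sigma)
    return sum(comb(n, j) for j in range(lo, hi + 1)) / 2 ** n

print(fraction_within(2))   # ~0.96 of all sequences lie within 2 sigma
print(fraction_within(3))   # ~0.998 lie within 3 sigma
```

In other words, the overwhelming bulk of the 2^500 configurations cluster near 50-50 in no particular order, which is the statistical-mechanics point about dominant clusters.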

Yes. To illustrate why, just use the procedure outlined in

http://www.uncommondescent.com.....led-coins/

So use red numbered labels to specify pi. If "all red labels are up" after a random process would be surprising, then imagine the first binary digits of pi generated by a random shaking of coins. 😯

Here are some of the first digits:
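As a sketch, those leading binary digits of pi can be generated in stdlib Python with Machin's formula, pi = 16*atan(1/5) - 4*atan(1/239); the fixed-point helpers below are my own:

```python
def atan_inv(x, one):
    """atan(1/x) in fixed point, scaled by `one` (Gregory series)."""
    total = term = one // x
    k, sign = 1, 1
    while term:
        term //= x * x            # next odd power of 1/x
        k += 2
        sign = -sign
        total += sign * (term // k)
    return total

def pi_binary(nbits, guard=16):
    one = 1 << (nbits + guard)    # guard bits absorb truncation error
    # Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239)
    pi_fixed = (16 * atan_inv(5, one) - 4 * atan_inv(239, one)) >> guard
    s = bin(pi_fixed)[2:]         # '11' followed by nbits fractional bits
    return s[:2] + "." + s[2:]

print(pi_binary(48))
# 11.001001000011111101101010100010...
```

Map each fractional bit to a coin (1 = heads, 0 = tails) and you have a highly specific, independently checkable target sequence.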

The reason this works is that humans can, even in principle, write down or conceive of only so many highly specific specifications.

Try writing down specific sequences that are 500 binary digits long. You'll be hard pressed to come anywhere near 2^500 of them. On earth there are only 2^149 atoms, so even if you took every 500-bit sequence from the printed books of history and fiction, plus every recorded sentence any person has ever spoken, you could not come close to covering a random sea of 2^500 sequences.

500 seems like a tiny number. Agreed, but 2^500 is big, and if we go to 1000 coins, then 2^1000 is astronomical relative to 2^500.
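For a sense of scale, a trivial Python check:

```python
# 2^500 and 2^1000 written out in decimal are only a few hundred digits,
# but as counts of sequences they dwarf anything enumerable by hand.
print(len(str(2 ** 500)))     # 151 decimal digits (~3.3 x 10^150)
print(len(str(2 ** 1000)))    # 302 decimal digits (~1.1 x 10^301)
```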

That’s why Bill Dembski went to a lot of trouble to estimate how likely it was we’d be able to use the following metaphors to describe biology:

code

control

error correction

language

interpreter

feedback

sensor

redundancy

translation

transcription

wing

gear

wheel

copy

blueprint

etc.

How difficult is it to project engineering metaphors onto biology? You can’t do that with a rock. But biological organisms seem so amenable to these metaphors.

Compare then the class exercise in the OP. It was relatively easy to project my hard-wired and learned patterns onto the students' designs and recognize them as designs.

Detecting designs in biology is detecting patterns that conform to engineering designs. It's no coincidence ID seems to be over-represented by engineers. They find it outrageous that a chance hypothesis in a pre-biotic soup could even synthesize the first DNA/protein system in the ancestral cell at the nano-level, where there are tons of thermal and quantum noise to destroy any would-be precursor of a cell rather quickly.

fifthmonarchyman,

Wow. Long time no see! Thanks for dropping in.

Sal

SC: Pi in binary seems hard to come by beyond 10^6 or so listed digits. Best I came up with is 32k+, here. But of course, without claiming a proof, I note that again we have a clash between uncorrelated deterministic entities, so we should expect to get effective randomness . . . especially as pi goes on forever. KF

PS: Onlookers may get a kick out of a discussion of that here, which does not bring out the little problem that searching out the relevant items is a solar system and observed cosmos scale supertask.

Sal: So we both agree that the pi case is suspicious, but why? It satisfies your std dev test (I presume), so why is it suspicious?

You seem to be suggesting that it matches a recognizable pattern, but that's exactly what I've been agreeing with all along.

Graham, lol, you've been trying to make this point for so long. Why don't you just make it directly?

I would suspect that you and everyone else on the surface of the planet would be surprised to see a person flip 500 coins that perfectly corresponded to the binary value of pi.

So what?

What are you specifically saying in regards to a specific ID argument?

You have already asked the question,

why. There was a brief diversion into std deviations, but sal, above, seems to be saying that we suspect something if we see a recognizable pattern in the result, which is what I've been saying all along. What all this has to do with ID (or evolution) you will have to ask sal.

So you don’t have any point you’ve been trying to make in relation to any specific ID argument.

Okay.

Why me? But anyway, I did write up something to that effect just now:

http://www.uncommondescent.com.....n-biology/

Thanks for being such a good sport, Graham. Not all of us are out to humiliate you. I hope you might learn something from these discussions.

Thanks for participating. The questions you raise I know may be on the minds of some of our ID friendly readers who are too shy to ask.

Sal

scordova said,

Long time no see! Thanks for dropping in.

I say,

You're welcome. I come by from time to time to see how the debate is progressing.

ID has the potential to be a very fruitful exercise, but it's hard to see how we will ever get there as long as:

1) Our side thinks it can be used as a tool to prove God’s existence. As if God’s existence was not already patently obvious to everyone.

2) The other side is unwilling to give even an inch of ground for fear that they will be tricked into acknowledging God’s existence.

Until one or other of those factors changes we will continue to butt heads in long threads about whether or not we can rule out chance if we discover 500 fair coins on a table heads up.

it is comical if you think about it

peace

fifthmonarchyman,

I have no clue why the spam filter held your comment up in moderation. I hope you visit again.

Sal

I asked you, sal, because your name is at the top of the thread.

I presume the point of the whole thread is that if we see some non-random pattern in life/genome etc., then we assume some agent is responsible, ergo design. But all this only advances the ID cause if natural selection doesn't operate, in which case there don't seem to be many options left. If natural selection doesn't operate, that is, a fictitious world you have invented.

1. Natural selection can’t operate if you don’t have a population of living organisms to begin with. Natural selection cannot solve the OOL problem. And Darwinists themselves insist on not using selection as a solution to OOL. Thus the arguments I laid out as pertaining to OOL cannot be solved by Darwinian mechanisms. So my point holds.

2. “Natural Selection” as defined by Dawkins and Darwin isn’t how nature really works; the fictitious world is Darwin’s, Dawkins’s, and Dennett’s (D+D+D = 3D), not mine.

See: NS is double speak for DFFM.

You are so convinced that Dawkins is right that you think his proposed solution of Natural Selection will actually work as advertised. Lab and field observations plus analysis by population geneticists prove otherwise. Dawkins is wrong; if he debated us at UD like Nick Matzke did, we would dispose of Dawkins in a week.

For selection to work as you suppose, it has to select for precursors of systems that are not even in existence. The problem with that is outlined in:

Selection after something exists is not the same as selection before something exists.

A biased coin turns up tails exactly twice as often as heads.

What’s the probability you’ll get 330,000 or fewer heads if this coin is flipped one million times?

I’m wondering if anyone participating in or reading this thread has the knowledge and resources to find the answer.

Cantor,

As the number of trials goes up, the standard deviation as a percentage of the number of trials goes down, due to the law of large numbers.

Try

http://www.stattrek.com/online.....omial.aspx

First try with these parameters

P success = .333333

trials = 1000000

number of successes = 333333

and you see that the cumulative probability is 50%, which means the number of successes is right at expectation. Change that slightly to what you were aiming for:

P success = .333333

trials = 1000000

number of successes = 330000

and the chance of success is 100% or close to it. Why? It’s way outside 3 sigma from the expectation of 333,333 heads.

The website cautions:

Sal, I don’t think that answer is anywhere near correct.

The question was, what's the probability you'll get 330,000 or fewer heads?

0% effectively of getting 330,000 or less. It is several sigma from the expectation of 333,333 heads.

100% of getting 330,000 or more

A 1 sigma deviation is 471 heads

(333,333 – 330,000)/471 = 3,333/471 ≈ 7 sigma deviation, thus it is effectively 0%

sigma = sqrt( n*p*(1-p) )

n = number trials

p = probability of success

this sigma is used in the normal-distribution approximation to the binomial
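The sigma arithmetic above can be cross-checked in a few lines of stdlib Python (a sketch; variable names are mine, and `erfc` supplies the normal tail):

```python
from math import sqrt, erfc

n, p = 1_000_000, 1/3
mu = n * p                        # expected heads: ~333,333.3
sigma = sqrt(n * p * (1 - p))     # ~471.4
z = (330_000 - mu) / sigma        # ~ -7.07 sigma below the mean
tail = 0.5 * erfc(-z / sqrt(2))   # P(X <= 330,000), normal approximation
print(round(z, 2), tail)          # -7.07 7.687...e-13
```

So "effectively 0%" works out to roughly 7.7 x 10^-13 under the normal approximation.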

I suspect the normal distribution approximation gives a wildly inaccurate answer.

Can someone compute an answer accurate to 2 significant digits?

It’s about 7×10^-13.

Actually the approximation gets better with a larger number of trials, not worse, so I'm inclined to think this is good enough. I don't know that many computers can handle an exact binomial distribution with 1,000,000 trials.

You’d have to be plugging in n = 1,000,000 trials, and to accurately calculate the binomial distribution you’d have to be processing numbers like 1,000,000 factorial. Even at 100 factorial, lots of calculators will be going to some sort of approximation anyway.

How did you arrive at that answer?

I plugged the numbers into R, which calculates the values by making use of the Beta function.
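For readers without R, a stdlib Python sketch reproduces the exact lower tail by summing the binomial pmf in log space; the cutoff at k = 320,000 is my choice, since terms below it are astronomically small and contribute nothing at double precision:

```python
from math import lgamma, log, exp

# Exact P(X <= 330,000) for X ~ Binomial(n = 1,000,000, p = 1/3).
n, p = 1_000_000, 1/3
log_p, log_q = log(p), log(1 - p)
lg_n1 = lgamma(n + 1)

def log_pmf(k):
    # log of C(n, k) * p^k * (1-p)^(n-k), via log-gamma
    return lg_n1 - lgamma(k + 1) - lgamma(n - k + 1) + k * log_p + (n - k) * log_q

tail = sum(exp(log_pmf(k)) for k in range(320_000, 330_001))
print(tail)   # ~7e-13, matching the R result
```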

OK. I’m on board now. I was figuring it with 333,000 instead of 330,000.

Sal,

If I want the probability of 333,000 or fewer heads, using your method:

n = 1000000

p = 1/3

sigma = sqrt(n*p*(1-p)) = 471.4045207910317

Z = (333333-333000)/sigma = 0.706399674405361

The area under the standard normal distribution between its peak and that Z score is 0.2776,

so the area of the tail is 0.5-0.2776 =

0.2224…

But if I use Octave’s binocdf() function I get a different result:

Octave 3.6.4> binocdf(333000,1000000,1/3) =

0.24010…

Scilab agrees with Octave:

Scilab 5.4.1> cdfbin(“PQ”,333000,1000000,1/3,2/3) =

0.2400981…

What am I missing here?

Good question, I don’t know. Where are the mathematicians and statisticians like Neil, DiEb, and Mark Frank when you need them? 🙂

It would be news to me if the normal approximation of the binomial distribution were that far off for 1,000,000 trials, but maybe that's the way it is.

Sal

Try this

[1 – ERF( x / sqrt(2) )] / 2 = 0.239969812

where x = 0.706399674405361

That tells you, given the deviation, the one-sided area of the population that lies outside it.

That looks better. 🙂

B(n,p) ~= N(mu,sigma^2) if both np and n(1-p) are large

sigma of B(n,p) = sqrt(n*p*(1-p)) = sqrt(1000000*(1/3)*(2/3)) = 471.4045207910317

CDF of B(n,p) ~= CDF of N(mu,sigma^2) = (1/2)*(1+erf((value-mean)/sqrt(2*sigma^2)))

= (1/2)*(1+erf((333000-333333)/sqrt(2*471.4045207910317^2)))

=

0.23996981165296…

Or do it this way:

The standard normal distribution probability density function PDF is:

PDF(x):=exp((-x^2)/2)/sqrt(2*%pi)

Integrate the PDF from x=0 to x=Z to get the central area A(Z) between the mean and Z:

A(Z) = erf(Z/sqrt(2))/2

Z = (mu-x)/sigma = (333333-333000)/sigma = 0.70639967440536

Plug the Z score into A(Z) and crunch the numbers:

A(Z) = erf(0.70639967440536/sqrt(2))/2 = 0.26003018834704

Subtract from 0.5:

0.5 – 0.26003018834704 = 0.23996981165296…
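The same derivation can be cross-checked with Python's stdlib `erf` (a sketch; note that using the unrounded mean np = 333,333.33 rather than 333,333 shifts the normal approximation slightly):

```python
from math import sqrt, erf

n, p = 1_000_000, 1/3
sigma = sqrt(n * p * (1 - p))              # ~471.4
z = (333_000 - n * p) / sigma              # ~ -0.7071 with the exact mean
lower_tail = 0.5 * (1 + erf(z / sqrt(2)))  # P(X <= 333,000), normal approx.
print(lower_tail)   # ~0.2398 (0.23997 above used mu rounded to 333,333;
                    #  exact binocdf gives 0.24010)
```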

But the question remains:

Z = (x-mu)/sigma = (333000-333333)/471.4045207910317 = -0.70639967440536

If I use a table to look up the probability for the tail of the standard normal distribution for this Z score, I get a value of 0.2224. Why is the value so different?…

I plugged the following number into this Standard Normal Distribution Z-Score Calculator:

http://www.danielsoper.com/sta.....aspx?id=19

Cumulative probability level : .2399

I got a Z score of : -0.70662427

Which looks right. Are you sure you're interpreting the tables correctly? I must confess I'm not a statistician; this is starting to get beyond my level of knowledge.

Ignore the above post. My aging eyes were looking at the wrong column in the Table. The Table values do agree with the error function, (1/2)*erf(Z/sqrt(2)).

Sorry for taking this down the rabbit hole.

Please, no need to apologize, this was fun, a lot more fun than some of the ugly debates that sometimes transpire on the internet.

I hope to see you some more. Take care.

Sal

Sal (and JDH, Chance Ratcliff, et al.),

In #35 I made several objections to design detection that should sound familiar.

Imperfect design is a non-starter. Even though on closer inspection I put a 0 instead of a 1 on the lower end of the first C, making the two C’s different, the design is still clearly recognizable. We will come back to this.

Bad or malevolent design is likewise a non-starter. Even if one can prove malevolent design, a bad designer is still a designer. That would just mean that a theology of “God’s in Heaven, and all’s right with the world” is poor theology. But then, that’s neither Job’s theology nor Jesus’ theology.

The idea that any one sequence is just as likely as any other sequence, and therefore Sequence B is just as likely as Sequence A, is likewise a non-starter. If Sequence B can be shown to be in a small subset of sequences which are very unlikely to be chance arrangements, then that argument fails.

How many different ways are there to create essentially a 210-bit bitmap of the word CHANCE? There are probably about 32 different ways to do so. Let’s say I am wrong and it is really closer to 1024 different ways. If one allows for a single error, one can have 210 different errors on each of these 1024 ways, which means we are talking about roughly 2^18 different bitmaps for CHANCE. If we have perhaps a million, or 2^20, words to choose from in English, and perhaps a million languages, then we have perhaps 2^58 sequences that would look this good in bitmap. Let’s say that we have 1,000 times as many nearly perfect pictures as we have words (a word is equivalent to 1000 pictures 🙂 ), then we are looking at 2^68 special sequences. All heads is just one of those sequences.

That sounds like a lot, until you consider that there are 2^210 total sequences, so that the ratio of (that) special sequences to more usual sequences is 1 to 2^142, or roughly 6 x 10^42.
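Checking that arithmetic in Python's integer arithmetic (2^68 is the generous estimate above, not a computed count):

```python
special = 2 ** 68    # generous estimate of recognizable 210-bit bitmaps
total = 2 ** 210     # all possible 210-bit sequences
ratio = total // special
print(ratio == 2 ** 142)   # True
print(float(ratio))        # ~5.57e42, i.e. roughly 6 x 10^42 to 1
```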

That’s why the impression is so strong that the other arguments about this not being designed seem so totally irrelevant. Notice that if I underestimated the obviously special sequences by a factor of a million, it makes virtually no difference in the final argument. Dembski’s universal probability bound is not a sacred number. It gives up way more ground than necessary. Dawkins’s 10^50 is even being too generous for events on earth proper.

It is important to note that this completely destroys the objection about not pre-specifying the word CHANCE. If the sequence is functional in painting a picture, especially a word-picture, with minimal errors, one can be virtually certain that it is not a random sequence, regardless of whether the particular word was specified in advance.

It is also interesting that knowing the sequence was intelligently designed tells us nothing about the method of producing the pattern. I could have typed it out by hand, cut and pasted some of it (the first and last rows are identical), written a computer program to print out the sequence, or had my secretary (if I had a secretary) write the letters, and scanned them into a bitmap that was then transcribed into 1’s and 0’s.

You can say a little more about the designer of the sequence. Obviously the designer knows about Roman letters, and probably knows English, or at least knows someone who knows. But that’s more than you can say about how he did it.

This sequence is more obviously designed than the all heads sequence, because that sequence can be made by a simple law, whereas this one cannot.

But Sal, I think that you should not give up yet on the idea that there are objective reasons for our subjective identification of design. The other side has not properly done the math.

Cantor (#76), you did get the right answer without seeing the pattern, as did Sal. But seeing the pattern vastly increases the strength of the conviction that the pattern is not due to chance (pun intended).
