SSDD – Same Stuff, Different Darwinist. This time someone said at The Skeptical Zone:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins,

So if someone has 500 fair coins, and he finds them all heads, that is consistent with expected physical outcomes of random flips? 😯 I don’t think so!

Correct me if I’m wrong, but if you have 500 fair coins, the expectation is that 250 coins will be heads, not 500. Now if 261 of the 500 coins are heads, that is still within a standard deviation of expectation, and thus would still be a reasonable outcome of a random process. But 500 heads out of 500 fair coins? No way!

Given:

p = probability of heads: 0.5

n = number of coins: 500

Then the standard deviation for the binomial distribution is:

sigma = sqrt( n * p * (1 - p) ) = sqrt( 500 * 0.5 * 0.5 ) = sqrt(125) ≈ 11.2

So 261 coins heads is (261 - 250)/11 = 1 standard deviation (1 sigma) from the expectation of a purely *random* process of coin flips.

So 272 coins heads is (272 - 250)/11 = 2 standard deviations (2 sigma) from the expectation of a purely *random* process of coin flips.

….

So 500 coins heads is (500 - 250)/11 ≈ 22 standard deviations (22 sigma) from expectation! These numbers are so extreme, it’s probably inappropriate to even use the normal distribution’s approximation of the binomial distribution, and hence “22 sigma” just becomes a figure of speech in this extreme case…
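For readers who want to check the arithmetic, the z-scores above can be reproduced with a short Python sketch (the “11” used in the post is the rounded sigma ≈ 11.18):

```python
from math import sqrt

# Binomial parameters used in the post: 500 fair coins, P(heads) = 0.5
n, p = 500, 0.5

mean = n * p                   # expected number of heads: 250
sigma = sqrt(n * p * (1 - p))  # standard deviation: ~11.18

# Sigmas from expectation for the head counts discussed above
for heads in (261, 272, 500):
    z = (heads - mean) / sigma
    print(f"{heads} heads -> {z:.1f} sigma")
```

With the unrounded sigma, 500 heads works out to about 22.4 sigma, which matches the post’s “22 sigma” figure of speech.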

There are many configurations with 250 coins heads. The number is:

C(500, 250) = 500! / (250! * 250!) ≈ 1.17 x 10^149

Thus there are many coin configurations consistent with the expectation of 250 coins heads (50%), whereas there is only 1 configuration of all heads, the outcome least consistent with fair coins.
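The count can be verified with the standard library (a sketch; `math.comb` requires Python 3.8+):

```python
from math import comb

n = 500

ways_250_heads = comb(n, 250)  # orderings with exactly 250 heads
ways_all_heads = 1             # only one all-heads configuration

# ~1.17 x 10^149 distinct configurations at exactly 50% heads
print(f"C(500, 250) = {ways_250_heads:.3e}")
print(f"ratio to all-heads: {ways_250_heads // ways_all_heads}")
```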

Bottom line, the critic at The Skeptical Zone is incorrect. His statement reflects a determination to disagree with my reasonable claim that 500 fair coins all heads is inconsistent with a random physical outcome.

SSDD.

Sal

I am not quite sure what the comment was getting at, but remember that every specific sequence of coins has exactly the same probability of arising given a fair coin. Why is a sequence of 500 heads more inconsistent with a fair coin than a sequence that is a jumble of heads and tails? There are of course many more sequences that have roughly the same number of heads and tails than there are sequences that are all heads. But why does the fact that a sequence belongs to the larger class of sequences with roughly 50% heads make it less surprising than a sequence that is all heads?

There is an answer to this question but it involves Bayesian thinking which is anathema to the ID community.

Mark is correct. Any particular sequence of heads and tails has the same 2^-500 probability of occurring.

The distribution of the outcomes addresses the number of kinds of outcomes not the probability of any given one.

If I got 500 heads in a row I’d be very surprised and suspicious. I might even get the coin checked. But it could happen. You might not ever see that particular outcome even if you flipped coins your whole life. But you might not ever get HTHTHTHTHT . . . . either. Or THTHTHTHTH . . . . Or HHTTHHTTHHTT . . . . (and those three sequences are at 0 sigma). Or any other particular sequence. If you write down a sequence of 500 Hs and Ts and start flipping coins you will probably not get that particular sequence in your lifetime.

I’ll add my agreement with Mark.

One has to distinguish between a sequence and a combination. For two coins,

HH

HT

TH

TT

are equally probable sequences. But if one only looks at the combination, ignoring the order, then mixed (HT or TH) is twice as likely as all heads (HH), because it counts two distinct sequences.

The issue then is whether eigenstate (the TSZ responder) was correct in taking your post to be about a sequence of events, where the order (sequencing) does matter.

Thank you, gentlemen, for your comments, but you could for once agree I was right. I am in good agreement with operational practice: we would say 500 coins heads is so inconsistent with fair coins that you would reject the chance hypothesis if you saw it.

The way I used the math is not at all idiosyncratic, and in fact is based on standard practice.

For various reasons in operational practice we would group together strings that are consistent with 50% heads.

A similar issue arose when people were able to beat dice games in the 1970s by intelligently designing their throwing technique. This resulted in the passing of a law in Nevada to prevent such non-random throws. See:

Couple Accused of Dice Sliding in Wynn Las Vegas

It is unsurprising unless the sequence has statistics inconsistent with the chance hypothesis or is specified by a recognizable pattern. Examples:

1. H T H T H T……

2. Champernowne sequence

3. identical to another set of coins or record of coins that we are aware of (very similar issue to the FBI case I mentioned here: Coordinated Complexity, the key to refuting postdiction and single target objections).

But that is an aside. I must admit, I’m rather stunned at the reluctance to reject the chance hypothesis for 500 coins heads. Even though chance is formally possible as an explanation, operationally speaking, would any of you personally accept it? Just curious…

Thanks again for coming across the aisle from The Skeptical Zone. I know we all have intense disagreements, and thank you for keeping the discussion civil…

Mark Frank:

LoL! Mark, if you call the sequence ahead of time and then toss the coin and get a match, then that would be highly improbable given a chance-only scenario.

Unbelievable how freaking dull these anti-design people are.

So why won’t you agree with me? I provided a similar analysis above with the combinations of 500 coins that are 50% heads and showed that they are 1.17 x 10^149 times more likely than all coins heads. You only provided the case for a 2-coins set and I provided one for a 500-coin set, which in principle could be extended to an N-coin set. You could have pointed out you’re just agreeing with what I laid out with C(n,r) 🙄

I’m practically quoting standard practice in statistics and discrete math, yet you guys can’t be forthcoming and say, “I agree with Sal.” Nothing I said in the OP is outside of reasonable practice.

Anyway, thank you for commenting, but it seems even the appearance of agreement with a creationist on any non-trivial topic (even textbook math) is avoided by Darwinists….

“Thank you, gentlemen, for your comments, but you could for once agree I was right. I am in good agreement with operational practice: we would say 500 coins heads is so inconsistent with fair coins that you would reject the chance hypothesis if you saw it.”

I would NOT say it was ‘inconsistent with fair coins’. I would agree it was highly unlikely. As far as rejecting the chance hypothesis . . . what is your hypothesis exactly? Do you mean a ‘fair coin’ hypothesis? And how are you going to test your alternative hypothesis? It matters. If each coin flip is a data point, then 500 such data points is pretty good. If the whole sequence of 500 flips is a single ‘data point’, then I’d say one point is not enough to reject. You must be specific in your statements. You kind of imply that it’s the whole sequence you’re interested in.

“But that is an aside. I must admit, I’m rather stunned at the reluctance to reject the chance hypothesis for 500 coins heads. Even though chance is formally possible as an explanation, operationally speaking, would any of you personally accept it? Just curious…”

As I said, if I got 500 heads in a row I’d probably check to see if the coin was really fair. I’d also check the flipping technique. All part of making sure the procedure really was ‘fair’. AND, if my data points were the individual flips, then it would depend on the p-value I picked, etc. But checking the probabilities of each flip is different from checking the probability of a sequence of flips.

“So why won’t you agree with me? I provided a similar analysis above with the combinations of 500 coins that are 50% heads ignoring order and showed that they are 1.17 x 10^149 times more likely than all coins heads. You only provided the case for a 2-coins set and I provided one for a 500-coin set, which in principle could be extended to an N-coin set. You could have pointed out you’re just agreeing with what I laid out with C(n,r) ”

We all agree that it’s incredibly more likely that you’ll get 250 heads in 500 throws when you don’t specify the order. All we are saying is that any particular sequence is equally unlikely and that 500 heads is just one of those particular sequences.

The probability of getting EXACTLY 250 heads and 250 tails in any order is fairly small. Calculate it out.

Sal

If it helps, I would agree that if I tossed a coin 500 times and it came down heads every time, then there is something unfair about the tossing mechanism. I don’t think that was the commentator’s point; the comma indicates the quote continued. But there is such a mass of comments there I cannot be bothered to sort out what everyone was saying.

What is far more interesting and relevant is why do we reject the fair coin hypothesis in this case or indeed a large range of other interesting strings – some with 50% heads? It clearly is not because this particular string is more improbable than other strings – they are all equally improbable. Nor can it be that other strings belong to larger classes of strings – the all heads string belongs to the very large class of strings with more than 260 heads.

Jerad,

Specify a pattern and then flip the coin to try to get a match.

It’s called “matching a prespecification”. And it is pretty fundamental to design detection.

Jerad,

First of all, I did not read the original post, but I don’t need to in order to critique the utter foolishness of your argument.

If you consider 1/(1.17 x 10^149) merely “highly unlikely” rather than effectively impossible, you don’t seem to really know a thing about numbers and what an exponent means. For instance, even in 1 billion trials, something with a 1/(1.17 x 10^149) chance of occurring would still be wildly improbable.

Please do not write drivel. Something that has a probability of 1/(1.17 x 10^149) of happening is impossible. If it happened, then you do not have a fair coin. There is no other reasonable opinion. The fact that you don’t understand this makes me wonder how you reason.

Jerad – And I perfectly understand the idea that every specifically ordered sequence is unique and has an equal chance of occurring. So don’t pretend that I don’t. But we can never claim that anything with a 1/(1.17 x 10^149) chance of occurring ever actually occurred by chance. It is not reasonable to do so.

JDH,

The probability of getting exactly 250 heads is the number of ways that can happen divided by the number of possible outcomes which is 2^500.

So (1.17 x 10^149) / (2^500) is the correct probability. I figured it’s about 0.035. About 3.5 times out of 100.

I don’t even know who you are. I was just responding to Sal’s post. I wasn’t pretending anything. I’m sorry if my use of actual quote marks instead of the blockquote html tag confused the issue. Perhaps you were responding to things other people said? Maybe try reading the thread?

About 3.57% according to the binomial distribution:

See:

http://stattrek.com/online-cal.....omial.aspx

enter

probability: 0.5

number of trials : 500

number of successes : 250
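The same number can be checked offline with the standard library instead of the online calculator (a sketch):

```python
from math import comb

n = 500
# P(exactly 250 heads) = C(500, 250) / 2^500
p_exact = comb(n, 250) / 2**n
print(f"P(exactly 250 heads in 500 flips) = {p_exact:.4f}")  # about 0.0357
```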

About 68% of a population is within one sigma of the mean, 95% within 2 sigma, and 99.7% within 3 sigma, etc.

22 sigma is such an extreme case, it is only a figure of speech since the binomial distribution is not well approximated by the normal distribution in such an extreme case.

nor the lifetime of the universe.

scordova:

The disagreement is not about the analysis of combinations. It is about whether the issue was combinations or ordered sequences.

The reason why 500 straight heads would raise eyebrows, while most other results, equally improbable, would not, is easy: because “all heads” is simply describable, and most others are not (many would be describable only in 500 bits, by actually listing the result). If we flip n fair coins, the probability that the result can be described in m bits, since there are at most 2^m such results, is less than 2^m/2^n. So if you flip a billion coins and get “all heads”, or “(only) all prime-numbered coins are heads”, you would rightly be surprised and suspect something other than chance.
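The bound in the comment above can be played with numerically. A minimal sketch (the cutoffs m = 10, 50, 100 are arbitrary illustration values, not from the comment):

```python
from fractions import Fraction

n = 500  # number of fair coin flips

# At most 2^m outcomes have an m-bit description, so the probability that
# a random n-flip result falls in that describable set is below 2^m / 2^n.
for m in (10, 50, 100):
    bound = Fraction(2**m, 2**n)
    print(f"m = {m:3d}: P(outcome describable in m bits) < {float(bound):.3e}")
```

Exact rational arithmetic is used because 2^-490 and friends are far too small for naive floating-point intermediate steps.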

If the TSZ writer is correct, why do Darwinists need Darwinism? Pure chance is sufficient to explain human beings, the probability of the exact arrangement of atoms in a human is no more improbable than the exact arrangement of atoms in a given pile of rubble.

Sal, you simply missed eigenstate’s point, which was fairly trivial. I think you just mistyped. You wrote:

He wrote:

All he’s saying is that

given the coins are fair and that they were flipped by the normal “physics of fair coins”, then the outcome is perfectly consistent with both fairness and physics. However, if the fairness of the coins is not a given, or the physics of the flipping is not a given, then you would have reason to suspect unfair coins, or unfair tossing, respectively.

It was just the way you phrased it. I presume you meant to say something like:

All heads is a prespecification.

One may attribute it to the convenience descriptions that we humans use to do statistics and science. One may even argue it is purely subjective that we concoct notions of expectation values in order to make inferences about the universe tractable to our brains.

To paraphrase Laplace on Probabilities

We have the sort of math we do because of our human uncertainty of outcomes. Hence we think of the world in terms of expectation values since we have uncertainty…

So let us even suppose, for the sake of argument, that operational practice in statistics and physics is a matter of intellectual convenience in a world of uncertainties. This does not negate the fact that when intelligent agents act in a way that violates our convenience descriptions of what we deem chance processes, we will reasonably (not absolutely) infer a human-like intelligence is in operation, or at least something that looks like a human-like intelligence.

And if I may make a nuanced distinction: I was not talking about 500 sequential flips of one coin; I was talking about a set of 500 fair coins.

If we found 500 fair coins all heads we would reasonably (not absolutely)

1. reject the chance hypothesis

2. accept some mechanism was responsible for the 22-sigma deviation from expectation value

Because we know human designers are capable of directly or indirectly making all the coins heads, we could reasonably infer a human-like intelligence made the coins heads, since such intelligent designers (humans) have sufficient capability to do so.

I respect that some will not be willing to extrapolate such reasoning to designs in life since the Intelligent Designer of life is not seen in operation today. That’s a respectable position, but one I don’t share personally.

I have gone through a lot of trouble to suggest, on scientific grounds alone, one such Intelligent Designer can be postulated:

Quantum Enigma of Consciousness and the Identity of the Designer.

On scientific grounds this would be viewed as speculative. Some would say it’s not even science. I respect that.

But life to me doesn’t seem to accord with a chance hypothesis, it violates mathematical expectation from what we know of chemistry and physics. In principle, our understanding of physics and chemistry could change, but I personally don’t think that will happen.

Thank you for sharing your thoughts, and it is nice to see you here at UD.

“Pure chance is sufficient to explain human beings, the probability of the exact arrangement of atoms in a human is no more improbable than the exact arrangement of atoms in a given pile of rubble.”

Is that Granville Sewell’s definition of ‘probabilism’?

OR that a human didn’t flip the coins in the 500-coin set randomly, but rather arranged them intelligently. That inference is reasonable (not absolute), especially given we know the capabilities of human intelligent designers.

I respect that many will not be willing to make such extrapolations to the designs of life, but I’m astonished that eigenstate would object to my claim of “all coins heads” as inconsistent with random processes on fair coins.

I’ve shown in this essay, I was correct in saying “all coins heads” is inconsistent with the physics of fair coins from the standpoint of probability and statistics as used in operational practice, especially the notion of expectation values and deviations from expectation.

I appreciate the TSZ crowd joining in the conversation, but I have to point out, it seems evolutionists really don’t want to even give the appearance of agreeing with a creationist, even on uncontroversial matters.

I could have said, “finding 500 coins heads violates the expectation of fair coins, therefore finding 500 fair coins all heads is not reasonably the result of chance.” And that should have been the end of it, but because I’m a creationist, the point has to be belabored….

That’s ok, it gives us a chance to talk about math in detail, and that is a good thing in and of itself.

Agreed. However, my point is we can in special cases bypass the discussion of specification in rejecting the chance hypothesis.

I did not have to invoke CSI to make a case for design in certain statistical situations…

Elizabeth @17 said:

But Elizabeth you are wrong. Let’s break down each statement.

1. Coins are fair. ( This is consistent with the “physics of fair coins”. )

2. Flipping a fair coin creates some sequence of heads and tails ( This is consistent with the “physics of fair coins”. )

3. A human being can read that sequence and classify the number of heads and the number of tails, and write it down in a different representation ( “HTHH..”) ( This is consistent with the “physics of fair coins”. )

4. The set of flips matches exactly any predetermined sequence. ( This is NOT consistent with the “physics of fair coins”. )

I can only conclude that the reason you don’t see the difference between statements 1-3 and statement 4 is because you are willingly closing your eyes and ears to reasonable, logical argument.

No, the conclusion you should draw is that the point is so trivial, people are missing it!

Clearly the chances of getting a predetermined (written down in advance, for instance) sequence from a series of coin flips is so infinitesimal, whatever the sequence (and, as Barry rightly says, no one sequence is any more probable than any other), that were we to see it done, we’d rightly assume some kind of skulduggery.

Sal said, or meant, something slightly more interesting, which is that even if we hadn’t predetermined the sequences, if we were to observe one of a tiny class of sequences that is easily described (all heads, all tails, alternating heads and tails, runs of prime numbers of heads, whatever), we’d suspect skulduggery.

You agree, I agree, Sal agrees, Barry agrees, Jerad agrees (I think), eigenstate agrees, we all agree. All heads, all tails, regular stripes, whatever, would make us go count the spoons. There’s something weird with the coin, or there’s something weird with the toss.

Unfortunately Sal worded it confusingly. He told us that the coin WAS fair, and seemed to imply that they WERE flipped by fair means.

And that the result was all heads. Clearly that can’t mean skulduggery, because Sal’s already told us there wasn’t any. And as all-heads is perfectly consistent with fair coins and fair tossing, there’s no reason to invoke design.

On the other hand, if one of the coins turned into a flamingo and walked away, well, then we’d have to seriously consider that the laws of physics had been violated!

Honestly, that’s all the issue was. 🙂

Schnapps?

or, wait…

Is it possible that some people really think that it is physically impossible (“against the laws of physics”) for a series of 500 tosses to come down 500 heads?

That would be interesting.

Jerad agrees of course.

I did not say that. I said if 500 fair coins were found to be all heads, we would reject the hypothesis that random coin tossing was the means that created the pattern.

But in the next iteration I’ll try to make it even more clear. Suppose you opened a box containing the coins, and all 500 of them were in the heads state…

Thank you for the criticism, and I’ll change my wording in response to your criticism.

Thanks to all for reading and commenting on my thread.

Neil Rickert:

No, according to Elizabeth, you have to know the distribution before you can assign a probability.

Yes, she really said that.

Sal,

The point of your OP was to disagree with this statement by eigenstate:

Even ignoring the fact that you quotemined him, his statement is correct as it stands. All heads is perfectly consistent with the physics of fair coins, as is any other specific sequence.

The reason you would find an all-heads outcome to be suspicious is not because it is inherently improbable, but because it is both significant and improbable.

See this comment on the other thread for details.

Since eigenstate was correct all along, I think you should append a notice to your OP stating that he was correct and that your challenge was mistaken.

No, it is not, unless of course you figure a 22-sigma deviation from expectation is “perfectly consistent” with the expectation of fair coins.

You’re assessing probabilities with respect to other sequences, but in practice, one also assesses probabilities with respect to expectation.

With eigenstate’s approach, the chance hypothesis is never rejectable in principle! And that is completely in opposition to operational practices where deviations from expectation count for something.

If you stand by eigenstate’s comments, then on what grounds will you ever reject the chance hypothesis short of you seeing someone rigging an apparatus, etc.? Answer: NEVER, because in eigenstate’s world, what matters to him is every sequence is just as probable as the next, whereas in operational practice, deviations from expectation value count for something.

Sal,

Follow the link in my last comment. I’ve explained it on the other thread.

keiths:

A “fair” coin with a “heads” on each side is perfectly consistent with “the physics of fair coins.”

A “fair” coin with a “tails” on each side is perfectly consistent with “the physics of fair coins.”

A “fair” coin with three sides with “blech” on each side is perfectly consistent with “the physics of fair coins.”

thanks keiths

No dice, KeithS. You’ll have to assail the calculation of deviation from expectation I provided, and you know you aren’t going to do that, since those are textbook derivations. You and eigenstate are the ones promoting an idiosyncratic interpretation of statistics.

You’re the guys who’ll have to admit error and stop trying to save face.

Sal,

If you flip a fair coin 500 times, there are 2^500 possible sequences. Each specific sequence is possible, and each specific sequence is equally (im)probable. They are all equally consistent with the physics of fair coins.

If any of them were inconsistent with the physics of fair coins, it would mean that they could not happen. But that’s ridiculous. Sure, a sequence of all heads is improbable, as is any other specific sequence. But impossible? No way. The probability of getting all heads is small but nonzero — just like every other sequence.

Eigenstate was correct:

He was correct because the all-heads sequence, like every other specific sequence, is equally consistent with the physics of fair coins. After all, that’s what it means for the coins to be fair.

What, KeithS, no refutation of the calculations I provided above? You think a 22-sigma deviation from the mean is “consistent” with expected behavior? 🙄

Sal,

Of course it is more likely for a 500-flip sequence to contain some combination of 250 heads versus all 500 heads. Both eigenstate and I have said exactly that: link, link

But absolutely nothing about that contradicts what eigenstate said in the quote you are disputing. Here’s what we get if we undo your quotemine (the portions you cut are highlighted in bold):

Eigenstate is correct. Every possible sequence is equally probable and therefore equally consistent with the physics of fair coins.

You were wrong to challenge him (and also wrong to quotemine him) and it is only appropriate that you acknowledge this in an appendix to your OP.

Lizzie, KeithS, eigenstate, Jerad, and anyone else “listening”: I will tell you what so bothers me about this whole thread.

I believe firmly that God has made it obvious that He designed the world. I think there is abundant evidence for this everywhere.

What I would like each one of you to consider is how you find evidence for the presence of a superior intellect. Can you do a “controlled” experiment? No way. You can’t do a controlled experiment to prove the existence of a superior intelligence. This should be obvious, because you don’t control the superior intelligence. You can’t order Him to do something; you can only observe what He supposedly does.

This is the fallacy of trying to design a controlled experiment about prayer. If God exists He can choose to have the answer to the experiment be anything He wants. But why should He subject himself to the experimenters? He is God. He intervenes when He wants to, in a way that is consistent with His agenda, not ours.

The only way to scientifically determine if there is a superior intelligence is to investigate what you suppose He has done (like create the world) and see if it shows design. While I see design all over the place (and according to his own quotes Mr. Dawkins at least sees the appearance of it), the real stickler is how I can get others to admit that the acknowledged appearance of design is because it was designed. I can only show that it is improbable that a natural event caused the apparent design.

And here is the real problem. I don’t think any of the evidences I could present ( e.g. fine-tuning, consciousness, free-will ) are as understandable as a simple binary event done “n” times.

If 500 heads in a row of a fair coin would not convince you that “design” is taking place, then the debate is over. It’s not because I cannot come up with evidence, it’s that you will not believe any evidence that I bring forth.

That is untrue. Set up a proper hypothesis testing trial and then you WILL get a rejection of a null hypothesis with a defined confidence interval.

Sal, you’ve already shot your argument in the foot. The only way for all 2^500 possible sequences of 500 Hs and Ts to be equally likely is for them all to be consistent with a fair coin being flipped.

If 500 heads in a row convinces you of some kind of divinity does that mean if you never see it you won’t believe?

And one thing else I want people to recall.

Many of the laws of physics (at least the ones we can observe at the macroscopic level) are probabilistic in nature: the Second Law of Thermodynamics, diffusion, resistance, radioactivity, etc. These are all well-established laws which work because the probability of observing an exception is essentially nil.

In my opinion your insistence that there is a small but finite probability that 500 heads in a row could happen is not clarification of a mathematical curiosity; it is a misunderstanding of physics and how it works.

Hi JDH,

See my comment here.

Jerad,

I hope I don’t sound condescending in my reply. I do have respect for all of my opponents’ intellects; I just think they believe things that are foolish. I hope this belief of mine does not make my writing offensive.

Anyway – here is where understanding the terms necessary and sufficient is really useful.

500 heads in a row of a proven fair coin is certainly a sufficient proof of divine intervention (i.e. P => Q). It is not a necessary condition (i.e. !P does not => !Q).

In other words:

I would choose to believe divine intervention has taken place if 500 heads of a fair coin were flipped in a row.

I would not choose to believe in divine intervention if and only if 500 heads of a fair coin were flipped in a row.

Is that clear?

Keiths,

In another thread, keiths said or quoted,

Since when is Darwinian evolution a non-chance mechanism? This is what I believe is the most foolish thing about Darwinism. It really believes that you can get purpose out of non-purpose, directed out of unguided.

Natural selection is not a magic potion that builds design out of non-design. I firmly believe it is impossible to generate purpose and information from random steps. The best you can do with random steps is a fluctuation from the mean. You can’t have direction come about from unguided random steps. It is mathematically impossible. If you think it is possible, please prove it to me with clear mathematics. Not with hand-waving and assertion.

Oh and don’t point to self organization. Self-organization is an information destroying process, not an information increasing process.

What a very peculiar conversation!

Mung:

Would you like to link to just where I said that, Mung?

What I have said, and will say again, is that unless you know the probability distribution of your data, you can’t assign a probability to any one observation. Do you disagree? I assume not!

Sal:

Yes, of course it is. Every single sequence you toss has a 1/2^500 probability of being tossed. You know this, and so does Barry (he says so in the OP). We all know this.

This argument is about zilch! None of us even disagrees (except possibly Granville).

All toss sequences are equally consistent with the laws of physics, and all are equally improbable. However, some classes of sequence are less probable than others, so if you get a member of a rare class you may be rather surprised and, rightly, suspect skulduggery.

But you should be neither more nor less suspicious if you get all heads than if you get alternating heads and tails, or a sequence of prime numbers in binary, or the sequence you wrote down beforehand (including the one I just posted).

The point is that what is extremely unlikely is a member of the tiny set of sequences that you consider “special” – you are much more likely to get a member of the much larger class of sequences that are not “special”.

So while every single sequence has a 1/2^500 probability of being tossed, the probability of seeing one of the class of “special” sequences is (size of the special set)/2^500, while the probability of seeing some other sequence is (2^500 - size of the special set)/2^500.

JDH:

I absolutely agree.

Sal

Your only error, Sal, and I think it was just mistyping, was to imply that a priori we know that the coin was fair and that it had been normally tossed. In other words, you inadvertently eliminated design a priori, leaving fluke. However, that probably wasn’t what you meant. Either you meant that the coin was fair but the tossing was dodgy, or the tossing was fair but the coin was dodgy. In which case, getting 500 heads would certainly lead you to conclude “dodgy”, with a confidence of some 22 sigma. But not, obviously, if you knew for a fact that it was all kosher.

The whole point of null hypothesis testing is not that it tells you your observation is “impossible” under your null, but that it is Extremely Unlikely.

Setting your rejection region at 22 sigma doesn’t suddenly turn “Unlikely” into “Impossible”, simply because 2^500 is deemed to be the maximum number of events in the universe.

Or do people think that it does?

If it did, no sequence would ever be tossed, because every single sequence, as Barry says, has a probability of 1/2^500!

As I said, we all pretty much agree on this.
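That framing can be sketched as an explicit one-sided test (`p_value_at_least` is a hypothetical helper name; exact rational arithmetic avoids floating-point underflow):

```python
from fractions import Fraction
from math import comb

def p_value_at_least(k, n):
    """One-sided p-value: P(at least k heads in n fair flips)."""
    favorable = sum(comb(n, i) for i in range(k, n + 1))
    return Fraction(favorable, 2**n)

# 500 heads out of 500: extremely unlikely under the fair-coin null,
# but the p-value is tiny and nonzero, never literally zero.
p = p_value_at_least(500, 500)
print(float(p))  # 2^-500, roughly 3e-151
```

The null is rejected because the p-value falls far inside any sane rejection region, not because the outcome is impossible.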

I understand necessary and sufficient conditions. And ‘if and only if’ statements.

You are allowed to find a divinity where you will of course. But there is no mathematical problem with getting 500 heads in a row even though it is exceedingly unlikely.

I implied we know a priori that the coin is fair; I did not say the tossing was normal, or even that tossing was used to make the pattern. All I said was that the coins were discovered in the all-heads state and that the pattern is INCONSISTENT with a process of random tossing (the implication being that a process other than random tossing is responsible for the pattern).

I said in

http://www.uncommondescent.com.....-csi-v2-0/

Did I say 1 coin tossed 500 times? No. Did I mention that we saw a tossing process? No. I stated “if we saw 500 fair coins” — that means 500 coins that we observe in an observable state, not 1 coin flipped 500 times.

So next time I’ll just say we open a box and find 500 fair coins in the all-heads state, just to emphasize that we didn’t see a tossing process.

Thank you anyway for your criticism; future iterations of this description will hopefully preclude the misinterpretations floating around.

I never said otherwise, but I did say:

So are you saying, KeithS, that 500 fair coins all heads is consistent with the binomial distribution? Yes or no?

Are you saying patterns that are 10 sigma or more from expectation are consistent with theory?
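The sigma figures in question follow directly from the binomial mean np and standard deviation sqrt(np(1−p)); a quick sketch:

```python
import math

n, p = 500, 0.5
mean = n * p                        # expected heads: 250
sigma = math.sqrt(n * p * (1 - p))  # binomial SD: sqrt(125) ≈ 11.18

def z(heads):
    """Distance of an observed head count from expectation, in sigmas."""
    return (heads - mean) / sigma

print(round(z(261), 2))  # 0.98  -> ~1 sigma: unremarkable
print(round(z(272), 2))  # 1.97  -> ~2 sigma: still plausible
print(round(z(500), 2))  # 22.36 -> wildly inconsistent with fair flipping
```

As the OP notes, at 22 sigma the normal approximation to the binomial has long since broken down, so the number is best read as a figure of speech for "absurdly far out in the tail."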

True, but that’s not the point I was making or contesting, and it rebuts an argument I didn’t make. I said nothing about the sequences being or not being equiprobable (a false insinuation by you and eigenstate). I said:

If all sequences are equiprobable, is there any sequence which, based on the sequence alone, lets you reject the chance hypothesis? Yes, and it has nothing to do with its being equiprobable, but with its inconsistency with the expectation for fair coins (in your words, advance knowledge).

All sequences are equiprobable, but not all sequences are consistent with expectation plus or minus a few sigma. You and eigenstate are misreading, misconstruing, and misattributing arguments to me which I didn’t make.

I never said or implied that all heads is more probable than any other specific sequence. That is your and eigenstate’s false insinuation…

Eigenstate has to make a retraction:

NO, it is not. Equiprobability is not the criterion for rejection or acceptance of the chance hypothesis; it is the deviation from expectation.

If you had a theory that predicted an expectation value, and then you ran an experiment that yielded results 10 sigma from expectation, are you saying you wouldn’t find that disconcerting? That this wouldn’t raise alarm?

This is exactly what I was getting at when I said:

And now that I went into even more detail with the binomial distribution in the OP, you have no excuse to keep repeating your misreading and misrepresentations of what I said or implied.

Apparently eigenstate will have to live with that declaration from now on. 🙂

Sal,

Here are the facts:

1. You quotemined eigenstate in your OP.

2. You claimed that eigenstate’s statement was wrong, even including an emoticon to express your incredulity at his supposed error.

3. Here is eigenstate’s full statement — the one you chose to quote in the OP — with the parts you cut out highlighted in bold:

Eigenstate was correct. All sequences are equally probable, as even you now admit.

You were wrong to dispute his statement.

The error is entirely yours, and the responsibility for retracting your claim thus rests entirely with you.

F/N: I think it useful to clip the following from BA’s thread, as a point of reference:

_______________________

[Clipping 48 in the DDS mendacity thread, for record:]

>>It seems people have a major problem appreciating: (a) configuration spaces clustered into partitions of vastly unequal statistical weight, and (b) BLIND sampling/searching of populations under these circumstances.

It probably does not help, that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice.

It is all about needles and haystacks.

Let’s start with a version of an example I have used previously: a large plot of a Gaussian distribution on a sheet of bristol board or the like, backed by a sheet of bagasse board or the like. Mark it into 1-SD-wide stripes; say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board.

Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION. As a result, as we build up the set of dart-drops, we will see a pattern, where the likelihood of getting hit is proportionate to area, as should be obvious.

That immediately means that the bulk of the distribution, near the mean value peak, is far more likely to be hit than the far tails. For exactly the same reason, if one blindly reaches into a haystack and pulls a handful, one is going to have a hard time finding a needle in it.

The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial.

In short, a small sample of a very large space that is blindly taken, will by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones.
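The claim that blind samples overwhelmingly reflect the bulk is easy to illustrate numerically. A rough Monte Carlo sketch (the trial count and the 5-sigma cutoff are arbitrary choices for illustration):

```python
import random

rng = random.Random(1)
n, trials = 500, 10_000
cutoff = 56  # 5 sigma ≈ 5 * sqrt(500 * 0.25) ≈ 55.9 heads from the mean

extreme = 0  # blind samples landing 5+ sigma from the 250-head expectation
for _ in range(trials):
    heads = sum(rng.getrandbits(1) for _ in range(n))
    if abs(heads - 250) >= cutoff:
        extreme += 1

# P(|heads - 250| >= 56) is under one in a million, so essentially
# every one of the 10,000 blind samples lands in the bulk.
print(extreme)
```

Ten thousand blind draws and, with near certainty, not one reaches even 5 sigma, let alone the 22-sigma all-heads corner of the space.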

(BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.)

The point of Fisherian testing is that skirts are special zones and take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence of a suspicious sample not being by chance, based on its tendency to go for the far skirt.

How does this tie into the design inference?

By virtue of the analysis of config spaces — populations of possibilities for configurations — which can have W states, in which we then look at small, special, specific zones T. Those zones T are at the same time the sort of things that designers may want to target: clusters of configs that do interesting things, like spell out strings of at least 72 – 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 – 1,000], or the like.

500 bits takes up 2^500 possibilities, or 3.27*10^150.

1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities.
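The two magnitudes quoted here are easy to verify with a log-based check (direct conversion of 2^1000 to a double would overflow):

```python
import math

for bits in (500, 1000):
    exp10 = bits * math.log10(2)            # log10 of 2^bits
    mantissa = 10 ** (exp10 - int(exp10))   # leading digits
    print(f"2^{bits} ≈ {mantissa:.2f} * 10^{int(exp10)}")
# 2^500  ≈ 3.27 * 10^150
# 2^1000 ≈ 1.07 * 10^301
```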

To give an idea of just how large these numbers are, I took up the former limit, and considered that our solar system’s 10^57 atoms (by far and away mostly H and He in the sun, but never mind) can, over its lifespan, go through a certain number of ionic chemical reaction time states, each taking 10^-14 s. Our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. (Now you know why warp drives, space folding, etc. are so prominent in sci-fi literature.)

Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s, for a reasonable estimate of the solar system’s lifespan.

Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy.
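To put a rough number on the straw-to-haystack comparison: even on generous assumptions (the 10^17-second lifespan is my own round figure, roughly ten billion years, not a value from the text), the fraction of the 500-coin config space such a search could sample is vanishing:

```python
ATOMS      = 10 ** 57   # atoms in the solar system (order of magnitude)
RATE       = 10 ** 14   # observations per atom per second (one per 10^-14 s)
LIFESPAN_S = 10 ** 17   # assumed solar-system lifespan in seconds (~10 Gyr)

observations = ATOMS * RATE * LIFESPAN_S   # 10^88 total blind samples
space        = 2 ** 500                    # ≈ 3.27 * 10^150 configurations

# Fraction of the space that can be sampled: roughly 3 * 10^-63.
print(observations / space)
```

Exhausting every atom-level observation the solar system can muster still touches only about one part in 10^62 of the haystack.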

Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it etc.

Notice, there is no particular shortage of special zones here, just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw.

Now, your task, should you choose to accept it, is to take a one-straw-sized blind sample of the whole.

Intuition, backed up by sampling theory — without need to worry over making debatable probability calculations — will tell us the result, straight off. By overwhelming likelihood, we would sample only straw.

That is why the instinct that getting 500 H’s in a row or 500 T’s or alternating H’s and T’s or ASCII code for a 72 letter sequence in English, etc, is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound.

And this is a simple, toy example case of a design inference on FSCO/I as sign.

A very reliable inference indeed, as is backed up by literally billions of cases in point.

Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors.

Over and over and over again in fact.

And in fact, here is Wm A Dembski in NFL:

(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.)

Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread?

First, it cannot be because of lack of capacity to access and understand the actual argument, we are dealing with those with training in relevant disciplines.

Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years.

Nor is such a consistent error explicable by blind chance; chance would make them get it right some of the time, by any reasonable finding, given their background.

So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it.

Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion.

Mendacity, in one word. If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake.

The alignment is too perfect.

Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that.

Sad, but not surprising.

This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating.

Where, enough is enough.>>

______________

I trust that we can now proceed with a more reasonable approach.

G’day

KF

keiths:

If you really believe that then start flipping a coin and let us know when you have flipped 500 heads in a row.

Jerad:

If you really believe that then start flipping a coin and let us know when you have flipped 500 heads in a row.

See:

The Law of Large Numbers vs. KeithS, Eigenstate and my TSZ Critics

This coin toss stuff is a pretty good litmus test of rationality.

Anyone who says 500 heads in a row is just as probable as any other sequence of 500 coin tosses is right.

And they are completely missing the point. What is it about the 500 heads in a row that demands our attention, that demands an explanation? That is the key question here.

To say that “500 heads is just as probable as any other sequence” and think that this is somehow a refutation of the design inference is just as illogical as saying “every arrangement of stones is just as improbable as the next; therefore the Taj Mahal was not designed.”

There is a huge difference between the one and the other. Everyone knows it. A small child knows it. For a speaker to use “every outcome is just as improbable” to try and refute design serves two purposes: (i) it gives the speaker a rhetorical hook to latch onto in the attempt to avoid admitting any possibility of design, and (ii) it allows everyone else to see that the speaker is a fool.

Anyone who thinks that 500 heads is no different from any other sequence needs to take a break from posting, go on a couple of long walks, think about it carefully, and sincerely ask themselves the following question: “We know all the sequences are equally improbable. Yet we also know there is something unique or unusual or special about 500 heads (or the Taj Mahal compared to a pile of rubble, or whatever other reasonable example you want). Why is that? What is it that makes it unique? What aspect, or quality, or characteristic is at play?”

Once you have answered that question, you will be able to understand the design inference.

Eric,

A resolution of the ‘all-heads paradox’

keiths-

How are you defining “resolution”?

Joe,

To resolve a paradox is to explain it in a way that makes complete sense, but that Joe couldn’t follow even if his life depended on it.

I know how it is normally defined. I was asking how you are defining it.

The point being keiths couldn’t resolve a paradox if his life depended on it. And he cannot resolve that which he clearly doesn’t understand.

For example:

If I rolled my SSN I would be totally amazed. How many other people’s SSNs have digits greater than 6?

All heads is a pre-specification, keiths.

The probability of getting 1 head is 1/2. 2 heads in a row is 1/4. 3 heads in a row is 1/8. 10 heads is 1/1024. The probability of 500 heads in a row is 1/2^500.

I don’t know if that is a paradox, but it is what it is.

BTW, the odds of flipping a coin 500 times and getting some pattern is 1.
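The figures above are just the product rule for independent flips; exact rational arithmetic confirms each one:

```python
from fractions import Fraction

def p_heads_in_a_row(n):
    """Probability that n independent fair flips all come up heads."""
    return Fraction(1, 2) ** n

assert p_heads_in_a_row(1)   == Fraction(1, 2)
assert p_heads_in_a_row(2)   == Fraction(1, 4)
assert p_heads_in_a_row(3)   == Fraction(1, 8)
assert p_heads_in_a_row(10)  == Fraction(1, 1024)
assert p_heads_in_a_row(500) == Fraction(1, 2 ** 500)
print("all checks pass")
```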

I would not think that is a refutation of the design inference except in the case where it is used as a justification of the design inference. Nor would I equate 500 heads in a row with something that was clearly NOT created via a series of random events like the Taj Mahal.

IF a coin toss is fair, i.e. truly random, then each event is independent of the ones that came before and therefore there can be no conscious pattern or structure being created. Something like the Taj Mahal or the Pyramids or Stonehenge (all of which are much more complicated than a sequence of coin tosses) were clearly created in a sequence where every new stone was placed in a non-random process, in a particular relationship with all that had gone before. The raw materials have been displaced from their natural sources and have been worked in ways consistent with the working techniques known to be used by the intelligent agents known to be around at the time. In all those cases we have a good idea of how and who made them. And pretty good ideas of why as well.

keiths @53:

I think you have some good thoughts there. Perhaps we can pursue your example just one step further.

It isn’t just that it is my SSN. If my friend is over one evening and I ask him his SSN, and then I roll one-by-one each number in order to match his SSN, we know something is up. In other words, it doesn’t have to be meaningful to me; it just has to have some independent meaning apart from the roll itself. Same thing would be the case if, instead of an SSN, I just asked my friend to call out 9 numbers in a row and then I roll them one by one. We’d be suspicious, even though those numbers don’t have any particular “meaning.” The only “meaning” is that they were specified independent of the roll itself.

Now if I had a whole room full of people, it might be unlikely to roll someone’s SSN, but we would be slightly less surprised. If we had a stadium of people, we probably would think “that’s interesting,” but wouldn’t be terribly surprised. Finally, if there is no specification at all beforehand — I have the entire SSN database at my disposal — then not only is it not surprising if I roll someone’s number, I would expect to. In your example of Delbert, there was no number called beforehand, no individual named. I just roll and then look up whomever it happened to land on in the database. Not at all surprising that it landed on someone. Why? Because there is no specification; in other words, no independent meaning to the number.

In the case of a coin toss, there are a small number of obvious specific cases (i.e., specifications) that we can easily have in mind: all heads, all tails, some kind of repeating pattern (e.g., HTHT all the way), 250 heads followed by 250 tails, etc. Probably no more than a handful of specific scenarios (or specifications) that we would all readily recognize. They are already recognizable to us as having some independent significance beforehand. Thus, when we see them, we are surprised. And we should be just as surprised at seeing 500 heads as if we had asked someone to write down a series of H’s and T’s up to 500 and then we roll, and amazingly, roll the exact list they wrote on their paper.

So it all comes back to the specification. And that is the key to the coin toss example and the key to the design inference generally.

—–

Incidentally, I should add that I’m not sure all ID folks who use coin toss examples are quite as careful with their descriptions or setup of the scenario as they should be. That may be part of the disconnect we sometimes see. But I think an objective person can easily see that there is something “special” about particular outcomes and should spend some time thinking about the why.

Eric,

Thanks.

Yes, the numbers don’t matter; their meaning doesn’t matter; and even the fact that they belong to my set of “numbers that are meaningful to me for any reason or just because I say so” doesn’t really matter.

What matters is that they form a small set, so that getting one of those numbers is improbable under the assumption of fairness. In other words, a surprise.

But here the paradox rears its head again. As I put it in the OP at TSZ:

That last paragraph could use some improvement. I’m working on a better description of my resolution of the paradox, but it will have to wait until tomorrow.

I agree with both these points.

I’d also like to be clear that people don’t think that there is some arcane physical law that means that the more heads you’ve tossed, the more likely it is that the next one will be tails. It is not the “Laws of Physics” that make more equal ratios more probable than less equal ratios. I think that is a real danger inherent in using the word “probability” too loosely. As I constantly ask my students: “probability of what?” In the context of Fisherian inference and the binomial theorem, the reason sequences with fairly equal ratios are more “probable” is simply that there are more of them!

Same with sequences that are “special” – there are far more non-special sequences than special sequences, however we describe “special” – whether it’s your phone number, or stripes, or prime numbers, or even the sequence you last threw.
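The “there are simply more of them” claim is a statement about binomial coefficients, checkable directly:

```python
from math import comb

n = 500
near_even = comb(n, 250)   # sequences with exactly 250 heads: ~1.17 * 10^149
all_heads = comb(n, 500)   # sequences with 500 heads: exactly 1

# The 250-head class outnumbers the all-heads "class" by roughly
# 10^149 to 1, even though each individual sequence has the same
# 1/2^500 probability of being tossed.
print(near_even > 10 ** 148, all_heads == 1)  # True True
```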

I’d like, as a test question, to ask people this question:

I just threw (well, “threw” – I used Excel) this 500 toss sequence:

T H H T H T H T H T T T H H H T T T H T H H T H T T H H T H T T T T H H T T H H H H T T H H H T H T T H T H T T H T H H T T T T T T T H T T H T H T T T H T H H T H T H H H H H T H H T T H H H T T T T T T T H T T H T T T H T H H H H H T H T T T H T T T H H H T T T T T T T H T T H H H T H T H H T H H H H T H H T H T T T H H H T T T T T H T H T T T T T H T T T H H H T T H H T T H H H H H T T H H H T T H T T H T T T H H H H H T H H T T H H T H T H T H T T T H T T H T T T T T T T H H H T T T T T T T H T H T T H H H T T T H H T T H H T T H T T T H H H T T H T H T T H T T T H T H T H H H T H H T T T T H T H T H T H T T T T T T T H H H H T T T T T H H H T H H T T T T H H H T T H T T H T H T T H H T H H H T H T T T T H H H H T T T H H H H H T H H H T T T T T H H H T H T T T H H T T H T H H T H H H T H T H H T T T T T T H T T H H H H T H H H H H T H T H H T H T H T H T H T H T H H T H H T H H T T H T T T T H H T T H H T T H T T H H T H T H H H T H H T T T H T H T H T T T H H H H T H T H T H T H

There are 236 Heads and 264 Tails.

Is the probability that this sequence will ever be thrown again by anyone in the history of the universe:

A: Greater than the probability that someone will throw an all-heads sequence?

B: Less than the probability that someone will throw an all-heads sequence?

C: The same as the probability that someone will throw an all-heads sequence?

Answers to this question might at least reconnect some of the apparent disconnects!

(Or possibly reveal more fundamental differences than I think exist.)

Anyone?

I vote for C.

But, sadly, the attention seems to have moved on.

gpuccio might respond but I suspect Sal and Barry have moved to greener pastures.

#62

Lizzie – actually I think it is B. I imagine there are people out there working on ways to cleverly produce a sequence of 500 heads e.g. by special means of tossing the coin – no one is likely to be working on ways to produce the sequence you tossed.

You didn’t throw it, so the next time it is thrown will be the first.

Over on TSZ they are discussing the physics of it and you took that out. And then you act as if that’s OK.

But anyway, the odds of 500 heads in a row is 1 in 2^500. The odds of getting a pattern is 1.

The safe answer is that the odds of hitting your pattern are less than or equal to 1 in 2^500.

@Elizabeth:

How is the experiment performed? If you look into an ongoing sequence of coin tosses, the odds are 2:1 that your pattern of 236 Heads and 264 Tails will appear before a pattern of 500 Heads (see e.g. Li, S., 1980: A martingale approach to the study of occurrence of sequence patterns in repeated experiments. The Annals of Probability 8, 1171–1176), thus the answer would be B. If you perform only tests of 500 tosses, then the answer is C.

Isn’t the theory of probability marvelous?
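This waiting-time effect can't be simulated at length 500, but it already shows up with length-3 patterns: in an ongoing stream of flips, the self-overlapping pattern HHH takes longer on average to first appear (14 flips, a classical result) than HTH (10 flips), even though both have probability 1/8 in any given 3-flip window. A rough Monte Carlo sketch (the trial count is an arbitrary choice):

```python
import random

def first_occurrence(pattern, rng):
    """Number of fair flips until `pattern` first appears in the stream."""
    window, flips = "", 0
    while not window.endswith(pattern):
        # Flip once and keep only the last len(pattern) outcomes.
        window = (window + rng.choice("HT"))[-len(pattern):]
        flips += 1
    return flips

rng = random.Random(42)
trials = 20_000
mean_hhh = sum(first_occurrence("HHH", rng) for _ in range(trials)) / trials
mean_hth = sum(first_occurrence("HTH", rng) for _ in range(trials)) / trials

# Classical expected waiting times: 14 flips for HHH, 10 for HTH.
print(round(mean_hhh, 1), round(mean_hth, 1))
```

The gap arises because a failed HHH attempt (e.g. HHT) wastes all its progress, while a failed HTH attempt can reuse its trailing H, which is exactly the martingale bookkeeping in Li (1980).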

Which do you vote for, Joe?

DiEB just gave away the store. It most likely will never happen but in the magical world of infinities your pattern is more likely to appear than 500 heads.

That’s a very nice point DiEb! I had envisaged only tests of 500 tosses.

Answer B is:

Less than the probability that someone will throw an all-heads sequence?

If the odds are 2 to 1 that Liz’s sequence will appear before all coins heads, then that means it has a higher probability than all coins heads, not lower.

Thus the answer is A, not B, for non-tests of 500, and answer C for tests of 500 tosses.

Sorry, I misread.

OK, well, that’s reassuring, even if only a few people answered!