# Jerad’s DDS Causes Him to Succumb to “Miller’s Mendacity” and Other Errors

Part 1:  Jerad’s DDS (“Darwinist Derangement Syndrome”)

Sometimes one just has to stop, gape and stare at the things Darwinists say.

Consider Jerad’s response to Sal’s 500 coin flip post.  He says:  “If I got 500 heads in a row I’d be very surprised and suspicious. I might even get the coin checked. But it could happen.”  Later he says that if asked about 500 heads in a row he would respond:  “I would NOT say it was ‘inconsistent with fair coins.’”  Then this:  “All we are saying is that any particular sequence is equally unlikely and that 500 heads is just one of those particular sequences.”

No Jerad.  You are wrong. Stunningly, glaringly, gobsmackingly wrong, and it beggars belief that someone would say these things.  The probability of getting 500 heads in a row is (1/2)^500.  This is a probability far, far beyond the universal probability bound.  Let me put it this way:  If every atom in the universe had been flipping a coin every second for the last 13.8 billion years, we would not expect to see this sequence even once.
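The arithmetic behind that claim is easy to check. Here is a back-of-the-envelope sketch in Python; the 10^80 atom count and the one-trial-per-atom-per-second rate are the usual rough assumptions, not exact figures:

```python
from math import log10

# Log-probability of 500 heads in a row with a fair coin:
log_p = 500 * log10(0.5)                 # about -150.5, i.e. roughly 10^-151

# Generous upper bound on the number of 500-flip trials ever performed:
# ~10^80 atoms, each completing one trial per second for ~13.8 billion years.
seconds = 13.8e9 * 365.25 * 24 * 3600    # about 4.4e17 seconds
log_trials = 80 + log10(seconds)         # about 97.6

# Expected number of all-heads sequences seen across all those trials:
print(f"log10 expected occurrences ~ {log_p + log_trials:.1f}")  # ~ -52.9
```

Even with that absurdly generous trial count, the expected number of occurrences is on the order of 10^-53: effectively never.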

But, insists Jerad, it could happen.  Jerad’s statement is true only in the trivial sense that flipping 500 heads in a row is not physically or logically impossible.  Nevertheless, the probability of it actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.  (Charity compels me to believe Jerad will reform his statements upon reflection.)

But, insists Jerad, the probability of the 500-heads-in-a-row sequence is exactly the same as the probability of any other sequence.  Again, Jerad’s statement is true only in the trivial sense that any 500-flip sequence of a fair coin has the exact same probability as any other.  Sadly, however, when we engage in a non-trivial analysis of the sequence we see that Jerad’s DDS has caused him to succumb to the Darwinist error I call “Miller’s Mendacity” (in homage to Johnson’s Berra’s Blunder).*  Miller’s Mendacity is named after Ken Miller, who once made the following statement in an interview:

One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

Miller’s analysis is either misleading or pointless, because no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.”  Consider the example Miller advances: a sequence of 52 cards dealt from a shuffled deck.  Miller’s point is that extremely improbable non-designed events occur all the time, and therefore it is wrong to say that extremely improbable events must be designed.  This blatantly misrepresents ID theory, because no ID proponent says that mere improbability denotes design.

Let’s consider a more relevant example.  Suppose Jerad and I played 200 hands of heads-up poker and I was the dealer.  If I dealt myself a royal flush in spades on every hand, I am sure Jerad would not be satisfied if I pointed out the (again, trivially true) fact that the sequence “200 royal flushes in spades in a row” has exactly the same probability as any other 200-hand sequence.  Jerad would naturally conclude that I had been cheating, and that when I shuffled the deck I had only appeared to randomize the cards.  In other words, he would make a perfectly reasonable design inference.

What is the difference between Miller’s example and mine?  In Miller’s example the sequence of cards was only highly improbable. In my example the sequence of cards was not only highly improbable, but it also conformed to a specification.  ID proponents do not argue that mere improbability denotes design. They argue that design is the best explanation where there is a highly improbable event AND that event conforms to an independently designated specification.
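The two-part test described above can be sketched in a few lines of Python. This is only an illustrative toy under my own assumptions: the list of specifications is a hypothetical stand-in for “independently designated specification,” not anyone’s formal design-detection procedure:

```python
import random

# Toy sketch: flag a 500-flip sequence only when it matches a short,
# independently statable specification. Every specific sequence is equally
# improbable, but almost none match any simple specification.
SPECS = {
    "all heads":   lambda s: set(s) == {"H"},
    "all tails":   lambda s: set(s) == {"T"},
    "alternating": lambda s: all(s[i] != s[i + 1] for i in range(len(s) - 1)),
}

def matching_spec(seq):
    """Return the name of the first specification seq satisfies, else None."""
    for name, test in SPECS.items():
        if test(seq):
            return name
    return None

random.seed(0)
typical = "".join(random.choice("HT") for _ in range(500))
print(matching_spec("H" * 500))   # all heads
print(matching_spec(typical))     # almost certainly None
```

A randomly generated sequence and the all-heads sequence have identical probability, but only one of them trips a specification.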

Returning to Jerad’s 500 heads example, what are we to make of his statement that if that happened he “might” get the coin checked?  Blithering nonsense.  Of course he would not get the coin checked, because Jerad would already know to a moral certainty that the coin is not fair, and getting it “checked” would be a silly waste of time.  If Jerad denies that he would know to a moral certainty that the coin was not fair, that only means that he is invincibly stupid or piggishly obstinate or both.  Again, either way, it would make no sense to argue with him.  (And again, charity compels me to believe that upon reflection Jerad would not deny this.)

Part 2:  Why Would Jerad Say These Things?

Responding to Jerad’s probability analysis is child’s play.  He makes the same old tiresome Darwinist errors that we have had to correct countless times before and will doubtless have to correct again countless times in the future.

As the title of this post suggests, however, far more interesting to me is why Jerad – an obviously reasonably intelligent commenter – would say such things at all.  Sal calls it SSDD (Space Shuttle Denying Darwinist, or Same Stuff, Different Darwinist).  I call it Darwinist Derangement Syndrome (“DDS”).  DDS is somewhat akin to Tourette syndrome in that sufferers appear to be compelled to make inexplicable statements (e.g., if I got 500 heads in a row I “might” get the coin checked, or “it could happen”).

DDS is a sad and somewhat pathetic condition that I hope one day to have included in the Diagnostic and Statistical Manual of Mental Disorders published by the American Psychiatric Association.  The manual is already larded up with diagnostic inflation; why not another?

What causes DDS?  Of course, it is difficult to be certain, but my best guess is that it results from an extreme commitment to materialist metaphysics.  What is the recommended treatment for DDS?  The only thing we can do is patiently point out the obvious over and over and over, with the small (but, one hopes, not altogether non-existent) chance that one day the patient will recover his senses.

*I took Ken Miller to task for this error in this post.

## 110 Replies to “Jerad’s DDS Causes Him to Succumb to “Miller’s Mendacity” and Other Errors”

1. 1

Barry,

If I’ve made a statement that is mathematically incorrect then I would take it back.

You also fail to note that I said later in the thread I would check out the coin and the technique used to make sure there wasn’t some bias creeping in. I’d be very, very suspicious that something was not right and I’d check that out. Just like I would in your card playing example.

And, I will say it again: it is possible that you could get 500 heads in a row with a fair coin. I agree that the probability of getting that particular outcome is vanishingly small, as it is for every other specified sequence of 500 Hs and Ts. Flip a coin 500 times and you get a sequence of Hs and Ts. If you’d specified that particular sequence ahead of time, the pre-trial probability of getting it would also have been (1/2)^500. But it happened. That’s my point. With a fair coin there is nothing special about 500 Hs. Or 500 Ts. Or the sequence HTHTHTHTH . . . . Or the sequence HHTTHHTTHHTT . . . .

I was not addressing any ID or evolutionary issues, I was just responding to a mathematical post from a mathematical point of view.

2. 2
Joe says:

And, I will say it again: it is possible that you could get 500 heads in a row with a fair coin.

It would be doubtful even if you had magical infinity at your disposal. It would definitely be impossible in any human life-time.

3. 3
scordova says:

It’s gotten to the point where a creationist can’t even state accepted science without an evolutionist challenging him, lest it show that the creationist might have actually taken some time to read a little math and science, that a creationist might actually be fascinated with science.

It goes against the Darwinist narrative:

“It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I’d rather not consider that).”

I sort of like the wicked label myself.

Either way, it makes no sense to argue with them.

Except that it may be of interest to people on the sidelines and it helps reassure me that I’m not mistaken especially when they take on indefensible positions.

We aren’t even talking ID in biology or origin of life; we’re talking about 500 fair coins, and Darwinists won’t even budge.

he is invincibly stupid or piggishly obstinate or both.

Jeepers, we’re only talking about 500 fair coins!

Thanks for referencing my posts, Barry.

FWIW, I’m not a poker player like you, but I play other card games, have made over $30,000 playing them, and the math I provided is relevant to those card games. In fact, the notion of expectation value is central to beating those games.

4. 4
Granville Sewell says:

Unintelligent forces can produce extremely improbable results, such as a particular coin toss sequence; what they cannot produce are simply describable, extremely improbable results.

The reason why 500 straight heads would raise eyebrows, while most other results, though equally improbable, would not, is easy: “all heads” is simply describable, and most other results are not (many would be describable only in 500 bits, by actually listing the result). If we flip n fair coins, then since there are at most 2^m results describable in m bits, the probability that the result can be described in m bits is less than 2^m/2^n. So if you flip a billion coins and get “all heads,” or “(only) all prime-numbered coins are heads,” you would rightly be surprised and suspect something other than chance.
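Sewell’s counting bound is simple enough to state as a one-line function. A minimal sketch, where the function name is mine and “describable in m bits” is taken at face value as “one of at most 2^m possible outcomes”:

```python
def describable_bound(n_flips: int, m_bits: int) -> float:
    """Upper bound on P(a fair n-flip result has some m-bit description):
    at most 2^m strings of m bits exist, hence at most 2^m describable
    outcomes out of 2^n equally likely ones, giving 2^m / 2^n."""
    return 2.0 ** (m_bits - n_flips)

# 500 flips, specifications expressible in 20 bits (a phrase or two):
print(describable_bound(500, 20))   # 2^-480: effectively zero
```

The point of the bound is that the set of “simply describable” outcomes is a vanishing fraction of all outcomes, even though each individual outcome has the same probability.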

5. 5
JWTruthInLove says:

@Joe:

It would definitely be impossible in any human life-time.

Why?

6. 6
Neil Rickert says:

No Jerad. You are wrong. Stunningly, glaringly, gobsmackingly wrong, and it beggars belief that someone would say these things. The probability of getting 500 heads in a row is (1/2)^500. This is a probability far, far beyond the universal probability bound. Let me put it this way: If every atom in the universe had been flipping a coin every second for the last 13.8 billion years, we would not expect to see this sequence even once.

Flip a coin 500 times. Write down the exact sequence that you got.

We can say of that sequence, that it had a probability of (1/2)^500. It is a sequence that we would not expect to see even once. Yet we saw it.

This is a common fallacy in probabilistic thinking. You are marking one particular sequence as especially improbable, when all sequences are equally improbable. And since what you wrote down came from an actual sequence, you can see that highly improbable things can happen. Although it is highly improbable for any particular person to win the lottery, we regularly see people winning.

7. 7
Joe says:

Neil,

pre-specify the pattern and then see if you can flip a coin to match it.

8. 8
Joe says:

JWTruthInLove,

The odds are too high. That’s why.

9. 9
JWTruthInLove says:

@Joe:

The odds are too high. That’s why.

The odds are the same for any sequence.

10. 10
Neil Rickert says:

Joe:

pre-specify the pattern and then see if you can flip a coin to match it.

It is very unlikely.

There’s a problem when you take a pattern that has already occurred, and then claim it is so improbable that it could not have occurred naturally.

11. 11
scordova says:

This is a common fallacy in probabilistic thinking. You are marking one particular sequence as especially improbable

All coins coming up heads is especially improbable for fair coins, based on standard practice in statistics. If you fault Barry, then you may as well fault entire industries that profit from notions of expectation value and of sequences consistent with expectation.

Lotteries have guaranteed winners, so the lottery analogy is inappropriate. There is no guarantee of inevitability with 500 fair coins that they become all heads given the number of trials possible in the life of the universe…

Why expectation value seems to work in science and engineering and the gambling industry may be a philosophical question, since it seems we’re just subjectively saying one sequence is especially more important than another, but this “prejudice” seems to work from a practical standpoint.

I’m a pragmatist, and it is pragmatic to say that a set of 500 fair coins coming up all heads indicates intelligent design.

I’m astonished that people who’ll readily accept speculative scenarios for the origin of life will go to great lengths to contest what ought to be non-controversial inferences about 500 fair coins!

12. 12
Neil Rickert says:

All coins heads is especially improbable for fair coins based on standard practice in statistics.

It is no more improbable than any other sequence.

The gambling industry knows this, and sets its odds accordingly.

I’m astonished that people who’ll readily accept speculative scenarios for the origin of life will go to great lengths to contest what ought to be non-controversial inferences about 500 fair coins!

Just to be clear, I take the Origin-of-life question to be unsolved. Speculative scenarios are interesting and perhaps suggest research directions, but they don’t actually resolve the question.

13. 13
scordova says:

It is no more improbable than any other sequence.

True, but we’re talking about improbable with respect to expectation, and that has practical significance in finding sufficient reason to reject the chance hypothesis.

14. 14
Barry Arrington says:

Jerad @ 1: Is there ANY number of heads in a row that would satisfy you?  Let’s say the coin was flipped 100 million times and it came up heads every time.  Would you then know to a moral certainty that the coin is not fair without having to check it?

Neil @ 6 and 10: You have now committed “Miller’s Mendacity” twice in one comment thread. Did you even read the OP?

15. 15

Statistically, if you do a long series of 500 (fair) coin flips, you see that in the majority of runs the number of heads H stays in the interval between (250 – k*sd) and (250 + k*sd), where sd is the “standard deviation” (≈ 11) and k is equal to 2 or 3.

When H is consistently outside that interval, the coin is suspect (not fair). With 500 heads in a row it is practically certain that the coin is loaded (design).

So it is useless to claim that in theory 500 heads in a row has the same probability as the alternating sequence HTHTHTHTHTHTHTHT…, where there are exactly 250 Hs and 250 Ts.

What matters is the statistics of a long series of many 500-coin-flip runs. These statistics tell us when the coin is not fair, which, by the way, is a design inference.

As for evolutionists denying this concept, I would be curious to know what they would do if we played a coin-flip game with them for money, always betting on H, and got 500 Hs in a row… I bet they wouldn’t say “well, money lost, but after all 500 heads in a row has the same probability…”. 🙂
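The interval in this comment comes straight from the binomial distribution: for n = 500 fair flips the mean is np = 250 and the standard deviation is sqrt(np(1−p)) ≈ 11.2. A quick check of the numbers:

```python
from math import sqrt

n, p = 500, 0.5
mean = n * p                       # expected heads: 250
sd = sqrt(n * p * (1 - p))         # about 11.18

# The 3-sigma interval a fair coin should almost always stay inside:
lo, hi = mean - 3 * sd, mean + 3 * sd
print(f"mean = {mean:.0f}, sd = {sd:.2f}")
print(f"3-sigma interval: ({lo:.1f}, {hi:.1f})")   # roughly (216.5, 283.5)

# 500 heads sits this many standard deviations above expectation:
z = (500 - mean) / sd
print(f"z-score of 500 heads ~ {z:.1f}")           # about 22.4 sigma
```

Observations a few sigma outside the interval are the standard statistical ground for rejecting the fair-coin hypothesis; 500 heads is over 22 sigma out.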

16. 16
Seqenenre says:

Mister Arrington:
TTTTTTTTTH repeated 50 times would serve the same function as 500 times H if I understand you correctly?

17. 17
scordova says:

In my example the sequence of cards was not only highly improbable, but it also conformed to a specification

And identifying something as improbable with respect to expectation is a form of specification, but I chose the notion of expectation because it is so standard in statistics that it is practically unassailable as a specification.

Again, it is not improbability with respect to other sequences, it is improbability with respect to a specification or in the case of the 500 coins a specification defined by expectation.

Expectation values are so ingrained in statistics we don’t even realize it is a subtle form of specification.

I wonder if the folks critical of UD essays on this topic would like to be a part of the defense team for the false-shuffle team I described in:
Coordinated Complexity

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order in which they were previously dealt out (no easy shuffling feat!). The team would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. Then, when the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the cheaters could play knowing what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.

What would the defense at trial be? “Uh, your honor, we aren’t guilty because the sequence of all cards dealt throughout several shoes is not particularly special compared to all possible other sequences. Therefore my client shouldn’t be sent to jail for intelligently designing a false shuffle scam. The FBI was wrong to make a design inference by simply observing the sequences of cards coming out.”

18. 18
Neil Rickert says:

You have now committed “Miller’s Mendacity” twice in one comment thread.

That seems to be a mis-attribution. I have not said anything specifically about ID arguments. I have been trying to correct some mistaken ideas about probability. Many folk, not only ID proponents, make similar mistakes. You might say that this is a pet peeve among mathematicians.

19. 19
scordova says:

I have been trying to correct some mistaken ideas about probability. Many folk, not only ID proponents, make similar mistakes. You might say that this is a pet peeve among mathematicians.

I don’t see the error if you’re willing to give a charitable reading. We’re talking about improbability with respect to a specification, and in one specific case improbability with respect to expectation, not with respect to other sequences.

But to be clear, do you think 500 fair coins heads violates the chance hypothesis? If you don’t, then we’ll never agree on many things.

20. 20

It would be doubtful even if you had magical infinity at your disposal. It would definitely be impossible in any human life-time.

That is incorrect as I’m sure Dr Sewell will point out if you ask him.

We aren’t even talking ID in biology or origin of life; we’re talking about 500 fair coins, and Darwinists won’t even budge.

If you’re going to make mathematical arguments then make sure you get them right. THAT’s what I’m addressing.

Unintelligent forces can produce extremely improbable results, such as a particular coin toss sequence; what they cannot produce are simply describable, extremely improbable results.

The reason why 500 straight heads would raise eyebrows, while most other results, though equally improbable, would not, is easy: “all heads” is simply describable, and most other results are not (many would be describable only in 500 bits, by actually listing the result). If we flip n fair coins, then since there are at most 2^m results describable in m bits, the probability that the result can be described in m bits is less than 2^m/2^n. So if you flip a billion coins and get “all heads,” or “(only) all prime-numbered coins are heads,” you would rightly be surprised and suspect something other than chance.

Dr Sewell, is it possible with a fair coin to get 500 heads in 500 flips? I accept that it’s extremely improbable but just say yes or no: it is possible?

And, if you would please, also address the question of whether or not any particular, specified sequence of 500 Hs and Ts is more or less likely than 500 Hs. If you would please.

The odds are too high. That’s why.

It is nonsensical to say that something will never happen because the odds are against it. It is sensible to say that it’s stupid to place a bet on that outcome, as is evidenced by the hierarchy of poker hands and the craps and roulette betting tables. But improbable does NOT mean never, or impossible, or even “never in a lifetime.” It could happen on the first try. Just as could any other specified sequence.

I’m a pragmatist, and it is pragmatic to say a set of 500 fair coins are all heads indicates intelligent design.

I’m astonished that people who’ll readily accept speculative scenarios for the Origin of Life will go to great lengths to contest what ought to be non controversial inferences about 500 fair coins!

I’m only addressing the mathematics. And the mathematics says that you could get 500 heads in a row. It’s extremely unlikely and I have never denied that. But it could happen via purely random processes. There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.

Jerad @ 1: Is there ANY number of heads in a row that would satisfy you. Let’s say that the coin was flipped 100 million times and they all came up heads. Would you then know for a moral certainty that the coin is not fair without having to check it?

A moral certainty? What does that mean? I’m talking about mathematics here.

I would never, ever bet on getting 10 heads in a row with a fair coin. That would be stupid. But it could happen.

So it is useless to claim that in theory 500 heads in a row has the same probability of the alternate sequence HTHTHTHTHTHTHTHT… where there are exactly 250 Hs and 250 Ts.

What matters is the statistics of a long series of many 500 coin flips. This statistics tells us when the coin is not fair, which, by the way, is a design inference.

Any one randomly generated sequence of Hs and Ts is equally likely to be any of the possible outcomes. Personally, I would not want to rest my case on your reasoning.

21. 21

Sorry, I failed to close the blockquote tags properly in my response. Please read my post with some understanding.

22. 22

“There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.”

Then you, before 500 heads in a row in a money coin flip game, would really say “well, money lost, after all 500 heads in a row is not proof of intervention”!

23. 23
Neil Rickert says:

But to be clear, do you think 500 fair coins heads violates the chance hypothesis?

If that happened to me, I would find it startling, and I would wonder whether there was some hanky-panky going on. However, a strict mathematical analysis tells me that it is just as probable (or improbable) as any other sequence. So the appearance of this sequence by itself does not prove unfairness.

Apart from the mathematics, there is the question of whether there might be people involved who might be playing practical jokes. This would be an attractive sequence for tricksters.

24. 24

Then you, before 500 heads in a row in a money coin flip game, would really say “well, money lost, after all 500 heads in a row is not proof of intervention”!

As I have already said I would be very, very suspicious and would do my utmost to check and see if there were some bias entering into the system.

But I would do the same if any particular pre-specified sequence arose. Psychologically we fixate on sequences like all Hs, or all Ts, or HTHTHT . . . because they don’t seem ‘normal’ to us. But mathematically they’re all the same, probability-wise.

Mathematics is not a spectator sport. If you’re going to play you have to work at it.

I do know what you’re trying to get me to address. The notion that certain developmental/evolutionary sequences are so unlikely that it just makes sense to fall back on design as a more reasonable explanation. If you put your supposition(s) into a mathematical context then I’ll do my best to address them. But there’s no need to be vague and suggestive. Or to draw unimplied conclusions. Ask.

25. 25
SteveGoss says:

All we are saying is that any particular sequence is equally unlikely and that 500 heads is just one of those particular sequences.

I am not a mathematician and I do not play one on TV, so I don’t know what real mathematicians would say about the comment above, but it strikes me as being false.

500 heads in 500 tosses is not really just one probability. It’s 500 discrete tosses accumulated, so there are 500 probabilities to be dealt with.

The probability of getting heads on one random toss is 50%, but the result of one toss is either 100% heads or 100% tails. With two coins the possible results are 0, 50, or 100% heads (and 100, 50, or 0% tails), but two of the four equally likely outcomes (head/tail and tail/head) give 50% heads. And once we start flipping the coin again and again, the number of possible outcomes that are clustered around 50% heads and 50% tails increases.

So I would think that a sequence with roughly 50% heads would be much more likely than a sequence with either 0% or 100% heads.

As a corollary it would also seem that sequences with large clumps of the same result (a string of heads, for instance) would be more unlikely than sequences with the results randomly scattered around.
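SteveGoss’s intuition is correct and easy to quantify with the binomial distribution: a roughly balanced count of heads is overwhelmingly more likely than an extreme one, even though each individual sequence is equally likely. A quick check:

```python
from math import comb

n = 500

def p_exactly(k: int) -> float:
    """Probability of exactly k heads in n fair flips: C(n, k) / 2^n."""
    return comb(n, k) / 2 ** n

# A balanced count is common; an extreme count is not:
print(f"P(exactly 250 heads) ~ {p_exactly(250):.4f}")   # about 0.0357
print("P(exactly 500 heads) = 1 / 2^500")

# Counts within 3 standard deviations (~217..283) cover almost everything:
print(f"P(217 <= heads <= 283) ~ {sum(p_exactly(k) for k in range(217, 284)):.4f}")
```

The distinction is between one *sequence* (all equally improbable) and one *count* of heads (wildly unequal probabilities, because vastly many sequences share the balanced counts).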

26. 26

“Jerad #20: There is no mathematical argument that would say that 500 heads in 500 coin tosses is proof of intervention.”

“Jerad #24: I would be very, very suspicious and would do my utmost to check and see if there were some bias entering into the system.”

27. 27

I’ve realised that perhaps some people really are making an error here, in thinking that if something is extremely unlikely, it cannot happen unless there are enough trials to make it considerably more likely.

This is not the case. As Barry rightly says:

Jerad’s statement is true only in the trivial sense that flipping 500 heads in a row is not physically or logically impossible.

Yes, it is trivially true. It is, indeed, simply true!

If you had a gazillion^gazillion monkeys tossing for a gazillion years, you’d almost certainly get 500 heads at some point. But that “some point” could be on the first try, or the last, or any point in between, with equal probability.

Granville:

Unintelligent forces can produce extremely improbable results, such as a particular coin toss sequence, what they cannot produce are simply describable, extremely improbable, results.

Not sure what you’re saying here, Granville. If a describable result is extremely improbable, and a non-describable one is also extremely improbable, why is the second producible by unintelligent forces, and the first not? Or is this not what you are saying?

28. 28

Every toss of a fair coin is 50-50 heads or tails. So it’s 1/2 chance that the first toss comes up heads or tails. It’s 1/2 chance that the second toss comes up heads or tails. Etc. It’s a fair coin, No bias.

Give me any specified sequence of Hs and Ts. Each position has a 50-50 chance of coming up with the specified value, either H or T. To find the total probability of the whole sequence coming up EXACTLY as specified, you multiply the positional probabilities together. With 500 positions, each with probability 1/2, the probability of any specific sequence of Hs and Ts is 1/2 x 1/2 x 1/2 . . . 500 times: (1/2)^500.

OF COURSE it’s more likely you’ll get some Hs and some Ts. Of course I would never bet on getting 500 Hs in a row. But if 500 Hs did happen it’s not an indication of design. And if the design inference depends on such arguments then it’s doomed.
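The product-rule computation Jerad walks through can be done exactly with Python’s rational arithmetic, avoiding any floating-point rounding:

```python
from fractions import Fraction

# 500 independent factors of 1/2 multiply to exactly 1 / 2^500:
p = Fraction(1, 2) ** 500
assert p == Fraction(1, 2 ** 500)

# The denominator 2^500 is a 151-digit number:
print(len(str(p.denominator)))   # 151
```

Both sides of the dispute agree on this number; the disagreement is over what, beyond the bare probability, licenses an inference.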

29. 29
Joe says:

If you had a gazillion^gazillion monkeys tossing for a gazillion years, you’d almost certainly get 500 heads at some point.

Maybe, but one thing is certain, the monkeys would never know.

30. 30
Joe says:

Neil:

There’s a problem when you take a pattern that has already occurred, and then claim it is so improbable that it could not have occurred naturally.

The thing about that, Neil, is all someone has to do is come along and demonstrate it indeed can arise via nature, operating freely. That would shut those people up in a hurry.

Strange that no one ever seems to get around to doing that though.

31. 31

I think we should drop the word “proof”. We don’t do proofs in empirical science. 500 heads would be extremely good evidence that the coin was rigged.

However, if careful investigation showed that the coin was perfectly balanced and the tossing mechanism perfectly adjusted, you might still have to conclude it was Just One of Those Crazy Things.

But I don’t think anyone seriously disagrees, do they?

32. 32
Joe says:

Elizabeth,

Science can only allow for a certain amount of luck.

33. 33

I’m a realist but at the same time I understand the mathematics.

If I look at any event in my life I can compute the chances of that outcome to be astronomical. And if that outcome is significantly outside of my expected or normal outcomes then I might get suspicious.

But it doesn’t mean there was any design behind it. That is an assumption.

Correlation still does not equal causation. Random stuff happens. All the time. And random stuff can be very . . . improbable.

34. 34
Joe says:

But if 500 Hs did happen it’s not an indication of design.

That, design, would be a very safe inference.

And if the design inference depends on such arguments then it’s doomed.

35. 35
bw says:

I urge you to use this opportunity to really think about probabilities.

Many people are far too quick to dismiss them!

Think of it this way: any probability, by its very nature, has a chance of happening, so yes, both 500 heads and 5 million heads could happen.

What really matters is the point at which you can no longer accept something as being the result of chance.

Chance is what we are talking about here at the end of the day; selection can be ignored when it comes to the chances of genetic mutations ending up at a specified outcome. Natural selection does nothing to reduce the number of changes needed to get to a result. Overall changes, yes, but the number required, no.

You could argue that “outcomes” don’t exist in nature, but I would contend that you are wrong.

Start to look around at living organisms and you will hopefully see what I am getting at.

Consider snake venom. (Literally the first thing that sprang to mind).
For venomous snakes to be able to kill their prey, it is a requirement that they:
A. produce and store venom
B. can deliver this venom into their prey

When you really think about this, it should strike you first as a puzzling scenario and one that involves chances.

What are the odds of randomly producing a venom and a sac to contain it? Perhaps, finger in the wind here, 100 base pairs are needed to control the development of the sac, including its structure, lining, blood supply, venom gland, etc. I chose 100 to counter the fact that there could be a number of different ways of producing such a mechanism. (Yes, in reality it is probably in the tens or hundreds of thousands, but I am trying to make this simple.)

Great a sequence of 100, that ain’t too bad.

Well, now we need a delivery system. One that happens to connect the sac to the specialized teeth. So we need to evolve ourselves a duct, one that attaches to both the sac and the teeth. No good if it connects the sac to its arse; it has got to be the teeth. We also need those specialized teeth with the holes in them, though, or the venom won’t get out.

No worries another sequence of say 100 should take care of that.

Ah but we also need a muscle to eject the venom and also to connect to the brain and also to make the snake aware of this muscle and when to contract it.

Another 100 pairs should suffice.

See the problem? Not just one sequence… three of them!!!

THREE!!!

This is a huge problem; if you fail to see why, stop reading.

The reason is that there are clearly more than three sequences required for a snake’s venom system, and those sequences are clearly longer than 100.
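For what it’s worth, the comment’s own toy numbers (three sequences of 100 base pairs, four possible bases per position) can be plugged in directly; a minimal sketch in Python, with no claim that these figures reflect real genetics:

```python
# Toy calculation using the comment's own simplified numbers: three
# independent sequences of 100 base pairs, each position one of four
# nucleotides. These are illustrative figures, not genomic estimates.

def ways(num_sequences: int, length_bp: int) -> int:
    """Number of equally likely nucleotide assignments."""
    return 4 ** (num_sequences * length_bp)

total = ways(3, 100)  # 4^300, about 4.1e180
print(f"one specific outcome in {total:.3e} possibilities")
```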

So this was just ONE example from ONE animal, which no doubt contains many, many more such examples.

So when you look at EVERY animal (and there are a lot of them) and really think about the thousands and thousands of events far more unlikely than 500 heads that have come together, you really have to start to question whether random mutations are enough.

If you even think the word “selection” now, I am sending you a virtual punch; selection solves nothing of this problem. If you don’t see a problem, then YOU have a problem.

I can’t offer an alternative, sadly; my best hope is that organisms are procedurally built, allowing for greater changes to be made by early mutations. But this just adds further problems 🙁

I hope that made some sense to someone, and believe me, if you just look around and think, you will see problems like the one in our friend the snake in nearly every living thing you encounter.

I am not saying God did it, or evolution is a lie, or anything like that; just that mutations acted upon by natural selection is not the final picture, not by a long shot.

b

36.

“And if that outcome is significantly outside of my expected or normal outcomes then I might get suspicious. But it doesn’t mean there was any design behind it.”

Your position continues to be contradictory. You cannot be “suspicious” and at the same time believe “there was no design behind it”. What are you suspicious of, but design?

37.

That, design, would be a very safe inference

That is an assumption. You cannot possibly make such a claim without first discovering, elucidating, and assigning probabilities to all the possible explanations. And design is only one of many possible explanations. Each explanation implies a cascade of follow-on manifestations or expectations.

Science and applied mathematics look for the best explanatory models for the observed evidence. A good model matches the existing data AND predicts future observations. But even the best model is provisional and subject to change. This is a reasonable and sensible approach. Before a new model is adopted it must beat the old model on all counts. It’s hard work since our current models have come from decades of verification.

38.

SteveGoss:

So I would think that a sequence with roughly 50% heads would be much more likely than a sequence with either 0% or 100% heads.

As a corollary, it would also seem that sequences with large clumps of the same result (a string of heads, for instance) would be more unlikely than sequences with the results randomly scattered around.

And you are absolutely right. But it’s important to distinguish between the probability of any one sequence, and the probability of a class of sequences.

Probabilities are derived from frequency distributions.

500 coin tosses can fall in 2^500 different ways, which is a huge number. Of those 2^500 different ways, only two of them have all the coins facing the same way up. So “all the same” is an incredibly rare way for the coins to fall.

On the other hand there are many, many sequences where about half the coins fall one way and half fall the other. So you are far more likely to get one of those than one of the rare ones.

BUT: No ONE of the 2^500 different possible sequences is any more or less likely than any other. Some classes of sequences are rarer than others, and therefore one of those rare classes is much less likely than one of the common classes.

But that is not the same as saying that any ONE sequence is any more likely than any other. They are all the same, and this one, which I just generated:

TTHHHHTHHTTHHTTTTTTHHHHTTHHHTHHHTHTTHTHTHTTTTHTTTH
TTTTTTHHTHHTHTHHHTTTHHHTHTHHHHHHTTHTTHTHHHTTHHHTTH
THTHHHHHHHHHHTTTHHTHTHHHHHTTTHTTHHHHTTHHTTHHTHTTHH
HHTHTHHHHTHHHHTTHHHTTTHHHHHHHHHTTHHHHHHTTHHHTHTHTH
HTHHTTTTTTHTHHHHHHTHTTTHTTHTTHTHTTHTTHTHHHHHTTHHTT
HHTTTHTTTTHTTHTTHTTTTHTHHHHTTTHHTHTTTHHTTHTHTTTTTH
HHTTTHTTHTTHTTHHTTTHTTHHHTTHHTTTHTTHTTHHHHHTTHHTTH
HTTTHTHTHHHTHHTTHHHHTTHTTHTTTTTTHTHTTHHHTHHHTTTHHH
THTHHHHHTHTTTTTHTHHHTHHTHTHTTHHHHHTTHHHTTHHTTHTHTT
THTTTTTTTHHHTTHTHTHHHHTTHTHTTTHHTTTTHTTHTHHTTTTTHT

And which Excel tells me has 254/500 Heads, is just as (un)likely as 500 Heads. The chances of me ever throwing this sequence again are infinitesimal, exactly as infinitesimal as the chances of me ever throwing all Heads. Yet I just threw it (well, virtually threw it), on my very first throw!
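Her distinction between the probability of any one sequence and the probability of a class of sequences can be checked with exact arithmetic; a minimal sketch in Python:

```python
from math import comb

n = 500
total = 2 ** n  # number of equally likely 500-flip sequences

# Any ONE specific sequence (all heads, or the one pasted above):
p_one = 1 / total

# The CLASS "exactly 254 heads" (the ratio in the sequence above):
p_class_254 = comb(n, 254) / total

# The CLASS "exactly 500 heads" has only a single member:
p_class_500 = comb(n, 500) / total

print(f"any single sequence: {p_one:.3e}")
print(f"exactly 254 heads:   {p_class_254:.3e}")
print(f"exactly 500 heads:   {p_class_500:.3e}")
```

The single-sequence probabilities are identical; the class probabilities differ by roughly 148 orders of magnitude.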

39.

You could argue that “outcomes” don’t exist in nature, but I would contend that you are wrong.

In hindsight outcomes exist, but not as pre-existing goals in my mind.

Your position continues to be contradictory. You cannot be “suspicious” and at the same time believe “there was no design behind it”. What are you suspicious of, but design?

I’m suspicious of human intervention for various reasons. I’m suspicious of poorly minted coins. I’m suspicious of poorly designed testing procedures that favour certain outcomes. I’m suspicious of everyday honest human beings being completely objective in reporting what they’ve observed, as I well know from my own experience of getting things wrong.

There is no contradiction. We are fallible human beings; we get things wrong; we use fallible, biased methodologies. We have no absolutes. Before we make probabilistic arguments we have to let go of all our biases.

Anyway, my point is that you cannot make a design inference from that line of reasoning. You could be very, very wrong.

40.

Joe

Maybe, but one thing is certain, the monkeys would never know.

heh.

41.
Joe says:

Science and applied mathematics look for the best explanatory models for the observed evidence.

That is why design is a safe inference

A good model matches the existing data AND predicts future observations.

And that is why Darwinism isn’t a good model.

Before a new model is adopted it must beat the old model on all counts. It’s hard work since our current models have come from decades of verification.

And yet there still isn’t any evidence for natural selection being a designer mimic. Lenski’s 50,000+ generations and no new proteins, no new functions- just an existing function used in a different environment- that was the BIG news.

So what is this alleged verification you are talking about? And what math supports Darwinism?

42.

“And if that outcome is significantly outside of my expected or normal outcomes then I might get suspicious. But it doesn’t mean there was any design behind it.”

Yes, your position is contradictory. In fact, it is obvious that here by “design” we mean, sensu lato, any cause other than pure chance involved in the process. Therefore:

(A) if you are “suspicious” this means that you suspect a non random cause involved;

(B) if you “don’t believe in a design” this means that you do NOT suspect a non random cause involved.

Do you see that you cannot concatenate A and B, as you did, because you become illogical?

43.

niwrad, he’s not saying “[I] don’t believe in design”. He just said that he’d be suspicious, but that “it doesn’t mean there was any design behind it”.

Lots of things can make you suspicious but have an “innocent” explanation. 500 heads is capable of an innocent explanation. In other words, 500 heads is perfectly possible under the fair-coin/fair-toss hypothesis, as is any sequence, although all are extremely improbable. However, 500 heads is not only improbable (as all sequences are) but also striking, and therefore would arouse suspicion. I don’t see anything contradictory.

And it’s worth pointing out that alternating heads and tails (HTHTHTHTHTHTH…) would be just as striking (and almost as easy to describe), just as improbable, but a member of a much larger set of sequences with the same ratio of heads to tails. As eigenstate suggests, it would nonetheless probably excite just as much suspicion, with just as much justification.

But be just as possible.

44.

Elisabeth B Liddle #43

You are charitable, but don’t defend the indefensible.

45.
Axel says:

Sovereign element though mathematics is in empirical science, it has been proven to be unable to give a perfect description, in its own terms, of our universe. So, it has a kind of ‘mortal’, imperfect relationship, to match the imperfect, material, space-time world it helps to describe.

Surely, here is a circumstance, where a slavish deference to statistical/mathematical possibility should give way to common sense – which, of course, is a bit of a misnomer, where atheists of all stamps are concerned, now that there are so many unambiguous pointers to, at the very least, theism.

What is virtually impossible is to get an atheist to look at evidence they don’t wish to. Gödel might as well not have bothered.

So, I don’t believe any number of monkeys tossing that coin would ever, in all eternity, produce a perfect sequence of 500 heads, as asserted by Liddle.

46.
Alan Fox says:

I don’t believe any number of monkeys tossing that coin would ever, in all eternity, produce a perfect sequence of 500 heads, as asserted by Liddle.

Given that a coin, after being tossed, will land either face up or face down, the probability is 0.5 for either outcome, no matter what else is going on. The probability of parallel and/or serial combinations of such events is easily calculable. The key point is that whatever else is going on, the outcome of the single event is still 50:50. A coin has no memory.

Whether you believe it or not, the chance of a sequence of 500 heads is remote but not impossible.

47.

500 heads is more than enough proof for any reasonable person to conclude that the coin has been rigged – unless, of course, the person flipping the coin is an ID advocate.

Stupid professor of physics … doesn’t he realize that the computer browser error check code he uncovered embedded in superstring equations is just as likely to exist as any other series of 1’s and 0’s?

48.
kairosfocus says:

F/N: It seems people have a major problem appreciating: (a) configuration spaces clustered into partitions of vastly unequal statistical weight, and (b) BLIND sampling/searching of populations under these circumstances.

It probably does not help that old-fashioned Fisherian hypothesis testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes, it is not as cool as Bayesian statistics etc., but there is a reason why it works well in practice.

It is all about needles and haystacks.

Let’s start with a version of an example I have used previously: a large plot of a Gaussian distribution on a sheet of bristol board or the like, backed by a sheet of bagasse board or the like. Mark it into 1-SD-wide stripes; say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board.

Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION. As a result, as we build up the set of dart-drops, we will see a pattern, where the likelihood of getting hit is proportionate to area, as should be obvious.

That immediately means that the bulk of the distribution, near the mean value peak, is far more likely to be hit than the far tails, for exactly the same reason that if one blindly reaches into a haystack and pulls out a handful, one is going to have a hard time finding a needle in it.

The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial.

In short, a small sample of a very large space that is blindly taken, will by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones.

(BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.)

The point of Fisherian testing is that the skirts are special zones that take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence that a suspicious sample did not arise by chance, based on its tendency to go for the far skirt.
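The tail-area point can be made concrete with exact binomial counts for 500 fair flips; a minimal sketch in Python (the 5-SD cutoff of 306 heads is an illustrative choice, not from the comment):

```python
from math import comb

# Exact probability that 500 fair flips land in the far "skirt":
# k or more heads. Mean is 250; SD is sqrt(500 * 0.25), about 11.2.
def tail_at_least(n: int, k: int) -> float:
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(tail_at_least(500, 306))  # ~5 SD above the mean: vanishingly small
print(tail_at_least(500, 250))  # at the mean: a bit over one half
```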

Now apply this to the analysis of config spaces — populations of possibilities for configurations — which can have W states, within which we look at small, special, specific zones T. Those zones T are at the same time the sort of things that designers may want to target: clusters of configs that do interesting things, like spell out strings of at least 72 – 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 – 1,000] or the like.

500 bits takes up 2^500 possibilities, or 3.27*10^150.

1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities.
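The two figures are easy to verify; a quick check in Python:

```python
# Quick check of the two figures quoted above.
print(f"2^500  = {2**500:.3e}")   # 3.273e+150
print(f"2^1000 = {2**1000:.3e}")  # 1.072e+301
```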

To give an idea of just how large these numbers are, I took up the former limit and set our solar system’s 10^57 atoms (by far and away mostly H and He in the sun, but never mind) the task, for its lifespan, of going through a certain number of ionic chemical reaction time states, each taking 10^-14 s. Our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. (Now you know why warp drives, space folding, etc. are so prominent in sci-fi literature.)

Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s, for a reasonable estimate of the solar system’s lifespan.

Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy.
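The back-of-envelope ratio behind the haystack image can be reproduced; a minimal sketch in Python, assuming roughly 10^57 atoms, a solar-system lifespan of about 10^17 s, and 10^14 observations per atom per second (the lifespan figure is an assumption, not stated in this comment):

```python
# Back-of-envelope version of the ratio sketched above, assuming roughly
# 10^57 atoms, a solar-system lifespan of ~10^17 s, and 10^14
# observations per atom per second (the lifespan figure is an assumption).
atoms = 10 ** 57
lifespan_s = 10 ** 17
rate_per_s = 10 ** 14

samples = atoms * lifespan_s * rate_per_s   # 10^88 observations in all
space = 2 ** 500                            # about 3.27e150 configurations

print(f"fraction of the space sampled: {samples / space:.1e}")  # about 3e-63
```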

Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it etc.

Notice, there is no particular shortage of special zones here; it is just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw.

Now, your task, should you choose to accept it, is to take a one-straw-sized blind sample of the whole.

Intuition, backed up by sampling theory — without need to worry over making debatable probability calculations — will tell us the result, straight off. By overwhelming likelihood, we would sample only straw.

That is why the instinct that getting 500 H’s in a row or 500 T’s or alternating H’s and T’s or ASCII code for a 72 letter sequence in English, etc, is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound.

And this is a simple, toy example case of a design inference on FSCO/I as sign.

A very reliable inference indeed, as is backed up by literally billions of cases in point.

Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors.

Over and over and over again in fact.

And in fact, here is Wm A Dembski in NFL:

p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.)

Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread?

First, it cannot be for lack of capacity to access and understand the actual argument; we are dealing with those with training in relevant disciplines.

Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years.

Nor is such a consistent error explicable by blind chance, chance would make them get it right some of the time, by any reasonable finding, given their background.

So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it.

Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion.

Mendacity in one word.

If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake.

The alignment is too perfect.

Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that.

This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating.

Where, enough is enough.

KF

49.
Axel says:

All sorts of things are possible, but, as I say, you repose too much faith in statistics/mathematics. I might grow a third ear on the top of my head the size of the US. (Ask a multiverser…).

If mathematics defined it to be possible, though gazillions to the power of gazillions to one, I would repose absolutely no faith in it whatsoever. Not even theoretically, because mathematics is not the perfect discipline bitter-ender mechanists still hold it to be.

50.
kairosfocus says:

AF: Kindly, convert our solar system into monkeys tossing 500 coins for the lifespan of the solar system. Say at one toss per second, a reasonable rate to take into account reading the tosses before recycling. Don’t forget you will need banana plantations to keep them going, a sun and terrestrial planets within atomic abundances, and our monkeys will have to be on the surfaces. I guarantee that they will come nowhere near the number of samples I just gave in outline for the needle-in-haystack calc. There are things that are empirically unobservable to such high reliability that they are practically impossible. KF

51.
CS3 says:

Elizabeth Liddle: As I believe you agree, the Excel-generated sequence you presented can (I assume) only be described using a long sequence of bits (i.e., a 500-bit coin-by-coin accounting, or perhaps something a little shorter by applying a compression algorithm), whereas the sequence “all heads” can be described with one bit. As Granville wrote,

The reason why 500 straight coins would raise eyebrows, and most other results, while equally improbable, would not, is easy: because “all heads” is simply describable, and most others are not (many would be describable only in 500 bits, by actually listing the result). If we flip n fair coins, the probability that the result can be described in m bits, since there are at most 2^m such results, is less than 2^m/2^n. So if you flip a billion coins, and get “all heads” or “(only) all prime numbered coins are heads” you would rightly be surprised and suspect something other than chance.
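Granville’s bound is easy to evaluate; a minimal sketch in Python (the values of m below are illustrative, not from the quote):

```python
# Sewell's bound, as quoted above: the probability that n fair flips
# produce an outcome describable in m or fewer bits is less than
# 2^m / 2^n. The values of m below are illustrative.
def describable_bound(n_flips: int, m_bits: int) -> float:
    return 2 ** m_bits / 2 ** n_flips

print(describable_bound(500, 1))    # one-bit descriptions like "all heads"
print(describable_bound(500, 100))  # even 100-bit descriptions: ~3.9e-121
```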

Also, you said, correctly:

BUT: No ONE of the 2^500 different possible sequences is any more or less likely than any other. Some classes of sequences are rarer than others, and therefore one of those rare classes is much less likely than one of the common classes.

However, it should be noted that fundamental scientific laws are based on the fact that some classes of sequences (i.e., simply describable ones) are more improbable than others.

From General Chemistry, 5th Edition, by Whitten, Davis, and Peck, discussing the Second Law of Thermodynamics and referencing a figure showing a closed system consisting of two bulbs connected by an open stopcock, containing molecules of two gases (one red and one blue):

The ideas of entropy, order, and disorder are related to probability. The more ways an event can happen, the more probable that event is. In Figure 15-10b (showing both red and blue molecules randomly mixed in both bulbs) each individual red molecule is equally likely to be in either container, as is each individual blue molecule.
As a result, there are many ways in which the mixed arrangement of Figure 15-10b can occur, so the probability of its occurrence is high, and so its entropy is high. In contrast, there is only one way the unmixed arrangement in Figure 15-10a (showing all red molecules in one bulb and all blue molecules in the other bulb) can occur. The resulting probability is extremely low, and the entropy of this arrangement is low.

So, even though the particular arrangement in Figure 15-10a is indeed just as likely as any particular arrangement in Figure 15-10b, the Second Law predicts it will not occur, because there are many more arrangements of the type shown in Figure 15-10b.
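The textbook’s counting argument can be illustrated numerically; a minimal sketch in Python using a toy 50-red/50-blue system (the molecule counts are illustrative, not from the textbook):

```python
from math import comb

# Toy version of the two-bulb picture: 50 red and 50 blue molecules,
# each equally likely to sit in either bulb (counts are illustrative).
n_red = n_blue = 50
total = 2 ** (n_red + n_blue)

# Exactly one arrangement has all red in bulb A and all blue in bulb B:
p_unmixed = 1 / total

# Just the perfectly even split (25 of each colour in each bulb):
p_even = comb(n_red, 25) * comb(n_blue, 25) / total

print(f"unmixed:    {p_unmixed:.1e}")
print(f"even split: {p_even:.1e}")
```

Even counting only the perfectly even split, the mixed class outweighs the unmixed arrangement by more than 25 orders of magnitude.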

Similarly, from Granville:

Natural forces may turn a spaceship into a pile of rubble, but not vice-versa — not because the exact arrangement of atoms in a given spaceship is more improbable than the exact arrangement of atoms in a given pile of rubble, but because (whether the Earth receives energy from the Sun or not) there are very few arrangements of atoms which would be able to fly to the moon and return safely, and very many which could not.

For those arguing, sure it’s astronomically unlikely, but it’s still theoretically possible: well, sure, but in what other branch of science is an astronomically unlikely explanation considered a good explanation, much less an unassailable “fact” as “undisputed as gravity”? If the mere bare chance is sufficient, why are multiverses needed to explain the “lucky” values of the fundamental constants, and why is a gradual Darwinian process needed to explain complex life? While some IDers might argue that there are epigenomic concerns that make the evolution of complex life not just a matter of probabilities, most would agree that the information needed for, say, a flagellum, could theoretically have arisen by an extraordinarily lucky set of random mutations. Most Darwinists would agree though that this is a poor scientific explanation, hence the need to explain it in terms of smaller, much less improbable steps. If a scientist observed the stopcock example above and saw that, after opening the stopcock, all the red particles stayed in one bulb and all the blue particles stayed in the other bulb, would he/she just conclude, well, this is extremely improbable, but still possible, so I’ll just continue to assume diffusion alone is operative in this situation, or would he/she look for an alternative explanation (i.e., the gasses have a difference in some relevant property that means diffusion is not the only process in effect, or the stopcock is jammed, or something)?

With regards to probabilistic resources:

If we repeat an experiment 2^k times, and define an event to be ‘‘simply describable’’ (macroscopically describable) if it can be described in m or fewer bits (so that there are 2^m or fewer such events), and ‘‘extremely improbable’’ when it has probability 1/2^n or less, then the probability that any extremely improbable, simply describable event will ever occur is less than 2^(k+m)/2^n. Thus we just have to make sure to choose n to be much larger than k + m. … For practical purposes, almost anything that can be described without resorting to an atom-by-atom (or coin-by-coin) accounting can be considered ‘‘macroscopically’’ describable.
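The quoted bound can be sketched directly; the parameter values below are illustrative, not from the quote:

```python
# The quoted bound: repeat an experiment 2^k times; events describable in
# m or fewer bits; "extremely improbable" means probability <= 1/2^n.
# Then P(any such event ever occurs) < 2^(k+m) / 2^n.
def ever_occurs_bound(k: int, m: int, n: int) -> float:
    return 2 ** (k + m) / 2 ** n

# e.g. 2^60 repetitions, 1000-bit descriptions, n = 2000:
print(ever_occurs_bound(60, 1000, 2000))  # about 1e-283
```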

52.
kairosfocus says:

F/N: in 13.7 BY, at 10^124 events/s 10^80 atoms will undergo 4.323*10^111 events. 1,000 bits has 1.07 *10^301 possibilities. Using the same needle in haystack back of envelope calc [subject to cross check], we are dealing with a 1 -straw sample of a haystack 1.43 *10^41 LY across. The observed cosmos is maybe 50 – 100 bn LY across, totally lost in that conceptual haystack. In short, 1,000 H in a row “by chance” on the gamut of the observed cosmos is an are you joking case. KF

53.
kairosfocus says:

CS3, actually, well beyond astronomical. KF

54.
Axel says:

Not only that, but there is an erroneous, if conventional, assumption that the odds for each toss are evens for both, while no allowance is made for the primacy of divine Providence even over mathematics, which latter It has disposed, doubtless indicatively, to be incomplete in its scope. And there would be no way of factoring in such an assumption, since God’s will is essentially arbitrary.

Indeed, He might choose not to intervene, other than to ordain ball-park limits, also on patterns of sequences occurring, in actual coin-toss trials. A pound to a pinch of snuff, such a practical limit is way below even the realistic, theoretical one.

55.
kairosfocus says:

OOPS, 10^14 events/s. KF

56.
CS3 says:

One more note: while “simply describable” may seem a broad term that could apply to most outcomes, that is actually not the case:

If we toss a billion coins, it is true that any sequence is as improbable as any other, but most of us would still be surprised, and suspect that something other than chance is going on, if the result were “all heads”, or “alternating heads and tails”, or even “all tails except for coins 3i + 5, for integer i”. When we produce simply describable results like these, we have done something “macroscopically” describable which is extremely improbable. There are so many simply describable results possible that it is tempting to think that all or most outcomes could be simply described in some way, but in fact, there are only about 2^30000 different 1000-word paragraphs, so the odds are about 2^999970000 to 1 that a given result will not be that highly ordered – so our surprise would be quite justified. And if it can’t be described in 1000 English words and symbols, it isn’t very simply describable.
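Sewell’s arithmetic checks out in exponents; a minimal sketch:

```python
# Sewell's arithmetic, redone in exponents: a billion flips have 2^(10^9)
# outcomes, at most about 2^30000 of which read as a 1000-word paragraph.
n_flips = 10 ** 9
describable_exponent = 30_000

odds_exponent = n_flips - describable_exponent
print(f"odds against a describable outcome: about 2^{odds_exponent} to 1")
```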

57.
bb says:

KF,

I often hear it said that the subject of evolution doesn’t involve the origin of life. That really strikes me as odd, but, other than the Miller-Urey experiment mentioned in the “evolution” section of biology textbooks, I can’t think of an example of where the two are linked. Can you help me?

58.
keiths says:

Cross-posted from TSZ:
Sal,

If we flip a coin 500 times and get all heads, then yes, of course it requires an explanation — but not because 500 heads are less probable than any other specific sequence, and also not because there are many, many more ways of getting roughly 250 heads than there are of getting 500 heads.

The reason that getting 500 heads is surprising is that a 500-head sequence is one of a very small number of sequences that are significant to us in advance. The number of possible sequences is huge, and the number of significant sequences is tiny, so the probability of hitting a significant sequence is extremely low.

I explained this above in my Social Security number analogy:

Mike,

How can anyone claim to assert that an event is improbable when they have seen only one instance of it?

In Sal’s defense, even one-off events can be identified as improbable under certain hypotheses. If I roll a fair ten-sided die nine times and come up with my Social Security number, then a very improbable event has occurred, even if I don’t repeat the experiment.

My SSN is no more improbable than any other 9-digit number, but it is one of a very small set of 9-digit numbers that are significant to me. The odds of sitting down and rolling a personally significant number are therefore low.

So yes, a sequence of 500 heads requires an explanation. It’s just that design isn’t the only alternative to chance.

Likewise for homochirality. As I said in an earlier comment:

If I were arguing Sal’s case for him, I would put it this way:

Given that we observe a sequence of 500 heads, which explanation is more likely to be true?

a) the coins are fair, the flips were random, and we just happened to get 500 heads in a row; or

b) other factors are biasing (and perhaps determining) the outcome.

In the case of homochirality, Sal’s mistake is to leap from (b) directly to a conclusion of design, which is silly.

In other words, he sees the space of possibilities as {homochiral by chance, homochiral by design}. He rules out ‘homochiral by chance’ as being too improbable and concludes ‘homochiral by design’.

Such a leap would be justified only if he already knew that homochirality couldn’t be explained by any non-chance, non-design mechanism (such as Darwinian evolution). But that, of course, is precisely what he is trying to demonstrate.

He has assumed his conclusion.

59.
kairosfocus says:

BB: The reason the MU experiment — decades after it was put under a cloud regarding likely atmospheric chemistry — still appears in textbooks is that it provides at least an icon on the root of the tree of life model. And that is your other icon. To which we can point out, no roots, no shoots or anything else. KF

60.
Mung says:

Jerad is not a Darwinist, he is a mathematician.

Now right off the bat that makes me wonder just what sort of mathematician could believe in Darwinism.

61.
Mung says:

Neil:

“Flip a coin 500 times. Write down the exact sequence that you got.”

Mung:

Flip a coin 500 times. Have it write down the exact sequence that it produced.

The odds are similar, imo.

62.
Mung says:

Neil Rickert:

Flip a coin 500 times. Write down the exact sequence that you got. We can say of that sequence, that it had a probability of (1/2)^500. It is a sequence that we would not expect to see even once. Yet we saw it.

This is a common fallacy about probabilistic thinking.

What a laugher. Neil, it’s posts just like this that make your credibility hover consistently right around zero.

God help us.

Flip a two-sided coin. Record the result.

Repeat this process 499 more times.

What are the odds that the results you record are not the same as what the actual coin tosses produced?

Well, if you’re a moron, or a darwinist …

Take 500 coins and toss them in the air.

Once they have landed, record the number of heads and the number of tails.

How surprised are you by the results?

But really, that’s a once in a lifetime thing you just observed!

*sigh*

REALLY?
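The contrast Mung draws here, between reproducing one exact 500-flip sequence and merely recording the head count, shows up clearly in a quick simulation (a sketch with an arbitrary seed):

```python
import math
import random

random.seed(0)  # arbitrary seed, for repeatability

# Toss 500 fair coins and record only the head count.
heads = sum(random.randrange(2) for _ in range(500))

# The count is almost never surprising: a Binomial(500, 1/2) count has
# mean 250 and standard deviation sqrt(500 * 1/4), about 11.2.
sd = math.sqrt(500 * 0.25)
print(heads, round(sd, 1))

# Reproducing any one EXACT 500-flip sequence is another matter entirely:
p_exact = 0.5 ** 500
print(p_exact)  # roughly 3e-151
```

The head count lands within a few dozen of 250 essentially every time, while the chance of matching any one pre-specified sequence is (1/2)^500.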

63.
Mung says:

Neil Rickert:

There’s a problem when you take a pattern that has already occurred, and then claim it is so improbable that it could not have occurred naturally.

Like tossing a coin 500 times and recording the results?

That’s something that occurs “naturally”?

REALLY?

64.
Mung says:

Elizabeth B. Liddle:

Probabilities are derived from frequency distributions.

Right.

We take every two-sided coin in the universe and toss it billions of times in order to determine the probability that a two-sided coin will come up heads.

65.
Mung says:

Keiths:

My SSN is no more improbable than any other 9-digit number

REALLY?

So your SSN is just as likely to identify you as it is to identify someone who is not you?

And that’s why the IRS uses it to identify you rather than someone who is not you?

*sigh*

66.
bb says:

KF,

Thanks. What I was looking for, and I’m sorry I wasn’t clear, was examples of evolutionists conflating OOL with evolution while, on the other hand, asserting that the two aren’t related.

67.
keiths says:

Yes, Mung, REALLY.

If I roll a ten-sided die 9 times, my SSN is no more likely or unlikely to come up than any other 9-digit number. That’s Probability 101.

The reason I would be surprised to see it come up is because it is significant to me. As I explained above, in the quote that you cut short:

My SSN is no more improbable than any other 9-digit number, but it is one of a very small set of 9-digit numbers that are significant to me. The odds of sitting down and rolling a personally significant number are therefore low.

Mung:

So your SSN is just as likely to identify you as it is to identify someone who is not you?

No. Who said it was?

68.
Mung says:

Mung:
So your SSN is just as likely to identify you as it is to identify someone who is not you?

keiths:
No. Who said it was?

You said it was. You claimed that all 9-digit numbers used to identify people by the US Government (Social Security Numbers) are equiprobable.

I shouted NONSENSE!

Now, please explain why you and dozens of other people do not have the same SSN if any 9-digit number assigned by the US Government is equiprobable.

keiths:

My SSN is no more improbable than any other 9-digit number

REALLY?

Do you think the US Government would take the chance that you and anyone else might share the same SSN?

If not, then your claim is false.

69.
Mung says:

*sigh*

70.
keiths says:

Could someone please explain this to Mung?

My SSN is no more improbable than any other 9-digit number, but it is one of a very small set of 9-digit numbers that are significant to me. The odds of sitting down and rolling a personally significant number are therefore low.

71.
Mung says:

Can someone please explain reality to keiths?

He thinks his SSN was assigned to him based on a lottery.

He can’t explain why his SSN is his and not someone else’s.

REALLY?

*sigh*

72.
TheMapleKind says:

Mung,
I doubt he’s saying that. I’m pretty sure he’s talking about the odds of a randomly rolled sequence of dice matching his SSN, not about how his SSN was assigned.

73.
Mung says:

keiths, please tell us why your SSN is no different from any other 9-digit number.

plz

74.
TheMapleKind says:

Now, regarding this business of a sequence of 500 coin flips of a fair coin, the only time we should not be surprised at the result is for one of two reasons, I think:

a) All possible sequences represent an unspecified sequence (e.g., no combination results in a self-replicator); or
b) A large portion of possible sequences represent a specified sequence (e.g., 51% of possible sequences result in something signifying a self-replicator… heck, if even 10% were classified as such, we still ought not be flabbergasted).

I believe the point is that if, let’s say, out of the 3.27*10^150 possible sequences for the 500 flips, only 1 or even 10,000 signify a self-replicator, why ought one not be surprised when the result matches one of those sequences? If we’re talking about the odds of getting a self-replicator vs. not getting one, then the odds are incredibly in favor of no self-replicator even if 10,000 possible sequences represent a self-replicator. That’s 1 in 3.27*10^-146. Just a wee bit unlikely, wouldn’t one think?
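The arithmetic in this comment (allowing for the exponent sign slip its author corrects in the following comment) can be checked with exact integer math; the 10,000 figure is the comment’s own hypothetical:

```python
# The arithmetic above, done with exact integers.
total = 2 ** 500      # all possible 500-flip sequences
hits = 10_000         # hypothetical count of "self-replicator" sequences

print(f"{total:.2e}")  # about 3.27e+150
p = hits / total
print(f"{p:.2e}")      # about 3.05e-147, i.e. odds of roughly 1 in 3.27e146
```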

75.
TheMapleKind says:

Regarding my last post, please note that I meant to drop the negative in the exponent. Whoopsie!

76.
CS3 says:

BTW, for the record, in my post #56, 230000 and 299997000 should be 2^30000 and 2^999970000.

77.
Andre says:

Consider me obtuse if you will, but chance means nothing, because you still require a mechanism (a cause) to flip the coin. The coin won’t do zip, and neither will chance; something has to trigger it.

BTW, 500 heads in a row is possible but not probable.

For the 3 billion base pairs that make a human, please explain to me how it is even possible or probable that their arrangement by chance over time could give rise to the complexity of a human. You may even use small successive steps in your explanation if you like.

Good luck!

78.
computerist says:

To Darwinists, 500 H’s in a row is just a state of mere complexity, no different and no more or less valuable or important than any other state.

However, to IDists, who wisely take the concept of information to a higher semantic and functional level, 500 H’s in a row indeed translates into design.

79.
Mark Frank says:

But, insists Jerad, the probability of the 500-heads-in-a-row sequence is exactly the same as the probability of any other sequence. Again, Jerad’s statement is true only in the trivial sense that any 500 flip sequence of a fair coin has the exact same probability as any other

In what other sense might it be true?

80.

(A) if you are “suspicious” this means that you suspect a non random cause involved;

(B) if you “don’t believe in a design” this means that you do NOT suspect a non random cause involved.

Do you see that you cannot concatenate A and B, as you did, because you become illogical?

Elizabeth said much the same thing I would have said if I wasn’t asleep at the time.

So, I don’t believe any number of monkeys tossing that coin would ever, in all eternity, produce a perfect sequence of 500 heads, as asserted by Liddle.

Specify any sequence of Hs and Ts 500 long and if you set it as your target you might not ever get it. They’re all equally likely and unlikely.

81.
JWTruthInLove says:

Even darwinist mathematicians think that Neil & Friends are wrong:

Confusion Everywhere
So Rickert and his defenders are simply wrong.

82.
tjguy says:

Excellent post!

It is amazing to see what Darwinists are willing to believe!

And yet they accuse us of having faith.

This is just a simple ploy to try and avoid the implications of the astronomically small odds of their creation story being true.

So, in the origin of life, we have the problem of chirality – which is not the only problem by any means, but it fits this illustration.

All amino acids used to make proteins in life are left-handed, but in nature they form as a 50-50 mix of right-handed and left-handed molecules.

So a Darwinist has to believe that all the amino acids used in the original cell, the first life, just happened by pure chance to be left-handed molecules.

Here is a specified pattern that has to be met for life to exist. Now most Darwinists have enough sense NOT to just blindly accept that this happened by chance because they know that looks really ridiculous.

So they look for other explanations. No good explanations have been forthcoming yet, but that doesn’t stop them from trying. Which is fine, but the evidence available to us now points to intelligent intervention, because I doubt anyone is really willing to say that it happened by pure chance.

83.
tjguy says:

Specify any sequence of Hs and Ts 500 long and if you set it as your target you might not ever get it. They’re all equally likely and unlikely.

When you throw a coin 500 times, the probability of getting some kind of outcome is 100%. But the probability of getting the one outcome you need is astronomically small, so small as to be virtually zero.

Is it impossible? Mathematically speaking, of course not.

But, like everyone mentioned, if you got it on the first chance, everyone would “know” you cheated.

And yet, Darwinists have to believe that it happened without any monkey business!

That takes faith!

Which is more rational to believe? Which takes more faith to believe?

1. That 500 coins were tossed and they landed in exactly the order you predicted ahead of time by pure chance?

Or

2. That there was monkey business involved?

If you have the faith to believe that it happened by pure total chance, fine, we just don’t think that is rational given the odds.

84.
keiths says:

JWTruthInLove,

Shallit’s post is entitled “Confusion Everywhere”, which is appropriate since he himself is confused.

He writes:

The example is given of flipping a presumably fair coin 500 times and observing it come up heads each time. The ID advocates say this is clear evidence of “design”, and those arguing against them (including the usually clear-headed Neil Rickert) say no, the sequence HH…H is, probabilistically speaking, just as likely as any other.

Which is correct if the coin is fair, not just “presumably fair”. And that is what eigenstate specified in the quote that started this whole debate:

Maybe that’s just sloppily written, but if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins, and as an instance of the ensemble of outcomes that make up any statistical distribution you want to review.

That is, physics is just as plausibly the driver for “all heads” as ANY OTHER SPECIFIC OUTCOME.

Eigenstate is correct. Take any specified sequence of coin flips {F1, F2, … Fn} where each Fi is either H (heads) or T (tails). The probability of getting that precise sequence when flipping a fair coin is equal to:

the probability that the first flip matches F0,
times the probability that the second flip matches F1,
times the probability that the third flip matches F2,

times the probability that the Nth flip matches Fn.

The coin is fair, meaning that the probability of a match is the same whether Fi is H or T: exactly 1/2.

Therefore, the probability of matching any specific sequence of length n is exactly the same, regardless of its content:
(1/2)^n.
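As a sanity check on the derivation just above: under a fair coin, every fixed target sequence is matched equally often, whatever it looks like. A quick Monte Carlo sketch (hypothetical 10-flip targets and an arbitrary seed, chosen only to keep the run fast):

```python
import random

random.seed(1)  # arbitrary seed, for repeatability

def match_probability(target, trials=200_000):
    """Estimate how often runs of fair flips reproduce `target` exactly."""
    hits = sum(
        all(random.randrange(2) == bit for bit in target)
        for _ in range(trials)
    )
    return hits / trials

# Three very different-looking 10-flip targets (1 = heads, 0 = tails).
p_all_heads = match_probability([1] * 10)
p_alternating = match_probability([1, 0] * 5)
p_arbitrary = match_probability([1, 1, 0, 1, 0, 0, 0, 1, 1, 0])

# Each estimate should land near (1/2)^10, about 0.000977.
print(p_all_heads, p_alternating, p_arbitrary)
```

The “special-looking” targets and the arbitrary one all come out statistically indistinguishable, exactly as the (1/2)^n formula predicts.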

Now if you drop the stipulation that the probability distribution is known and fair, then the question becomes much more interesting. However, Sal is still wrong.

The solution is by my UW colleague Ming Li and his co-authors. The basic idea is that Kolmogorov complexity offers a solution to the paradox: it provides a universal probability distribution on strings that allows you to express your degree of surprise on encountering a string of symbols that is said to represent the flips of a fair coin.

Two problems with that statement:

1. We don’t need a probability distribution, because we already have one. Eigenstate specified that the coins were fair, and we know what that distribution looks like.

2. Even setting #1 aside, Kolmogorov complexity cannot act as a proxy for (lack of) surprise. Consider my example above involving social security numbers. If I roll my SSN, I’m surprised because it is my SSN, not because of its Kolmogorov complexity.

But the ID advocates are also wrong, because they jump from “reject the fair coin hypothesis” to “design”.

Yes, as I pointed out earlier:

Given that we observe a sequence of 500 heads, which explanation is more likely to be true?

a) the coins are fair, the flips were random, and we just happened to get 500 heads in a row; or

b) other factors are biasing (and perhaps determining) the outcome.

In the case of homochirality, Sal’s mistake is to leap from (b) directly to a conclusion of design, which is silly.

In other words, he sees the space of possibilities as {homochiral by chance, homochiral by design}. He rules out ‘homochiral by chance’ as being too improbable and concludes ‘homochiral by design’.

Such a leap would be justified only if he already knew that homochirality couldn’t be explained by any non-chance, non-design mechanism (such as Darwinian evolution). But that, of course, is precisely what he is trying to demonstrate.

He has assumed his conclusion.

I suspect that Shallit will agree with all of this once he realizes that this entire debate has been about a case in which the coins are known to be fair, not just “presumably fair”.

85.
keiths says:

Correction to my comment above. The indices are wrong:

the probability that the first flip matches F1,
times the probability that the second flip matches F2,
times the probability that the third flip matches F3,

times the probability that the Nth flip matches Fn.

86.
Andre says:

Assuming the coins are fair, go have fun; you can do up to 200 at a time. See if you ever get 200 heads!

http://www.random.org/coins/

87.
kairosfocus says:

BB: In short, EVERY time Darwinists appeal to the tree of life icon — starting with Darwin himself (the ONLY diagram in Origin as originally published) — they imply a root. The utter absence of a plausible explanation for the root, highlighted by the sort of thing we see with the MU experiment in textbooks, is a smoking gun. Indeed, it is worse than that, as we are talking about the origin of digital info-bearing coded systems and the machines that process them in co-ordination, for which the only credible, empirically warranted explanation is design. Then, design sits at the table from the root up, so design is available at every step of the tree of life, and it is the only thing that can, in light of empirical verification of capacity, explain the origin of major body plans dozens of times over, each needing 10 – 100 mn + bits of additional info. So much for the sort of rhetorical game above that ignores what was pointed out, step by step, at 48 above. KF

88.
kairosfocus says:

Onlookers:

Observe how studiously Darwinist objectors have ignored the issues pointed out step by step at 48 above. It is patent that mere facts and reason are too inconvenient to pay attention to in haste to make favourite talking points.

Which reminds me all too vividly about the exercise over the past month in which direct proof of the undeniability of a patent fact (that error exists) suddenly turned into rhetorical pretzels. We are dealing here with ideological agendas all too willing to resort to mendacity by continuing a misrepresentation, not reason, and certainly not reason guided by a sense of duty to accuracy and fairness.

Be warned accordingly.

KF

89.
Andre says:

And if you want to do 20 amino acids in a specified sequence here is more fun!

http://www.random.org/sequences/

Good luck with chance and randomness! You will quickly learn the only workable solution is a very specific arrangement made by a mind!

Knock yourselves out!

90.
kairosfocus says:

PS: Let me here reproduce the core argument from 48, just to show the point:

_______________________

[Clipping 48 in the DDS mendacity thread, for record:]

>>It seems people have a major problem appreciating: (a) configuration spaces clustered into partitions of vastly unequal statistical weight, and (b) BLIND sampling/searching of populations under these circumstances.

It probably does not help, that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice.

It is all about needles and haystacks.

Let’s start with a version of an example I have used previously: a large plot of a Gaussian distribution using a sheet of bristol board or the like, backed by a sheet of bagasse board or the like. Mark it into 1-SD wide stripes; say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board.

Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION. As a result, as we build up the set of dart-drops, we will see a pattern, where the likelihood of getting hit is proportionate to area, as should be obvious.

That immediately means that the bulk of the distribution, near the mean value peak, is far more likely to be hit than the far tails. For exactly the same reason, if one blindly reaches into a haystack and pulls a handful, one is going to have a hard time finding a needle in it.

The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial.

In short, a small sample of a very large space that is blindly taken, will by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones.

(BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.)

The point of Fisherian testing is that skirts are special zones and take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence that a suspicious sample is not by chance, based on its tendency to go for the far skirt.

Consider the analysis of config spaces — populations of possibilities for configurations — which can have W states, within which we then look at small, special, specific zones T. Those zones T are at the same time the sort of things that designers may want to target: clusters of configs that do interesting things, like spelling out strings of at least 72 – 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 – 1,000] or the like.

500 bits takes up 2^500 possibilities, or 3.27*10^150.

1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities.

To give an idea of just how large these numbers are, I took up the former limit, and said now our solar system’s 10^57 atoms (by far and away mostly H and He in the sun, but never mind) for its lifespan can go through a certain number of ionic chemical reaction time states taking 10^-14 s each. Where our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. (Now you know why warp drives or space folding etc. are so prominent in Sci Fi literature.)

Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s, for a reasonable estimate of the solar system’s lifespan.

Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy.

Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it etc.

Notice, there is no particular shortage of special zones here, just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw.

Now, your task, should you choose to accept it, is to take a one-straw sized blind sample of the whole.

Intuition, backed up by sampling theory — without need to worry over making debatable probability calculations — will tell us the result, straight off. By overwhelming likelihood, we would sample only straw.

That is why the instinct that getting 500 H’s in a row or 500 T’s or alternating H’s and T’s or ASCII code for a 72 letter sequence in English, etc, is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound.

And this is a simple, toy example case of a design inference on FSCO/I as sign.

A very reliable inference indeed, as is backed up by literally billions of cases in point.

Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors.

Over and over and over again in fact.

And in fact, here is Wm A Dembski in NFL:

p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . .”

(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.)

Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread?

First, it cannot be because of a lack of capacity to access and understand the actual argument; we are dealing with those with training in relevant disciplines.

Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years.

Nor is such a consistent error explicable by blind chance; chance would make them get it right some of the time, by any reasonable finding, given their background.

So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it.

Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion.

Mendacity in one word.

If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake.

The alignment is too perfect.

Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that.

This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating.

Where, enough is enough.>>
______________

Prediction: this too will be studiously ignored in the rush to make mendacious talking points.

(NR, KS, AF et al just prove me wrong by actually addressing this on the merits. Please.)

KF
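As a quick check on the numbers in the clip (the solar-system lifespan figure below is an assumed ~10^17 seconds, not a value stated in the clip):

```python
# Exact values of the two configuration-space sizes quoted in the clip.
w500 = 2 ** 500
w1000 = 2 ** 1000
print(f"{w500:.3e}")   # about 3.273e+150
print(f"{w1000:.3e}")  # about 1.072e+301

# A generous observation budget in the spirit of the clip's illustration:
# 10^57 atoms, one observation per 10^-14 s, over an ASSUMED ~10^17 s lifespan.
budget = 10**57 * 10**14 * 10**17   # 10^88 observations
print(f"{w500 / budget:.1e}")       # the 500-bit space dwarfs it by ~3e62
```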

91.
kairosfocus says:

BB: For examples of contradictory other hands, watch what happens when an evo mat advocate is pressed on the want of a root to the tree, and on how the chemistry and physics that apply in warm little ponds do not point to a credible possibility of OOL. Very fast, they will pull the switcheroo: OOL strictly is not part of the theory of evo. (This has happened ever so many times here at UD, and I suspect Talk Origins will exemplify the same, etc.) KF

92.

Which is more rational to believe? Which takes more faith to believe?

1. That 500 coins were tossed and they landed in exactly the order you predicted ahead of time by pure chance?

Or

2. That there was monkey business involved?

If you have the faith to believe that it happened by pure total chance, fine, we just don’t think that is rational given the odds.

As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I’d be suspicious and I would check for anything that had affected the outcome. But if I found nothing ‘wrong’ then I’d conclude it was a lucky fluke.

There’s no faith involved.

I would also NOT conclude ‘design’ since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation.

93.
Andre says:

Chance can’t do anything chance can not flip the coins…..

94.

I don’t think I’ve ever seen a thread generate so much heat with so little actual fundamental disagreement!

Almost everyone (including Sal, Eigenstate, Neil, Shallit, Jerad, and Barry) is correct. It’s just that massive and inadvertent equivocation is going on regarding the word “probability”.

The compressibility thing is irrelevant. Where we all agree is that “special” sequences are vastly outnumbered by “non-special” sequences, however we define “special”, whether it’s the sequence I just generated yesterday in Excel, or highly compressible sequences, or sequences with extreme ratios of H:T, or whatever. It doesn’t matter in what way a sequence is “special” as long as it was either deemed special before you started, or is in a clear class of “special” numbers that anyone would agree was cool. The definition of “special” (the Specification) is not the problem.

The problem is that “probability” under a frequentist interpretation means something different than under a Bayesian interpretation, and we are sliding from a frequentist interpretation (“how likely is this event?”), which we start with, to a Bayesian interpretation (“what caused this event?”), which is what we want, without noticing that we are doing so.

Under the frequentist interpretation of probability, a probability distribution is simply a normalised frequency distribution – if you toss enough sequences, you can plot the frequency of each sequence, and get a nice histogram which you then normalise by dividing by the total number of observations to generate a “probability distribution”. You can also compute it theoretically, but it still just gives you a normalised frequency distribution, albeit a theoretical one. In other words, a frequentist probability distribution, when applied to future events, simply tells you how frequently you can expect to observe that event. It therefore tells you how confident you can be (how probable it is) that the event will happen on your next try.

The problem arises when we try to turn frequentist probabilities about future events into a measure of confidence about the cause of a past event. We are asking a frequency probability distribution to do a job it isn’t built for. We are trying to turn a normalised frequency, which tells us how much confidence we can have in a future event, given some hypothesis, into a measure of confidence in some hypothesis concerning a past event. These are NOT THE SAME THING.

So how do we convert our confidence about whether a future event will occur into a measure of confidence that a past event had a particular cause? To do so, we have to look beyond the reported event itself (the tossing of 500 heads), and include more data.

Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal’s confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.

So, let’s say I set the prior probability that Sal is not honest, at something really very low (after all, in my experience, he seems to be a decent guy): let’s say, p=.0001. And I put the probability of getting a “special” sequence at something fairly generous – let’s say there are 1000 sequences of 500 coin tosses that I would seriously blink at, making the probability of getting one of them 1000/2^500. I’ll call the observed sequence of heads S, and the hypothesis that Sal was dishonest, D. From Bayes theorem we have:

P(D|S) = [P(S|D)*P(D)] / [P(S|D)*P(D) + P(S|~D)*P(~D)]

where P(D|S) is what we actually want to know, which is the probability of Sal being Dishonest, given the observed Sequence.

We can set P(S|D) (i.e. the probability of a Special sequence given the hypothesis that Sal was Dishonest) to 1 (there’s a tiny possibility he meant to be Dishonest, but forgot, and tossed honestly by mistake, but we can discount that for simplicity). We have already set the probability of D (Sal being Dishonest) at .0001. So we have:

P(D|S)=[1*.0001]/[1*.0001+1000/2^500*(1-.0001)]

Which is, as near as dammit, 1. In other words, despite the very low prior probability of Sal being dishonest, now that we have observed him claiming that he tossed 500 heads with a fair coin, the probability that he was being Dishonest is now a virtual certainty, even though throwing 500 Heads honestly is perfectly possible, entirely consistent with the Laws of Physics and, indeed, the Laws of Statistics. Because the parameter P(S|~D) (the probability of a Special sequence given not-Dishonesty) is so tiny, any realistic evaluation of P(~D) (the probability that Sal was not Dishonest), however great, is still going to make the second term in the denominator, P(S|~D)*P(~D), negligible, and the denominator always only very slightly larger than the numerator. Only if our confidence in Sal’s integrity exceeds 500 bits will we be forced to conclude that the sequence could just as easily, or more easily, have been Just One Of Those Crazy Things that occasionally happen when a person tosses 500 fair coins honestly.

In other words, the reason we know with near certainty that if we see 500 Heads tossed, the Tosser must have been Dishonest, is simply that Dishonest people are more common (frequent!) than tossing 500 Heads. It’s so obvious, a child can see it, as indeed we all could. It’s just that we don’t notice the intuitive Bayesian reasoning we do to get there – which involves not only computing the prior probability of 500 Heads under the null of Fair Coin, Fairly Tossed, but also the prior probability of Honest Sal. Both of which we can do using Frequentist statistics, because they tell us about the future (hence “prior”). But to get the Posterior (the probability that a past event had one cause rather than another) we need to plug them into Bayes.
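For anyone who wants to verify the arithmetic, here is the comment’s Bayes calculation reproduced directly, with the same numbers it chooses (the .0001 prior and the 1000 “special” sequences are its own assumptions):

```python
# The Bayes calculation from the comment above, with its chosen numbers.
p_d = 1e-4                        # prior probability that Sal is Dishonest
p_s_given_d = 1.0                 # a Special sequence is certain if Dishonest
p_s_given_honest = 1000 / 2**500  # 1000 "special" sequences out of 2^500

posterior = (p_s_given_d * p_d) / (
    p_s_given_d * p_d + p_s_given_honest * (1 - p_d)
)
print(posterior)  # prints 1.0: the honest-toss term is below float resolution
```

The honest-toss likelihood is around 3e-148, so the posterior is indistinguishable from 1 at double precision, exactly as the comment concludes.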

The possibly unwelcome implication of this, for any inference about past events, is that when we try to estimate our confidence that a particular past event had a particular cause (whether it is a bacterial flagellum or a sequence of coin-tosses), we cannot simply estimate it from observed frequency distribution of the data. We also need to factor in our degree of confidence in various causal hypotheses.

And that degree of confidence will depend on all kinds of things, including our personal experience, for example, of an unseen Designer altering our lives in apparently meaningful and physical ways (increasing our priors for the existence of Unseen Designers), our confidence in expertise, our confidence in witness reports, our experience of running phylogenetic analyses, or writing evolutionary algorithms. In other words, it’s subjective. That doesn’t mean it isn’t valid, but it does mean that we should be wary (on all sides!) of making overconfident claims based on voodoo statistics in which frequentist predictions are transmogrified into Bayesian inferences without visible priors.

95.

(above cross-posted at TSZ, with some typos fixed).

96.
scordova says:

JWTruthInLove,

Awesome find of Shallit’s essay!

97.
scordova says:

The law of large numbers is well-accepted in mathematics.

Thus, I don’t think Barry is misusing probability with respect to the coins.

I wrote on the issue here:
The Law of Large Numbers vs. KeithS
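As a quick illustration of the law of large numbers at work (a sketch, not code from the linked post): the proportion of heads over a growing run of fair flips settles toward 0.5, which is what makes a proportion of 1.0 over 500 flips so anomalous.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# The proportion of heads over n fair flips converges toward 0.5 as n grows.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```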

98.
tjguy says:

As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I’d be suspicious and I would check for anything that had affected the outcome. But if I found nothing ‘wrong’ then I’d conclude it was a lucky fluke.

There’s no faith involved.

Well, I guess I’m a bit more simple-minded than Dr. Dembski. And I bet you would be too if you were playing a poker hand and ran into someone with that kind of “luck” opposing you.

I would also NOT conclude ‘design’ since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation.

Hmm. I would conclude “design” until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to let chance stand as a “plausible explanation” even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion.

Let’s say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design.

Which would you choose?

Which is the more rational or the more probable explanation?

99.

Hmm. I would conclude “design” until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to allow chance stand as a “plausible explanation” even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion.

We’ll just have to agree to disagree on that then I guess.

Let’s say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design.

Which would you choose?

Without other data or information pointing to the existence of a designer present at the time with the necessary skills and opportunity then I’d say chance is a more parsimonious explanation as it posits no undefined or unproven cause. I would also point out that accepting design as a plausible explanation is already heading down the path of defining and limiting the skills and motivations of the designer. Something that I’ve been told over and over again ID does not do.

Which is the more rational or the more probable explanation?

Having no independent evidence of a designer then I’d go with chance. OR, just say we don’t know.

I do not see how you can think an undefined and unproven designer is a more rational explanation. That’s just faith. I have nothing against faith but I don’t think it should be promoted as scientific. Especially when, although admittedly highly improbable, chance is ‘consistent with the laws of mathematics’ and physics and known to happen.

100.
kairosfocus says:

Dr Liddle:

The fundamental issue is that we are dealing with large config spaces and blind samples (for sake of argument).

Once we can define narrow zones of interest (so, partition the space on something separately specifiable, rather than listing out in detail the configs that we want), and once we have rather limited resources — with W = 2^500, 10^57 atoms in our solar system for 10^17 s is very limited — we have a situation where, on sampling theory and not just probability, we have very little likelihood of capturing such zones of interest by any blind process within the reach of those resources. We have no right to expect to see anything but the overwhelming bulk partition.

In the case of 500 coins, the distribution is very sharply peaked indeed, centred on 250 H:250 T.
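That peaking is easy to check exactly; the window of ±50 heads below is an arbitrary illustrative choice:

```python
from math import comb

n = 500
total = 2 ** n  # number of distinct H/T sequences

# Exact probability that the head count lands within 50 of the 250 centre.
central = sum(comb(n, k) for k in range(200, 301)) / total
print(central)    # overwhelmingly close to 1

# Versus the solitary all-heads sequence.
print(1 / total)  # (1/2)^500, on the order of 1e-151
```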

500 H is so far away from that that it is a natural special zone (and notice how simply it can be described, i.e. how easy the algorithm to construct this config is).

In more relevant cases, we have clusters, which I have described as Z1, z2, . . . zn, where our sampling resources are again constrained. For the 500 bit solar system case, we are looking at sampling the equivalent of one straw-sized pick, taken blindly, from a haystack 1,000 LY across. Even if the stack were superposed on our galactic neighbourhood, with 1,000’s of star systems, since stars are several LY apart and are as a rule much smaller than a LY across, we are in a needle-in-haystack challenge on steroids.

And, notice, I am not here demanding that only one state be in a zone, or that there be just one zone. Nope, so long as there is reason to see that zones are isolated and search resources are vastly overwhelmed, we are in a realm where the point holds.

This then extends to the genome, where a viable one starts at 100,000 – 500,000 base pairs, and multicellular body plans are looking at 10 – 100+ mn bases, dozens of times over on the scope of the solar system case. Where, also, we know that functional specificity and complexity joined together are going to sharply confine acceptable configs, as can be seen from just the requisites of text strings in English.

Such gives analytical teeth to the inductive point that the only known, observed source of FSCO/I is design.

KF

101.
kairosfocus says:

Jerad: Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on the relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities. KF

102.

Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on the relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities

If you can find something wrong with my mathematics then please point it out. I don’t see how you can argue with the fact that any given sequence of Hs and Ts, including all Hs or all Ts or HTHTHTH . . . or HHTTHHTTHHTT . . . or whatever sequence you’d like to specify, are all equally likely to occur if the generating procedure is truly ‘fair’.

Obviously any class of outcomes is more likely to occur than any given single sequence. And obviously classes closer to the ‘mean’ (depending on what your measure is) are more likely to occur. But just because ‘we’ assign meaning or significance to certain outcomes or classes of outcomes doesn’t change the mathematics.
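Both halves of that point (equal single sequences, unequal classes) can be verified exactly; 10 flips is used below simply to keep the numbers small:

```python
from fractions import Fraction
from math import comb

n = 10  # small enough to keep the numbers printable

# Every specific length-10 sequence has exactly the same probability...
p_single = Fraction(1, 2**n)
assert p_single == Fraction(1, 1024)

# ...but the class "exactly 5 heads" collects comb(10, 5) sequences,
# so it is that many times more likely than the lone all-heads sequence.
p_five_heads = Fraction(comb(n, 5), 2**n)
print(p_five_heads / p_single)  # 252
```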

103.
kairosfocus says:

With all due respect, you are conflating two very different things, and refusing to acknowledge the relevance of one of them.

Namely, you are looking at the bare logical possibility of a given single state of 500 coins as an outcome of chance and suggest that any given state is as improbable as any other on Bernoulli–Laplace indifference. (Funny how this is popped up when it suits, and discarded by your side when it isn’t; hence ever so many silly talking points about how you can’t calculate the probabilities you require, nyah nyah nyah nyah nah! [That itself leaves out of the reckoning the most blatant issue of all: sampling theory.])

But that is not what we are looking at in praxis.

What we have in fact is the issue of arriving at a special — simply describable or functionally specific, or whatever is relevant — state or cluster of states, vs other, dominant clusters of vastly larger statistical weight. I am sure you will recognise that in an indifference [fair coins, here] situation, when we have such an uneven partition of the space of possibilities, clusters of overwhelming statistical weight — which will be nearly 250 H:250 T, in no particular pattern — will utterly dominate the observable outcomes.

What happens is that the state 500 H, that of 500 T, or a state that has in it a 72 or so character ASCII text in English, are all examples of remarkable, specially describable, specific and rare outcomes, deeply isolated in the field of possibilities.

So, if you are in an outcome state that is maximally improbable on chance, in a special zone that a chance based search strategy is maximally unlikely to attain, that is highly remarkable. Especially in a situation where there is the possibility of accessing such by choice contingency as opposed to chance contingency.

This is the context too of the second law of thermodynamics, which points out that on chance based changes of state, the strong tendency is to migrate from less probable clusters of configs to more probable ones.

(In communication systems, it is notorious that noise strongly tends to corrupt messages. And so we have an actual pivotal technical metric, signal-to-noise ratio, that recognises the importance and ready identifiability of the distinction between signals and noise on typical characteristics, so much so that we can measure and compare their power levels on a routine basis as a figure of merit for a communication system. But, strictly, on logical possibility, noise can mimic any signal, so why do we confidently make the distinction? Because we have confidence in the sampling result that our observations of noise in systems will overwhelmingly come from the overwhelming bulk of possibilities.)

In short, you have been tilting at a strawman.

In the wider context, it is obvious that the root problem is that there is a strong aversion to the reality of choice contingency as a fundamental explanation of outcomes.

Let me just say that it is for instance strictly logically possible that by lucky noise on the Internet, every post I have ever made here at UD is actually the product of blind chance causing noise on the network. However, the readers of UD have had no problem inferring that the best explanation for posts under this account is that here is an individual out there who has this account.

So, while it is strictly logically possible that lucky noise has caused all of this, that is by no means the best, empirically warranted, reasonable explanation. Indeed, it is quite evident on analysis of relevant scientific investigations, that a great many things in science are explained by investigating the sort of causal factors that are empirically reliable as causes of a given effect, then once that has been established, one treats the effect as a sign of its credible, reliably established cause. Text of posts by unseen posters is a good simple case in point.

And, if you or I were to come across a tray of 500 coins with all heads uppermost, or alternating heads and tails, or ASCII code for a text in English, that would be on its face strong evidence of choice contingency, AKA design, as the best and most reasonable — though not the only possible — explanation.

That is patent.

So, why the fuss and bother not to infer the blatantly reasonable?

Because of something else connected to it. Namely, this is not isolated from something very important that has been discovered over the past 60 years or so.

That is, that the living cell has in its heart digital code and associated execution machinery implemented in C-chemistry nanomachines.

If we were to take this on the empirically grounded, reliable inference that the best explanation for codes and code-executing machines, on billions of cases in point without observed exception, is design, then that would immediately lead to the conclusion that the living cell is best explained as designed.

But those of us who do that, are commonly held up to opprobrium, to the point where I have found myself recently unfairly compared to Nazis. (And no apologies or retractions have been forthcoming when the outrage and the denial of having harboured such have been exposed.)

This is because, origins science is in the grips of an a priori ideology of evolutionary materialism dressed up in a lab coat.

Under those circumstances, of institutionalised question-begging in favour of an ideology that is actually inherently self-refuting, it is not surprising that common sense inductive reasoning is routinely sent to the back of the bus on whatever convenient excuse.

So, I think the issue is to get the ideologically loaded and polarising a prioris fixed; then we can look back at the science, underlying inductive logic and actual evidence on a more objective basis.

KF

104.

Namely, you are looking at the bare logical possibility of a given single state of 500 coins as an outcome of chance and suggest that any given state is as improbable as any other on Bernoulli–Laplace indifference.

Yes, that is what I am addressing. And if I’ve done something incorrectly then please point it out.

But that is not what we are looking at in praxis.

That is all I was doing. Just discussing the mathematics.

What we have in fact, is the issue of arriving as a special — simply describable or functionally specific, or whatever is relevant — state or cluster of states, vs other, dominant clusters of vastly larger statistical weight. I am sure you will recognise that in an indifference [fair coins, here] situation, when we have such an uneven partition of the space of possibilities, clusters of overwhelming statistical weight — which will be nearly 250 H:250T, in no particular pattern, will utterly dominate the observable outcomes.

As I already said above in different terms. I don’t know what you’re arguing against. Obviously a fairly jumbled mix of 500 Hs and Ts is more likely than any single outcome including all Hs. So?

What happens is that the state 500 H, that of 500 T, or a state that has in it a 72 or so character ASCII text in English, are all examples of remarkable, specially describable, specific and rare outcomes, deeply isolated in the field of possibilities.

Any particular single sequence is just as likely as any other single sequence in a truly ‘fair’ or random selection process. And, as I’ve said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?

So, if you are in an outcome state that is maximally improbable on chance, in a special zone that a chance based search strategy is maximally unlikely to attain, that is highly remarkable. Especially in a situation where there is the possibility of accessing such by choice contingency as opposed to chance contingency.

Why don’t you specify a null and an alternate hypothesis and a confidence interval you’d like to test? Or be more clear what you’re getting at.

In short, you have been tilting at a strawman.

I’ve been addressing a very particular mathematical point. If you can find any fault with what I’ve actually said then please point it out.

So, while it is strictly logically possible that lucky noise has caused all of this, that is by no means the best, empirically warranted, reasonable explanation. Indeed, it is quite evident on analysis of relevant scientific investigations, that a great many things in science are explained by investigating the sort of causal factors that are empirically reliable as causes of a given effect, then once that has been established, one treats the effect as a sign of its credible, reliably established cause. Text of posts by unseen posters is a good simple case in point.

How come everyone misses the point that I’ve made MANY TIMES that I would be extremely careful to first root out any bias or influence in the system before I attributed an outcome to chance?

And, if you or I were to come across a tray of 500 coins with all heads uppermost, or alternating heads and tails, or ASCII code for a text in English, that would be on its face strong evidence of choice contingency, AKA design, as the best and most reasonable — though not the only possible — explanation.

That is patent.

So, why the fuss and bother not to infer the blatantly reasonable?

What are you arguing against? If design was detectable then I assume I would discover that BEFORE ascribing a highly unusual outcome to chance!! Design would be a bias in the system, making it not ‘fair’.

You are the first one to accuse your opponent of attacking a strawman but you seem to be doing so here. Nothing I’ve said has been overturned, I was addressing a pretty basic mathematical issue, I’ve been very clear that ascribing chance is my last fall back for a highly organised outcome AFTER first making very, very, very sure there was no other detectable influence.

I don’t get it. Should I just repeat myself over and over again?

105.
kairosfocus says:

Let us observe your crucial strawmannising step:

Any particular single sequence is just as likely as any other single sequence in a truly ‘fair’ or random selection process. And, as I’ve said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?

Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself — that on a fair coin assumption or situation any one state is as probable as any one other state.

But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations.

What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance.

In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected — in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan — on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice.

So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible.

And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin’s a priori materialism.

It is as simple as that.

KF

106.
kairosfocus says:

PS: It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent — and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one — the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, H/T patterns exhibiting ASCII code in English, etc. are cases in point.

107.

Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself — that on a fair coin assumption or situation any one state is as probable as any one other state.

No, that was not a strawman distortion, that was the topic of the 22 sigma thread I responded to.

But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations.

What wider state are you talking about? I haven’t responded to any thread which was about anything other than mathematics. Intentionally so.

What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance.

Whatever. You’re talking about clusters or groups of outcomes again and I’ve already agreed they are more likely to happen.

In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected — in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan — on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice.

You can pick or define clusters or zones or partitions of outcomes that are in your interest. Sure. And you have to pick a ‘measure’ which, in this particular case was number of heads or tails. On other measures the outcome of all Hs would NOT be so far from the mean.

If you’ve got a particular situation you want me to address then bring it up.

So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible.

It’s like shouting at a storm. I’ve said, MANY TIMES, I would first try and find some explanation other than chance before I fell back on that for a highly unusual result.

And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin’s a priori materialism.

Can I get a translation please?

Have you found an error in my mathematics? Doesn’t look like it.

If you have a situation you’d like me to address that I haven’t already done multiple times then bring it up.

It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent — and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one — the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, H/T patterns exhibiting ASCII code in English, etc. are cases in point.

I’ve already stated many times my response to this. You don’t agree with me so you’re trying to intimidate me into backing down or agreeing with you by posting long, rambling paragraphs which make comprehension difficult.

I’ll say it one more time: IF I flipped a coin 500 times and got all heads I’d try very, very, VERY hard to find an explanation for it even though I know that outcome is just as likely as any other. I might not ever really believe it was due to chance. BUT, if I couldn’t find some explanation then I would write it off as a fluke, a result which is physically and mathematically consistent with the situation, with no need to invoke some other agency which I looked for and couldn’t find!!

Same with getting a randomly generated line of Shakespeare.

I am not trying to distort anything, I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances.

108.
kairosfocus says:

Jerad: All you have managed to do is to underscore my point. KF

109.

I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances.

And this is why this whole conversation is a nonsense unless we factor in our actual state of knowledge (as you have just done).

In an “ideal circumstance” we might *know* with God’s Eye (or Mathematician’s Eye) knowledge, that the coin was fair, and was fairly tossed. In which case, no matter what the sequence, we would reject Design.

But the whole point of making inferences is that we do NOT know, with God’s Eye knowledge that the coin was fair, fairly tossed.

So we have to weigh up the relative probabilities of a fair coin, fairly tossed, or something else.

And as 500 Heads is one of a tiny subset of Special sequences, and is therefore extremely improbable under that hypothesis, almost any other explanation is more likely than “fair coin fairly tossed”.

It’s really no more complicated than that.

Which is why I suggested a Bayesian formalisation of the inference, where at least we make our state of knowledge explicit.
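In the odds form of Bayes, that formalisation is one line; the trillion-to-one prior odds granted to fairness below, and the assumption that a cheat yields 500 heads with certainty, are illustrative numbers only:

```python
from fractions import Fraction

# Likelihood ratio for "not fair" vs "fair, fairly tossed" given 500 heads,
# assuming (illustratively) a cheat produces 500 heads with probability 1.
likelihood_ratio = Fraction(2**500, 1)

# Posterior odds = likelihood ratio x prior odds (odds form of Bayes).
# Grant fairness trillion-to-one prior odds (another illustrative number):
prior_odds_not_fair = Fraction(1, 10**12)
posterior_odds_not_fair = likelihood_ratio * prior_odds_not_fair

# Still astronomically in favour of "not fair": about 3e138 to 1.
print(float(posterior_odds_not_fair))
```

Changing the assumed prior changes the output, which is exactly the point: the inference depends on the state of knowledge we plug in, so that state should be made explicit.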

If we do not, we end up in silly arguments where the only difference is the amount of knowledge assumed.

Jerad isn’t suffering from “DDS” any more than Barry is suffering from “IDS”. But the whole conversation is suffering from people thinking other people are being stupid when they are simply making different but (sometimes) unspecified assumptions about what we know to start with.

It always happens in probability discussions. It’s very annoying.

*harrumph*

110.