# Evolution and Poker: Professor Says There are no Scientific Problems With Evolution

Poker players don’t lose with bad hands; they lose with great hands. I once saw a player dealt four of a kind while another player was dealt a full house. Those are the second- and third-best hands in poker, but the fellow dealing the cards had a royal flush—the highest hand in all of poker. If you have four of a kind or a full house, then you are supremely confident, and by the time those cards were dealt everyone had all their pennies on the table. It was a complete loss for those two players and an illustration of the dangers of a great hand. If you have a bad hand, then you won’t make losing bets. But great hands are susceptible to losing bets.

## 9 Replies to “Evolution and Poker: Professor Says There are no Scientific Problems With Evolution”

1. Granville Sewell says:

Cornelius,

One of the published replies to my 2000 Math. Intelligencer article made exactly the same argument. Here is how I responded to this in my book later:

In the original Mathematical Intelligencer article I made the assertion that the underlying principle behind the second law is that natural forces do not do extremely improbable things. The journal and I received several replies arguing that everything Nature does can be considered extremely improbable—the exact arrangement of atoms at any time at any place is extremely unlikely to be repeated, noted one e-mail. In another published reply [Davis 2001], the author made an analogy with coin flipping and argued that any particular sequence of heads and tails is extremely improbable, so something extremely improbable happens every time we flip a long series of coins. If a coin were flipped 1000 times, he would apparently be no more surprised by a string of all heads than by any other sequence, because any string is as improbable as another. This critic concedes that it is extremely unlikely that humans and computers would arise again if history were repeated, “but something would.”

Obviously, I should have been more careful with my wording in the first article: I should have said that the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view. A “macroscopically describable” event is just any event which can be described without resorting to an atom-by-atom (or coin-by-coin) accounting.

Carbon distributes itself more and more uniformly in an insulated solid because there are many more arrangements of carbon atoms which produce nearly uniform distributions than produce highly nonuniform distributions. Natural forces may turn a spaceship into a pile of rubble, but not vice-versa—not because the exact arrangement of atoms in a given spaceship is more improbable than the exact arrangement of atoms in a given pile of rubble, but because (whether the Earth receives energy from the Sun or not) there are very few arrangements of atoms which would be able to fly to the moon and return safely, and very many which could not.

The reader familiar with William Dembski’s “specified complexity” concept [Dembski 2006] will recognize similarities to the argument here: natural forces do not do things which are “specified” (macroscopically describable) and “complex” (extremely improbable). Both are just attempts to state in more “scientific” terms what is already obvious to the layman, that unintelligent forces cannot do intelligent things.

If we toss a billion (fair) coins, it is true that any sequence is as improbable as any other, but most of us would still be surprised, and suspect that something other than chance is going on, if the result were “all heads,” or “alternating heads and tails,” or even “all tails except for coins 3i+5, for integer i.” When we produce simply describable results like these, we have done something “macroscopically” describable which is extremely improbable. There are so many simply describable sequences possible that it is tempting to think that all or most outcomes could be simply described in some way, but in fact, there are fewer than 2^(30000) different 1000-word paragraphs, so the odds are about 2^(999970000) to 1 that a given sequence will not be that highly ordered—so our surprise would be quite justified. And if it can’t be described in 1000 English words and symbols, it isn’t very simply describable.

Footnote: To be more general, if we define a sequence or set of sequences to be “simply describable” when it can be described in m or fewer bits, and “extremely improbable” when it has probability 1/2^n or less, then the probability that any extremely improbable, simply describable event will occur is less than 2^m/2^n. Thus we must choose n to be much larger than m, and set the threshold for “extremely” improbable events sufficiently low.
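The footnote’s counting bound is easy to check numerically. Here is a short sketch in Python (the small parameter values at the end are my own illustrative choices, not from the text):

```python
import math
from fractions import Fraction

def footnote_bound(m: int, n: int) -> Fraction:
    """Sewell's footnote bound: there are at most 2^m descriptions of
    m or fewer bits, and each 'extremely improbable' event described
    has probability at most 1/2^n, so by a union bound the probability
    that ANY simply describable, extremely improbable event occurs is
    below 2^m / 2^n."""
    return Fraction(2 ** m, 2 ** n)

def log10_footnote_bound(m: int, n: int) -> float:
    """The same bound on a log10 scale, usable when m and n are huge."""
    return (m - n) * math.log10(2)

# Small illustrative case: 3-bit descriptions against 10-bit events
small = footnote_bound(3, 10)        # 2^3 / 2^10 = 1/128

# Sewell's own scale: ~30000 bits of description (a 1000-word
# paragraph) against a billion coin flips
huge = log10_footnote_bound(30000, 10 ** 9)
```

On the log scale, the billion-coin case comes out around −3 × 10^8, which matches the “2^(999970000) to 1” odds quoted in the comment above, since 999970000 · log10(2) ≈ 3.01 × 10^8.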

2. MrDunsapy says:

Here is what the headline said:
> Evolution and Poker: Professor Says There are no Scientific Problems With Evolution

Well of course not; science isn’t used in ‘evolution’. The hypothesis of ‘evolution’ is used as evidence. So no matter the probability, it has to have happened, because ‘evolution’ says so.

http://patternsofcreation.weebly.com/

3. kairosfocus says:

Dr Hunter:

Looks like some folks are confusing particular individual outcomes of a configuration space, and specific, describable clusters of such microstates.

When we can cluster like that, then the relevant probability is not that of having any one individual state, but of being in one or the others of a given cluster.

This should be familiar from the trick of designating far tails of distributions in hypothesis testing. The statistical weight of the bulk is so high relative to that of the tail that if one is somewhere by chance, one is far more likely to be in the bulk. And, to a given degree of confidence, the probability of landing by chance in an unusual outcome that fits a function or the like is minimal relative to getting there by intent.
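The far-tail point can be made concrete with an exact binomial tail computation. A sketch in Python (the 600-heads threshold is my own illustrative choice, not from the comment):

```python
import math

def tail_prob(n: int, k: int) -> float:
    """P(at least k heads in n fair coin flips), computed exactly with
    big-integer binomial coefficients before the final division."""
    favourable = sum(math.comb(n, i) for i in range(k, n + 1))
    return favourable / 2 ** n

# The bulk dwarfs the far tail: seeing 600 or more heads in 1000 fair
# flips is roughly a one-in-ten-billion event, even though it is only
# a 20% deviation from the expected 500.
p = tail_prob(1000, 600)
```

This is exactly the asymmetry the comment describes: almost all of the probability mass sits in the unremarkable bulk near 500 heads, so a chance sample has essentially no business being in the far tail.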

And, this is indeed closely related to the warrant for the second law of thermodynamics, where spontaneous changes strongly tend towards the more heavily weighted clusters.

When I therefore see the kind of objection noted, and I know that most scientists with graduate-level training will almost certainly have done basic statistics along the way, I am left to wonder why the obvious connexions are not being made.

GEM of TKI

4. kairosfocus says:

F/N: Where we have a 500-bit string, there are 2^500, or about 3 × 10^150, possibilities. The Planck-time quantum state resources of our solar system’s ~ 10^57 atoms come to about 10^102 states—roughly 1 in 10^48 of those possibilities. A sample on blind chance and mechanical necessity will be so unlikely to be in anything but the bulk that we have no right to expect anything unusual. If we see something very unusual, the most reasonable explanation is choice. Posts in this thread, as opposed to monkeys typing at random, are a case in point.
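These figures can be checked on a log scale. A quick sketch (the 10^102 resource estimate is taken from the comment as given, not independently derived):

```python
import math

# Number of distinct 500-bit configurations, on a log10 scale
log10_states = 500 * math.log10(2)      # about 150.5, i.e. ~3 x 10^150

# The comment's estimate of the solar system's Planck-time
# quantum-state search resources
log10_resources = 102

# Fraction of the configuration space those resources could ever
# sample: about -48.5 on the log scale, i.e. roughly 1 in 10^48
log10_fraction = log10_resources - log10_states
```

So even granting the full 10^102 states as samples, the search covers a vanishing sliver of the 500-bit space, which is the arithmetic behind the “1 in 10^48” figure.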

5. junkdnaforlife says:

Right on: the more tightly a coin-toss sequence can be compressed, the less likely it is to arise by chance. The probability that a sequence of 1000 coin tosses can be described in 1000 bits (one H/T per toss) is 1. However, the probability that 1000 fair coin tosses are compressible to a single repeated symbol, H or T, is close to 0. Therefore hitting 1000 heads or 1000 tails in a row is nearly impossible.
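The compressibility point can be demonstrated with a general-purpose compressor. A sketch using Python’s zlib, packing eight coin flips per byte (the seed and run length are illustrative choices):

```python
import random
import zlib

random.seed(42)  # fixed seed so the run is reproducible

# 8000 coin flips packed eight per byte:
# every flip heads (all zero bits) vs. a fair random run
all_heads = bytes(1000)
random_run = bytes(random.getrandbits(8) for _ in range(1000))

# The maximally ordered run has a very short description...
ordered_size = len(zlib.compress(all_heads, 9))
# ...while a typical fair run has none, so it hardly compresses at all
random_size = len(zlib.compress(random_run, 9))
```

On a typical run, `ordered_size` comes out to a couple of dozen bytes while `random_size` stays near the original 1000, matching the point that almost no sequences have a short description.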

6. junkdnaforlife says:

hey kf, merry Christmas pal

7. kairosfocus says:

Many happy returns!

8. mk says:

nice article

9. kairosfocus says:

F/N: it would help to read here on. Happy Christmas to all; don’t go too heavy on inductive turkeys and Christmas puddings!