
The Fundamental Law of Intelligent Design


After ten years in the ID movement, and many debates along the way, if someone were to ask me what the most fundamental law is upon which the ID case rests, I would have to say it is the law of large numbers (LLN). It is the law that tells us a set of fair coins randomly shaken will converge on 50% heads, not 100% heads. It is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental theorem of algebra, and the fundamental theorem of arithmetic. But the law of large numbers is not just a theorem; it is promoted to the status of law, almost as if to emphasize its fundamental importance to reality.

Understanding the law of large numbers first requires understanding the notion of expected value (or expectation value). Rather than giving the somewhat brutal mathematical formalism of expected value, let me give an illustration with coins. If we have a large set of fair coins, there is an expectation that approximately 50% of the coins will be heads after a vigorous shaking or flipping of the coins (a random process). That is, the expected value for the proportion of heads is 50%.

As we examine sets of coins that are very large (say 10,000 coins), the outcome will converge so close to 50% heads, so frequently, that from a practical standpoint we can say the proportion will be at or near 50% with every shaking of the set. If we consider each coin in the set as a "trial", the example illustrates the law of large numbers. Formally stated, the law of large numbers says:

the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

Law of Large Numbers
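
To make the convergence concrete, here is a minimal Python sketch (an illustration added for clarity; the batch sizes are arbitrary choices) that shakes ever-larger sets of simulated fair coins and reports the proportion of heads:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def proportion_heads(n):
    """Flip n simulated fair coins and return the proportion of heads."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# The observed proportion tightens around the expected value of 0.5
# as the number of "trials" grows -- the law of large numbers at work.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} coins: {proportion_heads(n):.4f} heads")
```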

How does this play out for ID? Before answering that question, let me classify three kinds of designs (or forms of organization).

A. Non-functional ordered objects (like a set of fair coins, all heads, or homochirality in biology)

B. Non-functional dis-ordered, but recognizably designed objects (like a set of numbered coins organized according to a pre-specified pattern, a binary representation of Hamlet, DNA strings that identify GMOs, etc.)

C. Functional objects (like components assembled into a functioning machine, a software bit stream, etc.)

In this essay, I’ll illustrate design using the law of large numbers with the “non-functional ordered objects”. I’ll save for later discussion the illustration of design in the more challenging cases of “non-functional dis-ordered, but recognizably designed objects” and “functional objects”.

If I had 500 fair coins in a box, all heads, I would conclude that the 100% proportion of heads is far from the expected value of 50% heads. That is a significant violation of the law of large numbers for random processes, so a random process is rejected as the mechanism that created the all-heads pattern. By convention, the ID community classifies objects as designed if they do not conform to the products of law and chance. Whether they are designed in the ultimate sense is a separate question, but the practical rejection of the chance hypothesis in this case is unassailable.
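
To put a number on "far from the expected value": the count of heads in 500 fair coins is binomial, with mean 250 and standard deviation sqrt(500 × 0.5 × 0.5) ≈ 11.2, so an all-heads outcome sits roughly 22 standard deviations above expectation. A quick sketch of that arithmetic (illustrative only):

```python
import math

n, p = 500, 0.5
mean = n * p                     # expected number of heads: 250
sd = math.sqrt(n * p * (1 - p))  # binomial standard deviation: ~11.18

# Distance of the all-heads outcome from expectation, in sigmas.
z = (n - mean) / sd
print(f"500/500 heads is {z:.1f} standard deviations from the mean")  # ~22.4

# Probability of that one specific outcome under the chance hypothesis.
print(f"P(all heads) = 2^-500 ~ {2.0 ** -500:.3e}")  # ~3.05e-151
```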

A typical mistake ID proponents make is saying, "the all-heads pattern happens on average only 1 out of 2^500 times, therefore the chance hypothesis is rejected". The Darwinists will counter by saying, "that pattern is no more special than any other, since every specific pattern happens only 1 out of 2^500 times; therefore all-coins-heads is consistent with the chance hypothesis". Last year, Darwinists at The Skeptical Zone tried to pull that same rhetorical stunt on me with these words:

if you have 500 flips of a fair coin that all come up heads, given your qualification ("fair coin"), that outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

But I came prepared to counter their maneuvers. 🙂 They obviously didn't anticipate that I'd debate them from an unorthodox angle, namely the law of large numbers and the application of expected value. I pointed out that, based on the binomial distribution and an expectation value of 50% heads, 100% heads is a violation of the law of large numbers and hence a violation of the chance hypothesis from a practical standpoint. My opponents in the debate were thrown into disarray. But, as always, they never admitted defeat in the exchange. They camped out at UD and would not rest until I confessed the following creed:

if you have 500 flips of a fair coin that all come up heads, given your qualification ("fair coin"), that outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

I told them, "no dice". Or maybe I should have said, "no coins". The Darwinists at The Skeptical Zone fared so badly that even arch-Darwinist Jeffrey Shallit felt it necessary to call his associates out on their folly.

The advantage of using the law of large numbers is that it brings clarity to the probability arguments. It negates the Darwinists' claim that "every pattern is just as improbable as another, therefore design is nothing special". The 500-coins-heads example illustrates how to apply the law of large numbers in identifying design in non-functional ordered objects; certain patterns are indeed special, because their very nature is at variance with the chance hypothesis.
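
The point can be made concrete with the exact binomial distribution: every specific 500-flip sequence has the same probability 2^-500, yet the aggregate event "proportion of heads within three standard deviations of 50%" absorbs about 99.7% of the probability mass, while all-heads lies hopelessly outside it. A hedged sketch (the band of 217 to 283 heads corresponds to roughly 43.4%-56.6%):

```python
import math

n = 500

def pmf(k):
    """Exact probability of k heads in n flips of a fair coin."""
    return math.comb(n, k) * 0.5 ** n

# Aggregate event: proportion of heads within ~3 sigma of 50%.
p_band = sum(pmf(k) for k in range(217, 284))  # 217..283 heads
print(f"P(43.4%..56.6% heads) ~ {p_band:.4f}")  # ~0.997

# Specific event: the single all-heads sequence.
print(f"P(exactly 500 heads)  ~ {pmf(500):.3e}")  # ~3.05e-151
```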

It occurred to me: since the law of large numbers was such a fruitful way to refute the materialists on the question of non-functional ordered designs, why not use the law of large numbers when dealing with other, more challenging kinds of designed objects? Those ideas, Designer willing, will be explored in subsequent discussions.

NOTES:

For some history of debates with Darwinists over the Law of Large Numbers see:

SSDD: A 22-sigma event is consistent with the physics of fair coins?

Law of Large Numbers vs. Keiths

Siding with mathgrrl on a point, alternative to CSI V2

Comments
I've known Nick a long time, and here is one thing he might say: "I'd reject the chance hypothesis because I know humans can make the all-heads pattern."

The proper counter-response by an ID proponent: "But what is it about the all-heads pattern that would induce you to even consider humans in the first place? Is it because the pattern lies far outside expectation? After all, a human can also make a random-looking pattern deliberately, so why does this pattern force you to consider an intelligent agency?"

To which Nick should say: "Because the pattern is far outside expectation."

To which an ID proponent would respond: "So there are indeed patterns in nature which would incline you to consider an intelligent agency at work, if you were acquainted with such an intelligent agency."

To which Nick should say: "Yes."

To which I would say: "So some patterns are more special than others?"

To which Nick should say: "Yes."

scordova, December 16, 2013 at 11:26 AM PDT
Box #33: Each individual outcome has the exact same chance of happening as any other outcome. However there are vastly more outcomes in the center of the spectrum . . .
This is the difference between looking at the outcome of a process as a specific sequence, versus looking at the outcome as some aggregate. The law of large numbers does apply to things like expected percentages of heads, just as it applies to things like the mean. But creationism/ID advocates are almost always looking at specific DNA sequences, and sequences of coin flips etc. are used as analogies to that (horrible, silly analogies, but if the creationists acknowledged how silly the analogy was, they'd probably give up creationism). In fact, I'm sure we'll get back to that sooner rather than later. I doubt they intend to argue that ID predicts we will see GGGGGGGGGGGGGGGG...x1000s in a DNA sequence, or that that would be evidence of ID.

NickMatzke_UD, December 16, 2013 at 11:16 AM PDT
Box: Thanks for pointing that out Barry. So, for once, Nick is right and I'm wrong

But the usually verbose Nick Matzke is strangely silent on a simple question: so what say you, Nick, is chance a practical explanation for 500 fair coins, all heads, or not?

Nick could say "yes", in which case we'll have it recorded that the world's leading Darwinist can't even understand basic statistics. Nick could say "no", in which case I'll ask him why. To which he should say, "it's far outside of expectation", to which I'll ask, "you mean the expectation that the law of large numbers says we should expect in practice?", to which Nick should say, "yes, Sal", to which I'll say, "So I'm right, Nick?", to which Nick should say, "Yes, you're right, Sal."

Any wagers whether Nick will answer this simple question? Now, Barry, you've been really trying to put the screws to Nick about Niles Eldredge. Nick has been so generous and quick to respond. You might consider posting this question to Nick:

Dr. Matzke, if you found 500 fair coins all heads on a tray, would you reject chance as an explanation? If yes, explain why. Thank you.

Here is the problem. If Nick says "no", he'll look like a fool. If Nick says "yes", he'll have to explain why, and if he explains why, he'll have to use exactly the line of argument I laid out. This puts Nick in a tough position. He'll have to either:

1. Say "no" and thus get disgraced, or
2. Say "yes" and thus publicly agree with a creationist, and thus get disgraced too. :-)

Whatever he says, we'll get a lot of mileage out of it.

scordova, December 16, 2013 at 10:19 AM PDT
Barry #12: No, Box. Nick is quite correct to point out that the series 500 heads in a row has the exact same chance of happening as any other 500 toss sequence. Sal knows that too, but it does not change his analysis.

Thanks for pointing that out Barry. So, for once, Nick is right and I'm wrong :) Each individual outcome has the exact same chance of happening as any other outcome. However, there are vastly more outcomes in the center of the spectrum:

Scordova #20: 47.8% – 52.2% heads would cover one standard deviation, or 68% of all possible outcomes; 43.4% – 56.6% would cover three standard deviations, or 99.7% of the cases.

So the chance that an outcome is within the 43.4% – 56.6% range is vastly more likely (99.7%) than the outcome being outside that range (0.3%). At first sight this conclusion seems contradictory to the fact that each individual outcome has the exact same chance of happening as any other outcome. Thanks to Scordova's excellent explanations I understand now that it is not.

Box, December 16, 2013 at 4:37 AM PDT
Mapou: Recall, we have here people perfectly willing to throw logic overboard, as we have seen. KF

kairosfocus, December 16, 2013 at 2:41 AM PDT
I think scordova's LLN doesn't take sequence into consideration. I am sure you are aware that some sequences will occur sooner than other sequences. If you are talking of genes, then some sequences of A, T, C, G will occur sooner than others. I think this should also be taken into account in our ID law.

coldcoffee, December 16, 2013 at 1:06 AM PDT
scordova: LLN is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics.

Since evolution is systems tending toward organization, LLN and evolution are exactly the inverse of each other and are incompatible in principle. Doubters about ID/evo must be well aware of the dramatic choice they face. They must choose between math/physics and evolution. If they believe evolution rather than ID, then they deny math/physics, full stop.

niwrad, December 15, 2013 at 11:59 PM PDT
Thanks, kairosfocus. I guess I am not well informed about this topic. I'll keep on reading.

Mapou, December 15, 2013 at 11:36 PM PDT
PS: WmAD in NFL:

p. 148: "The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified Information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . ."

p. 144: [[Specified complexity can be defined:] ". . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . ."

Any reasonably knowledgeable person reading this would and should immediately recognise the underlying issues and the cogency of this case. But for 15 years, this has met nothing but obfuscations, willful distortions and misrepresentations, and worse, from precisely those who are educated enough to know what this is saying. And recall, every biology major has had to do significant statistics; every physics and chemistry major has at least one good statistical thermodynamics course under his or her belt. The objectors know sufficient, or should know more than sufficient. That is the smoking gun. KF

kairosfocus, December 15, 2013 at 11:27 PM PDT
Mapou: Cf. just above. The point has been on the table for a very long time, and has long been sufficiently clear for reasonable people (e.g. it is quite clear in WmAD's NFL), but this is not a reasonable time; it is a highly ideologically polarised time. KF

kairosfocus, December 15, 2013 at 11:19 PM PDT
Sal, I think you may have struck the mother lode, so to speak. Sure, it needs to be fleshed out and expanded further, but I can see the beginning of something big. My only worry is that, in the case of a complex genetic code, we need to derive a principle based on the LLN that can be applied to calculate its probability and exclude chance as much as possible. One of the problems with the random mutation of genes is that it not only flips new coins into the set (new codes), but it keeps on flipping the old ones as well. I hope this makes sense.

Mapou, December 15, 2013 at 11:14 PM PDT
F/N: The LLN of course is about fluctuations, and about definable clusters of accessible microstates (accessible specific outcomes) consistent with a given macrostate. (This is the heart of thermodynamics and particularly the second law. More to follow.)

What happens is that there strongly tends to be a predominant cluster of states -- in the coin example, 50:50 H:T and neighbouring outcomes -- that as a cluster overwhelmingly dominates the population of possibilities. So, on the presumption of fair coins (two-sided dice in effect), we have exceedingly good reason to expect that 1,000-coin-toss or 500-coin-toss outcomes will be in that predominant cluster, and will therefore be near 50-50 in no particular pattern. That is what gives a basis for LLN, and it is what drives the inference that an outcome far away from that pattern is a reliable sign that something other than chance was at work in a highly contingent situation. Namely, design.

So, if one sees all heads, or all tails, or another special pattern like alternating tails and heads -- notice the specifications that set target zones -- one is entirely justified, on reliable empirical inference, to conclude that the best explanation of such ORDER is design. Given, that is, that we have a known highly contingent situation. (The order of ions in a crystal of NaCl is not a highly contingent outcome and is best explained on mechanical necessity.)

However, there is another relevant case. Coins can be seen as 1-bit devices, and we can assign H and T to 1 or 0. From this we see that we could find our 500 coins in a case where they, say, hold the ASCII code for the first 73 letters of this post or a similar message. (Notice, again, the specification.) The coins are going to be close to a 50:50 distribution, but that is not the only issue; they are in an organised and functionally specific, code-bearing pattern. Can this happen by chance? Strictly, yes -- any distribution is in principle accessible to chance. Is chance the reasonable inference? Only if the known, overwhelmingly likely explanation for such code-based patterns exhibiting FSCO/I has been ruled out on good grounds.

And that is the real problem with the typical objections to the design inference that appeal to how chance can come up with any particular outcome: they are implicitly appealing to the practically impossible, because of an implicit a priori ideological presumption. Why do I say that with such confidence? Because of the other factor at work: sampling/blind search opportunity.

500 coins is right on the solar-system-level complexity threshold used for the design inference. Why? It turns out that on the gamut of atomic resources, 10^57 atoms [mostly H and He, and locked up in good old Sol BTW . . . this is very generous], on a scale of 10^17 s, near enough the typical estimated age of the observed cosmos, and searching at a rate where every atom is an observer and searches are taken at a rate comparable to the fastest ionic reactions [~10^-14 s], the sample-size to population-size ratio of possible outcomes for 500 coins is as a one-straw sample to a cubical haystack 1,000 light years across. That is, the stack is as thick as the central bulge of our galaxy. If such a stack were superposed on our galactic neighbourhood and we were to move about at random and pick a one-straw-sized sample, the outcome would be all but certain: straw and nothing else. This is the reason why SC's appeal to LLN is highly relevant.

Given the sampling/search resources of our solar system, no practical sampling of the space for 500 coins can amount to sufficient of a scope that it is reasonable to expect, on chance, 500 H or a similar pattern or an outcome reflecting a code etc. The overwhelming bulk cluster of microstate outcomes sees to that. So, when we see the appeal to "any outcome is possible so there should be no surprise", that boils down to an implicit admission of having already ruled out, on other grounds, another possibility known to be capable of easily and repeatedly producing patterns and codes: namely, design. (And BTW, the number of reliably observed and reported cases of chance tosses producing all H, all T, etc. for 500 or more coins is? . . . You guessed it, zip.)

Bottom line: the issue is not the math of coins or LLN, but the a priori impositions that demand that we revert to a practically impossible outcome, as the alternative is ruled out of bounds a priori. And this is not exactly a new point; it is one that has been made any number of times, and there has never been a substantial refutation. (Indeed, the very same 500- or 1,000-coin example is the opening case used in one of my favourite statistical thermodynamics introductory texts, by L. K. Nash. Yes, this is the underlying statistical reasoning that undergirds thermodynamics. Another hot-button case for the objectors.)

But then, we have seen how the ilk of objectors we are routinely dealing with react to self-evident truths, which are absolutely certain. So, the issue is to address what is reasonable, and to ring-fence and red-flag what is unreasonable. And it is patent that to expect, or argue with a straight face, that within the resources of the solar system it is plausible to generate 500 H's etc. through chance tosses of fair coins is -- with all due respect -- utterly unreasonable. So, I think that it is time to face the facts of the evident rhetorical situation. KF

kairosfocus, December 15, 2013 at 11:07 PM PDT
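
(A quick check on the sampling arithmetic in the comment above: 10^57 atomic "observers" sampling for 10^17 s at 10^14 samples per second yields about 10^88 samples, against 2^500 ≈ 3.3 × 10^150 possible 500-coin configurations. A sketch using the comment's own order-of-magnitude figures:)

```python
import math

atoms = 1e57    # atomic "observers" in the solar system (per the comment)
seconds = 1e17  # rough age of the observed cosmos, in seconds
rate = 1e14     # samples per second per observer (fast ionic reaction rate)

samples_log10 = math.log10(atoms * seconds * rate)  # 88.0
space_log10 = 500 * math.log10(2)                   # log10 of 2^500, ~150.5

print(f"total samples     ~ 10^{samples_log10:.0f}")
print(f"configurations    ~ 10^{space_log10:.1f}")
print(f"fraction sampled  ~ 10^{samples_log10 - space_log10:.1f}")  # ~10^-62.5
```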
scordova, Mapou: I would bring the probability distribution of events and event sequences into the mix. However, it is true that LLN can be used to exclude chance events in many cases. But I am sure that if Darwinists are not convinced by existing proofs, they are not going to consider LLN and probability distributions either.

coldcoffee, December 15, 2013 at 10:31 PM PDT
Mapou: 1. What about other types of ordered sets, e.g., alternating heads/tails, or first half of set is all heads, etc.?

For the alternating pattern H T H T H . . ., take the set composed of every other coin, and you'd get H H H H . . . . The expected value for that set would also be 50% heads, but since the set is all heads, you can reject the chance hypothesis. That's how to deal with such patterns. The challenge is figuring out clever ways to apply the LLN.

Sal

scordova, December 15, 2013 at 10:15 PM PDT
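
(The every-other-coin trick is easy to demonstrate; a minimal sketch, my illustration rather than scordova's code:)

```python
# An alternating pattern of 500 coins: H T H T H T ...
coins = ["H", "T"] * 250

# Derive a new set from every second coin: it is all heads.
derived = coins[::2]
prop = derived.count("H") / len(derived)

# The derived set's proportion of heads (100%) is maximally far from
# the 50% expectation, so chance is rejected for the derived set, and
# hence for the alternating pattern that produced it.
print(f"derived set: {len(derived)} coins, {prop:.0%} heads")
```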
Mapou: 2. What if, instead of coins, you used something with many more possibilities such as dice or lottery numbers or, in the case of genes, a huge number of possibilities? How would the LLN help the ID hypothesis in such cases?

That's the subject of future posts. It can be done; it just takes a bit more work. I've been working on the problem for about a year. :-) For starters, consider this post about a casino cheating scandal that involved a non-random shuffle:

https://uncommondescent.com/intelligent-design/coordinated-complexity-the-key-to-refuting-postdiction-and-single-target-objections/

The next level is analyzing the probability of syntactically correct and semantically correct constructs within the computers of living systems. You can frame the expectation in terms of the probability of making functionally viable Quine computing systems. Life is like a Quine computer. But I'm getting way, way ahead of myself here in the scope of this discussion.

scordova, December 15, 2013 at 10:09 PM PDT
Querius: Every coin flip is a separate event.

We are not concerned with physical events. We are concerned with sampling events in our mathematical model. In the case mentioned, with a sequence of 500 tosses, that full sequence is a single sampling event. Yes, you can have a different model where each coin toss is a sampling event. But that changes the question. The questions to be asked about this different model are different from the questions asked about the original model. Different questions have different answers.

Neil Rickert, December 15, 2013 at 10:02 PM PDT
Box, you have some good instincts. Having 50% or 49% or 48% heads would not cause any notice, but 100% does. Using the formula I provided here:

https://uncommondescent.com/mathematics/ssdd-a-22-sigma-event-is-consistent-with-the-physics-of-fair-coins/

47.8% – 52.2% heads would cover one standard deviation, or 68% of all possible outcomes; 43.4% – 56.6% would cover three standard deviations, or 99.7% of the cases. Now if we used a much larger set of coins, say 1,000,000, then 99.7% of the cases would lie within a deviation of only 0.15% from expectation. That is to say, in a sample size that large, a proportion of heads above 50.15% would incline someone to suppose that there could have been some manipulation; at 50.5% heads (10 standard deviations), you'd be almost sure there were some shenanigans going on. A multi-sigma (multiple standard deviation) event allowed an online poker cheating ring to be identified:

http://www.nbcnews.com/id/26563848#.Uq6VejaA3rc

When that information was posted, Michael Josem, a mathematics-minded Australian poker player, charted NioNio's results in comparison to the results of 870 "normal" accounts with at least 2,500 hands recorded by poker-tracking software. The result, seen at left, showed that NioNio's win rate was 10 standard deviations above the mean, or less likely than "winning a one-in-a-million lottery on four consecutive days," Josem said.

In the case of 500 coins all heads, that configuration approaches a whopping 22 standard deviations. You'd really think something was rigged! But apparently some Darwinists will hold fast to the notion that a 22-sigma deviation from the expectation of large numbers is perfectly consistent with random processes. :roll:

Sal

scordova, December 15, 2013 at 10:01 PM PDT
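
(The ranges scordova quotes follow from the standard deviation of the proportion of heads, sqrt(p(1-p)/n); a quick sketch reproducing the figures, up to rounding:)

```python
import math

def sigma_band(n, k, p=0.5):
    """k-standard-deviation band around p for the proportion of heads in n coins."""
    sd = math.sqrt(p * (1 - p) / n)
    return p - k * sd, p + k * sd

lo, hi = sigma_band(500, 1)
print(f"500 coins, 1 sigma: {lo:.1%} - {hi:.1%}")        # ~47.8% - 52.2%
lo, hi = sigma_band(500, 3)
print(f"500 coins, 3 sigma: {lo:.1%} - {hi:.1%}")        # ~43.3% - 56.7%
lo, hi = sigma_band(1_000_000, 3)
print(f"1,000,000 coins, 3 sigma: {lo:.2%} - {hi:.2%}")  # ~49.85% - 50.15%

# All heads out of 500, measured in standard deviations of the proportion:
sd = math.sqrt(0.25 / 500)
print(f"all heads: {0.5 / sd:.1f} sigma")                # ~22.4
```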
Sal, I'm digging this post because it is clearly and succinctly worded. I understand the 50% probability predicted by the law of large numbers as it applies to binary coins. I have a few concerns/questions.

1. The LLN does not say anything about the order of the set, only about the probability of heads, right? But it does predict that certain ordered sets (such as all heads) are less probable. What about other types of ordered sets, e.g., alternating heads/tails, or first half of set is all heads, etc.?

2. What if, instead of coins, you used something with many more possibilities, such as dice or lottery numbers or, in the case of genes, a huge number of possibilities? How would the LLN help the ID hypothesis in such cases?

3. Or am I mistaken in item 2 above with regard to genes? Should we instead use a huge number of 4-sided "dice" to represent DNA codes? If so, what would the LLN predict as far as the probability of the 4-letter code distribution in an average gene?

Mapou, December 15, 2013 at 9:44 PM PDT
Barry Arrington: I know to a moral certainty that playing an optimal craps strategy means only that I will lose my money at the slowest possible rate. . . . Therefore, I know that if I bet one million dollars (not that I have that kind of money!) one dollar at a time, at the end of the game it is mathematically certain that I would win only approximately $986,000, for a net overall loss of approximately $14,000.

You can beat the casino craps game if there is a promotion in play, and I did just that with a partner. I had her bet the passline with the exact same amount that I would bet on the don't pass. The trick was trying to keep the casino from figuring out we were in collusion. It's perfectly legal, but casinos hate people using their brains. I dressed like a boy from the hood, and she like a debutante. We would basically cancel out each other's losses, except when two sixes were rolled; in that case she would lose, and I'd merely push. So there is a slight loss rate with almost non-existent variance.

So why did we do this? We knew that if we even played a few hours, the casino would lavish us with lots of goodies that would exceed our loss, plus some cash back that would theoretically cancel our cash loss. If the comp accrual rate, in terms of cash, food, and hotels, in addition to the promotional coupons you get in the mail, outdoes your expected loss, you can get a slight cash edge and some nice free vacations. Now if the variance for one of the partners is particularly acute, that partner might get some generous offers in the mail to boot. The excitement here wasn't so much craps, but getting away with the scheme and outsmarting the opponent. Those were some fun memories . . . I hope that might help you get some nice vacations and see some more nice shows.

Sal

scordova, December 15, 2013 at 9:29 PM PDT
Barry Arrington: Tell that to the casino owners who bet tens of billions of dollars on the law every year. You would get a chuckle or two I suspect.

You are confusing a mathematical model with a theorem in mathematics. Look at the actual statement of the law of large numbers, as for example in the Wikipedia article. You cannot apply that to reality. You can apply it to a suitable mathematical model of some aspect of reality, which is how casinos use it.

Neil Rickert, December 15, 2013 at 9:24 PM PDT
“It is the same basic theory that casinos use to fleece gamblers”

Sal, I have a confession. I do not always put my money where my mouth is. I have a weakness for craps. I know to a moral certainty that playing an optimal craps strategy means only that I will lose my money at the slowest possible rate. The optimal mathematical strategy is well known: the true odds of winning the "don't pass" bet are 976:949 against. The house pays 1:1 on this bet for a 1.36% advantage. Therefore, I know that if I bet one million dollars (not that I have that kind of money!) one dollar at a time, at the end of the game it is mathematically certain that I would win only approximately $986,000, for a net overall loss of approximately $14,000.

Why do I play? Because it's fun! Duh. The fun comes from the short-term variance, which gives the illusion that the game can be beaten. Example: my wife and I had tickets to David Copperfield a couple of months ago. We were early, and I asked her if she wanted me to teach her how to play the game while we waited. She said OK, so I started playing with $200. The next shooter caught a hot hand, and 15 minutes later I cashed out with $650, and a good time was had by all. Of course, I know that unlike my poker career (in which superior play can create positive EV for the player), if I continue to play craps I will give that $450 back to the house plus a little more (assuming I continue to play optimally; if I don't play optimally I will give back a lot more).

So the key to playing craps? I should enjoy the short-term variance, but should not fool myself about the long-term inevitability, which means I should expect to lose in the long run and budget my play accordingly. My inevitable net losses will be just a "cost of entertainment", no different in principle from the money I paid for the David Copperfield tickets.

Barry Arrington, December 15, 2013 at 9:00 PM PDT
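
(Checking Barry's numbers: on the standard don't-pass rules there are 949 winning, 976 losing, and 55 pushed outcomes out of 1,980 equally likely resolutions; those outcome counts are standard craps figures, not stated in the comment. The edge and the expected loss on $1,000,000 of one-dollar bets follow directly:)

```python
# Standard don't-pass (bar-12) outcome counts out of 1,980 resolutions.
wins, losses, pushes = 949, 976, 55
total = wins + losses + pushes  # 1980

edge = (losses - wins) / total    # house edge per bet made
print(f"house edge: {edge:.4%}")  # ~1.3636%

# Expected result of betting $1,000,000 one dollar at a time.
stake = 1_000_000
print(f"expected loss: ${edge * stake:,.0f}")  # ~$13,636, i.e. ~$14,000
```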
Barry, Clint Eastwood once said, "a man's got to know his limitations." I've never played a hand of poker in my life, despite doing quite well in blackjack by counting cards and moving the expected value in my favor. I don't play poker because I know I'd have to play against guys like you. :-) I'd rather play against an easier opponent.

Sal

PS: I once took the Blackhawk casinos for a couple thousand before I scooted out of town. :-)

scordova, December 15, 2013 at 8:34 PM PDT
Sal, as a poker player, an understanding of the law of large numbers is one of the keys to understanding my game. Indeed, it is fair to say that someone who does not understand the law can never be a good, much less a great, poker player. The best book on basic poker theory (Sklansky's The Theory of Poker) uses the law at a fundamental level. Sklansky says that poker is not about winning any particular hand or any particular session. It is about making correct decisions, and if you consistently make correct decisions you will win in the long run even if you lose in the short run. That is why Sklansky says that you should not think of poker in terms of individual sessions. Instead, you should think of your entire poker life as one long session. That way, by making correct decisions, you skew the expected value in your favor, and over a sufficient number of trials (tens of thousands of hands, not hundreds of hands), the law of large numbers says you are bound to be a net winner. It is the same basic theory that casinos use to fleece gamblers, except that in poker you have a chance to be in the position of the casino.

Barry Arrington, December 15, 2013 at 8:27 PM PDT
Nice post Sal. Maybe 500 doesn't seem like a "large" number to Darwinists (though for purposes of demonstrating your point it certainly is). Try "one billion heads." Hey Darwinists, if you found one billion heads in a row, would that be consistent with a chance hypothesis? After all, what Nick says about 500 is true about one billion, i.e., "any other individual sequence of [one billion] heads/tails also has the same probability."

Barry Arrington, December 15, 2013 at 8:16 PM PDT
Neil Rickert: Moreover, by itself, [the law of large numbers] is a theorem in mathematics with no relevance to reality

Tell that to the casino owners who bet tens of billions of dollars on the law every year. You would get a chuckle or two, I suspect.

Box: If I'm right, Matzke cannot say that "any other individual sequence of 500 heads/tails also has the same probability". Because sequences which are accommodated by the '50% heads set' have the highest probability.

No, Box. Nick is quite correct to point out that the series 500 heads in a row has the exact same chance of happening as any other 500-toss sequence. Sal knows that too, but it does not change his analysis. Nick is also correct that a simple probability calculation (i.e., "it is vastly improbable") does not get you to a design inference. See my post here for an explanation:

https://uncommondescent.com/intelligent-design/jerads-dds-causes-him-to-succumb-to-millers-mendacity-and-other-errors/

Finally, Nick is right that a design inference under these circumstances requires a specification. But before he shot off his mouth, he should have checked the post more closely, because, as Sal explains above, he did provide a specification. Thus, shooting off his mouth in this case only made him look foolish.

Barry Arrington, December 15, 2013 at 8:13 PM PDT
Querius: . . . is still 500! (factorial) times as large as all heads, an extremely large number.

Small correction (I think): I got C(500,250) = 500!/[(250!)(250!)] = 1.17 x 10^149

scordova, December 15, 2013 at 8:13 PM PDT
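
(scordova's corrected figure is easy to verify with exact integer combinatorics; a quick check:)

```python
import math

c = math.comb(500, 250)  # number of 500-flip sequences with exactly 250 heads
print(f"C(500,250) = {float(c):.4e}")  # ~1.1674e+149, matching the comment

# Probability of *exactly* 250 heads in 500 fair flips:
print(f"P(250 heads) = {c * 0.5 ** 500:.4f}")  # ~0.0357
```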
Some of you materialists should be ashamed of yourselves. Would any of you, teaching the next generation, assert with a clear conscience that if an individual found a configuration of 500 fair coins all heads, then from a practical standpoint they should consider chance as a possible mechanism? Even though 500 coins all heads is as probable as any other specific sequence, that is not an argument in favor of accepting chance as the reason the coins are all heads. The reason we accept that a random process can create sequences of 50% heads is that such an outcome is within expectation. So why reject 100% heads as being the result of chance, but not 50% heads? Answer: the law of large numbers.

scordova, December 15, 2013 at 8:10 PM PDT
Neil @ 2: Every coin flip is a separate event. What you're suggesting concerns averages of averages, which gets more complicated. Take a look at Stein's paradox for more information. -Q

Querius, December 15, 2013 at 8:02 PM PDT
Neil Rickert: Instead of randomly tossing coins, let's randomly shuffle up the molecules in the atmosphere. The law of large numbers says you should get mild weather.

Man, Neil, you're on a roll. :roll: That's another quote of the day. Everyone reading this with any modest science background knows you just made that up and that it isn't true.

scordova, December 15, 2013 at 7:54 PM PDT
Professor Matzke, if you apply the binomial theorem, the probability of 100% heads is (1/2)^500. Getting exactly 250 heads out of 500, while not very probable, is still 500! (factorial) times as large as all heads, an extremely large number. Incidentally, statistical analysis is exactly why Gregor Mendel is believed to have "enhanced" the results of his genetics experiments (confirmation bias). But I'm sure you already know that. Now, about that small blue planet that I challenged you with . . . ;-) -Q

Querius, December 15, 2013 at 7:53 PM PDT
