Uncommon Descent Serving The Intelligent Design Community

Yes, Lizzie, Chance is Very Often an Explanation


Over at The Skeptical Zone Elizabeth Liddle has weighed in on the “coins on the table” issue I raised in this post.

Readers will remember the simple question I asked:

If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the “heads” side of the coin, how on earth would you test “chance” as a hypothesis to explain this particular configuration of coins on a table?

Dr. Liddle’s answer:

Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.

Staggering. Gobsmacking. Astounding. Superlatives fail me.

Not only is Dr. Liddle’s statement false, it is the exact opposite of the truth. Indeed, pharmaceutical companies, to name just one example, have spent countless billions of dollars in clinical trials of drugs attempting to rule out the “chance explanation.”

Don’t take my word for it. Here is a paper called What is a P-value? by Ronald A. Thisted, PhD, a statistics professor in the Departments of Statistics and Health Studies at the University of Chicago. The abstract states:

Results favoring one treatment over another in a randomized clinical trial can be explained only if the favored treatment really is superior or the apparent advantage enjoyed by the treatment is due solely to the working of chance. Since chance produces very small advantages often but large differences rarely, the larger the effect seen in the trial the less plausible chance assignment alone can be as an explanation. If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied. The p-value measures consistency between the results actually obtained in the trial and the “pure chance” explanation for those results. A p-value of 0.002 favoring group A arises very infrequently when the only differences between groups A and B are due to chance. More precisely, chance alone would produce such a result only twice in every thousand studies. Consequently, we conclude that the advantage of A over B is (quite probably) real rather than spurious.

(emphasis added)

In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance. The whole point of the trial is to see if the company can rule out the chance explanation, i.e. to rule out the null hypothesis that the results were due to chance, i.e., the chance hypothesis. So, if “chance is not an explanation” what is the point of spending all those billions trying to rule it out?
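The logic of ruling out chance can be made concrete. Here is a minimal sketch (my own illustration, not from Thisted's paper): the exact one-sided p-value for observing k or more heads in n tosses under a fair-coin null. For 500 heads out of 500, the chance hypothesis assigns probability (1/2)^500, about 3 × 10^-151.

```python
from fractions import Fraction
from math import comb

def binomial_tail_p(n, k, p=Fraction(1, 2)):
    """Exact P(X >= k) for X ~ Binomial(n, p): the one-sided p-value
    against the null hypothesis that each coin is a fair, independent toss."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 500 coins, all heads: the fair-toss chance hypothesis assigns this
# outcome a probability far beyond any conventional rejection threshold.
p_val = binomial_tail_p(500, 500)
print(f"{float(p_val):.2e}")  # ~3.05e-151
```

Any conventional significance level (0.05, 0.01, even 0.002) would reject the fair-toss hypothesis here by an enormous margin.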

Want more? Here’s a paper from Penn State on the Chi-square test. An excerpt:

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel’s laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the “goodness of fit” between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

(emphasis added)
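The arithmetic behind the Mendel example (8 males observed, 10 expected, out of 20) can be sketched in a few lines. The function name is mine; for one degree of freedom the chi-square tail probability reduces exactly to the complementary error function.

```python
from math import erfc, sqrt

def chi_square_1df(observed, expected):
    """Chi-square statistic and p-value for a two-category comparison
    (one degree of freedom)."""
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = erfc(sqrt(stat / 2))  # chi-square(1) upper tail
    return stat, p_value

# Mendel example: 8 males and 12 females observed vs. 10/10 expected.
stat, p_value = chi_square_1df([8, 12], [10, 10])
print(stat, round(p_value, 3))  # 0.8 0.371 -- chance comfortably explains the deviation
```

A p-value of about 0.37 means a deviation this large or larger arises by chance roughly a third of the time, so here the null hypothesis stands.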

Obviously, asking the question, “were the deviations the result of chance, or were they due to other factors” makes no sense if as Liddle says, “chance is not an explanation.”

I don’t know why Dr. Liddle would write something so obviously false. I am certain she knows better. “Darwinist Derangement Syndrome” or just sloppy drafting? I will let the readers decide.

Comments
#35 Box
You have picked out one sentence of Lizzie's which is not her best (I suspect a typo). The passage as a whole is authoritative and well-written. Do you agree that comment 26 is an example of browbeating?

Mark Frank
December 19, 2013, 09:42 AM PDT
MF #29: I have just seen that Lizzie has addressed this OP on TSZ in a more complete and rigorous fashion
Lizzie: But nobody IS denying ID as an explanation for the configuration of coins. I have rejected the hypothesis that they were fairly tossed. That is not the same as inferring that they were laid by an ID. I do not “agree that ID is the best explanation” although, given the nature of coins and tables, it probably is, just as it would be if they’d been tossed (most likely tosser is an ID).
Very, very complete and rigorous indeed. She has stopped making any sense whatsoever. This is scary ….

Box
December 19, 2013, 09:34 AM PDT
Comment #26 above is a prime example of sniping and browbeating. “I can’t believe someone with your intellect/education/experience would say something stupid as . . .”
It's strange, Mark. I've looked back and taken note, and I cannot find you chiding, or calling for civility from, the likes of Matzke or others arguing for your side of the aisle. If I were to venture to TSZ or PT, would I see you ardently defending ID proponents from incivility? You aren't a hypocrite, are you Mark? You also wouldn't be trying to shift focus from a nonsensical position to a perceived moral high ground to invalidate your opponents, would you?

TSErik
December 19, 2013, 09:27 AM PDT
MF: here we go again on probability hyps. While Wm AD did speak on this in connexion with a theoretical value, it is blatant that where we have 500+ bits of informational complexity and a solar system scope, using the 10^57 atoms as observers observing every 10^-14 s for 10^17 s will only be able to sample as 1 straw to a 1,000 light year thick cubical haystack of the config space. In the case of the observed cosmos as a whole, 1,000 bits will suffice to swamp search capacity to a much worse degree. So, sampling resources, or rather cosmic scale lack thereof, dominates any blind search on blind chance plus mechanical necessity. As a consequence, we have no good reason to expect the allowed blind search of any character limited by atomic resources to find specific, rare clusters of configurations. This has been pointed out any number of times and has been willfully ignored. That speaks volumes, utter volumes. There is just one empirically warranted source of FSCO/I, and it is design by intelligence. The search challenge above easily tells why. But then, we are dealing here with an ilk that will not acknowledge self evident truths. KF

kairosfocus
December 19, 2013, 09:17 AM PDT
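The sampling arithmetic in KF's comment can be checked directly, taking his figures (10^57 atoms, one observation per 10^-14 s, for 10^17 s) at face value as assumptions:

```python
# Taking the comment's figures at face value: 10^57 atoms, each making
# one observation every 10^-14 seconds, for 10^17 seconds.
atoms = 10**57
observations_per_second = 10**14
seconds = 10**17

max_samples = atoms * observations_per_second * seconds  # 10^88 observations
config_space = 2**500                                    # ~3.27e150 configurations
fraction_sampled = max_samples / config_space
print(f"{fraction_sampled:.2e}")  # ~3.05e-63 -- a vanishingly small sample of the space
```

On these figures, even this maximal count of observations covers roughly one part in 10^62 of the 500-coin configuration space.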
#23 Box
Not that I agree – I accept the necessity for self-evident truths and I don’t believe it was helpful that you went on about coins in packages – but can you name one other ID tactic? Because I have never witnessed a discussion where the tables were actually turned. Never have I seen that ID-proponents were forced to adopt the methods eloquently described by Querius in post #9.

Comment #26 above is a prime example of sniping and browbeating. “I can’t believe someone with your intellect/education/experience would say something stupid as . . .”

Mark Frank
December 19, 2013, 09:15 AM PDT
#28 Barry
But there was an implicit probability model in the case Lizzie was discussing (the 500 heads scenario).
Barry, when discussing the 500 coins I repeatedly asked if you meant a particular probability model (50% probability of each coin being heads or tails, independent of the other coins). I said that if this is what you mean by chance, then I reject it – Lizzie, I am sure, would do the same. However, you refused to confirm that was what you meant. Is that all you meant by chance in relation to the 500 coins? If so, we can all agree and go on to something more useful.

Mark Frank
December 19, 2013, 09:06 AM PDT
Something Dr. Liddle actually said at TSZ:
But nobody IS denying ID as an explanation for the configuration of coins. I have rejected the hypothesis that they were fairly tossed. That is not the same as inferring that they were laid by an ID. I do not “agree that ID is the best explanation” although, given the nature of coins and tables, it probably is, just as it would be if they’d been tossed (most likely tosser is an ID)
William J Murray
December 19, 2013, 09:05 AM PDT
I have just seen that Lizzie has addressed this OP on TSZ in a more complete and rigorous fashion than my comments. I would like to emphasise that she is someone who lives and breathes (and teaches) statistics professionally. Of course an argument from authority is not proof, but it does merit trying to understand what the authority is saying.

Mark Frank
December 19, 2013, 09:01 AM PDT
Mark:
However, to use Chance in the abstract without an implicit or explicit probability model (in this case the null hypothesis) explains nothing
But there was an implicit probability model in the case Lizzie was discussing (the 500 heads scenario). So from your comment, I take it you agree Lizzie was wrong. OK.

Barry Arrington
December 19, 2013, 08:59 AM PDT
In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance.

It's really not. The null is that the treatment has no effect. It's possible (and in fact quite likely) to reject that hypothesis at some p-value threshold or other and still have it be most probable that the apparent effect is due to chance (i.e. if a hypothesis is very unlikely, a "significant" p-value only pushes the probability a little towards the belief that it is true). The argument about chance as an explanation seems like a complete waste of time to me. If you can carefully define what the chance hypotheses are (e.g. sampling from a known probability distribution), then I guess chance is an explanation, even if there is a mechanistic reason underlying the abstraction we make for that variation.

wd400
December 19, 2013, 08:56 AM PDT
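wd400's point about low-prior hypotheses can be sketched with Bayes' theorem and deliberately made-up numbers (the prior and likelihoods below are illustrative assumptions, not anyone's real figures):

```python
def posterior(prior, p_data_if_true, p_data_if_chance):
    """P(hypothesis | data) via Bayes' theorem for a two-hypothesis setup."""
    numerator = prior * p_data_if_true
    return numerator / (numerator + (1 - prior) * p_data_if_chance)

# An initially unlikely hypothesis (prior 1 in 1000), data with p = 0.002
# under the chance-only null, assumed 50% likely if the hypothesis is true:
print(round(posterior(0.001, 0.5, 0.002), 2))  # 0.2 -- still probably chance
# The same "significant" data with an even-odds prior is nearly conclusive:
print(round(posterior(0.5, 0.5, 0.002), 2))    # 1.0
```

The same p = 0.002 result leaves the unlikely hypothesis at only about 20% credibility, illustrating that a significant p-value and a probable hypothesis are different things.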
Barry:
RB, your assertions in 15 are wrong in every particular. Darwinists’ willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze.
RB:
I eagerly await your rebuttal of each of those particulars.
In the words of the man in black, “get used to disappointment.” Your assertions in 15 are so egregiously off base that they indicate one of two things: (1) someone who is invincibly stupid and incapable of understanding the issues; or (2) someone being intentionally dishonest and attempting to obscure the issue. Either way, it is pointless to engage with you. BTW, charity compels me to assume (1) is true. For the readers: I am not going to rise to RB’s bait. If anyone has a good faith question about the nonsense he spewed in 15, post it and I will answer it, or, better yet, go read the paper for yourself.

Barry Arrington
December 19, 2013, 08:55 AM PDT
#21 BA - I was attempting an ironic comment on your debating style, which mostly comprises assertions that you are right. As it happens, I think Thisted's use of "chance" is pretty much the same as Lizzie's, but he was a bit sloppy in suggesting that chance is the explanation. If the p-value is high enough, then the explanation may well be the null hypothesis (whatever that is), which incorporates a probability model. The variation in this model can be called chance (i.e. the bit we can't explain, as described in #7). As noted, chance in this sense may well include some intended or designed element which has not been detected. However, to use Chance in the abstract without an implicit or explicit probability model (in this case the null hypothesis) explains nothing. In fact it is pretty much meaningless. If you don't believe me, ask William Dembski. He recognises the need for a specific hypothesis which includes an element of chance when he defines CSI.

Mark Frank
December 19, 2013, 08:32 AM PDT
KF: You are quite right, and have fleshed out in your comment some of the things I had in mind by my obscure "Now I happen to think there are problems with this argument and that chance may indeed be real." Furthermore, as you point out, the whole point of many, perhaps most, statistical analyses is to reject the chance explanation. I don't know if Lizzie is referring to the word "chance" in a very particular usage in a very particular paper and disputing how it is used in that case. But as a general matter, to say that chance isn't an explanation is just silly.

Eric Anderson
December 19, 2013, 08:13 AM PDT
Mark Frank,
MF #8: I personally get very frustrated when I raise a point and the response is not to address the point but to declare that the opponent’s position is self-evident or that I am being pedantic in trying to define something in detail (two favourite ID tactics).
Not that I agree - I accept the necessity for self-evident truths and I don't believe it was helpful that you went on about coins in packages - but can you name one other ID tactic? Because I have never witnessed a discussion where the tables were actually turned. Never have I seen that ID-proponents were forced to adopt the methods eloquently described by Querius in post #9.

Box
December 19, 2013, 07:46 AM PDT
BA:
RB, your assertions in 15 are wrong in every particular.
I eagerly await your rebuttal of each of those particulars.

Reciprocating Bill
December 19, 2013, 07:39 AM PDT
Mark, if that is not what you meant, then what was your point? You said you were defending Lizzie, so I assumed you were trying to make what she said not conflict with what the professor said. If you are now backing off and admitting Lizzie was gobsmackingly wrong, I'm OK with that too. Cheers.

Barry Arrington
December 19, 2013, 07:36 AM PDT
#19 BA: "The professor of statistics really means the same thing as Lizzie." Barry says so. I guess it must be true. Interesting that "chance" in this sense includes design!

Mark Frank
December 19, 2013, 07:23 AM PDT
Lizzie: Chance is not an explanation.
Professor of statistics: The whole point of statistical testing is to rule out the “chance explanation.”
Mark Frank 1: The professor of statistics really means the same thing as Lizzie.
Mark Frank 2: “There is a lot of stuff here about how difficult ID opponents are to deal with.”
Irony. You know how I love it.

Barry Arrington
December 19, 2013, 07:01 AM PDT
SC: Pardon, but I am a little uncomfortable with a maximising-uncertainty definition for chance phenomena [which points to flat randomness], as random variables can also show bias by central tendency or inclination to one end or another of a range of possibilities or more. That is why I prefer a more physical approach that starts with a paradigm case such as fair dice, then uses it to introduce the concept of highly contingent outcomes for similar initial conditions that have no credible intelligent direction. As you know, I spoke of clashing uncorrelated chains of events and also of the sort of hard-core written-in randomness that we find in quantum phenomena and statistical mechanics etc. For these I think the model of a box of marbles with pistons at the ends that can give a hard push and set in train movements and collisions culminating in Maxwell-Boltzmann statistics is useful and points to thermodynamics. With Brownian motion as an observable and sufficiently close case that played a role in the award of a Nobel prize.

That of course then raises the issue of when we see from results that intelligence is a likely cause, and that raises the issue that at some reasonable threshold of complexity, measured by scope of configuration space and available search resources, a blind process such as chance becomes maximally implausible. It is not hard to see – save for those with a will problem – that something that is functionally specific and complex beyond 500 bits worth of possible configs will not plausibly result from blind chance and/or mechanical necessity. As the 500 H coins in a row case will aptly illustrate, and as would a similar row of coins spelling out the ASCII code for the first 72 or so characters of this message. KF

kairosfocus
December 19, 2013, 06:49 AM PDT
RB, your assertions in 15 are wrong in every particular. Darwinists' willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze.

Barry Arrington
December 19, 2013, 06:43 AM PDT
Q @ 6: Your list reminds me of the last argument I had with my wife. :-)

Barry Arrington
December 19, 2013, 06:34 AM PDT
MF:
He is pointing out that a result may deviate from the expected value by chance as opposed to some underlying cause.
Mark is right. “Chance explanation” in the context of Thisted’s paper refers to the fact that even perfectly executed random sampling from a population will select samples with means (of whatever variable is of interest) that inevitably differ to some degree from the mean of the population from which the samples are drawn. Samples may also display differing means. Nothing is being hypothesized to “cause” either individual measured values or sample means to take on the values they do, apart from the probabilities inherent in random sampling. The “chance” of concern is inherent in the experimental sampling procedures, not the phenomenon being measured.

Fortunately, the probability that random sampling error will result in a sample with a mean that differs from the population mean by a given value is exactly calculable, given knowledge of the variability of the value of interest and the size of the sample. “Ruling out chance” refers to quantifying the confidence one has that the difference one observes between sample mean and population mean (or between the means of several samples) is not likely to have arisen due to sampling itself. The “p-value” is an arbitrary threshold vis-à-vis that confidence.

Experimental variables that become the focus of hypothesis testing differ. What is hypothesized is that the sample and population mean (or multiple sample means) of the dependent variable of interest differ due to variations in an independent variable, ideally one manipulated by the experimenter. With appropriate experimental controls and large enough sample sizes, the study acquires power sufficient that causal relationships may be established against the background of differences due to sampling error - always within the limitations of that confidence. So we are detecting hypothesized causal relationships against a background of statistical noise due to limitations inherent in random sampling - that is, inherent in the experimental procedure.

This sense of “chance” is therefore NOT on the same footing as an “explanation” as are the independent variables one is investigating.

Reciprocating Bill
December 19, 2013, 04:17 AM PDT
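Reciprocating Bill's sampling-error point can be illustrated with a small simulation (the population parameters below are arbitrary choices of mine): sample means scatter around the true mean with a predictable spread, sd/sqrt(n), with no cause at work beyond the sampling itself.

```python
import random
import statistics

random.seed(1)  # reproducible illustration
population_mean, population_sd, n = 100.0, 15.0, 25

# Draw 1000 independent samples of size n and record each sample mean.
sample_means = [
    statistics.fmean(random.gauss(population_mean, population_sd) for _ in range(n))
    for _ in range(1000)
]

# The means scatter around 100 with standard error sd/sqrt(n) = 3.0,
# purely because of random sampling.
print(round(statistics.fmean(sample_means), 1))
print(round(statistics.stdev(sample_means), 1))
```

The observed spread of the simulated sample means comes out close to the theoretical standard error of 3.0, which is exactly the "chance" a significance test quantifies.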
EA: As I had to point out yesterday, chance denotes credibly undirected contingency in a situation. Thus, when chance acts – I will explain a bit more – we will see, for quite similar initial conditions, a variation of outcomes across some range in accord with some distribution or other. A variation that is consistent with undirected contingency.

The common way for this to happen at macro level is based on the butterfly effect and the uncorrelated collision of causal chains that are often deterministic in themselves. E.g. a die drops under 9.8 N/kg, hits a table and then tumbles and settles. Thanks to unavoidable irregularities and variations, plus twelve edges and eight corners, we see a fair die giving a good imitation of a flat random distribution with the values from 1 to 6. It is reasonable to summarise this sort of undirected contingency under the name, chance. In effect we are getting a random variable as our outcome that sufficiently mimics mathematical models of randomness to be good enough for government work. (And don't ask me about where dice or the equivalent are used in government work, on the principle of: if you eat sausages, don't visit a sausage factory. Let me just say that my Dad once taught me how to use a telephone directory as a poor man's random number table, as the line codes are generally uncorrelated with names; so even though names are not random and line codes are not random, the uncorrelated clash sufficiently often is. But it won't work if all the Smiths live in the same district and all the Browns in another.)

The second area is one where randomness may be directly manifest: quantum based phenomena, especially potential barrier tunnelling. Alpha particle emission is a classic case in point. A random rate effect, giving rise to a reliable and precise half-life for a sufficiently large sample. It is also reasonable in this case to speak about a chance process.

So, there is nothing wrong whatsoever in discussing chance causal factors in these sorts of contexts. Where of course in physics these factors came in once gases were studied through kinetic theory and statistical mechanics. It was soon realised that the best explanation of gas behaviour was random molecular motions, connected to temperature as an index of the average random kinetic energy per degree of freedom. (And that is getting too close to SC's overkill.) Let's just say that the phenomenon of Brownian motion was recognised as a manifestation of this motion, and from this the reality of atoms and molecules was firmly established by Einstein, in one of the papers that led to his Nobel Prize. (He did not win the prize because of relativity!)

So, when I see the sorts of dismissals we are seeing, it is clear that the objectors are refusing to acknowledge basic statistics – from which chance is a well recognised concept – and a lot of basic physics too that builds on the concept. But then the root point is in the tail of the first paragraph: we accept chance when the variation is in accord with what would happen with credibly undirected contingency, which we can often model, e.g. with the coins or with the Gaussian curve etc. That is, implicitly, we have the contrast that there are two explanations for contingency: chance and design. And we must needs be able to distinguish them credibly, i.e. the need for a design inference explanatory filter is obvious once we squarely face the issue: what is chance? Hence the resistance we are seeing, to the point of absurdity. KF

kairosfocus
December 19, 2013, 03:31 AM PDT
Dr Liddle et al: I presume you are watching. I simply beg to remind you that for many years there has been a common practice of hypothesis testing by rejecting the null in light of evidence, the null being a hypothesis that chance – undirected contingency – accounts for the results observed. This comes out in Fisherian inference testing, and is in the picture in ANOVA. Where, basically, the idea is that if we are sufficiently in a far-skirt tail zone of interest for a proposed distribution, it is unlikely that that is by chance. 5% tails are commonly used, as are 1% tails. This, you MUST know. It is basic statistics.

You may be able to dimly recall how, several times, I set up the mental exercise of a chart with a bell distribution marked in stripes, then suggested dropping darts from a height sufficient that the darts would fall more or less evenly. Obviously, the central bulge of the bell shape is going to be hit far more often than the far tails are. And, if our search by darts is sufficiently small in number, the sufficiently far tails will reliably not be hit.

In the case of 500 coins – let's refine: fair coins, in a row, on a table – the situation is that there are 3.27*10^150 possible configurations, i.e. 2^500. No search process on the gamut of our solar system since it began can sample more than the equivalent of 1 straw to a cubical haystack as thick as our barred spiral galaxy's central bulge. In such a situation it is utterly unreasonable to expect to hit 500 H or the like by chance. The dominant cluster of configs will be near 50-50, with no particular order of H and T. So, we are utterly unlikely to hit alternating H and T either. For reasons which are only too plain.

So, you are speaking against the truth you know, in hopes of scoring points off those who would not know better. Revealing. And, sad. KF

kairosfocus
December 19, 2013, 03:06 AM PDT
#11 CC: I share your scepticism. The trouble is that most of this research takes place in the context of furthering democracy. To exclude one group from the debate is undemocratic. I would like to try a very structured environment such as MIT's Deliberatorium (http://cci.mit.edu/klein/deliberatorium.html), where a main proposal is articulated and comments have to indicate how they relate to that proposal (e.g. reason for, reason against, request for clarification). I would also limit comments to 200 words so that participants were not tempted to waste words on personal comments or irrelevances.

Mark Frank
December 19, 2013, 02:05 AM PDT
Mark Frank, Querius: I am very skeptical that such an environment is possible unless there are criteria for the participants. If you have an open forum where anyone can sign up and participate, you will always have a problem maintaining the forum discussion. On second thoughts, even in such an environment you will have ego clashes, so it is not possible!

coldcoffee
December 19, 2013, 01:30 AM PDT
Q: You seem to be asking for what the deliberative democracy theorists would call authentic deliberation. There have been various proposals for the criteria for such deliberation. A good example is the Discourse Quality Index proposed by Steenbergen, which is based on six criteria:

I. Justification – assertions are backed up with justifications
II. Common good – arguments are for the common good and not for the benefit of particular citizens
III. Respect – discussion is on the basis of respect for participants and their arguments
IV. Constructive politics – discussion is constructive and attempts to find a mutually acceptable solution
V. Participation – all citizens affected by the deliberation are involved (presence) and have equal ability to express their views (voice)
VI. Authenticity – participants do not attempt to deceive each other

There have also been various attempts to create Internet environments which encourage such discourse. However, none of them have been outstandingly successful. Above all, it needs a will from the participants to make it that way.

Mark Frank
December 19, 2013, 12:54 AM PDT
Mark Frank wrote
May I politely suggest that it is more constructive to address the argument than rant about the unreasonable nature of your opponents.
It might indeed be more constructive, but I usually find myself interrupted every few words, followed by many of the items that I listed. I'd propose a set of discussion rules that would ensure equal time for both parties in the discussion, no interruptions, no long lectures, and that each point must be fairly addressed and answered. The discussion is judged a loss for the first person to use an ad hominem attack or other unscrupulous tactic. Judges determine to what extent each party has answered the other. It's probably a hopeless cause, but I can dream . . . -Q

Querius
December 18, 2013, 11:16 PM PDT
There is a lot of stuff here about how difficult ID opponents are to deal with. In fact there is a lot of stuff about this throughout UD! Whenever two parties get into a debate about things they care about, each party always thinks the other party is difficult, irrational, etc. Querius' list in #6 is quite a good list, but it always applies to the other guy. I personally get very frustrated when I raise a point and the response is not to address the point but to declare that the opponent's position is self-evident, or that I am being pedantic in trying to define something in detail (two favourite ID tactics). May I politely suggest that it is more constructive to address the argument than to rant about the unreasonable nature of your opponents.

Mark Frank
December 18, 2013, 10:52 PM PDT
Barry: You have of course banned Lizzie from responding, so I will do my best. Thisted's paper is excellent, but perhaps he could have phrased it a bit better. He is pointing out that a result may deviate from the expected value by chance, as opposed to some underlying cause. Another way of phrasing this is to say there is no known explanation for the deviation. This use of chance in this context does not preclude design. In fact, the deviation from the expected value might have been because of some undetected, intelligent interference. Chance in this context just stands for: explanation not known.

Mark Frank
December 18, 2013, 10:43 PM PDT
