Over at The Skeptical Zone Elizabeth Liddle has weighed in on the “coins on the table” issue I raised in this post.
Readers will remember the simple question I asked:
If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the “heads” side of the coin, how on earth would you test “chance” as a hypothesis to explain this particular configuration of coins on a table?
Dr. Liddle’s answer:
Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.
Staggering. Gobsmacking. Astounding. Superlatives fail me.
Not only is Dr. Liddle’s statement false, it is the exact opposite of the truth. Indeed, pharmaceutical companies, to name just one example, have spent countless billions of dollars in clinical trials of drugs attempting to rule out the “chance explanation.”
Don’t take my word for it. Here is a paper called What is a P-value? by Ronald A. Thisted, PhD, a statistics professor in the Departments of Statistics and Health Studies at the University of Chicago. The abstract states:
Results favoring one treatment over another in a randomized clinical trial can be explained only if the favored treatment really is superior or the apparent advantage enjoyed by the treatment is due solely to the working of chance. Since chance produces very small advantages often but large differences rarely, the larger the effect seen in the trial the less plausible chance assignment alone can be as an explanation. If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied. The p-value measures consistency between the results actually obtained in the trial and the “pure chance” explanation for those results. A p-value of 0.002 favoring group A arises very infrequently when the only differences between groups A and B are due to chance. More precisely, chance alone would produce such a result only twice in every thousand studies. Consequently, we conclude that the advantage of A over B is (quite probably) real rather than spurious.
In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance. The whole point of the trial is to see whether the company can rule out that null hypothesis, the chance hypothesis, as the explanation for the results. So, if “chance is not an explanation,” what is the point of spending all those billions trying to rule it out?
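To make this concrete, here is a minimal sketch of how one would actually test the chance hypothesis for the 500-coins example, using a one-sided binomial test (the function name and structure are my own illustration, not drawn from either paper quoted here). Under the null hypothesis that each coin is fair and independent, we compute the probability of getting a result at least as extreme as the one observed:

```python
# A sketch of a one-sided binomial test for the coins-on-the-table example.
# Null (chance) hypothesis: each of the n coins independently shows heads
# with probability p = 0.5. The p-value is P(X >= heads) under Binomial(n, p).

from math import comb

def binomial_p_value(heads: int, n: int, p: float = 0.5) -> float:
    """One-sided p-value: probability of 'heads' or more successes
    out of n trials under the chance hypothesis."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(heads, n + 1))

# For 500 heads out of 500 coins the p-value collapses to 0.5**500,
# a number on the order of 10**-151, far below any conventional
# significance threshold, so the chance hypothesis is rejected.
p_val = binomial_p_value(500, 500)
```

The point is simply that “chance” functions here exactly as a hypothesis: it makes a definite probabilistic prediction, and that prediction can be tested and rejected.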
Want more? Here’s a paper from Penn State on the Chi-square test. An excerpt:
Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel’s laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the “goodness of fit” between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.
Obviously, asking the question, “were the deviations the result of chance, or were they due to other factors?” makes no sense if, as Liddle says, “chance is not an explanation.”
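The Penn State example can be worked through in a few lines. This is a sketch of my own (the helper names are illustrative): with one degree of freedom the chi-square upper-tail probability has a closed form, p = erfc(sqrt(x/2)), so no statistics library is needed:

```python
# Chi-square goodness-of-fit test for the Mendel example from the excerpt:
# 20 offspring, expected 10 male / 10 female, observed 8 male / 12 female.

from math import erfc, sqrt

def chi_square_stat(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def p_value_1df(x):
    """Upper-tail probability of a chi-square variate with 1 degree of freedom."""
    return erfc(sqrt(x / 2))

stat = chi_square_stat([8, 12], [10, 10])  # (8-10)^2/10 + (12-10)^2/10 = 0.8
p = p_value_1df(stat)                      # roughly 0.37
```

Here the deviation is small enough that chance is not ruled out, which is the whole point of the test: chance is a hypothesis that survives scrutiny in some cases and fails it in others.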
I don’t know why Dr. Liddle would write something so obviously false. I am certain she knows better. “Darwinist Derangement Syndrome” or just sloppy drafting? I will let the readers decide.