
Mark Frank, “OK, I’m With You Fellas.”

Categories: Intelligent Design

O Brother, Where Art Thou? is in my top five all-time favorite movies. In this particular clip both Everett and Pete want to be the leader of the three-man “gang.” So they take a vote . . .

O Brother Clip.

I was reminded of this when I read one of Mark Frank’s comments to my last post.

In that post I pointed out that over at The Skeptical Zone, Elizabeth Liddle says this:

Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.

But Ronald A. Thisted, PhD, a statistics professor in the Departments of Statistics and Health Studies at the University of Chicago, says this:

If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied.

Mark Frank commented on the post, and I tried to pin him down as to whether he agreed with Thisted or Liddle. After much squirming he finally said:

I never disagreed with either Lizzie or Thisted on the essentials because they are in agreement. All that has happened is that Thisted has used ‘chance’ in a somewhat slipshod way.

Liddle: “Chance is not an explanation.”

Thisted: “The purpose of statistical testing is to rule out the chance explanation.”

Frank: “OK, I’m with you fellas.”

One of them might be right and the other wrong. They may both be wrong. One thing is certain: they can’t both be right.

Hey Mark, is this why you are so squishy on the Law of Noncontradiction? You want to reserve the option of having it both ways?

Comments
Generally, the larger your sample size, the smaller this risk is

But remember, Barry, you’re responding to someone who said the Law of Large Numbers has no relevance to reality:

it [the law of large numbers] is a theorem in mathematics with no relevance to reality

(Neil Rickert, comment on the Fundamental Law of ID)
scordova
December 19, 2013, 04:23 PM PDT
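scordova’s point above about the Law of Large Numbers can be checked empirically in a few lines. A minimal Python sketch (purely illustrative; the thread contains no code): flip a fair coin repeatedly and watch the sample mean converge to the true probability, exactly as the theorem predicts.

```python
import random

def sample_mean(n_flips, p=0.5):
    """Flip a coin with heads-probability p, n_flips times; return the fraction of heads."""
    heads = sum(1 for _ in range(n_flips) if random.random() < p)
    return heads / n_flips

# The Law of Large Numbers predicts the sample mean converges to p = 0.5
# as the number of flips grows.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7}: fraction of heads = {sample_mean(n):.4f}")
```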
OT: My favorite ‘O Brother Where Art Thou’ song: Alison Krauss – Down in the River to Pray https://www.youtube.com/watch?v=7VLKngHexeU
bornagain77
December 19, 2013, 03:53 PM PDT
My God, Neil. Are you serious? You may have taught math, but you have no idea what you are talking about here. Or else there really are no depths to which Darwinists will not sink in their ideology-driven scorched-earth sophistry. Your entire comment rests on this assertion:
If the experiment fails to prove the efficacy of the drugs, that is because the noise due to the random sampling is too high.
That is wrong in so many ways it is difficult to know where to begin.

First, if anyone says, “what does a lawyer know about statistical sampling?” I would respond that before I was an attorney I was a certified public accountant. I was an auditor for Ernst & Whinney (now Ernst & Young), and in my audit work I used statistical sampling all the time. To prepare for that work, I studied statistics in college.

Second, no one needs to believe me. Go read Professor Thisted’s paper. As a professor of statistics, he can be counted on to get it right. Here’s another link to his paper: http://galton.uchicago.edu/~thisted/Distribute/pvalue.pdf

Third, Neil does not even get the question right, much less the answer. The null hypothesis in a drug trial is not that the drug is efficacious. The null hypothesis is that the difference between the groups is due to chance. If the drug is in fact efficacious, there will be a “real” difference between the two groups. How do you know if there is a real difference? By ruling out the chance explanation, as Professor Thisted says.

Fourth, the “chance” at issue is not the noise in the sampling. I mean, this statement is absurd on its face. If group A takes the treatment and group B takes the placebo, what is being measured when they report back different results? Ask yourself this question: if the treatment is not effective, what difference would you expect between the two groups? Of course, you would expect their responses to be roughly equal. But no two groups are ever going to be exactly equal; random differences between the groups will result in some difference. A statistical test starts with this assumption (the null hypothesis): there is no difference between the two groups, and any difference that is reported is due to chance (i.e., the “chance explanation”). The statistical analysis then determines whether that null hypothesis is rejected. In other words, if you reject the chance explanation, you are left with the conclusion that the best explanation for the data is that the drug is efficacious.

Finally, while Neil is wrong about the “sampling noise” being the “chance” that is tested, there is such a thing as sampling noise. There is a chance that the sampled population does not truly reflect the real population. Generally, the larger your sample size, the smaller this risk is, but it cannot be eliminated completely. In other words, there is a “chance” that the “chance explanation” is correct even though your test says it should be rejected. That risk is measured by the “p-value” Professor Thisted is discussing in his paper. A low p-value means the chance of your analysis being wrong is low. How low is low enough to rely on the test? There is no universally accepted answer to that question. Generally, however, a p-value of 0.05 or less is said to be “statistically significant,” which means that for practical purposes the sampled group can be assumed to be reflective of the population as a whole.
Barry Arrington
December 19, 2013, 03:23 PM PDT
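Barry’s description above maps onto a standard two-sample comparison. A minimal sketch of the idea, assuming hypothetical trial data and the scipy library (neither appears in the original thread): test the null hypothesis that any difference between the treatment and placebo groups is due to chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical outcome data: 100 patients on the treatment, 100 on placebo.
treatment = rng.normal(loc=5.0, scale=2.0, size=100)
placebo = rng.normal(loc=4.0, scale=2.0, size=100)

# Null hypothesis: no real difference between the groups; any observed
# difference is due to chance.
t_stat, p_value = stats.ttest_ind(treatment, placebo)

# A p-value below the conventional 0.05 threshold rejects the chance
# explanation, leaving drug efficacy as the best explanation for the data.
if p_value < 0.05:
    print(f"p = {p_value:.4f}: chance explanation rejected")
else:
    print(f"p = {p_value:.4f}: chance cannot be ruled out")
```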
I'll try to avoid getting into semantic arguments about the meaning of "chance". It's the nature of language that people disagree about meanings. So I'll concentrate on the statistics. And, incidentally, I have taught courses in mathematical statistics.

I'll use drug testing as an example. We do use null hypothesis testing to evaluate the efficacy of drugs. I'm not an expert in medicines, but it is my understanding that people don't all respond in the same way to a particular drug treatment. So we might say that there is some randomness in the response to drugs.

To use null hypothesis testing, we use a sample of the population. The double-blind protocols are supposed to minimize bias in the sampling. However, the sampling is still random -- we do not test the entire population, only a sample. When we design the null hypothesis experiment, we determine confidence intervals, which deal with possible randomness in the results of the test. The confidence intervals are based only on the random sampling errors. They have no relation to possible randomness in the way people respond to the drug.

If the experiment fails to prove the efficacy of the drugs, that is because the noise due to the random sampling is too high. If you want to say that chance explains that noise, then you are talking about the chance involved in the sampling, and not about anything probabilistic about the way the drug acts. Personally, I would not use that "chance explains" way of talking, but if used for such statistical testing, it could only apply to the randomness of the sampling.

If it turns out that the drug is efficacious, but its effects vary for different patients, then we might want to know the mean and standard deviation of that variation. We could design a statistical experiment for that, too. And the result would give us confidence intervals for the mean and deviation of the drug's effects. But the randomness involved in having confidence intervals, rather than exact measurements, is again the randomness due to the random sampling.
Neil Rickert
December 19, 2013, 02:29 PM PDT
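Neil’s claim above, that confidence intervals reflect only the randomness of sampling, can be illustrated with a short simulation (again with hypothetical numbers rather than anything from the thread): draw repeated samples from a fixed population and count how often the 95% interval covers the true mean.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_MEAN, TRUE_SD = 4.0, 2.0  # hypothetical population parameters

trials, covered = 1_000, 0
for _ in range(trials):
    sample = rng.normal(TRUE_MEAN, TRUE_SD, size=50)  # one random sample
    se = sample.std(ddof=1) / np.sqrt(sample.size)    # standard error of the mean
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += lo <= TRUE_MEAN <= hi

# Roughly 95% of the intervals cover the true mean; the misses occur purely
# because of random sampling, the only randomness the interval is built to handle.
print(f"coverage: {covered / trials:.1%}")
```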
