
Misuse of p-values and design in life?


Statisticians interviewed by FiveThirtyEight think so:

It may sound crazy to get indignant over a scientific term that few lay people have even heard of, but the consequences matter. The misuse of the p-value can drive bad science (there was no disagreement over that), and the consensus project was spurred by a growing worry that in some scientific fields, p-values have become a litmus test for deciding which studies are worthy of publication. As a result, research that produces p-values that surpass an arbitrary threshold is more likely to be published, while studies with greater or equal scientific importance may remain in the file drawer, unseen by the scientific community. More.

P-values? From Rob Sheldon

The “p-value” is a Fisher correlation statistic that asks the question: “If I have a plot with n points on it, what is the probability that I would get this distribution by pulling n points out of a hat?” If that “random probability” is less than 0.05, then “classical statistics” people say, “Wow, that is significant!”
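Sheldon’s “pulling points out of a hat” description is essentially a permutation test. Here is a minimal sketch in Python; the simulated data, the choice of test statistic (difference in group means), and all variable names are illustrative assumptions, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small samples; the test statistic is the difference in means.
a = rng.normal(0.5, 1.0, size=20)
b = rng.normal(0.0, 1.0, size=20)
observed = a.mean() - b.mean()

# Null hypothesis: the group labels don't matter. Pool the points,
# "pull them out of a hat" many times, and count how often a random
# relabeling produces a difference at least as large as observed.
pooled = np.concatenate([a, b])
trials = 10_000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)
    if pooled[:20].mean() - pooled[20:].mean() >= observed:
        count += 1

p_value = count / trials
print(f"p = {p_value:.4f}")  # "significant" by convention if p < 0.05
```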

Numerous statisticians have pointed out that not only is it not significant, but it is actually erroneous. The first problem is that Fisher assumed you had one bag and that you are doing this random draw once. But if you have M bags, then the probability that you will randomly find p < 0.05 for at least one of those bags improves by roughly a factor of M. So an “honest” statistician needs to factor in all the formulae he tried, and all the data sets he looked at, before he assigns “significance”. But they don’t. They don’t even realize that they are biasing the statistic this way. More.
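Sheldon’s “M bags” objection is the standard multiple-comparisons problem, and the inflation is easy to quantify. A quick back-of-the-envelope calculation, assuming M independent tests at the conventional 0.05 threshold:

```python
# With M independent tests at alpha = 0.05, the chance of at least
# one spurious "significant" result grows quickly with M.
alpha = 0.05
for M in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** M
    print(f"M = {M:3d}: P(at least one false positive) = {p_any:.3f}")

# Output:
# M =   1: P(at least one false positive) = 0.050
# M =   5: P(at least one false positive) = 0.226
# M =  20: P(at least one false positive) = 0.642
# M = 100: P(at least one false positive) = 0.994

# The standard (conservative) fix is the Bonferroni correction:
# require p < alpha / M for each individual test.
```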

A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again. The p-value only tells you something about the probability of seeing your results given a particular hypothetical explanation; it cannot tell you the probability that the results are true or whether they’re due to random chance. As the ASA statement’s Principle No. 2 puts it: “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.”
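One concrete way to see why p < 0.05 is not “a 95% chance the result is real” is a base-rate calculation. The numbers below (the share of tested hypotheses that are actually true, and the statistical power) are purely illustrative assumptions, not from the ASA statement:

```python
# Toy Bayes' rule illustration with hypothetical numbers: suppose
# only 10% of tested hypotheses are true, power is 0.8, alpha is 0.05.
prior_true = 0.10
power = 0.80   # P(p < 0.05 | effect is real)
alpha = 0.05   # P(p < 0.05 | no effect)

true_positives = prior_true * power          # 0.08
false_positives = (1 - prior_true) * alpha   # 0.045

# P(effect is real | significant result):
ppv = true_positives / (true_positives + false_positives)
print(f"P(real | p < 0.05) = {ppv:.2f}")     # ~0.64, not 0.95
```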

P-values don’t help us know whether things happened by “random chance alone.” Hmmm. What does, then? How do we know the assumption is correct?

Thoughts?

See also: Rob Sheldon explains p-value vs. R2 value in research, and why it matters

If even scientists can’t easily explain p-values… ?

and

Nature: Banning P values not enough to rid science of shoddy statistics

Follow UD News at Twitter!

Comments
You do realize that Dembski’s later versions of CSI were based, in part, on Fisher’s significance test? See section 2 of “Specification: The Pattern That Signifies Intelligence”.
CSI is different from mere specification. The paper you reference deals with mere specification.

Virgil Cain
March 10, 2016 at 5:02 AM PDT
You do realize that Dembski's later versions of CSI were based, in part, on Fisher's significance test? See section 2 of "Specification: The Pattern That Signifies Intelligence".

My take on p-values is about the same as my take on CSI: they're potentially useful tools if understood and used properly, but in both cases a disturbing number of people will go ahead and rely on them without bothering to understand them first (let alone use them properly). Dembski's changes avoid some of the problems with Fisher's approach, but also add new opportunities for confusion and abuse, so if anything CSI is even harder to use properly.

IMO the real problem with p-values is that people treat p < 0.05 as a magic indicator of a real result, when it is actually much more complicated than that. And any attempt to replace p-values with a different measure will ultimately run into the same problem: you need to put some effort into understanding what you're doing, or you're going to produce nonsense.

BTW, here's an XKCD comic illustrating Rob's point about multiple bags.

Gordon Davisson
March 9, 2016 at 1:24 PM PDT
