The war over P-values is now a quagmire, but a fix is suggested

From Steven Novella at Science-Based Medicine:

The p-value is defined as the probability of the results of an experiment deviating from the null by as much as they did or greater if the null hypothesis is true. If that sounds difficult to parse, don’t feel bad. Many scientists cannot give the correct technical definition. To put it more simply, what are the odds that you would have gotten the results you did (or greater) if your hypothesis is not true? In medicine this usually refers to an effect, such as the difference in pain reduction between a placebo and an experimental treatment. Is that difference statistically significant? A p-value of 0.05, the traditional threshold, means that there is a 5% chance that you would have obtained those results without there being a real effect. A p-value of 0.005 means there is a 0.5% chance – or a change from 1/20 to 1/200.

There are major problems with over-reliance on the p-value. It was never intended to be the one measure of whether or not an effect is real, but unfortunately the human desire for simplicity has pushed it into that role.

What the authors propose would certainly shift the balance away from false positives. It is a straightforward fix, but I have concerns that it may not be optimal, or even enough by itself. I do like their suggestion that we consider 0.005 to be statistically significant, and anything between 0.05 and 0.005 to be “suggestive.” This is closer to the truth, and would probably help shift the way scientists and the public think about p-values. I have already made this mental shift myself. I do not get excited about results with a p-value near 0.05. It just doesn’t mean that much.

The downside, of course, is that this will increase the number of false negatives. Given how overwhelmed the literature is with false positive studies, however, I think this is a good trade-off. … You can still do your small study and if you get marginal p-values you can even still publish. Just don’t call your results “significant.” Call them “suggestive” instead. More.
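For readers who want to see the quoted numbers in action, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the pain-reduction data below are synthetic, invented purely for illustration) of the placebo-versus-treatment comparison Novella describes, together with the proposed 0.005/0.05 relabeling:

```python
# Minimal sketch: a placebo-vs-treatment comparison and the proposed
# p-value relabeling. The data are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pain-reduction scores for two groups of 40 patients.
placebo = rng.normal(loc=2.0, scale=1.5, size=40)
treatment = rng.normal(loc=2.8, scale=1.5, size=40)

# Two-sample t-test: p is the probability of seeing a difference at
# least this large if the null hypothesis (no real effect) were true.
t_stat, p = stats.ttest_ind(treatment, placebo)

# The proposed relabeling: p < 0.005 is "significant";
# anything between 0.005 and 0.05 is merely "suggestive".
if p < 0.005:
    label = "statistically significant"
elif p < 0.05:
    label = "suggestive"
else:
    label = "not significant"

print(f"t = {t_stat:.2f}, p = {p:.4f} -> {label}")
```

Rerun with different random seeds, the same underlying effect can land on either side of a cutoff, which is part of why treating any single threshold as the one measure of a real effect is hazardous.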

It’s always better to acknowledge uncertainty than to insist on certainty and then find that others essentially don’t trust you.

See also: Deep problem created by Darwinian Ron Fisher’s p-values highlighted again

Early Darwinian Ronald Fisher’s p-value measure is coming under serious scrutiny

Misuse of p-values and design in life?

Rob Sheldon explains p-value vs. R2 value in research, and why it matters

If even scientists can’t easily explain p-values… ?

and

Nature: Banning P values not enough to rid science of shoddy statistics
