Statisticians interviewed for FiveThirtyEight think so:
It may sound crazy to get indignant over a scientific term that few lay people have even heard of, but the consequences matter. The misuse of the p-value can drive bad science (there was no disagreement over that), and the consensus project was spurred by a growing worry that in some scientific fields, p-values have become a litmus test for deciding which studies are worthy of publication. As a result, research that produces p-values that surpass an arbitrary threshold is more likely to be published, while studies with greater or equal scientific importance may remain in the file drawer, unseen by the scientific community. More.
P-values? From Rob Sheldon:
The “p-value” is a Fisher correlation statistic that asks the question “If I have a plot with n points on it, what is the probability that I would get this distribution by pulling n points out of a hat?” If the “random probability” is less than 0.05, then “classical statistics” people say “Wow, that is significant!”
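For readers who want to see the “pulling points out of a hat” picture in action, here is a minimal Python sketch (the data, sample size, and variable names are purely illustrative, not anything from Sheldon) that computes a permutation-style p-value for a correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: two variables measured on the same n points.
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)   # some real association plus noise

observed_r = np.corrcoef(x, y)[0, 1]

# "Pull n points out of a hat": shuffle y so any pairing with x is pure chance,
# then count how often chance alone gives a correlation at least this strong.
n_shuffles = 10_000
count = sum(
    abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= abs(observed_r)
    for _ in range(n_shuffles)
)

p_value = count / n_shuffles
print(f"observed r = {observed_r:.3f}, permutation p-value ≈ {p_value:.4f}")
# p < 0.05 is the conventional (and arbitrary) cutoff for calling it "significant".
```

If the shuffled data match or beat the real data less than 5% of the time, classical practice labels the result “significant.”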
…
Numerous statisticians have pointed out that not only is it not significant, but it is actually erroneous. The first problem is that Fisher assumed you had one bag, and you are doing this random thing once. But if you have M bags, then the probability that you will randomly find p<0.05 for at least one of those bags gets a lot better, roughly by a factor of M. So an “honest” statistician needs to factor in all the formulae he tried, and all the data sets he looked at, before he assigns “significance”. But they don’t. They don’t even realize that they are biasing the statistic this way. More.
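Sheldon’s “M bags” point is easy to check numerically. The sketch below (the choice of t-test, the sample sizes, and M = 20 are assumptions made only for illustration) runs M tests on pure noise and shows that the chance of at least one p < 0.05 grows roughly as 1 − (1 − 0.05)^M:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# M independent "bags": pure-noise datasets with no real effect at all.
M = 20
n = 50
alpha = 0.05

n_trials = 5_000
hits = 0
for _ in range(n_trials):
    # Run M two-sample t-tests on noise and keep the smallest p-value.
    p_min = min(
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(M)
    )
    if p_min < alpha:
        hits += 1

print(f"P(at least one p < {alpha} across {M} null tests) ≈ {hits / n_trials:.2f}")
print(f"theory: 1 - (1 - alpha)^M = {1 - (1 - alpha)**M:.2f}")
# For M = 20 this is about 0.64 -- "significance" turns up more often than not,
# which is why an honest analysis must account for every test that was run
# (e.g. a Bonferroni correction: compare each p-value to alpha / M instead).
```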
A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again. The p-value only tells you something about the probability of seeing your results given a particular hypothetical explanation — it cannot tell you the probability that the results are true or whether they’re due to random chance. The ASA statement’s Principle No. 2: “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.”
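A toy Bayes calculation makes the distinction concrete. The numbers below (the prior on true hypotheses and the statistical power) are assumptions chosen purely for illustration; the point is only that the probability a hypothesis is true, given a significant result, depends on them, which is exactly what the p-value by itself cannot tell you:

```python
# Toy Bayes calculation (all numbers are illustrative assumptions):
# the p-value threshold fixes P(significant result | no real effect) = alpha,
# but P(hypothesis is true | significant result) also depends on how many
# tested hypotheses are true to begin with, and on statistical power.
alpha = 0.05          # false-positive rate under the null
power = 0.80          # P(significant | real effect)
prior_true = 0.10     # assume only 1 in 10 tested hypotheses is really true

p_sig = prior_true * power + (1 - prior_true) * alpha
p_true_given_sig = prior_true * power / p_sig

print(f"P(hypothesis is true | p < {alpha}) ≈ {p_true_given_sig:.2f}")
# ≈ 0.64 here: even a "significant" result leaves roughly a one-in-three chance
# that the effect is not real -- a number the p-value alone cannot give you.
```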
P-values don’t help us know whether things happened by “random chance alone.” Hmmm. What does, then? And how do we know the assumption is correct?
Thoughts?
See also: Rob Sheldon explains p-value vs. R2 value in research, and why it matters
If even scientists can’t easily explain p-values… ?
and
Nature: Banning P values not enough to rid science of shoddy statistics
Follow UD News at Twitter!