Uncommon Descent Serving The Intelligent Design Community

Abandon statistical significance, learn to live with uncertainty, scientists demand

What’s hot? What’s not?/Niklas Bildhauer, Wikimedia

Nature reports that three statisticians, backed by more than 800 signatories, argue that scientists should abandon statistical significance:

How do statistics so often lead scientists to deny differences that those not educated in statistics can plainly see? For several generations, researchers have been warned that a statistically non-significant result does not ‘prove’ the null hypothesis (the hypothesis that there is no difference between groups or no effect of a treatment on some measured outcome)1. Nor do statistically significant results ‘prove’ some other hypothesis. Such misconceptions have famously warped the literature with overstated claims and, less famously, led to claims of conflicts between studies where none exists. …

We must learn to embrace uncertainty. One practical way to do so is to rename confidence intervals as ‘compatibility intervals’ and interpret them in a way that avoids overconfidence. Specifically, we recommend that authors describe the practical implications of all values inside the interval, especially the observed effect (or point estimate) and the limits. In doing so, they should remember that all the values between the interval’s limits are reasonably compatible with the data, given the statistical assumptions used to compute the interval7,10. Therefore, singling out one particular value (such as the null value) in the interval as ‘shown’ makes no sense. Valentin Amrhein, Sander Greenland & Blake McShane, “Scientists rise up against statistical significance” at Nature
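The authors' suggestion can be illustrated with a small sketch (the sample values are invented for illustration): compute a 95% interval for a mean and report the whole range as values compatible with the data, rather than issuing a binary significant/non-significant verdict.

```python
import math

# Hypothetical sample: measured effect of some treatment (arbitrary units).
sample = [1.2, 0.4, -0.3, 2.1, 0.9, 1.5, 0.2, 1.1]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)

# 95% interval using the t critical value for n - 1 = 7 degrees of freedom.
t_crit = 2.365
lo, hi = mean - t_crit * se, mean + t_crit * se

# Report the whole interval as "compatible" values, not a yes/no verdict.
print(f"point estimate: {mean:.2f}")
print(f"compatibility interval: [{lo:.2f}, {hi:.2f}]")
print("all values inside the interval are reasonably compatible with the data")
```

On this reading, singling out any one value inside the interval (including zero) as "shown" is exactly the overconfidence the authors warn against.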

Retraction Watch interviewed statistician Nicole Lazar, who explains:

One such principle about which there has been contentious debate, especially in the Frequentist versus Bayesian wars, is objectivity. It is important to understand and accept that while objectivity should be the goal of scientific research, pure objectivity can never be achieved. Science entails intrinsically subjective decisions, and expert judgment – applied with as much objectivity and as little bias as possible – is essential to sound science.

Hold that thought when another flatulent editorial in a local news source extols the “objectivity of science” while recommending some bad policy or other based on a probably questionable study.

Let’s see where this goes. Will it lead to less magic with numbers or more and bigger magic?

Follow UD News at Twitter!

Comments
This seems like a good time and place to reminisce with David Berlinski's "The Strength of Natural Selection in the Wild":
The statistical methods by which Kingsolver proceeded are simple to the point of triteness. On the one hand, there are a series of quantitative biological traits, chiefly morphological in nature; and on the other hand, certain quantitative measures of fitness. Beak length in finches is a typical morphological trait, and survival, mating success or fecundity typical measures of fitness. Using the methodology first introduced by R. Lande and S. J. Arnold in their 1983 study, "The Measurement of Selection on Correlated Characters," published in Evolution, 37, Kingsolver proposed to define selection in terms of the slope of the regression between a quantitative trait of interest and specific measures of fitness. This provides an estimate of the strength of selection. Natural selection disappears as a biological force and reappears as a statistical artifact. The change is not trivial. It is one thing to say that nothing in biology makes sense except in the light of evolution; it is quite another thing to say that nothing in biology makes sense except in the light of various regression correlations between quantitative characteristics. It hardly appears obvious that, if natural selection is simply a matter of correlations established between quantitative traits, Darwin's theory has any content beyond the phenomenological, and in the most obvious sense, is no theory at all.
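In the simplest univariate case, the Lande-Arnold procedure Berlinski describes reduces to an ordinary least-squares slope of relative fitness on a trait. A minimal sketch, with invented trait and fitness numbers:

```python
# Minimal univariate Lande-Arnold sketch with invented data:
# the selection gradient is the OLS slope of relative fitness on a trait.
trait   = [10.1, 10.4, 10.9, 11.2, 11.8, 12.0]   # e.g. beak length, mm
fitness = [2, 3, 3, 4, 5, 5]                     # e.g. offspring counts

mean_w = sum(fitness) / len(fitness)
rel_w = [w / mean_w for w in fitness]            # relative fitness, mean 1.0

mean_z = sum(trait) / len(trait)
cov = sum((z - mean_z) * (w - 1.0) for z, w in zip(trait, rel_w)) / len(trait)
var = sum((z - mean_z) ** 2 for z in trait) / len(trait)

beta = cov / var   # the "strength of selection" as a regression slope
print(f"selection gradient beta = {beta:.3f}")
```

A positive slope here is read as selection favoring larger trait values; Berlinski's point is that this slope is the entire operational content of "selection" in such studies.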
It's sad that we have known for a long time that natural selection is impotent, and yet it is still promoted as being able to perform seemingly magical feats.

ET
March 22, 2019 at 01:08 PM PDT
"I can't speak for climate change data as I do not follow it that closely."

Indeed. If you did, you'd think climate change was a joke.

Andrew
asauber
March 22, 2019 at 11:18 AM PDT
MG,

"I think I just saw a pig fly out the window! I agree with EG!!"

Ouch.

"Why is it that we don't see such error bars on aggregate climate change data?"

I can't speak for climate change data as I do not follow it that closely. However, for analytical chemistry data it is often because the decision makers (i.e., regulators) don't want to see the term "uncertainty" or a +/- in a result. It makes their lawyers nervous. For example, the drinking water limit for lead where I live is 10 ug/L. If a lab obtains a result of 10.1 it is considered a violation of the regulations and action must be taken, even though an accurate portrayal of the result may have been 10.1 +/- 1.

Ed George
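Ed George's lead example can be made concrete. With a result of 10.1 +/- 1 ug/L against a 10 ug/L limit, the uncertainty interval straddles the limit, so a bare pass/fail call overstates what was actually measured. The values come from the comment; the three-way decision logic is an illustration, not regulatory practice.

```python
# Illustration of the comment's example: a lead result of 10.1 ug/L
# with +/- 1 ug/L uncertainty, against a 10 ug/L drinking-water limit.
limit = 10.0          # regulatory limit, ug/L
result = 10.1         # measured value, ug/L
uncertainty = 1.0     # reported uncertainty, ug/L

lo, hi = result - uncertainty, result + uncertainty

if lo > limit:
    verdict = "exceeds the limit even allowing for uncertainty"
elif hi < limit:
    verdict = "below the limit even allowing for uncertainty"
else:
    verdict = "interval straddles the limit: the bare number is inconclusive"

print(f"{result} +/- {uncertainty} ug/L vs limit {limit}: {verdict}")
```

Here the interval is [9.1, 11.1] ug/L, so the measurement alone cannot distinguish a violation from compliance.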
March 22, 2019 at 07:40 AM PDT
This adjustment is fine when it comes to marginal results like the efficacy of xyz new drug. But when it comes to Intelligent Design detection (http://nonlin.org/intelligent-design/), the threshold can easily be .005 or .0005 or ...

Nonlin.org
March 22, 2019 at 06:14 AM PDT
EG, PM & MG: yes, we want the error bars, whether 95% intervals, two-sigma points, the older maximum-credible-error estimates, etc. Where also, a probability value is inherently an estimate of error. KF

PS: Hurricane track projection cones are a common case of error bars.

kairosfocus
March 22, 2019 at 01:43 AM PDT
I think I just saw a pig fly out the window! I agree with EG!!

"a measured value such as the lead concentration in your drinking water, is completely meaningless without its associated uncertainty. And uncertainty is a statistically derived value."

Well spoken, sir! Error bars are absolutely essential to any statement concerning physical measurements. Why is it that we don't see such error bars on aggregate climate change data?

math guy
March 21, 2019 at 08:38 PM PDT
The law(s) of probability are an integral part of science: first, science is probability-based, so not absolute; second, probability is needed to know which is the stronger valid science. So even if NDT Darwinism were never falsified (missing required/predicted transitional and dead-end fossils), it would still have about as close to zero probability of being the actuality, barring super-miraculous intervention, and I am not sure even that could work, vs. within ID over 99% probability of being the historic actuality that accounts for the natural observations. So too in cosmology: SCM-LCDM near zero, assuming not already falsified (missing required/predicted dark matter and energy), and the SPIRAL cosmological redshift hypothesis and model well over 50% probability of being the description of the observations.

Pearlman
March 21, 2019 at 06:48 PM PDT
When I was doing my master's it was almost impossible to get published unless you included statistics in the paper. As such, we would often perform some statistical analysis, often improperly, even though it added nothing to the actual research. In my chosen career (analytical chemistry), a measured value, such as the lead concentration in your drinking water, is completely meaningless without its associated uncertainty. And uncertainty is a statistically derived value.

Ed George
March 21, 2019 at 02:41 PM PDT
