Uncommon Descent Serving The Intelligent Design Community

At Nautilus: Scientists should not accept unreplicated results – yawn

[Image: a stack of files. What’s hot? What’s not?/Niklas Bildhauer, Wikimedia]

From Ahmed Alkhateeb at Nautilus:

Widespread irreproducibility is often misconceived as intentional fraud—which does occur, and is documented by websites like Retraction Watch. But the majority of irreproducible research stems from a complex matrix of statistical, technical, and psychological biases that are rampant within the scientific community.

The institutionalization of science in the early decades of the 20th century created a scientific sub-culture, with its own reward systems, behaviors, and social norms.

But what to do?

To make the desire for recognition compatible with prioritizing good science, we need quality metrics that are independent of sociological norms. Above all, objective quality should be based on the concept of independent replication: A finding would not be accepted as true unless it is independently verified.

Distinguishing between replicated and un-replicated studies would change how science is reported and discussed, increase the visibility of both strong and weak papers, incentivize scientists to only publish findings they have confidence in, and discourage publishing for the sake of publishing. More.

True, but that’s precisely why Alkhateeb’s suggestions are not the usual practice everywhere now. Ahmed Alkhateeb is “a postdoctoral research fellow at Harvard Medical School and Massachusetts General Hospital. His research focuses on stromal-tumor interactions in pancreatic cancer,” which is to say that he is in a field where unreplicated claims, taken seriously, are not trivia. They waste time in the fight against life-threatening cancers.

But many other fields in science may be quite content to slog along with unlikely or impossible (but nonetheless “arresting”) theses, because in those fields creative failure is okay. Think origin of life and cosmology, for example. Readers will doubtless think of other such fields quite readily.

One wonders whether it would help if, in serious fields like Alkhateeb’s, research that is correctly done were not punished merely because its results have not been replicated. Knowing what isn’t correct is an important contribution to the body of knowledge. Information consists in ruling out possibilities.

See also: At Nature: Change how we judge research. Hmm… Using this scheme, what would protect the researcher who submits the suggested bio-sketch from becoming a target for political reasons that are unrelated to research quality? Think Jordan B. Peterson or Gunter Bechly. Or anyone who sounds like a risk for blowing the whistle on corruption. The fate of whistleblowers is already often grim.

and

Does it matter in science if no one can replicate your results?

Comments
Before applying a replicability filter, publishers should strictly separate work done to build a resume or gain tenure from work that tries to solve a real industrial or medical problem. Most academics know the difference, but for some reason journals are unwilling to split out the just-for-practice stuff. In other areas the split is routine. No pilot counts hours spent on a flight simulator as actual flying time. Lawyers don't get credit for mock trials in class. Singers don't try to sell rehearsals as performances.

polistra
February 15, 2018 at 05:24 PM PDT

Leave a Reply