Uncommon Descent Serving The Intelligent Design Community

Replication: Can new metric crack science’s credibility problem?

What’s hot? What’s not?/Niklas Bildhauer, Wikimedia

From Dalmeet Singh Chawla at PhysicsToday:

A newly proposed, citation-based metric assesses the veracity of scientific claims by evaluating the outcomes of subsequent replication attempts. Introduced in an August bioRxiv preprint by researchers at the for-profit firm Verum Analytics, the R-factor was developed in response to long-standing concerns about the lack of reproducibility in biomedicine and the social sciences. Yet the measure, which its creators also plan to apply to physics literature, has already triggered concerns among researchers for its relatively simple approach to solving a complex problem.

Although it takes on a critical flaw in modern science, the new metric has drawn plenty of criticism. Pseudonymous science blogger Neuroskeptic, who was one of the first to report on R-factors, writes that the metric fails to account for the fact that positive results are submitted and selected for publication more often than negative ones.

Another caveat is the tool’s simplicity, says Adam Russell, an anthropologist and program manager at the Defense Advanced Research Projects Agency who has called for solutions to improve the credibility of social and behavioral sciences research. “History suggests that simple metrics are unlikely to address the multifaceted problems that have given rise to these crises of reproducibility, in part because simple metrics are easier to game,” Russell says. Verum’s Rife, however, says R-factors are less susceptible to gaming than existing metrics are. More.
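As a rough illustration of the idea behind the metric (a simplified sketch, not the preprint's exact formula): an R-factor for a claim can be computed as the fraction of subsequent replication reports that confirm it. The function name and inputs below are hypothetical.

```python
def r_factor(confirming, refuting):
    """Toy R-factor: the fraction of replication reports that confirm a claim.

    A hypothetical simplification of the Verum Analytics metric; the
    preprint's actual procedure for classifying citing reports may differ.
    """
    total = confirming + refuting
    if total == 0:
        return None  # no replication attempts: veracity cannot be scored
    return confirming / total

# A claim confirmed by 8 reports and refuted by 2 would score 0.8:
print(r_factor(8, 2))  # 0.8
```

Note that this toy version makes Neuroskeptic's objection concrete: if negative replications are published less often than positive ones, the `refuting` count is systematically undercounted and the score is inflated.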

But social and behavioural sciences are mostly PC bunk anyway. True, a few brave souls battle the tsunami of grant-enabled, grantor-pleasing PR that too often becomes policy. But no metric aimed at science values can address that.

Question: Do fields like origin of life, evolution, and cosmology abound in looniness because replication is inherently difficult for them?

See also: The “Grand Challenge” for evolutionary psychology is that it is bunk

P-values: Scientists slam proposal to raise threshold for statistically significant findings

The war over P-values is now a quagmire, but a fix is suggested

Deep problem created by Darwinian Ron Fisher’s p-values highlighted again

Early Darwinian Ronald Fisher’s p-value measure is coming under serious scrutiny

Misuse of p-values and design in life?

Rob Sheldon explains p-value vs. R2 value in research, and why it matters

If even scientists can’t easily explain p-values… ?

and

Nature: Banning P values not enough to rid science of shoddy statistics

Comments
Social "science" won't reform for the same reason that media won't reform and economics won't reform. In all three fields the customers are getting what they want. No need to mess with success. The only thing that needs reforming is our attitude about these fields. We've bought the myths that economists are supposed to analyze the economy, and journalists are supposed to report the news, and social "scientists" are supposed to analyze human behavior. Those myths have nothing to do with the actual missions of the fields. Each field is assigned the task of creating unsolvable problems which require more budget and workforce to "solve" by creating more unsolvable problems which require more budget and workforce to "solve" by.....polistra
October 8, 2017
October
10
Oct
8
08
2017
12:48 AM
12
12
48
AM
PDT
