Replication: Can new metric crack science’s credibility problem?
October 7, 2017 | Posted by News under Intelligent Design, Peer review, Science
From Dalmeet Singh Chawla at Physics Today:
A newly proposed, citation-based metric assesses the veracity of scientific claims by evaluating the outcomes of subsequent replication attempts. Introduced in an August bioRxiv preprint by researchers at the for-profit firm Verum Analytics, the R-factor was developed in response to long-standing concerns about the lack of reproducibility in biomedicine and the social sciences. Yet the measure, which its creators also plan to apply to physics literature, has already triggered concerns among researchers for its relatively simple approach to solving a complex problem.
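As described, the R-factor is a simple ratio: of the published studies that went on to test a claim, the fraction that confirmed it. A minimal sketch of that idea (the function name and the figures are illustrative, not taken from Verum's actual tooling):

```python
# Sketch of the R-factor as described above: the fraction of published
# replication attempts that confirmed the original claim.
# The function name and example counts are illustrative assumptions.

def r_factor(confirming: int, refuting: int) -> float:
    """Fraction of replication attempts that confirmed the claim."""
    attempts = confirming + refuting
    if attempts == 0:
        raise ValueError("no replication attempts recorded")
    return confirming / attempts

# Hypothetical claim: 7 published studies confirmed it, 3 refuted it.
print(r_factor(7, 3))  # 0.7
```

Note that a ratio like this counts only what reaches print, which is exactly the opening for the publication-bias objection raised below.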
Although it takes on a critical flaw in modern science, the new metric has drawn plenty of criticism. Pseudonymous science blogger Neuroskeptic, who was one of the first to report on R-factors, writes that the metric fails to account for the fact that positive results are submitted and selected for publication more often than negative ones.
Another caveat is the tool’s simplicity, says Adam Russell, an anthropologist and program manager at the Defense Advanced Research Projects Agency who has called for solutions to improve the credibility of social and behavioral sciences research. “History suggests that simple metrics are unlikely to address the multifaceted problems that have given rise to these crises of reproducibility, in part because simple metrics are easier to game,” Russell says. Verum co-founder Sean Rife, however, says R-factors are less susceptible to gaming than existing metrics are. More.
But social and behavioural sciences are mostly PC bunk anyway. True, a few brave souls battle the tsunami of grant-enabled, grantor-pleasing PR that too often becomes policy. But no metric aimed at science’s values can address that.
Question: Do fields like origin of life, evolution, and cosmology abound with looniness because replication is inherently difficult in those fields?
See also: The “Grand Challenge” for evolutionary psychology is that it is bunk
P-values: Scientists slam proposal to raise threshold for statistically significant findings
The war over P-values is now a quagmire, but a fix is suggested
Deep problem created by Darwinian Ron Fisher’s p-values highlighted again
Early Darwinian Ronald Fisher’s p-value measure is coming under serious scrutiny
Misuse of p-values and design in life?
Rob Sheldon explains p-value vs. R2 value in research, and why it matters
If even scientists can’t easily explain p-values… ?
Nature: Banning P values not enough to rid science of shoddy statistics