Maybe the system is unreformable:
A recent write-up by Alvaro de Menard, a participant in the Defense Advanced Research Projects Agency’s (DARPA) replication markets project (more on this below), makes the case for a more depressing view: The processes that lead to unreliable research findings are routine, well understood, predictable, and in principle pretty easy to avoid. And yet, he argues, we’re still not improving the quality and rigor of social science research.
While other researchers I spoke with pushed back on parts of Menard’s pessimistic take, they do agree on something: a decade of talking about the replication crisis hasn’t translated into a scientific process that’s much less vulnerable to it. Bad science is still frequently published, including in top journals — and that needs to change.
Most papers fail to replicate for totally predictable reasons. – Kelsey Piper, “Science has been in a ‘replication crisis’ for a decade. Have we learned anything?” at Vox
Systems can be unreformable when there is no compelling reason to pursue reform, as when, for example, bad research is funded right along with good research.
A view worth looking at is Robert J. Marks’s “Why it’s so hard to reform peer review.” Reformers are battling numerical laws that govern how incentives work. Know your enemy! Goodhart’s Law, for example, captures the unintended effect of using numerical metrics as goals: “When a measure becomes a target, it ceases to be a good measure.”