Uncommon Descent Serving The Intelligent Design Community

Fable: More on what happened when one team tried publishing a failed replication paper in Nature

Image: What’s hot? What’s not? (Niklas Bildhauer, Wikimedia)

If science were mostly disputes over trivia, replication would not matter. But when studies of the effectiveness of cancer treatments fail replication, you might want to take an interest in the problem, especially if you think you might ever need cancer treatment yourself.

After all the hoopla about replication studies as one plank in the reform of a broken peer review system, it’s interesting to see how the top science journal reacted. In the first installment by Mante Nieuwland over at Retraction Watch, we learned,

Importantly, our multi-laboratory replication study tackled all the methodological and statistical issues with DUK05 that have come up in recent years. We tested a sample more than 10 times greater than that of DUK05, we employed both the original analysis and an improved statistical analysis, and we pre-registered all the crucial analyses to provide a time-stamped proof that the analyses were not tailored to achieve a certain result. More.

Sounds promising…

What happened next (Part 2):

Our initial expectation was that Nature Neuroscience would welcome our replication effort for review, because of two Nature editorials. The August 2016 Nature editorial titled “Go forth and replicate” actively solicited submissions such as ours to its journals.

A week later, Nature Neuroscience triaged our paper, that is, they rejected it without sending it out for peer review. The editorial team thought that our paper was not suitable for the general readership of Nature Neuroscience, and would be better appreciated in a more specialized journal. We were surprised by this disappointing news, as were many of our colleagues who were not involved in the replication study. After all, if the original study was general enough for Nature Neuroscience, then our replication study would be as well.

The researchers demonstrated that there would indeed be interest in their study among readers of Nature Neuroscience:

In less than 3 days after posting on bioRxiv, our preprint had amassed a great deal of online attention, ranking in the 99th percentile of all research outputs ever scored by Altmetric.

So they appealed.

Nature decided that what they had done amounted to a mere “refutation” and that the replication researchers would be permitted to publish a short letter shorn of the data that was the whole point of the exercise.

Meanwhile, the authors of the original study had the opportunity to defend their work in a way calculated to benefit themselves more than readers:

“The commentary solely focused on the original analysis, and did not even mention the improved analyses or Bayesian analyses.” More.

If there is no fixed body of work that must be addressed by all contributors to a serious discussion, it might as well be a tweet war between celebs.

So what finally happened? From Part 3 at Retraction Watch:

More than two months later, on Nov 8th, we received the editorial decision that our paper was rejected. As editorial letters go, it did not say much except that our conclusions did not significantly challenge the conclusions of DUK05, and merely summarized some of the topics mentioned by the three reviewers (R1-3). R1 was very positive about our paper and supported our conclusions, but R2 and R3 had a range of concerns, which I cannot cite directly, but below is the gist:

R2 wanted to see a head-to-head comparison between our results and those of DUK05 when precisely the same methods were used. However, we had made all correlation results available for review (with the original and new baseline correction), and we used a Bayesian analysis to test whether we replicated both the size and direction of the original, which was the case for the nouns but not the articles. Somehow, all these data and analyses seem to have been ignored or missed by this reviewer.

R3 also provided another, rather odd argument for rejection, namely that if the studies would be published together, readers would only read our study and ignore the commentary. But this does not seem like a reasonable argument for rejection, and in fact it is completely incompatible with the journal’s reason for publishing refutations together with commentaries in the first place.

We pointed out some of these issues to Nature Neuroscience in an email and briefly considered appealing yet again. However, we quickly decided against this given that rejecting our paper based on such comments conveyed (to us at least) an intention to reject our paper no matter what. More.

It’s not clear just what stake Nature’s editors have in protecting the original neuroscience paper on linguistics from a failed replication. The sense around here is that if their stake were obvious, no one would even consider submitting a paper to them in the first place.

Is the grip of unchallengeable orthodoxy hardening? Or is there a backstory? The trouble is, others will now be discouraged from submitting replication papers, so we won’t know.

Moral of this fable: Everyone wants reform until they find out which fiddles they can’t fudge anymore. Reform dies at birth and zealots continue to blame the world for not “trusting science.”

See also: The buzz now is all for replication papers but what happened when researchers submitted one to Nature?

Reproducibility problem making science extinct?


Replication crisis: Neuroskeptic on foxes guarding the henhouse

johnnyb: Maybe what Nature really meant is that they welcomed replication studies as long as you were investigating the replicability of *other* journals’ papers. Perhaps the fact that the original paper was published in Nature was the biggest part of the problem?

News: Then, johnnyb at 1, Nature is not part of the solution. As the replication team rightly said, publishing their results in the same journal was the optimum information flow.
