
Is science refereeing out of date?


From Melinda Baldwin at Physics Today (where she is Books editor):

The imprimatur bestowed by peer review has a history that is both shorter
and more complex than many scientists realize.

At the end of the 19th century, an important shift began to take place in the scientific community’s view of referees. With concerns growing about the overall quality of the scientific literature, the referee was no longer simply helping protect the reputation of a scientific society or journal. Instead, the referee was increasingly seen as someone whose work was to protect the reputation and trustworthiness of the entire scientific literature, to staunch a flood of “veritable sewage thrown into the pure stream of science,” as physiologist and Member of Parliament Michael Foster put it. [3]

Hmmm. Those kinds of crusades usually end badly. For one thing, most actual enemies in these matters are within ourselves.

Peer review’s role in the scientific community has never been static. Its form and purpose have been shaped and reshaped according to what scientists needed from the practice—whether it was credibility for a scientific society, improvements in the scientific literature, or assurances to public funders that their money was being spent responsibly. If scientists are to transform peer review’s future, they must consider what purpose they want it to serve—and whether that purpose can indeed be fulfilled by reports from two or more referees.
More.

Peer review was actually a system that “just growed” after World War II and had thrust upon it the role of science cop—without anyone really considering how well the system was adapted to playing that role.

Then, of course, struggling scientists reverently protected its status without addressing the problems. Now they yell at the public for “doubting science” when—in fields about which many laypeople have every reason to be knowledgeable, such as health concerns—there is good reason for doubt.

If science refereeing isn’t out of date, there has got to be a better way of doing it.

Note: Baldwin is also author of Making “Nature”: The History of a Scientific Journal (2015)

See also: Research group: Up to 85% of medical research funds may be wasted

Blinkers Award goes to… Tom Nichols at Scientific American! On why Americans “hate science”: Health science is the way most people interact with science, and in many areas it is running neck and neck with the office rumor mill for credibility.

and

Peer review “unscientific”: Tough words from editor of Nature

Follow UD News at Twitter!

11 Replies to “Is science refereeing out of date?”

  1.
    Armand Jacks says:

    Peer review is certainly not without its flaws, but I have yet to see a constructive alternative.

    In its ideal form, the reviewers concern themselves only with whether or not the paper uses appropriate techniques, identifies all assumptions, controls the variables that can be controlled, etc. However, reviewers are human, with all the failings that entails. Ultimately, the peer review process is based on the assumption of honesty on the part of the author. It is not difficult to draft a paper that will sail through the review process if the author is willing to fabricate his/her data.

    There is a hierarchy of science journals. All scientists and universities know which journals are rigorous in their review process and which use questionable editorial practices. A professor’s tenure case is built on publications in prestigious journals; publication in the questionable journals can actually hurt a career. For example, publications in Science or Nature will weigh in a career’s favour. Publications in BioComplexity, not so much. And this has nothing to do with the subject matter of BioComplexity. It has more to do with the fact that most of its papers are authored by members of its editorial board, a perceived incestuous relationship.

  2.
    Charles says:

    Armand Jacks @ 1:
    “but I have yet to see a constructive alternative.”

    How about a “probationary status” concomitant with publication (without fee, or with limited access) of all data, analytical software, and step-by-step methodology sufficient to conduct independent repeatability studies?

    If the findings are repeated within a credible margin of error, the study has merit and its status is upgraded to “repeatability confirmed.” If not, the study is retracted.

    If no one attempts repeatability, or the data, software, methodology, etc. are withheld, the study remains in a probationary, unconfirmed status indefinitely.

    We don’t need peers. We need proof.

  3.
    News says:

    Armand Jacks at 1 might enjoy Retraction Watch.

  4.
    News says:

    Charles at 2: That’s an idea worth discussing. It needs to matter if research cannot be replicated. It could – today – be cited fifteen times without ever being replicated and then fail replication. Not only does that not add to knowledge, it subtracts from it in a system where wrong answers get deducted from right ones.

  5.
    Charles says:

    News @ 4 (and film at Six)

    Not only does that not add to knowledge, it subtracts from it in a system where wrong answers get deducted from right ones.

    And a testable theory with an experimental methodology that explains how that theory is tested is critical as well.

    Getting the right answers for the wrong reasons is just as bad as the wrong answers. It’s no different than having guessed right, and guessing isn’t reliable.

  6.
    Armand Jacks says:

    News:

    Armand Jacks at 1 might enjoy Retraction Watch.

    You do realize that retractions are a sign that the system works? I would be more concerned if there were no retractions.

    Charles@2, interesting idea. And, I should note, many publishers do request the raw data from the author, and this is made available to the reviewers. As well, every paper includes contact information for the author, and it is not uncommon for other researchers to request the raw data. That being said, many papers involve such a huge amount of data that this approach would not be feasible. For example, I recently submitted a paper for publication that involved over 14,000 data sets and over three million records (containing thirteen pieces of information per record). Uploading all of the data was not possible.

  7.
    Charles says:

    Armand Jacks @ 6:

    Uploading all of the data was not possible.

    Then your paper has not been independently replicated, has it? It ought to be flagged as “probationary”.

    If you’ve made a mistake somewhere, an innocent understandable mistake, you wouldn’t want to set others back by presuming your results were correct, would you.

    People routinely upload hundreds of gigabytes of data to websites. I doubt you have more than a few GB if that. You have to find a way to let others see your data and methods and replicate your results.

  8.
    Armand Jacks says:

    Charles:

    People routinely upload hundreds of gigabytes of data to websites. I doubt you have more than a few GB if that.

    Data transfer and storage are not free. Journals often put limits on data upload size for purely financial reasons.

    You have to find a way to let others see your data and methods and replicate your results.

    I am only an email away. Several readers of my papers have requested my raw data and I have arranged to get it to them.

  9.
    Charles says:

    Armand Jacks @ 8

    Journals often put limits on data upload size for purely financial reasons.

    Good grief. The internet is swimming in cheap or free online storage. Pick a host, put your data up, and link to it in your paper as supplemental data, software, etc.

    Consider the cachet your paper would have if in the abstract you could claim, legitimately, that your results have been independently replicated. How might your citation count change? Speaking invitations? Grant approvals…hhmmmm???

  10.
    Armand Jacks says:

    Charles:

    Consider the cachet your paper would have if in the abstract you could claim, legitimately, that your results have been independently replicated.

    Since the paper I just submitted independently replicates someone else’s work, using independently collected data, and obtaining the same outcome, I don’t see the benefit.

  11.
    Charles says:

    Armand Jacks @ 10

    Since the paper I just submitted independently replicates someone else’s work, using independently collected data, and obtaining the same outcome, I don’t see the benefit.

    To you personally, no. But you’ve just proven my point and conceded all your arguments.

    You independently replicated someone else’s work and it is being considered for publication. Peer reviews don’t get published (book reviews, yes), but peer reviewers’ feedback to the author and journal does not merit independent publication. Yet your replication of someone else’s work, being published on its own merit, demonstrates the publication value of replication over peer review.

    You originally complained you saw no constructive alternative.

    Then you moved the goal posts and complained you couldn’t upload all your data.

    Then you moved the goal posts again and said journals wouldn’t pay for it.

    When I pointed how inexpensive it is for you to upload it to a hosting server, you moved the goal posts again, out to the parking lot, saying you didn’t see the benefit for yourself.

    The original author’s work has been verified with two independent sets of data, his and yours, and now you further claim to not see the benefit of publishing your data.

    Publishing data allows it to be archived so it isn’t lost or contaminated, and evaluated without having to wait for personal requests to be answered or trusting that the data received is in fact the same data that was used in the study (assuming the data is forwarded at all). You may claim to be responsive, and that would make you unique, as most scientists have a multitude of caveats about how their precious data may be used or interpreted – which is the truth behind why they don’t want it published in the first place.

    I think you saw a constructive alternative to peer review, but the lack of repeatability in your own work, and in much of the soft sciences, made you want no part of it. The evidence is your gradual moving of the goal posts from “no constructive alternative” to “data can’t be uploaded,” then “journals won’t pay for it,” and finally to “no benefit.”
