At AITSE (Caroline Crocker’s outfit), we are reminded of an Atlantic article (November 2010) on how little peer review actually contributes to the growth of a stable knowledge base:
Dr. John Ioannidis, formerly of Harvard University, Johns Hopkins, and the National Institutes of Health, is currently leading a team investigating whether medical research studies can be trusted, and he is making waves. He says that 90% of published results cannot be. Moreover, he claims that peer review by the scientific community is ineffective at addressing the problem. His research shows that, of the top 49 articles published in the last 13 years, only 25% of those claiming to have found an effective intervention (e.g., daily aspirin or vitamin E to reduce the risk of heart attacks) were retested. This is understandable because (1) there is little funding for repeating someone else’s work, and (2) for an article to be accepted for publication it needs to contribute new understanding, which repeated experiments do not. Of the claims that were retested, 41% were found to have been significantly exaggerated or simply wrong.
The problem isn’t that peer review does no good, but that it isn’t doing the good that is needed now.
Suzan Mazur (non-Darwinian evolution news desk, new media) offers a number of articles on the defects of the current system for assessing the validity of research:
“David Noble: Peer Review, Where Are The Scholars?”
“Free Science Peer Review From Cultish Conspiracy”
“Margulis: Peer Review Or ‘Cycle Of Submission’?”