Uncommon Descent: Serving The Intelligent Design Community

Category: Peer review

The New York Times has noticed all the retractions

Here: “Science, Now Under Scrutiny Itself.” The crimes and misdemeanors of science used to be handled mostly in-house, with a private word at the faculty club, barbed questions at a conference, maybe a quiet dismissal. On the rare occasion when a journal publicly retracted a study, it typically did so in a cryptic footnote. Few were the wiser; many retracted studies have been cited as legitimate evidence by others years after the fact. But that gentlemen’s world has all but evaporated, as a remarkable series of events last month demonstrated. In mid-May, after two graduate students raised questions about a widely reported study on how political canvassing affects opinions of same-sex marriage, editors at the journal Science, where the study Read More ›

Can science paper retractions become reality TV?

Or even a mystery series? At Retraction Watch: Upon realizing they had experienced a case of mistaken cell-line identity, the authors of a 2014 Nature paper on lung cancer think “it prudent to retract pending more thorough investigation,” as they explain in a notice published Wednesday. But the problem seems to stem from more than just honest error, according to corresponding author Julian Downward, a scientist at the Francis Crick Institute in the UK. In a 1,215-word statement, sent to us via the Director of Research Communications and Engagement at Cancer Research UK, which funds Downward’s research, Downward told us the backstory not presented in the journal’s retraction note: … More. Your mileage may vary, but the people I Read More ›

Pressure to publish or perish does not cause misconduct, new study says

At Retraction Watch: A new study suggests that much of what we think about misconduct — including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers — is incorrect. Some factors were associated with a higher rate of misconduct, of course — a lack of research integrity policy, and cash rewards for individual publication performance, for instance. Scientists just starting their careers, and those in environments where “mutual criticism is hampered,” were also more likely to commit misconduct. More. That makes sense. To argue the opposite is like saying that the need to make a profit causes car dealers to dump rolling coffins on their customers. Given the career-ending risks, there must be Read More ›

Serious doubt about peer-reviewed studies is increasing

See, for example, “Science has taken a turn towards darkness” (This, by the way, is from the distinguished medical journal The Lancet, not from “A-Crock-a-Lypse News and Used Car Sales.”) Now, this from Times Higher: I used to be the editor of the BMJ, and we conducted our own research into peer review. In one study we inserted eight errors into a 600-word paper and sent it to 300 reviewers. None of them spotted more than five errors, and a fifth didn’t detect any. The median number spotted was two. These studies have been repeated many times with the same result. Other studies have shown that if reviewers are asked whether a study should be published there is little more agreement than would Read More ›

Shoddy science practice still matters?

One could hardly believe, reading this: It now appears that LaCour, whose pending appointment at Princeton based on his work is in doubt, made up more than just his data. He appears to have claimed on his CV a UCLA teaching award that doesn’t exist. I’ll let New York magazine pick up the climax of the story from here: But why does the fact that stuff doesn’t exist in this universe even matter? Mustn’t it exist in another one? And isn’t it enough to just sign on to some agenda now? See also: Quick summary of origin of life problems. Follow UD News at Twitter!

Science is like hockey

It can be the greatest game on Earth. And it can be vastly more useful. But: Further to “A growing serious interest in the science journal retraction problem?”, this also landed in the In Bin yesterday: In the language of science, calling results “incredibly nice” is not a compliment—it’s tantamount to accusing a researcher of being cavalier, or even of fabricating findings. But rather than heed the warning, the journal, Anesthesia & Analgesia, punted. It published the letter to the editor, together with an explanation from Fujii, which asked, among other things, “how much evidence is required to provide adequate proof?” In other words, “Don’t believe me? Tough.” Anesthesia & Analgesia went on to publish 11 more of Fujii’s papers. Read More ›

A growing serious interest in the science journal retraction problem?

Maybe. It even penetrated as far as the New York Times: Retractions can be good things, since even scientists often fail to acknowledge their mistakes, preferring instead to allow erroneous findings simply to wither away in the back alleys of unreproducible literature. But they don’t surprise those of us who are familiar with how science works; we’re surprised only that retractions aren’t even more frequent. … Every day, on average, a scientific paper is retracted because of misconduct. Two percent of scientists admit to tinkering with their data in some kind of improper way. That number might appear small, but remember: Researchers publish some 2 million articles a year, often with taxpayer funding. In each of the last few years, Read More ›

Last religion post for the week: Jerry Coyne on religion

Drat, just when I (O’Leary for News) complained that the new atheists had given up threatening each other with legal action, raising Cain about genome mapper Francis Collins, or starting hoo-haws in elevators, this item turned up in the In Bin: Jerry Coyne in The Scientist: But while science and religion both claim to discern what’s true, only science has a system for weeding out what’s false. In the end, that is the irreconcilable conflict between them. Science is not just a profession or a body of facts, but, more important, a set of cognitive and practical tools designed to understand brute reality while overcoming the human desire to believe what we like or what we find emotionally satisfying. Read More ›

Unbelievable: The tenured academic’s response to faked gay marriage opinion study

Noted by Barry Arrington here. The whitewash duly reported in the New Yorker: In retrospect, Green wishes he had asked for the raw data earlier. And yet, in collaborations in which data is collected at only one institution, it’s not uncommon for the collecting site to anonymize and code it before sharing it. The anonymized data Green did see looked plausible and convincing. “He analyzed it, I analyzed it—I have the most ornate set of graphs and charts and every possible detail analyzed five different ways,” Green said. Ultimately, though, the system takes for granted that no one would be so brazen as to create the actual raw data themselves. The author burbles on, as expected, about the nature of Read More ›

Should social media be used to evaluate research results?

Further to “Notable retractions of possible interest,” here’s something interesting from Nature: “Potential flaws in genomics paper scrutinized on Twitter: Reanalysis of a study that compared gene expression in mice and humans tests social media as a forum for discussing research results.” A recent Twitter conversation that cast doubt on the conclusions of a genomics study has revived a debate about how best to publicly discuss possible errors in research. Yoav Gilad, a geneticist at the University of Chicago in Illinois, last month wrote on Twitter that fundamental errors in the design and data analysis of a December 2014 study led to an unfounded conclusion about the genetic similarities between mice and humans. Gilad and his co-author Orna Mizrahi-Man, a Read More ›

Notable retractions of possible interest

Three items from Retraction Watch:

1. Unhelpful retractions: A group of authors have withdrawn a 2011 Journal of Biological Chemistry paper, but then appear to have re-published almost the same paper a month later, only this time with just five of the original nine authors. … As we’ve come to expect from the JBC, here’s the full retraction notice, in all its inexplicit glory … If you ever find out what happened, tell us.

2. Faked data: In what can only be described as a remarkable and swift series of events, one of the authors of a much-ballyhooed Science paper claiming that short conversations could change people’s minds on same-sex marriage is retracting it following revelations that the data were Read More ›

Well, this might be useful: Tackling biases in science

Biases? In science? From Nautilus: Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper with the provocative title “Why most published research findings are false.” Written by John Ioannidis, a professor of medicine at Stanford University, it didn’t actually show that any particular result was wrong. Instead, it showed that the statistics of reported positive findings was not consistent with how often one should expect to find them. As Ioannidis concluded more recently, “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted.” It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely Read More ›
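For readers who want to see where claims like Ioannidis’s come from, the core of his argument is a simple positive-predictive-value calculation: when only a minority of tested hypotheses are true, a sizeable share of “statistically significant” findings will be false even if every study is run honestly. Here is a minimal sketch of that arithmetic in Python; the three inputs are illustrative assumptions of ours, not figures from his paper.

# Positive predictive value of a "statistically significant" finding,
# in the style of argument used in Ioannidis (2005).
# All three inputs are illustrative assumptions, not the paper's figures.
prior = 0.10   # assumed fraction of tested hypotheses that are actually true
power = 0.80   # assumed chance of detecting a real effect (1 - beta)
alpha = 0.05   # conventional false-positive rate

true_positives = prior * power            # real effects correctly flagged
false_positives = (1 - prior) * alpha     # null effects flagged anyway
ppv = true_positives / (true_positives + false_positives)

print(f"Share of positive findings that are real: {ppv:.0%}")

With these assumed inputs, roughly a third of positive findings would be false before any bias or selective reporting is added, which is the kind of mismatch between reported and expected positives that Ioannidis highlighted.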

Kirk Durston looks at the corruption of 21st century science

Friend Kirk Durston offers a five-part series on the corruption of 21st century science here: Part I: Should you have blind faith in what science has become today? This post will be the first in a series dealing with the corruption of 21st century science. As a scientist, I am increasingly appalled and even, just this past week, shocked at what is passing as 21st century science. It has become a mix of good science, bad science, creative story-telling, science fiction, scientism (atheism dressed up as science), citation-bias, huge media announcements followed by quiet retractions, massaging the data, exaggeration for funding purposes, and outright fraud all rolled up into what I refer to as 21st century science. In some disciplines, Read More ›

Proposed new guidelines for data-driven science

The Leiden Manifesto for Research Metrics suggests ten principles to guide research evaluation. Here’s the .pdf (which may download automatically). Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation. Here’s one guideline: 5. Allow those evaluated to verify data and analysis. To ensure data quality, all researchers Read More ›

Nature: Banning P values not enough to rid science of shoddy statistics

From Nature, we learn that in statistics, problems with P values are just the tip of the iceberg: P values are an easy target: being widely used, they are widely abused. But, in practice, deregulating statistical significance opens the door to even more ways to game statistics — intentionally or unintentionally — to get a result. Replacing P values with Bayes factors or another statistic is ultimately about choosing a different trade-off of true positives and false positives. Arguing about the P value is like focusing on a single misspelling, rather than on the faulty logic of a sentence. … The ultimate goal is evidence-based data analysis. This is analogous to evidence-based medicine, in which physicians are encouraged to use only Read More ›
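The trade-off described in that last quotation can be made concrete with a small simulation. The sketch below is ours and purely illustrative; the sample size, effect size, and thresholds are assumptions, not anything taken from the Nature piece. It simulates many two-group studies with and without a real effect and shows that a stricter evidence threshold, whether a tighter P value or, in different units, a Bayes factor cutoff, lowers the false-positive rate only by also lowering the chance of detecting real effects.

# Illustrative simulation of the threshold trade-off discussed above.
# All numbers are assumptions; requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 20000     # per-group sample size; number of simulated studies
effect = 0.5              # assumed true effect, in standard deviations

def two_group_pvalues(delta):
    # Simulate `trials` studies comparing two groups whose means differ by `delta`.
    a = rng.normal(0.0, 1.0, size=(trials, n))
    b = rng.normal(delta, 1.0, size=(trials, n))
    return stats.ttest_ind(a, b, axis=1).pvalue

null_p = two_group_pvalues(0.0)      # no real effect present
real_p = two_group_pvalues(effect)   # real effect present

for cutoff in (0.05, 0.005):
    false_pos = (null_p < cutoff).mean()   # how often pure noise clears the bar
    detected = (real_p < cutoff).mean()    # how often the real effect clears it
    print(f"threshold {cutoff}: false-positive rate {false_pos:.3f}, power {detected:.2f}")

Under these assumptions the stricter threshold cuts false positives roughly tenfold but also sharply reduces the chance of detecting the real effect; swapping P values for Bayes factors changes the currency of that trade-off, not its existence.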