How scientists fool themselves – and how they can stop
This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today’s environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept ‘reasonable’ outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third [2]. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology [3]. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.
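The excerpt's point about finding "false patterns in randomness" is easy to demonstrate. Here is a minimal sketch (not from the article; the sample sizes and thresholds are illustrative assumptions) showing that if you correlate enough pairs of completely independent random variables, a predictable fraction will look "significant" by chance alone:

```python
import random

random.seed(1)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# 1,000 pairs of genuinely unrelated "variables", 20 samples each.
# |r| > 0.44 corresponds roughly to p < 0.05 (two-tailed) at n = 20.
hits = 0
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(20)]
    y = [random.gauss(0, 1) for _ in range(20)]
    if abs(pearson_r(x, y)) > 0.44:
        hits += 1

print(f"'Significant' correlations found in pure noise: {hits} / 1000")
```

Roughly 5% of the comparisons clear the threshold even though every dataset is pure noise, which is exactly the trap a researcher sifting terabytes of multidimensional data can fall into without ever intending to cheat.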
If these people want to get somewhere, they could start by losing the implausible “African savannah.” It stinks of evolutionary psychology. But worse, it signals that they are not serious. Ten years from now, they will still be nowhere.
Fact: Self-deception has never been a good idea, and never will be. Why don’t we start there?
That said, one genuinely heartening thing in all this is that many scientists now recognize self-deception as a real problem, one facet of the larger peer-review problem.
Examples abound. The widespread progressive bias in the social sciences, for instance, is well recognized and has been linked to sloppy and even fraudulent science.
Yes. Monochromatic bias will do that. The only solution is to add new, different voices.
Yet, while insisting that they want to do something about the problem, some hope instead to rely on one gimcrack or another to select out bias.
The only useful “bias selector outer” is someone whose life and research experience can function as a counterweight.*
So if they don’t want to do that, they are saying that they don’t want to solve the problem; they just want to hide it.
* For example, a study of “human anger management strategies” where all investigators and participants were men, and none were women, would come up with some very skewed results. Of course, they would have a rationalization for that bias, but the results would remain skewed.