
Imagine someone asking: just how much wisdom is there in the scientific crowd?


Surprisingly, someone did:

A new crowdsourced experiment—involving more than 15,000 subjects and 200 researchers in more than two dozen countries—proves that point. When various research teams designed their own means of testing the very same set of research questions, they came up with divergent, and in some cases opposing, results.

The crowdsourced study is a dramatic demonstration of an idea that’s been widely discussed in light of the reproducibility crisis—the notion that subjective decisions researchers make while designing their studies can have an enormous impact on their observed results. Whether through p-hacking or via the choices they make as they wander the garden of forking paths, researchers may intentionally or inadvertently nudge their results toward a particular conclusion.

Christie Aschwanden, “200 Researchers, 5 Hypotheses, No Consistent Answers” at Wired
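It is worth pausing on the mechanism. Below is a minimal sketch in Python, with made-up sample sizes and analysis "forks" (none of it from the Wired piece), of how the garden of forking paths inflates false positives: even when two groups are drawn from the very same distribution, a researcher who tries a few defensible-sounding analyses and reports the best one will "find" an effect far more often than the nominal 5% error rate suggests.

```python
# A minimal sketch of p-hacking via the "garden of forking paths":
# two groups drawn from the SAME distribution (no real effect),
# analyzed several different but individually defensible ways.
# All parameters here are illustrative assumptions.
import math
import random
import statistics

def t_test_p(a, b):
    """Two-sample t-test p-value (normal approximation; fine for n ~ 50)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided

def forked_analyses(a, b):
    """Each 'fork' is a reasonable-sounding analysis choice."""
    yield t_test_p(a, b)                                # analyze everyone
    yield t_test_p(a[: len(a) // 2], b[: len(b) // 2])  # "first batch" only
    trim = lambda xs: [x for x in xs if abs(x) < 2.0]   # drop "outliers"
    yield t_test_p(trim(a), trim(b))

random.seed(1)
trials, false_hits = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]  # same distribution!
    if min(forked_analyses(a, b)) < 0.05:        # report only the best fork
        false_hits += 1

# Any one fork alone would cry "significant" about 5% of the time;
# picking the best of three pushes the false-positive rate above that.
print(f"'Significant' in {false_hits / trials:.1%} of null experiments")
```

The point of the sketch: no single step is fraud. Each fork is a choice a careful researcher might defend, which is exactly why the nudging can be inadvertent.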

Admittedly, this is a finding from psychology, which is not clearly a science. But how widespread is the problem? Well…

This problem—and this approach to demonstrating it—isn’t unique to social psychology. One recent project similarly asked 70 teams to test nine hypotheses using the same data set of functional magnetic resonance images. No two teams used the exact same approach, and their results varied as you might expect.

If one were judging only by the outcomes of these projects, it might be reasonable to guess that the scientific literature would be a thicket of opposing findings. (If different research groups often arrive at different answers to the same questions, then the journals should be filled with contradictions.) Instead, the opposite is true. Journals are full of studies that confirm the existence of a hypothesized effect, while null results are squirreled away in a file drawer. Think of the results described above on the implicit-bias hypothesis: Half the groups found evidence in favor and half found evidence against. If this work had been carried out in the wilds of scientific publishing, the former would have taken root in formal papers, while the rest would have been buried and ignored.

Christie Aschwanden, “200 Researchers, 5 Hypotheses, No Consistent Answers” at Wired
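The file-drawer effect described above is simple enough to put in a toy model. This sketch uses assumed numbers (an even split of findings, invented for illustration, not taken from the Wired piece) to show how a literature filtered for positive results looks unanimous even when the underlying studies disagree:

```python
# Toy model of the file drawer: studies split roughly evenly for and
# against a hypothesized effect, but only "positive" findings reach
# the journals. All numbers are illustrative assumptions.
import random

random.seed(7)
studies = [random.choice(["supports", "refutes"]) for _ in range(100)]

published = [s for s in studies if s == "supports"]    # written up, accepted
file_drawer = [s for s in studies if s == "refutes"]   # buried and ignored

print(f"Actual split: {studies.count('supports')} for, "
      f"{studies.count('refutes')} against")
print(f"What the journals show: {len(published)} for, 0 against")
print(f"In the file drawer: {len(file_drawer)} null/contrary results")
```

A reader surveying only the published half would conclude the effect is settled science, which is precisely the distortion the article warns about.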

Wow. Read and bookmark the whole piece. Remember it if some yob accuses you of being anti-science because, based on experience and judgment, you question some finding widely puffed in the media.

Comments
> If one were judging only by the outcomes of these projects, it might be reasonable to guess that the scientific literature would be a thicket of opposing findings.

But once you take human nature into account, it is quickly obvious why this does not happen: people will always consider the effect of what they're saying on the probability of getting it published. P-hacking, HARKing, coercive citation, etc., all interfere with the ideal of science as dispassionately discovering truth. Always take human nature into account.

EDTA
December 7, 2019 04:03 PM PDT
Cross-checking whether the reproducibility problem in psychology extends to science in general using a data set of functional magnetic resonance images does not seem like a very robust measure. Indeed, it does not appear that they even left the domain of psychology.

bornagain77
December 7, 2019 01:51 PM PDT
