A media release from the Association for Psychological Science (May 24, 2012) tells us something we already knew, “Questionable Research Practices Surprisingly Common,” but suggests a method of reform:
“There have been some very widely publicized cases of outright fraud,” says Leslie K. John of Harvard Business School. For example, a South Korean researcher achieved worldwide fame for cloning human stem cells—and infamy later when it turned out he had faked his data. “That’s very clear-cut. It’s an academic felony. But the focus of this paper isn’t on these clear-cut cases; it’s about the more subtle ways of manipulating the truth.” Along with her coauthors, George Loewenstein and Drazen Prelec, John designed a survey that was e-mailed to 5,964 psychological scientists; 2,155 responded. The researchers asked the questions using a method that attempts to make people more honest, in part by giving them an incentive to tell the truth.
They found that a surprising number of people had engaged in questionable research practices. For example, half the scientists admitted to having reported only the experiments that gave the results they wanted. This may not sound dramatic, and it’s not as bad as making up data, but it gives a skewed sense of the research; if scientists only report the results that support their hypotheses, they may leave out an important part of the picture. Other questionable practices include deciding whether to exclude data from a study after looking to see whether doing so affects the results (43.4 percent of respondents) and reporting an unexpected finding as if it had been expected all along (35 percent). And 1.7 percent of scientists admitted to having faked their data.
It’s impossible to tell from the study how often these things happened; they could be part of the day-to-day practice of science, or people could be admitting to something they did once in college. “I think these are very high rates, but we don’t know whether this is people’s standard operating procedure or whether they’re one-off activities,” John says.
“Does this mean we can’t trust psychologists? No. No, this does not say that,” John says. “But there are clearly some problems.” One possibility might be for psychological scientists to institute a system like the one being adopted in medical research, in which journals accept articles for publication only if the study was registered before it began, with details about how it would be executed. “I think psychologists are motivated to do good science,” John says. “But these findings are disconcerting and signal the need for reform.”
We’ll see. Sometimes reform must go deeper than that proposal. Researchers need to become the sort of people who would not take those shortcuts in the first place. They need to accept that their data sets might not confirm fashionable views. And the profession needs to detect people who are gaming the system earlier and more often. If that’s not what they want, they don’t want science.
It is curious how little help peer review has been.
Note: One wonders what would happen, under rigorous examination, to all those studies of apes claiming that they mourn their dead, suffer self-doubt, make dolls, have police, go to war, and use “innovative, foresighted methods.” The obvious question is: if so, why are they still screaming in the trees? Something doesn’t smell right. But that field may be protected from scrutiny much longer, because of the emotional investment in Darwinism.