Admittedly this is the social sciences, but it may be worth unpacking anyway. From ScienceDaily:
For the past decade, social scientists have been unpacking a “replication crisis” that has revealed how findings of an alarming number of scientific studies are difficult or impossible to repeat. Efforts are underway to improve the reliability of findings, but cognitive psychology researchers at the University of Massachusetts Amherst say that not enough attention has been paid to the validity of theoretical inferences made from research findings.
Using an example from their own field of memory research, they designed a test for the accuracy of theoretical conclusions made by researchers. The study was spearheaded by associate professor Jeffrey Starns, professor Caren Rotello, and doctoral student Andrea Cataldo, who has now completed her Ph.D. They shared authorship with 27 teams or individual cognitive psychology researchers who volunteered to submit their expert research conclusions for data sets sent to them by the UMass researchers.
“Our results reveal substantial variability in experts’ judgments on the very same data,” the authors state, suggesting a serious inference problem. Details are newly released in the journal Advances in Methods and Practices in Psychological Science…
Rotello adds, “The message here is not that memory researchers are bad, but that this general tool can assess the quality of our inferences in any field. It requires teamwork and openness. It’s tremendously brave what these scientists did, to be publicly wrong. I’m sure it was humbling for many, but if we’re not willing to be wrong we’re not good scientists.” Further, “We’d be stunned if the inference problems that we observed are unique. We assume that other disciplines and research areas are at risk for this problem.” Paper (paywall) – Jeffrey J. Starns, Andrea M. Cataldo, Caren M. Rotello, Jeffrey Annis, Andrew Aschenbrenner, Arndt Bröder, Gregory Cox, Amy Criss, Ryan A. Curl, Ian G. Dobbins, John Dunn, Tasnuva Enam, Nathan J. Evans, Simon Farrell, Scott H. Fraundorf, Scott D. Gronlund, Andrew Heathcote, Daniel W. Heck, Jason L. Hicks, Mark J. Huff, David Kellen, Kylie N. Key, Asli Kilic, Karl Christoph Klauer, Kyle R. Kraemer, Fábio P. Leite, Marianne E. Lloyd, Simone Malejka, Alice Mason, Ryan M. McAdoo, Ian M. McDonough, Robert B. Michael, Laura Mickes, Eda Mizrak, David P. Morgan, Shane T. Mueller, Adam Osth, Angus Reynolds, Travis M. Seale-Carlisle, Henrik Singmann, Jennifer F. Sloane, Andrew M. Smith, Gabriel Tillman, Don van Ravenzwaaij, Christoph T. Weidemann, Gary L. Wells, Corey N. White, Jack Wilson. Assessing Theoretical Conclusions With Blinded Inference to Investigate a Potential Inference Crisis. Advances in Methods and Practices in Psychological Science, 2019; 251524591986958 DOI: 10.1177/2515245919869583 More.
Abstract: Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.
In other words, even if social scientists can replicate research results, there may be little agreement about what, if anything, they mean. Is it a good idea for governments to consult them on social policy?
See also: “Motivated reasoning” defacing the social sciences?
At the New York Times: Defending the failures of social science to be science. Okay. So if we think that — in principle — such a field is always too infested by politics to be seriously considered a science, we’re “anti-science”? There’s something wrong with preferring to support sciences that aren’t such a laughingstock? Fine. The rest of us will own that and be proud.
What’s wrong with social psychology, in a nutshell
How political bias affects social science research
Stanford Prison Experiment findings a “sham” – but how much of social psychology is legitimate anyway?
A BS detector for the social sciences
All sides agree: progressive politics is strangling social sciences
Back to school briefing: Seven myths of social psychology: Many lecture room icons from decades past are looking tarnished now. (That was 2014 and it has gotten worse since.)
Follow UD News at Twitter!