Uncommon Descent: Serving The Intelligent Design Community

Researchers: There is an inference crisis as well as a replication crisis

Admittedly, this is the social sciences, but it may be worth unpacking anyway. From ScienceDaily:

For the past decade, social scientists have been unpacking a “replication crisis” that has revealed how findings of an alarming number of scientific studies are difficult or impossible to repeat. Efforts are underway to improve the reliability of findings, but cognitive psychology researchers at the University of Massachusetts Amherst say that not enough attention has been paid to the validity of theoretical inferences made from research findings.

Using an example from their own field of memory research, they designed a test for the accuracy of theoretical conclusions made by researchers. The study was spearheaded by associate professor Jeffrey Starns, professor Caren Rotello, and doctoral student Andrea Cataldo, who has now completed her Ph.D. They shared authorship with 27 teams or individual cognitive psychology researchers who volunteered to submit their expert research conclusions for data sets sent to them by the UMass researchers.

“Our results reveal substantial variability in experts’ judgments on the very same data,” the authors state, suggesting a serious inference problem. Details are newly released in the journal Advances in Methods and Practices in Psychological Science…

Rotello adds, “The message here is not that memory researchers are bad, but that this general tool can assess the quality of our inferences in any field. It requires teamwork and openness. It’s tremendously brave what these scientists did, to be publicly wrong. I’m sure it was humbling for many, but if we’re not willing to be wrong we’re not good scientists.” Further, “We’d be stunned if the inference problems that we observed are unique. We assume that other disciplines and research areas are at risk for this problem.” Paper (paywall): Jeffrey J. Starns, Andrea M. Cataldo, Caren M. Rotello, Jeffrey Annis, Andrew Aschenbrenner, Arndt Bröder, Gregory Cox, Amy Criss, Ryan A. Curl, Ian G. Dobbins, John Dunn, Tasnuva Enam, Nathan J. Evans, Simon Farrell, Scott H. Fraundorf, Scott D. Gronlund, Andrew Heathcote, Daniel W. Heck, Jason L. Hicks, Mark J. Huff, David Kellen, Kylie N. Key, Asli Kilic, Karl Christoph Klauer, Kyle R. Kraemer, Fábio P. Leite, Marianne E. Lloyd, Simone Malejka, Alice Mason, Ryan M. McAdoo, Ian M. McDonough, Robert B. Michael, Laura Mickes, Eda Mizrak, David P. Morgan, Shane T. Mueller, Adam Osth, Angus Reynolds, Travis M. Seale-Carlisle, Henrik Singmann, Jennifer F. Sloane, Andrew M. Smith, Gabriel Tillman, Don van Ravenzwaaij, Christoph T. Weidemann, Gary L. Wells, Corey N. White, Jack Wilson. Assessing Theoretical Conclusions With Blinded Inference to Investigate a Potential Inference Crisis. Advances in Methods and Practices in Psychological Science, 2019; 251524591986958. DOI: 10.1177/2515245919869583. More.

Abstract: Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.
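To make the paradigm concrete, here is a minimal sketch of the blinded-inference idea in Python. It is not the authors’ task or data: the hit and false-alarm rates, the three “expert” decision rules, and every threshold below are invented for illustration. Data are simulated under a hidden condition, each rule infers that condition from the blinded data alone, and we tally both accuracy and how often the rules agree on the very same data set.

    # Hypothetical sketch of blinded inference (all numbers and rules invented):
    # generate data under a hidden condition, let several stand-in "expert"
    # rules infer that condition from the data, then score accuracy/agreement.
    import random

    random.seed(1)

    def simulate_data(strong_memory, n=200):
        # Hit and false-alarm rates; the hidden independent variable is
        # whether the (made-up) memory-strength manipulation was applied.
        hit_rate = 0.85 if strong_memory else 0.65
        false_alarm_rate = 0.20
        hits = sum(random.random() < hit_rate for _ in range(n))
        false_alarms = sum(random.random() < false_alarm_rate for _ in range(n))
        return hits / n, false_alarms / n

    # Three stand-in "experts", each committed to a different measurement model.
    experts = {
        "difference rule": lambda h, f: (h - f) > 0.55,
        "ratio rule":      lambda h, f: h / max(f, 1e-9) > 3.8,
        "hit-rate rule":   lambda h, f: h > 0.75,
    }

    trials, unanimous = 100, 0
    correct = {name: 0 for name in experts}
    for _ in range(trials):
        truth = random.random() < 0.5            # hidden condition
        h, f = simulate_data(truth)              # "blinded" data set
        verdicts = {name: rule(h, f) for name, rule in experts.items()}
        for name in experts:
            correct[name] += verdicts[name] == truth
        unanimous += len(set(verdicts.values())) == 1

    print({name: correct[name] / trials for name in experts})
    print("experts unanimous on", unanimous, "of", trials, "data sets")

The point the toy makes: each rule is individually defensible and fairly accurate, yet on a nontrivial share of identical data sets the “experts” reach different verdicts. Replicating the data would not resolve that disagreement, which is the gap between reliability and inference the paper is after.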

In other words, even if social scientists can replicate research results, there may be little agreement about what, if anything, they mean. Is it a good idea for governments to consult them on social policy?

See also: “Motivated reasoning” defacing the social sciences?

At the New York Times: Defending the failures of social science to be science. Okay. So if we think that, in principle, such a field is always too infested by politics to be seriously considered a science, we’re “anti-science”? There’s something wrong with preferring to support sciences that aren’t such a laughingstock? Fine. The rest of us will own that and be proud.

What’s wrong with social psychology, in a nutshell

How political bias affects social science research

Stanford Prison Experiment findings a “sham” – but how much of social psychology is legitimate anyway?

BS detector for the social sciences

All sides agree: progressive politics is strangling social sciences

and

Back to school briefing: Seven myths of social psychology: Many lecture room icons from decades past are looking tarnished now. (That was 2014 and it has gotten worse since.)

Follow UD News at Twitter!

Comments
The will to accept logical inference is absolutely primordial, isn’t it? Without it, we might as well be a robot, a pebble, even, a void.

Axel, October 17, 2019, 12:46 PM PDT
PaV - where in the paper do they say that? They mention the issue with his data being “too good”, but don’t criticise the inferences he made from his data. Which is ironic, as historians of science have pointed out that Mendel was thinking he was looking at hybridisation between species, so his own interpretation of his data was wrong.

Bob O'H, October 17, 2019, 08:24 AM PDT
From the paper:

As characterized here, blinded inference can be used in any scenario in which researchers claim that they can (a) measure a theoretical construct based on data patterns and (b) manipulate that theoretical construct with independent variables. If both of these claims are true, then researchers should be able to make accurate inferences about the state of independent variables specifically linked to the theoretical construct by analyzing data. If researchers fail in this task, then it suggests that at least one of the claims is false, i.e., researchers either lack valid techniques for measuring the theoretical construct, lack valid ways to manipulate it, or both.

Also,

To preview, we found surprisingly high variability in the inferences of memory researchers asked to interpret the same data, and we also found that many researchers made more inferential errors than would be expected from sampling variability in the data. Given that our task required a relatively simple inference, we suspect that this pattern of surprisingly low inferential accuracy is likely to be found in other research areas.

Interestingly, in this paper they point to Mendel as someone who made improper inferences. Imagine that?

PaV, October 17, 2019, 07:46 AM PDT
I do not know how to interpret this data, though I have a fear. If the words “research” and “model” are searched through Google’s Ngram program, the word frequencies parted company in 1956, with “model” soaring stratospherically compared with “research.”

Belfast, October 17, 2019, 01:15 AM PDT
