Uncommon Descent Serving The Intelligent Design Community
Category: Peer review

“Texas Sharpshooter Fallacies” produce bad science data

Robert J. Marks, author with design theorist William Dembski and Winston Ewert of Introduction to Evolutionary Informatics, talks with Gary Smith, author of The AI Delusion, about how, in general, biased data is produced. Smith: Texas Sharpshooter Fallacy #1 is that I’m going to prove what a great shot I am, so I stand outside a barn, paint a thousand targets on it, and fire my gun. What do you know, I’m lucky, I hit a target. Then I go and erase all the other targets and say, look, I hit the target. And, of course, it’s meaningless, because, with so many targets, I’m bound to hit something… Read More ›
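Smith’s point is easy to check numerically. The sketch below is purely illustrative (the barn grid, cell counts, and trial counts are made-up parameters, not from the interview): with a thousand targets painted on the wall, a blind shot hits *some* target far more often than it would hit a single one.

```python
import random

# Hypothetical simulation of the Texas Sharpshooter setup: discretize the
# barn wall into cells, paint many target cells, fire one random shot per
# trial, and count how often the shot lands on some target.
random.seed(1)

BARN_CELLS = 10_000   # cells on the barn wall (illustrative)
N_TARGETS = 1_000     # Smith's "thousand targets"
TRIALS = 10_000

targets = set(random.sample(range(BARN_CELLS), N_TARGETS))
hits = sum(random.randrange(BARN_CELLS) in targets for _ in range(TRIALS))

# With 1,000 targets on 10,000 cells the hit rate is about 0.10;
# a single target would be hit only 1 time in 10,000.
print(f"hit rate, {N_TARGETS} targets: {hits / TRIALS:.3f}")
print(f"hit rate, 1 target:    {1 / BARN_CELLS:.4f}")
```

Erasing the 999 missed targets afterward is exactly what turns a near-certain coincidence into a seemingly impressive result.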

Is science “broken” or has it just accumulated a lot of baggage?

The National Academies of Sciences is wading into the longstanding mess over the validity of research findings. It doesn’t, of course, agree that there is a “crisis.” That said, the report also notes that the American public’s confidence in science hasn’t wavered at all in recent years, despite major news articles discussing the “crisis” in psychology and elsewhere. And it found that even scientists who have criticized the current state of things aren’t completely on board with calling science broken. “How extensive is the lack of reproducibility in research results in science and engineering in general? The easy answer is that we don’t know,” Brian Nosek, co-founder and director of the Center for Open Science, told the report committee during a Read More ›

Doubt cast on new “exomoon”: Rob Sheldon explains

Sheldon: There are red flags all over this data, but the investigators are standing by their measurement. This is what irreproducible papers look like in physics, and why the same crisis that afflicts other disciplines also afflicts physics. Read More ›

At Nature: Surviving the “reproducibility apocalypse”

Researchers, says an experimental psychologist, generally know what they should do: Yet many researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in. – Dorothy Bishop, “Rein in the four horsemen of irreproducibility” at Nature. That’s interesting, considering how often we were ordered to see science as the relentless pursuit of truth. If we start with something as basic as giving up gimmicks, maybe we’ll get further. She offers some thoughts on suggested reforms. Follow Read More ›
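The arithmetic behind P-value hacking, one of Bishop’s four horsemen, fits in a few lines. This sketch (not from her article; alpha and the test counts are illustrative assumptions) shows why running many untracked comparisons on null data almost guarantees a “significant” result somewhere.

```python
# If each of k independent tests of a true null hypothesis has a 5% chance
# of a false positive, the chance that at least one comes back "significant"
# is 1 - (1 - alpha)^k, which climbs quickly with k.
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> P(at least one p < {alpha}) = {p_any:.2f}")
# 1 test:  0.05;  20 tests: ~0.64;  100 tests: ~0.99
```

A researcher who tries twenty analyses and reports only the one that “worked” has, in effect, repainted the targets after the shot.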

Another look at the call to abandon statistical significance

In an era where even medical journals are urged to get woke, abandoning statistical significance could mean abandoning a refuge against Correct nonsense. As Brookshire writes, “Unfortunately, there is no single alternative that everyone agrees would be better for all experiments.” But that might just be what some factions want and need. Read More ›

Does “liberal bias” deepen replication crisis in psychology?

Aw come on, it’s actually not all that complicated when you see it in action. One way you can know that liberal bias deepens the replication crisis is this: Consider the sheer number of ridiculous Sokal hoaxes that have been played on psychology journals. Read More ›

Report: That so many studies cannot be reproduced is a “crisis” in science

Afterword: Many scientists think of themselves as philosopher kings, far superior to those in the “basket of deplorables.” The deplorables have a hard time understanding why scientists are so special, and why they should vote as instructed by them. Read More ›