The Decline Effect & The Scientific Method

Given the recent posts about peer review on UD, I thought this article at The New Yorker would be of interest. An excerpt:

The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”

HT to Mike Gene for noting this one.
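Palmer's funnel-graph argument is easy to reproduce in a toy simulation. The sketch below (plain Python with NumPy/SciPy; the sample-size range, effect size, and "publish small studies only if significant and positive" rule are my own illustrative assumptions, not Palmer's data or anything from the article) runs many studies of a non-existent effect, filters the small ones the way selective reporting would, and shows how the surviving small-sample results skew positive.

```python
# Toy simulation of selective reporting (illustrative assumptions only:
# true effect = 0, two-group studies, and small studies are "published"
# mainly when they reach a significant positive result).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(n):
    """One two-group study with no real underlying effect; returns (effect, p-value)."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    t_stat, p_value = stats.ttest_ind(a, b)
    return a.mean() - b.mean(), p_value

published = []
for _ in range(2000):
    n = int(rng.integers(10, 200))             # sample size varies across studies
    effect, p = run_study(n)
    if n >= 100 or (p < 0.05 and effect > 0):  # small studies survive only if "positive"
        published.append((n, effect))

small = [e for n, e in published if n < 100]
large = [e for n, e in published if n >= 100]
print(f"mean published effect, small studies (n < 100):  {np.mean(small):+.3f}")
print(f"mean published effect, large studies (n >= 100): {np.mean(large):+.3f}")
# The small-study mean sits well above zero even though the true effect is zero:
# exactly the asymmetry a funnel plot makes visible.
```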

Comments
Two quotes: According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. I can remember a heated discussion that took place here a couple of years back. It had to do with sickle-cell anemia. The scientist I was arguing with was just in shock that I would question the results of a statistical study. But it was quite clear that the scatter of points made the conclusion untenable. I'm sure if you did some kind of regression analysis that a result would come out 'statistically significant'; but looking at the data, it was like doing a regression analysis to a number of points generated by throwing a dart at a dart board---that is, you end up measuring nothing but scatter. Worse yet, there were 'individual' collections of data that were in clear violation of the result using 'all' of the data. There was another discussion about the size of beak finches on the Galapagos. Statistically, there was a difference. But to the naked eye, you wouldn't of been able to detect any difference at all. They ended up with statistics "proving" what they set out to prove. I think the use of statistics is way overused and needs to be looked at with great scrutiny.PaV
January 12, 2011 at 07:33 PM PDT
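PaV's dartboard point can be made concrete with a small sketch (again Python; the sample sizes, trial counts, and subgroup scheme are my own illustrative assumptions, not the sickle-cell or finch data he refers to): fit regression lines to pure noise and "significant" slopes still appear at roughly the advertised five-per-cent rate, and once you are free to test several arbitrary subgroups and keep any hit, the odds of finding something "significant" in noise climb quickly.

```python
# "Significance chasing" on pure noise: every dataset below is random scatter,
# with no real relationship between x and y. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_points, n_trials, n_subgroups = 30, 5000, 10

single_hits = 0   # a lone test comes out "significant"
chased_hits = 0   # at least one of several subgroup tests comes out "significant"
for _ in range(n_trials):
    x = rng.normal(size=n_points)
    y = rng.normal(size=n_points)            # unrelated to x by construction
    if stats.linregress(x, y).pvalue < 0.05:
        single_hits += 1
    if any(stats.linregress(rng.normal(size=n_points),
                            rng.normal(size=n_points)).pvalue < 0.05
           for _ in range(n_subgroups)):
        chased_hits += 1

print(f"one test per dataset: {single_hits / n_trials:.1%} 'significant'")                 # about 5%
print(f"best of {n_subgroups} subgroup tests per dataset: {chased_hits / n_trials:.1%} 'significant'")  # roughly 40%
```

Nothing about the data changes between the two printed lines; only the willingness to keep testing does.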
gpuccio: I have a question for you. As a doctor, have you heard the term evolutionary medicine? I took an anatomy and physiology class some years ago and was surprised to find a section in the book about human evolution which stated, among other things, that medicine could not be properly understood or practiced without understanding where humans came from (my paraphrasing here).

Barb
January 12, 2011 at 04:14 PM PDT
"As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness." A truer statement was never uttered.kornbelt888
January 12, 2011 at 08:16 AM PDT
"“But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data." -- Leigh Simmons (Coff)(coff)kornbelt888
January 12, 2011 at 08:09 AM PDT
Unfortunately, being a medical doctor, I am well aware of the widespread consequences of cognitive bias and confirmation bias in the medical literature. Sometimes they come in a truly subconscious form, but many times they are "supported" by a completely wrong methodology or by a very "partial" (to say the least) use of statistics.

I must say that, IMO, the widespread success of so-called "evidence-based medicine" has not necessarily improved the situation. While greater attention has certainly been given to methodology and procedures, I believe that the scientistic emotional attitude usually linked to that kind of approach has created an expectation of more absolute "truths" in medicine, which is at best naive.

Methodology and epistemology are always the key factors in a balanced search for scientific truth, and I believe that such difficult disciplines are often simply underemphasized, or just transformed into arrogant "certainties".

gpuccio
January 12, 2011 at 08:07 AM PDT
Here is a scientist who is not afraid to use the largest 'sample size' he can to get the most accurate results:

Michael Behe: Even More From Jerry Coyne
http://www.evolutionnews.org/2011/01/even_more_from_jerry_coyne042741.html

bornagain77
January 12, 2011 at 02:56 AM PDT
Jonah Lehrer also has a follow-up to his first post: "More Thoughts on the Decline Effect"
http://www.newyorker.com/online/blogs/newsdesk/2011/01/jonah-lehrer-more-thoughts-on-the-decline-effect.html

Dala
January 12, 2011 at 01:12 AM PDT
