Uncommon Descent Serving The Intelligent Design Community

Can you trust what you read in medical journals?


Not necessarily. From GENENG News

There appear to be systemic problems with the way that observational studies are commonly conducted. Virtually all of the problems listed in the Table can plague observational studies and, of course, any one alone or a combination of them could wreck a study. In light of multiple testing and multiple modeling, a p-value …

It is popular to blame investigators for these problems, but the culpability must be shared by the managers of the scientific process: funding agencies and journal editors. At a minimum, funding agencies should require that datasets used in papers be deposited so that the normal scientific peer oversight can occur. Journal editors need to reexamine their policy of being satisfied with a p-value <0.05, unadjusted for multiple testing or multiple modeling. Editors are using “quality by inspection” (p-value <0.05) rather than the more modern “quality by design.”
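The quote's point about unadjusted p-values is easy to see with a small simulation. The sketch below (assumptions: independent tests, a true null hypothesis for every test, so each test comes up "significant" with probability alpha by chance alone) estimates how often at least one of a batch of tests crosses p &lt; 0.05 — the family-wise error rate that multiple-testing corrections such as Bonferroni are designed to control:

```python
import random

random.seed(0)

def familywise_error_rate(num_tests, alpha=0.05, trials=10_000):
    """Estimate the chance that at least one of `num_tests` independent
    tests of a true null hypothesis is falsely 'significant' at alpha."""
    hits = 0
    for _ in range(trials):
        # On pure-noise data, each test rejects with probability alpha.
        if any(random.random() < alpha for _ in range(num_tests)):
            hits += 1
    return hits / trials

# A single test behaves as advertised: ~5% false positives.
print(familywise_error_rate(1))
# Twenty unadjusted tests do not: roughly 1 - 0.95**20, i.e. ~64%
# chance of at least one spurious "finding".
print(familywise_error_rate(20))
# The Bonferroni correction tests each at alpha / num_tests instead,
# pulling the family-wise rate back down near 5%.
print(familywise_error_rate(20, alpha=0.05 / 20))
```

This is why a paper reporting one p &lt; 0.05 result after trying many outcomes and models has shown much less than its headline number suggests.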

See also: You mean, that cheeseburger WON’T be the death of us all?

Note: The reason we carry a lot of news on these topics is that, contrary to what science popularizers would have us believe, science is not a sure way to find out what’s really happening, unless it is pursued with exactly that intention.

The fact that a person popularizing Darwinism, the multiverse, or “cheeseburgers as the death of millions” has science credentials simply does not, by itself, make their claims more believable.

It only increases the chances they’ll get published somewhere.

Follow UD News at Twitter!

Oh, and just because: http://www.medscape.com/viewarticle/823141?nlid=56604_1521&src=wnl_edit_medp_wir&uac=210154BR&spon=17 ("Darwin For Doctors" article) Barb
I used to do some work in the field of evidence-based medicine. It turns out that a lot of medical practice actually has very little or no evidence in favor of it. It might make sense, but that is not quite the same thing as evidence. In fact, one of the major causes of death from heart attacks used to be the drugs given to treat them, because no one had bothered to rigorously test the drugs' effects on outcomes! The EBM (evidence-based medicine) crowd made a lot of noise, some of it worthwhile and some of it not. The traditional-medicine crowd got kind of annoyed at them and published a quite humorous paper in the BMJ as a rebuttal to the EBMers.

The other problem is that what is being tested is not necessarily the same thing as what the patient wants. I don't know if this is still the case, but some transplant studies used the percentage of patients who still had the transplant after one year as the measure of how effective their strategy was. So if you had a transplant, were sick all year in the hospital on anti-rejection medication, and after 366 days the doctor said, "screw it, we're removing the organ," you would be counted as a success.

In any case, scientists tend to hate philosophy, but science needs it, and medicine absolutely requires it. johnnyb
There are a couple of issues with observational research, as brought out by Layman and Watzlaf (2009):

1. During observational research, the researcher may influence the actions of the individuals being observed; if the researcher picks the wrong methodology (direct or indirect, nonparticipant or participant), the result may be meaningless data.

2. Interviews can be a form of observational research; however, standardized open-ended interviews use questions framed by the researcher, and this doesn't always allow the subject(s) to speak freely.

3. Ethnographic research tends not to be objective, and researchers may draw different conclusions while studying the same population. Barb

Leave a Reply