I just read an Insights article in the July 2008 edition of Scientific American. In a nutshell, biochemist Jeremy Nicholson, billed as one of the world’s foremost experts on the metabolome (the collection of chemicals in the body that are byproducts of metabolic processes), is screening thousands of individuals to establish baseline amounts of different metabolites and then comparing differences between individuals, looking for consistent correlations between those differences and various kinds of diseases.
Seems like a good research plan to me. The noteworthy part that prompted the subject line of my article here is in the last paragraph of the first page of the SciAm article:
It is kind of like doing science backward: instead of making hypotheses and then devising experiments to test them, he performs experiments first and tries to decipher his results later. He must sift through the range of chemicals produced by the genes people have, the food they eat, the drugs they take, the diseases they suffer from and the intestinal bacteria they harbor.
It struck me that this is in fact the same methodology that ID researchers use: look at the raw data and try to find patterns in it. Raw data, especially in fields like comparative genomics, is being amassed at an incredible rate, and little of it has been explained at this point.
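The quoted description boils down to a correlation screen over a big table of measurements. As a rough illustration only (the metabolite names, the cohort, and the effect size below are all invented, and Nicholson’s actual statistical machinery is certainly far more sophisticated), here is the general shape of such a data-first screen in Python: measure everything across many individuals, rank the metabolites by how strongly their levels track disease status, and only afterwards ask why.

# Minimal sketch of a data-first correlation screen. Everything here is
# fabricated for illustration; it is not Nicholson's pipeline.
import random
from statistics import mean, pstdev

random.seed(42)

METABOLITES = ["creatinine", "hippurate", "citrate", "lactate"]  # hypothetical panel

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

# Fabricated cohort: each individual gets a disease flag (0/1) and metabolite levels.
# "hippurate" is rigged to track disease status so the screen has something to find.
cohort = []
for _ in range(500):
    disease = random.random() < 0.3
    levels = {m: random.gauss(1.0, 0.2) for m in METABOLITES}
    if disease:
        levels["hippurate"] += random.gauss(0.5, 0.1)
    cohort.append((int(disease), levels))

labels = [d for d, _ in cohort]

# Data first, hypothesis second: screen every metabolite and report the
# strongest correlations as candidates worth explaining.
ranked = sorted(
    ((m, pearson([lv[m] for _, lv in cohort], labels)) for m in METABOLITES),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, r in ranked:
    print(f"{name:12s} r = {r:+.2f}")

The point of the sketch is the order of operations: the correlations fall out of the data before anyone has proposed a mechanism to explain them, and the hypothesis-forming comes afterward.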
While some may call this “doing science backward,” I call it straightforward “reverse engineering”: you have a black box (you don’t know what’s inside it or how it works), and you begin by amassing all the external information you can about it, then form hypotheses that explain the data. Comparative genomics at this point is an exercise in reverse engineering. It is not doing science in the dogmatic short form of hypothesis first and evidence second. It’s evidence first and hypothesis second.