Review Of The Eighth Chapter Of Signature In The Cell by Stephen Meyer
ISBN: 9780061894206; Imprint: Harper One
In the Middle Ages, Moses Maimonides debated heavily with Islamic philosophers over the Aristotelian interpretation of the universe. By looking at the stars and seeing their irregular pattern in the heavens, he concluded that only design could have generated the star arrangements he observed (1). In the process he ruled out necessity and the Epicurean ideology of chance. Centuries later, Isaac Newton similarly opted for design as the best explanation for the origin of our solar system. Writing in his General Scholium, for example, Newton left us in no doubt over where his focus lay:
“This most beautiful system of sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being” (2).
Still, with the revolutions in thought brought forth by the likes of Pierre-Simon Laplace and, of course, later Charles Darwin, the stage was set for chance and necessity to become the only players permissible in scientific discourse (1). Today science operates under the conviction that the material world “is all there is, and that chance and impersonal natural law alone explain, indeed must explain, its existence” (3).
So, what of chance? When statisticians refer to chance events, what they really mean is that the exact combination of physical factors causing these events is so complex that their occurrence cannot be reasonably predicted. Implicit in an appeal to chance is the negation of any sort of law-like necessity or Maimonidean-style recourse to design. On the flip side, Stephen Meyer reminds us in Signature In The Cell that chance hypotheses can be eliminated when “a series of events occurs that deviates too greatly from an expected statistical distribution” (p.180).
A casino player winning 100 bets consecutively while spinning a roulette wheel is an obvious example of such a deviation. But low probability in itself is not enough for detecting design. Indeed, fundamental to this particular non-chance alternative is the recognition of some sort of discernible pattern (100 wins on a roulette wheel, for example) that compels us to suspect that an intelligence somewhere is directing the outcome.
For Meyer such insights were seeded through conversations he held with philosopher William Dembski in the hallways of academia as he grappled with questions relating to life’s origins. Much to the chagrin of the Darwin-faithful, today Dembski not only contends that design, “is a legitimate and fundamental mode of scientific explanation on a par with chance and necessity” but also argues that there exists a set of criteria for reliably detecting design in biology (1).
Pattern discernment, Dembski asseverates, can be applied retrospectively; that is, to events that have already occurred. Indeed, as any spy buff will attest, cryptanalysts routinely decode signals only after those signals have been generated and transmitted. Intelligent involvement in such cases can be ruled in or out through a thorough examination of the available probabilistic resources (4).
In Signature In The Cell Meyer builds on Dembski’s cornerstone case and uses a seemingly endless supply of illustrations to firm up his own supporting arguments. But the reader is nevertheless left pondering what relevance such illustrations have to the matter at hand, namely demonstrating that the origin of life requires more than just chance. Meyer meticulously allays such concerns with a component-by-component breakdown of the probabilistic resources of our cosmic landscape. He writes:
“There are a limited number of opportunities for any given event to occur in the entire history of the universe. Dembski was able to calculate this number by simply multiplying the three relevant factors together: the number of elementary particles (10^80) times the number of seconds since the big bang (10^16) times the number of possible interactions per second (10^43). His calculation fixed the total number of events that could have taken place in the observable universe since the origin of the universe at 10^139” (pp.216-217).
Applying these limits to biology, Meyer notes:
“the probability of producing a single 150 amino acid protein by chance stands at about 1 in 10^164. Thus for each functional sequence of 150 amino acids there are at least 10^164 other possible non-functional sequences of the same length…Unfortunately that number vastly exceeds the most optimistic estimate of the probabilistic resources of the entire universe, that is, the number of events that have occurred since the beginning of its existence” (p.217).
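The exponent bookkeeping behind these two figures can be checked in a few lines. This is a sketch in Python; the exponent values are taken from the quoted passages themselves, not independently verified:

```python
# Exponents of the three factors Meyer attributes to Dembski (as quoted).
particles = 80      # ~10^80 elementary particles in the observable universe
seconds = 16        # ~10^16 seconds since the big bang
interactions = 43   # ~10^43 possible interactions per second

# Total probabilistic resources: multiplying powers of ten adds exponents.
resources = particles + seconds + interactions

protein = 164       # 1 in 10^164 for a functional 150-amino-acid protein (as quoted)

print(resources)            # exponent of the total number of events
print(protein - resources)  # orders of magnitude by which the odds exceed the resources
```

Multiplying the three factors gives 10^139 possible events, which the quoted 1-in-10^164 odds exceed by 25 orders of magnitude.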
While such a rationale has already been advanced in the peer-reviewed literature (5), it is as profoundly relevant today as it was in its original context. Those design heisters who acrimoniously steal intelligent design away from the realm of biology do so at a tremendous cost to us all. Intelligent design is, after all, not ‘pie in the sky’ storytelling. It is rigorous science.
1.William Dembski (2002), No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, Rowman & Littlefield Publishers, Inc, Lanham, Maryland, pp.1-3
2. Nancy R. Pearcey and Charles B. Thaxton (1994), The Soul of Science: Christian Faith and Natural Philosophy; Crossway Books; Wheaton, Illinois, p.91
3. Guillermo Gonzalez and Jay Richards (2004), The Privileged Planet: How Our Place In The Cosmos Is Designed For Discovery, Regnery Publishing Inc, Washington, D.C., p.224
4. For a review of probability as relates to the biological context see Robert Deyes and John Calvert (2009), We Have No Excuse: A Scientific Case for Relating Life to Mind, Intelligent Design Network, See http://www.intelligentdesignnetwork.org/We_have_no_excuse.pdf
5. Stephen C. Meyer (2004), The Origin Of Biological Information And The Higher Taxonomic Categories, Proceedings of the Biological Society of Washington, Volume 117, pp. 213-239
38 Replies to “Reclaiming Biology From The Design Heisters”
“A casino player winning 100 bets consecutively while spinning a roulette wheel is an obvious example of such a deviation. But low probability in itself is not enough for detecting design. Indeed, fundamental to this particular non-chance alternative is the recognition of some sort of discernible pattern (100 wins on a roulette wheel, for example) that compels us to suspect that an intelligence somewhere is directing the outcome.”
Yeah, tell me about it. I live in a province where the premier just fired the chair and the whole board of the lottery commission over issues like this.
It wasn’t that the pattern was unpredictable. That’s the idea behind a lottery; you pays yer money and you takes yer chance.
The problem was the opposite: The pattern WAS predictable. Far too many people who were selling the tickets were winning.
That pattern demonstrated design, because only design could have interfered with the lottery, as it is run.
Whatever else this situation demos (like buying lottery tickets is a waste of money and an inducement to fraud, theft, and social irresponsibility*) it illustrates a design inference.
*I particularly hate it when government proclaims that the lottery funds hospitals and sports activities. Where these are necessary expenses, why not fund them in the normal way through tax collection, where they can be debated in the legislature?
We can tell when humans rig the lottery, ergo we can tell that the universe was designed. Works for me.
the probability of producing a single 150 amino acid protein by chance stands at about 1 in 10^164. Thus for each functional sequence of 150 amino acids there are at least 10^164 other possible non-functional sequences of the same length…
Does Dr Meyer really just slip the word ‘functional’ into that second sentence? I have a PC that functions as a space heater. Its particular arrangement of atoms has to be at the same or greater level of improbability as a 150 AA protein, but it is not the only thing in the universe that functions as a space heater.
Your reference 5 is missing the word “withdrawn”.
Hey everybody, long-time visitor, first-time poster.
Just wanted to point out a typo (I hope it’s a typo).
“Dembski was able to calculate this number … the number of seconds since the big bang (10^16) … (pp.216-217).”
Notice the part about seconds since the Big Bang. 10^16.
10^16 seconds / 60 = 1.66666667 × 10^14 minutes.
1.66666667 × 10^14 /60 = 2.77777778 × 10^12 hours.
2.77777778 × 10^12 /24 = 1.15740741 × 10^11 days.
1.15740741 × 10^11 /365 = 317 097 921 years.
Just wanted to show my work, but what I got was 317 097 921 years, which is quite a bit less than the billions of years scientists believe the universe to have existed. (I’ve heard ~13.7 billion.)
Please update your article to reflect Dr. Dembski’s original work.
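The commenter’s arithmetic is easy to reproduce. A minimal sketch, assuming a 365-day year as the comment does:

```python
SECONDS = 10 ** 16                       # the figure quoted from the book
SECONDS_PER_YEAR = 60 * 60 * 24 * 365    # 365-day year

years = SECONDS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")             # roughly 317 million years

# For comparison, ~13.7 billion years expressed in seconds:
age_in_seconds = 13.7e9 * SECONDS_PER_YEAR
print(f"{age_in_seconds:.2e} seconds")   # roughly 4.3e17 seconds
```

10^16 seconds is indeed only about 3 x 10^8 years; the conventional ~13.7-billion-year age corresponds to roughly 4 x 10^17 seconds.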
Nakashima: Your PC must heat a lot! And we all know it wasn’t built or intended to be used as a space heater.
Yet, even if we had never known what a PC was we would still be able to make a valid design inference for its origin simply by examining its structure and components.
A fire started by lightning also heats space. We know that the probability for such is high and that no intelligent origin need be posited. The laws of physics, etc., suffice.
Not so with “functional” proteins. They do work. They are not inert. They contain “useful” information. And that information is encoded.
We also know that no coded information can exist without intelligence. The only encoded information systems we know of are all designed by intelligences. There is no such thing as a code system without intelligent origin. The very concept of code implies intelligence. Code is symbolic convention. No such thing can exist without intelligent origin.
The information encoded in DNA is both descriptive and prescriptive in nature. It is instructions for building living things. It is meaningful or semantic. Prescriptive information is always formal. Never random.
DNA also contains meta information. Meta information is information on information. This too requires an intelligence. Meta information is impossible without intelligence – yet the cell contains meta information.
It is extremely rash to accuse someone like Stephen Meyer, who has been actively involved in origin-of-life research for more than 20 years, of making an elementary blunder in probability theory. And insinuating that he is deceiving his audience (“Does Dr Meyer really just slip the word ‘functional’ into that second sentence?”) is borderline slanderous. But that is what you have done.
As it happens, there are plenty of articles online which refute your claim that Dr. Meyer has confused the astronomical improbability of a particular sequence of amino acids with the (far greater) probability of a sequence of amino acids that can perform the same function as that particular sequence.
Here, for example, is what Drs. William Dembski and Robert Marks wrote on the subject in their 2009 paper, Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information:
From a somewhat different perspective, K. D. Kalinsky addresses the same issue in his article, Intelligent Design: Required by Biological Life? (2008). Here is what he says about protein folding:
And here is what Dr. Stephen Meyer himself wrote in “The Origin of Biological Information and the Higher Taxonomic Categories” (2004) at http://www.discovery.org/a/2177 , which is the very paper you panned as “withdrawn” (did you read it?):
To cut a long story short: a functional sequence of amino acids may be vastly more probable than any of its particular instantiations. But it is still extremely improbable – simply because the vast majority of possible amino acid sequences are completely non-functional. The needle-in-a-haystack metaphor is entirely apt.
Mr. Nakashima, I think a retraction is called for. ID theorists are not ignorant. They have thought long and hard about the probabilities of the events they describe, and they have concluded that undirected mechanisms are an extremely implausible explanation even for the origin of proteins, let alone life.
Thank you for your response. I admit I am surprised by both its length and its vigor.
My message to which you are responding was itself a response to Mr Deyes’ review. As such, I am questioning several aspects of the review, not Dr Meyer’s original work.
To deal with a shorter matter first, yes, I have read Dr Meyer’s 2004 article. To the extent that Mr Deyes’ OP would like to use the words ‘peer review’ he should be consistent. If the peer review system is important and badges its products with additional honor, you have to take the whole process. If instead the ideas expressed in Meyer 2004 are the important thing, reference it from the discovery.org site. I didn’t pan the paper, I found Mr Deyes’ citation disingenuous.
In the case of the larger question, my query as to whether Mr Deyes was making a direct quote was based on exactly the respect for Dr Meyer’s learning that you think I am questioning. What Kalinsky, Dembski, and Marks have written is neither here nor there, but your quote of Dr Meyer himself shows that he is aware that looking for a specific functional protein has (according to the most pessimistic estimate of Axe 2004) a probability of 10^-77 (Axe 2004 has an optimistic estimate of 10^-53, a pretty big error bar!). 10^-77 is a far cry from 10^-164.
I do find it quite surprising that Dr Meyer would slip almost 90 orders of magnitude and not notice it. Hence my question to Mr Deyes. I make no remark against Dr Meyer; I haven’t read his book yet. (I’m hoping to win it in contest #11!) I hope that clarifies my position.
What I find odd is that anti-IDists challenge the math of IDists, yet expect everyone to accept their assertion that chance is sufficient explanation without any mathematical model whatsoever.
It seems to me that questioning IDers on the accuracy of their probability analysis is rather hypocritical when one offers no probability analysis whatsoever to support one’s original contention that chance is a sufficient explanation.
Mote, eye, beam.
I am wholehearted in my agreement with you that my PC was designed, even if it wasn’t designed to be a space heater. It also wasn’t designed to be a door stop or a boat anchor, but can be ‘exapted’ into those roles.
In the same sense, we’ve seen the DNA in the nucleus exapted to function as a lens in retinal cells, a function that does not depend on its information content, merely its bulk properties.
Function is as function does. Wasn’t there a book a few years ago, 100 Uses for a Dead Cat? And there are always brain teasers like how many ways can you measure the height of a flagpole with a brick.
You may find the field of population genetics to be a revelation.
just piling on . . .
WJM @6 I couldn’t agree more. I remember a few years ago reading some tripe attacking Dr. Dembski for, of all things, using too much math!
. . . and a digression . . .
Nakashima @7, your comment,
“It also wasn’t designed to be a door stop or a boat anchor, but can be ‘exapted’ into those roles” is interesting to me because, besides the fact that I don’t know exactly what you mean by “exapted”, I don’t believe you.
What I mean is this: IF exaptation means to be appropriated for use out of some other use under the purview of some type of selective advantage, THEN neither the door nor the boat nor even the computer can be about the business of exaptation without an intelligent agent.
What is the purpose of a doorstop or an anchor? To stop a door or anchor a boat when the door or boat need to be immobilized!
Help me understand how your computer, door or boat could accomplish this exaptation without the involvement of an intelligent agent.
I’d suggest that the reason that practically anything (like this batch of ungraded papers in front of me) can be a useful doorstop is because of the active involvement of an intelligent (albeit procrastinating) agent.
It might be a revelation to you that population genetics doesn’t address origin of life scenarios, which is what is under consideration here.
Dr Meyer has situated much of this in terms of the Cambrian Explosion. But I support your call for better modeling all around. And experimentation. I’ve heard Miller-Urey type setups are relatively cheap, as wet biology experiments go.
Signature in the Cell is an argument primarily about the origin of biological information, not about its later development under the influence of natural selection.
In the above article, what Meyer is arguing about is clearly marked:
“But the reader is nevertheless left pondering over what relevance such illustrations have to the matter at hand, namely demonstrating that the origin of life requires more than just chance. Meyer meticulously alleviates such concerns with a component-by-component breakdown of the probabilistic resources of our cosmic landscape. He writes:”
The Cambrian Explosion and population genetics have nothing whatsoever to do with the argument Meyer is making above, and criticisms of his math are made in the shocking absence of any math whatsoever supporting assertions that chance and lawful physio-dynamic forces are remotely up to the task.
Can you provide evidence for this? I see a statement from the publisher saying “the Council…would have deemed the paper inappropriate” because of its subject matter; and various rumors of its withdrawal, but no formal statement of withdrawal.
Since the statement you are quoting also says
Accordingly, the Meyer paper does not meet the scientific standards of the Proceedings.
I think withdrawn is the proper term. Would you suggest ‘repudiated’ instead?
Please remember that I haven’t criticized Dr Meyer’s math. I’m asking how to reconcile 10^-164 with 10^-77. 10^-77 comes from a direct quote of Dr Meyer. 10^-164 comes from a book review quotation, which is not quite the same thing. I’m sure there is a way to reconcile the two, I’m just asking what that is.
Um, direct experimental measurement shows that this assertion of Meyer’s is wrong.
If Meyer was correct, then this, this, and this could not possibly be.
I have explained the errors in this regard (including the inappropriate use of Axe’s work) here and here.
As always, enjoy.
‘Repudiated’ would be more accurate, and appropriately more vague. ‘Tried to distance itself from but couldn’t come up with a good reason’ would be clearer.
Your earlier comment said you found the plain citation without ‘withdrawn’ disingenuous, but I fail to see that adding ‘withdrawn’ would increase accuracy or honesty.
You imply that this statement from the Council was part of the peer review process, and so definitively so that omitting reference to it is disingenuous. Can you give support for that? Are there prior guidelines saying that political statements of repudiation by the publisher affect the peer-reviewed status of a paper?
The statement was strange in that it not only did not explicitly withdraw the paper, but also did not cite any flaws in the paper’s statements of fact or arguments, other than straying from the journal’s traditional subject matter; nor did it allege any violation of peer review process guidelines.
The antecedent to ‘Accordingly’, was
The AAAS resolution was issued years before the paper was published. So in effect the Council’s statement interprets AAAS’s “observation” of lack of credible scientific evidence etc. etc. as applying to any future evidence that AAAS has never seen.
That’s about like saying that a paper examined evidence from telescope data and discovered evidence suggesting that the orbits of celestial bodies are not perfect circles… but the AAAS has observed that there is no credible scientific evidence for such imperfection and there never can be. Either the evidence is misreported or not evaluated scientifically in the paper, or the AAAS has not considered that evidence, or the AAAS is wrong. If the paper misreports evidence or evaluates it unscientifically, the peer review process allows for rebuttals. But such a mistake was never alleged… only the conclusions were inadmissible.
To AAAS’s credit, its actual resolution does not claim to observe that “there is no” credible scientific evidence [for ID], nor that such evidence can never arise; merely that “the ID movement has failed [note past tense] to offer” said evidence. The resolution also does not advocate rejecting any papers that lead to ID-friendly conclusions; rather it opposes teaching ID in schools (Dr Meyer and the DI also oppose teaching ID in schools).
Thus the Council’s statement tells us that there is incompatibility between the Council’s overinterpretation of the AAAS’s resolution and the paper’s conclusions, but does not give us any basis for a scientific problem with the paper.
Surely this does not meet reasonable criteria for a credible peer review process. Call it politics and don’t soil peer review with it any further.
That we do not know. If you mean its function as a lens does not directly require a certain sequence of bases, that may be true, but nevertheless its arrangement (uncoiled DNA only on the outside, or conversely only on the inside) does encode information, because it is different from the uncoiled DNA being evenly intermixed with the coiled DNA. Beyond that, there must be processes that arrange the DNA in this pattern and keep it that way; these processes require biological information in order to perform the correct task. You may argue that this information arose by chance, but the information certainly is required.
You may indeed see your PC exapted as a heater or doorstop, but you will never see your doorstop exapted as a PC without tons of intelligent input.
Good to hear from you and I am very sorry to join in the discussion so late in the game. Just a few pages before the quote I gave above, is a passage that I think ties everything together. I am going to provide it here verbatim (any spelling errors are mine):
“Axel performed a mutagenesis experiment using his refined method on a functionally significant 150 amino acid section of a protein called beta-lactamase, an enzyme that confers antibiotic resistance upon bacteria. On the basis of his experiments, Axel was able to make a careful estimate of the ratio of (a) the number of 150 amino acid sequences that can perform that particular function to (b) the whole set of possible amino acid sequences of this length. Axe estimated this ratio to be 1 to 10^77.
This was a staggering number, and it suggested that a random process would have great difficulty generating a protein with that particular function by chance. But I didn’t want to know the likelihood of finding a protein with a particular function within a space of combinatorial possibilities. I wanted to know the odds of finding any functional protein whatsoever within such a space. That number would make it possible to evaluate chance-based origin-of-life scenarios, to assess the probability that a single protein- any working protein-would have arisen by chance on the early earth.
Fortunately Axe’s work provided this number as well. Axe knew that in nature proteins perform many specific functions. He also knew that in order to perform these functions, their amino acid chains must first fold into three dimensional structures. Thus before he estimated the frequency of sequences performing a specific function, he first performed experiments that enabled him to estimate the frequency of sequences that will produce stable folds. On the basis of his experimental results, he calculated the ratio of (a) the number of 150-amino acid sequences capable of folding into stable “function-ready” structures to (b) the whole set of possible amino acid sequences of that length. He determined that ratio to be 1 in 10^74…
Axe’s improved estimate of how rare functional proteins are within “sequence space” has now made it possible to calculate the probability that a 150 amino acid compound assembled by random interactions in a prebiotic soup would be a functional protein. This calculation can be made by multiplying the three independent probabilities by one another: the probability of incorporating only peptide bonds (1 in 10^45), the probability of incorporating only left-handed amino acids (1 in 10^45), and the probability of achieving correct amino acid sequencing (using Axe’s 1 in 10^74 estimate). Making that calculation (multiplying the separate probabilities by adding their exponents: 10^(45+45+74)) gives a dramatic answer. The odds of getting even one functional protein of modest length (150 amino acids) by chance from a prebiotic soup is no better than 1 chance in 10^164.”
(Stephen Meyer, Signature In The Cell, pp.210-212).
Nakashima, I really recommend you purchase the book- it will provide an even clearer picture than the short excerpt I have transcribed above.
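The exponent arithmetic in the transcribed passage can be verified directly. A sketch using only the figures as quoted:

```python
# The three independent probabilities quoted from the book, as exponents.
peptide_bonds = 45   # 1 in 10^45: peptide bonds only
left_handed = 45     # 1 in 10^45: left-handed amino acids only
sequencing = 74      # 1 in 10^74: Axe's stable-fold estimate

# Multiplying independent probabilities of the form 10^-n adds the exponents.
total = peptide_bonds + left_handed + sequencing
print(total)  # 164, i.e. 1 chance in 10^164
```

So the 1-in-10^164 figure is just the product of the three quoted probabilities; the open question in the thread is how the 10^-74/10^-77 inputs were obtained, not the multiplication itself.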
Correction in the above: Axe not Axel!
Some quotes from this article:
“There is, however, a fly in the ointment. (Actually, there are many.) Recall that Axe did not work with the native TEM-1 penicillinase, but rather with a variant that had a lower activity. The assay system made this necessary. (Scoring bacteria on antibiotic-containing media isn’t particularly discriminating, and it’s hard to tell, say, if a wild-type detoxifying enzyme has lost 90% of its activity.)”
“In addition, Axe deliberately identified and chose for study a temperature sensitive variant. In altering the enzyme in this way, he molded a variant that would be exquisitely sensitive to mutation.”
“On this basis alone, we may conclude that the claims of ID proponents vis-à-vis Axe 2004 are exaggerated and wrong. Axe’s numbers tell us about the apparent isolation of the low-activity variant, but reveal little (nor can it be expected to) about the “isolation” or evolution of TEM-1 penicillinase. (Or any other enzyme, for that matter.)”
“That is the real question that Axe, ID proponents, and other who follow this sort of discussion would ask. To get some idea, we can turn to Axe’s paper. Axe mentions two other studies — one deals with experiments done with the lambda repressor, and the other with chorismate mutase. Work with the lambda repressor (Reidhaar-Olson and Sauer, 1990) yielded a “value” for the frequency of functional variants of 1 in 10^63 (roughly) for the 92-mer. Work with chorismate mutase (Taylor et al., PNAS 98, 10596-10601, 2001) gave a value of 1 in 10^24 for the 93 amino acid enzyme. Scaled for a similar size protein, Axe’s work gives a value of 1 in 10^59, which falls within the range established by previous work. (The literature in this area is rather large, far beyond the scope of this article to review. Suffice to say that the range of “probability” stated here is representative of the numerous studies in this area.)
Studies such as these involve what Axe calls a “reverse” approach — one starts with known, functional sequences, introduces semi-random mutants, and estimates the size of the functional sequence space from the numbers of “surviving” mutants. Studies involving the “forward” approach can and have been done as well. Briefly, this approach involves the synthesis of collections of random sequences and isolation of functional polymers (e.g., polypeptides or RNAs) from these collections. Historically, these studies have involved rather small oligomers (7-12 or so), owing to technical reasons (this is the size range that can be safely accommodated by the “tools” used). However, a relatively recent development, the so-called “mRNA display” technique, allows one to screen random sequences that are much larger (approaching 100 amino acids in length). What is interesting is that the forward approach typically yields a “success rate” in the 10^-10 to 10^-15 range — one usually need screen between 10^10 -> 10^15 random sequences to identify a functional polymer. This is true even for mRNA display. These numbers are a direct measurement of the proportion of functional sequences in a population of random polymers, and are estimates of the same parameter — density of sequences of minimal function in sequence space — that Axe is after.
10^-10 -> 10^-63 (or thereabout): this is the range of estimates of the density of functional sequences in sequence space that can be found in the scientific literature. The caveats given in Section 2 notwithstanding, Axe’s work does not extend or narrow the range. To give the reader a sense of the higher end (10^-10) of this range, it helps to keep in mind that 1000 liters of a typical pond will likely contain some 10^12 bacterial cells of various sorts. If each cell gives rise to just one new protein-coding region or variant (by any of a number of processes) in the course of several thousands of generations, then the probability of occurrence of a function that occurs once in every 10^10 random sequences is going to be pretty nearly 1. In other words, 1 in 10^-10 is a pretty large number when it comes to “probabilities” in the biosphere.”
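The pond illustration at the end of the quote can be made concrete with a small expected-value calculation. This is a sketch only; the cell count and the 1-in-10^10 functional density are the illustrative figures from the quote, not measured values:

```python
trials = 10 ** 12          # ~10^12 bacterial cells, one new variant each
p_functional = 10 ** -10   # assumed density of functional sequences

# Expected number of functional variants across all trials.
expected_hits = trials * p_functional

# Probability of at least one success in 'trials' independent attempts.
p_at_least_one = 1 - (1 - p_functional) ** trials

print(expected_hits)
print(p_at_least_one)
```

With these figures the expected number of hits is about 100, so the probability of at least one functional sequence is effectively 1, which is the quoted article’s point about the higher end of the range.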
Your last quoted number has an extra minus sign; it should be “1 in 10^10”, but otherwise a very good article. Has subsequent work, especially with a forward approach, changed any of the ranges significantly since you wrote it?
Thanks for the help.
Not that I know of.
Thanks for the informative essay. Two follow-up questions:
1) Is there any peer-reviewed response to Axe’s 2004 article pointing out, similar to what you have, the stated gap between the conclusions supported by his research and the conclusions drawn by his article or by ID proponents like Meyer from his research? A PubMed search yields the Dryden et al. paper cited below as a plausible candidate, but I haven’t found the whole text yet.
2) Given that Axe helped graciously with early drafts of your essay, what has his response been to its criticism of his 2004 article?
Dryden DT, Thomson AR, White JH (2008), How much of protein sequence space has been explored by life on Earth? J R Soc Interface 5(25):953-956. (PMID 18426772)
Why, just because you say so?
You haven’t even demonstrated that you understand the arguments of ID.
The proof of that is in your response in which you include T-URF 13.
Also perhaps you should bring up your arguments to the people you level them against.
For example send an email to Doug Axe with your alleged “refutation” of his work.
Then send Dr Behe your essay about T-URF 13 and Dr Meyer your essays about novel proteins.
Axe isn’t responsible for his misconstrual by others. Even if he was, the peer-reviewed literature isn’t the place to point that out.
My take on Axe is based on his published work, nothing else. In Axe 1996, he is very clear that he went looking for a problem in the development of novel proteins and didn’t find it. The paper is a straightforward reporting of a negative result.
Axe didn’t bury the result, he published it. That to me is the mark of a commitment to science as “follow the evidence where it goes” and I respect that attitude tremendously.
Dr. Meyer, on the other hand, is just writing a popular book. He certainly could take cognizance of criticisms such as Arthur Hunt’s essay if he wanted to present a balanced perspective on the research and the conclusions that can reasonably be drawn from it.
Thanks for the comment. As for your questions and note:
1. I am not aware of any peer-reviewed papers that address the IDists’ use of Axe, as I have done in my essay. But I do not keep up with publications in fields (more along the lines of philosophy or education than the biological sciences) in which one might find such articles.
2. Axe has never contacted me since my essay first appeared. (Neither, for Joseph’s benefit, has Behe ever contacted me or returned my queries with anything more than a curt brushing aside.)
The paper you cite would seem to support my general contentions regarding the nature of functional protein sequence space. I have read better papers in passing (woefully, I cannot provide references off the top of my head).
The author says:
This is a wonderful development. If we assume that a higher intelligence is running the universe, then science loses its predictive power and is therefore useless. Instead, science MUST assume that the universe runs on law and chance alone, and inevitably will not be able to answer some questions. (For example, the existence/non-existence of a god or gods is NOT a scientific question; science examines the natural, observable world, and deities exist outside of that.) If you’d like to suggest extra-terrestrial life as having intelligently designed life on Earth, it’s (barely) scientific and it’s hard to escape “turtles all the way down” from there.
Also, lars, you quoted:
(Emphasis mine.) Which is a gentle way of saying that science only examines the observable world.
But you interpret it as follows:
Which is inaccurate. The moment that someone argues for intelligent design as a testable hypothesis, they’ll start accepting papers.
The problem with intelligent design as a testable hypothesis is the predictions it makes. ID predicts that we will observe exactly what we see everywhere we look, because the intelligent designer was just that smart. I can tell you that the theory will be correct in all situations without running an experiment, and that’s the problem. If it is tautologically impossible to run an experiment which contradicts a hypothesis, it is not a scientific hypothesis. The chance hypothesis, by contrast, posits that the chemical environment of the early Earth must have been favorable to the formation of amino acids. This might be practically testable by digging to find out what Earth’s surface looked like around the time that we expect life to have appeared and reproducing similar conditions in the lab.
To be a scientific theory, intelligent design must answer two questions: Who is/are the designer(s)? Where can we find them or evidence of their existence?
Quite the opposite is true in fact:
“Theology says to you in effect, ‘Admit God and with Him the risk of a few miracles, and I in return will ratify your faith in uniformity as regards the overwhelming majority of events.’ The philosophy which forbids you to make uniformity absolute is also the philosophy which offers you solid grounds for believing it to be general, to be almost absolute. The Being who threatens nature’s claim to omnipotence confirms her in her lawful occasions. Give us this ha’porth of tar and we will save the ship. The alternative is really much worse. Try to make Nature absolute and you find that her uniformity is not even probable. By claiming too much, you get nothing. You get the deadlock, as in Hume. Theology offers you a working arrangement, which leaves the scientist free to continue his experiments and the Christian to continue his prayers.”
(C.S. Lewis, Miracles, ch.1)
And the same logic applies to the Intelligent Designer.
There is no reason why science must assume that premise, as Clive Hayden pointed out. To perform linear algebra, must one assume that the universe consists only of vectors and matrices? Must Benjamin Britten assume there is nothing to life but music? Of course not… in both endeavors, the practitioner acknowledges that his domain is limited, and that his methods are not applicable everywhere. But they are exquisitely productive within their proper domain.
Moreover, if science does assume that premise, and the premise turns out not to be true, THEN science is useless, because it is all based on a false premise. As we know, conclusions derived from a false premise could as easily be false as true.
The scientist who insists that the universe must all fit into his observable domain is G.K. Chesterton’s madman, who “is not the man who has lost his reason, but the man who has lost everything except his [materialistic] reason.”  Chesterton describes this man as being convinced that he is the King of England, or indeed that he is God, “but what a small world it must be! What a little heaven you must inhabit…”
Orthodoxy. Chapter II, The Maniac: http://www.gutenberg.org/files/130/130.txt
[empirical] would have been a better word than [materialistic].
I hope I’ll get a chance to ask Dr Meyer about it tomorrow morning at the Science and Faith conference. I’ve printed out Dr Hunt’s essay, and I hope I’ll have time to carefully read ch. 8 of Signature in the Cell in advance.
If you can entertain a request, I would appreciate it if any response by Meyer be relayed through this blog post. It will get very frustrating to have my responses wait for hours or days in the moderation queue here.
I was able to attend the morning sessions of the conference yesterday. They were very good.
Meyer spoke in sessions 2 and 4 of 4. After each of the first two sessions, questions were taken from open mics. I didn’t bring this question to the mic after session 2 because I knew Meyer would be speaking specifically on Signature in the Cell in session 4. However, session 4 ran late into lunch time, and there was no Q&A. Moreover, the conference organizer specifically asked us not to mob the speakers on the way to lunch. So I did not get a chance to ask Meyer directly about this issue.
However, I did talk to two people about it: Ray Bohlin, molecular and cell biologist, a DI fellow and the speaker for session 1; and another person who has been a personal friend of Meyer’s for many years. I don’t know that the latter is a public figure in the ID debate, so I won’t give his name without talking to him about it.
Both were familiar with Meyer’s use of Axe’s work, but not with Hunt’s criticism. There was not time to dig into the details of Hunt’s essay, so I didn’t get concrete answers on the points he raises. (Bohlin was using lunch time to finish up prep for his next session.)
Both told me (independently) that Meyer and Axe are in frequent contact, and that if Meyer had indeed used Axe’s evidence to make claims beyond what Axe thought were warranted, Axe would have objected, and in the 5 years since Meyer’s 2004 article, Meyer would have issued some sort of clarification or correction.
Anyway, Dr Bohlin is going to look into it, and the other person is going to run it by some other scientists, so hopefully there will be an informed response before too long.
I was glad I spent the time and money to get to the conference. This was my second time to hear Meyer speak in person, and again he was very impressive. He came across as clear, straightforward, unpretentious, and convincing.
On the down side, the conference was not aimed at a science audience, so it was not an ideal event for getting detailed questions answered. Nevertheless I heard some nitty-gritty conversations going on at breakfast and lunch.
@lars and @Clive Hayden:
Sorry, I typed out a sentence that I don’t quite believe myself, and I’ll blame myself for writing at a late hour.
You’re correct: science need not assume the existence or non-existence of intelligent intervention in the universe. But it does assume that all observed events can be explained naturally; implicitly, this is the assumption that we have never observed a miracle, only events which are difficult to explain.
Also, you state:
This is why a scientific theory must be falsifiable; science tells lots of little stories about the world, none of which are entirely true. As an example, fluid dynamics on a macroscopic scale assumes that fluids look the same at all length scales. We know this is false; fluids are made up of discrete molecules, but the theory still gives good results.
However, results do exist which cannot be explained with the assumption that fluids are continuous, but they are accounted for well under some assumptions about molecules.
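As a concrete footnote to the fluid example: engineers typically decide when the continuum assumption breaks down using the Knudsen number, the ratio of a gas’s molecular mean free path to the flow’s characteristic length scale. A minimal sketch follows; the regime thresholds used here are the commonly cited rough values, not exact boundaries, and the helper names are my own illustration rather than anything from the discussion above.

```python
# Knudsen number: Kn = mean free path / characteristic length scale.
# Small Kn -> the continuum (Navier-Stokes) picture works; large Kn ->
# the discrete, molecular nature of the fluid can no longer be ignored.

def knudsen(mean_free_path_m: float, length_scale_m: float) -> float:
    """Return the Knudsen number for a given mean free path and length scale."""
    return mean_free_path_m / length_scale_m

def regime(kn: float) -> str:
    """Classify the flow regime using commonly cited rough thresholds."""
    if kn < 0.01:
        return "continuum"       # Navier-Stokes applies
    if kn < 0.1:
        return "slip flow"       # continuum with slip boundary conditions
    if kn < 10:
        return "transitional"
    return "free molecular"

MFP_AIR_SEA_LEVEL = 68e-9  # mean free path of air at sea level, roughly 68 nm

# An aircraft wing (~1 m) is comfortably in the continuum regime.
print(regime(knudsen(MFP_AIR_SEA_LEVEL, 1.0)))   # continuum
# A 1-micron microchannel: the continuum picture starts to break down.
print(regime(knudsen(MFP_AIR_SEA_LEVEL, 1e-6)))  # slip flow
```

The point mirrors the comment above: the continuum story is knowingly false at small scales, yet it remains predictive within its regime, and we know quantitatively when to switch to a molecular description.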
If the response to the first observed weird fluid behavior had been to attribute the weirdness to “intelligent fluids,” then we’d still be trying to develop supersonic aircraft by empirical trial and error. Instead, our response was to develop scientific theories, which give us an idea of where to look next, something an empirical understanding isn’t as useful for.
The assumption that an intelligent designer meddled in our development leaves us only with the option of empirically figuring out what critters were created as they are (or were) in place, and that would be that. We’d have no idea where to look next. Instead, if we assume that all species developed out of natural selection, then we can make predictions about what we’d expect to see, when, and where.
I should add that if the intelligent design community could say “this is the designer, and his address is…,” then intelligent design could be a scientific theory, and you could make predictions about what the designer would have designed and where, check that these predictions are consistent with current observations, and test the new predictions. This is something intelligent design cannot currently do.