
Biology’s ‘Skeleton In The Closet’: The Broken Bones Of Origins Science


Review Of Chapter 13 Of Signature In The Cell, by Stephen Meyer, HarperOne Publishers, ISBN: 9780061894206

I never would have suspected that Dr Seuss’ literary sensation The Cat In The Hat Comes Back would be used to make a point about the devastating shortcomings of origin-of-life theories (1). But when I read one of the later chapters of Meyer’s Signature In The Cell, which in one fell swoop discredits Hermann Muller’s fortuitous origins of DNA, Henry Quastler’s DNA self-replication hypothesis, and Manfred Eigen’s ideas on hypercycles, I could not help but be fascinated by his use of this children’s classic in his exposition. In their own unique ways, each of these scientists became steadfastly convinced that he was onto something of great significance, something that would open fruitful avenues on the all-important question of how life began.

Muller drew inferences from his own work on viruses, in particular bacteriophages (‘bacteria eaters’), equating these simple organisms to “a gene that copies itself within the cell” (2). He envisioned them as somehow analogous to primitive DNA floating around in the chemical-rich soup of the early earth (2). Quastler, on the other hand, suggested that polynucleotides could act as templates for replication through complementary base pairing (3). And Eigen assumed that ‘self-reproducing molecular systems’ involving RNA molecules and basic enzymes could somehow supply an early form of transcription and translation, later forming hypercycles that would have preceded the arrival of the earliest cells (4).

So how is the Cat in the Hat relevant? Crucial aspects of the above mechanistic propositions, writes Meyer, parallel the antics of our feline friend as he unwittingly redistributes the mess he has created in the house of his none-too-happy hosts. Origin-of-life scientists have similarly been trying for decades to “clean up the problem of explaining the origin of [biological] information” only to find that they have “simply transferred the problem elsewhere, either by presupposing some other unexplained sources of information or by overlooking the indispensable role of an intelligence” (1). And their modern-day brethren, with the apparent sophistication of computer-housed evolutionary algorithms, have fared little better. Meyer’s unpacking of the reality behind Ev, for example, described by its author Thomas Schneider as “a simple computer program” that attempts to evolve the information content of DNA binding sites in a hypothetical genome, is a case in point (5). In Ev Schneider specifies the sequence of these DNA binding sites and incorporates the code for the binding site ‘recognizer’ (protein) into the genome (5). The relative penalties for mis-binding or non-binding of the recognizer to sequences are pre-set into the program (5).
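The point can be caricatured in a few lines of Python. This is only a toy sketch loosely patterned on the setup just described, not Schneider’s actual implementation: the genome layout, the fixed motif standing in for Ev’s weight-matrix recognizer, and the penalty scheme are all simplifying assumptions. What it makes visible is that the designated site positions and the selection criterion are supplied by the programmer before any “evolution” runs.

```python
import random

GENOME_LEN = 64
SITE_WIDTH = 4
# Binding-site locations fixed in advance by the "programmer",
# just as Ev pre-specifies where its binding sites belong.
SITE_POSITIONS = [5, 20, 40, 55]

def make_genome():
    # A random starting genome of nucleotides.
    return [random.choice("ACGT") for _ in range(GENOME_LEN)]

def recognizer_binds(genome, pos, motif="ACGT"):
    # Crude stand-in for Ev's weight-matrix recognizer:
    # binding means an exact match to a fixed motif.
    return "".join(genome[pos:pos + SITE_WIDTH]) == motif

def mistakes(genome):
    # Pre-set penalty scheme: failing to bind a designated site and
    # binding a spurious site both count as mistakes. This selection
    # criterion is built into the program, not discovered by it.
    errors = 0
    for pos in range(GENOME_LEN - SITE_WIDTH + 1):
        bound = recognizer_binds(genome, pos)
        should_bind = pos in SITE_POSITIONS
        if bound != should_bind:
            errors += 1
    return errors
```

A genome with no matching motifs anywhere, for instance, scores exactly four mistakes, one per pre-specified site it fails to bind.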

Ev stands guilty as charged since, as Meyer asserts, it presupposes an unexplained source of information (1). And for that matter so does the much-celebrated evolutionary algorithm Methinks It Is Like A Weasel. “[Both] succeed in generating the information they seek”, writes Meyer, “either by providing information about the desired outcome from the onset, or by adding information incrementally during the computer program’s search for the target”. The so-called ‘active information’ imparted by the programmer allowed both programs to assess the proximity of any given sequence to a pre-specified target, hardly a fair representation of the Darwinian mechanism in action.
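The WEASEL scheme is simple enough to sketch in full. The version below is a minimal reconstruction of Dawkins-style cumulative selection; the population size and mutation rate are illustrative assumptions, not values from any published source. The key line is the fitness function, which scores a candidate by its proximity to the pre-specified target phrase, which is exactly the programmer-supplied ‘active information’ at issue.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The program "knows" the answer: fitness is simply the number of
    # characters already matching the pre-specified target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character independently mutates with the given probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(seed=None):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        # Keep the parent in the pool so fitness never decreases.
        offspring = [mutate(current) for _ in range(100)]
        current = max(offspring + [current], key=fitness)
        generations += 1
    return generations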

I had the opportunity to hear Michigan State University philosopher Robert Pennock present on another much-touted algorithm, AVIDA, during the 2008 Bioethics Forum in Madison, Wisconsin. The forum carried the promising title Evolution In The 21st Century, and Pennock certainly did his utmost to adopt the ‘Darwin immortalized’ slant that the event was promoting (6). From the deliberations that followed Pennock’s exposition it appeared, on the surface, that AVIDA trumped its predecessors by not pre-specifying any sort of evolutionary target (5). But as I found out on further inspection, appearances can be deceptive. The AVIDA world is home to a brood of digital organisms that are rewarded with resources, and so replicate, each time they perform logic functions (e.g., AND, OR). Meyer’s principal criticism is that the inherent complexity of these functions in no way equates to that of the functions we find in genes, and that AVIDA therefore unreasonably “diminishes the probabilistic task that nature would face in ‘trying’ to evolve the first reproducing cell”.

The problems with AVIDA run deeper still, as Winston Ewert, William Dembski and Robert Marks II have made all too clear in their expository dissection of digital organisms. AVIDA organisms are endowed with virtual genomes and the capacity to replicate, and they operate within a realm of assigned merit values for each of the logic functions they perform (6). Ewert, Dembski and Marks conclude that “AVIDA generates active information from a number of knowledge sources provided by the programmer and, with respect to an evolutionary strategy, performs poorly with respect to other search strategies using the same prior knowledge” (7).
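The reward scheme just described can be illustrated with a toy scorer. To be clear, the merit values below are invented for illustration and the real AVIDA rewards organisms for logic operations performed by their genomic instruction sets, not by a single output value; the sketch only shows the shape of the idea: the programmer decides in advance which computations pay, and how much.

```python
# Illustrative merit values; AVIDA's actual reward table differs.
REWARDS = {"NOT": 2, "AND": 4, "OR": 4, "XOR": 16}

def evaluate(organism_output, a, b):
    # Credit the organism for every rewarded logic function its
    # output happens to match on the two 8-bit inputs.
    earned = 0
    if organism_output == (~a & 0xFF):
        earned += REWARDS["NOT"]
    if organism_output == (a & b):
        earned += REWARDS["AND"]
    if organism_output == (a | b):
        earned += REWARDS["OR"]
    if organism_output == (a ^ b):
        earned += REWARDS["XOR"]
    return earned
```

An organism that happens to output the AND of its inputs collects the AND merit, and with it the resources to replicate faster than its neighbours.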

More generally, the thrust of Meyer’s attack has everything to do with the law of conservation of information (COI) (6). COI theory supplies us with a critical insight: “all computer search algorithms of moderate to high difficulty require active information” (ie from the programmer) and “the amount of information in a computer in its initial state equals or exceeds the amount of information in its final state” (1). That is, evolutionary algorithms do not furnish us with a means by which to simulate the origin of genetic information through natural selection, given that too much information is siphoned into these algorithms at the outset by external intelligence.
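Active information, in the Dembski-Marks framework cited above, has a precise definition: it is log2(q/p), where p is the probability that a blind search succeeds and q the probability that the assisted search succeeds. The worked example below assumes a 28-character WEASEL-style target over a 27-letter alphabet; the numbers are illustrative, not drawn from any particular paper.

```python
import math

def active_information(p_blind, q_assisted):
    # Active information I+ = log2(q/p): how many bits the assisted
    # search gains over blind sampling of the space.
    return math.log2(q_assisted / p_blind)

# Blind search: drawing the full 28-character phrase in one go.
p = (1 / 27) ** 28
# WEASEL, with its built-in fitness function, succeeds essentially always.
q = 1.0
print(round(active_information(p, q), 1))
```

On these assumptions the answer comes to roughly 133 bits, which on the COI view is information supplied by the programmer’s fitness function rather than generated by the search.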

For the same reasons already mentioned, the mess left by Dr Seuss’ Cat in the Hat is once again proverbially pertinent to the matter at hand. In this regard, Meyer is to be congratulated for dragging out biology’s foremost skeleton in the closet: the absence of a scientifically plausible explanation for the origin of biological information. And we should share his enthusiasm for changing the way that hard-line biologists filter debate over life’s unfolding story.


1. Stephen Meyer (2009) Signature In The Cell: DNA And The Evidence For Intelligent Design, HarperOne, pp. 271-295

2. Iris Fry (2006) The origins of research into the origins of life, Endeavour, Vol. 30, Issue 1, March 2006, pp. 24-28

3. Robert L. Herrmann (1975) Implications of Molecular Biology for Creation and Evolution, JASA 27 (December 1975), pp. 156-159

4. Vladimir Red’ko (1998) Hypercycles, Principia Cybernetica

5. Thomas Schneider (2000) Evolution of biological information, Nucleic Acids Research, Vol. 28, pp. 2794-2799

6. Robert Deyes (2008) AVIDA As A ‘Teleo-LOGIC’ Model Of Life, Access Research Network

7. Winston Ewert, William Dembski, Robert Marks II (2009) Evolutionary Synthesis Of Nand Logic: Dissecting A Digital Organism, Proceedings Of The 2009 IEEE International Conference On Systems, Man, And Cybernetics, San Antonio, Texas, USA (October 2009), pp. 3047-3053

3 Replies to “Biology’s ‘Skeleton In The Closet’: The Broken Bones Of Origins Science”

  1. bornagain77 says:

    Where did the Information come from???,,,

    The Skeleton that keeps trying to sneak out of the closet:

    or maybe the elephant in the living room that won’t go away:

    The DNA Enigma – Where Did The Information Come From? – Stephen C. Meyer

    All Of Creation – Mercyme

  2. R0b says:

    I agree that the importance of such programs as WEASEL, ev, and Avida tends to get overblown. They confirm that RM+NS works if there is a somewhat gradual path to follow, but it’s the existence of such paths in biology that ID proponents deny.

    Having said that, I can see why the EIL’s “information accounting” framework might seem to contribute to the debate, but I don’t see how it actually does. “Conservation of Information” sounds like a principle that’s applicable to evolutionary computation and biological evolution, but I have yet to see it applied in a way that tells us anything of use. CoI holds only under particular conditions, and I see no way to make a valid case for those conditions obtaining.

    With regards to Chapter 13 of SitC, Meyer gets several things wrong in his description of ev. It’s obvious that he gets his understanding from Marks and Dembski, not from Schneider, as he makes the same mistakes they do and adds a few of his own. (BTW, it appears that a large amount of active info was injected in ev by accident, not design.) He also follows Marks and Dembski in misapplying a quote from Leon Brillouin, which deals with deterministic processes, not probabilistic searches.

    You say:

    COI theory supplies us with a critical insight: “all computer search algorithms of moderate to high difficulty require active information” (ie from the programmer) and “the amount of information in a computer in its initial state equals or exceeds the amount of information in its final state”

    Both of those statements are tautological. If an algorithm tractably finds a small target in a large space, then it has active information by definition of the term “active information”. And assuming that the second quote refers to algorithmic information, it’s defined to not increase during computation.

    Which leads me to ask about something that Meyer says in this chapter:

    Computer scientists have formulated various mathematically precise laws of conservation of information to express this basic principle. Most state that within certain quantifiable limits the amount of information in a computer in its initial state (considering both its hardware and software) equals or exceeds the amount of information in its final state.

    What is Meyer referring to? There’s no footnote. If he’s talking about algorithmic information, then as mentioned before, the statement is true by definition (except for the “within quantifiable limits” part). Does anyone know what laws he’s talking about?

  3. kairosfocus says:


    Allow me to use simple descriptions and examples.

    What design thinkers point out is that information-rich functional organisation comes in island-like target zones — islands of function — in wider configuration spaces that are dominated by what are sometimes called seas of non-function.

    Push in enough noise and you go off the cliffs or beaches into the sea of non function.

    And, for sufficiently complex function, starting from an arbitrary initial configuration runs out of accessible world resources long before one credibly samples enough of the config space to make it reasonably plausible that one can land on such an island.

    But within such an island, variation and differential function may lead to population shifts.

    Also, active information in effect gives a map or a beacon [warmer/colder] that leads one to the islands of function, based on intelligence. That is why it exceeds the capacity of a random walk.

    And of course the point on conservation of info is that the active info did not come up as a free lunch, it came from an intelligent source. A system that stores and uses complex functional information, as a rule will gradually deteriorate and lose information once none is being freshly injected by an active — intelligent — source. [Think of your PC . . . ]

    So, across time it loses information.

    GEM of TKI.
