Since Richard Dawkins thinks he has the right to reprint my letters to him by posting them over the Internet (go here), I’ll assume the same privilege applies to me. Let’s start with this exchange from the spring of 2000 (the paper in question became chapters 3 and 4 of my book NO FREE LUNCH):
From: Richard Dawkins [mailto:richard.dawkins@SNIP.ac.uk]
Sent: Friday, May 05, 2000 1:13 PM
Subject: Re: Evolutionary Algorithms Chapter
Dear Dr Dembski
Your paper is quite well written, and is not stupid (like the writings of your colleagues). But you are not saying anything I didn’t say myself, in The Blind Watchmaker, even if more briefly:-
The point about any phrase being equally eligible to be a target is covered on page 7: “Any old jumbled collection of parts is unique and, WITH HINDSIGHT, is as improbable as any other . . .” et seq.
More specifically, the point you make about the Weasel, is admitted, without fuss, on page 50: “Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a DISTANT IDEAL target . . . Life isn’t like that.”
In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It’s as simple as that. This is non-arbitrary. See bottom of page 8 to top of page 9. And it’s also a smooth gradient, not a sudden leap from a flat plain in the phase space. Or rather it must be a smooth gradient in all those cases where evolution has actually happened. Maybe there are theoretical optima which cannot be reached because the climb is too precipitous.
The Weasel model, like any model, was supposed to make one point only, not be a complete replica of the real thing. I invented it purely and simply to counter creationists who had naively assumed that the phase space was totally flat except for one vertical peak (what I later represented as the precipitous cliff of Mount Improbable). The Weasel model is good for refuting this point, but it is misleading if it is taken to be a complete model of Darwinism. That is exactly why I put in the bit on page 50.
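[For readers who haven’t seen the Weasel in action, the cumulative-selection procedure Dawkins is describing can be sketched in a few lines of Python. This is my own reconstruction of the idea, not Dawkins’ original program; the mutation rate and offspring count are arbitrary assumptions for the sketch:]

```python
import random
import string

# A minimal sketch of cumulative selection in the spirit of the Weasel
# illustration. Mutation rate and offspring count are assumed values.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05   # chance each character is rewritten when copied
OFFSPRING = 100        # mutant copies produced per generation

def score(phrase):
    """Number of positions at which the phrase matches the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase):
    """Copy the phrase, randomizing each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in phrase)

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Cumulative selection: breed many mutants and keep the best one
    # (the parent is retained if no mutant improves on it).
    parent = max([parent] + [mutate(parent) for _ in range(OFFSPRING)],
                 key=score)
print(f"Reached the target in {generation} generations")
```

[Single-step selection, by contrast, would wait for the whole phrase to appear in one random draw, an event of probability (1/27)^28; cumulative selection reaches it in a few hundred generations at most, precisely because each generation is judged by resemblance to a distant ideal target.]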
Perhaps you should look at the work of Spiegelman and others on evolution of RNA molecules in an RNA replicase environment. They have found that, repeatedly, if you ‘seed’ such a solution with an RNA molecule, it will converge on a particular size and form of ‘optimal’ replicator, sometimes called Spiegelman’s minivariant. Maynard Smith gives a good brief account of it in his The Problems of Biology (see Spiegelman in the index). Orgel extended the work, showing that different chemical environments select for different RNA molecules.
The theory is so beautiful, so powerful. Why are you people so wilfully blind to its simple elegance? Why do you hanker after “design” when surely you must see that it doesn’t explain anything? Now THAT’s what I call a regress. You are a fine one to talk about IMPORTING complexity. “Design” is the biggest import one could possibly imagine.
Dear Prof. Dawkins:
I’m puzzled why you mention Spiegelman’s replicase experiments. Just what do you think these experiments illustrate? The replicase protein is supplied by the investigator (from a viral genome), as are the activated mononucleotides needed to feed the RNA synthesis. The whole set-up is as artificial as the WEASEL illustration.
But the real problem is the steady attenuation of information in the experiment. As Brian Goodwin points out:
In a classic experiment, Spiegelman in 1967 showed what happens to a molecular replicating system in a test tube, without any cellular organization around it. The replicating molecules (the nucleic acid templates) require an energy source, building blocks (i.e., nucleotide bases), and an enzyme to help the polymerization process that is involved in self-copying of the templates. Then away it goes, making more copies of the specific nucleotide sequences that define the initial templates. But the interesting result was that these initial templates did not stay the same; they were not accurately copied. They got shorter and shorter until they reached the minimal size compatible with the sequence retaining self-copying properties. And as they got shorter, the copying process went faster. So what happened was natural selection in a test tube: the shorter templates that copied themselves faster became more numerous, while the larger ones were gradually eliminated.
But lest you stop here idling contentedly, Goodwin continues:
This looks like Darwinian evolution in a test tube. But the interesting result was that this evolution went one way: toward greater simplicity. Actual evolution tends to go toward greater complexity, species becoming more elaborate in their structure and behavior, though the process can also go in reverse, toward simplicity. But DNA on its own can go nowhere but toward greater simplicity. In order for the evolution of complexity to occur, DNA has to be within a cellular context; the whole system evolves as a . . .
(Brian Goodwin, _How the Leopard Changed Its Spots_, 1994, pp. 35-36. By the way, Thomas Ray’s Tierra environment gave a similar result: selection acting on replicators in a computational environment likewise drove them toward simplicity rather than complexity.)
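The one-way drift toward simplicity that Goodwin describes is easy to mimic in a toy simulation. The sketch below is my own illustration of that dynamic, not Spiegelman’s protocol; the population size, lengths, deletion rate, and length floor are all arbitrary assumptions:

```python
import random

# Toy model of selection for replication speed: shorter templates copy
# faster, so the population's mean length can only fall. All parameters
# here are assumed values for illustration.
random.seed(1)
MIN_LENGTH = 5            # assumed floor below which replication fails
population = [100] * 20   # twenty templates, each 100 "bases" long

for generation in range(200):
    # Replication probability is inversely proportional to length:
    # a template half as long replicates roughly twice as often.
    parents = random.choices(population,
                             weights=[1.0 / n for n in population], k=20)
    # Copying is sloppy: deletions occasionally shorten a template,
    # and nothing ever lengthens one.
    population = [max(MIN_LENGTH, n - random.choice([0, 0, 1]))
                  for n in parents]

mean_length = sum(population) / len(population)
print(f"Mean template length after 200 generations: {mean_length:.1f}")
```

Run it and the mean length falls well below the starting value of 100: selection amplifies whichever lineage happens to be shortest, and no force in the model pushes length back up. That is the point of citing Goodwin: selection on naked replicators is real, but it runs toward simplicity, not complexity.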
Given a realistic pre-biotic background, absolutely nothing is going to happen. RNA replicators will not arise, nor will cells. Their molecular constituents have to be instructed about where to go and what to do, just as the computer needs to be supplied with “Methinks,” etc. Thinking that there is some magical way around this is delusory.
Your ace-in-the-hole argument seems to be a tu quoque move: “Well, *you’ve* postulated a designer. You’re the REAL cheaters!” But this is hardly an adequate response to the information problem. Nor is simply positing smooth gradients: of course the gradients must be smooth if Darwinism obtains; but if Darwinism itself is at issue, then the gradients need to be established empirically. Work like Michael Behe’s suggests that the gradients are anything but smooth. Granted, Behe has published in the popular press. But there is work by design theorists, now starting to appear in the journals, that argues this point with full rigor for specific biochemical systems.