A reply to Professor Moran
December 21, 2011 | Posted by vjtorley under Intelligent Design
Professor Moran has graciously replied to my recent post, “Will this do, Professor Moran?” (18 December 2011) in which I attempted to flesh out the argument that irreducible complexity requires an Intelligent Designer. I would like to thank him for taking the trouble to write a detailed rebuttal of my argument.
Since Professor Moran is a respected biochemist, I won’t be contesting his claim that the citric acid cycle evolved in a Darwinian fashion. What I’ll attempt to show is that it fails as a counter-example to my argument.
“Unlikely” is not the same thing as “impossible”
Before I address Professor Moran’s scientific arguments, I’d like to draw his attention to one brief but important passage in my post:
Note: The argument here is not absolutely ironclad; it is a probabilistic one…
I also wrote that “intelligent design is the best explanation for the generation of irreducibly complex systems.” However, I didn’t claim that it was the only possible explanation.
So I was astonished when I read the following passage in Professor Moran’s post:
It’s quite easy to think of examples that correspond to the steps that Torley says are impossible.
“Impossible” is not a word I used in my argument. (I did use the term “cannot,” but only in relation to states of affairs that were impossible by definition – e.g. a system with a large number of parts cannot have only a very small number of parts.) The phrase I used, over and over again in my argument, was “very unlikely.” Professor Moran is putting words into my mouth.
I’d also like to mention that I accept common descent. What I do not accept is the adequacy of any unguided mechanism (e.g. neo-Darwinian evolution) in accounting for the origin and development of life. I’m quite sure that Darwinian mechanisms played a role; I just don’t think they’re the stars of the show.
Which version of irreducible complexity am I talking about?
Towards the end of his post, Professor Moran expresses understandable frustration at the fact that Intelligent Design proponents don’t have a single, common definition of “irreducible complexity.”
Now Professor Moran is a biochemist, so I’ll answer him with a question: what’s an acid? He knows perfectly well that there’s more than one definition of that chemical term, just as there’s more than one definition of the biological term “species.” Multiple definitions for a scientific term are fine, so long as everyone is clear about which definition is being used. At the outset of my article on irreducible complexity, I used a definition which I quoted from a 2004 paper by Professor William Dembski.
Professor Michael Behe now uses a different definition from the one he originally formulated in Darwin’s Black Box: The Biochemical Challenge to Evolution (The Free Press: New York) in 1996, where he wrote:
By irreducibly complex I mean a single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning. (Behe 1996, 39)
This is pretty close to Dembski’s definition:
A functional system is irreducibly complex if it contains a multipart subsystem (i.e., a set of two or more interrelated parts) that cannot be simplified without destroying the system’s basic function.
In a 2000 paper entitled In Defense of the Irreducibility of the Blood Clotting Cascade: Response to Russell Doolittle, Ken Miller and Keith Robison, Behe proposed replacing his old definition of irreducible complexity with an evolutionary definition:
An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.
While Professor Behe’s new definition is more mathematically rigorous than his old one, it is less intuitive. From a layperson’s perspective, it’s nice having a definition which you can picture easily. Behe’s original definition came with a handy visual illustration: the mousetrap. This is something which you can see won’t work if one of its parts goes missing – that is, unless someone cleverly tinkers with the remaining parts. (Yes, one can imagine a freak occurrence which might render the remaining parts functional, but once again, that would be “very unlikely.”) Another reason why I chose not to use Behe’s new definition is that it’s an historical definition. Unfortunately, many biochemical systems don’t wear their history on their sleeves, so to speak – but they do display their functionality in a way that everyone can see.
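Whatever its drawbacks for the layperson, Behe’s evolutionary definition is simple enough to express as a toy computation. The sketch below is entirely my own illustration (the pathway and its labels are hypothetical, not anything from Behe’s paper): each step in an imagined pathway is tagged with whether it was selected when it arose, and the “degree of irreducible complexity” is just the count of unselected steps.

```python
# Toy encoding of Behe's 2000 evolutionary definition (my own
# illustrative sketch; the pathway and its labels are hypothetical).
# Each step is a pair: (mutation name, was it selected when it arose?)
pathway = [
    ("mutation_A", True),   # selected: conferred an immediate advantage
    ("mutation_B", False),  # unselected: necessary but neutral at the time
    ("mutation_C", False),  # unselected
    ("mutation_D", True),   # selected
]

def degree_of_irreducible_complexity(pathway):
    """Number of necessary-but-unselected steps, per Behe (2000)."""
    return sum(1 for _, selected in pathway if not selected)

print(degree_of_irreducible_complexity(pathway))  # 2
```

On this definition, the hypothetical pathway above has a degree of irreducible complexity of 2, because two of its necessary steps offered no selective advantage when they occurred.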
Professor Moran’s “foot in the door”
Now let’s go back to Behe’s 1996 definition. Notice that he spoke of “several well-matched, interacting parts that contribute to the basic function” (italics mine). Professor Dembski didn’t use these exact words in his 2004 definition; he used the somewhat more ambiguous term “inter-related.” This definitional ambiguity was Professor Moran’s “foot in the door.”
In his post, Professor Moran argued that the citric acid cycle would satisfy this definition of irreducible complexity: it has multiple parts (enzymes); these parts are inter-related, insofar as they constitute a chemical cycle; and finally, if you remove any of the parts, you break the cycle, so the system’s basic function is destroyed. And yet the citric acid cycle clearly evolved from two other pathways that originally had different functions. Game, set and match?
Not quite. If you look at my post and Professor Dembski’s article, Irreducible Complexity Revisited (version 2.0; revised 2/23/2004), you’ll see that we both used the term “configuration” to describe the arrangement of the parts. Take these two rhetorical questions which Dembski poses, when describing the “daunting probabilistic hurdles” that a Darwinian mechanism for assembling an irreducibly complex system must face:
(5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system? …
(7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system? (2004, pp. 30-31)
What Professor Dembski had in mind was not a set of separate components, each of which performs a task in some fixed temporal sequence, but rather a structure composed of spatially contiguous, inter-locking components – such as the parts of a bacterial flagellum. This becomes apparent when he describes the difficulties attending hurdle number (5) – interface compatibility – when building a bacterial flagellum by a gradual Darwinian process:
For the Darwinian mechanism to evolve a system, it must redeploy parts previously targeted for other systems. But that’s not all. It also needs to ensure that those redeployed parts mesh or interface properly. (2004, p. 35)
Dembski goes on to highlight the difficulty: “The products of Darwinian evolution are, after all, … systems formed by sticking together items previously assigned to different uses” (2004, p. 35, italics mine).
Likewise, in step (iv)(a) of my own argument – the step which Professor Moran attacks – I explicitly used the term “configuration”:
However, it’s very unlikely that a system with function G, which gains one new part, while keeping the existing parts in nearly the same configuration as they were before, should suddenly be able to perform a totally new function F, especially if the number of parts in the system is large. (Emphasis mine – VJT.)
So, is the citric acid cycle irreducibly complex? If not, what is?
It should be clear by now that the citric acid cycle isn’t the sort of thing that Professor Dembski or I would want to describe as “irreducibly complex.” The enzymes in the cycle make up a pathway – i.e. an ordered sequence of reactions. The enzymes in the cycle aren’t all stuck together in some giant superstructure, so there is no multi-part configuration.
For my part, I’m prepared to go further and say that the blood clotting cascade isn’t the sort of thing I’d want to call “irreducibly complex” according to my definition. That doesn’t mean I necessarily think it evolved through a Darwinian process; it just means that according to the definition of “irreducibly complex” which I’m using, the question of whether it originated in that way is impossible for me to answer. Is that a problem? No. Remember: the aim of my argument was simply to develop a case for intelligent design, using a definition of “irreducible complexity” which applies to at least some of the systems which ID proponents would identify as irreducibly complex. I’m happy to focus on the bacterial flagellum, for argument’s sake.
The bacterial flagellum: draw me some pictures, please!
And that was what step (iv)(a) of my argument was about. Co-option is the standard neo-Darwinian explanation for the evolution of the bacterial flagellum, but when you’re dealing with something that has 30 parts, and the nearest functional sub-unit is a piece that has 10 parts, then I’d say you still have a lot of explaining to do. (That’s why the TTSS – type III secretion system – story doesn’t impress me: it’s a very long way from 10 to 30.)
You could suppose the existence of some “magic pathway,” where the successive addition of each new part somehow generates a new biological function, but then you’ve got to confront the configuration question: when I add a new part, do I substantially retain the old configuration of parts, or do I reshuffle the parts I already have? The idea that 30 successive biological functions could appear through the successive addition of parts to an existing configuration, without any reshuffling, beggars belief; and the idea that dramatic reshuffling of the configuration could generate this succession of functions as the structure gets bigger and bigger appears equally ludicrous: it’s too much of a miracle.
Yes, you could imagine two or more smaller functional structures evolving in parallel and then coming together to make a bigger 30-part system. But the more sub-units you invoke, the harder it is to envisage them all coming together and producing something with a new function of its own. It’s much more likely that the sub-units wouldn’t mesh properly.
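To make the “meshing” worry concrete, here is a deliberately crude back-of-the-envelope model of my own devising (the probability value is an arbitrary assumption, not a measured quantity): if each interface between two independently evolved sub-units has some small chance p of being compatible, then joining k sub-units in a chain requires k − 1 compatible interfaces, and the chance that everything meshes shrinks geometrically with k.

```python
# Crude toy model (my own assumption: independent interfaces, each
# compatible with probability p; nothing here is measured biology).
def p_all_mesh(p: float, k: int) -> float:
    """Chance that k sub-units joined in a chain all mesh,
    given k - 1 independent interfaces of compatibility p."""
    return p ** (k - 1)

# The more sub-units you invoke, the faster the probability collapses:
for k in (2, 3, 5):
    print(k, p_all_mesh(0.01, k))
```

On these (admittedly toy) assumptions, going from two sub-units to five drops the chance of a fully meshed assembly from 1 in 100 to 1 in 100 million, which is the intuition behind my claim that invoking more sub-units makes the scenario harder, not easier.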
So I’d like to ask Professor Moran: how do you envisage the bacterial flagellum evolving? I’m not asking for lots of details here – just a conceptual scenario will be fine. They say a picture is worth a thousand words. Personally, I’d be happy with two or three simple pictures, because I really can’t picture any good way of building an irreducibly complex structure with 30 parts, without making a lot of “iffy” assumptions.
Darwinists can’t keep up with the science
One way in which a layperson like myself can tell when someone’s losing a scientific argument is when the number of new facts they can’t explain grows faster than their ability to generate hypotheses to explain the old ones. That is precisely what seems to be happening in research on the bacterial flagellum. Professor Michael Behe’s 2007 book, The Edge of Evolution, has a whole Appendix devoted to what scientists now know about how the flagellum is built, and the system of controls that regulates its construction. Behe writes:
Complex, functional structures such as the cilium and flagellum are just the beginning. They demand intricate machinery and control programs to build them. Without those support systems, the final structure wouldn’t be possible. The bacterial flagellum contains several dozen protein parts. The cilium, which has so far resisted investigation of its DNA control program, has several hundred. There is every reason to think that the control of its construction will have to be much more intricate than that of the flagellum. (2007, p. 100)
In a diagram on page 99, Behe adds:
Genes for the construction of the bacterial flagellum are activated in a precisely timed fashion. Those needed for the construction of the bottom of the molecular machine are switched on first, followed in order by those needed for more distant parts.
Wait a minute. There’s a timing sequence? That really creates problems for evolutionary scenarios where you have lots of little sub-units coming together to make a flagellum, doesn’t it? How did the timing for the activation of the genes get re-regulated, so that the whole thing would develop in the right sequence, from bottom to top? I’m not saying it’s impossible. All I’m saying is: if I were a Darwinian, I’d be pouring myself a brandy. Your headache isn’t getting better; it’s getting worse.
Back to the citric acid cycle
I declared at the beginning of this post that I wouldn’t be contesting Professor Moran’s claim that the citric acid cycle evolved in a Darwinian fashion. But that doesn’t mean that I think the components of the system evolved in a stepwise fashion. I’m talking about enzymes here. I couldn’t help noticing that one of these enzymes is called citrate synthase. Here’s what Wikipedia says about it:
Citrate synthase’s 437 amino acid residues are organized into two main subunits, each consisting of 20 alpha-helices. These alpha helices compose approximately 75% of citrate synthase’s tertiary structure, while the remaining residues mainly compose irregular extensions of the structure, save a single beta-sheet of 13 residues. Between these two subunits, a single cleft exists containing the active site.
437 amino acids? We’re really talking about configuration problems here, if we try to imagine a step-wise scenario for its origin. I’ll let readers have a look at the beast and judge for themselves.
Those who are familiar with the work of Dr. Douglas Axe will recognize the problem I’m talking about. Only a tiny fraction of amino acid sequences are in any way functional. The odds against proteins such as citrate synthase forming in this way are astronomical.
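Readers can check the scale of the problem for themselves. The sketch below is my own back-of-the-envelope arithmetic: the 437-residue length comes from the Wikipedia passage quoted above, and the “roughly 1 in 10^77 functional” figure is Axe’s published estimate for a 150-residue domain, applied here purely for illustration. The true functional fraction for citrate synthase itself is not known.

```python
import math

# Raw sequence space for a 437-residue protein (20 amino acids per site).
# 20 ** 437 is astronomically large, so we work in log10 to keep it readable.
residues = 437
log10_space = residues * math.log10(20)
print(round(log10_space, 1))  # about 568.6, i.e. roughly 10**569 sequences

# Axe (2004) estimated that roughly 1 in 10**77 sequences of a
# 150-residue domain is functional. Applying that figure here is an
# illustrative assumption, not a measurement for citrate synthase.
p_functional_per_draw = 10.0 ** -77
print(p_functional_per_draw)
```

Even granting the illustrative nature of these numbers, the gap between a search space of roughly 10^569 sequences and a functional fraction on the order of 10^-77 per domain is what I mean when I say the odds are astronomical.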
I’m sure that Professor Moran will insist that there are other more likely explanations for the origins of proteins. What about RNA, for instance? OK, fine. I have just one question. Can you point out an alternative scenario, and show me some calculations (back-of-the-envelope will do) indicating that your scenario is more likely to generate a functional protein than the nightmare scenario of building up an amino acid chain step by step? If you can’t quantify, then you’re not doing science. Well, what are you doing? Theorizing. Mmm. That sounds like religion to me. Don’t you think?