In a recent post entitled "Barry Arrington Explains Irreducible Complexity," Professor Laurence Moran sought to discredit the argument that irreducible complexity requires an Intelligent Designer.
Let me state up-front that I am a philosopher, not a scientist. However, I believe in arguing rigorously, so I have attempted to state the argument from irreducible complexity in a rigorous fashion. I’d appreciate hearing what Professor Moran, as a biologist, thinks of this argument.
What is irreducible complexity?
I’d like to quote a passage from an online paper entitled Irreducible Complexity Revisited (version 2.0; revised 2/23/2004) by Professor William Dembski.
The basic logic of IC [Irreducible Complexity] goes like this:
A functional system is irreducibly complex if it contains a multipart subsystem (i.e., a set of two or more interrelated parts) that cannot be simplified without destroying the system’s basic function. I refer to this multipart subsystem as the system’s irreducible core.
We can therefore define the core of a functionally integrated system as those parts that are indispensable to the system’s basic function: remove parts of the core, and you can’t recover the system’s basic function from the other remaining parts. To say that a core is irreducible is then to say that no other systems with substantially simpler cores can perform the system’s basic function.
My argument for why the unguided evolution of a multi-part irreducibly complex system is extremely unlikely
Definition: “Reasonably probable” means “likely to happen at least once, given the time available” (that is, given the number of evolutionary trials on offer).
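One way to make this definition concrete is the following minimal sketch (the sketch and its numbers are my own, purely illustrative): a step with per-trial probability p is reasonably probable only if its chance of occurring at least once, across all the trials available, is non-negligible.

```python
# Toy illustration of "reasonably probable" (all numbers hypothetical).
# A step with per-trial probability p occurs at least once in N independent
# trials with probability 1 - (1 - p)**N.
p = 1e-10   # assumed probability of the step, per organism per generation
N = 10**9   # assumed number of trials (population size x generations available)
print(1 - (1 - p)**N)   # ~0.095: this step is only borderline "reasonably probable"
```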
Assumption: Saltationism won’t work, as an explanation of the unguided evolution of a multi-part irreducibly complex system. (“Nature does not make leaps.”)
Assumption: The unguided evolution of a multi-part irreducibly complex system proceeds by a Darwinian process.
Argument:
The unguided, Darwinian evolution of a complex system with an irreducible core of n parts which is able to perform a particular function F has to proceed in little steps, each of which is reasonably probable (see the sketch after this list), where each step:
EITHER (i) starts with a very small number of parts, which together perform a biologically useful function when configured in the right way; {initial function – the first step}
OR (ii) adds a new part / alters an existing part, thereby improving an existing function of a system; {incremental change}
OR (iii) removes a part, but preserves the existing function of a system, resulting in a system which is still able to perform the same function, but with fewer parts, some of which may now be indispensable; {removal of scaffolding}
OR (iv)(a) adds a new part to / alters an old part in an existing system with function G, thereby generating a system which is able to perform a brand new function F; {co-option} and/or {transformation}
OR (iv)(b) removes a part from an existing system with function G, thereby generating a system which is able to perform a brand new function F. {novelty-creating loss}
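Here is a toy calculation (my own, with made-up numbers, and assuming for simplicity that the steps are independent) showing why every step must be reasonably probable: the probability of completing the whole pathway is the product of the step probabilities, so a single wildly improbable step sinks the pathway no matter how easy the other steps are.

```python
# Toy pathway calculation (hypothetical step probabilities; independence assumed).
from math import prod

steps = [0.9, 0.8, 0.9, 1e-15, 0.9]   # one step is wildly improbable
print(prod(steps))   # ~5.8e-16: the single hard step sinks the whole pathway
```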
Why (i) alone won’t work
By definition, (i) alone cannot generate a multi-part complex system with an irreducible core of n parts, since the system is still very simple: it still has only a very small number of parts.
Why a combination of (i) and (ii) won’t work
By definition, a combination of (i) and (ii) cannot generate a complex system with an irreducible core of n parts: each part added at step (ii) merely improves a function the system already performs, so the system would go on functioning without it. The added parts are therefore never indispensable, and the core never grows beyond its original handful of parts.
Why a combination of (i), (ii) and (iii) won’t work
A combination of (i) and (ii) followed by (iii) could theoretically generate a complex system with an irreducible core of n parts, as the loss of a part may transform a reducibly complex system into an irreducibly complex one. But a system which has been initially built up by a combination of (i) and (ii) is likely to have a comfortable margin of error in its spatial configuration, since none of the parts is absolutely critical to the system. In other words, the system will have high fault tolerance. (The system is reducibly complex, so if the configuration of the parts varies slightly, that shouldn’t affect the functionality of the system too much.)
However, in a complex system with an irreducible core of n parts, the spatial configuration of the parts is of vital importance: everything has to hang together in just the right way. (Think of Professor Michael Behe’s mousetrap.) What’s more, for a very large value of n, the margin of error in the spatial configuration of the parts in a complex system with an irreducible core is likely to be extremely small. Such a system has a negligible margin of error in its spatial configuration, or near-zero fault tolerance.
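A toy model of my own devising (not Behe’s or Dembski’s) illustrates how quickly the functional region shrinks as n grows: if each of the n parts must independently sit within some fraction eps of its ideal position for the system to work, then the functional region occupies only eps^n of the configuration space, which is exponentially small in n.

```python
# Toy tolerance model (the model and numbers are mine, purely illustrative).
# If each of n parts must land within a fraction eps of its ideal position,
# the functional region occupies a fraction eps**n of configuration space.
eps = 0.1   # assumed per-part positional tolerance (hypothetical)
for n in (3, 10, 30):
    print(f"n={n}: functional fraction = {eps**n:.1e}")
```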
It’s very unlikely that the removal of a part from a complex system whose spatial configuration of parts has a comfortable margin of error (i.e. high fault tolerance) will suddenly result in the formation of a system whose spatial configuration of parts has a negligible margin of error, or near-zero fault tolerance.
Cyclic repetition of (ii) and (iii) won’t help matters either, as repetition of step (ii) tends to increase the margin of error and hence the fault tolerance of the system, thereby making it harder and harder for step (iii) to generate a system with near-zero fault tolerance.
Conclusion: At least some of the steps in the evolution of a complex system with an irreducible core have to be either type (iv)(a) or type (iv)(b) steps.
Why (iv)(a) won’t work
It’s very unlikely that a system with function G, which gains one new part while keeping the existing parts in nearly the same configuration as before, should suddenly be able to perform a totally new function F, especially if the number of parts in the system is large. Reason: the space of all possible configurations is astronomically large, and the vast majority of configurations don’t do anything useful: they have no functionality. (Think of amino acid chains.) The number of possible functions is therefore much, much smaller than the number of possible configurations, and different functions are likely to be isolated on little islands of configuration space. If just adding one part to a complex system with an existing function G were enough to generate a system with a new function F, that would mean, contrary to supposition, that the two functions were relatively close together in configuration space. As the number of parts n of the complex system increases, this scenario becomes less and less plausible. (Now think of Behe’s bacterial flagellum. Even the simplest flagella require 30 parts. The idea that adding one part to an existing 29-part system would somehow magically confer the functionality of the flagellum appears extremely unlikely.)
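The claim that configuration space is astronomically large is just standard combinatorics; here is a back-of-the-envelope sketch (the chain lengths are mine, purely illustrative): a chain of n amino acids, each drawn from the 20 standard types, can be arranged in 20^n ways.

```python
# Back-of-the-envelope count of sequence space (standard combinatorics;
# the chain lengths are illustrative).
import math

for n in (30, 100, 300):
    print(f"chain of {n} amino acids: about 10^{n * math.log10(20):.0f} sequences")
```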
The same logic applies if we imagine that no part is added, but that one of the existing parts of a system with function G is altered. Again, it is extremely unlikely that a single alteration would confer a new function F upon the system, especially if the number of parts in the system is already large.
It’s even less likely that a system with function G, which gains one new part while at the same time dramatically reshuffling the configuration of the old parts, should suddenly be able to perform a totally new function F. Reason: if the reshuffling is dramatic, it’s much more likely to merely destroy existing functionality than to confer new functionality. (Recall that the vast majority of possible configurations don’t do anything useful: they have no functionality. Wrecking is easy; building is hard.)
Why (iv)(b) won’t work
It’s even less likely that a system with function G, which loses one part while keeping the other parts in nearly the same configuration as before, should suddenly be able to perform a totally new function F. Reason: losses of parts tend to destroy functionality. Also, it would mean that two functions were relatively close together in configuration space, which is extremely unlikely, since the number of possible configurations is much, much larger than the number of possible functions.
It’s less likely still that a system with function G, which loses one part while at the same time dramatically reshuffling the configuration of the other parts, should suddenly be able to perform a totally new function F. Both of the reasons just given apply here, and in addition, dramatic reshuffling is more likely to destroy existing functionality than to create new functionality.
Why do we need a Designer to account for irreducibly complex systems?
Intelligent design is the only known process that reliably generates systems which are both vastly improbable and functional. Since I have argued above that the unguided evolution of irreducibly complex systems is vastly improbable, and since such systems by definition perform a function, intelligent design is the best explanation for their origin.
Note: The argument here is not absolutely ironclad; it is a probabilistic one, and it does not establish the existence of God, but merely of an Intelligent Designer of certain biological systems.
If you want a good, non-probabilistic argument for the existence of God, I’d recommend Job Opening: Creator of the Universe — A Reply to Keith Parsons (2009) by Professor Paul Herrick. I’d also recommend the lecture notes and bibliography from Dr. Robert Koons’ Western Theism course (1998) for a highly readable summary of some of the best philosophical arguments for God’s existence. If you’d like a good summary of the fine-tuning argument, try The Teleological Argument: An Exploration of the Fine-Tuning of the Universe by Dr. Robin Collins (in The Blackwell Companion to Natural Theology, ed. William Lane Craig and J. P. Moreland, Blackwell Publishing Ltd., 2009, ISBN 978-1-405-17657-6). These are about the best resources online for atheists who want to acquaint themselves with the arguments for God’s existence.
Where did the information in the designer come from?
The Designer isn’t irreducibly complex, so He doesn’t need another Designer.
Recall the definition of irreducible complexity: “a set of two or more interrelated parts that cannot be simplified without destroying the system’s basic function.” If the Designer (i) has no parts or (ii) has parts which cannot be removed because they’re inseparable from one another or (iii) is reducibly complex, then He won’t need a Designer, according to the argument I have put forward above.
That’s about all I have time to write today. Do readers think I have expressed the argument that irreducible complexity requires an Intelligent Designer in a sufficiently rigorous fashion? I’d like to hear your thoughts.