Amid the fog of recent controversies, I can discern a hopeful sign: the key figures in the ongoing debate over specified complexity on Uncommon Descent are actually converging in their opinions. Allow me to explain why.
Winston Ewert’s helpful clarifications on CSI
In a recent post, ID proponent Winston Ewert agreed that Elizabeth Liddle had a valid point in her criticisms of the design inference, but then went on to say that she had misunderstood what the design inference was intended to do (emphases mine):
She has objected that specified complexity and the design inference do not give a method for calculating probabilities. She is correct, but the design inference was never intended to do that. It is not about how we calculate probabilities, but about the consequences of those probabilities. Liddle is complaining that the design inference isn’t something that it was never intended to be.
He also added:
…[T]he design inference is a conditional. It argues that we can infer design from the improbability of Darwinian mechanisms. It offers no argument that Darwinian mechanisms are in fact improbable. When proving a conditional, we are not concerned with whether or not the antecedent is true. We are interested in whether the consequent follows from the antecedent.
In another post, Winston Ewert summarized his thoughts on specified complexity:
The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.
Winston Ewert concluded that “the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable.”
To which I would respond: hear, hear! I completely agree.
What about Ewert’s claim that “CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable”? He is correct, if by “CSI and Specified complexity” he simply means the concepts denoted by those terms. If, however, we are talking about the computed probability of the bacterial flagellum having emerged via unguided processes, then of course this number can be used to support a design inference: if the probability in question is low enough, then the inference to an Intelligent Designer becomes a rational one. Ewert evidently agrees with me on this point, for he writes that “Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.”
In a recent post, I wrote that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).” Immediately afterwards, I added that in order to calculate the specified complexity of an object, we first require “the probability of producing the object in question via ‘Darwinian and other material mechanisms.'” I then added that “we compute that probability.” The word “compute” makes it quite clear that without that probability, we will be unable to infer that a given object was in fact designed. I concluded: “To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process” (italics added).
Imagine my surprise, then, when I discovered that some readers had been interpreting my claim that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold)” as if I were arguing for a design inference on the basis of some pre-specified numerical value for CSI! Nothing could be further from the truth. To be quite clear: I maintain that the inference that biological organisms (or structures, such as proteins) were designed is a retrospective one. We are justified in making this inference only after we have computed, on the basis of the best information available to us, that the probability of these organisms (or structures) emerging via unguided processes – in which I include both random changes and the non-random winnowing effect of natural selection – falls below a certain critical threshold of 1 in 2^500 (or roughly 1 in 10^150). There. I cannot be clearer than that.
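The arithmetic behind that threshold is simple enough to set down in a few lines. The sketch below is my own illustration (the function names are invented, and Dembski’s full formula includes factors I omit here): it converts an independently computed probability of unguided emergence into bits of specified complexity, and checks whether it clears the 500-bit (1 in 2^500) threshold.

```python
import math

DESIGN_THRESHOLD_BITS = 500  # i.e. a probability below 1 in 2^500 (roughly 1 in 10^150)

def specified_complexity_bits(log10_p):
    """Specified complexity in bits, from the base-10 logarithm of an
    independently computed probability of unguided emergence.
    (Working in log space avoids floating-point underflow for
    astronomically small probabilities.)"""
    return -log10_p * math.log2(10)

def infer_design(log10_p):
    """Retrospective inference: the probability must be computed first;
    only then do we ask whether it clears the threshold."""
    return specified_complexity_bits(log10_p) > DESIGN_THRESHOLD_BITS

print(infer_design(-1018))  # True:  1 in 10^1018 is far below the threshold
print(infer_design(-100))   # False: 1 in 10^100 is not improbable enough
```

Note that the probability is an input to this sketch, not an output: nothing here computes P(T|H), which is exactly the point.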
So I was heartened to read, in a recent post by Barry Arrington, that Keith S had endorsed a form of design inference when he wrote:
To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don’t exclaim “Design!” after every 500 coin flips. The missing ingredient is the specification of the target T.
Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly.
That certainly sounds like a design inference to me.
In a follow-up comment on Barry Arrington’s post, Keith S went on to point out:
…[I]n that example, I am not calculating CSI and then using it to determine that something fishy is going on. Rather, I have to determine that something fishy is going on first (that is, that P(T|H) is extremely low under the chance hypothesis) in order to attribute CSI to it.
To which I would respond: you’re quite right, Keith S. That’s what I’ve been saying and what Winston Ewert has been saying. It seems we all agree. We do have to calculate the probability of a system emerging via random and/or non-random unguided processes, before we impute a high level of CSI to the system and conclude that it was designed.
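Keith S’s coin-flipping example can be put into a few lines of Python. This is only an illustrative sketch, with names of my own choosing, but it captures the order of operations: every particular 500-flip sequence is equally improbable under the chance hypothesis H, and it is the match to the independently given target T – not improbability alone – that licenses rejecting H.

```python
import random

FLIPS = 500
TARGET = "H" * 250 + "T" * 250  # the specification T

def flip_sequence(n=FLIPS):
    """Simulate n fair coin flips under the chance hypothesis H."""
    return "".join(random.choice("HT") for _ in range(n))

# Every specific 500-flip sequence has the same tiny probability under H:
p_per_sequence = 2.0 ** -FLIPS  # about 3.05e-151

# Improbability alone triggers nothing; a random run virtually never
# matches the pre-specified target...
print(flip_sequence() == TARGET)  # False, for all practical purposes

# ...so an actual match to T would justify rejecting H. P(T|H) is
# assessed first; the CSI label is attached afterwards.
```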
CSI vs. irreducible complexity: what’s the difference?
In a subsequent comment, Keith S wrote:
I think it’s instructive to compare irreducible complexity to CSI in this respect.
To argue that something is designed because it exhibits CSI is circular, because you have to know that it is designed before you can attribute CSI to it.
To argue that something is designed because it is irreducibly complex is not circular, because you can determine that it is IC (according to Behe’s definition) without first determining that it is designed.
The problem with the argument from IC is not that it’s circular — it’s that IC is not a barrier to evolution.
For the record: the following article by Casey Luskin over at Evolution News and Views sets forth Professor Mike Behe’s views on exaptation, which are that while it cannot be absolutely ruled out, its occurrence is extremely improbable, even for modestly complex biological features. Professor Behe admits, however, that he cannot rigorously quantify his assertions, which are based on his professional experience as a biochemist. Fair enough.
The big difference between CSI and irreducible complexity, then, is not that the former is circular while the latter is not, but that CSI is quantifiable (for those systems where we can actually calculate the probability of their having emerged via unguided random and/or non-random processes), whereas irreducible complexity is not. That is what makes CSI so useful when arguing for design.
Does Dr. Dembski contradict himself? I think not
Keith S claims to have uncovered a contradiction between the following statement by leading Intelligent Design advocate Dr. William Dembski:
Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.
and this statement of his:
It is CSI that Michael Behe has uncovered with his irreducibly complex biochemical machines. It is CSI that for cosmologists underlies the fine-tuning of the universe and that the various anthropic principles attempt to understand.
I don’t see any contradiction at all here. In the first quote, Dr. Dembski is cautiously pointing out that the inference that the bacterial flagellum was designed hinges on probability calculations, which we do not know for certain to be correct. In the second quote, he is expressing his belief, based on his reading of the evidence currently available, that these calculations are in fact correct, and that Nature does in fact exhibit design.
Dembski and the Law of Conservation of Information
Keith S professes to be deeply puzzled by Dr. Dembski’s Law of Conservation of Information (LCI), which he finds “murky.” He is especially mystified by the statement that neither chance nor law can increase information.
I’d like to explain LCI to Keith S in a single sentence. As I see it, its central insight is very simple: that when all factors are taken into consideration, the probability of an event’s occurrence does not change over the course of time, until it actually occurs. In other words, if the emergence of life in our universe was a fantastically improbable event at the time of the Big Bang, then it was also a fantastically improbable event 3.8 billion years ago, immediately prior to its emergence on Earth. And if it turns out that the emergence of life on Earth 3.8 billion years ago was a highly probable event, then we should say that, already at the time of the Big Bang, the subsequent emergence of life in our universe was highly probable, too. Chance doesn’t change probabilities over the course of time; neither does law. Chance and law simply provide opportunities for the probabilities to be played out.
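One way to make this intuition precise (my own gloss, not Dembski’s formal statement of LCI) is via the law of total probability. If E is the event of interest – say, the emergence of life – and C_1, …, C_n are the mutually exclusive states the universe might occupy at some intermediate time, then:

```latex
P(E) \;=\; \sum_{i=1}^{n} P(E \mid C_i)\, P(C_i)
```

Individual terms P(E | C_i) may be large or small, but the weighted sum – the probability “when all factors are taken into consideration” – is the same at the Big Bang as at any later moment before E occurs.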
Someone might argue that we can think of events in human history which seemed highly improbable at time t, but which would have seemed much more probable at a later time t + 1. (Hitler’s rise to power in Germany would have seemed very unlikely in January 1923, but very likely in January 1933.) But this objection misses the point. Leaving aside the point that humans are free agents, a defender of LCI could reply that when all factors are taken into consideration, events that might seem improbable at an earlier time can in fact be demonstrated to have a high probability of occurring subsequently.
Making inferences based on what you currently know: what’s the problem with that?
Certain critics of Intelligent Design are apt to fault ID proponents for making design inferences based on what scientists currently know. But I see no problem with that, as long as ID proponents declare that they would be prepared to cheerfully revise their opinions, should new evidence come to light which overturns currently accepted beliefs.
I have long argued that Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, whose argument I summarized in my recent post, Barriers to macroevolution: what the proteins say, demonstrates beyond reasonable doubt that unguided mechanisms could not have given rise to the protein folds we find in living creatures’ proteins in the space of just four billion years. I have also pointed out that Dr. Eugene Koonin’s peer-reviewed article, The Cosmological Model of Eternal Inflation and the Transition from Chance to Biological Evolution in the History of Life (Biology Direct 2 (2007): 15, doi:10.1186/1745-6150-2-15), makes a very strong case that the probability of a living thing capable of undergoing Darwinian evolution – or what Dr. Koonin refers to as a coupled translation-replication system – emerging in our observable universe during the course of its history is astronomically low: 1 in 10^1018 is Dr. Koonin’s estimate, using a “toy model” that makes deliberately optimistic assumptions. Finally, I have argued that Dr. Robin Collins’ essay, The Teleological Argument, rules out the infinite multiverse hypothesis which Dr. Koonin invokes to explain the unlikely emergence of life in our universe: as Dr. Collins argues, a multiverse would itself need to be specially fine-tuned in order to produce even one universe like our own. If Dr. Axe’s and Dr. Koonin’s estimates are correct, and if we cannot fall back on the hypothesis of a multiverse to shorten the odds against life emerging, then the only rational inference we can make, based on what we currently know, is that the first living thing was designed, and that the protein folds we find in living creatures were designed as well.
Now, Keith S might object that these estimates could be wrong – and indeed, they could. For that matter, the currently accepted age of the universe (13.798 billion years) could be totally wrong too, but I don’t lose any sleep over that fact. In everyday life, we make decisions based on what we currently know. If Keith S wants to argue that one can reasonably doubt the inference that living things were designed, then he needs to explain why the estimates I’ve cited above could be mistaken – and by a very large margin, at that.
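To put a number on “a very large margin”: taking Koonin’s figure at face value, the estimate could be wrong by hundreds of orders of magnitude and the probability would still sit below the 1-in-10^150 threshold. A trivial check (exponents taken from the figures cited above):

```python
# Exponents of the probabilities discussed above.
koonin_exponent = -1018     # Koonin's toy-model estimate: 1 in 10^1018
threshold_exponent = -150   # the design threshold: roughly 1 in 2^500

# Orders of magnitude by which the estimate could rise before
# crossing the threshold.
print(threshold_exponent - koonin_exponent)  # 868
```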
Recently, Keith S has mentioned a new book by Dr. Andreas Wagner, titled The Arrival of the Fittest: Solving Evolution’s Greatest Puzzle. I haven’t read the book yet, but let me say this: if the book makes a scientifically plausible case, using quantitative estimates, that life in all its diversity could have emerged on Earth over the space of just 3.8 billion years, then I will cheerfully change my mind and admit I was wrong in maintaining that it had to have been designed. As John Maynard Keynes famously remarked, “When the facts change, I change my mind. What do you do, sir?”
For that matter, I try to keep an open mind about the recent discovery of soft tissue in dinosaur bones (see here and here). Personally, I think it’s a very odd finding, which is hard to square with the scientifically accepted view that these bones are millions of years old, but at the present time, I think the preponderance of geological and astronomical arguments in favor of an old Earth is so strong that this anomaly, taken alone, would be insufficient to overthrow my belief in an old cosmos. Still, I could be wrong. Science does not offer absolute certitude, and it has never claimed to.
Conclusion
To sum up: statements about the CSI of a system are retrospective, and should be made only after we have independently calculated the probability of a system emerging via unguided (random or non-random) processes, based on what we currently know. After these calculations have been performed, one may legitimately infer that the system was designed – even while admitting that should subsequent evidence come to light that would force a drastic revision of the probability calculations, one would have to revise one’s views on whether that system was designed.
Are we all on the same page now?