[As I recall, Jason Rosenhouse objected that Bill Dembski’s notion of specification cannot be applied to biology. This essay is written to challenge some of the objections I think I’ve heard him raise informally over the years at my ID talks at his school and in our discussions at ID and creation conferences. He’s one of the brightest critics of ID that I know, and thus I think any objections he might raise should be addressed.]
The opponents of ID argue something along these lines: “Take a deck of cards and randomly shuffle it; the probability of any given sequence occurring is 1 in 52 factorial, or about 1 in 8×10^67. Improbable things happen all the time; it doesn’t imply intelligent design.”
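As a quick sanity check of the number cited above, the count of possible orderings of a 52-card deck can be computed directly (a trivial sketch in Python):

```python
import math

# Number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)
print(f"52! is about {orderings:.3e}")  # about 8.066e+67
```

So any one shuffled sequence has probability 1 in roughly 8×10^67, just as the objection states.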
In fact, I found one such Darwinist screed here:
Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.
Ah, but what if the cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design then? A case involving exactly this is reported on the FBI website: House of Cards
In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them into the same order in which they were previously dealt (no easy shuffling feat!). The cheaters would arrive at the casino, play the hands the dealer dealt, and secretly record the sequence of cards. When the dealer then re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters knew what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.
The evidence of cheating was confirmed by videotape surveillance, but the cheating was detectable in the first place because the first random shuffle provided a specification for detecting intelligent design in the next shuffle: the next shuffle was intelligently designed to preserve the order of the prior shuffle.
Here is a spectacular example of how a skilled dealer can control the sequence of cards through an intelligently designed shuffle:
But how does this relate to the problem of ID and defining specifications which signify the action of an intelligent agency?
The answer is that it illustrates how circumstances themselves can provide a specification for detecting design even when we don’t have the specification in advance. The casinos observing the cheating team did not know in advance what the outcome of the shuffled decks would be, yet they were able to detect intelligent design without ever having an explicit pattern of cards to look for before catching the crooks.
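The intuition behind the casino case can be sketched numerically (my own illustration, not from the FBI report): two independent random shuffles essentially never match, so a matching pair of deals is evidence that the second deal was not independent of the first.

```python
import random

def deal(rng):
    """Return one random ordering of a 52-card deck."""
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

rng = random.Random(42)  # fixed seed for reproducibility
first = deal(rng)

# The chance that an independent shuffle repeats `first` is 1/52!,
# so even many thousands of trials should never produce a match.
matches = sum(deal(rng) == first for _ in range(100_000))
print("matches:", matches)  # expect 0
```

A repeated deal therefore licenses a design inference, even though nobody wrote down the target sequence before the first shuffle occurred.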
Opponents of ID have insinuated that we cannot legitimately compute probabilities of designed objects if we don’t have explicit specification of the design before we make the observation of the artifact. Not so. The FBI case is a case in point!
Now consider a randomly generated string: “yditboawrt”. Its existence might not be significant unless it were adopted as someone’s password. But once it is adopted as a password, what was previously just a random string takes on significance. With a lock-and-key or login-password type system, one can legitimately estimate a threshold improbability by looking at the improbability of the password itself.
For example, a given password of 10 letters has an improbability of 1 in 26^10. If we found a random string of ten letters (say, Scrabble pieces) lying in a box, it would be rather pointless to argue from probability that the pattern is designed merely because its improbability is 1 in 26^10. However, if we found a computer system protected by a 10-letter login password, the improbability of that system existing in the first place is at least 1 in 26^10, and actually far more remote, since the system that implements the password protection is substantially more complex than the password itself (and this is an understatement).
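The 1-in-26^10 figure is easy to verify (a minimal sketch; the keyspace here assumes lowercase letters only):

```python
# Keyspace of a 10-character password drawn from the 26 lowercase letters
keyspace = 26 ** 10
print(f"26^10 = {keyspace:,}")  # 141,167,095,653,376
```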
This illustrates how specified complexity can be detected in systems where we don’t have specifications in advance. One merely calculates the complexity of one of the parts that must be coordinated with the whole. I call this coordinated complexity. The FBI case is an example where individuals were able to confirm intelligent design without having explicit patterns to work with in advance.
There may be infinitely many ways to make lock-and-key systems or login-password systems. But the fact that there are infinitely many ways to build these things does not imply that the systems are probable. Likewise, even though in principle life forms could be constructed in an infinite number of ways, it does not mean they are probable outcomes of random events, any more than lock-and-key systems are probable outcomes of random events. Critics of ID claim that ID proponents assume life can take essentially only one form. That objection is largely irrelevant, because the improbability of a design is evident from the level of coordinated complexity in evidence in the artifact! The calculations ID proponents use that focus on a specific target are legitimate if one considers that the target itself is specified by the entire system of which it is a part.
Critics of ID often argue something to the effect of “simple replicators can be built,” with the insinuation that since simple replicators exist, complex replicators are somehow probable. This is like saying that if a hacker is able to compromise a relatively short password by exhaustive search, he will somehow crack a far longer one with the same techniques. Not so! Yet this is exactly what defenders of OOL research do. They give examples of replicators and suggest that it is not so hard to make a replicator. Agreed, it is relatively easy to make a replicator (like the Ghadiri peptide), but making replicators isn’t the problem. Nor is the problem that we merely have an improbable structure in life (after all, the sequence of a randomly shuffled deck of cards is astronomically improbable); the problem is that the structure exhibits specified improbability. It is specified because of the level of coordination that defines the structure, just like the level of coordination needed to implement a password-protected system of a mere 10 letters, or the coordination of a falsely shuffled deck of cards with the randomly shuffled deck that preceded it. It is specified complexity because it is coordinated complexity.
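The point about exhaustive search not scaling can be illustrated with a back-of-the-envelope sketch (my own illustration; the guess rate of one billion per second is an assumed attacker speed, not a figure from the article):

```python
GUESSES_PER_SECOND = 1e9  # assumed attacker speed
SECONDS_PER_YEAR = 3600 * 24 * 365

# Worst-case guesses for exhaustive search over lowercase-letter
# passwords of various lengths; the keyspace grows as 26^n.
guesses = {n: 26 ** n for n in (4, 6, 10, 20)}
for n, g in guesses.items():
    years = g / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"length {n:2d}: {g:.2e} guesses, ~{years:.1e} years")
```

A 4-letter password falls in well under a second, while a 20-letter one takes on the order of hundreds of billions of years at the same rate; success on the short case says nothing about the long one.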
Just as the calculation of the improbability of a password understates the improbability of a password-protected computer system, the calculation for the arrival of a given protein may actually understate the improbability of the system in which such a protein is deemed functional. For example, letting the protein be analogous to a key, consider how hard it would be, given a key, to build the corresponding lock!
The problem of solving the origin of life is akin to the problem of a login-password protected computer system arising spontaneously. For this reason, it would seem, the calculations of life’s improbability put forward by ID proponents are quite valid, and may actually understate the magnitude of the problem of life arising spontaneously.