> Masters of stealth intent on concealing their actions may successfully evade the explanatory filter. But masters of self-promotion intent on making sure their intellectual property gets properly attributed find in the explanatory filter a ready friend.

— Bill Dembski, *Mere Creation*

The Explanatory Filter classifies systems or artifacts into three categories:

1. produced by law

2. produced by chance

3. produced neither by chance nor law (designed by definition)
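The three-way decision can be sketched in code. This is a minimal illustration, not Dembski's formal procedure; the predicates, the probability function, and the rejection threshold are all hypothetical placeholders supplied by the analyst.

```python
def explanatory_filter(event, is_lawlike, chance_probability, is_specified,
                       threshold=1e-9):
    """Classify an event as 'law', 'chance', or 'design'.

    is_lawlike: does a known regularity account for the event?
    chance_probability: P(event) under the analyst's assumed distribution.
    is_specified: does the event match an independently given pattern?
    """
    if is_lawlike(event):
        return "law"
    if chance_probability(event) < threshold and is_specified(event):
        # Neither law nor chance: designed by definition.
        return "design"
    return "chance"

# 100 heads in a row under a fair-coin assumption:
verdict = explanatory_filter(
    "1" * 100,
    is_lawlike=lambda e: False,                  # no known law produces it
    chance_probability=lambda e: 0.5 ** len(e),  # fair, independent flips
    is_specified=lambda e: e == "1" * len(e),    # "all heads" is a pattern
)
# verdict == "design"
```

Note that if `is_specified` returns false — the analyst fails to recognize a pattern — the same improbable event falls through to "chance," which is the failure mode discussed below.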

Suppose we started out with the correct probability distributions. We can interpret the above statement by Bill to mean we might mistake a system as produced by chance or law when in fact it was produced by an intelligence. For example, if you had uniquely numbered fair coins, and they were arranged for you in the following way (with 1 = heads, 0 = tails), what would you say?

1101110010111011110001001101010111100110111101111……

Using ID procedures, in the absence of a recognized design specification you would label the system the product of chance; but if you recognized it as the digits of the binary Champernowne sequence, you would say it is “produced neither by chance nor law and thus by definition is designed.”
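The string above is the opening of the binary Champernowne sequence: the base-2 numerals 1, 10, 11, 100, 101, … concatenated. It is fully deterministic and reproducible in a few lines (the function name here is my own):

```python
def binary_champernowne(n_bits):
    """First n_bits of the binary Champernowne sequence:
    the base-2 numerals 1, 10, 11, 100, ... concatenated."""
    bits = []
    k = 1
    while sum(len(b) for b in bits) < n_bits:
        bits.append(format(k, "b"))  # k rendered in binary
        k += 1
    return "".join(bits)[:n_bits]

prefix = binary_champernowne(49)
# prefix == "1101110010111011110001001101010111100110111101111"
```

Without this generator in one's pattern library, the string offers no obvious regularity to latch onto, and the filter defaults to "chance."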

Mistaking a design for the result of chance is perfectly within the framework of ID; such errors in using the Explanatory Filter are acceptable (as evidenced by the quote above). For the sake of brevity, we don’t say:

produced by chance or produced by design that we mistake as chance

We merely say “chance,” with the provision that it is shorthand for:

produced by chance or produced by design that we mistake as chance

The same holds true for making mistakes where we mistakenly attribute design to law.

The reliability of the filter rests on classifying things as “not chance and not law” based on an assumed probability distribution. The assumed distribution could of course be wrong, and thus the assertion of design could be wrong, but the inference relative to the assumptions remains correct. Practically speaking, “produced neither by chance nor law” means “produced neither by chance nor law *according to the assumed distribution*.”

It does not mean the assumed distribution is correct, but it does mean the inference follows by correct deduction from the premises. This also means a design claim can be falsifiable if the assumptions are falsifiable.
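The dependence on premises can be made concrete: the same outcome is astronomically improbable under one assumed distribution and unremarkable under another, so the verdict is a deduction from the chosen premises, not from the data alone. The numbers below are purely illustrative.

```python
import math

def log2_prob(seq, p_heads):
    """log2 of P(seq) for independent coin flips with P(heads) = p_heads."""
    heads = seq.count("1")
    tails = len(seq) - heads
    return heads * math.log2(p_heads) + tails * math.log2(1 - p_heads)

seq = "1" * 100               # 100 heads in a row
fair = log2_prob(seq, 0.5)    # -100.0: rejected as chance under a fair coin
biased = log2_prob(seq, 0.99) # about -1.45: unremarkable under a 99%-heads coin
```

Same sequence, same arithmetic, opposite verdicts — everything turns on which distribution the analyst assumed.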

So if someone asks, “How do you know it is designed? You don’t have all the facts,” the correct response is: “In the ultimate sense, that may not be demonstrable, but relative to the assumptions I’m working from (which may be false assumptions), the inference of design is correct. Further, all things being equal, if I assert design on a reasonable distribution, the claim of design is always more likely to be true in the ultimate sense than the claim of mindless evolution.”

I gave an example of the design inference here:

Relevance of coin analogies to homochirality and symbolic organization in biology. The inference is correct with respect to the underlying assumptions. The underlying assumptions could be incorrect, but the deduction from the premises should be above reproach, and that’s what is meant by design *inference*.

**NOTES**

1. This discussion came up in part because Lizzie argues chance is the null (default) hypothesis for ID. I countered by saying the EF uses no null hypothesis. Any ID proponent is welcome to weigh in, but I don’t think Lizzie’s characterization is correct based on ID literature. It is true we assume chance by default if law and design are ruled out, but that’s different from saying chance is the null hypothesis.

2. Some design inferences in history were later falsified, like the craters of the moon. They looked so perfectly circular that some thought they had to be designed. That was one of the few rare cases where the product of law was mistaken for design: when a meteor or rock hits the moon, it makes a circular crater. Also consider the effect of law in the Chladni plate demonstration:

Response to Harry McCall (Chladni plates)

3. Some will complain, “What if the design inference is wrong?” to which I respond, “Then we don’t lose much. But what if the non-design inference is wrong? What side of Pascal’s wager do you want to be on? What do you have to gain if non-design is true?”

See: If Darwinism were true, what is there to gain?

4. If you want to be an evolutionary formalist, you should say “I don’t know” in the face of uncertain probability distributions and stop trying to promote mindless evolution as “fact, fact, fact” when it is “speculation, speculation, speculation” and quit persecuting scientists and denying diplomas to students until you really know mindless evolution is true.

5. I used “filter(s)” in the title because various methods of rejecting the chance hypothesis may fail while others succeed. Someone with the Champernowne sequence in their EF library will recognize design, while someone without it won’t.
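This note can be sketched directly: two analysts differ only in their pattern libraries, and so return different verdicts on the same string. All the names here are hypothetical.

```python
def binary_champernowne(n_bits):
    """First n_bits of the binary Champernowne sequence."""
    bits, k = "", 1
    while len(bits) < n_bits:
        bits += format(k, "b")
        k += 1
    return bits[:n_bits]

def verdict(event, library):
    """'design' if any known generator reproduces the event, else 'chance'."""
    if any(gen(len(event)) == event for gen in library):
        return "design"
    return "chance"

event = binary_champernowne(49)
with_pattern = verdict(event, [binary_champernowne])  # -> "design"
without_pattern = verdict(event, [])                  # -> "chance"
```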

6. Bill Dembski’s book *The Design Inference* makes clear that the inference is correct in principle based on the distributions assumed; he never said we’ll necessarily have the correct distributions to work with. That is a Darwinist straw man, and like many straw men, it’s erected to create the appearance of an easy knockdown of a reasonable claim.

7. Summarizing the permissible errors of asserting design:

A. the assumptions are false (but that is true of every idea, not just ID); still, all things being equal, if design is asserted, uncertainty favors the design case over the non-design case.

B. the assumptions are true, but we fail to recognize design. One example of that is the product of “masters of stealth”; another is the Champernowne sequence.

8. I’ve suggested (not insisted) that a workable definition of “chance” is a process that maximizes uncertainty relative to the degrees of freedom of the symbols. To illustrate, maximum uncertainty implies a 50% proportion of heads in the case of coins and a 50% proportion of L-amino acids in the case of amino acids, and even less-than-50% proportions for alpha-peptide bonds in proteins/proteinoids and 3′–5′ linkages in DNA chains.
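The “maximum uncertainty” criterion in note 8 corresponds to the Shannon entropy of the source being maximal, which for a binary symbol occurs exactly at a 50/50 proportion. A quick check:

```python
import math

def bernoulli_entropy(p):
    """Shannon entropy in bits/symbol of a binary source with P(heads) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Entropy peaks at p = 0.5 (1 bit/symbol) and falls off for biased sources,
# e.g. a 90%-heads coin carries under half a bit per flip.
max_h = bernoulli_entropy(0.5)     # 1.0
biased_h = bernoulli_entropy(0.9)  # about 0.469
```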