It has been said that “Intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection” . . .” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and relationship to the — insofar as a “the” is possible — scientific method.
Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:
“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last” . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]
The design inference process as described can be represented in a flow chart:
Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method. (So, we first envision nature acting by low-contingency, law-like mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we may see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process, phenomenon etc. reflects design or . . . following the terms used by Plato 2350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity. [NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest that, no, we don’t know, there may be a fourth possibility, are the ones who first need to show us such a possibility before they are to be taken seriously. Where, too, it is obvious that the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded distinction; just look at the list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural, fails. (Cf. the UD correctives 16 – 20 here on, as well as 1 – 8 here on.) And no: when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average pressure, that is a case of an empirically reliable, lawlike regularity emerging as a strong characteristic of such a process when sufficient numbers are involved, due to the statistics of very large numbers . . . it is easy to have 10^20 molecules or more at work, so there is relatively low fluctuation, unlike what we see with particles undergoing Brownian motion. That is, in effect, low-contingency mechanical necessity in action, in the sense we are interested in. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.] )
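As a minimal illustrative sketch — the attribute names and 500-bit parameter here are my own shorthand for the criteria described in Fig. A, not an established formalism — the filter’s decision order (necessity first, then chance, then design) can be rendered as follows:

```python
# A hypothetical sketch of the explanatory filter's decision order described in Fig. A:
# necessity is the first default, chance the second, and design is inferred only on
# joint high contingency + tight specificity + high complexity.
from dataclasses import dataclass

@dataclass
class Aspect:
    highly_contingent: bool   # do outcomes vary widely under similar starting conditions?
    specified: bool           # tight, independently describable configuration?
    bits: int                 # information-carrying capacity of the aspect

def explanatory_filter(a: Aspect, threshold: int = 500) -> str:
    if not a.highly_contingent:
        return "necessity (law-like regularity)"      # first default
    if a.specified and a.bits >= threshold:
        return "design (intelligent cause)"           # only after necessity and chance fail
    return "chance"                                   # second default

print(explanatory_filter(Aspect(False, False, 0)))          # falling heavy object -> necessity
print(explanatory_filter(Aspect(True, False, 3)))           # fair die reading -> chance
print(explanatory_filter(Aspect(True, True, 1_000_000)))    # long coded text -> design
```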
Explaining (and discussing) in steps:
1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer on best explanation to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”], in others to (b) chance, and in yet others we routinely assign cause to (c) intentionally, intelligently and purposefully directed configuration, or design. Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional, intelligent and purposeful agents do, and why it results in functional, specified complex organisation and associated information:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
2 –> As an illustration, we may discuss a falling, tumbling die:

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.
But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]
3 –> A key aspect of inference to cause is the significance of observed characteristic signs of causal factors, where we may summarise such observation and inference on sign as:
I observe one or more signs [in a pattern], and infer the signified object, on a warrant:
a –> Here, the connexion is a more or less causal or natural one, e.g. a pattern of deer tracks on the ground is an index, pointing to a deer.
b –> If the sign is not a sufficient condition of the signified, the inference is not certain and is defeatable; though it may be inductively strong. (E.g. someone may imitate deer tracks.)
c –> The warrant for an inference may in key cases require considerable background knowledge or cues from the context.
d –> The act of inference may also be implicit or even intuitive, and I may not be able to articulate it, yet I may still be quite well warranted in trusting the inference; especially if it traces to senses I have good reason to accept are working well, acting in situations that I have no reason to believe will materially distort the inference.
4 –> Fig. A highlights the significance of contingency in assigning cause. If a given aspect of a phenomenon or object is such that under similar circumstances substantially the same outcome occurs, the best explanation of the outcome is a natural regularity tracing to mechanical necessity. The heavy object in 2 above reliably and observably falls at 9.8 m/s^2 near the earth’s surface. [Thence, via observations and measurements of the shape and size of the earth, and the distance to the moon, the theory of gravitation.]
5 –> When, however, under sufficiently similar circumstances, the outcomes vary considerably on different trials or cases, the phenomenon is highly contingent. If that contingency follows a statistical distribution and is not credibly directed, we assign it to chance. For instance, given eight corners and twelve edges plus highly non-linear behaviour, a standard, fair die that falls and tumbles exhibits sensitive dependency on initial and intervening conditions, and so settles to a reading pretty much by chance. Things that are similar to that — notice the use of “family resemblance” [i.e. analogy] — may confidently be seen as chance outcomes.
6 –> However, under some circumstances [e.g. a suspicious die], the highly contingent outcomes are credibly intentionally, intelligently and purposefully directed. Indeed:
a: When I type the text of this post by moving fingers and pressing successive keys on my PC’s keyboard,
b: I [a self, and arguably: a self-moved designing, intentional, initiating agent and initial cause] successively
c: choose alphanumeric characters (according to the symbols and rules of a linguistic code) towards the goal [a purpose, telos or “final” cause] of writing this post, giving effect to that choice by
d: using a keyboard etc, as organised mechanisms, ways and means to give a desired and particular functional form to the text string, through
e: a process that uses certain materials, energy sources, resources, facilities and forces of nature and technology to achieve my goal.
. . . The result is complex, functional towards a goal, specific, information-rich, and beyond the credible reach of chance [the other source of high contingency] on the gamut of our observed cosmos across its credible lifespan. In such cases, when we observe the result, on common sense, or on statistical hypothesis-testing, or other means, we habitually and reliably assign outcomes to design.
7 –> For further instance, we could look at a modern version of Galileo’s famous cathedral chandelier as pendulum experiment.
i: If we were to take several measures of the period for a given length of string and [small] arc of travel, we would see a strong tendency towards a specific period. This is due to mechanical necessity.
ii: However, we would also notice some scatter in the results, which we assign to chance and usually handle by averaging [and perhaps by plotting a frequency distribution].
iii: Also, if we were to fix the string length and gradually increase the arc, especially as the arc goes past about six degrees, we would notice that the initial law (period T = 2*pi*sqrt(L/g), independent of arc for small arcs; see the short numerical sketch after this list) no longer holds. But, Galileo — who should have been able to spot the effect — reported that the period was independent of arc. (This is a case of “cooking.” Similarly, had he dropped a musket ball, a feather and a cannon ball over the side of the tower in Pisa, the cannon ball should have hit the ground just ahead of the musket ball, and of course considerably ahead of the feather.)
iv: So, even in doing, reporting and analysing scientific experiments, we routinely infer to law, chance and design, on observed signs.
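As a quick numerical sketch of the point in iii — using the standard textbook small-angle pendulum formulas, not figures drawn from the original experiments — the drift of the period with increasing arc can be seen directly:

```python
# Illustrative sketch: small-angle pendulum period T0 = 2*pi*sqrt(L/g), plus the standard
# first-order amplitude correction T ~ T0*(1 + theta0^2/16), showing how the "law" drifts
# as the arc of swing grows.
from math import pi, sqrt, radians

g, L = 9.8, 1.0                        # m/s^2, metres (assumed 1 m pendulum)
T0 = 2 * pi * sqrt(L / g)              # small-angle period, ~2.01 s

for arc_deg in (2, 6, 20, 45):
    theta0 = radians(arc_deg)
    T = T0 * (1 + theta0**2 / 16)      # first-order large-amplitude correction
    print(f"arc {arc_deg:>2} deg: T ~ {T:.4f} s (shift {100*(T/T0 - 1):.2f}%)")
```

At two or six degrees the shift is a small fraction of a percent and lost in the scatter of ii; by tens of degrees it becomes measurable, which is why the small-arc regularity no longer holds.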
8 –> But, are there empirically reliable signs of design that can be studied scientifically, allowing us to confidently complete the explanatory filter process? Design theorists answer, yes, and one definition of design theory is, the science that studies signs of design. Thus, further following Peter Williams, we may suggest that:
. . . abstracted from the debate about whether or not ID is science, ID can be advanced as a single, logically valid syllogism:
(Premise 1) Specified complexity reliably points to intelligent design.
(Premise 2) At least one aspect of nature exhibits specified complexity.
(Conclusion) Therefore, at least one aspect of nature reliably points to intelligent design.
9 –> For instance, in the 1970s Wicken saw that organisation, order and randomness are very distinct, and have characteristic signs:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and note added. )]
10 –> We see here the idea-roots of a term commonly encountered at UD: functionally specific, complex information [FSCI]. (The onward restriction to digitally coded FSCI [dFSCI], as is seen in DNA — and as will feature below — should also be obvious. I add [11:01:18], based on b/g note 1: once we see digital code and a processing system, we are dealing with a communication system, and so the whole panoply of the code [a linguistic artifact], the message in the code as sent and as received, and the apparatus for encoding, transmitting, decoding and applying, all speak to a highly complex — indeed, irreducibly so — system of intentionally directed configuration, and to messages that [per habitual and reliable experience and association] reflect intents. From b/g note 2, the functional sequence complexity of such a coded data entity also bespeaks organisation as distinct from randomness and order; it can in principle and in practice be measured, and it shows that beyond a reasonable threshold of complexity the coded message itself is an index-sign pointing to its nature as an artifact of design, thence to its designer as the best explanation for the design.)
11 –> Earlier, in reflecting on the distinctiveness of living cells, Orgel had observed:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]
12 –> This seems to be the first technical use of the term “specified complexity,” which is now one of the key — and somewhat controversial — terms of design theory. As the second background note summarises, Dembski and others have quantified the term, and have constructed metrics that allow measurement and decision on whether or not the inference to design is well-warranted.
13 –> However, a much simpler rule of thumb metric can be developed, based on a common observation highlighted in points 11 – 12 of the same background note:
11 –> We can compose a simple metric . . . Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI in functionally specific bits, as the simple product:
FX = f*c*b, in functionally specific bits
12 –> Actually, we commonly see such a measure; e.g. when we see that a document is, say, 197 kbits long, that means it is functional (say, as an Open Office Writer document), is complex, and uses 197 kbits of storage space.
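A minimal sketch of that rule-of-thumb product, using illustrative function and variable names of my own (they are not a standard library), applied to the 197 kbit example just given:

```python
# Rule-of-thumb FSCI product metric FX = f*c*b described in points 11-12 above:
# f = 1 if the aspect is observed to function (else 0), c = 1 if the bit count exceeds
# the 1,000-bit threshold (else 0), and b = bits used.
def fsci_bits(functional: bool, bits_used: int, threshold: int = 1000) -> int:
    """Return FX = f*c*b, the FSCI measure in functionally specific bits."""
    f = 1 if functional else 0
    c = 1 if bits_used > threshold else 0
    return f * c * bits_used

# Example from point 12: a working 197 kbit Open Office Writer document.
print(fsci_bits(functional=True, bits_used=197_000))   # -> 197000 functionally specific bits
print(fsci_bits(functional=True, bits_used=600))        # -> 0, below the complexity threshold
```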
13a –> Or [added Nov 19 2011] we may use logarithms to reduce and simplify the Dembski Chi metric of 2005, thusly:
>> 10^120 ~ 2^398, so on taking logarithms Dembski’s 2005 metric, Chi = - log2[10^120 * phi_S(T) * P(T|H)], reduces to an information-beyond-a-threshold form, Chi = Ip - (398 + K2). Rounding the threshold up to 500 bits (comfortably covering the search resources of our solar system) and introducing the specification dummy variable S gives the simplified metric: Chi_500 = Ip*S - 500, bits beyond the solar-system threshold. Applied to published functional-sequence-complexity values in fits (functional bits) for protein families, this yields, for example: Corona S2: 445 AA, 1,285 fits, Chi_500: 785 bits beyond the threshold. >>
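As a minimal numerical sketch — the function and parameter names are illustrative, not from the post — the simplified metric applied to the quoted example works out as:

```python
# Simplified Chi_500 metric, Chi_500 = Ip*S - 500 (bits beyond the solar-system threshold).
def chi_500(info_bits: float, specified: int) -> float:
    """Bits beyond the 500-bit threshold; a positive value supports a design inference."""
    S = 1 if specified else 0
    return info_bits * S - 500

print(chi_500(info_bits=1285, specified=1))   # Corona S2: 1285 fits -> 785 bits beyond
print(chi_500(info_bits=1285, specified=0))   # S = 0 default -> -500, no design inference
```

Note that with S left at its default of 0 the metric never goes positive, which is the point taken up in 13b below.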
13b –> Some [GB et al] have latterly tried to discredit the idea of a dummy variable in a metric, as a question-begging a priori used to give us the result we “want.” Accordingly, in correction, let us consider:
1 –> The first question is: why is S = 0 the default? ANS: Simple: that is the value that says we have no good warrant, no good objective reason, to infer that anything more than chance and necessity acting on matter in space across time is a serious candidate explanation.
2 –> In the case of Pinatubo [a well-known volcano], that is tantamount to saying that however complex the volcano edifice may be, its history can be explained by its being a giant-sized, aperiodic relaxation oscillator that tends to go in cycles from quiescence to explosive eruption, depending on charging up, breaking through, erupting, discharging and re-blocking, driven in turn by underlying plate tectonics. As SA just said: S = 0 means it’s a volcano!
3 –> In short, we are looking at an exercise in doing science, per the issue of scientific warrant on empirically based inference to best explanation . . . .
5 –> But as was repeatedly laid out with examples, there is another class of known causal factors capable of explaining highly contingent outcomes that we do not have a good reason to expect on blind chance and mechanical necessity, thanks to the issue of the needle in the haystack.
6 –> Namely, the cause as familiar as that which best explains the complex, specified information — significant quantities of contextually responsive text in English coded on the ASCII scheme — in this thread. Intelligence, working by knowledge and skill, and leaving characteristic signs of art behind.
7 –> Notice how we come to this: we see complexity, measured by the scope of possible configurations, and we see objectively, independently definable specificity, indicated by descriptors that lock down the set of possible or observed events E to a narrow zone T within the large config space W, such that a blind search process based on chance plus necessity will only sample so small a fraction that it is maximally implausible for it to hit on a zone like T. Indeed, per the needle-in-the-haystack or infinite-monkeys type analysis, it is credibly unobservable.
8 –> Under those circumstances, once we see that we are credibly in a zone T, by observing an E that fits in a T, the best explanation is the known, routinely observed cause of such events, intelligence acting by choice contingency, AKA design.
9 –> In terms of the Chi_500 expression . . .
a: I is a measure of the size of config space, e.g. 1 bit corresponds to two possibilities, 2 bits to 4, and n bits to 2^n so that 500 bits corresponds to 3 * 10^150 possibilities and 1,000 to 1.07*10^301.
b: 500 is a threshold, whereby the 10^57 atoms of our solar system could in 10^17 s carry out about 10^102 Planck-time quantum states (PTQSs), giving an upper limit to the scope of search; for scale, the fastest chemical reactions take up about 10^30 PTQSs. (A short arithmetic check appears after this list.)
c: In familiar terms, 10^102 possibilities from 10^150 is 1 in 10^48, or about a one-straw sample of a cubical haystack about 3 1/2 light days across. An entire solar system could lurk in it as “atypical” but that whole solar system would be so isolated that — per well known and utterly uncontroversial sampling theory results — it is utterly implausible that any blind sample of that scope would pick up anything but straw; straw being the overwhelming bulk of the distribution.
d: In short not even a solar system in the haystack would be credibly findable on blind chance plus mechanical necessity.
e: But, routinely, we find many things that are like that, e.g. posts in this thread. What explains these is that the “search” in these cases is NOT blind, it is intelligent.
f: S gives the criterion that allows us to see that we are in this needle-in-the-haystack type situation, on whatever reasonable grounds can be presented for a particular case, noting again that the default is S = 0; i.e. unless we have positive reason to infer a needle-in-the-haystack challenge, we default to “explicable on chance plus necessity.”
g: What gives us the objective ability to set S = 1? ANS: Several possibilities, but the most relevant one is that we see a case of functional specificity as a means of giving an independent, narrowing description of the set T of possible E’s.
h: Functional specificity is particularly easy to see, as when something is specific in order to function, it is similar to the key that fits the lock and opens it. That is, specific function is contextual, integrative and tightly restricted. Not any key would do to open a given lock, and if fairly small perturbations happen, the key will be useless.
i: The same obtains for parts for, say, a car, or even strings of characters in a post in this thread, or, notoriously, computer code. (There is an infamous case, often identified as the 1962 Mariner 1 launch, where NASA had to destroy a rocket shortly after launch because of what amounted to a single mistyped character in its guidance coding.)
j: In short, the sort of reason why S = 1 in a given case is not hard to see, save if you have an a priori commitment that makes it hard for you to accept this obvious, easily observed and quite testable — just see what perturbing the functional state enough to overwhelm error correcting redundancies or tolerances would do — fact of life. This is actually a commonplace.
k: So, we can now pull together the force of the Chi_500 expression:
i] If we find ourselves in a practical cosmos of 10^57 atoms — our solar system . . . check,
ii] where also, we see that something has an index of being highly contingent I, a measure of information-storing or carrying capacity,
iii] where we may provide a reasonable value for this in bits,
iv] and as well, we can identify that the observed outcome is from a narrow, independently describable scope T within a much larger configuration space set by I, i.e. W.
v] then we may infer that E is, or is not, best explained on design according as I is greater or less than the 500-bit threshold.
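The arithmetic check promised at point b, as an illustrative sketch that simply takes the 10^102-state figure used above as given:

```python
# Arithmetic check of points a-c above: sizes of the 500- and 1,000-bit configuration
# spaces, and the fraction a ~10^102-state solar-system "search" could sample of the
# 500-bit space.
from math import log10

space_500  = 2 ** 500                      # ~3.27e150 possibilities
space_1000 = 2 ** 1000                     # ~1.07e301 possibilities
print(f"2^500  ~ 10^{log10(space_500):.1f}")
print(f"2^1000 ~ 10^{log10(space_1000):.1f}")

search_scope = 10 ** 102                   # Planck-time quantum states, per point b above
print(f"sample fraction ~ 1 in {space_500 / search_scope:.1e}")   # ~ 1 in 3.3e48
```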
10 –> So, we have a metric that is reasonable and is rooted in the same sort of considerations that ground the statistical form of the second law of thermodynamics.
11 –> Accordingly, we have good reason to see that claimed violations will consistently have the fate of perpetual motion machines: they may be plausible to the uninstructed, but predictably will consistently fail to deliver the claimed goods.
12 –> For instance, Genetic Algorithms consistently START from within a zone (“island”) of function T, where the so-called fitness function then allows for incremental improvements along a nice trend to some peak.
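As an illustrative toy in the spirit of the well-known “Weasel” program — this is a sketch of my own, not code from any particular GA package — note where the information actually comes from:

```python
# Toy hill-climber: the run STARTS from a programmer-supplied seed, and the fitness
# function already encodes the target, so the "search" never has to find the island of
# function blindly.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # count of positions matching the target; the gradient is supplied by the target itself
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
genome = "".join(random.choice(ALPHABET) for _ in TARGET)   # programmer-chosen starting point
while fitness(genome) < len(TARGET):
    i = random.randrange(len(TARGET))                       # mutate one position
    child = genome[:i] + random.choice(ALPHABET) + genome[i+1:]
    if fitness(child) >= fitness(genome):                   # incremental hill-climbing
        genome = child
print(genome)                                               # converges to the built-in target
```

The run converges quickly precisely because the seed and the fitness gradient are supplied up front; the target zone is never searched for blindly.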
13 –> Similarly, something like the canali on Mars, had they been accurately portrayed, would indeed have been a demonstration of design. However, these were not actual pictures of the surface of Mars but drawings of what observers THOUGHT they saw. They were indeed designed, but they were an artifact of erroneous observations.
14 –> Latterly, the so-called Mars face, from the outset, was suspected to be best explained as an artifact of a low-resolution imaging system, and so a high resolution test was carried out, several times. The consistent result, is that the image is indeed an artifact. [Notice, since it was explicable on chance plus necessity, S was defaulted to 0.]
15 –> Mt Pinatubo is indeed complex, and one could do a sophisticated lidar mapping and radar sounding and seismic sounding to produce the sort of 3-D models routinely used here with our volcano, but the structured model of a mesh of nodes and arcs, is entirely within the reach of chance plus necessity, the default. There is no good reason to infer that we should move away from the default.
16 –> If there were good evidence on observation that chance and necessity on the gamut of our solar system could explain origin of the 10 – 100 million bits of info required to account for major body plans, dozens of times over, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)
17 –> If there were good evidence on observation that chance and necessity on the gamut of our observed cosmos could account for the functionally specific complex organisation and associated information for the origin of cell based life, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)
18 –> But instead, we have excellent, empirically based reason to infer that the best explanation for the FSCO/I in body plans, including the first, is design.
13c –> So, we have a more sophisticated metric, derived from Dembski’s Chi metric, that does much the same as the simple product metric and is readily applied to actual biological cases.
14 –> The 1,000-bit information storage capacity threshold can be rationalised:
The number of possible configurations specified by 1,000 yes/no decisions, or 1,000 bits, is ~1.07*10^301; i.e. “roughly” 1 followed by 301 zeros. Meanwhile, the ~10^80 atoms of the observed universe, changing state as fast as is reasonable [the Planck time, i.e. every 5.39*10^-44 s], for its estimated lifespan — about fifty million times as long as the 13.7 billion years that are said to have elapsed since the big bang — would only come up to about 10^150 states. Since 10^301 is ten times the square of this number, if the whole universe were to be viewed as a search engine, working for its entire lifetime, it could not scan through as much as 1 in 10^150 of the possible configurations for just 1,000 bits. That is, astonishingly, our “search” rounds down very nicely to zero: effectively no “search.” [NB: 1,000 bits is routinely exceeded by the functionally specific information in relevant objects or features, but even so low a threshold is beyond the credible random-search capacity of our cosmos, if the search is not intelligently directed or constrained. That is, the pivotal issue is not incremental hill-climbing to optimal performance by natural selection among competing populations with already functional body forms. Such a scenario already begs the question of the need first to get to the shorelines of an island of specific function in the midst of an astronomically large sea of non-functional configurations, on forces of random chance plus blind mechanical necessity only. Cf. Abel on the Universal Plausibility Bound, here.]
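A rough back-of-envelope check, as an illustrative sketch using the figures quoted in the paragraph above (atom count, Planck time, lifespan):

```python
# Check of the 1,000-bit threshold arithmetic: ~10^80 atoms, Planck time ~5.39e-44 s,
# lifespan ~ fifty million times 13.7 billion years.
from math import log10

SECONDS_PER_YEAR = 3.156e7
lifespan_s   = 5.0e7 * 13.7e9 * SECONDS_PER_YEAR     # ~2.2e25 s
planck_ticks = lifespan_s / 5.39e-44                 # ~4.0e68 ticks over the lifespan
states       = 1e80 * planck_ticks                   # ~4.0e148 states

configs_1000 = 1000 * log10(2)                       # log10 of 2^1000, ~301.0
print(f"states    ~ 10^{log10(states):.0f}")
print(f"shortfall ~ 1 in 10^{configs_1000 - log10(states):.0f}")   # i.e. well under 1 in 10^150 of the space
```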
15 –> So far, all of this will probably seem to be glorified common sense, and quite reasonable. So, why is the inference to design so controversial, and especially the explanatory filter?
[ Continued, here ]