
Glossary

As is typical of scientific, technological or academic fields of endeavor, Intelligent Design has its own technical vocabulary. Such jargon provides a handy shorthand for precise communication among practitioners, but it also presents a barrier to the person learning about the field for the first time. In the case of ID, this normal difficulty is multiplied by the presence of hostile advocates, who may provide mistaken (or, sadly, calculatedly misleading) “definitions,” which must then be corrected by providing accurate ones. A glossary is a way to help ease the required learning process.

A Working Glossary for ID

Anthropic Principle

Anthropic principle — the evident fine-tuning of the observed cosmos for cell-based life has led to a discussion of whether this is so in order to make a fit habitat for life, or what other possible views may explain it. As the materialism-leaning Internet encyclopedia Wikipedia summarizes: “the anthropic principle is the collective name for several ways of asserting that physical and chemical theories, especially astrophysics and cosmology, need to take into account [that] there is life on Earth, and that one form of that life, Homo sapiens, has attained intelligence. The only kind of universe humans can occupy is one that is similar to the current one.” In short, we will only be here observing and reasoning if the universe is such that it permits our existence; which requires exquisite fine-tuning of, in aggregate, dozens of cosmological factors, some to as fine as 1 part in 10^40, 10^50 or 10^60. (In aggregate, if stored in bits, the precision would require well more than 1,000 bits.) So, how has that come to be?

Given the evident fact of a fine-tuned observed cosmos, was it by necessity, by accident or — using Plato’s term — by art?

“Berra’s Blunder”

“Berra’s Blunder” was coined by Phillip Johnson. It describes a particular kind of false analogy Darwinists have employed since, well, Darwin himself in Origin of Species. In this false analogy the Darwinist points to an evolutionary pattern that everyone knows is the result of careful planning by an intelligent agent, and then declares that this pattern produced by intelligent agents is evidence for completely naturalistic evolution. The term itself comes from Tim Berra:

If you compare a 1953 and a 1954 Corvette, side by side, then a 1954 and a 1955 model, and so on, the descent with modification is overwhelmingly obvious. This is what paleontologists do with fossils, and the evidence is so solid and comprehensive that it cannot be denied by reasonable people. . . .

Everything evolves, in the sense of ‘descent with modification,’ whether it be government policy, religion, sports cars, or organisms. The revolutionary fiberglass Corvette evolved from more mundane automotive ancestors in 1953. Other high points in the Corvette’s evolutionary refinement included the 1962 model, in which the original 102-inch [wheelbase] was shortened to 98 inches and the new closed-coupe Stingray model was introduced; the 1968 model, the forerunner of today’s Corvette morphology, which emerged with removable roof panels; and the 1978 silver anniversary model, with fastback styling. Today’s version continues the stepwise refinements that have been accumulating since 1953. The point is that the Corvette evolved through a selection process acting on variations that resulted in a series of transitional forms and an endpoint rather distinct from the starting point. A similar process shapes the evolution of organisms.

T. Berra, Evolution and the Myth of Creationism, 1990, pp. 117–119.

To which Johnson replied:

Of course, every one of those Corvettes was designed by engineers. The Corvette sequence – like the sequence of Beethoven’s symphonies to the opinions of the United States Supreme Court – does not illustrate naturalistic evolution at all. It illustrates how intelligent designers will typically achieve their purposes by adding variations to a basic design plan. . . . [These sequences] show that what biologists present as proof of ‘evolution’ or ‘common ancestry’ is just as likely to be evidence of common design.

P. Johnson, Defeating Darwinism by Opening Minds, 1997, p. 63.

Chance, Contingency, Necessity, and Design

Chance — undirected contingency. That is, events that come from a cluster of possible outcomes, but for which there is no decisive evidence that they are directed; especially where sampled or observed outcomes follow mathematical distributions tied to statistical models of randomness. (E.g. which side of a fair die is uppermost on tossing and tumbling then settling.)

Contingency — here, possible outcomes that (by contrast with those of necessity) may vary significantly from case to case under reasonably similar initial conditions. (E.g. which side of a die is uppermost, whether it has been loaded or not, upon tossing, tumbling and settling.) Contingent [as opposed to necessary] beings begin to exist (and so are caused), need not exist in all possible worlds, and may/do go out of existence.

Necessity — here, events that are triggered and controlled by mechanical forces that (together with initial conditions) reliably lead to given — sometimes simple (an unsupported heavy object falls) but also perhaps complicated — outcomes. (Newtonian dynamics is the classical model of such necessity.) In some cases, sensitive dependence on [or, “to”] initial conditions may lead to unpredictability of outcomes, due to cumulative amplification of the effects of noise or small, random/accidental differences between initial and intervening conditions, or simply inevitable rounding errors in calculation. This is called “chaos.”
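A minimal numerical sketch of such sensitive dependence, using the standard logistic map as a stand-in for a deterministic law (an illustration only, not drawn from the original glossary text):

```python
# Minimal sketch: the logistic map x -> r*x*(1 - x) is a deterministic
# ("necessity") rule, yet two trajectories starting one part in a million
# apart diverge completely within a few dozen steps -- "chaos."

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # initial difference: 1 part in ~10^6

for n in (0, 10, 20, 30, 40):
    print(n, round(a[n], 6), round(b[n], 6), "difference:", round(abs(a[n] - b[n]), 6))
```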

Design — purposefully directed contingency. That is, the intelligent, creative manipulation of possible outcomes (and usually of objects, forces, materials, processes and trends) towards goals. (E.g. 1: writing a meaningful sentence or a functional computer program. E.g. 2: loading of a die to produce biased, often advantageous, outcomes. E.g. 3: the creation of a complex object such as a statue, or a stone arrow-head, or a computer, or a pocket knife.)

Creationism

Creationism — not just the view (a) that our universe is evidently the product of an intelligent and able Designer, but as a rule, also the further distinguishing view that (b) certain books from one or more of various religious traditions — when responsibly and fairly interpreted — give us an accurate (though usually not elaborate and technical) record from that Creator, of the fact and certain circumstances of that creation. So, creationists hold that (c) in doing science, one can only arrive at the actual truth of the origin of our cosmos [not merely a provisional origins model extrapolated from our presently observed, post-creation circumstances] by accepting that claimed revelatory insight as credibly true. Academically trained creation scientists then (d) look for evidences in the present that they hold provide empirical confirmation of the specific Creator, time-line and process of creation as identified and discussed in the particular books, and/or (e) for empirical dis-confirmation of alternative, non-creation based models.

CSI, or Complex, Specified Information

CSI — Life shows evidence of complex, aperiodic, and specified information in its key functional macromolecules, and the only other examples we know of such function-specifying complex information are artifacts designed by intelligent agents. A chance origin of life would exceed the universal probability bound (UPB) set by the scope of the universe; hence design is a factor in the origin and development of life. Contrary to a commonly encountered (and usually dismissive) opinion, this concept is neither original to Dr Dembski nor to the design theory movement. Its first recognized use was by noted origin-of-life researcher Leslie Orgel, in 1973:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

The concept of complex specified information helps us understand the difference between (a) the highly informational, highly contingent, aperiodic functional macromolecules of life and (b) regular crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, these early researchers identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept starting in the 1990s was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide an explicitly formal model, based on probability and information theory, for quantifying CSI.

(iv) In the current formulation, as of 2005, his metric for CSI, χ (chi), is:

χ = – log2 [10^120 · ϕS(T) · P(T|H)], where:

P(T|H) is the probability of being in a given target zone in a search space, on a relevant chance hypothesis (e.g. the probability of a hand of 13 spades from a shuffled standard deck of cards);

ϕS(T) is a multiplier based on the number of similarly simple and independently specifiable targets (e.g. hands that are all Hearts, all Diamonds, all Clubs or all Spades); and

10^120 is the Seth Lloyd estimate for the maximum number of elementary bit-based operations possible in our observed universe, serving as a reasonable upper limit on the number of search operations.

log2 [ . . . ] converts the modified probability into a measure of information in binary digits, i.e. specified bits. When this value is at least +1, then we may reasonably infer to the presence of design from the evidence of CSI alone. (For the example being discussed, χ = –361; i.e. the odds of 1 in 635 billion are insufficient to confidently infer to design, on the gamut of the universe as a whole. But, on the gamut of a card game here on Earth, that would be a very different story.)
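For concreteness, the card-hand figure can be checked with a short calculation. The following is a minimal sketch in Python (an illustrative check of the numbers quoted above, not Dembski’s own software):

```python
# Minimal sketch of the chi metric for the 13-spades example above:
#   chi = -log2( 10^120 * phi_S(T) * P(T|H) )

import math

P_T_given_H = 1 / math.comb(52, 13)   # probability of one specific all-spades hand
phi_S_T = 4                           # four similarly simple targets: one per suit
ops_bound = 10 ** 120                 # Seth Lloyd's bound on bit operations

chi = -(math.log2(ops_bound) + math.log2(phi_S_T) + math.log2(P_T_given_H))
print(round(chi, 1))   # roughly -361, matching the value quoted above
```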

Configuration Space (or Search Space), Contingencies, Random Walks, and Hamming Distance

“Config” space — a contingent event instantiates one possible outcome for a circumstance where reasonably similar initial conditions may or do lead to diverse outcomes, e.g. when we toss a die. The set of possible outcomes specifies a “space,” which in many cases becomes very large, and then may pose a challenge for a search that is not guided by a preset target or an “oracle” that informs the searcher that it is “warmer” or “colder.” For instance, a “typical” memory chip from the 1970s could store 1,000 bits. This specifies 2^1,000 possible different bit patterns, or about 10^301 different configurations. This number is so large — it is about ten times the square of the number of possible quantum states of the atoms in our observed cosmos — that if we were to undertake a random-walk based search (or the substantial equivalent) across the space, for a relatively small target zone, we would be maximally unlikely to find it, even if we were to convert the observed universe into a search machine.

Related: Contingencies — possible outcomes that (by contrast with those of necessity) may vary significantly from case to case under reasonably similar initial conditions. (E.g. which side of a die is uppermost, whether it has been loaded or not, upon tossing, tumbling and settling.) Contingent [as opposed to necessary] beings begin to exist (and so are caused), need not exist in all possible worlds, and may/do go out of existence.
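These magnitudes are easy to verify. A minimal back-of-envelope sketch (the ~10^150 quantum-state figure is taken as given from the discussion above):

```python
# Minimal check of the figures above: 1,000 bits specify 2^1000
# distinct configurations, roughly 1.07 x 10^301.

n_bits = 1000
n_configs = 2 ** n_bits

print(len(str(n_configs)))            # 302 decimal digits, i.e. ~10^301
print(n_configs / (10 ** 150) ** 2)   # ~10.7 times the square of ~10^150
```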

Random Walks — Wikipedia aptly summarizes: “A random walk, sometimes denoted RW, is a mathematical formalization of a trajectory that consists of taking successive random steps. The results of random walk analysis have been applied to computer science, physics, ecology, economics and a number of other fields as a fundamental model for random processes in time. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the financial status of a gambler can all be modeled as random walks.”

Hamming Distance — the number of positions at which two equal-length (e.g. binary) strings differ; considered as an abstract distance, it measures the number of single-element changes that would transform one string into the other.
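A minimal sketch of the idea (a hypothetical helper function, not from the original text):

```python
# Hamming distance: count the positions at which two equal-length strings differ.

def hamming_distance(a: str, b: str) -> int:
    if len(a) != len(b):
        raise ValueError("Hamming distance is defined for equal-length strings")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("10110", "10011"))  # 2: the strings differ at two positions
```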

Darwinism

When ID proponents on this site use the term “Darwinism,” they are referring to Neo-Darwinism, also called the modern evolutionary synthesis or Neo-Darwinian evolution (“NDE”), the basic tenets of which are described in the New World Encyclopedia as follows:

At the heart of the modern synthesis is the view that evolution is gradual and can be explained by small genetic changes in populations over time, due to the impact of natural selection on the phenotypic variation among individuals in the populations (Mayr 1982; Futuyma 1986). According to the modern synthesis as originally established, genetic variation in populations arises by chance through mutation (it is now known to be caused sometimes by mistakes in DNA replication and via genetic recombination—the crossing over of homologous chromosomes during meiosis). This genetic variation leads to phenotypic changes among members of a population. Evolution consists primarily of changes in the frequencies of alleles between one generation and another as a result of natural selection. Speciation, the creation of new species, is a gradual process that generally occurs when populations become more and more diversified as a result of having been isolated, such as via geographic barriers, and eventually the populations develop mechanisms of reproductive isolation. Over time, these small changes will lead to major changes in design or the creation of new taxa.

A major conclusion of the modern synthesis is that the concept of populations can explain evolutionary changes in a way that is consistent with the observations of naturalists and the known genetic mechanisms (Mayr 1982).

Though agreement is not universal on the parameters of the modern synthesis, many descriptions hold as basic (1) the primacy of natural selection as the creative agent of evolutionary change; (2) gradualism (accumulation of small genetic changes); and (3) the extrapolation of microevolutionary processes (changes within species) to macroevolutionary trends (changes above the species level, such as the origin of new designs and broad patterns in history). Evolutionary change is a shift of the frequency of genes in a population, and macroevolutionary trends come from gradual accumulation of small genetic changes.

Note, for example, the words of two of the leading figures in evolutionary theory, Ernst Mayr and Stephen Jay Gould.

“The proponents of the synthetic theory maintain that all evolution is due to the accumulation of small genetic changes, guided by natural selection, and that transspecific evolution is nothing but an extrapolation and magnification of the events that take place within populations and species.” (Mayr 1963)

“The core of this synthetic theory restates the two most characteristic assertions of Darwin himself: first, that evolution is a two-stage process (random variation as raw material, natural selection as a directing force); secondly, that evolutionary change is generally slow, steady, gradual, and continuous. . . Orthodox neo-Darwinians extrapolate these even and continuous changes to the most profound structural transitions in life.” (Gould 1980)

“Darwinist Derangement Syndrome”

Darwinist Derangement Syndrome (“DDS”) is akin to Tourette’s syndrome, a neuropsychiatric disorder characterized by physical and verbal tics in which the patient involuntarily vocalizes grunts and/or nonsense words. Similarly, those who suffer from DDS seem compelled to spout blithering idiotic nonsense in order to avoid a design inference. For example, famous evolutionist Nick Matzke makes a DDS utterance in the following exchange:

Barry Arrington: “If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the ‘heads’ side of the coin, would you reject ‘chance’ as a hypothesis to explain this particular configuration of coins on a table?”

Mark Frank: “. . . they might have slid out of a packet of coins without a chance to turn over.”

Sal Cordova: “Which still means chance is not the mechanism of the configuration.”

Matzke: “Not really.”

That an internationally prominent Darwinist would make such a patently ridiculous utterance is beyond rational explanation and can be explained only by DDS. DDS is a sad and pathetic condition that the editors of UD hope one day to have included in the Diagnostic and Statistical Manual of Mental Disorders published by the American Psychiatric Association.

Drake Equation

Drake Equation — in 1960, Frank Drake developed a speculative model for producing an educated guess of the number of extra-terrestrial civilizations in our galaxy that we may make contact with, N:

N = R* x fp x ne x fl x fi x fc x L, where:


R* –> estimated rate of new star formation (in a galaxy “similar” to ours)

fp –> est. fraction of these with planets

ne –> est. number of planets, per such system, that are suitable for life

fl –> est. fraction of these that actually form life

fi –> est. fraction of these where intelligent life (so, civilization) emerges

fc –> est. fraction of civilizations that are detectable at inter-stellar ranges

L –> est. length of time such civilizations are detectable

As “estimated” highlights, each factor is at best an educated guess, and this has led to ongoing debate.
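The structure of the model is simple enough to express directly. A minimal sketch (the input values below are purely illustrative guesses, not figures taken from this glossary):

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L

def drake_N(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative inputs -- each is an educated guess at best:
print(drake_N(R_star=1.0, f_p=0.5, n_e=2, f_l=0.1, f_i=0.01, f_c=0.1, L=10_000))
# -> about 1 detectable civilization, on these particular guesses
```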

The equation, however, remains valuable for those interested in the design issue, as it sets up a context in which we may discuss the requisite factors, hurdles and available causal forces for getting to a universe that may have planetary systems that could/do bear civilizations significantly comparable to ours (starting with our own world), and thus that would become detectable at long range by signals and/or by active and enduring colonization of space.

In turn, that leads to an integrated discussion of the many linked cosmological fine-tuning, Goldilocks zone, complex, functional information and origin of consciousness and conscience issues that are deeply connected to the points raised by the findings of cosmological and biological Intelligent Design theory.

Evolution, and Macro- vs. Micro-evolution

Evolution — “descent with modification”; envisioned as ranging from empirically observed minor population variations [e.g. finch beak lengths] to the proposed and widely believed (but, necessarily, not observed) origin of the major body plans of lifeforms over the ages through processes of chance variation and environmental selection pressures leading to differential reproductive success.

Micro-evolution — a term of art used to describe relatively minor population changes such as have been empirically observed, often based on single-point mutations of DNA. For example, malarial resistance to chloroquine, and the relative immunity to the effects of malaria associated with sickle-cell anaemia.

Macro-evolution — generally used in the literature to address the theory of body-plan level changes and associated or claimed evidence.

EF, Explanatory Filter

Explanatory Filter — an inductively and statistically based procedure for reliably identifying credible cases of design, as opposed to chance or chance plus necessity. For, while chance, necessity and agency may — and often do — jointly all act in a situation, we may for analysis focus on individual aspects. When we do so, we can see that observed regularities that create consistent, reliably observable patterns — e.g. the sun rises in the east each morning, water boils at sea level on Earth at 100 °C — are the signposts of mechanical necessity; and will thus exhibit low contingency. Where there is high contingency — e.g. which side of a die is uppermost — the cause is chance (= credibly undirected contingency) or design (= directed contingency).

However, when an outcome (a) is sufficiently complex [e.g. for our practical purposes, the degree of contingency is beyond the configuration space set by ~ 500 – 1,000 bits of information storage capacity] and (b) comes from a reasonably narrow and independently specifiable target zone, then we may confidently conclude — based on massive experience — that (c) the outcome is intelligently designed for a purpose. A common example is a sufficiently long ASCII-text blog comment: where such a comment uses more than 72 – 143 characters, it is sufficiently long to have more than 10^150 – 10^301 possible configurations, i.e. that of 500 – 1,000 bits, and is complex in the universal probability bound [UPB] sense. It is also independently functionally specified as contextually and grammatically meaningful text in English, not the gobbledygook created by — far and away — most cases of random typing or electrical noise: fghqwirt79wyfwhcqw9pfy79. So, we all confidently and routinely infer to design, not chance.
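The filter’s decision logic can be put in schematic form. A minimal sketch (the bit threshold and the two yes/no judgments are supplied by the analyst; this illustrates the flow of the inference, not a formal implementation):

```python
# Schematic of the explanatory filter: necessity -> chance -> design.

def explanatory_filter(high_contingency: bool,
                       info_bits: float,
                       independently_specified: bool,
                       threshold_bits: int = 500) -> str:
    if not high_contingency:
        return "necessity (law-like regularity)"
    if info_bits >= threshold_bits and independently_specified:
        return "design (directed contingency)"
    return "chance (undirected contingency)"

# A 143-character ASCII comment: 143 x 7 bits ~ 1,001 bits of capacity,
# functionally specified as meaningful English text.
print(explanatory_filter(True, 143 * 7, True))    # -> design
print(explanatory_filter(True, 24 * 7, False))    # short gibberish -> chance
print(explanatory_filter(False, 0, False))        # e.g. water boiling at 100 C -> necessity
```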

So, when we see the DNA strands for life, ranging from 100,000 – 500,000 to over 3 – 4 billion functionally specific 4-state DNA elements [and since 100,000 bases have a configuration space of about 9.98 × 10^60,205], we need a very good reason indeed to reject design as their best explanation; not a mere dismissive assertion or the imposed assumption of evolutionary materialism under the color of “science.”
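The bracketed figure is straightforward to verify. A minimal sketch (a check of the arithmetic only):

```python
# 100,000 four-state elements give 4^100000 configurations.
import math

n_bases = 100_000
log10_configs = n_bases * math.log10(4)
print(log10_configs)              # ~60205.999, i.e. an exponent of 60,205
print(10 ** (log10_configs % 1))  # leading factor ~9.98, so ~9.98 x 10^60,205
```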

Fine-Tuning

Fine-tuning — quite often in engineering work, components of a system have to be very precisely fitted together, key-lock style, to function effectively. Quite small perturbations from such close fits typically will destroy functionality. It therefore has come as a shock in recent decades to discover that — on the generally accepted scientific models for cosmological origins — a host of required parameters require such tight, mutual fitting together if we are to arrive at a universe like our own, i.e. one that is life-facilitating. That is, rather small perturbations [in some cases down to one part in 10^40 or in 10^60] lead to drastically life-unfriendly model universes. This is described as cosmological fine-tuning. Similarly, the molecular nano-machinery of the cell requires a similar key-lock integration of numerous elements to function. In both cases, this strongly suggests design as the most likely explanation.

FSCI, “functionally specified complex information”

FSCI — “functionally specified complex information” (or, “function-specifying complex information,” or — rarely — “functionally complex, specified information” [FCSI]) is a commonplace in engineered systems: complex functional entities that are based on specific target-zone configurations and operations of multiple parts, with large configuration spaces equivalent to at least 500 – 1,000 bits; i.e. well beyond the Dembski-type universal probability bound. In the UD context, it is often seen as a descriptive term for a useful subset of CSI first identified by origin-of-life researchers in the 1970s–80s. As Thaxton et al. summed up in their 1984 technical work that launched the design theory movement, The Mystery of Life’s Origin:

 

. . . “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. [TMLO (FTE, 1984), Ch 8, p. 130.]

So, since in cases of known origin such entities are invariably the result of design, it is confidently but provisionally inferred that FSCI is a reliable sign of intelligent design.

ID-Related Terms: Handy Definitions
Goldilocks Zones, Galactic Habitable Zones (GHZ) and Circumstellar Habitable Zones (CHZ)

Goldilocks (“just right”) Zones — Astronomer Guillermo Gonzalez, an expert on exoplanets (planets orbiting stars other than our sun), argues that within a galaxy like our own [the Milky Way], only certain restricted zones are likely to have the long-term stability and concentration of heavier elements required for cell-based life. Within a planetary system (which must orbit a “just-right” type of star as well), only a similarly quite restricted zone will be “just right” in terms of temperature, radiation exposure etc. for the formation and sustenance of life based on Carbon chemistry and water, the two keys to the functionality of the biological cell. Then, it turns out that such zones are also the best zones for the observation of and scientific inference about the origin, structure and nature of the cosmos. On reasonable probability estimates, that leads to the conclusion that ours is a “privileged planet” indeed.

GHZ — Galactic habitable zones are those zones in a galaxy where there is sufficient long-term stability that intelligent, Carbon- and water-based life may originate and be sustained. The model for such zones is a stellar system that is between spiral arms in a spiral galaxy, as (i) the Population I stars will have significant quantities of heavy elements, (ii) the circular orbit of the star about the galactic core will support long-term stability, and (iii) there will be a minimum of interference from supernovae and other hazardous interstellar events.

 

CHZ — Circumstellar habitable zones are bands around stars in GHZs that will support terrestrial planets or other possible bases for life, at such a radius from the relevant star that water will be liquid in significant quantities, allowing it to function as a universal solvent and medium for life processes.

Information

Information — Wikipedia, with some reorganization, is apt: “. . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken].”
Intelligence

Intelligence — Wikipedia aptly and succinctly defines: “capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.”
ID, Intelligent Design

Intelligent design [ID] — Dr William A Dembski, a leading design theorist, has defined ID as “the science that studies signs of intelligence.” That is, as we ourselves instantiate [thus exemplify as opposed to “exhaust”], intelligent designers act into the world and create artifacts. When such agents act, there are certain characteristics that commonly appear and that — per massive experience — reliably mark such artifacts. It is therefore a reasonable and useful scientific project to study such signs and identify how we may credibly and reliably infer from empirical sign to the signified causal factor: purposefully directed contingency, or intelligent design. Among the signs of intelligence of current interest for research are: [a] FSCI — function-specifying complex information [e.g. blog posts in English text that take in more than 143 ASCII characters, and/or — as was highlighted by Yockey and Wickens by the mid-1980s — as a distinguishing marker of the macromolecules at the heart of cell-based life forms], or more broadly

 

[b] CSI — complex, independently specified information [e.g. Mt Rushmore vs New Hampshire’s former Old Man of the Mountain, or — as was highlighted by Orgel in 1973 — a distinguishing feature of the cell’s information-rich, organized, aperiodic macromolecules that are neither simply orderly like crystals nor random like chance-polymerized peptide chains], or

[c] IC — multi-part functionality that relies on an irreducible core of mutually co-adapted, interacting components. [e.g. the hardware parts of a PC or more simply of a mousetrap; or – as was highlighted by Behe in the mid 1990’s — the bacterial flagellum and many other cell-based bodily features and functions.], or

[d] “Oracular” active information — in some cases, e.g. many Genetic Algorithms, successful performance of a system traces to built-in information or organisation that guides algorithmic search processes and/or performance so that the system significantly outperforms random search. Such guidance may include oracles that, step by step, inform a search process that the iterations are “warmer/colder” relative to a performance target zone. (A classic example is the Weasel phrase-search program.) Also,

[e] Complex, algorithmically active, coded information — the complex information used in systems and processes is symbolically coded in ways that are not preset by underlying physical or chemical forces, but by encoding and decoding dynamically inert but algorithmically active information that guides step-by-step execution sequences, i.e. algorithms. (For instance, in hard disk drives, the stored information in bits is coded based on a conventional, symbolic assignment of the N/S poles, forces and fields involved, and is impressed and used algorithmically. The physics of forces and fields does not determine or control the bit-pattern of the information — or, the drive would be useless. Similarly, in DNA, the polymer chaining chemistry is effectively unrelated to the information stored in the sequence and reading frames of the A/G/C/T side-groups. It is the coded genetic information in the successive three-letter D/RNA codons that is used by the cell’s molecular nano-machines in the step-by-step creation of proteins. Such DNA sets from observed living organisms start at 100,000 – 500,000 four-state elements [200 k – 1 M bits], abundantly meriting the description: function-specifying complex information, or FSCI.)
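A minimal sketch of the point about symbolic coding and information capacity (only a few standard-code codons are shown, and the helper function is illustrative, not from the original text):

```python
# The codon-to-amino-acid mapping is a symbolic lookup, and the information
# capacity scales at 2 bits per 4-state element.

CODON_TABLE = {
    "AUG": "Met (start)",
    "UGG": "Trp",
    "UUU": "Phe",
    "UAA": "STOP",
}

def translate(mrna: str) -> list:
    codons = [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("AUGUUUUGGUAA"))   # ['Met (start)', 'Phe', 'Trp', 'STOP']

n_elements = 100_000               # low end of the range quoted above
print(n_elements * 2, "bits")      # 200,000 bits, i.e. ~200 kbits
```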

IC, Irreducible Complexity

Irreducible Complexity, IC — A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system’s basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. (Dembski, No Free Lunch, p. 285 [HT: D O’L])
Materialism

Materialism — the philosophical premise that all that is, is “material.” In practice, that means that our observed universe is held to be “nothing but” the result of blind forces of nature acting on matter-energy in space-time, in light of chance circumstances across time. Thus, the resulting evolutionary materialism assumes or asserts — and this (on pain of the old “No True Scotsman” fallacy) is not at all simply the general consensus of “all” informed and responsible thinkers — that: (a) through undirected cosmological evolutionary processes, our cosmos came into existence and evolved into the complex of stars, galaxies and planetary systems we observe. Similarly, (b) life originated by fortuitous synthesis and juxtaposition of required chemicals such that self-replicating entities came into existence. Once life originated, (c) chance variation of various sorts, and competition for food and for reproduction, allowed diverse life forms to originate by chance and necessity only, and to fill the niches in ecosystems, leading to body-plan level biodiversity as observed in the fossil record and in our current world. Also, (d) at a certain point, some ape-like animals — having a superabundance of neurons, relative to what was needed to survive on the plains of East Africa several million years ago — became conscious, intelligent hominids; who eventually became modern man.

 

Such philosophically premised evolutionary materialism is then often enforced institutionally through the recently imposed “rule” of science known as methodological naturalism: in effect, only causal patterns and stories fitting into the origins model (a) through (d) as just summarized are permitted in scientific discourse, on pain of “expulsion.”

“Miller’s Mendacity”

Miller’s Mendacity is a particular type of strawman fallacy frequently employed by Darwinists. It invariably consists of the following two steps:

1. Erect the strawman: The Darwinist falsely declares that intelligent design is based on the following assertion: If something is improbable it must have been designed.

 

2. Demolish the strawman: The Darwinist then demonstrates an improbable event that was obviously not designed (such as dealing a particular hand of cards from a randomized deck), and declares “ID is demolished because I have just demonstrated an extremely improbable event that was obviously not designed.”

Miller’s Mendacity is named for Brown University biochemist Ken Miller and is based on his statements in an interview with the BBC:

BBC Commentator: In two days of testimony [at the Dover trial] Miller attempted to knock down the arguments for intelligent design one by one. Also on his [i.e., Miller’s] hit list, Dembski’s criticism of evolution, that it was simply too improbable.

Miller: One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

BBC Commentator: For Miller, Dembski’s math did not add up. The chances of life evolving just like the chance of getting a particular hand of cards could not be calculated backwards. By doing so the odds were unfairly stacked. Played that way, cards and life would always appear impossible.

In a letter to Panda’s Thumb Miller denied that his card comment was a response to Dembski’s work. He said, “all I was addressing was a general argument one hears from many ID supporters in which one takes something like a particular amino acid sequence, and then calculates the probability of the exact same sequence arising again through mere chance.” The problem with Miller’s response is that even if one takes it at face value he still appears mendacious, because no prominent ID theorist has ever argued “X is improbable; therefore X was designed.”

MN, Methodological Naturalism

Methodological naturalism — the concept or “rule” that science may ONLY seek to ultimately explain observed phenomena in terms of (i) non-directed mechanical forces, and (ii) non-directed contingencies. That is, it is committed to the idea that the observed cosmos (including ourselves) originated and developed through undirected evolutionary processes — cosmological, chemical, biological, socio-cultural — that are in the end rooted in law-like necessity and/or chance. It therefore often contrasts “natural” vs. “supernatural” explanations, dismissing the latter as un- or even anti-scientific. However, this overlooks or ignores the alternative observed contrast that dates at least back to Plato: natural vs. artificial (or intelligent).

 

So, if (i) intelligent cause is empirically observed (e.g. humans), and (ii) such causes may leave reliable signs of intelligent action (such as SETI investigators are looking for), then – as we have no good reason to assume that we exhaust the set of actual or possible intelligent agents – (iii) we must leave open the possibility of intelligent causes. At least, if science is to be an unfettered (but intellectually and ethically responsible) search for the truth about our world in light of the evidence of observation and experience.

Privileged Planet Hypothesis

Privileged Planet Hypothesis — Astronomer Guillermo Gonzalez, an expert in exoplanets (planets orbiting stars other than our sun), advances a privileged planet hypothesis. Taking aim at the late Carl Sagan, he argues that Earth is a very unusual planet, situated in a very fortunate position for astronomy, as well as for life — and that that is by design, not chance. [HT: D O’L]
Science

Science — systematic, empirically based investigations that use observation, hypothesis testing, measurement, experiment, simulations, thought experiments, mathematical analysis, logical argument and theory-/model-building to cumulatively provide ever-more adequate knowledge and understanding of events, objects and processes in the cosmos. Also, the cumulative body of such tested, but provisional, empirically based knowledge and explanations of the world at any given time.
Scientific Method(s)

Scientific method(s) — the toolkit of empirically based, logically credible experimental, observational, statistical, mathematical, computational, thought experiment, visualization, modeling, simulation and analytical etc. techniques and investigative strategies used by scientists as they work to describe, explain, model, predict — and, sometimes, influence or control — events, objects and processes in the cosmos. A useful, ID-informed summary of the broad generic process involved is O, HI PET:

 

O –> OBSERVE – objects, phenomena, events and aspects of scientific interest; seeking patterns (and “exceptions”) that trace to one or more of:

(i) “mechanical necessity,” i.e. forces of nature that lead to “natural regularities” (which — fair warning — may include sensitive dependence on initial and/or intervening conditions, i.e. “chaos,” etc.);

(ii) “undirected contingency” or “chance” that may sometimes show itself in “noise” and/or “statistical distributions” (which last can sometimes be modeled [e.g. Gaussian, Poisson, Weibull, U or reverse-J, etc.] and then may possibly be traced to underlying second level driving forces and constraints, etc.);

(iii) “directed contingency,” i.e. “design” (often showing itself in complex specified information, active information, and/or irreducible complexity).

H –> HYPOTHESISE — propose explanatory models of the observed “facts” towards testing the alternatives on their relative strengths and limitations

IP –> INFER AND PREDICT — lay out projected consequences for envisioned future experimental or observational situations. (In some cases, “retrodict” to the past.)

ET –> EMPIRICAL TESTING – carry out a programme of tests to identify and confirm which alternative model(s) have the best explanatory power: (i) factual adequacy, (ii) logico-mathematical and dynamical coherence, (iii) explanatory elegance and power that is neither simplistic nor an ad hoc patchwork of ever-multiplying auxiliary hypotheses that only serve to explain [away] what has already been observed.

(NB: In light of the history of science, and the cumulative findings of philosophy of science in recent decades, there is no set of distinctive approaches to acquiring knowledge and understanding or inferring to best explanation that are (i) universal across the conventionally accepted list of sciences, and/or that are (ii) so necessary to, sufficient for and unique to scientific investigation and argument, that we may use them to mark (iii) a definite and sharp dividing line between science on the one hand and non- (or even pseudo-) science on the other. [Cf. a useful discussion here. This is not a general endorsement.])

CONTRIBUTORS: SB is a Philosopher-Communicator with an emphasis on the application of sound common-sense reasoning to the design controversy; GP is a Medical Doctor with a focus on microbiology and microevolutionary issues; and KF is an Applied Physicist and educator with interests in information technologies and related information theory and statistical thermodynamics. We jointly express appreciation to the developers of an earlier form of this page on responding to weak anti-ID arguments.