Uncommon Descent Serving The Intelligent Design Community
Category

ID Foundations

Foundational concepts and evidence for inferring design in light of empirically tested, reliable, observable signs

[Figure: amine treating process plant]

ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design

[ID Found’ns Series, cf. also Bartlett here]

Irreducible complexity is probably the most vehemently objected-to foundation stone of Intelligent Design theory. So, let us first of all define it by slightly modifying Dr Michael Behe’s original statement in his 1996 Darwin’s Black Box [DBB]:

What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [DBB, p. 39, emphases and parenthesis added. Cf. expository remarks in comment 15 below.]

Behe proposed this definition in response to the following challenge by Darwin in Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case . . . . We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. [Origin, 6th edn, 1872, Ch VI: “Difficulties of the Theory.”]

In fact, there is a bit of question-begging by deck-stacking in Darwin’s statement: we are dealing with empirical matters, and one has no right to impose what amounts to outright logical/physical impossibility — “could not possibly have been formed” — as the criterion of test.

If one is making a positive scientific assertion that complex organs exist and were credibly formed by gradualistic, undirected change through chance mutations and differential reproductive success via natural selection and similar mechanisms, one has a duty to provide decisive positive evidence of that capacity. Behe’s onward claim is then quite relevant: for dozens of key cases, no credible macro-evolutionary pathway (especially no detailed biochemical and genetic pathway) has been empirically demonstrated and published in the relevant professional literature. That was true in 1996, and despite several attempts to dismiss key cases such as the bacterial flagellum [which is illustrated at the top of this blog page] or the relevant part of the blood clotting cascade [hint: picking the part of the cascade before the “fork” — a part Behe did not address as the IC core — is a strawman fallacy], it arguably remains true today.

Now, we can immediately lay to rest the issue of the fact of irreducible complexity as a real-world phenomenon.

For, a situation where core, well-matched and co-ordinated parts of a system are each necessary for, and jointly sufficient to effect, the relevant function is a commonplace fact of life, one familiar from all manner of engineered systems, such as the classic double-acting steam engine:

Fig. A: A double-acting steam engine (Courtesy Wikipedia)

Such a steam engine is made up of rather commonly available components: cylinders, tubes, rods, pipes, crankshafts, disks, fasteners, pins, wheels, drive-belts, valves etc. But, because a core set of well-matched parts has to be carefully organised according to a complex “wiring diagram,” the specific function of the double-acting steam engine is not explained by the mere existence of the parts.

Nor can one simply choose and re-arrange similar parts from, say, a bicycle or an old-fashioned car to create a viable steam engine. Specific, mutually matching parts [usually matched to thousandths of an inch], in a very specific pattern of organisation, made of specific materials, have to be in place, and they have to be integrated into the right context [e.g. a boiler or other source providing steam at the right temperature and pressure], for the engine to work.

If one core part [e.g. piston, cylinder, valve or crankshaft] breaks down or is removed, core function obviously ceases.
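To make the joint-necessity structure of the definition concrete, here is a minimal sketch in Python (the part names and the five-part core are illustrative assumptions of this sketch, not a claim about real steam-engine engineering): removing any single core part causes the modelled function to cease.

```python
# A minimal toy model of an irreducibly complex core: each core part is
# individually necessary, and the full set is jointly sufficient, for the
# system-level function. The part names are illustrative placeholders only.

CORE_PARTS = {"boiler_feed", "cylinder", "piston", "valve_gear", "crankshaft"}


def engine_works(parts_present: set) -> bool:
    """The toy 'double-acting steam engine' functions only if every core
    part is present (fit, matching and assembly are abstracted away)."""
    return CORE_PARTS.issubset(parts_present)


if __name__ == "__main__":
    full_set = set(CORE_PARTS)
    print("All core parts present:", engine_works(full_set))  # True

    # Knock out each core part in turn: the core function ceases every time.
    for part in sorted(CORE_PARTS):
        print(f"Remove {part:12s} ->", engine_works(full_set - {part}))  # False
```

The sketch captures only the logical structure of the definition (each core part necessary, the set jointly sufficient), which is the point at issue; fit, tolerances and assembly order are abstracted away.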

Irreducible complexity is not only a concept but a fact.

But, why is it said that irreducible complexity is a barrier to Darwinian-style [macro-]evolution and a credible sign of design in biological systems?

Read More ›

The Eng Derek Smith Cybernetic Model

ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action

In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:

Winds of Change, 76:

By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.

If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]

ID Does Not Posit Supernatural Causes, 35:

Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!

Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington, had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]

This challenge brings into sharp focus the foundational issue of counterflow, constructive work by designing, self-moved, initiating, purposing agents as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher William Dembski’s No Free Lunch:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

This is, of course, directly based on, and aptly summarises, our routine experience and observation of designers in action.

For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].

ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as its signified cause.

And, many such thinkers further argue that:

if P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination),

then Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.
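In bare propositional form, the implied argument is a simple modus tollens; spelling it out (this is our own formalization of the claim just stated, not an addition to it):

```latex
% Requires amsmath and amssymb.
% P: agents are not sufficiently free to decide by reason and responsibility.
% Q: rational, evidence-based investigation (science, AI included) collapses.
\[
  P \rightarrow Q, \qquad \neg Q \;\;\therefore\;\; \neg P \qquad \text{(modus tollens)}
\]
```

That is, on this line of argument, taking science (and AI research) seriously as a rational enterprise commits one to denying P.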

But, we now need to show that . . .

Read More ›

ID Foundations: The design inference, warrant and “the” scientific method

It has been said that Intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and of its relationship to the — insofar as a “the” is possible — scientific method.

Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:

“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last”  . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]

The design inference process as described can be represented in a flow chart:


Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method.

(So, we first envision nature acting by low-contingency, law-like mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 m/s² [equivalently, a gravitational field strength of 9.8 N/kg] or so. That is the first default. Similarly, we may see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process or phenomenon reflects design or . . . following the terms used by Plato 2,350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity.

[NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest that we do not know, that there may be a fourth possibility, are the ones who first need to show us such a possibility before they are to be taken seriously. Where, too, it is obvious that the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded distinction: just look at a list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural, fails. (Cf. the UD correctives 16 – 20 here on, as well as 1 – 8 here on.) And no, when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average, that is a case of an empirically reliable law-like regularity emerging from a strong characteristic of such a process when sufficient numbers are involved, due to the statistics of very large numbers . . . it is easy to have 10^20 molecules or more at work . . . so that there is relatively low fluctuation, unlike what we see with particles undergoing Brownian motion. That is, in effect, low-contingency mechanical necessity in the sense we are interested in, in action. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.])
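As a rough illustration of the per-aspect decision logic the figure describes, here is a minimal sketch in Python. The three inputs stand in for the empirical judgments the filter requires of each aspect; the function name, the boolean simplifications and the use of the lower 500-bit figure as a single cut-off are assumptions of this sketch, not part of the filter as published. For scale, 500 bits corresponds to about 2^500 ≈ 3.3 × 10^150 possible configurations.

```python
# Minimal sketch of the per-aspect explanatory-filter logic described in Fig. A.
# Each input stands in for an empirical judgment about ONE aspect of an object,
# process or phenomenon; in practice none of these is a one-line boolean.

THRESHOLD_BITS = 500  # lower end of the 500 - 1,000 bit range cited above


def explanatory_filter(high_contingency: bool,
                       tightly_specified: bool,
                       info_capacity_bits: float) -> str:
    """Return the default causal explanation for one aspect."""
    if not high_contingency:
        # First default: low contingency points to law-like mechanical necessity.
        return "necessity (law)"
    if tightly_specified and info_capacity_bits >= THRESHOLD_BITS:
        # High contingency AND tight specificity AND high complexity: infer design.
        return "design"
    # Second default: high contingency without joint specificity and complexity.
    return "chance"


if __name__ == "__main__":
    # A heavy unsupported object falling at ~9.8 m/s^2: low contingency.
    print(explanatory_filter(False, False, 0))     # necessity (law)
    # A short string of fair coin tosses: contingent but unspecified.
    print(explanatory_filter(True, False, 64))     # chance
    # A long, functionally specific configuration beyond the threshold.
    print(explanatory_filter(True, True, 1000))    # design
```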

Explaining (and discussing) in steps:

1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer, on best explanation, to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”]; in others to (b) chance; and in yet others we routinely assign cause to (c) intentionally, intelligently and purposefully directed configuration, or design. Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional, intelligent, purposeful agents do, and why it results in functional, specified complex organisation and associated information:

. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

Read More ›


Background Note: On Orderly, Random and Functional Sequence Complexity

In 2005, David L. Abel and Jack T. Trevors published a key article on order, randomness and functionality that sets a further context for appreciating the warrant for the design inference. The peer-reviewed article is: Abel, D. L. and Trevors, J. T., “Three subsets of sequence complexity and their relevance to biopolymeric information,” Theor Biol Med Model 2: 29 (published online 11 August 2005; doi: 10.1186/1742-4682-2-29; PMCID: PMC1208958; © 2005 Abel and Trevors, licensee BioMed Central Ltd). A key figure in the article (NB: in the public domain) is their Fig. 4: “Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from . . .” Read More ›
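As a rough, hedged illustration of that compressibility axis, the sketch below uses a general-purpose compressor (zlib) as a crude stand-in for algorithmic compressibility. This is not how Abel and Trevors quantify their three categories, but it shows the qualitative contrast: orderly sequences compress heavily, random sequences hardly at all, and a functional (English-prose) stand-in lies in between.

```python
import random
import zlib

# Crude proxy for the Y1 axis of Abel & Trevors' Fig. 4: zlib compression ratio
# as a stand-in for algorithmic compressibility (lower ratio = more compressible).
# OSC-like: ordered/repetitive; RSC-like: random; FSC-like: aperiodic but
# constrained, meaning-bearing text, used here purely as an analogy.


def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size."""
    return len(zlib.compress(data, 9)) / len(data)


random.seed(0)

orderly = b"AB" * 500                                            # OSC-like
random_seq = bytes(random.getrandbits(8) for _ in range(1000))   # RSC-like
functional = (                                                   # FSC-like stand-in
    b"Orderly sequences are highly compressible because a short rule regenerates "
    b"them; random sequences resist compression because no rule shorter than the "
    b"sequence itself describes them; functional sequences are aperiodic yet "
    b"constrained by the requirement that they carry out or prescribe a task."
)

for name, seq in [("orderly", orderly), ("random", random_seq), ("functional", functional)]:
    print(f"{name:10s} length {len(seq):4d}  ratio {compression_ratio(seq):.2f}")
```

In their figure, FSC is distinguished from RSC primarily along the added functionality axis rather than by compressibility alone; the sketch is only meant to make the compressibility contrast tangible.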

Background Note: On signs, symbols and their significance

As a preliminary step to a discussion [DV, to follow] of the significance of and warrant for the design inference, let us now symbolise how we interact with and draw inferences about signs and symbols (generally following Peirce et al [Added, Feb 28: including P’s thought on warrant by inference to best explanation, i.e. abductive reasoning; where also warrant can be understood on Toulmin, Plantinga, Gettier and others (cf. broader discussion here)]):

__________________

Signs: I observe one or more signs [in a pattern], and infer the signified object, on a warrant:

I: [si] –> O, on W

a –> Here, as I will use “sign” [as opposed to “symbol”], the connexion is a more or less causal or natural . . . Read More ›