
Frequently raised but weak arguments against Intelligent Design

FOREWORD

This is – and will perhaps always be – a work in progress. Patrick and others originated this area of the website when they grew weary of refuting the same shopworn anti-ID arguments over and over again. Their work was ad hoc and incremental, but in the end they had developed an impressive resource. In the fall of 2008, the UD administration asked StephenB, GPuccio and Kairosfocus to take the work Patrick had begun, reorganize it, and add to it. Then, the various sections were subjected to public comment and given a final edit by Barry Arrington. Thus, no one person is responsible for the final product; it is an amalgam that resulted from this process. To all who contributed, the UD administration expresses thanks.

INTRODUCTION

For a long time, Intelligent Design (ID) proponents, enlightened by current scientific knowledge and faithful to its methods, have been making specific and objective arguments about the origin of biological information. Nevertheless, many critics mistakenly insist that ID, in spite of its well-defined purpose, its supporting evidence, and its mathematically precise paradigms, is not really a valid scientific theory. All too often, they make this charge on the basis of the scientists’ perceived motives.

We have noticed that some of these false objections and attributions, largely products of an aggressive Darwinist agenda, have found their way into institutions of higher learning, the press, the United States court system, and even European Union policy directives. Routinely, they find expression in carefully crafted talking points, complete with derogatory labels and personal caricatures, all of which appear to be part of a disinformation campaign calculated to mislead the public.

Many who interact with us on this blog recycle this misinformation. Predictably, they tend to raise notoriously weak objections that have been answered thousands of times. What follows is a list of those objections and our best attempt to answer them in abbreviated form. If you have been sent here, you are being asked to familiarize yourself with these basics so that you have at least the minimum information necessary for meaningful dialogue.

In the spirit of mutually constructive dialogue, we therefore invite you to examine the following.

For Uncommon Descent: StephenB, GPuccio, & Kairosfocus Jan. 2009

WEAK ANTI-ID ARGUMENTS:

1] ID is “not science”

2] No Real Scientists Take Intelligent Design Seriously

3] Intelligent Design does not carry out or publish scientific research

4] ID does not make scientifically fruitful predictions

5] Intelligent Design is “Creationism in a Cheap Tuxedo”

6] Since Intelligent Design Proponents Believe in a “Designer” or “Creator” They Can Be Called “Creationists”

7] Because William Dembski once commented that the design patterns in nature are consistent with the “logos theology” of the Bible, he unwittingly exposed his intentions to do religion in the name of science

8] Intelligent Design is an attempt by the Religious Right to establish a Theocracy

9] “Evolution” Proves that Intelligent Design is Wrong

10] The Evidence for Common Descent is Incompatible with Intelligent Design

11] Darwinian evolution is a Vastly More “Simple” Argument than Intelligent Design

12] Macro-evolution is nothing but lots and lots of “micro-evolution”!

13] Real Scientists Do Not Use Terms Like Microevolution or Macroevolution

14] Real Scientists Do Not Use Terms Like “Darwinism.” The word “Darwinism” is a derogatory term used by creationists, intelligent design supporters, and other opponents of evolutionary theory that has no real meaning except as a rhetorical device to discredit evolutionary biologists.

15] Nothing is Wrong with the Modern Synthesis! (And, by the way, what kind of “Darwinism” is ID dealing with? Why?)

16] ID is really an attempt at overthrowing the well-established principles of science. It is a theory which denies the very history of modern rational thought and of our scientific tradition

17] Methodological naturalism is the rule of science

18] Methodological naturalism is a centuries-old, traditional rule for science

19] Science does not address the “Supernatural.”

20] ID scientists are participating in a tautological exercise. They don’t really draw an inference to design; they assume a design in advance and then call it an inference

21] Evolution and artificial intelligence research have proved that there is no such thing as the “free will” that IDers attribute to designers; and, there is a scientifically respectable form of “free will” that is fully compatible with determinism

22] Who Designed the Designer?

23] The Designer Must be Complex and Thus Could Never Have Existed

24] Bad Design Means No Design

25] Intelligent Design proponents deny, without having a reason, that randomness can produce an effect, and then go make something up to fill the void

26] Dembski’s idea of “complex specified information” is nonsense

27] The Information in Complex Specified Information (CSI) Cannot Be Quantified

28] What about FSCI [Functionally Specific, Complex Information] ? Isn’t it just a “pet idea” of some dubious commenters at UD?

29] The ID explanatory filter cannot rule out chance or unknown laws!

30] William Dembski “dispensed with” the Explanatory Filter (EF) and thus Intelligent Design cannot work

31] Intelligent Design Tries To Claim That Everything is Designed Where We Obviously See Necessity and Chance

32] What types of life are Irreducibly Complex? Or which life is not Irreducibly Complex?

33] In the Flagellum Behe Ignores that this Organization of Proteins has Verifiable Functions when Particular Proteins are Omitted, i.e. in its simplest form, a protein pump

34] Behe is Jumping to Conclusions on P. falciparum and his so-called edge of evolution. P. falciparum did not evolve because it did not need to evolve: it is so perfect already that it cannot improve upon itself

35] What About the spreading of antibiotic resistance?

36] ID Proponents Talk a Lot About Front-Loading But Never Explain What It Means

37] ID Proponents use a lot of other buzz-words like Intelligence, Design, Complexity, etc, but never clearly and convincingly explain what they mean

38] Does Quantum Theory contradict and disprove the Law of Non-Contradiction (LNC)?

39] ID is Nothing More Than a “God of the Gaps” Hypothesis

40] Why are you Intelligent Design Creationists always so busy quote-mining what scientists have to say about Evolution?

41] What about the Canaanites?

APPENDIX: GLOSSARY

____________________

RESPONSES:

1] ID is “not science”

On the contrary, as Dr William Dembski, a leading intelligent design researcher, has aptly stated:

“Intelligent Design is . . . a scientific investigation into how patterns exhibited by finite arrangements of matter can signify intelligence.”

At its best, science is an unfettered (but ethically and intellectually responsible) and progressive search for the truth about our world based on reasoned analysis of empirical observations. The very antithesis of an unfettered search for truth occurs when scientists don intellectual blinkers and assert dogmatically that all conclusions must conform to “materialist” philosophy.  Such an approach prevents the facts from speaking for themselves.  The search for truth can only suffer when it is artificially constrained by those who would impose materialist orthodoxy by authoritarian fiat before the investigation has even begun. This approach obviously begs the question, but, sadly, it is all too common among those who would cloak their metaphysical prejudices with the authority of institutional science or the law.

This is especially unfortunate, because just a moment’s reflection is enough to conclude that it is untrue that science must necessarily be limited to the investigation of material causes only.  Material causes consist of chance and mechanical necessity (the so-called “laws of nature”) or a combination of the two.  Yet investigators of the world as far back as Plato have recognized that a third type of cause exists – acts by an intelligent agent (i.e., “design”).  Experience confirms beyond the slightest doubt that acts by intelligent agents frequently result in empirically observable signs of intelligence.  Indeed, if this were not so, we would have to jettison forensics, to cite just one of many examples, from the rubric of “science.”

Just look all around you.  The very fact that you are reading this sentence confirms that you are able to distinguish it from noise.

Moreover, ID satisfies all the conditions usually required for scientific inquiry (i.e., observation, hypothesis, experiment, conclusion):

1.  It is based on empirical data: the empirical observation of the process of human design, and specific properties common to human design and biological information (CSI).

2.  It is a quantitative and internally consistent model.

3.  It is falsifiable: any positive demonstration that CSI can easily be generated by non-design mechanisms is a potential falsification of ID theory.

4.  It makes empirically testable and fruitful predictions (see response 4 below)

2] No Real Scientists Take Intelligent Design Seriously

Yes, they do. At its core, design theory is simply the systematization of the common-sense intuition (discussed in literature at least as far back as Cicero, c. 50 BC) that effects caused by intelligent agents leave unique traces that can be distinguished from effects caused by chance and necessity.  ID theorists build on that powerful intuition using the well-established methods and principles of science and mathematics.

ID theorists are qualified scientists, and they take ID very seriously indeed, as may be seen from this list of peer-reviewed and peer-edited papers published in the professional literature. Leading-edge ID theorists (Michael Behe, William Dembski, Charles Thaxton, Walter Bradley, Jonathan Wells, and Roger Olsen) have been joined by a follow-on wave of credentialed scientists (e.g. Douglas Axe, Guillermo Gonzalez, Albert Voie, John A. Davison, D.W. Snoke, David Berlinski, Scott Minnich, Stephen Meyer, Wolf-Ekkehard Lönnig, H. Saedler, Granville Sewell, David L. Abel, Jack T. Trevors, Robert Marks, Kirk Durston, David K.Y. Chiu, etc.) in developing ID ideas.

Moreover, even scientists who oppose ID take it seriously.  Some come to this very site and argue over its merits (we know their names) while others feel so threatened by it, they actually set up websites with the sole intent of attacking it in every way possible.  How many websites has the NCSE set up to attack alchemy?

We will be the first to acknowledge that ID does not represent the consensus among scientists at this time.  But ID represents a paradigm shift against an established and entrenched scientific orthodoxy, and the history of science is replete with examples of scientific orthodoxies that were hostile to (and sometimes even persecuted) revolutionary thinkers.  By definition orthodoxies resist change, even change for the better.

The elevation of philosophical materialism over seeking truth on its own terms is a major reason that a consensus of scientists rejects ID, and why some even go so far as to “expel” ID theorists from their jobs.  The frequently encountered dismissal of ID practitioners is therefore driven not so much by a fair assessment of the actual quality of their work in light of the best-practice methods and principles of science as by the “No true Scotsman” fallacy.

At the end of the day, however, it does not matter if most other scientists object to ID, because science is not settled by majority vote. The history of science makes it abundantly clear that today’s consensus is often tomorrow’s exploded theory, and in the end, scientific ideas stand or fall based on their own merits.

And, that – to be judged on its merits – is all ID asks for.


3] Intelligent Design does not carry out or publish scientific research

In the Dover case Judge Jones asserted that ID does not carry out or publish scientific research.  He was simply wrong.  Despite opposition and harassment, there is a significant and growing body of ID-supportive research and peer-reviewed scientific publications.   For instance, the Discovery Institute maintains a list of such research-based publications here.

It is also important to remember that biological research, when properly done, is an impartial search for true data and explanations about our world based on empirical evidence. Findings from such research are not “owned” by Darwinists or IDists. Good scientific research is good scientific research, period. Even if the researcher has a specific conviction (either for or against ID), his data are the property of the whole scientific community and can be legitimately evaluated and interpreted by all. In that sense, all biological research is ID research (or, if you want, Darwinist research).

For example, Michael Behe, when asked what type of research would help prove his thesis as outlined in The Edge of Evolution, pointed to Lenski’s research at Michigan State on bacterial evolution. Lenski would undoubtedly cringe if he knew he was doing ID research, but ID research he is doing. Each generation of data for every culture line tests (and thereby could potentially falsify) Behe’s thesis.  Lenski does not call his research ID research, but it is nevertheless consistent with ID objectives and theory.

4] ID does not make scientifically fruitful predictions

Again, simply false. As just one example of a successful ID-based prediction:

Non-functionality of “junk DNA” was predicted by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions.

By contrast, predictions of functionality of “junk DNA” were made on teleological grounds by Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004).

These Intelligent Design predictions are being confirmed; e.g., ENCODE’s June 2007 results show substantial functionality across the genome in such “junk” DNA regions, including pseudogenes.

In short, it is a matter of simple fact that scientists working in the ID paradigm – despite harassment, slander and even outright career-busting – carry out and publish research, and that they have made significant and successful ID-based predictions.

A similar, but more general and long-term, prediction of ID is that the real complexity of living beings will be shown to be much higher than currently thought. That kind of “prediction” has been constantly verified over the last few decades, and we can easily anticipate, in an ID scenario, that such a process will continue for a long time. We quote here from a recent post by Gil Dodgen on UD (with minor editing):

“With the aid of improved technology, the formerly fuzzy [appearances of design] of biology (Darwin’s blobs of gelatinous combinations of carbon) are not becoming fuzzier and more easily explained by non-ID theses — they are now known to be high-tech information processing systems, with superbly functionally integrated machinery, error-correction-and-repair systems, and much more that surpasses the most sophisticated efforts of the best human mathematicians, mechanical, electrical, chemical, and software engineers.”

And such a process continues daily, at an ever-increasing rate, making non-ID explanations ever more unlikely.

5] Intelligent Design is “Creationism in a Cheap Tuxedo”

In fact, the two theories are radically different. Creationism moves forward: that is, it assumes, asserts or accepts something about God and what he has to say about origins, and then interprets nature in that context. Intelligent design moves backward: that is, it observes something interesting in nature (complex, specified information) and then theorizes about, and tests, possible ways that might have come to be. Creationism is faith-based; intelligent design is empirically based.

Each approach has a pedigree that goes back over two thousand years. We notice the “forward” approach in Tertullian, Augustine, Bonaventure, and Anselm. Anselm described it best with the phrase, “faith seeking understanding.” With these thinkers, the investigation was faith-based. By contrast, we discover the “backward” orientation in Aristotle, Aquinas, and Paley. Aristotle’s argument, which begins with “motion in nature” and reasons BACK to a “prime mover” — i.e. from effect to its “best” causal explanation — is obviously empirically based.

To say then, that Tertullian, Augustine, Anselm (Creationism) is similar to Aristotle, Aquinas, Paley (ID) is equivalent to saying forward equals backward. What could be more illogical?

6] Since Intelligent Design Proponents Believe in a “Designer” or “Creator” They Can Be Called “Creationists”

First, a basic fact: while many intelligent design proponents believe in a Creator (as is their worldview right), not all do. Some hold that an immanent principle or law in nature could design the universe. That is: to believe in intelligent design is not necessarily to believe in a transcendent creative being.

However, what is rhetorically significant is the further fact that the term “creationist” is very often used today in a derogatory way.

Traditionally, the word was used to describe the world view that God created the universe, a belief shared by many ID scientists, and even some ID critics. But now, that same term is too often used dishonestly in an attempt to associate intelligent design, an empirically-based methodology, with Creationism, a faith-based methodology.

Some Darwinist advocates and some theistic evolutionists seem to feel that if they can tag ID with the “Creationist” label often enough and thus keep the focus away from science–if they can create the false impression that ID allows religious bias to “leak” into its methodology–if they can characterize it as a religious presupposition rather than a design inference –then the press and the public will eventually come to believe that ID is not really science at all.

In short, anti-ID ideologues use the word “creationist” to distract from a scientific debate that they cannot win on the merits. The only real question is whether someone who uses this dubious strategy is doing so out of ignorance (having been taken in by it, too) or out of malice.


7] Because William Dembski once commented that the design patterns in nature are consistent with the “logos theology” of the Bible, he unwittingly exposed his intentions to do religion in the name of science

In general, personal beliefs and personal views about the general nature of reality (be they religious, atheistic, or of any kind) should not be considered directly relevant to what scientists say and do in their specific scientific work: that’s a very simple rule of intellectual respect and democracy, and it simply means that nobody can impose a specific model of reality on others, and on science itself.

Moreover, Dembski is qualified as a theologian and a philosopher-scientist-mathematician (one of a long and distinguished tradition), so he has a perfect right to comment seriously on intelligent design from both perspectives.

Further to this, the quote in question comes from a theologically oriented book in which Dembski explores the “theological implications” of the science of intelligent design. Such theological reframing of a scientific theory and/or its implications is not the same thing as the theory itself, even though each may be logically consistent with the other. Dembski’s point, of course, was that truth is unified, so we shouldn’t be surprised that theological truths confirm scientific truths and vice versa.

Also, Dembski’s reference to John 1:1 ff. underscores how a worldview level or theological claim may have empirical implications, and is thus subject to empirical test.

For, in that text, the aged Apostle John put into the heart of foundational-era Christian thought the idea that Creation is premised on Rational Mind and Intelligent Communication/Information. Now, after nineteen centuries, we see that — per empirical observation — we evidently do live in a cosmos that exhibits fine-tuned, function-specifying complex information as a premise of facilitating life, and cell-based life is also based on such functional, complex, and specific information, e.g. in DNA.

Thus, theological truth claims here line up with subsequent empirical investigation: a risky empirical prediction has been confirmed by the evidence. (Of course, had it been otherwise – and per track record – many of the same critics would have pounced on the “scientific facts” as a disconfirmation. So, why then is it suddenly illegitimate for Christians to point out from scientific evidence, that on this point their faith has passed a significant empirical test?)


8] Intelligent Design is an attempt by the Religious Right to establish a Theocracy

Darwinist advocates often like to single out the “Discovery Institute” as their prime target for this charge. It is, of course, beyond ridiculous.

In fact, all members of that organization and all prominent ID spokespersons embrace the American Founders’ principle of representative democracy. All agree that civil liberties are grounded in religious “principles” (on which the framers built the republic), not religious “laws” (which they risked their lives to avoid), and support the proposition that Church and State should never become one.

However, anti-ID zealots too often tend to misrepresent the political issues at stake and distort the original intent, spirit, and letter of the founding documents.

Historically, the relationship between Church and State was characterized not as a “union” (religious theocracy) or a radical separation (secular tyranny) but rather as an “intersection,” a mutual co-existence that would allow each to express itself fully without any undue interference from the other. There was no separation of God from government. On the contrary, everyone understood that freedom follows from the principle that the Creator God grants “unalienable rights,” a point that is explicit in the US Declaration of Independence. Many Darwinists are hostile to such an explicitly Creation-anchored and declaratively “self-evident” foundation for liberty and too often then misunderstand or pervert its historical context – the concept and practice of covenantal nationhood and just Government under God. Then, it becomes very tempting to take the cheap way out: (i) evade the responsibility of making their scientific case, (ii) change the subject to politics, (iii) pretend to a superior knowledge of the history, and (iv) accuse the other side of attempting to establish a “theocracy.”

In fact, design thinking is incompatible with theocratic principles, a point that is often lost on those who don’t understand it.

Jefferson and his colleagues — all design thinkers — argued that nature is designed, and part of that design reflects the “natural moral law,” which is observed in nature and written in the human heart as “conscience.” Without it, there is no reasonable standard for informing the civil law or any moral code for defining responsible citizenship. For, the founders held that (by virtue of the Mind and Conscience placed within by our common Creator) humans can in principle know the core ideas that distinguish right from wrong without blindly appealing to any religious text or hierarchy. They therefore claimed that the relationship between basic rights and responsibilities regarding life, liberty and fulfillment of one’s potential as a person is intuitively clear. Indeed, to deny these principles leads into a morass of self-contradictions and blatant self-serving hypocrisies; which is just what “self-evident” means.

So, as a member of a community, each citizen should follow his conscience and traditions in light of such self-evident moral truth; s/he therefore deserves to be free from any tyranny or theocracy that would frustrate such pursuit of virtue. By that standard, religious believers are permitted and even obliged to publicly promote their values for the common good, so long as they understand that believers (and unbelievers) who hold other traditions or worldviews may do the same.

Many Darwinists, however, confuse civil laws that are derived from religious principles and from the natural moral law (representative democracy) with religious laws (autocratic theocracy). So, they are reduced to arguing that freedom is based on a murky notion of “reason,” which, for them, means anti-religion. Then, disavowing the existence of moral laws, natural rights, or objectively grounded consciences, they can provide no successful rational justification for the basic right to free expression; which easily explains why they tend to support it for only those who agree with their point of view. Sadly, they then too often push for — and often succeed in — establishing civil laws that de-legitimize those very same religious principles that are the historic foundation for their right to advocate their cause. Thus, they end up in precisely the morass of agenda-serving self-referential inconsistencies and abuses that the founders of the American Republic foresaw.

So, it is no surprise that, as a matter of painfully repeated fact, such zealots will then typically “expel” and/or slander any scientist or educator who challenges their failed paradigm or questions its materialistic foundations. That is why, for instance, Lewontin publicly stated:

Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [Bold emphasis added]

The point of all this should be clear. ID does not seek to establish a theocracy; it simply wants to disestablish a growing Darwinist tyranny.


9] “Evolution” Proves that Intelligent Design is Wrong

The word “evolution” can mean different things. The simplest meaning refers to the natural history of the appearance of different living forms. A stronger meaning implies common descent, in its universal form (all organisms have descended from a single common ancestor) or in partial form (particular groups of organisms have descended from a common ancestor). “Evolution” is often defined as descent with modification, or simply as changes in the frequencies of alleles in the gene pool of a population.
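As a concrete illustration of that last, narrowest definition, here is a minimal sketch (our own illustration, not drawn from any work cited here) of how the frequency of a beneficial allele shifts under simple selection; the starting frequency and the selection coefficient s are arbitrary illustrative values:

```python
# Toy haploid selection model: allele A has fitness 1 + s, allele a fitness 1.
# The standard one-generation update is p' = p(1 + s) / (1 + s*p).
p, s = 0.01, 0.05   # hypothetical starting frequency and fitness advantage

for generation in range(1, 6):
    p = p * (1 + s) / (1 + s * p)
    print(f"generation {generation}: frequency of A = {p:.4f}")
```

On any of the definitions above, such shifts in allele frequencies count as “evolution”; the question at issue in what follows is what they can, and cannot, be extrapolated to explain.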

None of those definitions can prove ID wrong, because none are in any way incompatible with it.

ID is a theory about the cause of genetic information, not about the modalities or the natural history of its appearance, and it is in no way incompatible with the many well-known patterns of limited modification of that information usually defined as “microevolution.” ID affirms that design is the cause, or at least a main cause, of complex biological information. A theory which would indeed be an alternative to ID, and therefore could prove it wrong, is any empirically well-supported “causal theory” which excludes design; in other words, any theory that fits well with the evidence and could explain the presence or emergence of complex biological information through chance, necessity, any mix of the two, or any other scenario which does not include design. However, once we rule out “just-so stories” and the like, we will see that there is not today, nor has there ever been, such a theory. Furthermore, the only empirically well-supported source of functionally specific, complex information is: intelligence.

To sum it up: no definition of evolution is really incompatible with an ID scenario. Any causal theory of evolution which does not include design is obviously alternative to, and incompatible with, ID.

However, while many such theories have indeed been proposed, they are consistently wanting in the necessary degree of empirical support. By contrast, design is an empirically known source of the class of information – complex, specified information (CSI) — exhibited by complex biological systems.


10] The Evidence for Common Descent is Incompatible with Intelligent Design

ID is a theory about the cause of complex biological information. Common descent (CD) is a theory about the modalities of implementation of that information. They are two separate theories about two different aspects of the problem, totally independent and totally compatible. In other words, one can affirm CD and ID, CD and Darwinian Evolution, or ID and not CD. However, if one believes in Darwinian Evolution, CD is a necessary implication.

CD theory exists in two forms, universal CD and partial CD. No one can deny that there is evidence for the theory of CD (such as ERVs, homologies, and so on). That is probably the reason why many IDists do accept CD. Others do not agree that this evidence is really convincing, or suggest that it may in part reflect common design. But ID theory, proper, has nothing to do with all that.

ID affirms that design is the key cause of complex biological information. The implementation of design can well be realized through common descent, that is through implementation of new information in existing biological beings. That can be done gradually or less gradually. All these are modalities of the implementation of information, and not causes of the information itself. ID theory is about causes.


11] Darwinian evolution is a Vastly More “Simple” Argument than Intelligent Design

This argument usually goes with passionate invocations of Occam’s Razor. Well, echoing Einstein, the answer is very easy: nothing is really simple, if it does not work.

Occam’s Razor is certainly not intended to promote false – and thus merely simplistic – theories in the name of their supposed “simplicity.” If Darwinian evolution and ID both explained well what we know about complex biological information, then we could argue about which is the simpler theory. But that is not the case. One of the most important results of ID theory is that it effectively falsifies Darwinian theory.

We should prefer a working theory to a falsified one, without arguing about “simplicity”.

Moreover, ID and Darwinian evolution are so different that it is really meaningless to compare their “simplicity.” According to Darwinists, ID is “not simple” because it postulates a designer. According to IDists, Darwinian evolution is “not simple” because it tries and fails to explain complex biological information — which has all the key properties of known designed things — through complicated, ad hoc and artificial assumptions and question-begging rules, just to avoid the “simple” (and even “natural”) explanation of a designer.

Such discussions are really pointless, more philosophy than science. The only important scientific point is: which theory gives the empirically well-supported, “best explanation”?


12] Macro-evolution is nothing but lots and lots of “micro-evolution”!

Such a point of view is simply untenable, and it denotes a complete misunderstanding of the nature of function. Macroevolution, in all its possible meanings, implies the emergence of new complex functions. A function is not the simplistic sum of a great number of “elementary” sub-functions: sub-functions have to be interfaced and coherently integrated to give a smoothly performing whole. In the same way, macroevolution is not the mere sum of elementary microevolutionary events.

A computer program, for instance, is not the sum of simple instructions. Even though it is ultimately composed of simple instructions, the information-processing capacity of the software depends on the special, complex ordering of those instructions. You will never obtain a complex computer program by randomly assembling elementary instructions or modules of such instructions (as the sketch below illustrates).
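To make the point concrete, here is a minimal sketch (our own illustration, not a formal argument): take a trivially small working program, shuffle its instructions at random, and count how often the result is even syntactically valid, let alone functionally equivalent to the original.

```python
import random

# A tiny but working program, treated as a list of elementary instructions.
program_lines = [
    "def total(numbers):",
    "    result = 0",
    "    for n in numbers:",
    "        result = result + n",
    "    return result",
]

trials, valid = 100_000, 0
for _ in range(trials):
    shuffled = program_lines[:]
    random.shuffle(shuffled)
    try:
        # Syntax check only; compiling says nothing about correct behavior.
        compile("\n".join(shuffled), "<candidate>", "exec")
        valid += 1
    except SyntaxError:
        pass

print(f"{valid} of {trials} random arrangements even compile")
```

Even at five lines, only a small fraction of arrangements compile, and those that do almost never compute the original sum; syntactic validity is a far weaker requirement than function, and the valid fraction shrinks rapidly as the program grows.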

In the same way, macroevolution cannot be a linear, simple or random accumulation of microevolutionary steps.

Microevolution, in all its known examples (antibiotic resistance, and similar) is made of simple variations, which are selectable for the immediate advantage connected to them. But a new functional protein cannot be built by simple selectable variations, any more than a poem can be created by random variations of single letters, or software written by a sequence of elementary (bit-like) random variations, each of them improving the “function” of the software.

Function simply does not work that way. Function derives from higher levels of order and connection, which cannot emerge from a random accumulation of micro-variations. As the complexity (number of bits) of the functional sequence increases, the search space increases exponentially, rapidly denying any chance of random exploration of the space itself.
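The growth is easy to quantify. A back-of-envelope sketch (ours; the lengths are merely illustrative) of how raw sequence space scales with length for binary strings, DNA (4 letters), and proteins (20 amino acids):

```python
import math

# The number of possible sequences of a given length is alphabet_size ** length;
# we report it as a power of ten to avoid astronomically large integers.
for length in (10, 100, 300):
    for name, size in (("binary", 2), ("DNA", 4), ("protein", 20)):
        exponent = length * math.log10(size)
        print(f"{name:>7} sequences of length {length}: about 10^{exponent:.0f}")
```

A 300-residue protein, for instance, sits in a space of roughly 10^390 sequences, so a random walk with any realistic resources can sample only a negligible fraction of the space.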

13] Real Scientists Do Not Use Terms Like Microevolution or Macroevolution

The best answer to this claim, which is little more than an urban legend, is to cite relevant cases. First, textbooks:

Campbell’s Biology (4th Ed.) states: “macroevolution: Evolutionary change on a grand scale, encompassing the origin of novel designs, evolutionary trends, adaptive radiation, and mass extinction.” [By contrast, this book defines “microevolution” as “a change in the gene pool of a population over a succession of generations”]

Futuyma’s Evolutionary Biology, in the edition used by a senior member at UD for an upper division College course, states, “In Chapters 23 through 25, we will analyze the principles of MACROEVOLUTION, that is, the origin and diversification of higher taxa.” (pg. 447, emphasis in original). [Futuyma contrasts “microevolution” — “slight, short-term evolutionary changes within species.”]

In his 1989 McGraw Hill textbook, Macroevolutionary Dynamics, Niles Eldredge admits that “[m]ost families, orders, classes, and phyla appear rather suddenly in the fossil record, often without anatomically intermediate forms smoothly interlinking evolutionarily derived descendant taxa with their presumed ancestors.” (pg. 22.) In Macroevolution: Pattern and Process (Steven M. Stanley, The Johns Hopkins University Press, 1998 version), we read that, “[t]he known fossil record fails to document a single example of phyletic evolution accomplishing a major morphological transition and hence offers no evidence that the gradualistic model can be valid.” (pg. 39)

The scientific journal literature also uses the terms “macroevolution” or “microevolution.”

In 1980, Roger Lewin reported in Science on a major meeting at the University of Chicago that sought to reconcile biologists’ understandings of evolution with the findings of paleontology:

“The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No.” (Roger Lewin, “Evolutionary Theory Under Fire,” Science, Vol. 210:883-887, Nov. 1980.)

Two years earlier, Robert E. Ricklefs had written in an article in Science entitled “Paleontologists confronting macroevolution,” contending:

“The punctuated equilibrium model has been widely accepted, not because it has a compelling theoretical basis but because it appears to resolve a dilemma. … apart from its intrinsic circularity (one could argue that speciation can occur only when phyletic change is rapid, not vice versa), the model is more ad hoc explanation than theory, and it rests on shaky ground.” (Science, Vol. 199:58-60, Jan. 6, 1978.)

So, if such terms are currently in disfavor, that is clearly because they highlight problems with modern evolutionary theory to which it is currently impolitic to draw attention. In the end, the terms are plainly legitimate and meaningful, as they speak to an obvious and real distinction between (a) the population changes that are directly observationally confirmed, “microevolution,” and (b) the major proposed body-plan-transformation-level changes that are not: “macroevolution.”

14] Real Scientists Do Not Use Terms Like “Darwinism.” The word “Darwinism” is a derogatory term used by creationists, intelligent design supporters, and other opponents of evolutionary theory that has no real meaning except as a rhetorical device to discredit evolutionary biologists.

Design thinkers sometimes use the term “Darwinism” for the sake of brevity, but we are obviously aware that it is not the original nineteenth century historical version of Darwin’s thought which is at stake here.

Nor is the suggested appeal to “no true scientist” appropriate. As the New World Encyclopedia article on “Darwinism” remarks:

Darwinism and other -isms

It is felt by some that the term “Darwinism” is sometimes used by creationists as a somewhat derogatory term for “evolutionary biology,” in that casting of evolution as an “ism”—a doctrine or belief—strengthens calls for “equal time” for other beliefs, such as creationism or intelligent design. However, top evolutionary scientists, such as Gould and Mayr, have used the term repeatedly, without any derogatory connotations. [NWE, art. “Darwinism,” Oct. 23, 2005, acc. Nov. 11, 2010.]

We see here a now very familiar, unfortunate rhetorical tactic. Whenever a term wanders out of the world of journals and textbooks into popular usage, and is picked up by critics of evolutionary materialism, proponents of Darwinism tend to deride those who use it, on the claim that such terms are not used by “true scientists.”

If “no true Scotsman” is a fallacy, so too is “no true scientist.” All the more so because any number of design thinkers, old and young earth creationists, and other critics of the Modern Evolutionary Synthesis (aka “[Neo-]Darwinism”) do have relevant, earned academic qualifications and credentials. The real issue is the balance of the case on the merits, not who uses what terms.

The main object of ID criticism of “Darwinism” is usually classical neo-Darwinism, aka “the modern synthesis,” which tries to explain biological information in the main in terms of the dynamic:

RV + NS → DWM

(Random [or, “chance”] genetic Variation plus Natural Selection acting together yield Descent With Modification. This has been observed at the micro-level, and has been extrapolated — without direct observational support — to the macro-level of body plans. Unfortunately, on the strength of the former, the latter is too often presented as an empirical “fact,” often using the comparison that it is as certain as gravity and the orbiting of the planets around the sun. The proper comparison, though, is not the observed orbiting of planets or falling of unsupported apples, but the far more speculative and tentative models of Solar System origins.)

ID proponents acknowledge that Darwinian mechanisms operate within a limited scope (changes in beak sizes among finches as a result of environmental pressures; development of resistance to antibiotics by certain bacteria). But they dispute that the mechanism responsible for these micro-evolutionary changes is also responsible for macro-evolutionary changes. In other words, ID proponents agree that Darwinian processes can change the size of finch beaks across generations, but they dispute that those processes are solely responsible for the existence of finches, or birds or dinosaurs, or land-animals in the first place.

At the macro-evolutionary level, ID proponents point out that Darwinism is too often rooted in an evolutionary materialist metaphysical presupposition imposed on science and posing as a scientific theory; as Richard Lewontin notoriously admitted in his infamous 1997 NYRB article, “Billions and Billions of Demons”:

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.

Grounded in materialistic ideology, such Darwinism holds that purposeless, mindless, physical mechanisms, manifested as small genetic changes, can drive the evolutionary process to produce all observed complexity and biodiversity on earth. As such, it interprets all evidence in light of its own materialistic ideology and rules out in principle – indeed, a priori – any possibility that any part of the evolutionary process could have been designed.

Like the mythical bandit Procrustes, who reshaped the bodies of his unfortunate visitors to fit his iron bed, Darwinism reshapes biological evidence to fit its ironclad worldview.

Design thinkers are also perfectly aware that many new forms of evolutionary thought exist, but unfortunately they are typically warped by the same a priori commitments.

The same NWE article on Darwinism is therefore correct to further observe:

There are some scientists who feel that the importance accorded to genes in natural selection may be overstated. According to Jonathan Wells [NB: a design thinker and critic of Darwinism], genetic expression in developing embryos is impacted by morphology as well, such as membranes and cytoskeletal structure. DNA is seen as providing the means for coding of the proteins, but not necessarily the development of the embryo, the instructions of which must reside elsewhere. It is possible that the importance of sexual reproduction and genetic recombination in introducing variability also may be understated.

UD’s resident Darwinist and critic, the respected Allen MacNeill, adds that in addition to the classic Neo-Darwinian synthesis of the 1920s–40s, modern evolutionary thought embraces:

separate but related set of interconnected theories explaining the origin and modification of the phenotypic characteristics of living organisms, consisting (at a bare minimum) of the mechanisms of natural selection, sexual selection, genetic drift, and neutral molecular evolution in deep geological time, grounded (at least in part) in theoretical mathematical models of population genetics, depending on multiple sources of heritable phenotypic variation, and supported by inference from multiple sources of empirical evidence, including field and laboratory research in the fields of biochemistry, cell biology, comparative physiology, developmental biology, ecology, ethology, genetics, neurobiology, and physiological ecology. [Comment,”Darwinism” UD discussion thread, 11/10/2010, 10:51 pm.]

It is important to understand, however, that while ID arguments are often targeted to classical neo-Darwinism, they are perfectly valid for all forms of explanatory theories of biological information which “a priori” do not admit the possible intervention of a design process.

In other words, according to ID theory, no unintelligent causal mechanism ever proposed for the generation of information — whether based on chance, necessity, a combination of the two, or any other blindly mechanical form of “cause” — is credibly capable of generating the CSI in biological information within the scope of our observed universe, which is often estimated to comprise about 10^80 atoms and to have existed for some 13.7 billion years (see the sketch just below for the arithmetic).
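The scale of those resources underlies Dembski’s well-known “universal probability bound.” Here is a quick sketch of the arithmetic (the 10^45 Planck-time state transitions per second and the generous 10^25-second duration are the standard figures from that literature; only the 10^80 atoms appear in the text above):

```python
import math

atoms          = 1e80   # estimated atoms in the observable universe
states_per_sec = 1e45   # maximum state transitions per atom per second (Planck time)
seconds        = 1e25   # a generous upper bound on the duration of the cosmos

max_events = atoms * states_per_sec * seconds   # about 10^150 elementary events
bits = math.log2(max_events)                    # the same bound expressed in bits

print(f"maximum elementary events: about 10^{math.log10(max_events):.0f}")
print(f"equivalent information bound: about {bits:.0f} bits")
```

This is where the roughly 500-bit threshold often used in design arguments comes from: a specification requiring more than about 500 bits of information exhausts the probabilistic resources of the observed universe taken as a whole.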

(This claim, of course, can in principle be easily empirically falsified by simply producing a case where, on reliable observation, such forces of undirected chance plus necessity have credibly generated CSI. But, while there are literally billions of cases of intelligent causation of such CSI [think: Internet], there are, notoriously, no credible cases of chance and necessity alone generating CSI. For instance, genetic algorithms are intelligently designed, and use constrained random-walk searches within islands of function; the problem for evolutionary theory is to get to such islands of function in the vast sea of non-functional but chemically possible DNA and amino-acid chain molecules, within the material and temporal resources of the Earth, much less the observed cosmos. The deep isolation of such islands of function leads to the confident stance of design thinkers on the matter. For, the only observed and probabilistically plausible solution to the coded, functionally specific information-generation problem is intelligently directed configuration; aka, design.)
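To see what “intelligently designed” means for a genetic algorithm, consider a minimal sketch in the spirit of Dawkins’ famous “weasel” program (our own illustration): the target phrase, the fitness function measuring closeness to it, and the mutation and selection machinery are all supplied in advance by the programmer, so the search begins already on an island of function.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # designer-supplied specification
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    # Designer-supplied measure: position-by-position matches to the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Random variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Selection: keep the fittest of the parent and its mutated offspring.
    candidates = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(candidates, key=fitness)
    generation += 1

print(f"Reached the target in {generation} generations")
```

The search succeeds quickly precisely because the information about the target was built in from the start; remove the designer-supplied fitness function and the algorithm reverts to blind sampling of the full sequence space.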


15] Nothing is Wrong with the Modern Synthesis! (And, by the way, what kind of “Darwinism” is ID dealing with? Why?)

The “Modern Synthesis” is the classical form of Neo-Darwinism, which assigns to random variation (RV) of genes and natural selection (NS) of the varied competing sub-populations the main role in driving biological evolution at micro- and macro- (body-plan origination) levels. While many modern biologists, like Dawkins, still more or less adhere to such a paradigm, others would be ready to declare that the modern synthesis is “history.”

Some of the most serious alternatives to the classical Neo-Darwinian paradigm have been: the theory of neutral evolution, due mainly to Kimura, which focuses on the role of neutral mutations and genetic drift; and the theory of punctuated equilibrium of Eldredge and Gould, which favors a scenario of stasis and relatively rapid change in evolution, in contrast to the traditional gradualism. These points of view, even if they have been in some way “integrated” into classical Neo-Darwinism, represent genuinely alternative interpretations, sometimes radically different from the tradition. More recently, classical Neo-Darwinism has faced even more radical attempts at revision, focusing mainly on the search for new sources of variation, and often reassessing the role of natural selection: we can cite here the contributions of Lynn Margulis (endosymbiotic theory), of Sean Carroll (evolutionary developmental biology, or evo-devo), and many others, while Allen MacNeill (a sometime, and often helpful, contributor at UD) has compiled a famous and very long list of “engines of variation” which includes possible phenotype-genotype interactions and many other classes of supposed alternative mechanisms. In general, great attention has recently been given to adaptational mechanisms (even in the form of neo-Lamarckism) and to epigenetic inheritance.

One of the results of such heterogeneity of contemporary evolutionary thought has been that ID is often accused of dealing with one form and not with another, be it classical Neo-Darwinism or the most recent examples of what we may call: Neo-Neo-Darwinism.

The truth is much simpler: as a causal theory about the origin of biological information, ID is both a criticism of, and an alternative to, all theories which try to explain biological information by purely unguided mechanisms.

In the final sense, any list of “engines of variation” that “permits” only unguided mechanisms excludes design, and is thus based, at the basic causal level, on necessity or chance or some mixture of the two. This is bias, not proper science, as it excludes, before the facts can speak, another known “engine of variation” for contingent objects: design. So, we may directly see that the counter-arguments and alternatives provided by the ID approach apply equally to classical Neo-Darwinian theory and to all of these alternatives.

The reason why ID criticism is usually directed more specifically at classical Neo-Darwinism is that, in the end, RV + NS remains the most widely used and most detailed causal model of unguided evolution. It is difficult to analyze in detail alternative models which have never been developed to the point that they can be critically evaluated, and so the design-theory commentary on these newer models often remains at a very generic level. But, we must underscore: ID arguments are equally valid for all cases. All forms of “random variation” are just that – random – and so must obey the laws of statistics; and all forms of “necessity” – including natural selection (as it is usually presented) – must be expressed in a credible and consistent logico-mathematical model.

Unless and until new causal principles are discovered, it has been recognized since time immemorial – at least since Plato in The Laws, Book X – that design is the only known alternative/complement to chance and necessity. And so, the only truly valid scientific approach is one that accepts at the outset the possibility of design as well as chance and necessity, and then seeks reliable signs that can differentiate the role played by each in the key aspects of life-forms.


16] ID is really an attempt at overthrowing the well-established principles of science. It is a theory which denies the very history of modern rational thought and of our scientific tradition.

This objection completely misreads the debate. For centuries, design thinking defined the scientific landscape and held fast to the proposition that a transcendent creator fashioned the universe for discovery. That is the “old” idea that inspired many great scientists from the days of Newton to those of Einstein. By contrast, it was the “enlightenment” approach to philosophy followed by the Darwinian approach to science that completely reshaped our notions about the physical world. Design thinking remains consistent, but evolutionary thinking keeps finding new ways to explain away design and shrug it off as an “illusion.”

In that sense, the questioner has it backwards. In fact, it is ID that is preserving an old idea and arguing against a new idea, namely the proposition that design is an “illusion.” ID is simply challenging a challenge, asking Darwinists to provide evidence that supports that new idea. So far, they have only offered evidence for that which we already knew, namely that features in living organisms change over time. By contrast, they have offered no decisive evidence – question-begging imposed “rules” don’t count — for their extraordinary claim that law and chance alone can explain the apparent design of life.

It seems fair, then, to say that all too many recent developments in evolutionary theory are little more than “damage control” initiatives calculated to cover up failed naturalistic explanations of the past.  In the last 150 years, materialists have offered us, in succession, Darwin’s general theory, modern evolutionary theory, punctuated equilibrium, structuralism, and self-organization. Currently, they are looking for yet another paradigm. So, while Darwinists are not unified over which unintelligent mechanism drives macro-evolution, they all agree that some such mechanism MUST exist. That is why it is fair to call them Darwinists: they are united in their faith commitment. That they can find little evidence to support that commitment does not seem to be a problem for them.

In fact, some of the more ardent Darwinists, driven by zeal, purposely confuse the debate by mischaracterizing ID as an “anti-evolution” movement. That this charge is false does not seem to inhibit them in the least. For the record: some ID advocates accept both micro- and macro-evolution, while others accept only micro-evolution. What they all agree on is this: there is no evidence to support the Darwinists’ key claims that (i) mindless mechanistic forces drive macro-evolution and (ii) all apparent design in natural objects is an “illusion.”

The problem is less about ID vs. evolution and more about the mistaken – or sometimes, outright dishonest — way that too many Darwinists frame the issue.


17] Methodological naturalism is the rule of science

Methodological naturalism is simply a quite recently imposed “rule” that (a) defines science as a search for natural causes of observed phenomena AND (b) forbids the researcher to consider any other explanation, regardless of what the evidence may indicate. In keeping with that principle, it begs the question and roundly declares that (c) any research that finds evidence of design in nature is invalid and that (d) any methods employed toward that end are non-scientific. For instance, in a pamphlet published in 2008, the US National Academy of Sciences declared:

In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. [Science, Evolution and Creationism, p. 10. Emphases added.]

The resort to loaded language should cue us that there is more than mere objective science going on here!

A second clue is a basic fact: the very NAS scientists themselves provide instances of an alternative to forces tracing to chance and/or blind mechanical necessity. For they are intelligent, creative agents who act into the empirical world in ways that leave empirically detectable and testable traces. Moreover, the claim or assumption that all such intelligences “must” in the end trace to chance and/or necessity acting within a materialistic cosmos is a debatable philosophical view about the remote and unobserved past history of our cosmos. It is not at all an established scientific “fact” on the level of the direct, repeatable observations that have led us to the conclusion that Earth and the other planets orbit the Sun.

In short, the NAS would have been better advised to study the contrast between natural and artificial (or, intelligent) causes than to issue loaded language over natural vs. supernatural ones.

Notwithstanding, many Darwinist members of the guild of scholars have instituted or supported the question-begging rule of “methodological naturalism” ever since the 1980s. So, if an ID scientist finds functionally specified complex information in a DNA molecule and tries to explain it in light of its only known cause (intelligence), supporters of methodological naturalism will throw the evidence out or insist that it be re-interpreted as the product of processes tracing to chance and/or necessity, regardless of how implausible or improbable the explanations may be. Further, if the ID scientist dares to challenge this politically correct rule, he will be disenfranchised from the scientific community and all his work will be discredited and dismissed.

Obviously, this is grossly unfair censorship.

Worse, it is massively destructive to the historic and proper role of science as an unfettered (but intellectually and ethically responsible) search for the truth about our world in light of the evidence of observation and experience.


18] Methodological naturalism is a centuries-old, traditional rule for science

In an attempt to rationalize the recently imposed “rule” of methodological naturalism, some Darwinist academics have resorted to rewriting history. As the “revised” story goes, Newton and other greats of the founding era of modern science subscribed to the arbitrary standard of ruling out design in principle. Thus, one gathers, ID cannot be science because it violates the “traditional” and “well-established” criteria for science.

However, as anyone familiar with the real history of science knows – e.g. cf. Newton’s General Scholium to his great scientific work, Principia – this proposition is at best a gross and irresponsible error, or even an outright deception. For, most scientists of the founding era were arguing on behalf of the proposition that God, as a super-rational being, does not act frivolously, unpredictably, and without purpose. For such men, and for their time, searching for “natural causes” was a testimony to the belief that the Christian God, unlike the anthropomorphized Greek gods, did not throw capricious temper tantrums and toss lightning bolts out of the sky. In other words, the issue was not natural causes vs. design (they were all design thinkers); it was orderly and intelligible natural processes vs. chaos.

That directly contradicts Lewontin’s dismissive assertion that “[t]o appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.” Indeed, the theologians and philosophers will remind us that for miracles to stand out as sign-posts of more than the ordinary being at work, they require that nature as a whole works in an orderly, intelligible and predictable way.

So, for the founders of Modern Science, science (as a delimited field of study within a wider domain, i.e., “natural philosophy” and “natural history”) was primarily about discovering the underlying principles, forces and circumstances that drive observed natural phenomena. But, as Newton so aptly illustrates, it was simply not in their minds to insist dogmatically that only “natural” causes — i.e. blind mechanical necessity and even more blind chance – exist or may be resorted to in accounting for the nature and functions of our world. They made a provisional judgment based on the best information available, but they would never have dared to presume that they knew enough to close off all other options.

Further, in their own estimation, the foundational scientists were “thinking God’s thoughts after him.” Obviously, they could hardly have believed in Methodological Naturalism while, at the same time, believing that God, as Creator, purposely left clues about his handiwork, clues that his creatures could read as evidence of his existence and of his plan for the orderly conduct of the world, and could use for their betterment. Even apart from their religious inspiration, they understood that only the individual scientist knows what he is researching and why, so it is s/he who must in the first instance decide which methods are reasonable, responsible, and appropriate for the task.

Indeed, it was their love of truth and the disinterested search for it that made them great. They were always ready to challenge rigid conventions and seek new answers. More importantly, they were wise enough to know that someone new could come along and make their ideas seem old, just as they had made the ideas of their predecessors seem old.

Now, in our day, a new idea has indeed come along, and it is embodied in the information found in a DNA molecule. It is beyond ridiculous, then, to suggest that men like Francis Bacon, Galileo, Sir Isaac Newton, Faraday, Maxwell or Lord Kelvin — all of whom were in part motivated by religion and whose religion gave meaning to their science — would ignore or dismiss such evidence of design because of its possible religious implications.


19] Science does not address the “Supernatural”

As a matter of brute fact, Science can address anything it pleases, and has already weighed in on several events that have been associated with the supernatural.

Physicists have speculated about the weight of the stone at the time of Christ’s resurrection and the likelihood that Roman guards could have lifted it. Medical doctors have aided the Catholic Church in its canonization process by determining whether or not a medical miracle can be attributed to the special intervention of a saint. Statisticians have calculated the improbability that 459 Old Testament prophecies about Jesus Christ would become realized as historical events as reported in the New Testament. More famously, chemists have calculated the possible age of the Shroud of Turin. Indeed, some would say that astronomers “addressed” a supernatural creative event when they found evidence for the “big bang.”

So, when Darwinist ideologues say that “science does not address the supernatural,” what they really mean is that science should not be PERMITTED to address the supernatural (or anything that could remotely be associated with it). This attitude of mind goes by the name of “methodological naturalism.” It is best expressed by Lewontin, who writes,

“Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories [in evolutionary biology], because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science compel us to accept a material explanation of the phenomenal world, but on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material causes, no matter how counterintuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.”

The lesson here should be clear. We should not put science into a politically-correct, materialistic straitjacket.

20] ID scientists are participating in a tautological exercise. They don’t really draw an inference to design; they assume a design in advance and then call it an inference

A tautology (in the bad sense) consists of smuggling the conclusion of an argument into the assumption; it uninformatively says the same thing twice in different words: a bachelor is an unmarried male.

It is as a rule very easy to discern when this is happening. If, for example, one begins by assuming that everything in nature must have been designed, then one will “find” design in nature. Equally, if one assumes there is and can be no design in nature – as say Lewontin did – then indeed one will never accept evidence (regardless of its actual strength) that points to such design. In either case, the conclusion is plainly being question-beggingly embedded in the hypothesis, and the error is not that hard to spot.

However, while question-begging tautologies do indeed constitute bad reasoning, self-evident first principles – things that once we understand them clearly in light of our experience of the world as intelligent and rational creatures, we see MUST be so — define the necessary conditions for reason itself. But, many Darwinists, on hearing that science requires such metaphysical foundations (they generally don’t know that), begin by denying the fact and end by mistakenly concluding that those foundational assumptions must be tautologies.

The resulting confused debates aptly (and often entertainingly) illustrate the underlying point: we inescapably must reason FROM first principles; we cannot reason our way TO – i.e. “prove” — them.

If, on the other hand, a scientist begins with the observation that certain familiar patterns are present in a DNA molecule, s/he can, without begging questions, use the methods of inference to best current explanation to draw inferences about design. Especially since, in every case where we independently know the cause of such precise and complex functional patterns — for instance, this paragraph — they come from intelligence.


21] Evolution and artificial intelligence research have proved that there is no such thing as the “free will” that IDers attribute to designers; and, there is a scientifically respectable form of “free will” that is fully compatible with determinism

“Free will”: A property of conscious intelligent beings (e.g. as we ourselves exemplify) which denies any rigid connection between input and output of information into and from the consciousness, and which is characterized by some form of “causal intervention” of the subject (the I) on the output of consciousness, both objectively observable and subjectively perceived by the intervening I.

Free will as just defined does not mean absolute freedom: the influences present in the input, in the context, and in the existing mind with all its inertial factors and structures, are certainly real. But they are not sufficient to explain or determine the output. In other words, the actions of the I are vastly influenced by outer and inner factors, but never completely determined.

Moreover, free will is in no way strictly linked to the objective results of action: once the action is outputted by consciousness, it can be modified by any external factor independent of the agent. That does not change the fact that free will has been exercised in outputting the action. Thus, the claimed compatibilist account – roughly: one may be subjectively free from imposed constraints but objectively, one’s cognitive, verbal, kinesthetic and social behaviors are predetermined by various factors – fails.

In other words, while the agent is always heavily influenced and limited by external reality, free will is a constant inner space of freedom which can always express itself, in greater or smaller ways, in the “black box” between cognition and action.

Or, as the current status of AI research and scientific studies on origins shows, our reasoning, deciding and acting are not simply and wholly reducible to deterministic forces creating and acting on or constraining matter and energy – most notably, in our brain tissues. Nor do they simply “emerge” from sufficiently sophisticated software. So, if that is being claimed, it needs to be shown, not merely asserted or assumed then backed up with just-so origins stories. (And, on long observation, that responsible demonstration is precisely what is as a rule not brought forth when the issue of free will comes up in discussions. In more direct terms: please, do not beg the question.)

Furthermore, free will is inwardly and intuitively connected to the concept of responsibility.

Indeed, no concept of intellectual, decision-making and moral responsibility could even exist without our intuitive certainty of free will in ourselves and (inferentially) in others. But there is no easy way to define responsibility in a universal way. As free will is essentially a very intimate property of consciousness, so also responsibility is very intimate and mysterious, although for social necessities it is often, and rightfully, stated in a set of outer rules.

To sum up, free will is an intimate property of consciousness: the intuitions of a perceiving I and of an acting I within ourselves are the twofold real basis of any representation we have of ourselves and of the external world. But free will is also objectively observable, and is the source of all creativity and choice in human behavior. Thus, it is an empirically anchored principle of action exhibited by known intelligent agents, and so it properly takes its place in a theory that addresses reliable identification of signs of such intelligent action.

22] Who Designed the Designer?

Intelligent design theory seeks only to determine whether or not an object was designed. Since it studies only the empirically evident effects of design, it cannot directly detect the identity of the designer; much less can it detect the identity of the “designer’s designer.” Science, per se, can only discern the evidence-based implication that a designer was once present.

Moreover, according to the principles of natural theology, the designer of the universe, in principle, does not need another designer at all. If the designer could need a designer, then so could the designer’s designer, and so on. From the time of Aristotle till the present, philosophers and theologians have pointed out that what needs a causal explanation is that which begins to exist. So, they have concluded that such a series of causal chains cannot go on indefinitely: an “infinite regress” of causes is impossible, and all such chains must end with and/or be grounded on a “causeless cause,” a self-existent being that has no need for a cause and depends on nothing except itself. (Indeed, before the general acceptance of the Big Bang theory, materialists commonly thought that the logically implied self-existing, necessary being was the observed universe. But now, we have good reason to think that it came into existence – is thus a contingent being — and so must itself have a cause.)

Ultimately, there can really be only one final cause of the cosmos.

To ask, therefore, “who designed the designer,” is to ask a frivolous question. Typically, radical Darwinists raise the issue because, as believers in a materialistic, mechanistic universe, they assume that all effects must be generated by causes exactly like themselves. This leads to a follow-up objection . . .

23] The Designer Must be Complex and Thus Could Never Have Existed

This is, strictly speaking, a philosophical rather than a scientific argument, and its main thrust is at theists. So, here is a possible theistic answer from one of our comment threads:

“[M]any materialists seem to think (Dawkins included) that a hypothetical divine designer should by definition be complex. That’s not true, or at least it’s not true for most concepts of God which have been entertained for centuries by most thinkers and philosophers. God, in the measure that He is thought as an explanation of complexity, is usually conceived as simple. That concept is inherent in the important notion of transcendence. A transcendent cause is a simple fundamental reality which can explain the phenomenal complexity we observe in reality. So, Darwinists are perfectly free not to believe God exists, but I cannot understand why they have to argue that, if God exists, He must be complex. If God exists, He is simple, He is transcendent, He is not the sum of parts, He is rather the creator of parts, of complexity, of external reality. So, if God exists, and He is the designer of reality, there is a very simple explanation for the designed complexity we observe.” [HT: GPuccio]

Broadening that a bit: we are designers, and we are plainly complex in one sense, but we also experience ourselves as just that: selves, i.e. essentially and indivisibly simple wholes. Thus, designers that are both complex and simple can and do exist. The objection therefore begs the question, by assuming rather than demonstrating that the complexity in human designers is a necessary condition for the design process. It also fails to see that we experience ourselves as having indivisible — thus inescapably simple — individual identities, and that such a property could well be necessary for the design process. So, it begs the question a second time.

24] Bad Design Means No Design

This argument assumes an infallible knowledge of the design process.

Some, for example, point to the cruelty in nature, arguing that no self-respecting designer would set things up that way. But that need not be the case. It may well be that the designer chose to create an “optimum design” or a “robust and adaptable design” rather than a “perfect design.” Perhaps some creatures behave exactly the way they do to enhance the ecology in ways that we don’t know about. Perhaps the “apparent” destructive behavior of some animals provides other animals with an advantage in order to maintain balance in nature, or even to change the proportions of the animal population.

Under such circumstances, the “bad design” argument is not an argument against design at all. It is a premature — and, at times, a presumptuous — judgment on the sensibilities of the designer. Coming from theistic evolutionists, who claim to be “devout” Christians, this objection is especially problematic. For, as believers within the Judeo-Christian tradition, they are committed to the doctrine of original sin, through which our first parents disobeyed God and compromised the harmonious relationship between God and man. Accordingly, this break between the creator and the creature affected the relationship between men, animals, and the universe, meaning that the perfect design was rendered imperfect. A spoiled design is not a bad design.

Beyond such theodicy-tinged debates, ID as science makes no claims about an omnipotent or omniscient creator.

From a scientific perspective, a cosmic designer could, in principle, be an imperfect designer and, therefore, create a less than perfect design; indeed, that was precisely the view of many who held to or adapted Plato’s idea of the Demiurge. So, even if one rejects or abandons theism, the “bad design” argument still does not offer a challenge to ID theory as a scientific endeavor.

The real scientific question is this: Is there any evidence for design in nature? Or, if you like, is a design inference the most reasonable conclusion based on the evidence?

25] Intelligent Design proponents deny, without having a reason, that randomness can produce an effect, and then go make something up to fill the void

ID proponents do not deny that “randomness can produce an effect.” For instance, consider the law-like regularity that unsupported heavy objects tend to fall. It is reliable; i.e. we have a mechanical necessity at work — gravity. Now, let our falling heavy object be a die. When it falls, it tumbles and comes to rest with any one of six faces uppermost: i.e. high contingency. But, as the gaming houses of Las Vegas know, that contingency can be (a) effectively undirected (random chance), or (b) it can also be intelligently directed (design).

Also, such highly contingent objects can be used to store information, which can be used to carry out functions in a given situation.

For example, we could make up a code and use trays of dice to implement a six-state digital information storing, transmission and processing system. Similarly, the ASCII text for this web page is based on electronic binary digits clustered in 128-state alphanumeric characters. In principle, random chance could produce any such message, but the islands of functional messages will as a rule be very isolated in the sea of non-functional, arbitrary strings of digits, making it very hard to find functional strings by chance.
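
To make the trays-of-dice idea concrete, here is a minimal sketch in Python (our own toy illustration; the function names and the four-dice-per-byte packing are just choices for this example) that encodes ASCII text as base-6 “dice” digits and decodes it again:

  # Toy illustration: storing ASCII text in base-6 "dice" digits.
  # Each ASCII byte (0-255) fits in 4 dice, since 6^4 = 1296 > 256.
  def to_dice(text):
      digits = []
      for b in text.encode("ascii"):
          for _ in range(4):
              digits.append(b % 6 + 1)  # dice faces 1..6, least significant first
              b //= 6
      return digits

  def from_dice(digits):
      chars = []
      for i in range(0, len(digits), 4):
          b = 0
          for d in reversed(digits[i : i + 4]):
              b = b * 6 + (d - 1)
          chars.append(chr(b))
      return "".join(chars)

  msg = "FSCI"
  assert from_dice(to_dice(msg)) == msg  # round-trip works

Under a fair toss, any tray state is as probable as any other; what is rare is a state that happens to decode to functional text.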

ID thinkers have therefore identified means to test for objects, events or situations that are credibly beyond the reach of chance on the gamut of our observed cosmos. (For simple example, as a rule of thumb, once an entity requires more than about 500 – 1,000 bits of information storage capacity to carry out its core functions, the random walk search resources of the whole observed universe acting for its lifetime will probably not be adequate to get to the functional strings: trying to find a needle in a haystack by chance, on steroids.)
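
As a quick back-of-envelope check of that rule of thumb (our own arithmetic; the ~10^120 figure is the Lloyd estimate quoted below, in section 27):

  import math

  # 500 bits of storage capacity correspond to 2^500 configurations.
  print(500 * math.log10(2))  # ~ 150.5, so 2^500 ~ 3.3 * 10^150 states,
                              # vastly more than the ~10^120 bit operations
                              # estimated for the whole observed universe.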

Now, DNA, for instance, is based on four-state strings of bases [A/C/G/T], and a reasonable estimate for the minimum required for the origin of life is 300,000 – 500,000 bases, or 600 kilobits to a million bits. The configuration space that even just the lower end requires has about 9.94 * 10^180,617 possible states. So, even though it is in principle possible for such a molecule to happen by chance, the odds are not practically different from zero.
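
That headline number is easy to verify with a line of Python (our own check):

  import math

  bases = 300_000              # lower-end estimate used above
  print(bases * math.log10(4)) # ~ 180,617.997, i.e. about
                               # 9.94 * 10^180,617 possible states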

But, intelligent designers routinely create information storage and processing systems that use millions or billions of bits of such storage capacity. Thus, intelligence can routinely do that which is in principle logically possible for random chance, but which would easily empirically exhaust the probabilistic resources of the observed universe.

That is why design thinkers hold that complex, specified information (CSI), per massive observation, is an empirically observable, reliable sign of design.


26] Dembski’s idea of “complex specified information” is nonsense

First of all, the concept of complex specified information (CSI) was not originated by Dembski. For, as origin of life researchers tried to understand the molecular structures of life in the 1970’s, Orgel summed up their findings thusly:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

In short, the concept of complex specified information helped these investigators understand the difference between (a) the highly informational, highly contingent functional macromolecules of life and (b) crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge.

Namely, complex, specified information, shown in the mutually adapted organization, interfacing and integration of components in systems that depend on properly interacting parts to fulfill objectively observable functions. For that matter, this is exactly the same concept that we see in textual information as expressed in words, sentences and paragraphs in a real-world language.

Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept in the following two decades was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide a probability and information theory based explicitly formal model for quantifying CSI.


27] The Information in Complex Specified Information (CSI) Cannot Be Quantified

That’s simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible.

As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits “functionally specific bits.”

Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly, based on the reasonable probability of finding such a sequence through a random walk based search or its functional equivalent. This approach rests on the observation that the functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, then beyond a certain scope of search it becomes implausible that such a search, even on a planet-wide scale or on a scale comparable to our observed cosmos, will find them. But, we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a “typical” random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;’.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.])

Another empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper “Measuring the functional sequence complexity of proteins”, and is based on an application of Shannon’s H (that is, the “average” or “expected” information communicated per symbol: H(Xf(t)) = -∑ P(Xf(t)) log P(Xf(t))) to known protein sequences in different species.
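
For concreteness, here is a minimal sketch (ours; the column frequencies are invented for illustration) of the per-site Shannon uncertainty that this approach builds on. Roughly, and as we read Durston et al., the functional bits at a site track the drop in H from a null (ground) state to the observed functional state:

  import math

  def shannon_H(probs):
      # H = -sum p * log2(p), in bits per symbol
      return -sum(p * math.log2(p) for p in probs if p > 0)

  # Toy example: a null state where all 20 amino acids are equally
  # likely, vs. a fully conserved site in the functional protein family.
  H_ground = shannon_H([1 / 20.0] * 20)  # ~ 4.32 bits
  H_site = shannon_H([1.0])              # 0.0 bits: site fully constrained
  print(H_ground - H_site)               # ~ 4.32 functional bits at this site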

A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: “Specification: The Pattern That Signifies Intelligence”.

For instance, on pp. 17 – 24, he argues:

define ϕS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases ϕS(t) and also by the maximum number of binary search-events in our observed universe 10^120]

χ = – log2[10^120 ·ϕS(T)·P(T|H)].

To illustrate, consider a hand of 13 cards that are all spades, which is unique. There are C(52,13) ≈ 635 * 10^9 possible 13-card hands, giving odds of about 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so ϕS(T) = 4. Calculation yields χ ≈ -361, i.e. < 1, so that such a hand is not improbable enough that the – rather conservative — χ metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)
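
The arithmetic can be checked directly by plugging the hand example into the formula (a short script of ours):

  import math

  P_T_given_H = 1 / math.comb(52, 13)  # a specific all-spades hand among C(52,13)
  phi_S = 4                            # four all-of-one-suit hands
  chi = -math.log2(10**120 * phi_S * P_T_given_H)
  print(round(chi))                    # -361: far below 1, so no design inference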

Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design.


28] What about FSCI [Functionally Specific, Complex Information] ? Isn’t it just a “pet idea” of some dubious commenters at UD?

Not at all. FSCI — Functionally Specific, Complex Information or Function-Specifying Complex Information (occasionally FCSI: Functionally Complex, Specified Information) – is a descriptive summary of the particular subset of CSI identified by several prominent origins of life [OOL] researchers in the 1970’s – 80’s. For at that time, the leading researchers on OOL sought to understand the differences between (a) the highly informational, highly contingent functional macromolecules of life and (b) crystals formed through forces of mechanical necessity, or (c) random polymer strings. In short, FSCI is a descriptive summary of a categorization that emerged as pre-ID movement OOL researchers struggled to understand the difference between crystals, random polymers and informational macromolecules.

Indeed, by 1984, Thaxton, Bradley and Olsen, writing in the technical level book that launched modern design theory, The Mystery of Life’s Origin, in Chapter 8, could summarize from two key origin of life [OOL] researchers as follows:

Yockey [7] and Wickens [5] develop the same distinction [as Orgel], explaining that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. [TMLO, (Dallas, TX: Lewis and Stanley reprint), 1992, erratum insert, p. 130. Emphases added.]

The source of the abbreviation FSCI should thus be obvious – and it is one thing to airily dismiss blog commenters; it is another thing entirely to have to squarely face the result of the work of men like Orgel, Yockey and Wickens as they pursued serious studies on the origin of life. But also, while the cluster of concepts came up in origin of life studies, these same ideas are very familiar in engineering: engineering designs are all about stipulating functionally specific, complex information. Indeed, FSCI is a hallmark of engineered or designed systems.

So, FSCI is actually a functionally specified subset of CSI, i.e. the relevant specification is connected to the presence of a contingent function due to interacting parts that work together in a specified context per requirements of a system, interface, object or process. For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.)
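
The screen example reduces to simple arithmetic (shown here only for concreteness):

  pixels = 800 * 600       # screen resolution in the example above
  fs_bits = pixels * 24    # 24 bits per pixel
  print(fs_bits)           # 11,520,000 functionally specified bits
  print(fs_bits > 1_000)   # True: far above the 500 - 1,000 bit threshold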

On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design.

29] The ID explanatory filter cannot rule out chance or unknown laws!

Ever since the days of Plato’s The Laws, Book X, thinkers have noted that we may usefully describe causes for events in terms of mechanical necessity, chance and intelligent action.

Reflecting on an unsupported heavy object, we see that it tends to fall; thus, the mechanical force of gravity leads to a law-like necessity and a resulting natural regularity. If the object is a die, the face that is uppermost after it tumbles is (for practical purposes) a matter of chance: undirected contingency. However, if the die is tossed as a part of a game (and/or if it is loaded), the event is a part of a directed contingency – that is, a design. So, law, chance and design are distinct and credibly exhaustive (but sometimes interacting) causal factors; and for each key aspect of a situation, object or system, we can reasonably distinguish its dominant causal factor(s).

If the dominant factor for a given aspect of reality is law-like necessity, we will see a natural regularity; thus, low contingency — and that is just how we would recognize the presence of and then identify “unknown laws.” That is, the very presence of high contingency points us away from explanation by laws. (And, if the laws of the universe are set up to produce life, such blatantly purposeful cosmic fine-tuning would point rather strongly to a higher level of design. That shines through even in the case of a so-called multiverse, since our “local” laws are already known to be tightly tuned to support life. So, echoing Robin Collins, we would have to address the exactingly precise operation of the “cosmos-baking machine.”)

High contingency, of course, may be undirected (chance) or directed (design).

The explanatory filter therefore helps us distinguish the latter two in certain important cases: where we have specified complexity, based on massive experience, we reliably infer by induction that the cause is design. This obviously does not give us an absolute, once-and-for-all proof; but that just means that the filter has key features of a good scientific theory: reliable results, but subject to falsification or correction in light of new evidence.

30] William Dembski “dispensed with” the Explanatory Filter (EF) and thus Intelligent Design cannot work

This quote by Dembski is probably what you are referring to:

I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.

In a nutshell: Bill made a quick off-the-cuff remark using an unfortunately ambiguous phrase that was immediately latched on to and grossly distorted by Darwinists, who claimed that the “EF does not work” and that “it is a zombie still being pushed by ID proponents despite Bill disavowing it years ago.” But in fact, as the context makes clear – i.e. we are dealing with a real case of “quote-mining” [cf. here vs. here] — the CSI concept is in part based on the properly understood logic of the EF. Just, having gone through the logic, it is easier and “clearer” to then use “straight CSI” as an empirically well-supported, reliable sign of design.

In greater detail: The above is the point of Dembski’s clarifying remarks that: “. . . what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable.” [For illustrative instance, contextually responsive ASCII text in English of at least 143 characters is a “reasonably good example” of CSI. How many cases of such text can you cite that were wholly produced by chance and/or necessity without design (which includes the design of Genetic Algorithms and their search targets and/or oracles that broadcast “warmer/cooler”)?]

Dembski responded to such latching-on as follows, first acknowledging that he had spoken “off-hand” and then clarifying his position in light of the unfortunate ambiguity of the phrasal verb dispensed with:

In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

[….]

I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.

Underlying issue: Now, too, the “rational reconstruction” basis for the EF as it is presented (especially in flowcharts circa 1998) implies that there are facets in the EF that are contextual, intuitive and/or implicit. For instance, even so simple a case as a tumbling die that then settles has necessity (gravity), chance (rolling and tumbling) and design (tossing a die to play a game, and/or the die may be loaded) as possible inputs. So, in applying the EF, we must first isolate relevant aspects of the situation, object or system under study, and apply the EF to each key aspect in turn. Then, we can draw up an overall picture that will show the roles played by chance, necessity and agency.

To do that, we may summarize the “in-practice EF” a bit more precisely as follows (a toy code sketch of the filter’s logic appears just after the list):

1] Observe an object, system, event or situation, identifying key aspects.

2] For each such aspect, identify if there is high/low contingency. (If low, seek to identify and characterize the relevant law(s) at work.)

3] For high contingency, identify if there is complexity + specification. (If there is no recognizable independent specification and/or the aspect is insufficiently complex relative to the universal probability bound, chance cannot be ruled out as the dominant factor; and it is the default explanation for high contingency. [Also, one may then try to characterize the relevant probability distribution.])

4] Where CSI is present, design is inferred as the best current explanation for the relevant aspect; as there is abundant empirical support for that inference. (One may then try to infer the possible purposes, identify candidate designers, and may even reverse-engineer the design (e.g. using TRIZ), etc. [This is one reason why inferring design does not “stop” either scientific investigation or creative invention. Indeed, given their motto “thinking God’s thoughts after him,” the founders of modern science were trying to reverse-engineer what they understood to be God’s creation.])

5] On completing the exercise for the set of key aspects, compose an overall explanatory narrative for the object, event, system or situation that incorporates aspects dominated by law-like necessity, chance and design. (Such may include recommendations for onward investigations and/or applications.)
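
Here is that sketch: a toy Python rendering (our own illustration, not any published ID algorithm; the field names and the bare 500-bit test are simplifying assumptions) of the per-aspect filter logic above:

  from collections import namedtuple

  # One "aspect" of an object, event or system, per step 1 above.
  # The fields are hypothetical simplifications for this sketch.
  Aspect = namedtuple("Aspect", "high_contingency specified info_bits")

  def explanatory_filter(aspect):
      if not aspect.high_contingency:
          # Step 2: low contingency, so seek and characterize the law at work.
          return "law-like necessity"
      if aspect.specified and aspect.info_bits >= 500:
          # Step 4: specified and complex beyond the threshold, infer design.
          return "design (best current explanation)"
      # Step 3: chance is the default explanation for high contingency.
      return "chance"

  print(explanatory_filter(Aspect(True, True, 11_520_000)))  # design
  print(explanatory_filter(Aspect(False, False, 0)))         # law-like necessity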

31] Intelligent Design Tries To Claim That Everything is Designed Where We Obviously See Necessity and Chance

Intelligent Design has never claimed anything like that. Design is just a supplementary causal mechanism, empirically observed in human behavior, which can explain some observed aspects of things that cannot be explained in other ways. Let’s quote Behe on that:

Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism. Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.

And:

I think a lot of folks get confused because they think that all events have to be assigned en masse to either the category of chance or to that of design. I disagree. We live in a universe containing both real chance and real design. Chance events do happen (and can be useful historical markers of common descent), but they don’t explain the background elegance and functional complexity of nature. That required design.

So, it is absolutely not true that ID claims that “everything is designed”. Indeed, a main purpose of ID is exactly to find ways to reasonably distinguish between what is designed and what is not.

32] What types of life are Irreducibly Complex? Or which life is not Irreducibly Complex?

Irreducible Complexity is a property of some machines, not of life itself. Many biological machines which are essential components of all life are Irreducibly Complex (IC). Not all components of living organisms are IC, nor do they necessarily exhibit Complex Specified Information (CSI). For, following Behe, such an entity will be irreducibly complex if and only if its core function depends on a set of several well-matched, mutually co-adapted, interacting components, such that removing any one of them effectively destroys that function.

The fact that a biological machine is IC (as shown, e.g., through genetic knockout studies of the components of the bacterial flagellum) implies that it cannot reasonably be the product of direct Darwinian pathways operating through selection of the observed function. This is because the function only emerges when the whole machine is already there. The real question is whether unguided Darwinian processes, in whatever form, can produce IC machines through indirect pathways; for instance by co-option (producing sub-components which can be selected for a different function, and then “co-opted” for the final function). But any such indirect pathway should be explicitly modeled, and shown to be in the range of what unguided evolution can reasonably do.

A direct Darwinian pathway implies that the steps are selected for the improvement of the same function we find in the final machine. But, IC makes a direct Darwinian pathway impossible. So, only two possibilities are left: either (i) sudden appearance of the complete machine (practically impossible for statistical considerations), or (ii) step by step selection for different functions and co-optation to make a novel function, with this final function completely invisible to natural selection up to the final step.

We should also bear in mind that most – or at least a great many — biological machines in the cell, and most – or at least a great many — macroscopic machines in multicellular beings, are probably IC. (This is a point that Darwinists tend to bypass.)

Darwinists may believe in indirect Darwinian pathways, because it’s the only possible belief which is left for them, but it’s easy to see that it really means believing in repeated near-impossibilities. There is no reason in the world, either logical or statistical, why many complex functions should emerge from the sums of simpler, completely different functions. And even granted that, by incredible luck, that could happen once, how can one believe that it happened millions of times, for the millions (yes, I mean it!) of different IC machines we observe in living beings? The simple fact that Darwinists have to adopt arguments like co-option and indirect pathways to salvage their beliefs is a clear demonstration of how desperate they are.


33] In the Flagellum Behe Ignores that this Organization of Proteins has Verifiable Functions when Particular Proteins are Omitted, i.e. in its simplest form, a protein pump

Irreducible complexity means that the function of a complex machine is not maintained if we take away any of its core parts; e.g. as Minnich did for the bacterial flagellum. In other words, it means that there is no redundancy in the core of the machine, and that none of the parts or sub-assemblies can retain the function of the whole.

So, despite the TTSS (Type Three Secretory System) objection and the like, the flagellum still credibly is irreducibly complex.

Behe’s main argument is that IC machines like the flagellum cannot reasonably be the product of direct Darwinian pathways, because the function only emerges when the machine is wholly assembled, and therefore cannot be selected for before. That is supported by the observation that there are no technically detailed descriptions of such pathways in the scientific literature; which remains the case now over a decade since his observation was first published in 1996 in Darwin’s Black Box. So, Darwinists have tried to devise for IC machines indirect Darwinian pathways, using the notion of co-option, or exaptation, which more or less means: even if the parts or sub-assemblies of the machine cannot express the final function, they can have different functions, be selected for them, and then be co-opted for the new function.

The TTSS is suggested as an example of such a possible co-opted mechanism. The Darwinist argument is that there is strong homology between the proteins of the TTSS and a subset of the proteins of the flagellum which are part of a substructure in the basal body of the flagellum itself. Therefore, the flagellum could have reutilized an existing system.

The hypothesis has some empirical basis in the homology between the two systems: but that should not surprise us, because both the TTSS and the “homologue” subset in the flagellum accomplish a similar function: they pump proteins through a membrane. So, it is somewhat like saying that an airplane and a cart are similar because both have wheels. It is true, but an airplane is not a cart. For, the flagellum is not a TTSS; it is much more. And the sub-machine which pumps proteins in the basal body of the flagellum is similar to, but not the same as the TTSS.

Moreover, it is relevant to observe that the TTSS is used by prokaryote – non-nucleus based — bacteria to prey upon much more complex eukaryote – nucleus-based — cells, which appeared after the prokaryotes with flagella (i.e. the bacterial flagellum). In short, there is an obvious “which comes first?” counter-challenge to the objection: it is at least as credible to argue that the TTSS is a devolution as that it is a candidate prior functional sub-assembly.

To sum up:

  1. A lot of the proteins in the flagellum have no explanation on the basis of homologies to existing bacterial machines, or of partial selectable function.

  2. Even if the functions of the TTSS and of the sub-machine in the flagellum are similar, the two machines are in fact different, and the proteins in the two machines are not the same. Homology does not mean identity.

  3. Most importantly, TTSS arguments notwithstanding: the overall function of the flagellum cannot be accomplished by any simpler subset. That means that the flagellum is irreducibly complex.

  4. Explaining the evolution of the flagellum by indirect pathways would imply explaining all its parts on the basis of partial selectable functions, and explaining also their production, regulation, assemblage, compatibility, and many other complex engineering adaptations. In other words, even if you have wheels and seats, engines and windows, you are still far away from having an airplane.

  5. Finally, it is still very controversial if the flagellum appeared after the TTSS, or rather the opposite; in which case the TTSS could easily be explained as a derivation from a subset of the flagellum, retaining the pump function with loss of the higher level functions. And anyway, the TTSS itself is irreducibly complex.


34] Behe is Jumping to Conclusions on P. falciparum and his so-called edge of evolution. P. falciparum did not evolve because it did not need to evolve: it is so perfect already that it cannot improve upon itself

We must first note that it is credible that this malaria parasite, every year, has more reproductive opportunities than the entire family of mammals, over its entire existence in the fossil record and today.

Also, because of the impact of malaria as a disease, it has been one of the most intensely studied medical challenges, and as a result, the organism has been confronted with several powerful drugs over nearly a century, leading to powerful selection pressure. The net result: only mutations up to the level of two co-ordinated point changes yielding drug resistance have been observed; with an epidemiologically observed frequency of incidence that implies an empirical probability of about 1 in 10^20; in rough accord with theoretical considerations.

The dismissive assertion above therefore simply flies boldly in the face of the facts of what P. falciparum “needed” to achieve in the way of differential reproduction, and had abundant opportunity to achieve; but plainly did not. So, let us now review several examples:

  1. P. falciparum is excluded from a vast reproductive opportunity because it cannot survive in cold climates. Extending its range into temperate climates would vastly increase its reproductive potential. Evidently the necessary mutations for this require more than just a few interdependent mutations. It failed to increase its range in billions of trillions of replications.

  2. The human-produced and administered drug chloroquine has killed billions of trillions of individual P. falciparum yet in billions of trillions of mutational opportunities to resist this drug, which requires just a few point mutations, it only found a way to resist, through random mutation and natural selection, about 10 times. In none of those 10 times did the RM+NS “improved” version of the parasite pass the improvements on into the parasite population at large.

  3. A hemoglobin mutation in humans (sickle cell) confers resistance to P. falciparum (causing it to starve as the mutated hemoglobin clogs up its digestive mechanisms). Again in trillions of mutational opportunities P. falciparum failed to evolve any means of surviving in the sickle cell environment. Evidently this too requires more than just a few chained interdependent mutations.

How does modern evolutionary theory, with all its glut of potential Darwinian mechanisms beyond the modern synthesis’s random mutation plus natural selection, explain these failures to evolve complex structures under intense selection pressure when given far more opportunity to evolve than all the mammals that ever lived?


35] What About the spreading of antibiotic resistance?

From Wikipedia: “Antibiotic resistance can be a result of horizontal gene transfer, and also of unlinked point mutations in the pathogen genome”.

In other words, there are two kinds of antibiotic resistance. The first is due to the propagation via horizontal gene transfer of genes which already exist: no new information is created, and the information in the existing genes for resistance remains to be explained just like any other biological information. The second is a well known form of microevolution, usually easily explained by a single point mutation well in the limits of what random variation can accomplish, especially in fast replicators like bacteria.

It is perfectly true that antibiotic resistance spreads according to the principles of positive natural selection, when the environmental pressure (the antibiotic) is present. That is well known, implies no problem for ID, and is just the best confirmation of what minimal microevolution and NS can really do.


36] ID Proponents Talk a Lot About Front-Loading But Never Explain What It Means

Front loading is a descriptive term for a common approach in engineering and in operations or strategic management, which comes in several textures. In general it refers either to (i) the act of putting processes in place that will anticipate some future contingency, or (ii) creating or selecting material means to accomplish a goal in accordance with a previously conceived specification.

In that sense, front loaded processes know where they are going, they “look ahead.” If that last sentence sounds clumsy, it is because a missing piece, the designer, was left out of it. Intelligent agents, or designers, can produce forms, sequences, and structures by applying boundary constraints to limit possibilities, something that mindless forces cannot do.

Darwinian processes, on the other hand, by definition, cannot plan ahead or limit possibilities.

Darwinian evolution is a “purposeless, mindless process that did not have man in mind.” As a direct result, such mechanisms are challenged to fashion new body plans, or create complex new biological information; precisely because they are not front loaded, and don’t know where they are going or why they exist. So, it should be no surprise that, in spite of all the claims made on its behalf, the actual direct observational evidence over the past 150 years shows that Darwinian processes are remarkably limited in power and scope. On that evidence, they can explain only the “survival of the fittest,” through micro-evolution; they cannot explain the “arrival of the fittest,” through macro-evolution. For, the process cannot “unfold” purposefully according to an “internal principle”; it can only “adapt” slavishly to the “outside” environment.

By contrast, front loading is a special theory about how CSI may have been implemented by the designer(s) of living beings: the information for the future development of organisms may have been included in some common ancestor, so that it could express itself gradually in the course of natural history, without requiring further interventions of the designer. The front loading hypothesis can have different formulations, and it is a theory about the modalities of implementation of design, not a causal theory; in that kind of hypothesis, design remains in any case the main cause of biological information.

Front loading hypotheses, in that sense, are therefore a subset of theories in the general ID scenario, but other theories about the modalities of implementation do exist, such as the possibility of repeated, more or less gradual interventions of the designer in the course of natural history.

37] ID Proponents use a lot of other buzz-words like Intelligence, Design, Complexity, etc, but never clearly and convincingly explain what they mean

This is indeed a concern, and the appended short glossary is intended to help.

38] At its foundation ID is based on a logical construct that posits that for any proposition A, A cannot be true and false at the same time and in the same formal relation. Modern quantum mechanics has proven this logical construct – the so-called “law of non-contradiction” — is outdated and does not always hold true.

IN A NUTSHELL: UD’s contributor StephenB has put his finger on the basic problem here: Scientists do not use observed evidence to evaluate the principles of logic; they use the principles of logic to evaluate such evidence.

We can see this from how the quantization of energy levels was first discovered by Planck in 1900.

He was studying the so-called black body radiation problem, where if we have a cavity with a small hole in it, there is a definite roughly bell-curve shaped peaked spectrum of light from the hole that depends on the temperature but not the materials the cavity is made of. (Think about how and why the pupils of our eyes or the mouth of a cave or a pinhole in a closed box all look flat black.)

Classical models could to some extent explain one skirt or another, but not the peak and fall-back on its other side, hence talk of the Ultraviolet “catastrophe.” It was as though once the frequency moved up, the light “had” to come in larger and larger lumps.

So, roughly speaking, Planck proposed just that as a model:

E_lump = h*f,

f being the oscillation frequency of tiny oscillators in the walls of the cavity. He was hoping to then smoothen it off in the usual way that happens in calculus. But his lumps of light would not go away, and he was stuck with quanta of radiation when it is emitted or absorbed. (And of course, h is now known as Planck’s constant, 6.626 * 10^-34 Js.)
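
To put a number on those lumps (a worked example of ours, not from the original): one quantum of green light carries only a tiny energy.

  h = 6.626e-34             # Planck's constant, J*s
  f = 5.6e14                # approximate frequency of green light, Hz
  E_lump = h * f
  print(E_lump)             # ~ 3.7e-19 J per quantum
  print(E_lump / 1.602e-19) # ~ 2.3 eV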

Then, five years later, Einstein came along with his study of the photoelectric effect, and showed how that sort of lumpiness of light would also explain another puzzle, the reason why light below a certain threshold frequency would not knock off electrons from a metal surface in a vacuum.

He called the lumps of light “photons,” and Quantum Theory was born. (BTW, this is what is mainly responsible for Einstein’s Nobel Prize; he did not get it for his then even more controversial theory of relativity. In addition, those who imagine that physics in particular does not study or explain causes should ask themselves: what does an effect point to?)

But now, let us look closer: at each stage, the scientists were comparing observations with what the classical theory predicted, and were implicitly assuming that if the theory, T, predicted observations, O, but we saw NOT-O instead, then T was wrong.

Q: Why is that?

A: Because they respected the logical law of identity [LOI], and its travelling companions, the law of non-contradiction [LNC] and the law of the excluded middle [LEM]. If a scientific theory T is consistent with and predicts observations O, but we see the denial of O, i.e. NOT-O, O is first seen as distinct and recognisably different from NOT-O [LOI]. The physicists also saw that O and NOT-O cannot both be so in the same sense and circumstances [LNC], and they realised that once O is a distinct phenomenon they would see O or NOT-O, not both or something else [LEM]. (Where also, superposition is not a blending of logical opposites, but an interaction between contributing parents, say P and Q, to get a composite result, say R; as we can see with standing waves on a string or a ripple tank’s interference pattern.)

Going further, when such scientists scratched out their equations and derivations on their proverbial chalk boards, they were using distinct symbols, and were reasoning step by step on these same three laws. In short, the heart of the scientific method inescapably and deeply embeds the classic laws of thought. You cannot do science, including Quantum Theory science, without basing your work on the laws of thought. So, it is self-refuting and absurd to suggest that Quantum Theory results can or do undermine these laws of thought.

In short, to suggest that empirical discoveries or theoretical analyses now overturn the basic laws of thought is to saw off the branch on which science must sit if it is to be a rational enterprise at all. And, while it is easy to get lost in the thickets of quantum weirdness, if we trace carefully we will always see this. MORE DETAILS . . .


39] ID is Nothing More Than a “God of the Gaps” Hypothesis

Famously, when his calculations did not quite work out, Newton proposed that God or angels nudged the orbiting planets every now and then to get them back into proper alignment. Later scientists (notably Laplace) were able to show that the perturbations of one planet acting on another are calculable and do not, in aggregate, destabilize the system. Newton’s error is an example of the “God of the gaps” fallacy: if we do not understand it, God must have done it.

ID is not proposing “God” to paper over a gap in current scientific explanation. Instead, ID theorists start from empirically observed, reliable, known facts and generally accepted principles of scientific reasoning:

(a) Intelligent designers exist and act in the world.

(b) When they do so, as a rule, they leave reliable signs of such intelligent action behind.

(c) Indeed, for many of the signs in question such as CSI and IC, intelligent agents are the only observed cause of such effects, and chance + necessity (the alternative) is not a plausible source, because the islands of function are far too sparse in the space of possible relevant configurations.

(d) On the general principle of science, that “like causes like,” we are therefore entitled to infer from sign to the signified: intelligent action.

(e) This conclusion is, of course, subject to falsification if it can be shown that undirected chance + mechanical forces do give rise to CSI or IC.  Thus, ID is falsifiable in principle but well supported in fact.

In sum, ID is indeed a legitimate scientific endeavor: the science that studies signs of intelligence.

40] Why are you Intelligent Design Creationists always so busy quote-mining what scientists have to say about Evolution?

The first problem here is one of mischaracterization: as Creationists themselves acknowledge (and as no. 5 above explains), design thought and Creationism are quite distinct. Unfortunately, that same problem of mischaracterization also extends to much of what is meant when the accusatory phrase “quote mining” is used against design thinkers and even Creationists.

The issue is that there is a very significant difference between:

case a: a damaging but accurately reported admission against interest made by a party to a dispute – one of the most powerful (and most likely to be true) forms of verbal evidence, and

case b: a misleading, distorted quotation (or even misquotation) that has been taken out of context and used to create a caricatured argument that may either

(i) set up a strawman target to be knocked over, or else

(ii) create a false sense of an authority legitimizing an argument s/he disagrees with.

Spotting a strawman caricature set up to be knocked over – case b (i) – is relatively easy. The real problem is that those wishing to brush aside a damaging, legitimately cited admission against interest too often portray case a as if it were case b (ii), a misuse of the words of an authority.

The Legal Dictionary section of thefreedictionary.com helps us to clarify the point:

admission against interest n. an admission of the truth of a fact by any person, but especially by the parties to a lawsuit, when a statement obviously would do that person harm, be embarrassing, or be against his/her personal or business interests. A third party can quote in court an admission against interest even though it is only hearsay. (See: hearsay, admission) [Copyright © 1981-2005 by Gerald N. Hill and Kathleen T. Hill. ]

Obviously, such an admission, if available, will be very powerful. So, on the one hand, there is a temptation to manufacture such an admission where none exists; on the other, to dismiss a genuine one as though it were illegitimate. When the latter happens, someone legitimately making use of an admission against interest may be unfairly brushed aside as either willfully deceptive or an ignoramus who does not understand what s/he is reading. This obviously poisons the tone of the discussion, and it can be used as a red herring distractor that hijacks the issue and triggers a quarrel if the falsely accused party tries to defend himself. Not good.

But, the objector will retort: we know for a fact that quote mining by Intelligent Design activists and Creationists happens all the time. It may indeed occasionally happen, but an incident highlighted by UD President Barry Arrington shows what, in our experience here at UD, is the far more usual situation:

ARRINGTON: To review, in a previous post I argued that the fossil record did not turn out the way Darwin expected it would. Of course, I will be the first to admit that I am no expert on Darwin’s views, and there is no reason for anyone to care particularly what I say about that topic. So I quoted Niles Eldredge and Ian Tattersall:

Change in the manner Darwin expected is just not found in the fossil record.

Niles Eldredge and Ian Tattersall, The Myths of Human Evolution (New York: Columbia University Press, 1982), 45-46.

Note that I am not arguing here that Darwinian evolution did not occur (though I have views on that). Nor am I arguing that there are no fossils demonstrating transitions between major groups as opposed to sister species (though I have views on that as well). I am asserting a VERY narrow point: The fossil record did not turn out the way Darwin expected it would. And I am quoting Eldredge to support that point.

Matzke came onto these pages and accused me of “quote mining,” which is the deceptive use of an out-of-context quote to make it appear that the author agrees with the proposition one is advancing when they really did not. It is a form of lying and is morally reprehensible.

So, a fairly acrimonious exchange unfortunately developed.  The upshot? UD’s President sums up:

. . . in order for Matzke’s charge to be true, Eldredge and Tattersall would have had to, in context, mean something other than the proposition for which I quoted them, i.e., that the fossil record did not turn out as Darwin expected. But that is exactly what they meant. Therefore, the quote mining charge is false.

I pointed this out to Matzke and asked him to retract/apologize. He has steadfastly refused . . . . He writes:

[MATZKE:] As long as you keep refusing to admit the context of the Eldredge quote, you will be guilty of quote-mining when you use it to argue that the fossil record doesn’t support evolution.

(emphasis mine)

If I had argued that the fossil record does not support evolution this statement might have some force. I made no such argument (As I said above, I have views on that matter, but that is beside the point.) I argued something completely different. I argued that the fossil record did not turn out the way Darwin expected it to.

And, finally, Arrington observes:

. . . it turns out that Nick thinks Eldredge was wrong:

[MATZKE:] we’ve already been over what Darwin said he expected from the fossil record, and Eldredge got that bit wrong.

Now we get to the bottom of it. It is not that I misquoted Eldredge. My quote was perfectly accurate. Nick just disagrees with Eldredge on the point for which I quoted him, and under his personal definition of the term that makes me guilty of quote mining.

Of course, this brings out another complexity: the phrase “quote mining” is informal, usually polemical, and as a rule does not appear in standard dictionaries or works on logic.

It does seem to have a generally intended meaning along the lines of case b (i) and/or case b (ii) above, but one has to be quite careful with such informal terms. And in any case, even if Eldredge was in fact wrong (and he is a leading expert in the field), Arrington’s citation was plainly accurate to what Eldredge meant to say. So, it was inappropriate to characterize it as “quote mining.”

In summary, Eldredge (and other leading paleontologists) have admitted that Darwin’s hopes for the fossil record – after a quarter million fossil species, millions of collected specimens, and billions more seen in the ground all around the world – simply have not been realized. That is, the circumstances of 1859 no longer obtain, but the strong pattern of gaps and so-called missing links persists. Which is exactly the point for which UD’s President cited Eldredge.

This case plainly shows how easily the toxic accusation of “quote mining” can be abused.

41] What About the Canaanites?

Whataboutism is a variant of the tu quoque logical fallacy that attempts to discredit an opponent’s position by charging them with hypocrisy without directly refuting or disproving their argument.

A frequent example of whataboutism employed by materialists:

ID Proponent:  “The Holocaust was objectively evil.  Therefore, objective moral standards exist.”

Materialist:  “What about God’s command to kill the Canaanites?  If the Holocaust was evil, wasn’t that evil too?”

Notice what the materialist did not do:  He did not even address the ID proponent’s argument, far less refute it.  Instead, the materialist tried to discredit the argument by charging the ID proponent with hypocrisy.

Materialists employ whataboutism frequently because it works. It puts the ID proponent on the defensive, and time after time arguments about whether objective moral standards exist get bogged down in attempts to justify God’s commands concerning the Canaanites some 3,400 years ago.

From a strictly logical point of view, there is no reason this should ever happen.  The proper response is to decline the invitation to change the subject:  “I don’t believe it, but let’s assume for the sake of argument you are right.  Getting back to the argument before you tried to change the subject . . .”

___________________

CONTRIBUTORS: SB is a Philosopher-Communicator with an emphasis on the application of sound common-sense reasoning to the design controversy; GP is a Medical Doctor with a focus on microbiological and microevolutionary issues; and KF is an Applied Physicist and educator with interests in information technologies and the related information theory and statistical thermodynamics. We jointly express appreciation to the developers of an earlier form of this page on responding to weak anti-ID arguments.