Uncommon Descent Serving The Intelligent Design Community

You searched for peer review

Search Results

10 + 1 Questions For Professor Myers

When Michael Behe visited the UK back in November, the Humanist Society of Scotland and the British Centre for Science Education wrote up a list of “10 + 1 Questions For Professor Behe”, which they subsequently distributed to their ranks of faithful followers. At the time, I responded fairly thoroughly to the arguments made therein here (to which the BCSE retaliated fairly viciously here).

Since PZ Myers has been invited to visit Glasgow next week (one week from today, to be specific) to lecture on the embryological evidence for Darwinism, I took it upon myself to draw up this list of “10 + 1 Questions For Professor Myers”. If you happen to be in the area and are planning to attend this event next Monday (which will take place in the Crystal Palace, 36 Jamaica Street, from 7pm), feel free to use the following questions as inspiration for the Q&A session that will follow the talk.

Read More ›

Douglas Axe Clears Up Four Misconceptions About His Work

Douglas Axe has posted a response on the Biologic Institute website to criticisms from Arthur Hunt and Steve Matheson regarding his 2004 JMB paper. Axe writes: In August of 2004 I received an email inquiry from plant biologist Art Hunt. He had written a draft for a blog piece aimed at reviewing a research article of mine that had just appeared in the Journal of Molecular Biology [1], and he wanted to know whether he had understood my work correctly. He clearly aimed to refute claims that were beginning to surface that my paper supported intelligent design, but he also wanted to make sure he wasn’t misconstruing my work in the process. He didn’t expect me to oblige—“I will understand Read More ›

Bio-Complexity paper: Similarity of enzyme structure does not guarantee ease of interconversion

Doug Axe and Ann Gauger have a new peer-reviewed paper up at BIO-Complexity which provides a quantifiable measure of how many mutations are required for a relatively simple biological innovation – the functional conversion of one enzyme to the function of its closest structural neighbor.

The authors argue that their results show that similarity of structure does not guarantee ease of interconversion, and that this goes to the root of all Darwinian trees based on such similarity.

Here’s the abstract: Read More ›

Backgrounder: Some challenges offered for Lynn Margulis’s endosymbiosis theory

Recently, well-known biologist Lynn Margulis has been in the news, letting Discover Magazine know that Darwinism is vastly overrated as a theory of evolution.

That said, here are a couple of challenges noted for her own theory of endosymbiosis (some life forms evolved by swallowing others, for example an ancestral cell engulfing the free-living bacteria that became mitochondria, which then became part of their inner workings, resulting in greater complexity): Read More ›

The Nature of Nature — sticky

THE NATURE OF NATURE is now finally out and widely available. If you haven’t bought it yet, let me suggest Amazon.com, which is selling it for $17.94, an incredible deal for a 7″x10″, 1000-page book, with no tax and, for most of us, no shipping charge (it costs over $10 to ship this monster by priority mail). This is a must-have book if you are interested at all in the ID debate. To get it from Amazon.com, click here. Below is the table of contents and some introductory matter.

(Other news coverage continues below)

———————————————

Seven years in the making, at 500,000 words, with three Nobel laureate contributors, this is the most thorough examination of naturalism to date.


The Nature of Nature: Examining the Role of Naturalism in Science

Edited by Bruce L. Gordon and William A. Dembski

ISI Books

Intercollegiate Studies Institute

Wilmington, DE 19807

Back Cover:


Read More ›

Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information

It would be very nice if there were a magic scanner that automatically gave you a readout of the total amount of complex specified information (CSI) in a system when you pointed it at that system, wouldn’t it? Of course, you’d want one that could calculate the CSI of any complex system – be it a bacterial flagellum, an ATP synthase enzyme, a Bach fugue, or the faces on Mt. Rushmore – by following some general algorithm. It would make CSI so much more scientifically rigorous, wouldn’t it? Or would it?

This essay is intended as a follow-up to the recent thread, On the calculation of CSI by Mathgrrl. It is meant to address some concerns about whether CSI is sufficiently objective to qualify as a bona fide scientific concept.

But first, some definitions. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define complex specified information (or CSI) as follows (p. 311):

Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

Dembski and Wells then define specified complexity on page 320 as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY).
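To make the two ingredients of that definition concrete, here is a minimal illustrative sketch in Python (my own toy example, not Dembski and Wells’s published measure): it computes the probabilistic complexity of a 100-flip coin sequence as -log2 of its probability under a fair-coin chance hypothesis, and uses the length of a plain-English statement of the pattern as a crude stand-in for descriptive complexity.

import math

def probabilistic_complexity_bits(sequence, p_heads=0.5):
    # -log2 P(sequence | fair-coin chance hypothesis); higher means more improbable
    log_p = sum(math.log2(p_heads if flip == "H" else 1.0 - p_heads) for flip in sequence)
    return -log_p

patterned = "HT" * 50                             # a 100-flip outcome that fits a simple pattern
description = "the pattern HT repeated 50 times"  # a short description, so low descriptive complexity

print("probabilistic complexity:", round(probabilistic_complexity_bits(patterned)), "bits")
print("sequence length:", len(patterned), "characters")
print("description length:", len(description), "characters")

Any specific 100-flip sequence is equally improbable (about 100 bits’ worth), but only an outcome that also conforms to an easily described pattern combines high probabilistic complexity with low descriptive complexity, which is the combination the definition above labels specified complexity.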

In this post, I’m going to examine seven demands which Intelligent Design critics have made with regard to complex specified information (CSI):
Read More ›

Why one guy packed up and left Darwinism

David Deming, associate professor of arts and sciences at the University of Oklahoma and author of Science and Technology in World History (Vols. 1 & 2), decided to dissent from Darwinism. He explains: In 2008, I published a critique of intelligent design theory in the peer-reviewed journal Earth Science Reviews. I concluded that intelligent design cannot be construed as a scientific theory, and that the apparent goal of the intelligent design movement was to restore Christian theology as the queen of the sciences. But I also argued that to the extent creationists were highlighting areas in which scientific theory was inadequate, they were doing better science than biologists. We ought to stop pretending that science has all the answers. Science is Read More ›

Science is self-correcting … no, make that self-repeating

In a review of several recent science books, Dartmouth professor Alan Hirshfeld offers us a view of the Royal Society (former employer of “sinner in the hands of an angry god” Michael Reiss), and similar societies, as engines of perpetual revolution (The Wall Street Journal), opining “The Royal Society’s history of open-minded debate epitomizes science as a self-correcting process”: The group is more effective than the individual at sussing out weak hypotheses, flawed experiments or biased observations, and one of the vital contributions of Europe’s “natural philosophers” during the Enlightenment was the creation of societies to disseminate and evaluate their ideas. Such conclaves served as intellectual hubs before the rise of modern research universities and institutes, and remain important Read More ›

Why The Chromosomal Fusion Argument Doesn’t Wash

Recently, I purchased and read Daniel Fairbanks’ relatively recent (February 2010) book, Relics of Eden: The Powerful Evidence of Evolution in Human DNA. There was a time when I would have been compelled by many of the arguments for common descent articulated in that book. I have always been skeptical, in large measure, of the unlimited causal efficacy that is so often casually ascribed to the neo-Darwinian synthesis. But there was a time when I would have strongly favoured a paradigm consistent with common descent. More recently, however, a deeper dive into the scientific literature has given me cause for caution with respect to these types of argument, compelling as they may superficially appear to the uninitiated. Like many modern popular science writers, Fairbanks provides a gripping read and is a very effective communicator to a lay audience. On reading the book, however, it quickly became clear that, on multiple levels, Fairbanks was seemingly out of touch with much of the scientific literature that provides potent counter-examples to many of the arguments for common descent which he raises (an example of which is detailed below).

Read More ›

Ants Solve Steiner Problem

Some years back, ID critic Dave Thomas used to tout the power of genetic algorithms for their ability to solve the Steiner Problem, which basically asks for the shortest network of paths connecting a set of nodes on a two-dimensional surface, adding extra junction points where they shorten the network (last I looked, he’s still making this line of criticism — see here). In fact, none of his criticisms hit the mark — the information problem that he claims to resolve in evolutionary terms merely pushes the design problem deeper, as the peer-reviewed research at the Evolutionary Informatics Lab makes clear (go to the publications page there). Now here’s an interesting twist: colonies of ants, when they make tracks from one colony to another, minimize path-length and thereby also solve the Steiner Problem (see Read More ›
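For readers unfamiliar with the problem, here is a minimal Python sketch of the simplest case, three nodes, using hypothetical coordinates of my own (it is not Thomas’s code and not a model of the ants): it approximates the single extra junction (the Fermat/Steiner point) with the Weiszfeld iteration and compares the resulting network length with a spanning tree that is allowed to use only the original nodes.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fermat_point(points, iterations=200):
    # Weiszfeld iteration: approximates the point minimising total distance to the given nodes
    x = sum(p[0] for p in points) / len(points)  # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        w = [1.0 / max(dist((x, y), p), 1e-12) for p in points]
        x = sum(wi * p[0] for wi, p in zip(w, points)) / sum(w)
        y = sum(wi * p[1] for wi, p in zip(w, points)) / sum(w)
    return (x, y)

nodes = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # hypothetical node positions
junction = fermat_point(nodes)
steiner_length = sum(dist(junction, p) for p in nodes)

pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
mst_length = sum(sorted(dist(a, b) for a, b in pairs)[:2])  # for three nodes the spanning tree is just the two shortest edges

print("network length with Steiner junction:", round(steiner_length, 3))
print("network length using nodes only:     ", round(mst_length, 3))

On this layout the network that uses the extra junction comes out roughly ten percent shorter than the best tree built from the nodes alone, which is the kind of saving the genetic algorithm, and reportedly the ant trails, are credited with finding.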

The 4% solution: The ultimate Copernican revolution is “We’re different”?

In “The challenge of the great cosmic unknowns” (New Scientist, 24 January 2011), Dan Falk reviews Richard Panek’s The 4% Universe: Dark matter, dark energy, and the race to discover the rest of reality: As he nears the present day, Panek weaves together two separate yet closely related storylines. In the first, he takes us to sophisticated laboratories around the world where researchers are trying to isolate particles of dark matter. Their best guess is that dark matter is made of WIMPs (weakly interacting massive particles), which were created at the time of the big bang and are now fiendishly difficult to detect. In the second storyline, we join the hunt for dark energy, which began in the late Read More ›


ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design

[ID Found’ns Series, cf. also Bartlett here]

Irreducible complexity is probably the most strenuously objected-to foundation stone of Intelligent Design theory. So, let us first of all define it by slightly modifying Dr Michael Behe’s original statement in his 1996 Darwin’s Black Box [DBB]:

What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [DBB, p. 39, emphases and parenthesis added. Cf. expository remarks in comment 15 below.]

Behe proposed this definition in response to the following challenge by Darwin in Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case . . . . We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. [Origin, 6th edn, 1872, Ch VI: “Difficulties of the Theory.”]

In fact, there is a bit of question-begging by deck-stacking in Darwin’s statement: we are dealing with empirical matters, and one does not have a right to impose what is, in effect, outright logical/physical impossibility — “could not possibly have been formed” — as a criterion of test.

If one is making a positive scientific assertion that complex organs exist and were credibly formed by gradualistic, undirected change through chance mutations and differential reproductive success through natural selection and similar mechanisms, one has a duty to provide decisive positive evidence of that capacity. Behe’s onward claim is then quite relevant: for dozens of key cases, no credible macro-evolutionary pathway (especially no detailed biochemical and genetic pathway) has been empirically demonstrated and published in the relevant professional literature. That was true in 1996, and despite several attempts to dismiss key cases such as the bacterial flagellum [which is illustrated at the top of this blog page] or the relevant part of the blood clotting cascade [hint: treating the part of the cascade before the “fork” — a part Behe did not address — as the IC core is a strawman fallacy], it arguably remains true today.

Now, we can immediately lay to rest the question of whether irreducible complexity is a real-world phenomenon.

For, a situation where core, well-matched, and co-ordinated parts of a system are each necessary for and jointly sufficient to effect the relevant function is a commonplace fact of life, one familiar from all manner of engineered systems, such as the classic double-acting steam engine:

Fig. A: A double-acting steam engine (Courtesy Wikipedia)

Such a steam engine is made up of rather commonly available components: cylinders, tubes, rods, pipes, crankshafts, disks, fasteners, pins, wheels, drive-belts, valves, etc. But, because a core set of well-matched parts has to be carefully organised according to a complex “wiring diagram,” the specific function of the double-acting steam engine is not explained by the mere existence of the parts.

Nor can simply choosing and re-arranging similar parts from, say, a bicycle or an old-fashioned car or the like create a viable steam engine. Specific, mutually matching parts [usually matched to thousandths of an inch], in a very specific pattern of organisation, made of specific materials, have to be in place, and they have to be integrated into the right context [e.g. a boiler or other source providing steam at the right temperature and pressure], for it to work.

If one core part (e.g. piston, cylinder, valve, or crankshaft) breaks down or is removed, core function obviously ceases.

Irreducible complexity is not only a concept but a fact.

But, why is it said that irreducible complexity is a barrier to Darwinian-style [macro-]evolution and a credible sign of design in biological systems?

Read More ›

The Eng Derek Smith Cybernetic Model

ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action

In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:

Winds of Change, 76:

By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.

If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]

ID Does Not Posit Supernatural Causes, 35:

Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!

Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington, had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]

This challenge brings into sharp focus the foundational issue of counter-flow, constructive work by designing, self-moved, initiating, purposing agents as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher William Dembski’s No Free Lunch:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

This is, of course, directly based on and aptly summarises our routine experience and observation of designers in action.

For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].

ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as signified cause.

And, many such thinkers further argue that:

if, P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to: simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination)

then, Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.

But, we now need to show that . . .

Read More ›

Background Note: On Orderly, Random and Functional Sequence Complexity

In 2005, David L Abel and Jack T Trevors published a key article on order, randomness and functionality that sets a further context for appreciating the warrant for the design inference. The publication data and title for the peer-reviewed article are as follows: “Three subsets of sequence complexity and their relevance to biopolymeric information,” Theor Biol Med Model. 2005; 2: 29. Published online 11 August 2005. doi: 10.1186/1742-4682-2-29. PMCID: PMC1208958. Copyright © 2005 Abel and Trevors; licensee BioMed Central Ltd. A key figure in the article (NB: in the public domain) was their Fig. 4: “Figure 4: Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from Read More ›
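To get an intuitive handle on the compressibility axis, here is a rough Python sketch of my own (not Abel and Trevors’s measure): it uses zlib’s compressed length as a crude stand-in for algorithmic compressibility, contrasting a highly ordered, repetitive sequence with a random sequence drawn from the same four-letter alphabet.

import random
import zlib

random.seed(1)
orderly = "ATCG" * 250                                            # ordered: generated by a short repeating rule
random_seq = "".join(random.choice("ATCG") for _ in range(1000))  # random: no simple repeating rule

for label, seq in (("orderly", orderly), ("random ", random_seq)):
    compressed = len(zlib.compress(seq.encode(), 9))
    print(label, "raw =", len(seq), "bytes, compressed =", compressed, "bytes")

The orderly string compresses down to a few dozen bytes, while the random string stays near the roughly two bits per character its four-letter alphabet demands; on the authors’ account, functional sequences occupy the intermediate territory that Fig. 4 superimposes onto those two extremes.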

They said it: NCSE endorses the “design is re-labelled creationism” slander

In the short term, a smear campaign can be very successful, poisoning the atmosphere and perhaps even the general public’s perception of your opponents. Usually it works by using what may, for convenience, be called the trifecta fallacy — unfortunately, and as we shall shortly see, a now habitual pattern among all too many evolutionary materialism advocates when they deal with Intelligent Design. Specifically:

i: use a smelly red herring distractor to pull attention away from the real issues and arguments

ii: lead it away to a strawman caricature of the issues and arguments of the opponent

iii: soak it in inflammatory innuendos, guilt by invidious association or outright demonising attacks to the man (ad hominems) and ignite through snide or incendiary rhetoric.

The typical result of such an uncivil, disrespectful rhetorical tactic, when used on a naive or trusting public, is that it distracts attention and clouds, confuses, polarises and poisons the atmosphere for discussion. Especially when false accusations are used, it can seriously damage reputations and careers. So, the trifecta is at minimum a violation of duties of care and respect. At worst, it is a cynically calculated propagandistic deception that, by filling the atmosphere with a poisonous, polarising fog, divides the public and points its attention to an imaginary threat elsewhere, so that an agenda that plainly cannot stand on its own merits can gain power in the community.

But what happens when the smear begins to unravel as more and more people begin to understand that you have failed to be fair or truthful, in the face of abundant evidence and opportunity to the contrary?

Let us see, by examining the NCSE-hosted (thus, again, endorsed) page for the ironically named New Mexico Coalition for Excellence in Science and Math Education. Excerpting:

Science deals with natural explanations for natural phenomena. Creationism or intelligent design, if allowed, would change this to promote supernatural explanations for natural phenomena — a contradiction in terms with regard to science. Intelligent design is also sterile as far as science is concerned. To be considered as real science, it must be able to explain and predict natural phenomena. Intelligent design proponents simply say that life is too complex to have arisen naturally. Therefore, an intelligent being (God) must have directly intervened whenever it chose to cause the diversity of the species. This explains everything and it explains nothing; it is not science.

The creationist groups attempt to masquerade their ideas as science simply by calling the concept “intelligent design theory”. No testable hypotheses or any form of scientific research has been presented to support their attempts to insert religion into science. Furthermore, it is suspected that the aim of these religiously motivated people is to redefine the meaning of science; if they were successful, science would become useless as a method for learning about the natural world. CESE decries the very usage of science terminology where there is no sound use of science. CESE also decries any political attempt to discredit the Theory of Evolution. Creationists present false statements concerning the validity of observed evidence for evolution such as: “there is no fossil evidence for evolution,” “it is impossible to obtain higher complexity systems from lower complexity systems,” etc. They call into question the motives and beliefs of scientists with claims such as, “if you believe in evolution, you are an atheist,” etc. They have even invented an imaginary scientific “controversy” to argue their agenda . . .

This needs to be exposed and corrected in steps, and it is worthwhile to pause immediately and look at the Dissent from Darwin list to see that: yes, Virginia, there is a real controversy on scientific matters tied to Darwinism. Also, let us list links to the series so far: background, and “They said it . . . ” 1, 2, 3.

So now, correcting in steps: Read More ›