In the current discussion on [Mis-]Representing Natural Selection, UD commenter Bruce David has posed a significant challenge:
. . . it is not obvious that even with intelligence in the picture a major modification of a complex system is possible one small step at a time if there is a requirement that the system continue to function after each such step.
For example, consider a WWII fighter, say the P51 Mustang. Can you imagine any series of incremental changes that would transform it into a jet fighter, say the F80, and have the plane continue to function after each change? To transform a piston engine fighter into a jet fighter requires multiple simultaneous changes for it to work–an entirely new type of engine, different engine placement, different location of the wings, different cockpit controls and dials, changes to the electrical system, different placement of the fuel tanks, new air intake systems, different materials to withstand the intense heat of the jet exhaust, etc., etc., etc. You can’t make these changes in a series of small steps and have a plane that works after each step, no matter how much intelligence is input into the process.
He then concludes:
Now both a P51 and an F80 are complex devices, but any living organism, from the simplest cell on up to a large multicellular plant or animal, is many orders of magnitude more complex than a fighter plane. If you believe that it is possible to transform a reptile with a bellows lung, solid bones and scales, say, into a bird with a circular flow lung, hollow bones, and feathers by a series of small incremental changes each of which not only results in a functioning organism, but a more “fit” one, then the burden of proof is squarely on your shoulders, because the idea is absurd on the face of it.
In responding, UD Contributor gpuccio clarifies:
consider that engineered modifications can be implemented in a complex organism while retaining the old functionality, and then the new plan can be activated when everything is ready. I am not saying that’s the way it was done, but that it is possible.
For instance, and just to stay simple, one or more new proteins could be implemented using duplicated, non translated genes as origin. Or segments of non coding DNA. That’s, indeed, very much part of some darwinian scenarios.
The difference with an ID scenario is that, once a gene is duplicated and inactivated, it becomes non visible to NS. So, intelligent causes can very well act on it without any problem, while pure randomness, mutations and drift, will be free to operate in neutral form, but will still have the whole wall of probabilistic barriers against them.
[U/d, Dec 30] He goes on to later add:
NS acts as negative selection to keep the already existing information. We see the results of that everywhere in the proteome: the same function is maintained in time and in different species, even if the primary sequence can vary in time because of neutral variation. So, negative NS conserves the existing function, and allows only neutral or quasi-neutral variation. In that sense it works against any emergence of completely new information from the existing one, even if it can tolerate some limited “tweaking” of what already exists (microevolution).
I suppose that darwinists, or at least some of them, are aware of that difficulty as soon as one tries to explain completely new information, such as a new basic protein domain. Not only can the darwinian theory not explain it; it really works against it.
So, the duplicated gene mechanism is invoked.
The problem is that the duplicated gene, to be free to vary and to leave the original functional island, must be no longer translated and no longer functional. Indeed, that happens very early in the history of a duplicated gene, because many forms of variation will completely inactivate it as a functional ORF, as we can see all the time with pseudogenes.
So, one of the two:
a) either the duplicated gene remains functional and contributes to the reproduction, so that negative NS can preserve it. In that case, it cannot “move” to new unrelated forms of function.
b) or the duplicated gene immediately becomes non functional, and is free to vary.
The important point is that case a) is completely useless to the darwinian explanation.
Case b) allows free transitions, but they are no more visible to NS, at least not until a new functional ORF (with the necessary regulatory sites) is generated. IOWs, all variation from that point on becomes neutral by definition.
But neutral variation, while free to go anywhere, is under no obligation to go anywhere in particular. That means: freedom is accompanied by a huge rise in the probability barriers. As we know, finding a new protein domain by chance alone is exactly what ID has shown to be empirically impossible.
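The probabilistic point in that last quotation can be illustrated with a toy Monte Carlo. This is a sketch under deliberately simple assumptions of my own (binary "genes", a single fixed "functional" target, uniform random draws standing in for unselected neutral variation); it is not gpuccio's calculation, only an illustration of how blind-search hit rates fall with target size:

```python
import random

random.seed(1)

def blind_search_hits(length, trials=100_000):
    """Draw `trials` uniformly random bit strings of `length` bits
    and count how many exactly match one fixed 'functional' target."""
    target = random.getrandbits(length)
    return sum(random.getrandbits(length) == target for _ in range(trials))

for n in (4, 8, 16, 32):
    expected = 100_000 / 2 ** n
    print(f"{n:2d} bits: {blind_search_hits(n):5d} hits per 100k draws "
          f"(expected ~{expected:.1f})")
```

Each added bit halves the hit rate, so the hit count collapses toward zero well before the string is even 32 bits long; extrapolating to the hundreds of bits of specificity involved in even a short protein domain is the intuition behind the "wall of probabilistic barriers" language above.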
In her attempted rebuttal, contributor Dr Elizabeth Liddle remarks:
I don’t find Behe’s argument that each phylum has a radically different “kernel” very convincing. Sure, prokaryotic cells and eukaryotic cells are different, but, as I said, we have at least one theory (symbiosis) that might explain that. And in any case for non-sexually reproducing organisms, “speciation” is a poor term – what we must postulate is cloning populations that clone along with their symbiotic inclusions. Which is perfectly possible (indeed even we “inherit” parental gut flora).
I think you are making the mistake of assuming that, because “phyla” is a term that refers not only to the earliest exemplars of each phylum but also to the entire lineage descending from each, those earliest exemplars were as different from each other as we, for example, are from trees, or bacteria. It’s really important to be clear when we are talking longitudinally (adaptation over time) and when laterally (subdivision of populations into separate lineages).
This was largely in response to Dr V J Torley’s listing of evidence:
What evidence [for the distinctness of main body plans and for abrupt origin of same in the fossil record], Elizabeth? Please have a look here:
In “The Edge of Evolution”, Dr. Michael Behe argues that phyla were probably separately designed because each phylum has it own kernel that requires design. He also suggests that new orders (or families, or genera – he’s not yet sure which) are characterized by unique cell types, which he thinks must have been intelligently designed, because the number of protein factors in their gene regulatory network (about ten) well exceeds the number that might fall into place naturally (three).
This exchange pivots on the central issue: does complex, multi-part functionality lie on easily accessible continents that an incrementally growing, branching tree can span? Or does it normally come as isolated islands, in configuration spaces so vast and so dominated by seas of non-function that the atomic-level resources of our solar system (our effective universe), or even of the observed cosmos as a whole, could take no more than a tiny sample of them?
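The scale behind that "tiny sample" language can be made concrete with a rough back-of-envelope calculation. The figures below (a round atom count for the solar system, Planck time as the fastest conceivable state change, a cosmic timescale, and a 500-bit configuration space) are illustrative assumptions of this sketch, not measurements:

```python
from math import log10

# Illustrative assumptions, not measurements:
ATOMS_SOLAR_SYSTEM = 1e57   # rough atom count of our solar system
PLANCK_TIME = 5.4e-44       # seconds; fastest conceivable state change
ELAPSED = 1e17              # roughly the age of the cosmos, in seconds
BITS = 500                  # a modest 500-bit configuration space

# Generous upper bound on distinct states the solar system could "visit"
states_visited = ATOMS_SOLAR_SYSTEM * ELAPSED / PLANCK_TIME

# Number of distinct 500-bit configurations
total_configs = 2.0 ** BITS

print(f"states visited   ~ 10^{log10(states_visited):.0f}")
print(f"configurations   ~ 10^{log10(total_configs):.0f}")
print(f"fraction sampled ~ 10^{log10(states_visited / total_configs):.0f}")
```

On these assumptions the solar system could visit at most on the order of 10^117 states, while a 500-bit space holds roughly 10^151 configurations, so only about one part in 10^33 of the space could ever be sampled, even on the most generous accounting.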
Let’s take the matter in steps of thought:
1 –> Complex, multi-part function depends on having several well-matched, correctly aligned and “wired together” parts that work together to carry out an overall task, i.e. we see apparently purposeful matching and organisation of multiple parts into a whole that carries out what seems to be a goal. The Junkers Jumo 004 Jet engine in the above image is a relevant case in point.
2 –> Ever since Wicken penned the following remarks in 1979, this issue of wiring-diagram based complex functional organisation has been on the table as a characteristic feature of life forms that any successful theory of the causal roots of life must properly explain. Clip:
3 –> The question at stake in the thread excerpted from above, is whether there can be an effective, incremental culling-out based on competition for niches and thence reproductive success of sub-populations that will create ever more complex systems that will then appear to have been designed.
4 –> Of course, we must notice that the implication of this claim is that we are dealing with in effect a vast continent of possible functional forms that can be spanned by a gradually branching tree. That’s a big claim, and it needs to be warranted on observational evidence, or it becomes little more than wishful thinking and grand extrapolation in service to an a priori evolutionary materialistic scheme of thought.
5 –> In cases where the function in question has an irreducible core of necessary parts, it is often suggested that something that may have had another purpose may simply find itself duplicated, or fall out of use, and then fit in with a new use. “Simple.”
6 –> NOT. For, such a proposal faces a cluster of challenges highlighted earlier in this UD series as posed by Angus Menuge [oops!] for the case of the flagellum:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.
C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
(Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)
8 –> The number of biologically relevant cases where C1 – C5 have all been observed to be met: ZERO.
9 –> What is coming out ever more clearly is this:
when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together.
A jigsaw puzzle is a good case in point.
So is a car engine — as anyone who has had to hunt down a specific, hard to find part will know.
So are the statements in a computer program: the classic example is NASA’s Mariner 1 launch of 1962, which veered off course and had to be destroyed by range safety, a failure commonly attributed to a single mistranscribed character (often retold as a misplaced hyphen or comma) in its guidance software.
The letters and words in this paragraph are like that too.
That’s why (at a first, simple level) we can usually quite easily tell the difference between:
A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .
B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .
C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .
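One crude but standard way to make the A/B/C distinction measurable is compressibility: orderly repetition compresses to almost nothing, random noise barely compresses at all, and meaningful prose falls in between. A minimal sketch (my own illustration, not a definitive test; seeded random bytes stand in for the gibberish of case B, and the English sample for case C is drawn from the P51 quotation above):

```python
import random
import zlib

random.seed(0)

samples = {
    "A orderly":    b"F" * 460,                                   # FFFF...
    "B random":     bytes(random.getrandbits(8) for _ in range(460)),
    "C meaningful": (b"For example, consider a WWII fighter, say the P51 "
                     b"Mustang. Can you imagine any series of incremental "
                     b"changes that would transform it into a jet fighter, "
                     b"say the F80, and have the plane continue to function "
                     b"after each change? To transform a piston engine "
                     b"fighter into a jet fighter requires multiple "
                     b"simultaneous changes for it to work: an entirely new "
                     b"type of engine, different engine placement, different "
                     b"location of the wings, different cockpit controls."),
}

for label, data in samples.items():
    # Ratio near 0: highly ordered; near or above 1: incompressible noise
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed/original = {ratio:.2f}")
```

Compressibility is only a proxy (it sees statistical structure, not meaning), but it already separates the three regimes: the orderly string collapses to a few bytes, the random bytes do not shrink at all, and the English passage lands in between.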
In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged; while there may be some room to vary, it is not true that just any part, popped in any number of ways, can fit.
As a direct result, in our general experience and observation, if the functional result is complex enough, the most likely cause is intelligent choice, or design.
This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations.
10 –> Consequently, the normal expectation is that complex, multi-part functionality will come in isolated islands. So also, those who wish to assert an “exception” for biological functions like the avian flow-through lung, will need to empirically warrant their claims. Show us, in short.
11 –> And, to do so will require addressing the difficulty posed by Gould in his last book, in 2002:
. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [The Structure of Evolutionary Theory (2002), p. 752.]
. . . . The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants. [p. 753.]
. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [p. 773.]
12 –> In that context, the point raised by GP above, that
. . . once a gene is duplicated and inactivated, it becomes non visible to NS. So, intelligent causes can very well act on it without any problem, while pure randomness, mutations and drift, will be free to operate in neutral form, but will still have the whole wall of probabilistic barriers against them.
. . . takes on multiplied force.
In short, the islands of function issue — rhetorical brush-asides notwithstanding — is real, and it counts. Let us see how the evolutionary materialism advocates will answer to it. END
PS: I am facing a security headache, so this post was completed on a Linux partition. Linux is looking better than ever, just now, as a main OS . . .