Earlier today, I decided to check back at TSZ, to see if they have recovered from the recent regrettable hack attack. They are back up, at least in part. The following however, caught my eye:
Intelligent design proponents make a negative argument for design. According to them, the complexity and diversity of life cannot be accounted for by unguided evolution (henceforth referred to simply as ‘evolution’) or any other mindless natural process. If it can’t be accounted for by evolution, they say, then we must invoke design . . . .
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution? It’s the deep blue sea, metaphorically speaking. IDers contend that life occupies ‘islands of function’ separated by seas too broad to be bridged by evolution.
In this post (part 2a) I’ll explain the ‘islands of function’ metaphor and invite commenters to point out its strengths and weaknesses. In part 2b I’ll explain why the ID interpretation of the metaphor is wrong, and why evolution is not stuck on ‘islands of function’.
This is quite wrong-headed, and easily explains part of why there is so little progress in exchanges:
1 –> The design inference is a positive inference on well tested, inductively established sign, not a negative inference. For instance, the functionally specific, complex information [FSCO/I] — notice the blend of complexity with specificity to achieve function — in the above clip is diagnostic of design as its most credible source, something that is easily and empirically verified on a base of literally billions of cases. (And there are no credible known exceptions, or they would have been trumpeted to the highest heavens all over the Web and in the literature.)
2 –> The similar inductive status of the island of function effect can also easily be shown from this text. There are a great many ways in which the 899 ASCII characters used in the above clip can be arranged: 128^899 ~ 2.41*10^1894. (The number of Planck-time states of the 10^80 or so atoms of our observed cosmos since its credible beginning is less than 10^150, a very large number, but one that is utterly dwarfed by the set of possibilities for 899 ASCII characters.) Very few of those arrangements would convey the above message in recognisable English, and while some noise, such as typos, can be tolerated, all too soon the injection of random noise — in effect a random walk off the island of function — would destroy function.
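The arithmetic behind that comparison is easy to check for oneself. A minimal Python sketch (working in base-10 logarithms to avoid overflow):

```python
import math

# Configuration space for 899 characters drawn from the 128-symbol ASCII set
chars = 899
alphabet = 128
log10_configs = chars * math.log10(alphabet)  # log10(128^899)

# Upper bound cited above for Planck-time states of ~10^80 atoms
log10_states = 150

print(f"128^899 ~ 10^{log10_configs:.1f}")                    # ~ 10^1894.4
print(f"Available states bound: 10^{log10_states}")
print(f"Shortfall in orders of magnitude: {log10_configs - log10_states:.0f}")
```

The configuration count exceeds the cited state bound by more than 1,700 orders of magnitude, which is the point being made about the relative scale of the two numbers.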
3 –> This is a simple illustration of a commonplace fact of life for complex, functionally specific entities made up from multiple, well-matched components that must be properly arranged and coupled together to achieve function. Taking our solar system as a zone of interest, the relevant components can be scattered in a great many ways indeed, none of which will be functional. Even if clumped, a much smaller but still huge number of arrangements exists, and the overwhelming majority of these possibilities will have no function.
4 –> Only in certain very special clusters of configurations (reflecting the amount of tolerance for configurations in a given neighbourhood) will there be functional configurations. So, we are at the issue that Dembski outlined long ago now, in No Free Lunch:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.
I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .
Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”
p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
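The correspondence Dembski draws between the 1-in-10^150 probability bound and the 500-bit complexity bound is simply a change of logarithm base: a probability p corresponds to a self-information of -log2(p) bits. A quick check in Python:

```python
import math

# Self-information of an event with probability 1 in 10^150:
# I = -log2(10^-150) = 150 * log2(10)
bits = 150 * math.log2(10)
print(f"{bits:.1f} bits")  # ~498.3, rounded up to the 500-bit bound
```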
5 –> This sort of functional specificity brings out how the sort of functional cluster in view is informational, i.e. there is a specific pattern, a set of nodes and arcs that has to be arranged in a form that allows function, within a fairly narrow range of tolerance. That range of neighbouring functional configurations defines an island of function. Where also, without loss of generality, a nodes and arcs pattern can be reduced to a structured string [this is how AutoCAD etc. work], so the pattern can be translated into a string structure with as many degrees of freedom as there are relevant bits.
6 –> Nor is this sort of remark exactly news. On Dec 30, 2011, I noted here at UD as follows (something that was actually adverted to in the TSZ thread, but was not taken seriously by objectors to design . . . ):
1 –> Complex, multi-part function depends on having several well-matched, correctly aligned and “wired together” parts that work together to carry out an overall task, i.e. we see apparently purposeful matching and organisation of multiple parts into a whole that carries out what seems to be a goal. The Junkers Jumo 004 Jet engine in the above image is a relevant case in point.
2 –> Ever since Wicken posed his famous clip in 1979, this issue of wiring-diagram based complex functional organisation has been on the table as a characteristic feature of life forms that must be properly explained by any successful theory of the causal roots of life.
3 –> The question at stake in the thread excerpted from above, is whether there can be an effective, incremental culling-out based on competition for niches and thence reproductive success of sub-populations that will create ever more complex systems that will then appear to have been designed.
4 –> Of course, we must notice that the implication of this claim is that we are dealing with in effect a vast continent of possible functional forms that can be spanned by a gradually branching tree. That’s a big claim, and it needs to be warranted on observational evidence, or it becomes little more than wishful thinking and grand extrapolation in service to an a priori evolutionary materialistic scheme of thought.
5 –> In cases where the function in question has an irreducible core of necessary parts, it is often suggested that something that may have had another purpose may simply find itself duplicated or fall out of use, then fit in with a new use. “Simple.”
6 –> NOT. For, such a proposal faces a cluster of challenges highlighted earlier in this UD series as posed by Angus Menuge [oops!] for the case of the flagellum:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.
C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)
8 –> The number of biologically relevant cases where C1 – C5 have all been observed to be met: ZERO.
9 –> What is coming out ever more clearly is this:
when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together.
A jigsaw puzzle is a good case in point.
So is a car engine — as anyone who has had to hunt down a specific, hard to find part will know.
So are the statements in a computer program — there was once a NASA rocket that veered off course on launch and had to be destroyed by triggering the self-destruct because of (if memory serves) something as small as a misplaced comma in its guidance code.
The letters and words in this paragraph are like that too.
That’s why (at a first, simple level) we can usually quite easily tell the difference between:
A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .
B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .
C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .
In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged, and while there may be some room to vary, it is not true that just any part, popped in in just any way, will fit.
As a direct result, in our general experience, and observation, if the functional result is complex enough, the most likely cause is intelligent choice, or design.
This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations.
10 –> Consequently, the normal expectation is that complex, multi-part functionality will come in isolated islands. So also, those who wish to assert an “exception” for biological functions like the avian flow-through lung, will need to empirically warrant their claims. Show us, in short.
11 –> And, to do so will require addressing the difficulty posed by Gould in his last book, in 2002:
. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [The Structure of Evolutionary Theory (2002), p. 752.]
. . . . The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants. [p. 753.]
. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]
12 –> In that context, the point raised by GP above, that
. . . once a gene is duplicated and inactivated, it becomes non visible to NS. So, intelligent causes can very well act on it without any problem, while pure randomness, mutations and drift, will be free to operate in neutral form, but will still have the whole wall of probabilistic barriers against them.
. . . takes on multiplied force.
In short, the islands of function issue — rhetorical brush-asides notwithstanding — is real, and it counts. Let us see how the evolutionary materialism advocates will answer to it.
7 –> So, what is the grand overturn that shows that this is all nonsense? The concept of rising fitness functions that allow incremental change:
Now suppose that it rains for 40 days and 40 nights. The rain fills up our landscape, forming a vast sea. Only the mountain tops remain above the water as islands – the ‘islands of function’ that IDers are so fond of.
Our populations occupy the islands. Sea level indicates the minimum fitness at which mutants remain viable. Small changes will create viable descendants at different spots on the island, though the population as a whole will gravitate toward the high spots. Larger changes will put the mutants underwater, where they will die out.
The idea, according to ID proponents, is that populations remain stranded on these islands of function. Some amount of microevolutionary change is possible, but only if it leaves you high and dry on the same island. Macroevolution is not possible, because that would require leaping from island to island, and evolution is incapable of such grand leaps. You’ll end up in the water.
There is some truth to the ‘islands of function’ metaphor, but it also has some glaring shortcomings that ID proponents almost always overlook. I will mention some of the strengths and shortcomings in the comments, and I know that my fellow commenters will point out others.
8 –> To which the obvious answer is that the requisites of complex, specific, integrated function define islands that are isolated by seas of non-function which need to be bridged, not just on paper but observationally AND WITHIN ACCESSIBLE SEARCH RESOURCES (where the atomic resources of our solar system make the BLIND SEARCH creation of 500 bits of novel FSCO/I maximally implausible, and those of the observed cosmos max out at 1,000 bits).
9 –> In particular, the warrant for bridging islands of function requires that such a claim be justified observationally. Starting with the origin of the very first body plan and continuing with the origin of further body plans, this credibly requires 100 – 1,000 k bits of genetic information in the first case (in a string data structure) and of order 10 – 100 million bits in onward cases for multicellular body plans.
10 –> Hardly less fatal is something implied in what I just outlined. We are not dealing with known, close-by islands that were mountain-tops flooded out, but with an unknown and patently vast seascape that may or may not contain islands of function, so far as blind chance and mechanical necessity (the only means of search permissible under the relevant conditions) are concerned. And this, where strictly limited search resources max out at 500 – 1,000 bits under very generous conditions.
11 –> So, what is really needed is to start with a warm little pond or the like scenario, get a suitable concentration of monomers, and then arrive at a living cell, per reasonable observation: one that has encapsulation with gating of materials flows, and carries out self-replication and metabolism using a genetic code mechanism and the like. Then, we need to see observational warrant for going from that to novel body plans with specialised, properly organised cell types, tissues, organs, systems etc., constituting a new organism. This simply has not been done, nor is it in prospect.
12 –> Absent that, what we have is gross extrapolation of micro changes in already existing body plans, substituted for what was really needed. That is, we need to explain crossing the sea of non-function by blind chance mechanisms that would put us on shorelines of function, not the incremental hill climbing that can happen by all accounts once we are on such a shoreline. And, this must start, logically, with the very first body plan.
13 –> At the same time, we are surrounded by a world of technology that tells us that intelligent designers exist and are fully capable of creating FSCO/I rich systems. And we have whole disciplines and professions that study and practice the design of FSCO/I rich systems.
So, which alternative explanation is more reasonable on the actual evidence, why? END