
Can designs be functional but selectively neutral or deleterious?


Can designs be selectively neutral and even deleterious but still functional? Yes. As Allen Orr said:

selection can wreck their exquisite engineering just as surely as it built it. An optic nerve with little or no eye is most assuredly not the sort of design one expects on an engineer’s blueprint, but we find it in Gammarus minus. Whether or not this kind of evolution is common, it betrays the fundamental error in thinking of selection as trading in the currency of Design.

Actually, Orr made a mistake: selection can't build exquisite engineering design, but it certainly can wreck it!

In a rare moment of honesty, the most recent Wikipedia version of the Genetic Redundancy entry explains how genes can be functional yet invisible to selection (neutrally evolving):

Genetic redundancy is a term typically used to describe situations where a given biochemical function is redundantly encoded by two or more genes. In these cases, mutations (or defects) in one of these genes will have a smaller effect on the fitness of the organism than expected from the genes’ function. Characteristic examples of genetic redundancy include (Enns, Kanaoka et al. 2005) and (Pearce, Senis et al. 2004). Many more examples are thoroughly discussed in (Kafri, Levy & Pilpel. 2006).

The main source of genetic redundancy is the process of gene duplication which generates multiplicity in gene copy number. A second and less frequent source of genetic redundancy are convergent evolutionary processes leading to genes that are close in function but unrelated in sequence (Galperin, Walker & Koonin 1998). Genetic redundancy has classically aroused much debate in the context of evolutionary biology (Nowak et al., 1997; Kafri, Springer & Pilpel . 2009).

From an evolutionary standpoint, genes with overlapping functions imply minimal, if any, selective pressures acting on these genes. One therefore expects that the genes participating in such buffering of mutations will be subject to severe mutational drift, diverging their functions and/or expression patterns at considerably high rates. Indeed, it has been shown that the functional divergence of paralogous pairs in both yeast and human is an extremely rapid process. Taking these notions into account, the very existence of genetic buffering, and the functional redundancies required for it, presents a paradox in light of evolutionary concepts. On the one hand, for genetic buffering to take place there is a necessity for redundancies of gene function; on the other hand, such redundancies are clearly unstable in the face of natural selection and are therefore unlikely to be found in evolved genomes.

To understand genetic redundancy and biological robustness we must not think in linear terms of single causality, where A causes B causes C causes D causes E. Rather, it must be appreciated that biological systems operate as a scale-free network. In a scale-free network the distribution of node linkage follows a power law: there are many nodes with only a few links, fewer nodes with an intermediate number of links, and very few nodes with a great many links. A scale-free network is very much like the internet: most websites make only a few links, fewer make an intermediate number of links, and a small minority make the majority of links. Hundreds of routers routinely malfunction on the Internet at any moment, but the network rarely suffers major disruptions. As many as 80 percent of randomly selected Internet routers can fail and the remaining ones will still form a compact cluster in which there is still a path between any two nodes [Barabasi et al, 2003]. Likewise, genes never operate alone but in redundant scale-free networks with an incredible level of buffering capacity.
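
As a sanity check on the router statistic quoted above, here is a minimal sketch (mine, not from the quoted article) that assumes the Python networkx library: it builds a Barabasi-Albert scale-free graph, knocks out 80 percent of the nodes at random, and reports how much of what survives still hangs together in a single connected cluster. It only models random failure; targeted removal of the most-connected hubs would fragment a scale-free network far more quickly.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=10_000, m=3)   # scale-free, "internet-like" topology

# random failure of 80% of the "routers"
failed = random.sample(list(G.nodes), k=int(0.8 * G.number_of_nodes()))
G.remove_nodes_from(failed)

# how many of the survivors still sit in one connected cluster?
largest = max(nx.connected_components(G), key=len)
print(f"{len(largest)} of {G.number_of_nodes()} surviving nodes form one connected cluster")
```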

An interactive network of cooperating proteins that substitute for or bypass each other's functions provides the robustness of biological systems. It is hard to imagine how selection acts on individual nodes of a scale-free, redundant genetic system. Although the functional divergence of paralogous gene pairs can be extremely fast, redundant genes do not commonly mutate faster than essential genes (Winzeler EA et al. 1999; Wagner A, 2000; Kitami T, 2002).
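
To see why single knockouts can look selectively neutral, here is a toy model of my own (the gene and function names are made up for illustration, nothing here comes from the quoted article): fitness drops only when every gene covering a given function is disabled, so each single-gene knockout is invisible while a double knockout of a redundant pair finally shows a phenotype.

```python
# hypothetical redundant coverage (illustrative names, not real genes)
coverage = {
    "function_1": {"geneA", "geneB"},           # paralogous pair
    "function_2": {"geneC", "geneD", "geneE"},  # triply redundant
}

def fitness(knocked_out):
    """Fitness drops only when every gene covering some function is disabled."""
    functions_lost = sum(1 for genes in coverage.values() if genes <= knocked_out)
    return max(0.0, 1.0 - 0.5 * functions_lost)

all_genes = sorted(set().union(*coverage.values()))
for gene in all_genes:
    # every single knockout leaves fitness at 1.0 -- invisible to selection
    print(f"knock out {gene}: fitness {fitness({gene})}")

# only the double knockout of the redundant pair produces a phenotype
print(f"knock out geneA+geneB: fitness {fitness({'geneA', 'geneB'})}")
```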

Thus entire systems could be knocked out with little effect on the organism's basic activity. The ability to knock out DNA without compromising basic function is not evidence against design; in fact, in some cases it is evidence for design. For further examples, see:
Reductive evolution of complexity — square circle

Fault Tolerance

If a species can lose its stomach, it must mean the mutation was neutral

Airplane Magnetos

Why don't Darwinist models of evolution like Weasel, Avida, Steiner, Ev, Geometric and Cordova's Remarkable Algorithm model the modes of natural selection that destroy design? Direct real-time observation in the lab and field shows natural selection is much more a Destroyer than a designer. Why is this not prominently modeled? Because Darwin's and Dawkins's ideas are divorced from biological reality. The only place Darwinism works for creating exquisitely engineered designs is in man-made, make-believe worlds.

NOTES

In the essay "Dennett's Strange idea is a bad idea for recognizing biological function," I pointed out:

However, fitness is hard to define rigorously and even more difficult to measure….An examination of fitness and its robustness alone would thus not yield much insight into the opening questions. Instead, it is necessary to analyze, on all levels of organization, the systems that constitute an organism, and that sustain its life. I define such systems loosely as assemblies of parts that carry out well-defined biological functions.

Andreas Wagner

but Wagner’s definition of “system” sounds hauntingly similar to Michael Behe’s definition of Irreducible Complexity:

A single system composed of several well-matched, interacting parts that contribute to the basic function of the system

Selection is a horrible criterion for determining whether something is functional. So are superficial knockout experiments.

Comments
Sal: Just thinking out loud here a bit further . . . Another way for us to think of this issue is to realize that the duplicate-gene-is-fodder-for-evolution concept is not based on the idea that the sequence is functional in its current state. That isn't the salient point. Rather, the key assumption underlying the concept is that the sequence is close to some different (though, unfortunately, typically vaguely defined and unspecified) functional sequence that could arise by tweaking a few nucleotides here and there. But of course, without duplication the original sequence is also close to our new target sequence. Why then not just tweak the original sequence?

We are loath to tweak the original sequence for fear of messing up current function. After all, the original sequence doesn't exist in a vacuum; it is part of a larger functionality. It must be properly expressed in the appropriate amounts and at the right times by the appropriate mechanisms, the resulting transcribed chain must then be properly translated (often by means of as-yet-not-understood concatenation algorithms), the resulting protein must be transported by relevant shepherding mechanisms to the right location in the cell, and finally in most cases the protein must be integrated into a larger functional construct. As a result, we recognize that tweaking the existing sequence could impair ultimate functionality at many points along the chain. We are thus understandably hesitant to suggest that existing sequences which form part of a complex integrated functionality could be successfully tweaked to create a new function.

And yet, the idea that we can tweak a copy of that very same existing sequence to create a new function glosses over the fact that all of the same kinds of factors (expression mechanisms and triggers, concatenation algorithms, proper shepherding, proper integration into a larger system, etc.) will also need to be either (i) copied and tweaked or (ii) created de novo. So while the tweaking of an existing sequence to get new function would require not only changes in the sequence itself, but also changes in the means of expression and integration, creation of a new function by tweaking a copy of the sequence faces precisely the same barriers.

Thus, the only advantage of the gene-duplication idea -- and the primary reason the idea has currency -- is that by tweaking a copy we can potentially avoid messing up a critical existing function for the organism. And that would seem to be the real benefit of the gene-duplication idea. But one could just as easily argue that tweaking an existing sequence and its existing related systems is easier than coming up with a new sequence and new related systems (whether from copies or de novo). So the gene-duplication proposal trades the benefit of not messing up a current function for what may potentially be a greater challenge and higher probabilistic hurdle of not only creating a new functional sequence but also creating the right expression and integration systems to implement that new sequence. Thus, it is very difficult to say whether the gene-duplication concept increases the odds and helps the evolutionary storyline in any meaningful way.

Eric Anderson
April 25, 2014 at 10:48 PM PDT
Thanks, Sal. Interesting thoughts. I wonder, in the context of an organism, whether perhaps we should attribute "function" to a particular feature only if it contributes to an organism's overall functionality. After all, if we take the idea of functionality to the extreme, we could argue that any old random non-coding nucleotide sequence has a function: that of providing a physical nucleotide-string medium on which mutations and natural selection can carry out their trial-and-error activity. Presumably we don't want to take the idea of functionality that far, or it becomes meaningless. We're interested in actual, existing contribution to an organism's well being.

I agree with you that redundancy can be a valuable function. Typically, when we think of redundancy in designed systems, however, we are dealing with something that either (i) is dormant unless and until needed (the spare tire), or (ii) is used concurrently and analyzed and compared with the primary system to confirm measurements (say, multiple gyroscopes in a flight system). My dictionary defines "redundancy" in this design sense as follows: "The provision of additional or duplicate systems, equipment, etc., that function in case an operating system fails, as in a spacecraft." It doesn't appear that a duplicate gene falls into either of those categories. So I'm not sure it represents true "redundancy," at least not in the sense of contributing to the larger functional system. It might represent "redundancy" in some of the other senses of the word my dictionary cites:

- superfluous repetition
- unnecessary repetition
- being in excess; exceeding what is usual or natural

Our duplicate gene seems to fall into these latter categories. Indeed, the whole point of the duplicate-gene-is-fodder-for-evolving-new-traits idea is that the duplicate gene is not needed for current functionality of the organism. So it becomes a little less clear that we can say it currently has a function. Maybe we could say that it could have a function in the right set of circumstances?

The entire duplicate gene concept rests on the idea that, well, if we start with a sequence that came from a functional gene then surely we must be closer to another functional sequence than if we started from a random string of nucleotides, so the thinking goes. If we accept that proposition, then gene duplication would perhaps be a great way to get a head start on the awful probabilities that await the random generation of new functional sequences. Personally, in most cases I am quite skeptical that making random mutations to a functional sequence has meaningfully more likelihood of creating a new functional sequence that can be utilized, incorporated, and implemented in the organism than any other random tweaking. Yes, it might increase the odds a bit, but it is kind of like saying Derek Jeter has better odds of hitting a baseball to the Moon than I do. Sure he does. But he still isn't going to come close. The marginal increase in odds is a rounding error.
For this reason, I don’t just dismiss “making duplicates doesn’t increase CSI”.
[I presume you mean you don't just "accept" it?] I think I understand your position. It is not without merit. Nor without weakness. :)
For that matter, I pretty much just go back to the plain vanilla Explanatory Filter for ID arguments, because calculating bits of CSI has not really helped clarify arguments, it just adds confusion.
Agreed. Largely because CSI cannot be calculated in bits. :)
I revert back to the IC arguments, OOL arguments, etc.
Often a good approach, I agree.

Eric Anderson
April 25, 2014 at 05:59 PM PDT
Eric,

In the context of this discussion, an identical spare part is a separate function; it functions as redundancy. It would be really bad to say a spare tire doesn't represent a function different from the installed tires. The new function also represents CSI different from the installed tires. Copies of books serve a function. For this reason, I don't just dismiss "making duplicates doesn't increase CSI."

For that matter, I pretty much just go back to the plain vanilla Explanatory Filter for ID arguments, because calculating bits of CSI has not really helped clarify arguments, it just adds confusion. I revert back to the IC arguments, OOL arguments, etc.

scordova
April 24, 2014 at 11:44 PM PDT
I've just pulled out my trusty dictionary to see what kinds of definitions they give for this word "new". Here are the first few:

- recently made or brought into being
- of a kind never before existing
- novel
- markedly different from what was before

Arguably, a copy of X could be "new" in the first sense above. Meaning the copy is new, not the X. And we might still reasonably conclude that although the copy is new, the information and the function are not. A copy would of course not meet the subsequent 3 definitions at all.

Incidentally, I have always found the idea of a duplicate copy of a gene being important fodder for evolution a little suspect. Why would we need another copy of an existing nucleotide sequence to tweak? After all, according to the general storyline DNA is already mostly junk -- there is no shortage of nucleotide sequences to work with. Just keep tweaking away until something functional arises.

My sense is that the duplicate gene idea has most of its traction for one simple reason: it is more believable. We seem to have this idea that if we have an already functional sequence, then it will be a lot easier to tweak it into some new, novel function, than it would be to tweak a junk sequence into that function. That may be true in a few limited cases in which we are dealing with genes that happen to be close to each other in the search space. However, in most cases that would seem to be an unfounded assumption. But, hey, it is more believable.

Eric Anderson
April 24, 2014 at 11:31 PM PDT
Sal:
I argue CSI can increase by accident in cases of a stuck copy machine. Other IDists here will vociferously disagree. I think that for DUPLICATED function that is “new”, it’s not a stretch that random chance can make a DNA copier make extra copies out of some error.
I think most everyone agrees that a copying error could result in an additional copy of something that already existed. Perhaps part of the disconnect comes in interchanging the words "information" and "function" as seems to have happened in your two paragraphs?

There is certainly no "new" information -- as in unique, previously unseen in the organism information -- as a result of a duplicate copy. There is "new" information only if we stretch the word "new" to mean "another copy of existing." There is no "new" function produced either; just two things that can perform the same function. Essentially a form of redundancy. If I have two copies of an identical program on my computer (which I purposely do for backup purposes), I would certainly not -- never within ordinary English usage -- claim that my backup copy performs a "new" function compared to my original program.

Anyway, I'm not sure what the best way to describe the duplicate gene situation is. Just thinking out loud here.

Eric Anderson
April 24, 2014 at 11:16 PM PDT
Nonsense.
Like I said:
Other IDists here will vociferously disagree.
scordova
April 24, 2014 at 10:54 PM PDT
Hmmm . . . So every time another copy of The Origin rolls off the press, new complex specified information is created? Or a better example yet, given that I personally have two copies of Darwin's Black Box right here on my shelf, it means I have more information at my fingertips than if I had only one copy? And if I give one of the copies to a friend (which I fully intend to do when I get around to it), I will have lost some information and will, presumably, have access to less information than I did before I gave the second copy away? Nonsense.

Eric Anderson
April 24, 2014 at 10:50 PM PDT
Again, from the PNAS article:
“biochemically active but selectively neutral”
scordova
April 24, 2014 at 10:29 PM PDT
By the way, the latest PNAS paper supports my claim:
Loss-of-function tests can also be buffered by functional redundancy, such that double or triple disruptions are required for a phenotypic consequence.
http://www.pnas.org/content/early/2014/04/23/1318948111.long

scordova
April 24, 2014 at 10:18 PM PDT
Gordon, Very nice to hear from you as always.
doesn’t that mean that we have to consider step 2 as producing function?
Yes. And for what it's worth, this is an example of why I hold the ID LCI (ID Law of Conservation of Information) at arm's length. Here are two scenarios:

1. Purposeful gene duplication, like a zip file decompression. The capacity for duplicates pre-existed, even though not physically implemented. It is producing function from a blueprint if such duplications are pre-programmed in the genome.

2. Fortuitous accident created the duplicated function, which is easy if we are doing rote copying. I'm fine with that in as much as it is still a function that is invisible to selection, and thus can be destroyed by random mutation or even selection itself.

Recall, there was an unresolved debate about CSI in cases of rote copying:

https://uncommondescent.com/computer-science/the-paradox-in-calculating-csi-numbers-for-2000-coins/

This problem applies to duplicated genes and also duplicated genomes (polyploidy). I argue CSI can increase by accident in cases of a stuck copy machine. Other IDists here will vociferously disagree. I think that for DUPLICATED function that is "new", it's not a stretch that random chance can make a DNA copier make extra copies out of some error. Unduplicated function of any complexity, I don't think can arise via chance or selection in the wild.

So, I answer "yes" to your question. Some of my associates may have quite a different opinion, however.

scordova
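
A rough way to illustrate the "rote copying" point under discussion (this sketch is mine, not part of the comment, and it uses zlib merely as a stand-in compressor): a nucleotide string and its tandem duplicate compress to nearly the same size, since the second copy adds almost nothing that a copying rule could not regenerate.

```python
import random
import zlib

random.seed(0)
# a made-up 10,000-base "gene" for illustration
gene = "".join(random.choice("ACGT") for _ in range(10_000)).encode()

single  = len(zlib.compress(gene, 9))
doubled = len(zlib.compress(gene + gene, 9))
print(f"compressed size, one copy:         {single} bytes")
print(f"compressed size, tandem duplicate: {doubled} bytes (barely larger)")
```
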
April 24, 2014 at 10:13 PM PDT
Sal, I have a different question about your argument, specifically relating to how you define production vs. destruction of design. Let me run through a simple scenario:

Step 1) There's a single gene performing a particular function.

Step 2) The gene gets duplicated, meaning there are now two (identical) genes performing that function. According to the second item you quoted, this process is "the main source of genetic redundancy".

Step 3) The two copies of the gene are now subject to relaxed selection, meaning that they can diverge in both gene sequence and function as long as at least one retains the original function. (Actually, it's also possible for each one to retain part of the original function, but let's assume that doesn't happen here.)

Step 4) One of the copies suffers a mutation that effectively disables it, and this mutation spreads through the population via genetic drift.

Step 5) The population is now pretty much back where it started, except that it now has a new pseudogene (the disabled copy).

You appear to be considering step 4 to count as destroying function. But it's essentially the reverse of step 2; doesn't that mean that we have to consider step 2 as producing function? Otherwise we have the paradox that we have a sequence of steps that involves only destruction of function... but all the function is still there at the end. So I'm confused by your argument...

Gordon Davisson
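
Step 4 of the scenario above can be mimicked with a bare-bones Wright-Fisher simulation (a toy model added here for illustration, not something from the comment): a selectively neutral "disabled copy" allele starting as a single mutant is usually lost, but occasionally drifts all the way to fixation, at roughly the textbook rate of 1/(2N).

```python
import random

def drift_to_fixation(N=50):
    """True if a single neutral mutant allele fixes in a diploid population of size N."""
    count = 1                                   # one mutant allele among 2N
    while 0 < count < 2 * N:
        p = count / (2 * N)
        # each allele in the next generation is an independent draw from the current pool
        count = sum(random.random() < p for _ in range(2 * N))
    return count == 2 * N

random.seed(1)
trials = 2000
fixed = sum(drift_to_fixation() for _ in range(trials))
print(f"neutral 'disabled copy' fixed in {fixed}/{trials} runs "
      f"(about 1/(2N) = {trials // (2 * 50)} expected)")
```
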
April 24, 2014 at 06:41 PM PDT
Sal, Thanks for your reply. But if ultimately the answer is that no, indeed these GA's are not representations of reality, I don't see how they can be evidence for or against evolution. Why are they important to study if all they do is make predictions which are flawed from the beginning and thus don't tell you anything about what happens to life? You can't say a GA shows that so and so is not possible, if it also can't say if something is possible.

Also, you said in small populations and low mutation rates, fixation can occur in X number of steps. But then what this is saying is that in X+1 or X+2 fixation stops occurring. Fixation is thus just a falsely created finish line only after the race has begun. So what the GA would show you is that fixation occurs and does not occur -- just choose where you are going to put the checkered flag.

phoodoo
April 24, 2014 at 05:09 PM PDT
Genetic redundancy reminds me of a comment from former Boeing engineer turned compiler writer Walter Bright, who is one of my favorite engineers to follow:
All I know in detail is the 757 system, which uses triply-redundant hydraulic systems. Any computer control of the flight control systems (such as the autopilot) can be quickly locked out by the pilot who then reverts to manual control. The computer control systems were dual, meaning two independent computer boards. The boards were designed independently, had different CPU architectures on board, were programmed in different languages, were developed by different teams, the algorithms used were different, and a third group would check that there was no inadvertent similarity. An electronic comparator compared the results of the boards, and if they differed, automatically locked out both and alerted the pilot. And oh yea, there were dual comparators, and either one could lock them out. This was pretty much standard practice at the time. Note the complete lack of "we can write software that won't fail!" nonsense. This attitude permeates everything in airframe design, which is why air travel is so incredibly safe despite its inherent danger.
JoeCoder
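
The dual-channel-plus-comparator arrangement Bright describes can be caricatured in a few lines. The sketch below is purely illustrative and assumes nothing about the real 757 software: two independently written routines compute the same command, and any disagreement locks both channels out so control reverts to the pilot.

```python
def channel_a(altitude_error):
    """Flight-control command as computed by board A (one team's implementation)."""
    return 0.5 * altitude_error

def channel_b(altitude_error):
    """The same command computed by board B, written independently."""
    return altitude_error / 2.0

def comparator(altitude_error, tolerance=1e-6):
    """Pass the command through only if both channels agree; otherwise lock out."""
    a, b = channel_a(altitude_error), channel_b(altitude_error)
    if abs(a - b) > tolerance:
        return None              # lock out both boards and alert the pilot
    return a

print(comparator(12.0))          # 6.0 -- channels agree, command is used
```
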
April 24, 2014 at 12:55 PM PDT
1. Do you believe any GA could mimic the claims of Darwinian evolution?
Not in real world biology. Only in make-believe biology.
If you make a computer simulation which shows how many steps to fixation, isn’t one more step past fixation in the same program a step away from fixation? How is there a beginning and an end? So if we say in a population of 10 it takes 20 generations to fixation, wouldn’t the same program say that in 22 generations you no longer have fixation?
If the population is small and mutation rates per individual are small, there could be fixation before the next mutation arises. For large populations and large mutation rates, no.
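
To put rough numbers on that claim, using standard neutral-theory approximations (the population sizes and mutation rate below are placeholders chosen for illustration): a neutral mutant that does fix takes on the order of 4N generations, while new mutations at the locus arrive roughly every 1/(2N*mu) generations, so fixation can finish before the next mutation arrives only when N and mu are small.

```python
def compare(N, mu):
    """Compare neutral fixation time (~4N generations) with the wait for the next mutation."""
    t_fix = 4 * N                 # mean time for a successful neutral mutant to fix
    t_next = 1 / (2 * N * mu)     # mean generations between new mutations at the locus
    verdict = ("fixation can finish first" if t_fix < t_next
               else "new mutations keep arriving before fixation")
    print(f"N={N:>7}, mu={mu:.0e}: t_fix ~ {t_fix}, t_next ~ {t_next:.0f} -> {verdict}")

compare(N=10, mu=1e-6)            # small population, low rate
compare(N=100_000, mu=1e-6)       # large population, same rate
```
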
Both programs fail in my estimation, and so are a waste of time.
The simulations are still important to study. Bill Dembski, Robert Marks, Winston Ewert, Royal Truman and a few others studied GAs in order to demonstrate the conditions under which Darwinian evolution can or cannot succeed. Darwinian evolution cannot succeed in the real world, and this is an important result. The work devoted by ID proponents to the project isn't a waste of time. It's work IDists felt needed to be done.

The simulations of population genetics that show that most evolution is non-Darwinian are important. They also show that neutral evolution as described by Kimura, Nei, Moran, PZ Myers, etc. would be extremely destructive to design, hence neither neutral theory nor Darwinian evolution can explain the design of life. Hence the work of top creationist scientists like Sanford, ReMine, Brewer, Gipson, Baumgardner, Carter, etc. is not a waste of time. It's work that creationists felt needed to be done.

scordova
April 24, 2014 at 11:17 AM PDT
Sal, I have read some of your posts regarding genetic algorithms and also regarding neutral fixation. I am still a bit confused though about your position. I have two questions.

1. Do you believe any GA could mimic the claims of Darwinian evolution? It seems to me to be a logical impossibility to model a program which says choose the best outcomes, when the definition of the best outcomes is simply the best outcomes.

2. How can we make models to estimate time to fixation for a novel mutation, when all mutations are constantly ongoing? For example, regarding the M&M model discussed over at The Skeptical Zone a while back: if we start with a new color M&M introduced into a population, and then kill off some, and double others, you can say we could eventually end up with them all one color, but then wouldn't the generation where they all end up as one just be another stepping stone to them all ending up as another color? How is there a beginning and an end, when every generation is in flux? If you make a computer simulation which shows how many steps to fixation, isn't one more step past fixation in the same program a step away from fixation? How is there a beginning and an end? So if we say in a population of 10 it takes 20 generations to fixation, wouldn't the same program say that in 22 generations you no longer have fixation?

Both programs fail in my estimation, and so are a waste of time.

phoodoo
April 24, 2014 at 10:46 AM PDT
