Uncommon Descent Serving The Intelligent Design Community

Other problems for Human Evolution, Nachman’s U-Paradox


Cornell geneticist John Sanford pointed out many problems confronting the theory of Darwinian evolution, particularly human evolution (see: Genetic Entropy). Many of his arguments were subtle. Among them was his discussion of a somewhat obscure paper, Estimate of the Mutation Rate per Nucleotide in Humans, by Nachman and Crowell.

Nachman writes:

The high deleterious mutation rate in humans presents a paradox.

What Nachman’s paper discusses is the idea of purifying selection (getting rid of bad mutations). If a population is receiving on average 3 deleterious mutations per individual per generation, each female would have to produce about 40 offspring to provide sufficient population resources to purge the bad mutations out of the population. But even 3 deleterious mutations per individual might be extremely optimistic. What if we’re dealing with more?
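Nachman’s figure of 40 can be checked directly. In the mutation-load model his paper uses, new deleterious mutations per offspring follow a Poisson distribution, so the fraction of mutation-free offspring is e^(−U), and each female must produce roughly 2·e^U offspring for two mutation-free ones to replace the parents. A minimal sketch in Python (the function name is mine):

```python
import math

def required_offspring(U):
    """Offspring per female needed so that, on average, two are free of new
    deleterious mutations (Poisson model: P(0 new mutations) = e^-U)."""
    return 2 * math.exp(U)

print(round(required_offspring(3)))      # ~40, Nachman's figure for U = 3
print(f"{required_offspring(100):.1e}")  # astronomically large for U = 100
```

For U = 3 this reproduces the 40-offspring requirement; for U = 100 the required fecundity is astronomically beyond anything biologically possible.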

How much time and how many population resources would be needed to maintain the population if each individual adds 100 deleterious mutations per generation (and even 100 might be optimistic)? Worse, all of this is required just to maintain the status quo, much less to evolve the population. This problem is the U-Paradox.

To evolve the population, one needs to add new mutations and then both fix some of them and remove (purify away) others. Haldane’s dilemma deals with the difficulty of fixation. Nachman’s U-Paradox deals with the difficulty of purification. Even if the mutations were neutral, Nachman’s paper still poses the problem of how to purify away neutral mutations such that the genomes of the members of the species remain relatively similar (humans are about 99.5% similar to each other).

Recall that for innovation to happen, mutations need to be added. Thus there is the associated cost of getting rid of unwanted innovations. I would argue that the calculations should include purifying away even many neutral mutations, because of the high degree of monomorphism within the species (again, humans are about 99.5% similar to each other).

Sanford vigorously objected to the hand-waving in Nachman’s paper and Nachman’s appeals to “synergistic epistasis” to kluge away the problems. “Synergistic epistasis” was essentially a phrase to cover up a serious problem [much as the Darwinists concocted “abiogenesis” to compartmentalize away a major problem for their theory]. There may be isolated examples of “synergistic epistasis”, but Sanford is highly skeptical of it as a generalized principle and cure-all for the U-Paradox.

Nachman pulled the number 3 out of the air simply because he didn’t think human females could be producing more than 40 offspring on average (an average of 40 would mean some women need to have 70 kids to make up for the under-producers who have only 10).

The average number of such mutations is designated by the symbol U. For 3 deleterious mutations, U = 3. What if U = 100 (a more realistic, but still optimistic, number)? How hard would the females have to work at making kids?

Consider then that we humans have a 180,000,000 base pair difference from chimps, about 6% difference (see: Humans only 94% similar to chimps, not 98.5%). Does one get the sense a problem is lurking somewhere?

Let’s assume (very generously) that for every 3 desirable mutations fixed we need to purify out only 3 unwanted mutations. For the chimp-human divergence we would be dealing with 90,000,000 nucleotides per lineage (180,000,000 / 2). 90,000,000 / 3 = 30,000,000 generations, or about 600,000,000 years at roughly 20 years per generation. What if U = 100? Recall, U = 3 was just pulled out of the air! U = 100 has some experimental support, and the reality may be far worse.
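The back-of-the-envelope arithmetic above can be laid out step by step (the 20-year generation time and the 3-fixations-per-generation rate are this post’s own assumptions, not established values):

```python
divergence_bp = 180_000_000        # human-chimp base pair difference cited above
per_lineage = divergence_bp // 2   # split the difference between the two lineages
fixed_per_generation = 3           # generous assumption: 3 fixations per generation
generation_years = 20              # assumed human/primate generation time

generations = per_lineage // fixed_per_generation
years = generations * generation_years
print(generations, years)          # 30,000,000 generations; 600,000,000 years
```

Any of the four inputs can be varied to see how sensitive the timeline is to the assumptions.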

I’ve glossed over a multitude of issues, and I’m sure the Darwinists will be angrily objecting. I can already anticipate the misrepresentations and distortions the Darwinists in the blogosphere will offer (their usual modus operandi). Well, I’ll counter that they need to seriously work the numbers and make a convincing case. Right now, human evolution from accepted evolutionary principles is speculation at best. If a fine geneticist like John Sanford finds the numbers objectionable, that should at least give pause.

I’m not saying the issues have been conclusively resolved one way or another, but I post this to suggest that the issue of human evolution via any accepted mechanism is far from settled. The Darwinists, to their credit, are very clever: when confronted with an insurmountable problem, label the problem as fixed by a yet-undefined mechanism. In this case, the kluge is “synergistic epistasis”.

In the interest of open discussion, I point out at least some avenues that can alleviate some (but probably not all) of the problems of Nachman's U-Paradox.

1. High degrees of inbreeding to clear out the genome. Having sisters and brothers bear offspring might clean out a lot, but this "fix" raises other issues: if inbreeding is that pervasive, how does fixation occur in geographically dispersed populations? In many population models, the assumption is that the population is "well stirred" (any individual is just as likely to mate with any other). In a geographically dispersed population one has the problem of making it well-stirred enough for fixation to occur. To give an exaggerated illustration, suppose one beneficial mutation pops up somewhere amongst 5 billion people. How long do you think it would take for that beneficial mutation to get fixed into 5 billion people spread all over the world? This is of course an exaggeration of the supposed conditions way back when, because populations were much smaller then. But if they were much smaller back then, there was less opportunity for novel beneficial mutations to arise. In other words, whatever model one comes up with, we can probably expect to find a fatal contradiction if we are willing to keep our eyes open for it.

2. The other possible "fix" is a population bottleneck, where at some point in time there are only a couple of individuals, or just a single family. But small damaged populations (like a single family) are prone to mutational meltdown. To illustrate, consider the creatures subjected to "cobalt bomb" experiments. Take a small family of these mutants and try to inbreed them. Do you think that will really purify out and restore their genomes? Nope. What you get out of the process is not much better than what you started with. Here is the story of mutational meltdown: Mutational meltdown
Mutational meltdown refers to the process by which a small population accumulates deleterious mutations, which leads to loss of fitness and decline of the population size, which may lead to further accumulation of deleterious mutations due to inbreeding depression. A population experiencing mutational meltdown is trapped in a downward spiral and will go extinct if the phenomenon lasts for some time. Usually, the deleterious mutations would simply be selected away, but during mutational meltdown, the number of individuals thus suffering an early death is too large relative to overall population size so that mortality exceeds the birth rate.
I must emphasize, I don't think we have satisfying answers as to why we're still alive under the Darwinian paradigm. The U-Paradox inclines me to believe a very unique process was in operation in the past, to which we have no analog in the present, not even by a long shot. Some might argue special creation, but let me be more circumspect and cautious and simply say that the U-Paradox and Haldane's dilemma are not consistent with the model of Darwinian evolution. The alternative to special creation would be some front-loaded "reformat the bad sectors" operation or front-loaded self-cleaning. That possibility should not be dismissed either. We see pre-programmed self-healing in single organisms; it is perhaps not a stretch to think it could happen at the population level as well. Finally, there is no reason there can't be a combination of special creation and/or front loading. We simply don't know at this stage of the game... I think the issues need to be revisited and far more rigor applied than the cursory discussion we have here. But my intuition tells me it will be a fruitful area of research. scordova
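The downward spiral in the mutational-meltdown definition quoted above can be caricatured in a toy model (entirely illustrative; the parameters and functional forms are my own, not from the quoted article): accumulated load lowers fitness, lower fitness shrinks the population, and a smaller population accumulates load faster, standing in for weakened selection and inbreeding depression.

```python
def meltdown(N0=100, initial_load=0.05, load_rate=0.01, max_gens=200):
    """Toy mutational-meltdown loop. Returns the population-size history."""
    N, load = N0, initial_load
    history = [N]
    for _ in range(max_gens):
        fitness = max(0.0, 1.0 - load)   # fitness falls as load accumulates
        N = int(N * fitness)             # births scale with declining fitness
        history.append(N)
        if N == 0:
            break                        # extinction
        load += load_rate * (N0 / N)     # smaller N -> faster load accumulation
    return history

sizes = meltdown()
print(sizes[-1])  # 0: the toy population spirals to extinction
```

Because fitness stays below 1 and the load increment grows as N shrinks, the decline accelerates rather than stabilizing, which is the qualitative signature of the meltdown described above.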
But– I always have thought that there was something wrong with the idea of bombarding biota with something known to be detrimental and hoping to get positive results.
True enough, but that shows just how badly a paradigm can influence someone's common sense. Consider that "cobalt bomb" and chemical mutagenesis experiments were seriously considered and even carried out in the '50s and possibly '60s (or whatever the timeframe was). Looking back, we wonder how the Darwinists back then could have believed something so obviously idiotic. Regarding Nachman's U-Paradox, it is not even considered whether similar dynamics are at play in the real world (albeit at a much, much slower pace). In the real world there are much lower levels of radiation, but they are there. In the real world there are a multitude of factors creating mutations; the mutations may not be as plentiful as those caused by radio-cobalt, but they are there.... I did not mention that, now that more and more sequencing is being done, we can track more mutations over each generation, and we are confronted with the fact that they seem to be multiplying unchecked by selection. This may be true of humans and many other species. The problem is borne out observationally by the sheer number of heritable diseases that strangely have not been weeded out; worse yet, some deleterious mutations have positive selection value in certain environments (e.g., sickle-cell anemia). Bryan Sykes, an Oxford geneticist (yes, the same school as Dawkins) who did fabulous work on Y-Chromosomal Adam and Mitochondrial Eve, has been very disturbed at the advance of genetic mutations in the human genome. So in addition to Haldane's dilemma, we add Nachman's U-Paradox. scordova
But-- I always have thought that there was something wrong with the idea of bombarding biota with something known to be detrimental and hoping to get positive results. avocationist
Thx so much Sal - cleared things up! jpark320
Joseph, Thank you for linking to Fred Williams' discussion. Fred Williams, like ReMine and like me, has a background in Electrical Engineering. Fred Williams had a classic debate with primate biologist Scot Page. It was charming to see Williams feed Page remedial lessons in math and population genetics: Williams vs. Page. That debate was my first introduction to Nachman's paper. scordova
The following is also a good article dealing with this topic: Monkey-Man Hypothesis Thwarted by Mutation Rates. It opens with:
Monkey-Man Hypothesis Thwarted by Mutation Rates
Fred Williams, RMCF VP
April 2003

Introduction

Evolutionists often argue that DNA similarity between chimps and man is powerful evidence that they share a common ancestor. Recent estimates put the difference at 1.24% [1]. Creationists respond by arguing that DNA similarity would be expected due to common design, and also note that 1.24% still represents a difference of roughly 39 million fixed base pairs between the two [2]. These are valid points that sufficiently expose the weak logic of the evolutionist claim. However, there are other serious problems with the evolutionist claim that have gone mostly unnoticed. In recent years, study after study has yielded human mutation rates that are inexplicably too high [3,4,5,6]. These rates are determined by direct comparison of simian DNA to human DNA. Estimates are then made for the deleterious (harmful) mutation rate for both the human clade and the simian clade since their assumed split from a common ancestor 5 to 6 million years ago.
Has anyone read Lonnig (sp?) on this one? I believe he says something similar to John Sanford about deleterious mutations and recurrent variation. late_model
By the way, here is a geneticist on the cobalt bomb experiments Pro-ID geneticist Maciej Giertych in his own words:
Mutations figure prominently in the Evolution story. When in the early ’60s I was starting breeding work on forest trees, everyone was very excited about the potential of artificial mutations. In many places around the world, special “cobalt bomb” centers were established to stimulate rates of mutations. What wonderful things were expected from increased variability by induced mutations. All of this work has long since been abandoned. It led nowhere. All that was obtained were deformed freaks, absolutely useless in forestry.
Ah, how quickly we forget the failures of Darwinian theory! scordova
Otherwise, as most effective mutations are deleterious, every beneficial mutation would be tested in the company of an organism that had multiple deleterious mutations going along with it. How on earth does the beneficial mutation compete with that?
Exactly!!!! Sanford points out the problem of interference selection. That is exactly what is observed, big time, in cobalt bomb experiments. If the deleterious rate is substantially higher than the beneficial rate, the ability of beneficial mutations to be seen by selection is severely compromised. What if an individual has several deleterious mutations and then a few beneficial ones? He could be outcompeted by an individual with no beneficial mutations but fewer deleterious ones. The beneficial mutation is effectively drowned out and invisible to selection. Haldane's dilemma becomes even worse! And as I pointed out, we have compelling empirical reasons to conclude this. scordova
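The interference effect can be shown with a toy additive-fitness comparison (the selection coefficients and numbers of mutations are hypothetical, purely for illustration):

```python
def fitness(n_beneficial, n_deleterious, s_ben=0.01, s_del=0.01):
    """Toy additive fitness: each beneficial mutation adds s_ben,
    each deleterious mutation subtracts s_del."""
    return 1.0 + s_ben * n_beneficial - s_del * n_deleterious

# Carrier of one beneficial mutation, burdened with several deleterious ones,
# vs. a rival with no beneficial mutations but a lighter deleterious load:
carrier = fitness(n_beneficial=1, n_deleterious=4)  # 0.97
rival   = fitness(n_beneficial=0, n_deleterious=1)  # 0.99
print(carrier < rival)  # True: the beneficial mutation loses the competition
```

Under these toy numbers, selection acts on the whole genotype, so the beneficial mutation rides along with its deleterious neighbors and is eliminated, which is the interference scenario described above.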
Regarding the requirement that females make 40 kids, the assumption is that effectively 38 of the 40 will not have successful lineages, or may even die. For purifying selection to work, one has to make the rather generous assumption that the deleterious mutations are sufficiently visible to selective forces. One can see that on top of females having 40 kids, it's an awfully generous assumption that effectively 38 will die or have their lineages somehow truncated!!!

John Sanford demonstrates why this cannot possibly be the case, because of the problem of signal-to-noise ratios. We have strong empirical corroboration of this: consider all the heritable deficiencies in humans that persist generation after generation and never get weeded out! So on top of Nachman's paradox, we have the problem of even asserting that such mutations are sufficiently visible to selection, when both theoretically and empirically it is demonstrated that the vast majority of deleterious mutations are nearly neutral in selective value, and thus invisible to selection. The net result is like rust on a car: it just keeps accumulating; the breakdown is never immediate. It is like a classroom full of students where no one gets 100% on the exam. When each organism is slowly degrading, some are merely more functional than others, but in actuality none are as functional, pure, and healthy as their ancestors. Again, I point to the cobalt bomb experiments as a time-lapsed video of evolution and the problem of Nachman's U-Paradox.

The efficacy of purifying selection has not been studied enough. I suggest that is because it flies in the face of prevailing paradigms and because it's a bit too nerdy. However, imho, these population-genetic arguments have as much force as any of the traditional ID arguments around today. Sal scordova
Observation, i.e. reality, is also a problem confronting the theory of evolution. As geneticist Giuseppe Sermonti put it:
Sexuality has brought joy to the world, to the world of the wild beasts, and to the world of flowers, but it has brought an end to evolution. In the lineages of living beings, whenever absent-minded Venus has taken the upper hand, forms have forgotten to make progress. It is only the husbandman that has improved strains, and he has done so by bullying, enslaving, and segregating. All these methods, of course, have made for sad, alienated animals, but they have not resulted in new species. Left to themselves, domesticated breeds would either die out or revert to the wild state—scarcely a commendable model for nature’s progress.
He goes on to say:
Natural Selection, which indeed occurs in nature (as Bishop Wilberforce, too, was perfectly aware), mainly has the effect of maintaining equilibrium and stability. It eliminates all those that dare depart from the type—the eccentrics and the adventurers and the marginal sort. It is ever adjusting populations, but it does so in each case by bringing them back to the norm. We read in the textbooks that, when environmental conditions change, the selection process may produce a shift in a population’s mean values, by a process known as adaptation. If the climate turns very cold, the cold-adapted beings are favored relative to others; if it becomes windy, the wind blows away those that are most exposed; if an illness breaks out, those in questionable health will be lost. But all these artful guiles serve their purpose only until the clouds blow away. The species, in fact, is an organic entity, a typical form, which may deviate only to return to the furrow of its destiny; it may wander from the band only to find its proper place by returning to the gang. Everything that disassembles, upsets proportions or becomes distorted in any way is sooner or later brought back to the type. There has been a tendency to confuse fleeting adjustments with grand destinies, minor shrewdness with signs of the times. It is true that species may lose something on the way—the mole its eyes, say, and the succulent plant its leaves, never to recover them again. But here we are dealing with unhappy, mutilated species, at the margins of their area of distribution—the extreme and the specialized. These are species with no future; they are not pioneers, but prisoners in nature’s penitentiary.
The point being that IF it were left to direct scientific observations and objective testing, evolutionism fails miserably and all that is left is wishful thinking supported by speculation. The Amazing Karnac holds the envelope to his head and proclaims: "The hokey-pokey." He then rips open the envelope and reads the contents: "What our descendants will call today's theory of evolution." Joseph
As an aside, it's exactly these sorts of discussions that get swept under the rug because of the prevailing paradigm. One might ask how an ID-friendly discipline of biology would differ from the one we have today. Well, these sorts of issues would be getting a lot more scrutiny. An ID-friendly approach wouldn't hesitate to point out that something is "rotten in the state of Darwin". scordova
I didn’t understand the connection btwn 180 mill. divided by 2 and the human and monkey genome difference (that is, why divide by 2?).
Correct; to approximate, we divide 180,000,000 by 2. But let's back up a bit. Do you recall the stories of the "Cobalt Bomb" labs, where we tried to speed up evolution by irradiating the creatures (often via radioactive cobalt)? We have the same problem here with Nachman's paradox. Sure, we may add more beneficial mutations (and that is a generous assumption), but can we deal with the cost of purifying out all the bad we introduce?

Now, you are correct that if we can fix more mutations, like 100 vs. 3, we can account for the divergence in less time. But there is a subtlety here that I must address. Nachman is dealing not with fixation but with purification. Recall that if we are mutating the genomes of creatures, we also need to be purifying out the bad mutations. Haldane's dilemma only dealt with one side of the problem!!!!! That is, in Haldane's dilemma he ignored what was happening to the rest of the genome and focused on one position (locus), be it a full gene or even a single nucleotide (the modern formulation under ReMine). Haldane's model focused on how hard it would be for natural selection to fix one gene or nucleotide. What is not considered is that when the mutation rate is increased, more deleterious mutations in general are introduced per individual, and those need to be taken care of too. The Cobalt Bomb lab experiments kinda bore out this little oversight in Haldane's calculations. :-) But unfortunately for the Darwinians, the oversight still leaves Haldane's dilemma intact while adding Nachman's paradox on top of it!

I just hypothetically said that, taking a generous estimate, for every nucleotide that gets fixed we have to take care of only a single deleterious mutation. Actually, I don't think anyone really knows how many bad mutations we must introduce per individual to glean out one that is usable. Most would argue 1-for-1 (or 3-for-3) is pretty optimistic. The way I phrased the issue was probably confusing.
So your confusion here is understandable:
take it that the 90 mill divided 3 was for u = 3 , but then I thought that can’t be right b/c 90 mill divided by 100 (for u =100) would be less of a prb than u =3 - arggh!
Indeed, we MIGHT be able to go faster if we had a higher mutation rate, but how many offspring would we need to make this feasible?? Recall again the cobalt bomb experiments as an illustration of what happens when we up the mutation rates. There are two issues in that case: 1. Will increased mutations increase the fixation rate (how it affects Haldane's dilemma)? 2. Will increased mutations require more offspring to purify the population (Nachman's U-Paradox)? I don't know the answer to #1; the answer to #2 is "yes" from what I've read.
I assume you multiplied the last figure by 20 for human/primate gestation time.
Correct. Finally, let's take the scenario that for every 3 nucleotides fixed we need 100 deleterious mutations (3 successful trials at a penalty of 100 errors). This would surely be more in line with what we see in the cobalt bomb labs, and it is easy to envision it as the appropriate model for real evolution over long periods of time. This would obviously be a disastrous scenario, wouldn't it? I think my 3-deleterious-for-3-fixed may be optimistic. Geneticist Bryan Sykes takes a similarly dismal view of genomic evolution. He's predicting human extinction 100,000 years from now. He never explains, however, why we're still alive. I welcome commentary from those with more insight than I. Sal scordova
Hey Sal, Insanely interesting! I was wondering if you could explain the math a lil bit :) I didn't understand the connection btwn 180 mill. divided by 2 and the human and monkey genome difference (that is why divide by 2?). I take it that the 90 mill divided 3 was for u = 3 , but then I thought that can't be right b/c 90 mill divided by 100 (for u =100) would be less of a prb than u =3 - arggh! I assume you multiplied the last figure by 20 for human/primate gestation time. Thx! jpark320
"I hereby give a phrase I’ve coined to the ID community: 'Darwin of the Gaps'" I think Behe already used that phrase when interviewed by Strobel in "Case for the Creator." jb
I hereby give a phrase I've coined to the ID community: "Darwin of the Gaps" geoffrobinson
Jehu and Salvador, Positive Epistasis - not for lack of effort: http://www.sciencemag.org/cgi/content/full/312/5775/848b This team correctly points out data bias and the need for better test controls. Michaels7
My twiddling with computer modeling would indicate that the maximum mutation rate to produce evolution is one effective (non-neutral) mutation per organism per generation. Otherwise, as most effective mutations are deleterious, every beneficial mutation would be tested in the company of an organism that had multiple deleterious mutations going along with it. How on earth does the beneficial mutation compete with that? Try some arithmetic: say an organism's quality rating is 97 before mutations; he gets 1 good (+1) and 2 deleterious (-1 each), so now his quality rating is 96. He's still a degraded organism. bFast
Hi Jehu, Indeed, the threshold may be reached and then the organism is broken. At first glance this may seem to cure Nachman's paradox. I'm not so sure. The organism that passed the threshold may fail to reproduce, but that doesn't clean out the rest of the population at the border of the threshold... Actually, that raises another interesting point. Why aren't more creatures approaching that catastrophic threshold? It's as if they had an initial condition, not too long ago, where their genomes were nicely purified. :-) Sal scordova
In contrast to "synergistic epistasis", what actually happens is "negative epistasis". That is, seemingly neutral mutations accumulate until they reach a threshold, and then "once the stability threshold is exhausted, the deleterious effects of mutations become fully pronounced" http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=17122770 Jehu
Thanks for the post Sal, very informative. Atom
