Genetics News

Fixation rate, what about breaking rate?

Spread the love

Hats off to VJTorley for vindicating claims I’ve made about neutral theory (non-Darwinian evolution) for nearly the last eight years at UD. He found this from PZ Myers:

[M]aybe we should be honest from the very beginning about the complexity of modern evolutionary theory and how it has grown to be very different from what Darwin knew.

First thing you have to know: the revolution is over. Neutral and nearly neutral theory won.

Fixation: The Neutral Theory’s Achilles Heel

Oh, you mean, PZ, that you all weren’t honest from the very beginning? 🙂 Just kidding!

I would argue for a slightly different Achilles heel: not the rate of “fixation” (an awful term, as it suggests improvement when in fact it could just as well mean permanent damage!), but the rate of breaking.

I pointed out that most evolution is free of selection. I still stand by that, where the term “selection” refers to Darwinian selection. Neutral theory partially helps ID, but it fails in one critical aspect: the problem that random walks destroy design.

In “If not Rupe and Sanford 8/6/13, would you believe Wiki?”, I showed a widely accepted formula: fixation rate equals mutation rate. I believe it is sound from a mathematical standpoint, but flawed from a functional standpoint.

http://en.wikipedia.org/wiki/Fixation_(population_genetics)

rate of fixation = 2Nμ × 1/(2N) = μ

The population size is N and the Greek symbol μ (mu) is the mutation rate.
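The logic behind the formula can be sketched numerically: each generation, 2Nμ new neutral mutations enter the population, and under pure drift each has fixation probability 1/(2N), so N cancels and the substitution rate is just μ. A toy Wright–Fisher simulation (my own illustration, not from the Wikipedia article) confirms the 1/(2N) fixation probability:

```python
import random

def fixation_probability(N, trials=5000, seed=1):
    """Estimate the probability that a single new neutral mutation
    (1 copy among 2N alleles) eventually fixes under pure drift."""
    random.seed(seed)
    fixed = 0
    for _ in range(trials):
        count = 1                            # one new mutant copy
        while 0 < count < 2 * N:
            p = count / (2 * N)              # current allele frequency
            # binomial resampling of the next generation (Wright-Fisher)
            count = sum(1 for _ in range(2 * N) if random.random() < p)
        fixed += (count == 2 * N)
    return fixed / trials

N = 10
print(fixation_probability(N), 1 / (2 * N))  # estimate vs. theoretical 1/(2N) = 0.05
# Substitution rate: (2N * mu new mutants per generation) * (1/(2N) each) = mu
```

The simulation says nothing about whether the fixed mutations are good or bad, which is exactly the point of this post.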

Ok, so let’s do an experiment. Let’s subject bacteria or plants or any organism to radiation and thus increase the mutation rate by a factor of 1 million or 1 billion. Do you think the above formula will still hold? We tried it in the lab: it killed the plants, and at some point, rather than speeding evolution, we are doing sterilization.

And VJ quoted Moran:

Random genetic drift is a mechanism of evolution that results in fixation or elimination of alleles independently of natural selection. If there was no such thing as neutral mutations then random genetic drift would still be an important mechanism…

Random genetic drift is a mechanism of evolution that was discovered and described over 30 years before Neutral Theory came on the scene.

What Neutral Theory tells us is that a huge number of mutations are neutral and there are far more neutral mutations fixed by random genetic drift than there are beneficial mutations fixed by natural selection. The conclusion is inescapable. Random genetic drift is, by far, the dominant mechanism of evolution.…

The revolution is over and strict Darwinism lost.

Not quite, Larry. Most alleles disappear by drift, but it’s like people waiting in line while the line gets longer and longer: even though many alleles are getting purged, if the rate of introduction exceeds the rate of purging, then the genome starts to get packed with junk. So yeah, neutral theory predicts “fixing,” but it fails to account for breaking! The experiments with accelerated mutation rates highlight the folly of blindly following a mathematical idealization like the formula above.
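The waiting-line analogy can be made concrete with a toy model (my own sketch with made-up rates, not a population-genetic simulation): mutations “join the line” at some rate per generation, while selection can “serve” only a limited number.

```python
def load_over_time(intro_rate, purge_capacity, generations):
    """Toy 'waiting line' model of mutational load: intro_rate mutations
    join the genome each generation, and selection can purge at most
    purge_capacity of them.  Returns the load after each generation."""
    load, history = 0, []
    for _ in range(generations):
        load += intro_rate                  # new mutations enter the line
        load -= min(load, purge_capacity)   # selection purges what it can
        history.append(load)
    return history

print(load_over_time(6, 1, 5))   # introduction beats purging: [5, 10, 15, 20, 25]
print(load_over_time(1, 6, 5))   # purging keeps up: [0, 0, 0, 0, 0]
```

Whenever the introduction rate exceeds the purge capacity, the load grows without bound, no matter how the purging is arranged.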

Though the below formulas based on the Poisson distribution need some amendments because of synergistic epistasis and the binomial distribution, here is a problem that I’ve pointed out repeatedly at UD. The above formula for fixation is true, but so are the formulas below for breaking. The problem is we focus on the “fixes” but forget about the breaks that come along!

NOTES
Excerpt from Death of the Fittest

The following video is a crude 1-minute silent animation that I and others put together. God willing, there will be major improvements to the animation (including audio), but this is a start. Be sure to watch it in full screen mode to see the details.

The animation asserts that if harmful mutation rates are high enough, then there exists no form or mechanism of selection which can arrest genetic deterioration. Even if the harmful mutations do not reach population fixation, they can still damage the collective genome.

The animation starts off with healthy gingerbread parents. Each parent spawns 2 gingerbread kids, and the red dots on the kids represent them having a mutation. To simplify the animation, the reproduction was depicted as asexual, but the concept can easily be extended to sexually reproducing species.

The missing gingerbread limbs are suggestive of severe mutations; the milder mutations are represented by gingerbread kids merely having a red dot, without severe phenotypic effects of their mutation. The exploding gingerbread kids represent natural selection removing the less functionally fit from the population. 4 generations are represented, and the fourth generation has three mutations per individual.

[Embedded video: YouTube]

Note the persistence of bad mutations despite any conceivable mechanism of selection.

When I posted this video earlier at UD, I got complaints about the simplicity of the model. I will suggest two refinements which show that even with moderate rates of mutation per individual per generation, genetic deterioration will happen. Further, this claim is reinforced by the work of Nobel Prize winner Hermann Muller, who said a deleterious mutation rate of even 0.5 per individual per generation would be sufficient to eventually terminate humanity. So the simple model I present is actually more generous than Muller’s. Current estimates of the number of bad mutations are well over 1.0 per individual per generation. There could be hundreds, perhaps thousands of bad mutations per individual per generation according to John Sanford. Larry Moran estimates 56–160 mutations per individual per generation. Using Larry’s low figure of 56 and generously granting that only about 11% of those are bad, we end up with 6 bad mutations per individual per generation: 6 times more than the cartoon model presented, and 12 times more than Muller’s figure that ensures the eventual end of the human race.

The first refinement of the cartoon model comes from Nachman and Crowell’s paper “Estimate of the Mutation Rate per Nucleotide in Humans” and “The Mutational Load” by Kimura. Nachman provides a way to relate mutation rates to the probability of having a eugenically “ideal” child.

I hypothesized that Nachman and Crowell were using a Poisson distribution as a reasonable model for the probability of a eugenically clean individual appearing in the face of various mutation rates. And sure enough, with a little sleuthing help from my UD colleague “JoeCoder”, it was confirmed in Kimura’s paper (see eqn. 1.4), which Nachman and Crowell, and Eyre-Walker and Keightley, referenced.

This was important because up until that realization, I felt uncomfortable not knowing how those probabilities were derived. But now that it is clear that professional population geneticists are using the Poisson distribution to estimate probabilities, there is transparency in their model, and that makes the cartoon model defensible. The Appendix Notes in the comment section will provide a justification for the Poisson distribution.

So now the details:

let U = mutation rate (per individual per generation)
P(0,U) = probability of individual having no mutation under a mutation rate U (eugenically the best)
P(1,U) = probability of individual having 1 mutation under a mutation rate U
P(2,U) = probability of individual having 2 mutations under a mutation rate U
etc.

The wiki definition of the Poisson distribution is:

P(k; λ) = λ^k e^(−λ) / k!

To conform the wiki formula with the evolutionary literature, let

λ = U

and

k = number of new mutations in an individual,

thus

P(k, U) = U^k e^(−U) / k!

Because P(0,U) = probability of individual having no mutation under a mutation rate U (eugenically the best), we can find the probability the eugenically best individual emerges by letting:

k = 0

which yields

P(0, U) = e^(−U)

Given that the Poisson distribution is a discrete probability distribution, the following idealization must hold:

Σ (k = 0 → ∞) P(k, U) = 1

thus

P(0, U) + Σ (k = 1 → ∞) P(k, U) = 1

thus

Σ (k = 1 → ∞) P(k, U) = 1 − P(0, U)

thus

Σ (k = 1 → ∞) P(k, U) = 1 − e^(−U)

On inspection, the left-hand side of the above equation must be the fraction of offspring that have at least 1 new mutation, and this reduces to the following:

fraction of offspring with at least 1 new mutation = 1 − e^(−U)

which is in full agreement with Nachman and Crowell’s equation in the very last paragraph, and in full agreement with an article in Nature, “High genomic deleterious mutation rates in hominids” by Eyre-Walker and Keightley, paragraph 2. The simplicity and elegance of the final result are astonishing, and simplicity and elegance lend force to arguments.

So what does this mean? If the mutation rate is 6 per individual per generation, using that formula, the chances that a eugenically “ideal” offspring will emerge are:

P(0, 6) = e^(−6) ≈ 0.25%

This would imply each parent needs to procreate the following number of kids on average just to get 1 eugenically fit kid:

1 / e^(−6) = e^6 ≈ 403

Or equivalently each couple needs to procreate the following number of kids on average just to get 1 eugenically fit kid:

2 × e^6 ≈ 807

In other words parents would have to be acting roughly like 100 Octomoms or 800 Richard Dawkins just to make one eugenically “ideal” baby that doesn’t have any new mutation (but still has all the bad mutations inherited from mom and dad). These calculations suggest, if Darwinism is true, the world needs far more women with the virtues of Octomom.
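The arithmetic behind the 807-kids figure is a one-liner; this sketch assumes U = 6 deleterious mutations per individual per generation, as above:

```python
import math

U = 6                      # assumed deleterious mutation rate per individual per generation
p_clean = math.exp(-U)     # P(0, U): chance of a mutation-free ("eugenically ideal") child

print(round(p_clean, 5))   # 0.00248, i.e. about 0.25%
print(round(1 / p_clean))  # ~403 kids per parent on average for one clean child
print(round(2 / p_clean))  # ~807 kids per couple on average
```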

For humanity to survive, even after each couple has 807 kids on average, we still have to make the further utterly unrealistic assumption that the eugenically “ideal” offspring are the only survivors of a selective process. Hence, it is absurd to think humanity can purge the bad out of its populations — the bad just keeps getting worse.

In truth, since most mutations are of nearly neutral effect, most of the damaged offspring will reproduce, and the probability of a eugenically ideal line of offspring approaches zero over time. Therefore the cartoon model, which assumes at least 1 new mutation per individual per generation, is reasonable, and as I pointed out, the cartoon model is actually generous given Muller’s number of only 0.5 new mutations per generation per individual. The cartoon, however, graphically conveys the gravity of the problem.

Finally, how does this relate to the flaws in Dawkins’ Weasel, Avida, or any other conceivable genetic algorithm falsely used to defend Darwinism? These models notoriously don’t allow the offspring to move progressively farther from a desirable ideal with each generation (as illustrated in the cartoon model). Dawkins assumes cumulative selection, but this suffers from the flaw of assuming that all descendants are at least as good as the ancestor (the implementation of Dawkins’ Weasel disguises this fact). Computer simulations that assume offspring are at least as good as parents are obviously flawed, and more subtly, simulations that allow offspring to be on average better than their parents are also flawed. I leave it to the developers of these simulations to fix their bugs and misconceptions. This is the 2nd refinement to the cartoon model: let developers of evolutionary simulations incorporate the above considerations into their programs.

We have models from nature that show “death of the fittest” better describes what’s going on in nature, and the notion of “survival of the fittest implies inevitable improvement with each generation” is Darwin’s, Dennett’s, and Dawkins’ delusion (DDDD).

26 Replies to “Fixation rate, what about breaking rate?”

  1. JoeCoder says:

    Thanks for the detailed writeup Sal. But one thing that you’re forgetting is that as load increases so does the variance in the number of deleterious mutations inherited.

    First to pick a deleterious mutation rate. Studies of conserved sequences put about 10% of nucleotides being “strictly” functional and ENCODE put it higher at 20% based on binding sites and exons. But let’s be generous to ID and say 100% are deleterious–or about 100 deleterious mutations per generation.

    Suppose we get to the point where Mom has 200 thousand deleterious mutations and so does dad. They have 5 kids. I think we can use stat trek’s binomial calculator with parameters 0.5, 200,000, and 200,000 / 2 – 100 = 49,950 to figure out how many of those 5 kids end up with less deleterious mutations than their parents. Divided by 2 because half the genome comes from each parent.

    The result is 41%. 2 Out of 5 kids are genetically superior to their parents. This is just barely enough to survive indefinitely under omnisciently strong selection. Increasing the number of offspring would make it easier. Interestingly this means any species that averages less than 5 offspring per generation is doomed for extinction, if I’m doing all this right.

    But does this solve the problem of genetic entropy? I still think not. It only works if all deleterious mutations are equal. If there are a few that are very deleterious and most are only slightly deleterious, selection only focuses on the very deleterious while the slightly deleterious ones still sneak in. I don’t know how to model this mathematically, but I believe Sanford’s paper in world scientific on the mutation count mechanism tackles this in more detail. I haven’t had a chance to read it yet.
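JoeCoder’s 41% figure can be reproduced with a normal approximation to his binomial model (a sketch using his parameters n = 200,000, p = 0.5 and the corrected cutoff of 99,950 from comment 3):

```python
import math

def normal_cdf(x, mean, sd):
    """Cumulative distribution of a normal variable, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

n, p = 200_000, 0.5                          # JoeCoder's binomial parameters
mean = n * p                                 # 100,000
sd = math.sqrt(n * p * (1 - p))              # ~223.6

# Fraction of kids with fewer deleterious mutations than their parents
frac_better = normal_cdf(99_950, mean, sd)   # P(X <= 99,950)
print(round(frac_better, 2))                 # ~0.41, i.e. about 2 of 5 kids
```

The normal approximation is appropriate here because n is very large and p = 0.5, where the binomial is nearly symmetric.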

  2. scordova says:

    I wrote above:

    Though the below formulas based on the Poisson distribution needs some amendments because of synergistic epistasis and the binomial distribution,

    JoeCoder provided the amendments that I was referring to, he states it better than I could and he was the first to point it out to me.

  3. JoeCoder says:

    Above that should be 99,950, not 49,950.

  4. scordova says:

    This is just barely enough to survive indefinitely under omnisciently strong selection.

Exactly, it is a generous assumption that selection works with 100% efficiency when it may, for all we know, be almost neutral even with a lot of mutations put together.

    We have theoretical and empirical reasons to suspect this. For example, the persistence of myopia and diabetes — diabetes in animals for example:

    http://en.wikipedia.org/wiki/Diabetes_in_dogs

Why hasn’t selection cleaned this out? If it can’t clean out something like this, why should we expect it to clean out other things, like slightly deleterious mutations?

    If selection were omniscient enough to build complexity but can’t weed out diabetes in dogs and humans, I’m doubtful it’s as strong as advertised.

One way to settle this: monitor genomes in real time. I suspect the results won’t be pretty; we’ll realize, as I suggested in 2007, that there will be almost unabated growth of mild, almost undetectable defects accumulating in the genome. We can detect the changes via gene sequencing, and we will not be able to reconcile the accumulation of these mutations with evolutionary theory, since these changes will accumulate free of selection.

    I wrote:

    a fundamental consequence of Sanford’s Genetic Entropy thesis is that there will be an unabated rise in Single Nucleotide Polymorphisms (SNPs) per generation per individual.

    http://www.uncommondescent.com.....roponents/

  5. scordova says:

    JoeCoder,

Deeply conserved regions have been knocked out in mice with no noticeable change. This suggests selection won’t find mutations in the huge megabases of such sequence as they mutate, thus the calculations of the Poisson distribution might hold for these regions.

    A testable hypothesis based on the above calculations is the mutations will increase in real time in these regions. That will falsify both selectionist theory and neutral theory for past evolution but affirm neutral theory for present evolution.

  6. JoeCoder says:

    If selection were omniscient enough to build complexity but can’t weed out diabetes in dogs and humans, I’m doubtful it’s as strong as advertised.

    The counter-argument is that with these the selective forces usually come into play after child-bearing years. Selection against the elderly isn’t anything with lasting effect.

    we’ll realize as I suggested in 2007, there will be almost unabated growth of mild, almost undetectable defects accumulating in the genome.

    Did you see the news about this a little over a year ago? As reported in Nature News:

Of 1.15 million single-nucleotide variants found among more than 15,000 protein-encoding genes, 73% arose in the past 5,000 years, the researchers report. On average, 164,688 of the variants — roughly 14% — were potentially harmful, and of those, 86% arose in the past 5,000 years. ‘There’s so many of [variants] that exist that some of them have to contribute to disease,’ says Akey

    Another source put it in simpler terms: “humans are carrying around larger number of deleterious mutations than they did a few thousand years ago”

However, every population geneticist will agree that if population increases over time, so will SNPs, especially if reproductive rates decrease, which entails less selection.

  7. JoeCoder says:

I knew about the megabase deletions of conserved sequences in mice having no effect on phenotype. I would guess they perhaps were redundant systems that only came into play if other genes were first knocked out. Denis Noble spoke a lot about this in his talk that got passed around several months ago. At 16:27:

Simply by knocking genes out we don’t necessarily reveal function, because the network may buffer what is happening. So you may need to do two knockouts or even three before you finally get through to the phenotype. … If one network doesn’t succeed in producing a component necessary to the functioning of the cell and the organism, then another network is used instead. So most knockouts and mutations are buffered by the network.

    And at 19:40:

Is this an unusual result, … or is it general? This study went through all 6000 genes in the organism yeast, knocking them out one by one. 80% of the knockouts were silent. So this physiological process of buffering against gene change is general. It’s usual in fact. Now that doesn’t mean to say that these proteins that are made as a consequence of gene templates for them don’t have a function. Of course they do. If you stress the organism you can reveal the function. … If the organism can’t make product X by mechanism A, it makes it by mechanism B.

These systems are never used unless the primary systems fail, and therefore they can’t be maintained by selection and should deteriorate through genetic drift. Moreover, they don’t show similarity to the primary systems, so they couldn’t have been created through duplications. Of course, what evolution can’t maintain it certainly can’t create. So we have a design pattern that matches our own designs and is the opposite of what evolution could do.

    What’s really interesting is that we see this same system of disparate redundancy in the best and most reliable of our own designs. From Walter Bright, who is a former Boeing engineer and a minor celebrity in my field of computer science, although I have no idea where he stands on the origins debate:

    All I know in detail is the 757 system, which uses triply-redundant hydraulic systems. Any computer control of the flight control systems (such as the autopilot) can be quickly locked out by the pilot who then reverts to manual control. The computer control systems were dual, meaning two independent computer boards. The boards were designed independently, had different CPU architectures on board, were programmed in different languages, were developed by different teams, the algorithms used were different, and a third group would check that there was no inadvertent similarity. An electronic comparator compared the results of the boards, and if they differed, automatically locked out both and alerted the pilot. And oh yea, there were dual comparators, and either one could lock them out. This was pretty much standard practice at the time. Note the complete lack of “we can write software that won’t fail!” nonsense. This attitude permeates everything in airframe design, which is why air travel is so incredibly safe despite its inherent danger.

    And here:

    I am continually amazed at critical systems design in Fukushima and Deep Water Horizon that have no backups or overrides. I’d fire any engineer that came to me the second time with a critical system design he argues “can’t fail” and doesn’t need a backup/override.

  8. Axel says:

    Am I missing something? Selection is a function of intelligence.

    Sunflowers don’t choose to face the sun, Venus Flytraps don’t choose to eat the wee beasties they devour. It’s that old animism faith again, isn’t it? No clarity of reasoning, no amount of reasoning, can free them from the shackles of the hunter-gatherer faith that binds them.

  9. JoeCoder says:

    Why not just pull out the DNA and make a million or a billion random nucleotide substitutions and then put it back?

    At this point there’s already so much evidence for evolution that wikipedia has to have a continual fundraiser just to pay for servers to host all of it. The final blow was when we discovered that babies could evolve into adults in a mere 9 months. If creationists can’t be convinced by that nothing will change their minds. /s

  10. scordova says:

The diabetes in dogs and humans has appeared in juvenile form. The occurrence in dogs is 1 in 500. If NS can’t weed that out, it just strikes me as strange to believe it can weed out even smaller defects. If the “dog/human” split was tens of millions of years ago, why is it still there?

Sharon Moalem said it means diabetes must then be a positive trait since it persisted! That was my complaint about “Survival of the Sickest” and Darwinism in general. You get these odd kinds of “solutions”…

  11. scordova says:

    An enigma I posed in VJ Torley’s thread.

    If sharks are living fossils, how come individual sharks aren’t as diverged from other sharks as sharks are from other species? The molecular clock should not have stopped ticking for the regions that make them identical in the present day!

    So whatever we conclude for humans, in living fossils the problem is far far worse.

  12. vjtorley says:

    Hi Sal,

    Great post. There’s a lot of food for thought here, and you have gone to great lengths to make it accessible to the general reader. I shall ponder what you’ve written carefully over the next few days. Thanks once again.

  13. scordova says:

    JoeCoder,

    Regarding the nuance you were quite keen to point out, the generous assumption both you and I granted was that bad and good traits are separable.

    The problem is that when recombination happens between mom and dad, there may be linkage blocks with the result that the bad must necessarily be inherited with the good for that block.

The most obvious and extreme form of selection interference is when there is tight physical linkage between beneficial and deleterious mutations. This results in an irreconcilable problem referred to as “Muller’s Ratchet”. One of the most obvious requirements of natural selection is the ability to separate good and bad mutations. This is not possible when good and bad mutations are physically linked.

    John Sanford, Genetic Entropy
    p. 81

    I have not confirmed this with John, but he may be referring to Haplotype blocks.

  14. JoeCoder says:

    If sharks are living fossils, how come individual sharks aren’t as diverged from other sharks as sharks are from other species? The molecular clock should not have stopped ticking for the regions that make them identical in the present day!

    Could be background selection removing any sharks that become too divergent. Of course that would require selection to act on most of the genome in turn meaning that most of the genome is strictly functional.

  15. scordova says:

    Could be background selection removing any sharks that become too divergent. Of course that would require selection to act on most of the genome in turn meaning that most of the genome is strictly functional.

    But as you rightly pointed out with deeply conserved sequences, functional doesn’t necessarily mean selectable.

    http://www.uncommondescent.com.....omplexity/

We can test in real time whether sharks are diverging in their deeply conserved regions. The ID/Creationist prediction: the divergence will be unabated in such a way as to:

    1. refute selection in the past and present
    2. refute neutral evolution in the past
    3. affirm neutral evolution in the present

    This situation then becomes an ID inference just like the 500 coins example — why aren’t the genomes randomized after 350 million years since they are mostly free of selection?

  16. scordova says:

    Suppose we get to the point where Mom has 200 thousand deleterious mutations and so does dad. They have 5 kids. I think we can use stat trek’s binomial calculator with parameters 0.5, 200,000, and 200,000 / 2 – 100 = 49,950 to figure out how many of those 5 kids end up with less deleterious mutations than their parents. Divided by 2 because half the genome comes from each parent.

    There is a subtle modeling assumption here that may invalidate this supposition and it is one you pointed out, but it is worse than that. You correctly said:

    It only works if all deleterious mutations are equal.

It is worse than that: it assumes dysfunctional mutations actually get negative selection coefficients, and we know this is not the case. For example, a backup system, such as those likely encoded in deeply conserved regions, is metabolically expensive. Unless it is needed under an environmental stress, its selection coefficient is negative even though it is functional.

Thus for the binomial distribution to work as you suggest, the selection coefficients must be persistently negative for the bad mutations and positive for the good ones. This has never been proven, and in the case of deeply conserved regions that may be functional it is a dubious assumption; hence, for such regions, the original Poisson model I presented above (the same as Nachman and Crowell’s) is possibly generous.

One way to settle it is real-time analysis. Given the fact of blind cave fish and juvenile diabetes in mammals, I’m not expecting selection will be shown to work as well as advertised. If selection can’t weed out juvenile diabetes and a host of other juvenile diseases that are evident in all mammals, then I don’t expect it will weed out far smaller dysfunctions in the genome.

  17. scordova says:

    Before I forget, an article JoeCoder made me aware of:

    http://news.indiana.edu/releas.....ered.shtml

    It points out some proteins not expressed except under environmental stress.

  18. JacobyShaddix says:

    Hi Sal

    Perhaps you should read up on this. Here is what Laurence Moran wrote about this in 2009: http://sandwalk.blogspot.co.uk.....-junk.html

    I’ll give you the highlights:

    A species cannot afford to accumulate deleterious mutations in the genomes of its individuals. Eventually the number of “bad” mutations will reach a level where most genes have multiple “bad” alleles and it becomes impossible to produce offspring.

    This phenomenon is referred to as genetic load. It means that species can only survive if the genetic load is below some minimum value. A good rule of thumb is that there can’t be more than 0.1 deleterious mutations per individual per generation but in actual populations this value can be a bit higher.

    This is one of the arguments in favor of Neutral Theory. Most mutations are neither deleterious nor beneficial. They are simply neutral with respect to natural selection.

Let’s think about a typical protein-encoding gene.1 The coding region is about 2,000 base pairs in length and consists of 666 codons. More than half these codons can be mutated to a new codon encoding a different amino acid without severe effects on the function of the protein.2 These are called amino acid substitutions. Of the “essential” codons, many can tolerate mutations that create synonymous codons. Putting these facts together suggests that only about 20% of mutations to protein encoding regions are detrimental. The rest are effectively neutral.

    That’s still unacceptably high. It leads to the idea that a large percentage of our genome must be unaffected by mutations. In other words, genes represent only a small percentage of our genome and mutations can freely accumulate in the rest without detrimental consequences.

In order to bring the genetic load down to acceptable levels, the number of genes has to be less than 40,000 according to the arguments made in the 1960s. We now know that we have only 20,000 genes. Most of them encode proteins and the coding regions of those genes make up about 40,000,000 bp or about 1.3% of our genome.

    Recall that only 20% of mutations in coding regions are likely to be detrimental. That means that the effective target size for detrimental mutations is about 20% x 1.3% = 0.26% of our genome. Out of 130 mutations, only 0.3 per individual per generation will be detrimental.3

    Since we are diploid organisms, the 130 mutations in the zygote are spread out over two copies of our genome but almost all of them will be in the chromosomes coming from the father. Every zygote inherits one complete set of chromosomes with hardly any mutations while the other set has less than one detrimental mutation.

    Because a large percentage of gene mutations are neutral, and because most of our genome is junk, we can easily tolerate 130 mutations per individual per generation without going extinct.

    Creationists will never understand this because: (a) they believe that modern evolutionary theory is all about “Darwinism” and Darwinian evolution doesn’t recognize neutral mutations and random genetic drift, and (b) they can’t admit to junk DNA because that’s the opposite of what intelligent design would look like.
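The load arithmetic in the quoted passage can be sanity-checked in a couple of lines (a quick sketch using Moran’s own figures: 130 mutations per individual per generation, ~1.3% coding, ~20% of coding mutations detrimental):

```python
total_mutations = 130           # new mutations per individual per generation (Moran's figure)
coding_fraction = 0.013         # ~1.3% of the genome is protein-coding
detrimental_fraction = 0.20     # ~20% of coding-region mutations are detrimental

detrimental = total_mutations * coding_fraction * detrimental_fraction
print(round(detrimental, 2))    # ~0.34, which Moran rounds to "0.3 per individual per generation"
```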

  19. scordova says:

    This is one of the arguments in favor of Neutral Theory. Most mutations are neither deleterious nor beneficial. They are simply neutral with respect to natural selection.

    Neutral with respect to selection does not mean neutral with respect to function. Something can be functional but not selectively visible or in some cases negatively selectable such as in cases that follow Behe’s rule of adaptive evolution.

    Creationists will never understand this because: (a) they believe that modern evolutionary theory is all about “Darwinism” and Darwinian evolution doesn’t recognize neutral mutations and random genetic drift, and (b) they can’t admit to junk DNA because that’s the opposite of what intelligent design would look like.

    Wrong. Moran doesn’t understand neutral does not equate to non-functional, and sometimes something can be functional and selectively disfavored.

Moran’s hero is Richard Dawkins, who has 1 child. Octomom, who is dysfunctional, has 14 times as many kids as Dawkins. According to evolutionary theory, Octomom has shown herself to be selectively advantaged because of her desire to make babies and spread her genes, whereas Dawkins himself said he prefers to spread his memes. Imho, Dawkins is more functional than Octomom, but you can’t determine that just by looking at which phenotype made more offspring (like evolutionary theory does), can you? 🙂

  20. JoeCoder says:

    Putting these facts together suggests that only about 20% of mutations to protein encoding regions are detrimental. The rest are effectively neutral.

I remember reading that article on Sandwalk a while back. I think Dr. Moran’s numbers are way off even for ENCODE critics.

    This paper puts it at “0.70 ± 0.06” — 70% of mutations within fly protein coding exons are deleterious. Not 20%. Another paper estimated “30–50% of single amino acid mutations [within protein coding regions] are strongly deleterious, 50–70% are neutral or slightly deleterious and 0.01–1% are beneficial.”

    It’s also wrong to only consider amino acid altering mutations. In Salmonella, even “mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids”.

    Finally, consider that “Because most mutations are deleterious, the probability that a variant retains its fold and function declines exponentially with the number of random substitutions.” This means that mutations that are initially neutral will still contribute to the rust that will eventually make the bumper fall off.
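The exponential decline quoted above can be sketched numerically. Assuming (as a hypothetical illustration, not a measured figure) that each random substitution independently leaves the fold/function intact with probability p, retention after n substitutions is p**n:

```python
# Sketch of the exponential decay of function retention quoted above.
# Assumption (illustrative): each substitution independently retains
# fold/function with probability p.
def retention_probability(p: float, n: int) -> float:
    """Probability a protein still folds/functions after n random substitutions."""
    return p ** n

# With p = 0.7 (i.e., ~30% of substitutions destroy function),
# retention falls off quickly as substitutions accumulate:
for n in (1, 5, 10, 20):
    print(n, retention_probability(0.7, n))
```

Even under a generous per-substitution retention rate, the product shrinks geometrically, which is the point of the quoted passage.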

    ENCODE puts the fraction of strictly functional sequence at at least 20%. But if you don’t like ENCODE, then fine. Just based on conserved sequences, about 10% is functionally constrained. That means anything that mutated one of those 300 million nucleotides didn’t live to tell about it:

    Our results suggest that between 200 and 300 Mb (6.7%–10.0%) of the human genome is under functional constraint. This estimate was arrived at as follows. First, the amount of human genome under functional constraint is at least 200 Mb, the upper-bound estimate for human and horse made in a divergence regime associated with conservative estimations, according to our simulations. Second, the indicative higher estimate of 300 Mb was obtained by extrapolating the trend for lower-bound estimates involving human … methods for inferring quantities of functional DNA rest upon the hypothesis that in functional sequence most nucleotide changes are detrimental, causing such changes to be purged from the species’ populations, which results in evolutionarily conserved sequence. … the true quantity of functional material in mammalian genomes may be around 300 Mb (10% of the human genome) … these values may underestimate the true level of constraint
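The percentages in the quoted passage follow from simple division against an approximately 3,000 Mb human genome:

```python
# Convert the constrained-sequence estimates quoted above into genome fractions.
GENOME_MB = 3000  # approximate human genome size in megabases

for constrained_mb in (200, 300):
    pct = 100 * constrained_mb / GENOME_MB
    print(f"{constrained_mb} Mb -> {pct:.1f}% of the genome")
```

This reproduces the paper’s stated range of 6.7%–10.0%.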

  21.
    RodW says:

    Sal said

    Neutral with respect to selection does not mean neutral with respect to function. Something can be functional but not selectively visible or in some cases negatively selectable such as in cases that follow Behe’s rule of adaptive evolution.
    and

    Wrong. Moran doesn’t understand neutral does not equate to non-functional, and sometimes something can be functional and selectively disfavored.

    Clearly Moran does understand this. He mentioned mutations that create synonymous codons. These mutations occur in protein-coding genes. As a biochemist at a major university, it’s safe to say he knows that proteins have function.

    This issue of function is irrelevant to the issue at hand, which is how many harmful mutations a population of organisms can endure.

    Harmful mutations = mutations that create a new and harmful function, or mutations that disable a necessary function.
    Neutral mutations = mutations in non-functional regions that do not create a new and harmful function, or mutations in functional regions that do not change the function in a harmful way.

  22.
    scordova says:

    RodW,

    Neutral means neutral with respect to differential reproductive success; it says nothing of function.

    How does Moran define junk? By it being neutral?

    Because a large percentage of gene mutations are neutral, and because most of our genome is junk, we can easily tolerate 130 mutations per individual per generation without going extinct.

    JoeCoder and I gave examples where this is demonstrably false. A system can be functional and selectively neutral or possibly selectively deleterious in given environments.
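For context, Moran’s load figure quoted above can be turned into a back-of-envelope estimate. The functional and deleterious fractions below are illustrative assumptions drawn from the numbers discussed earlier in the thread, not measured values:

```python
# Back-of-envelope sketch of the genetic-load arithmetic (illustrative
# assumptions only -- the fractions are not measured values).
mutations_per_generation = 130  # per individual, Moran's figure quoted above
functional_fraction = 0.10      # assume ~10% of the genome is under functional constraint
deleterious_fraction = 0.70     # assume ~70% of hits to functional DNA are deleterious

# Expected new deleterious mutations per individual per generation
expected_deleterious = (mutations_per_generation
                        * functional_fraction
                        * deleterious_fraction)
print(round(expected_deleterious, 1))  # roughly 9 per individual per generation
```

Under these assumed fractions the expected deleterious hits per generation are far from negligible, which is why the choice of functional fraction matters so much to the argument.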

    I gave poignant examples of flawed thinking as far back as 2006:
    Airplane Magnetos, Contingency Designs.

    How does Moran establish something as junk? Because he doesn’t personally witness function. The link provided above shows the fallacy of such a presumption, and I provide the link again, which states:

    http://news.indiana.edu/releas.....ered.shtml

    Those environmental stresses resulted in small changes in expression level at thousands of genes; and in one treatment, four newly modeled genes were expressed altogether differently. In total, 5,249 transcript models for 811 genes were revealed only under perturbed conditions.

    How does Larry know something is junk? Has he explored every possible pathway that might reveal expression in genes he thought were dead? No, he just assumes it. For example, those transcript models might never have been detected. It is kind of disturbing that biologists might not see a gene expressed in a certain way and then assume the DNA is junk. That Bloomington study actually found genes in supposedly gene-free zones.

    As far as Larry teaching at a university, here is an interesting fact: many of the faculty there disagree with Larry:

    Yet Moran provided no justification for his ex cathedra pronouncement that the 95% figure has been discredited. He simply brushed aside the 2008 and 2010 articles in Nature and Nature Genetics and their eighteen co-authors–nine of whom listed their affiliation as Moran’s own institution, the University of Toronto. – See more at: http://www.evolutionnews.org/2.....izs94.dpuf

  23.
    JoeCoder says:

    Looks like Dr. Moran has blogged about this post, and some of my comments here are quoted in the comments there.

  24.
    RodW says:

    Sal,

    Ok, sorry. I didn’t realize you had shifted the conversation to one about ‘junk DNA’. I suppose it’s true that the genetic load argument doesn’t prove that most of the genome is junk. What it says is that random mutations in ~90% of the genome will have no effect on fitness. It seems to me one has to have a pretty flexible notion of ‘function’ in this case.
    It’s very hard to prove a negative (in this case, that a stretch of DNA has no function), but we can come pretty close by analysing some locus in detail. The amount of this that’s been done, along with our understanding of the overall nature of sequences in the genome, suggests pretty strongly that most of it is junk. I’ve gotten the impression that LM thinks it’s 90%, but I can imagine it being as low as 50% as we unravel layers of regulatory complexity (and with a more flexible definition of ‘function’).

  25.
    scordova says:

    The problem isn’t reproductive success; it is loss of function. Because “deleterious” and “beneficial” are not invariant concepts like mass and charge, this leads to all sorts of problems in conceptualizing biology. I address it here.

    http://www.uncommondescent.com.....s-neutral/

    Thanks JoeCoder for all your help.

  26.
    JoeCoder says:

    This is three years old, but I still want to correct something I wrote:

    70% of mutations within fly protein coding exons are deleterious.

    This should say 70% of amino acid altering mutations, not total mutations.
