Intelligent Design

A short post on fixation


In a recent post, Professor Larry Moran accused me of shifting the goalposts in order to avoid a discussion about whether evolution could account for the fixation of 22.4 million mutations in the human lineage since we broke away from the chimps five million years ago. Not being one to run away from a controversy, I’ve decided to make this question the topic of today’s post.

I’d like to begin by defining the neutral theory of evolution:

“This neutral theory claims that the overwhelming majority of evolutionary changes at the molecular level are not caused by selection acting on advantageous mutants, but by random fixation of selectively neutral or very nearly neutral mutants through the cumulative effect of sampling drift (due to finite population number) under continued input of new mutations.”
(Motoo Kimura, “The neutral theory of molecular evolution: A review of recent evidence,” Japanese Journal of Genetics 66, 367–386 (1991)).

And here’s a handy definition of the term “genetic fixation”:

1. the increase of the frequency of a gene by genetic drift until no other allele is preserved in a specific finite population.
(Stedman’s Medical Dictionary. Copyright 2006 Lippincott Williams & Wilkins.)

“Evidence, please!”

In a previous post, I asked for some experimental evidence to back up Professor Moran’s claim that 22.4 million nearly neutral alleles could have become fixed in the human genome during the last five million years. Were there any other organisms – bacteria, for instance – exhibiting the fixation rate predicted by evolutionary theory for neutral alleles?

Professor Moran kindly provided an example, in his response to my post:

Fortunately for Torley, there are a number of papers that answer his question. The one that I talk about in class is from Richard Lenski’s long-term evolution experiment. Recall that mutation rates are about 10^-10 per generation. If the fixation rate of neutral alleles were equal to the mutation rate (as predicted by population genetics), then this should be observable in the experiment run by Lenski (now 60,000 generations).

The result is just what you expect. The total number of neutral allele fixations is 35 in the bacterial cultures, and this corresponds to a mutation rate of 0.9 × 10^-10, or only slightly lower than what is predicted. There are lots of references in the paper and lots of other papers in the literature.

Wielgoss, S., Barrick, J. E., Tenaillon, O., Cruveiller, S., Chane-Woon-Ming, B., Médigue, C., Lenski, R. E. and D. Schneider (2011) Mutation rate inferred from synonymous substitutions in a long-term evolution experiment with Escherichia coli. G3: Genes, Genomes, Genetics 1, 183-186. [doi: 10.1534/g3.111.000406]

The 12 evolving E. coli populations in Richard Lenski’s long term evolution experiment, on June 25, 2008. Image courtesy of Wikipedia.

Initially, I was very impressed with Lenski’s paper, and I was inclined to think that Professor Moran had proved his point. Scientia locuta est, causa finita est. Or so I thought.

A skeptical biochemist

It was then that I was contacted by a scientist who wrote to me, arguing that the fixation of 22.4 million mutations in the human lineage during the last five million years, by any combination of selection and genetic drift, was impossible and nonsensical for any population of organisms, especially when we consider the pattern of fixation. Strong words! Who was this mysterious scientist? Readers might be surprised to learn that he’s a biochemist with a very impressive track record: Branko Kozulic, whom I introduced to readers in a previous post titled The Edge of Evolution? A short summary of his career achievements is available here. Dr. Kozulic also serves on the editorial board of the Intelligent Design journal Bio-Complexity.

By now I was intrigued. Here was a prominent biochemist disagreeing with the arguments of another prominent biochemist! (Larry Moran is a Professor of Biochemistry at the University of Toronto.) Who was right? I decided to investigate the matter further.

There are three different mutation rates

Dr. Kozulic pointed out that we need to distinguish between three different mutation rates:

(a) the number of mutations per base pair per generation, which is indeed roughly constant across all organisms;

(b) the number of mutations per individual per generation, which varies widely between different kinds of organisms, for reasons that I’ll discuss below; and

(c) the total number of mutations entering the population per generation, which is equal to “the number of gametes produced each generation, 2N, times the probability of a mutation in any one of them, u.” (John Gillespie, Population Genetics: A Concise Guide, Johns Hopkins University Press, 2004, pp. 32-33.)

Professor Moran does make this distinction in some of his posts – for example, this one, where he states that there is “one mutation in every 10 billion base pairs that are replicated,” and then goes on to say that there are “133 new mutations in every zygote.”
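The three rates can be kept straight with a quick back-of-the-envelope sketch in Python. The per-bp rate and genome size below are round figures in the range quoted in the post; the effective population size is an assumption added purely for illustration:

```python
# (a), (b) and (c) from the list above, with illustrative round numbers.
u_bp = 2e-8          # (a) mutations per bp per generation (assumed, ~human range)
genome_size = 6.4e9  # diploid human genome in bp (approximate)
N = 10_000           # effective population size (assumed for illustration)

# (b) mutations per individual (zygote) per generation
u_ind = u_bp * genome_size

# (c) total mutations entering the population per generation:
# 2N gametes times the per-gamete rate u_ind / 2 (Gillespie's 2N * u)
u_pop = 2 * N * (u_ind / 2)

print(f"(a) per-bp rate:      {u_bp:.0e}")
print(f"(b) per individual:   {u_ind:.0f} new mutations per zygote")  # ~128, near Moran's 133
print(f"(c) population input: {u_pop:,.0f} new mutations per generation")
```

Note how (b) and (c) differ by a factor of N even though both derive from the same per-bp rate, which is why “the mutation rate” is ambiguous without a qualifier.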

Which mutation rate is the fixation rate equal to?

In the passage cited above, Professor Moran referred to Lenski’s long term evolution experiment:

If the fixation rate of neutral alleles were equal to the mutation rate (as predicted by population genetics), then this should be observable in the experiment run by Lenski (now 60,000 generations).

Did you notice the reference to “the mutation rate”? As we saw above, there are three mutation rates. In chapter two of his book, Population Genetics: A Concise Guide (Johns Hopkins University Press, Baltimore, second edition, 2004), which I’ve recently been perusing, evolutionary biologist John Gillespie repeatedly refers to the mutation rate for a given locus. And in population genetics, altering the numerical relationship between the mutation rate and the (effective) population size can lead to dramatically different results. For example, Gillespie, in the textbook referred to above, writes:

If 1/u << N, the time scale of mutation is much less than drift, leading to a population with many unique alleles. If N << 1/u, the time scale of drift is shorter, leading to a population devoid of variation. (2004, p. 31)
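Gillespie’s two regimes can be illustrated with the equilibrium heterozygosity of the neutral infinite-alleles model, H = θ/(1 + θ) with θ = 4Nu (a minimal sketch; the parameter values are assumptions chosen only to make the contrast visible):

```python
# Equilibrium heterozygosity under the neutral infinite-alleles model.
def heterozygosity(N, u):
    theta = 4 * N * u
    return theta / (1 + theta)

# Regime 1: 1/u << N (mutation outpaces drift) -> many unique alleles
H_high = heterozygosity(N=1e7, u=1e-4)   # 1/u = 1e4 << N = 1e7

# Regime 2: N << 1/u (drift outpaces mutation) -> little variation
H_low = heterozygosity(N=1e2, u=1e-8)    # N = 100 << 1/u = 1e8

print(f"1/u << N : H = {H_high:.4f}")    # close to 1: highly variable population
print(f"N << 1/u : H = {H_low:.2e}")     # close to 0: population nearly uniform
```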

Professor Moran is kindly requested to state whether he agrees with this statement, and if not, to provide some references to support his views.

Fixation in human beings: five orders of magnitude faster than in Lenski’s bacteria!

In the passage cited above, Professor Moran referred to Lenski’s results with E. coli bacteria: a mere 35 fixations after 60,000 generations. That’s about 0.0006 fixation events per generation, for the population as a whole.

By contrast, the fixation rate which Professor Moran claims for human beings (130 per generation) is roughly 200,000 times faster than the rate Lenski observed for his bacteria. That’s a difference of over five orders of magnitude! This difference in fixation rates requires an explanation. Do we agree on this point, Professor Moran?
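The arithmetic behind that comparison, using only the two figures quoted in this post:

```python
# Population-wide fixation rates quoted above.
ecoli_fixations = 35
ecoli_generations = 60_000
ecoli_rate = ecoli_fixations / ecoli_generations   # fixations per generation

human_rate = 130                                   # claimed fixations per generation

ratio = human_rate / ecoli_rate
print(f"E. coli: {ecoli_rate:.2e} fixations/generation")
print(f"ratio:   {ratio:,.0f}x")                   # ~220,000x, i.e. > 5 orders of magnitude
```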

Finding the cause that explains the pattern

Now, clearly something was responsible for producing the 22.4 million neutral alleles that distinguish the human lineage from that of chimpanzees. Nobody disputes that. What Dr. Kozulic rejects is the idea that all these mutations could have been fixed by any undirected process (e.g. random mutations plus natural selection, or plus genetic drift), within the time available, especially when we consider the pattern of fixed mutations.

I’d now invite readers to have a look at an article by Rasmus Nielsen et al., titled, A Scan for Positively Selected Genes in the Genomes of Humans and Chimpanzees (PLoS Biology, 3(6): e170. doi:10.1371/journal.pbio.0030170, published May 3, 2005). In particular, I’d like readers to check out Figure 1, showing the distributions of nonsynonymous and synonymous nucleotide differences among genes, for the chimpanzee sequence.

What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years.

In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain. Until Professor Moran comes up with an explanation of his own, and some research to back it up, the ball is squarely in his court.

Over to you, Professor Moran.

57 Replies to “A short post on fixation”

  1. bornagain77 says:

    Anyone who refers to Lenski’s work as their main supporting evidence for ‘bottom up’ evolution is definitely not looking at the Lenski experiment in an unbiased manner:

    Richard Lenski’s Long-Term Evolution Experiments with E. coli and the Origin of New Biological Information – September 2011
    Excerpt: The results of future work aside, so far, during the course of the longest, most open-ended, and most extensive laboratory investigation of bacterial evolution, a number of adaptive mutations have been identified that endow the bacterial strain with greater fitness compared to that of the ancestral strain in the particular growth medium. The goal of Lenski’s research was not to analyze adaptive mutations in terms of gain or loss of function, as is the focus here, but rather to address other longstanding evolutionary questions. Nonetheless, all of the mutations identified to date can readily be classified as either modification-of-function or loss-of-FCT.
    (Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” Quarterly Review of Biology, Vol. 85(4) (December, 2010).)
    http://www.evolutionnews.org/2.....51051.html

    Rose-Colored Glasses: Lenski, Citrate, and BioLogos – Michael Behe – November 13, 2012
    Excerpt: Readers of my posts know that I’m a big fan of Professor Richard Lenski, a microbiologist at Michigan State University and member of the National Academy of Sciences. For the past few decades he has been conducting the largest laboratory evolution experiment ever attempted. Growing E. coli in flasks continuously, he has been following evolutionary changes in the bacterium for over 50,000 generations (which is equivalent to roughly a million years for large animals). Although Lenski is decidedly not an intelligent design proponent, his work enables us to see what evolution actually does when it has the resources of a large number of organisms over a substantial number of generations. Rather than speculate, Lenski and his coworkers have observed the workings of mutation and selection.,,,
    In my own view, in retrospect, the most surprising aspect of the oxygen-tolerant citT mutation was that it proved so difficult to achieve. If, before Lenski’s work was done, someone had sketched for me a cartoon of the original duplication that produced the metabolic change, I would have assumed that would be sufficient — that a single step could achieve it. The fact that it was considerably more difficult than that goes to show that even skeptics like myself overestimate the power of the Darwinian mechanism.
    http://www.evolutionnews.org/2.....66361.html

    Genetic Entropy Confirmed (in Lenski’s e-coli) – June 2011
    Excerpt: No increases in adaptation or fitness were observed, and no explanation was offered for how neo-Darwinism could overcome the downward trend in fitness.
    http://crev.info/content/11060....._confirmed

    Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
    Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
    http://www2.cnrs.fr/en/1867.htm?theme1=7

    The Mutational Meltdown in Asexual Populations – Lynch
    Excerpt: Loss of fitness due to the accumulation of deleterious mutations appears to be inevitable in small, obligately asexual populations, as these are incapable of reconstituting highly fit genotypes by recombination or back mutation. The cumulative buildup of such mutations is expected to lead to an eventual reduction in population size, and this facilitates the chance accumulation of future mutations. This synergistic interaction between population size reduction and mutation accumulation leads to an extinction process known as the mutational meltdown,,,
    http://www.oxfordjournals.org/.....84-339.pdf

  2. bornagain77 says:

    Of interest to the mutation rate of Lenski’s e-coli:

    New Work by Richard Lenski:
    Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
    http://www.evolutionnews.org/2.....enski.html

    Lenski’s Long-Term Evolution Experiment: 25 Years and Counting – Michael Behe – November 21, 2013
    Excerpt: Twenty-five years later the culture — a cumulative total of trillions of cells — has been going for an astounding 58,000 generations and counting. As the article points out, that’s equivalent to a million years in the lineage of a large animal such as humans.,,,
    ,,,its mutation rate has increased some 150-fold. As Lenski’s work showed, that’s due to a mutation (dubbed mutT) that degrades an enzyme that rids the cell of damaged guanine nucleotides, preventing their misincorporation into DNA. Loss of function of a second enzyme (MutY), which removes mispaired bases from DNA, also increases the mutation rate when it occurs by itself. However, when the two mutations, mutT and mutY, occur together, the mutation rate decreases by half of what it is in the presence of mutT alone — that is, it is 75-fold greater than the unmutated case.
    Lenski is an optimistic man, and always accentuates the positive. In the paper on mutT and mutY, the stress is on how the bacterium has improved with the second mutation. Heavily unemphasized is the ominous fact that one loss of function mutation is “improved” by another loss of function mutation — by degrading a second gene. Anyone who is interested in long-term evolution should see this as a baleful portent for any theory of evolution that relies exclusively on blind, undirected processes.
    http://www.evolutionnews.org/2.....79401.html

  3. Moose Dr says:

I know I sometimes get overwhelmed by the question of which numbers we are talking about. It seems to me that the 22.4 million mutation difference between human and chimp is the mutations found in the genes that humans and chimps have in common. Meanwhile, the calculation of how many mutations should fix due to drift, as measured in the bacteria, covers the entire DNA of the bacteria, including the non-coding DNA.

    If I am right, he is comparing apples to oranges.

    The shared genes should be particularly resistant to mutation because they are active regions. We know, for instance, that in critical DNA no mutations stick (ultraconserved) because all mutations are detrimental. It is a whole lot easier to find non-detrimental mutations in low-priority DNA than it is in the DNA that actually prescribes the shape of the mechanical parts of the organism.

  4. wd400 says:

Every time you’ve written one of these posts you’ve added an addendum acknowledging an elementary mistake. Do you think maybe you should learn the first thing about population genetics before you make these claims?

In this case the answer to “Which mutation rate is the fixation rate equal to?” is all of them. The fixation rate per base is equal to the individual mutation rate per base. The fixation rate per genome is the same as the individual mutation rate per genome.

    In the passage cited above, Professor Moran referred to Lenski’s results with E. coli bacteria: a mere 35 fixations after 60,000 generations. That’s about 0.0006 fixation events per generation, for the population as a whole.

The E. coli genome is more than 600 times smaller than the human genome and has a mutation rate ~2 orders of magnitude lower (in part because there are multiple cell divisions per generation in animals).

    Using back of an envelope numbers,

    5e6 bases * a mutation rate of 1e-10 per base * 60 000 generations gives a prediction of… 30 mutations.
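That back-of-the-envelope figure checks out (a quick sketch using wd400’s own round numbers):

```python
# wd400's E. coli prediction: genome size * per-bp rate * generations.
genome = 5e6   # E. coli genome in bp (approximate)
u = 1e-10      # mutations per bp per generation (approximate)
gens = 60_000

expected = genome * u * gens
print(f"expected neutral fixations: {expected:.0f}")  # ~30, vs the 35 observed
```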

    What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years.

    Yes. In a gene 1200 bases long you’d expect

1e-8 mutations per base * 1200 bases * 240,000 generations * 2 independently evolving lineages gives an expectation of ~6 changes per gene. Taking into account larger genes, and the variance inherent in a Poisson process like mutation, 21 is not very surprising.
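The same per-gene expectation, parameterised by gene length (a sketch; the gene lengths are illustrative):

```python
# E[fixed changes per gene] = u * gene length * generations * lineages
def expected_changes(gene_length_bp, u=1e-8, generations=240_000, lineages=2):
    return u * gene_length_bp * generations * lineages

for length in (1_200, 2_400, 4_500):  # typical and larger genes (illustrative)
    print(f"{length:>5} bp gene: ~{expected_changes(length):.1f} expected changes")
```

On this arithmetic a 4,500-bp gene already expects ~21–22 changes, before even invoking Poisson variance.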

    For example Gillespie, in the textbook referred to above, writes…[snip text book relationship between Ne and heterozygosity]

    I don’t know what the point of this quote is. Small populations contain less diversity because variants fix more quickly. What has that to do with this post?

  5. gpuccio says:

    VJ:

Without going into the technical details that you so patiently outline, I would like to state once more what is a very simple concept, but one that is often overlooked as soon as Darwinists start debating their beloved neutral theory.

    The idea is simple. The problem for any non design theory of biological information is to explain how functional information arises.

Darwinists usually agree with us (temporarily) that complex functional information cannot arise merely through random processes (that is, through a random search or random walk). That’s why they invoke their beloved Natural Selection.

    But, when the dramatic limitations of NS, and the complete absence of any empirical support for its role in macroevolution, are suggested to them, suddenly they seem to forget what they have said before, and go back to the idea of neutral mutations. But neutral mutations are by definition random, and we are again in the initial scenario (complex functional information cannot arise by random processes alone).

    Here comes to their (apparent) rescue the magical genetic drift: “Well, neutral mutations can be fixed just the same, by genetic drift!” (that is usually said with a great self-satisfied smile on the darwinist’s face).

    Correct. And so?

    Everybody seems to overlook the important point. I will try to state it here as simply as possible:

If it is true that a random mutation can become fixed by genetic drift, it is equally true that such a process can happen with the same probability for any random neutral mutation. IOWs, the process is simply a random aspect of Random Variation. The probability of each mutation being present in the genome remains exactly the same.

    What does that mean? It means that genetic drift, while remaining an interesting phenomenon, is completely irrelevant to the generation of functional information.

    The only important quantities in the computation of the system where functional information arises are:

    a) The random search/random walk, which can be defined by two parameters:

    a1) How improbable the functional state is versus the search space (the target space / search space ratio).

a2) How many states can be reached / tested by the random variation process.

b) Any non-random components of the process (such as NS), whose role must be understood, tested and verified.

    Now, the important point is:

    Genetic drift does not modify any of these parameters.

    The improbability of the functional state remains absolutely the same.

The number of states that the system can reach / test in the given time remains the same. Indeed, it depends only on the variation rate, however computed. Genetic drift only randomly modifies the modalities of random variation, but in no way does it significantly increase the variation rate.

    The role, if any, of NS remains the same.

    So, I really don’t understand all this emphasis on genetic drift.

Random genetic drift is totally irrelevant to our problem of how functional information arises, because it is a random process that bears no relationship to function, and does not modify the probabilistic resources of the system.

    Well, I suppose that this is a verbose way of saying what Dr. Kozulic and yourself had already clearly stated in the OP:

    “In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain.”

    I just wanted to be very clear on the reasons why I so fully agree with you 🙂

  6. Dr JDD says:

Great post, gpuccio – I would have to agree, especially in light of events like the Cambrian explosion, which would require a very rapid mutation rate and the introduction of novel beneficial mutations of a very high magnitude. Even with environmental pressures/changes, no one can postulate an intellectually satisfying naturalistic explanation of this.

    Anyway, putting that aside, I do have a problem with this comparing bacterium mutational rates to humans. I have not given this a huge amount of time (just relating to own experience which may be a foolish and naive thing to do!), so this may be easy to answer, but would be interested to hear an answer. I am not saying this is evidence against these experiments, just asking a question.

When you first engage in PCR, you use enzymes to copy and replicate your DNA of interest. The cheapest and most basic approach in labs, which I used many years ago now, is Taq polymerase (still frequently used). However, you soon learn that not all polymerases are equal. You realise that Taq has a higher error rate, and that there is another ancient polymerase, Pfu, which provides proof-reading ability, thus reducing the error rate dramatically (but it is a little more expensive, so you don’t give it to undergrads unless the work is quite important!).

That raises a point, though: not all polymerases are equal. If there is variation among quite closely related ancient polymerases, how much more among eukaryotic polymerases? Well, there is certainly published work showing that human polymerase, with its proof-reading capabilities, is much less prone to error than other eukaryotic replication systems. By the way, as a side point, if you turn off proof-reading capabilities in mice, you get genomic instability and carcinogenesis… anyway, I digress.
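As a rough illustration of the proof-reading difference described above (the error rates below are assumed placeholders, not measured values; published figures vary by enzyme and assay):

```python
# Expected errors in a PCR product: errors accumulate roughly linearly,
# since each cycle copies every base of the amplicon once.
def expected_errors(error_rate_per_bp, amplicon_bp, cycles):
    return error_rate_per_bp * amplicon_bp * cycles

amplicon = 1_000  # bp
cycles = 30

taq_like = expected_errors(1e-4, amplicon, cycles)  # non-proofreading (assumed rate)
pfu_like = expected_errors(1e-6, amplicon, cycles)  # proofreading (assumed rate)

print(f"Taq-like: ~{taq_like:.1f} errors per product strand")
print(f"Pfu-like: ~{pfu_like:.2f} errors per product strand")
```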

My question is: how do researchers take into account the potential differences in the accuracies of human polymerases (and the associated machinery in transcription) in these experiments that utilise bacteria? Is a direct comparison made, or are there factors used to make the comparison more likely to relate to the actual in vivo scenario? And, if no adjustment is made for improved error rates in humans (assuming this holds true), will these bacterial experiments not overestimate the mutation rates?

Finally, somewhat irrelevant but worth considering: modelling bacteria to relate to advanced multicellular organisms is probably the only way to model this, but it is severely limited. Even in the world of pharmaceutics we can use mice/rats as models, even chimps, and yet cannot fully predict the way drugs will act in humans. Pharma history is littered with drugs that looked safe or very effective in “organisms closely related to humans” (i.e. mice, etc.), yet once they go into the clinic they kill/harm people or show no efficacy. Much of our understanding of even cellular processes is based on animal models, yet there are clear differences in humans, meaning we just do not understand a lot about ourselves and rather assume that what is true for a mouse will hold true for a human. Often true, but often it is more complex than that. Always worth considering.

    Again, I am not attacking this bacterial work, merely making a few comments and questions.

    JD

  7. gpuccio says:

    Dr JDD:

    Thank you for your thoughtful post. We really need more people with biological experience here!

    I think that, when we try to “explain” the evolution of functional sequences, like proteins, the mutation rate can be safely approximated in favour of the darwinian theory: the theory will however fail, and without any possible doubt.

I will try to be clearer. What really counts here is not the mutation rate itself, but the number of states, or sequence configurations, that can actually be reached by the system in the available time.

When Dembski proposed his famous universal probability bound, he set the threshold very high (500 bits of complexity, about 10^150 possible configurations), so that he could exclude any possible random search in the whole universe, even using all quantum states since the big bang as bits for the computation.

    That is really remarkable, because 500 bits is equivalent to the complexity of a 115 AAs sequence (if the target space were made of one single state). Even considering the functional redundancy, we are well beyond that threshold in many complex proteins. For example, in Durston’s famous paper where he analyzes 35 protein families, 12 protein families have functional complexities beyond this universal probability bound, with the highest functional complexity being 2416 bits (Flu PB2).

But I always felt that Dembski was being too generous here. So some time ago I tried to compute a rough threshold more appropriate for a biological system. I considered our planet, over a life span of 5 billion years, as though it had been fully covered by prokaryotes from the beginning of its existence until now, and I tried to compute, roughly, the total number of states which could have been tested by such a system, assuming a mean bacterial genome, a mean reproduction time, and a very generous estimate of the global bacterial population. The result, which may be more or less appropriate, was that 150 bits (10^45) of functional complexity were more than enough to exclude the random generation of a functional sequence over the whole life of our planet.

    Now, that is even more remarkable, because 150 bits is equivalent to about 35 AAs, and in Durston’s paper 29 protein families out of 35 were well beyond that threshold.
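Both conversions above can be checked in a few lines (the planet-scale parameters are gpuccio’s assumptions, reconstructed here as round illustrative figures):

```python
import math

# Bits per amino-acid position: each of 20 amino acids carries log2(20) bits.
bits_per_aa = math.log2(20)
print(f"500 bits ~ {500 / bits_per_aa:.0f} AAs")  # ~116, matching the ~115 quoted
print(f"150 bits ~ {150 / bits_per_aa:.0f} AAs")  # ~35

# Planet-scale state count: population * generations over earth's history.
years = 5e9
gens_per_year = 365 * 48   # ~30-minute bacterial generation time (assumed)
population = 1e30          # global bacterial population (assumed)
states = years * gens_per_year * population
print(f"states tested: ~10^{math.log10(states):.0f} (~{math.log2(states):.0f} bits)")
```

On these round inputs the bound lands near 10^44, or ~146 bits, the same order as the 10^45 / 150-bit figure in the comment.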

    Are we still exaggerating in favour of darwinism? Yes, we certainly are.

First of all, prokaryotic life certainly did not begin 5 billion years ago (which is even more than the earth’s actual age).

Second, the earth was certainly not fully covered by prokaryotes from the beginning of life.

Third, the appearance of new protein families is not restricted to prokaryotes; it goes on in higher beings, up to mammals. And mammals reproduce much more slowly than prokaryotes and, even more importantly, there are far fewer of them on our planet. Therefore, the number of states that can be reached / tested by mammals, or more generally by metazoa, is much smaller than what can be reached / tested by prokaryotes, whatever the mutation rate. And still, new complex functional protein families which never existed before, and are totally unrelated to what existed before, continued to emerge in metazoa, up to mammals.

    So, maybe 150 bits is still too generous as a biological probability bound. After all, both Behe and Axe, starting from bottom up considerations, tend to fix the threshold of what random variation can achieve at 3-5 AAs (about 13-22 bits).

But I feel that I can safely be generous. We win anyway.

    So, let it be 150 bits, for now 🙂

  8. bornagain77 says:

    a few related notes:

    Groundbreaking Genetic Discoveries Challenge Ape to Human Evolutionary Theory – June 17, 2013
    Excerpt: Ultimately, the study results were contradictory to what evolutionists had theorized. Not only were genetic recombination rates markedly low in areas of human-chimp DNA differences (“rearranged” chromosomes), but the rates were much higher in areas of genetic similarity (“collinear” chromosomes). This is the reverse of what evolutionists had predicted.
    “The analysis of the most recent human and chimpanzee recombination maps inferred from genome-wide single-nucleotide polymorphism data,” the scientists explained, “revealed that the standardized recombination rate was significantly lower in rearranged than in collinear chromosomes.”
    Jeffrey Tomkins, a Ph.D. geneticist with the Institute for Creation Research (ICR), told the Christian News Network that these results were “totally backwards” from what evolutionists had predicted, since genetic recombination is “not occurring where it’s supposed to” under current evolutionary theory.
    Dr. Tomkins further emphasized that evolutionists greatly exaggerate the genetic similarities between humans and chimps, and often ignore areas of DNA where major differences do exist.
    “It’s called cherry-picking the data,” he explained. “There are many genetic regions between humans and chimps that are radically different. In fact, humans have many sections of DNA that are missing in chimps and vice versa. Recent research is now showing that the genomes are only 70% similar overall.”,,,
    http://christiannews.net/2013/.....ry-theory/

    Of related note: Richard Dawkins claimed that the FOXP2 gene was among ‘the most compelling evidences’ for establishing that humans evolved from monkeys, yet, as with all the other evidences offered from Darwinists, once the FOXP2 gene was critically analyzed it fell completely apart as proof for human evolution:

    Dawkins Best Evidence (FOXP2 gene) Refuted – video
    http://www.youtube.com/watch?v=IfFZ8lCn5uU

    Human brain evolution: From gene discovery to phenotype discovery – Todd M. Preuss – February 2012
    Excerpt: It is now clear that the genetic differences between humans and chimpanzees are far more extensive than previously thought; their genomes are not 98% or 99% identical.,,,
    ,,our understanding of the relationship between genetic changes and phenotypic changes is tenuous. This is true even for the most intensively studied gene, FOXP2,,
    In part, the difficulty of connecting genes to phenotypes reflects our generally poor knowledge of human phenotypic specializations, as well as the difficulty of interpreting the consequences of genetic changes in species that are not amenable to invasive research.
    http://www.pnas.org/content/10.....9.full.pdf

As well, the primary piece of evidence at the Dover trial trying to establish chimp-human ancestry from SNP (Single Nucleotide Polymorphism) evidence was overturned:

    Dover Revisited: With Beta-Globin Pseudogene Now Found to Be Functional, an Icon of the “Junk DNA” Argument Bites the Dust – Casey Luskin – April 23, 2013
    http://www.evolutionnews.org/2.....71421.html

It is also important to remember that the theory of neutral mutations (and the abandonment of Selection) was forced upon Darwinists by the math itself, not by the empirical evidence:

    A Short History Of The Junk DNA Argument Of Darwinists
    Excerpt: Kimura’s Quandary
    Excerpt: Kimura realized that Haldane was correct,,, He developed his neutral theory in response to this overwhelming evolutionary problem. Paradoxically, his theory led him to believe that most mutations are unselectable, and therefore,,, most ‘evolution’ must be independent of selection! Because he was totally committed to the primary axiom (neo-Darwinism), Kimura apparently never considered his cost arguments could most rationally be used to argue against the Axiom’s (neo-Darwinism’s) very validity.
    John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 161 – 162
    https://docs.google.com/document/d/14-TXfGxPu-3YeCHtLmxTmL4UZN90Odt135c59yTIFsw/edit

    A graph featuring ‘Kimura’s Distribution’ being ‘properly used’ is shown in the following video:

    Evolution Vs Genetic Entropy – Andy McIntosh – video
    https://vimeo.com/91162565

    Moreover, the empirical evidence itself (and common sense) tell us that the overwhelming majority of mutations are deleterious, not neutral, in their effects,,

    Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 – May 2013
    Excerpt: It is almost universally acknowledged that beneficial mutations are rare compared to deleterious mutations [1–10].,, It appears that beneficial mutations may be too rare to actually allow the accurate measurement of how rare they are [11].
    1. Kibota T, Lynch M (1996) Estimate of the genomic mutation rate deleterious to overall fitness in E. coli . Nature 381:694–696.
    2. Charlesworth B, Charlesworth D (1998) Some evolutionary consequences of deleterious mutations. Genetica 103: 3–19.
    3. Elena S, et al (1998) Distribution of fitness effects caused by random insertion mutations in Escherichia coli. Genetica 102/103: 349–358.
    4. Gerrish P, Lenski R N (1998) The fate of competing beneficial mutations in an asexual population. Genetica 102/103:127–144.
    5. Crow J (2000) The origins, patterns, and implications of human spontaneous mutation. Nature Reviews 1:40–47.
    6. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501.
    7. Imhof M, Schlotterer C (2001) Fitness effects of advantageous mutations in evolving Escherichia coli populations. Proc Natl Acad Sci USA 98:1113–1117.
    8. Orr H (2003) The distribution of fitness effects among beneficial mutations. Genetics 163: 1519–1526.
    9. Keightley P, Lynch M (2003) Toward a realistic model of mutations affecting fitness. Evolution 57:683–685.
    10. Barrett R, et al (2006) The distribution of beneficial mutation effects under strong selection. Genetics 174:2071–2079.
    11. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501.
    http://www.worldscientific.com.....08728_0006

    “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
    Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.
    http://behe.uncommondescent.co.....evolution/

    Unexpectedly small effects of mutations in bacteria bring new perspectives – November 2010
    Excerpt: Most mutations in the genes of the Salmonella bacterium have a surprisingly small negative impact on bacterial fitness. And this is the case regardless whether they lead to changes in the bacterial proteins or not.,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
    http://www.physorg.com/news/20.....teria.html

  9. 9
    bornagain77 says:

    Moreover, the vast majority of mutations are now known to be ‘non-random’, i.e. directed. Mutations are not almost all random as is presupposed in neutral theory:

    Revisiting the Central Dogma in the 21st Century – James A. Shapiro – 2009
    Excerpt (Page 12): Underlying the central dogma and conventional views of genome evolution was the idea that the genome is a stable structure that changes rarely and accidentally by chemical fluctuations (106) or replication errors. This view has had to change with the realization that maintenance of genome stability is an active cellular function and the discovery of numerous dedicated biochemical systems for restructuring DNA molecules.(107–110) Genetic change is almost always the result of cellular action on the genome. These natural processes are analogous to human genetic engineering,,, (Page 14) Genome change arises as a consequence of natural genetic engineering, not from accidents. Replication errors and DNA damage are subject to cell surveillance and correction. When DNA damage correction does produce novel genetic structures, natural genetic engineering functions, such as mutator polymerases and nonhomologous end-joining complexes, are involved. Realizing that DNA change is a biochemical process means that it is subject to regulation like other cellular activities. Thus, we expect to see genome change occurring in response to different stimuli (Table 1) and operating nonrandomly throughout the genome, guided by various types of intermolecular contacts (Table 1 of Ref. 112).
    http://shapiro.bsd.uchicago.ed.....0Dogma.pdf

    New Research Elucidates Directed Mutation Mechanisms – Cornelius Hunter – January 7, 2013
    Excerpt: mutations don’t occur randomly in the genome, but rather in the genes where they can help to address the challenge. But there is more. The gene’s single stranded DNA has certain coils and loops which expose only some of the gene’s nucleotides to mutation. So not only are certain genes targeted for mutation, but certain nucleotides within those genes are targeted in what is referred to as directed mutations.,,,
    These findings contradict evolution’s prediction that mutations are random with respect to need and sometimes just happen to occur in the right place at the right time.,,,
    http://darwins-god.blogspot.co.....ected.html

    WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations – Kevin Kelly – 2014
    Excerpt: What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.
    On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
    http://edge.org/response-detail/25264

    Moreover, The whole idea of neutral mutations simply goes against the ‘common sense’ of theoretical concerns:

    “Moreover, there is strong theoretical reasons for believing there is no truly neutral nucleotide positions. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding, and nucleosome building. If a nucleotide carries absolutely no (useful) information, it is, by definition, slightly deleterious, as it slows cell replication and wastes energy.,, Therefore, there is no way to change any given site without some biological effect, no matter how subtle.”
    – John Sanford – Genetic Entropy and The Mystery of The Genome – pg. 21 – Inventor of the ‘Gene Gun’

    Moreover, information, which is completely transcendent of energy and mass, is what is constraining the cell to be in such a massive state of thermodynamic disequilibrium. It is not merely the chemical bonding (in fact the chemical bonding is held to be ‘passive’ to thermodynamic concerns).

    Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH – Dr Andy C. McIntosh is Professor of Thermodynamics and Combustion Theory at the University of Leeds (professor being the highest teaching/research rank in the U.K. university hierarchy).
    Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
    http://journals.witpress.com/paperinfo.asp?pid=420

    Information and Thermodynamics in Living Systems – Andy C. McIntosh – May 2013
    Excerpt: The third view then that we have proposed in this paper is the top down approach. In this paradigm, the information is non-material and constrains the local thermodynamics to be in a non-equilibrium state of raised free energy. It is the information which is the active ingredient, and the matter and energy are passive to the laws of thermodynamics within the system.
    As a consequence of this approach, we have developed in this paper some suggested principles of information exchange which have some parallels with the laws of thermodynamics which undergird this approach.,,,
    http://www.worldscientific.com.....08728_0008

    Quantum Information/Entanglement In DNA – short video
    http://www.metacafe.com/watch/5936605/

    Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
    Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
    http://www.dailygalaxy.com/my_.....ave-t.html

    Quantum entanglement holds together life’s blueprint – 2010
    Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
    http://neshealthblog.wordpress.....blueprint/

    It is very interesting to note that quantum entanglement, which demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (i.e. it is ‘non-local’), should be found in molecular biology on such a massive scale, for how can the quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) ’cause’ when the quantum entanglement ‘effect’ falsified material particles as its own ‘causation’ in the first place? (Bell, A. Aspect, A. Zeilinger) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place!

    Verse and Music:

    John 1:1-4
    In the beginning was the Word, and the Word was with God, and the Word was God.
    The same was in the beginning with God.
    All things were made by Him, and without Him was not anything made that was made.
    In Him was life, and that life was the Light of men.

    Evanescence – The Other Side (Lyric Video)
    http://www.vevo.com/watch/evan.....tantsearch

  10. 10
    bornagain77 says:

    to somewhat reiterate: The Neutral Theory requires that a large percentage of the genome be considered junk. Yet ENCODE 2012 decisively overturned that notion:

    Junk No More: ENCODE Project Nature Paper Finds “Biochemical Functions for 80% of the Genome” – Casey Luskin – September 5, 2012
    Excerpt: The Discover Magazine article further explains that the rest of the 20% of the genome is likely to have function as well:
    “And what’s in the remaining 20 percent? Possibly not junk either, according to Ewan Birney, the project’s Lead Analysis Coordinator and self-described “cat-herder-in-chief”. He explains that ENCODE only (!) looked at 147 types of cells, and the human body has a few thousand. A given part of the genome might control a gene in one cell type, but not others. If every cell is included, functions may emerge for the phantom proportion. “It’s likely that 80 percent will go to 100 percent,” says Birney. “We don’t really have any large chunks of redundant DNA. This metaphor of junk isn’t that useful.””
    http://www.evolutionnews.org/2.....64001.html

    Scientists go deeper into DNA (Video report) (Junk No More) – Sept. 2012
    http://bcove.me/26vjjl5a

    Quote from preceding video:
    “It’s just been an incredible surprise for me. You say, ‘I bet it’s going to be complicated’, and then you are faced with it and you are like ‘My God, that is mind blowing.’”
    Ewan Birney – senior scientist – ENCODE 2012

    I believe Moran himself raised a big stink with some of the ENCODE scientists, trying to tell them that they were wrong in their conclusion that the vast majority of DNA is functional. But as the following paper points out, the motivation of Darwinists for so staunchly resisting the ENCODE findings was driven primarily by philosophical concerns, not empirical concerns.

    The extent of functionality in the human genome – John S Mattick and Marcel E Dinger – July 2013
    Excerpt of abstract: Finally, we suggest that resistance to these (ENCODE) findings is further motivated in some quarters by the use of the dubious concept of junk DNA as evidence against intelligent design.
    http://link.springer.com/artic.....ltext.html

  11. 11
    Dr JDD says:

    http://www.evolutionnews.org/2.....84031.html

    BA77, you might be interested in this excellent article. Note – it also proves that science can advance much better if we assume that what is present in nature has function (due to design).

  12. 12
    bornagain77 says:

    Whale Evolution Vs. Population Genetics – Richard Sternberg PhD. in Evolutionary Biology – video
    http://www.youtube.com/watch?v=85kThFEDi8o

  13. 13
    Charles says:

    wd400@4 & vjtorely

    5e6 bases * a mutation rate of 1e-10 per base * 60 000 generations gives a prediction of 30 mutations.

    Your calculation with all units expressed explicitly is:

    5×10^6 basepairs/generation * 1×10^-10 mutations/basepair/generation * 6×10^4 generations

    And we see that 10^6 * 10^-10 * 10^4 all cancel => 1; and 5×6 = 30; and 30 x 1 = 30

    So I understand how you get “30”. But I don’t see how you get “mutations”:

    basepairs/generation * mutations/basepair/generation * generations

    which resolves to:

    basepairs/generation * mutations-generation/basepair * generations

    basepairs/generation cancels -generations/basepair leaving “mutation-generations”, not simply “mutations”

    This matters because if the units are incorrect, then the calculation is incorrect, or incorrectly explained or stated.

    So
    1) is there something else assumed in your calculation that cancels “generations” from the final result? or
    2) is there some interpretation that “mutation-generations” equates to just “mutations”?

    I’m just trying to understand the units on your calculation, I’m not arguing, I’m asking.

  14. 14
    bornagain77 says:

    Thanks for the link Dr JDD!

  15. 15
    wd400 says:

    Charles,

    Using your units, I get mutations/generation; in any case I don’t think the genome size is a rate per generation. The mutation rate per individual genome should be in bases per generation. So, genome size (bp) x nucleotide mutation rate (mutations per bp per gen) gives

    bases * [(mutations/bases)/gen] = bases/gen

    Multiply that by several generations and our units are indeed mutations.

    JDD,

    The mutation rate is generally much higher in animals than in prokaryotes, but that has precisely no relevance to the questions VJ is raising, since we have observed mutation rates to form the basis of these calculations.

  16. 16
    Charles says:

    wd400 @15

    in any case I don’t think the genome size is a rate per generation.

    Ok, basepairs/genome then.

    I think my mistake was mutations/basepair/generation => mutations/(basepair/generation)
    whereas you intended (mutations/basepair)/generation

    Though I think you have a typo in that:
    bases * [(mutations/bases)/gen] = mutations/gen not “bases/gen”

    and mutations/gen when multiplied by generations indeed yields mutations

    But if the genome size is “basepairs/genome”, then your calculation yields 30 “mutations/genome”

    which would be interpreted as meaning after 60,000 generations a genome would have acquired 30 mutations (when compared against a genome at generation 1)

    You have a similar implication when you wrote “In a gene 1200 bases long you’d expect … gives an expectation of ~6 changes per gene”

    I’m assuming for the purposes of the calculation, base-pairs/gene or base-pairs/genome are homologous concepts, yes?

  17. 17
    wd400 says:

    Yes. Under neutrality the population fixation rate (i.e. the rate at which new variants take over a population) is equal to the individual mutation rate; that’s the point at the heart of this whole thing.
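    [That equality is Kimura’s classic result: each generation 2Nμ new neutral mutations enter the population, each fixes with probability 1/(2N), so the substitution rate is 2Nμ × 1/(2N) = μ, independent of population size. The 1/(2N) fixation probability can be checked with a minimal Wright-Fisher simulation; all parameter values below are illustrative:]

```python
import random

def wright_fisher_fixation(pop_size=50, trials=10000, seed=1):
    """Fraction of trials in which a single new neutral allele
    (1 copy out of 2N gene copies) drifts to fixation; theory
    says this should approach 1/(2N)."""
    rng = random.Random(seed)
    two_n = 2 * pop_size
    fixed = 0
    for _ in range(trials):
        count = 1  # one new mutant copy
        while 0 < count < two_n:
            p = count / two_n
            # next generation: each of the 2N copies is drawn
            # independently from the current allele frequency
            count = sum(1 for _ in range(two_n) if rng.random() < p)
        fixed += count == two_n
    return fixed / trials

print(wright_fisher_fixation())  # ≈ 1/(2N) = 1/100
```

    [Multiplying that ~1/(2N) chance by the 2Nμ new mutations arising per generation gives back μ, which is why the expected number of fixations is simply mutation rate × genome size × generations.]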

  18. 18
    wd400 says:

    Hold on, I think that’s not quite what you are asking.

    If you want to know how many mutations fix in a gene, and you know the per-nucleotide mutation rate, then you need to scale up to the size of the gene. If you want to know how many mutations fix in the genome, then you need to scale up further to account for the number of nucleotides in the genome.

  19. 19
    Charles says:

    Ok, so what did you intend the complete units of “5e6 bases” to be?

    per gene, per genome, per pizza, what?

    What back of the envelope calculation were you illustrating?

  20. 20
    wd400 says:

    The units are mutations. The expected number of mutations to fix in the population over that many generations, given the known genome size and mutation rate.
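    [The back-of-the-envelope number wd400 is defending can be reproduced directly, with the figures as stated in the thread:]

```python
# Under neutrality the fixation rate equals the mutation rate, so the
# expected number of fixed mutations = genome size x rate x generations.
genome_size = 5e6       # bp, roughly the E. coli genome
mu = 1e-10              # mutations per bp per generation
generations = 60_000    # Lenski's long-term experiment to date

expected_fixations = genome_size * mu * generations
print(expected_fixations)  # ≈ 30 mutations fixed across the genome
```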

  21. 21
    Charles says:

    WD400 @ 20

    So in your post #4, you meant “5e6 bases per genome” and the calculation yields “mutations per genome“, right?

  22. 22
    wd400 says:

    Yes. That many mutations are expected to fix across the genome.

  23. 23
    vjtorley says:

    Hi wd400,

    Thank you for your post. I had anticipated your criticism:

    The E. coli genome is more than 600 times smaller than the human genome and has a mutation rate ~ 2 orders of magnitude lower (in part because there are multiple cell divisions per generation in animals).

    I was waiting for Professor Moran to come out and say something like that. To my surprise, he didn’t.

    I was also waiting for him to explain the pattern of fixation we observe, where multiple mutations (up to 21) have become fixed in thousands of different proteins. Alas, no explanation was forthcoming.

    I think the ball is still in Professor Moran’s court.

  24. 24
    wd400 says:

    I had anticipated your criticism

    Then why did you not correct your error in conflating the E. coli results and human-chimp differences?

  25. 25
    wd400 says:

    Just to clarify how at sea you are here. You say in reference to the PLOS Biol paper:

    “What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years.

    In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain.”

    and (after reading my comment?)

    I was also waiting for him to explain the pattern of fixation we observe, where multiple mutations (up to 21) have become fixed in thousands of different proteins. Alas, no explanation was forthcoming.

    But we know the human mutation rate (mu) is around 1E-8 per base, and the average length of a protein-coding gene in the human genome is ~1200 bp (with much variance). Under neutrality we’d expect

    1E-8 * 1200 bp * 240,000 gen * 2 lineages = ~6 differences per gene.

    In fact we see on average substantially fewer differences per gene (though some have more; that’s the distribution of gene sizes and the Poisson distribution for you). That effect is the result of natural selection, since these are genic sequences and we expect mutations that change amino acids to be selected against.

    And lo, the paper actually breaks the mutations down into those that change the amino acid in question (non-synonymous mutations) and those that don’t. Since ~1/4 of possible nucleotide mutations are synonymous, you’d expect the same 3:1 ratio in differences between humans and chimps in the absence of selection. In fact, more than half of the observed differences are synonymous, the effect of non-synonymous mutations being removed by selection.

    So, there are not only fewer differences in the observed data than the neutral theory would predict, but the difference can be explained by the action of natural selection on non-synonymous mutations.

    So much for

    In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain.
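    [wd400’s neutral expectation for a single gene can be reproduced with the numbers given in the comment:]

```python
mu = 1e-8              # human per-base mutation rate per generation
gene_len = 1200        # bp, average protein-coding gene (much variance)
generations = 240_000  # ~5 Myr of divergence
lineages = 2           # mutations accumulate on both the human and chimp branches

expected_diffs = mu * gene_len * generations * lineages
print(expected_diffs)  # ≈ 5.76, the "~6 differences per gene" expectation
```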

  26. 26
    Mung says:

    Thank you for your thoughtful post. We really need more people with biological experience here!

    I think I’ve been alive long enough to have acquired quite a bit of biological experience!

  27. 27
    Joe says:

    What would Freud say about this fixation with fixation? 🙂

    But anyway, has anyone ever validated neutral theory’s equations? Has it been done with prokaryotes? Metazoans? How large were the populations?

    Haldane had 300 generations for a beneficial mutation to become fixed, yet that didn’t pan out with fruit flies.

  28. 28
    Mung says:

    Joe, it must be true. The maths declares it!

  29. 29
    Mung says:

    I have a question for wd400 (not intended to exclude others):

    Consider in a diploid organism a single locus with just two alleles, A and a. Thus there are three different possible genotypes in the population: AA, Aa and aa. We will make the following assumptions:

    We draw several very important conclusions from this result. First, after one generation the genotype frequencies are completely determined by the initial allele frequencies.

    – Population Biology: Concepts and Models

    Given that this is a post on fixation, what’s the difference?

    What’s mutation rate got to do with anything?
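    [For readers following along: the textbook passage Mung quotes is the Hardy-Weinberg principle. Under random mating with no mutation, selection, or drift, the genotype frequencies after one generation are p², 2pq and q², determined entirely by the allele frequencies. A minimal sketch, with an illustrative allele frequency:]

```python
def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) after one generation of
    random mating, given frequency p of allele A (q = 1 - p)."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

print(hardy_weinberg(0.7))  # ≈ (0.49, 0.42, 0.09)
```

    [Note there is no mutation term here: Hardy-Weinberg describes one-generation genotype frequencies, while fixation concerns the long-run fate of alleles under drift and recurrent mutation.]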

  30. 30
    wd400 says:

    This text sounds like it is related to Hardy-Weinberg, and isn’t really relevant to this post, unless you want to explain your question.

  31. 31
    Gordon Davisson says:

    gpuccio:

    Random genetic drift is totally irrelevant to our problem of how functional information arises, because it is a random process that bears no relationship to function, and does not modify the probabilistic resources of the system

    I (at least mostly) agree that drift is irrelevant to the question of the origin of functional information, but I don’t think you can really complain about that here, because it’s not the question that was asked. Or at least, it might be what VJTorley meant to ask, but it’s not what Larry took the question to be about. If you want an answer about functional information, you must be clear that that is what you are asking about.

    A big part of the problem is that when someone talks about large-scale (macro) evolution, they might be talking about any of these quite different things:

    * Large quantities of genetic change. This seems to be what Larry Moran took the question as being about, and if you ask about the 22.4 million mutations in the human lineage this is what you’re asking about. Since most genetic change is neutral, any answer that doesn’t emphasize neutral evolution is necessarily wrong.

    * Significant phenotypic changes. In this case, the small fraction of genetic changes that have significant phenotypic effects are what matters, and since they’re much more likely to influence reproductive success, selection is much more important here. I think this is what VJTorley originally meant to ask about, but I’m not completely clear…

    * Reproductive isolation. If you ask about speciation, this is technically what you’re asking about. But almost nobody on the antievolution side actually cares much about this, which leads to much miscommunication and frustration…

    * New genes. Humans do have at least a few novel genes — “Recent de novo origin of human protein-coding genes” by David G. Knowles and Aoife McLysaght in Genome Research 2009. 19: 1752-1759 identifies three novel human genes, but note that none of them corresponds to much genetic change (they are all closely similar to noncoding regions in chimp etc. genomes), and none has any known phenotypic effect (although one was found to be up-regulated in chronic lymphocytic leukemia).

    * New functions/information/complexity/etc. If you’re interested in something like this (and the rest of your comment suggests you are), you need to be very very clear on exactly what you’re asking about, because different people mean different things by them. For example, consider the citrate-using ability that Lenski’s bacteria gained. Mike Behe argues that this is not a novel functional element (FCT), because it just put a copy of an existing promoter with a copy of an existing gene. Larry Moran’s take on it, though, is that putting existing elements together in a novel combination to get a novel function clearly is a gain of function. And I’d have to agree with him; isn’t putting existing elements together in new combinations how most human innovation happens?

    Even on the ID side, there are a large number of different definitions of functions/information/complexity/etc. There’s Dembski’s CSI (multiple versions of it), other quite different things called CSI, Behe’s FCTs, Kirk Durston’s Fits, your dFCSI, kairosfocus’ FSCO/I, … Frankly, for most/all of these, you probably won’t get much interest from the evolutionist side.

    I won’t get into why evolutionists don’t talk much about these, because it wouldn’t be very productive. From the IDist perspective, it’s clearly because we’re ducking the really important questions; from the evolutionist perspective, it’s because we don’t really care where you’ve planted the latest set of goal posts. And then someone will insult someone’s mother and it’ll turn into a screaming match, and the only thing anyone will learn is what a bunch of jerks everyone on the other side is…

    I will, however, point out that those on the evolution side are interested in the origin of complexity, just not in the same terms as IDists. Since we’re talking about neutral vs. adaptive (selected) evolution, I’ll point to PZ Myers’ article “Complexity is not usually the product of selection” (although note that the headline is misleading — he’s really just arguing that complexity isn’t always the product of selection).
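    [As a rough consistency check on the 22.4 million figure from the original post — note the 20-year generation time and the observed per-genome mutation count below are my assumptions, not figures from this thread:]

```python
fixed_mutations = 22.4e6   # fixed differences claimed for the human lineage
years = 5e6                # since the human-chimp split
gen_time = 20              # years per generation (assumed)

per_generation = fixed_mutations / (years / gen_time)
print(per_generation)  # ≈ 90 fixations per generation
```

    [That ~90 per generation is the same order of magnitude as the roughly 60–100 new mutations observed per human genome per generation, which is just the neutral-theory equality (fixation rate ≈ mutation rate) applied at genome scale.]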

  32. 32
    scordova says:

    VJTorley,

    I would attack neutral theory from a different angle instead of fixation. There has been too much internet fixation on gene fixation!

    If I get around to it I may post, but briefly, neutral theory will grate against polyconstrained DNA.

    Neutral theory is a frienemy of ID. It’s not all bad, and some of it is very good, dare I say, over-the-top good.

    Neutral theory emerged out of the realization that Natural Selection cannot influence most of the genome for construction as a matter of principle. i.e. You can’t select individually for 4 giga base pairs when you only have a population of 10,000 individuals. Impossible, there is too much selection interference.

    I don’t believe fixation is a major flaw in neutral theory, the flaws are:

    1. construction of polyconstrained or IC features
    2. maintenance of conserved regions

    The major points in favor of neutral theory:

    1. it describes selection’s absence in the present and past (but absence of selection doesn’t mean random walks can make designs; in fact neutral theorists like Nei would likely argue the illusion of design is an accident of our perception, that we’re just mistaken like seeing faces in the clouds!)

    2. it demonstrates via math that selection as a matter of principle is mostly absent from the genome and morphological features except for vital functions

    3. it restates Haldane’s dilemma in an alternative way

    4. it shows protein polymorphism is not explained by selection and heterozygous advantage is too costly to be a common mechanism of maintaining competing alleles in a population

  33. 33
    PaV says:

    wd400:

    But, given that we know the human mutation rate (mu) is around 1E-8 per base, and the average length of a protein coding gene in human genome is ~1200 bp (with much variance). Under neutrality we’d expect

    1E-8 * 1200 bp * 240,000 gen * 2 lineages = ~6 differences per gene.

    In fact we see on average substantially fewer differences per gene (though some have more; that’s the distribution of gene sizes and the Poisson distribution for you). That effect is the result of natural selection, since these are genic sequences and we expect mutations that change amino acids to be selected against.

    And lo, the paper actually breaks the mutations down into those that change the amino acid in question (non-synonymous mutations) and those that don’t. Since ~1/4 of possible nucleotide mutations are synonymous, you’d expect the same 3:1 ratio in differences between humans and chimps in the absence of selection. In fact, more than half of the observed differences are synonymous, the effect of non-synonymous mutations being removed by selection.

    First of all, in your calculations you’re using “two lineages”. That’s because NGD is occurring BOTH in chimps and in humans as time moves forward. As far as “human evolution” is concerned, however, what happens in “chimps” after they split off from one another is completely unimportant and unrelated to ‘human evolution.’ This means that as far as “human evolution” is concerned, we see 3 differences per gene from the MRCA. Maybe you dispute this, but I don’t see any other way of viewing the numbers.

    Secondly, when you say this: “since these are genic sequences and we expect mutations that change amino acids to be selected against,” we’re seeing the real function of NS in all of this: to ELIMINATE non-synonymous mutations; i.e., when a non-neutral mutation occurs, it is effectively eliminated. IOW, we haven’t seen “positive selection” yet. Everything is EITHER ‘neutral’, or “deleterious” and hence removed. As you say in the next paragraph: this lowered 3:1 ratio is “the effect of non-synonymous mutations being removed by selection.”
    Based on your numbers, then, we have 3 differences per gene having occurred since the lineages split, with one-half of these differences being “synonymous.” That is, in almost 5 million years of “neutral drift with NS,” the human genome has acquired 1.5 differences per gene. You give the size of the average gene as 1200 bp. If we assume a smaller average gene size of 300 a.a. (900 bp), then the average difference becomes 3 × 3/4 = 2.25. Half of this is roughly ONE a.a. difference in a gene that is 300 a.a.s long. Can you really say that this is the cause of the chimp/human divide? Is this really the source of the macroevolution that occurred?
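Running that rescaling through the arithmetic directly (a sketch only, using the illustrative figures from this exchange; note that 3 × 3/4 works out to 2.25):

```python
# Rescaling the per-gene expectation to a smaller gene, per the comment above.
per_lineage = 6.0 / 2      # ~3 differences per 1200 bp gene along the human lineage
small_gene_bp = 300 * 3    # a 300-a.a. gene is 900 bp

scaled = per_lineage * small_gene_bp / 1200   # 3 * 3/4 = 2.25 differences per gene
synonymous_fraction = 0.5  # roughly half the observed differences are synonymous
nonsyn = scaled * (1 - synonymous_fraction)   # roughly one a.a. change per gene

print(scaled, nonsyn)
```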

    I simply can’t take these numbers seriously. NS explains “stasis”; it doesn’t explain “macroevolution.” And purely NGD, as gpuccio points out, really doesn’t ‘explain’ much at all.

    This is why ID and UD exist.

    As to the bacterial calculations, it is known that the mutation rate of bacteria can become 1,000 times greater than normal. Apparently there is controversy over SIM (stress-induced mutagenesis), but I must say it is really odd to think that the figure for the bacterial mutation rate is two orders of magnitude less than that of mammals, when all one hears of is the great mutational prowess of bacteria and how quickly they can develop resistance to drugs.

  34.
    wd400 says:

    PaV,

    I’ve said this to you approximately a hundred times; this will be the last time:

    Accepting that neutral theory explains most among-genome differences doesn’t require us to ditch natural selection for some differences. I don’t know why you find this concept so difficult to grasp, but I’m really not interested in repeating myself again.

    but I must say that it is really odd to think that the figure for the bacterial mutation rate is two orders of magnitude less than that of mammals when all one hears is the great mutational prowess of bacteria, and how they so quickly can develop resistance to drugs.

    In culture, you have about 1 billion E. coli cells (with their ~5 million bp genome) per mL. You are not going to spend a lot of time waiting for mutations, even with a low mutation rate.
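The point about mutation supply in culture can be made concrete with a quick sketch; the cell density and genome size are the round numbers from the comment:

```python
# New mutations arising per mL of culture per generation,
# using the round figures quoted above.
cells_per_ml = 1e9   # ~1 billion E. coli cells per mL
genome_bp = 5e6      # ~5 million bp genome
mu = 1e-10           # mutations per bp per generation

new_mutations_per_ml = cells_per_ml * genome_bp * mu
print(new_mutations_per_ml)  # ~500,000 new mutations per mL every generation
```

So even at a per-base rate two orders of magnitude below the mammalian one, a single mL of culture samples essentially every site in the genome many times over each generation.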

  35.
    gpuccio says:

    Gordon Davisson:

    Thank you for your very interesting post.

    You are essentially right, I am interested in functional complexity, not in how many neutral mutations may have happened or not happened.

    As you may know, I have a precise definition of functional information, which is essentially the same as Durston’s fits. I believe that all the definitions you refer to are essentially slightly different ways of referring to the same concept: how much information is necessary to implement a function. I am preparing a post about the concept of functional information.

    The case of Lenski is rather clear. Putting existing elements together in new combinations is certainly new functional information, but it is not necessarily complex. Reactivating a function which is already in the genome can require only a few bits of variation, and that is not a complex transition. That’s what happened in the Lenski case.

    On the contrary, generating a new functional protein superfamily is certainly a complex transition. That’s where dFSCI can be safely measured, as Durston has shown.

    Regarding humans and chimps, the problem is simple: they are very different, phenotypically and functionally. We still don’t understand what genomic differences may be responsible for that, whether at the protein gene level or, more likely, at the regulatory level. As soon as we understand well what the sequence differences are that are responsible, at least in part, for the functional acquisitions, we will be able to measure the functional complexity of the transition. I am rather confident that it will be very high.

    However, the simple concept is that a complex functional transition cannot be explained by neutral variation. As I have tried to argue, neutral variation is irrelevant from that point of view.

    NS can’t do the trick either, but for other reasons. But that’s another story…

  36.
    PaV says:

    wd400:

    Accepting that neutral theory explains most among-genome differences doesn’t require us to ditch natural selection for some differences. I don’t know why you find this concept so difficult to grasp, but I’m really not interested in repeating myself again.

    You can repeat this statement as often as you like—a million times—and, yet, it has absolutely NO explanatory power. It explains nothing—except your “belief” in “evolution” via discrete, material changes.

    If NGD, on its own, can only produce ONE a.a. change per ‘gene’ over 5 million years, then, what has NS done?

    Or, to put it another way: if you ascribe the “difference” between ‘chimp’ genes and ‘human’ genes to NGD, then how did NS affect anything except to keep the non-synonymous/synonymous ratio of a.a. substitutions below 3:1?

    Tell me, if NS IS WORKING on mutations, then they have to be the same mutations that constitute NGD. There aren’t two separate sets of mutations: one for NGD, and one for NS.

    When you compare genomes, you’re looking for a.a. differences, right? Well, which ones were “changed” because of NGD, and which ones from NS? Can you tell me that?

    Of course you can’t. You simply say: “Just because NGD gives us the differences between lineages doesn’t mean that NS isn’t working.” OK. What is it telling us?

    You say that NGD explains MOST AMONG-GENOME differences. OK. Then that means that the average 1 a.a. difference between chimps and humans may or may not be due to NGD. This statement only has merit if there are MORE differences between lineages than NGD would predict, because in that case it would be very simple to assume that NS has brought about the excess.

    In the meantime, you satisfy yourself in saying that “somewhere in all of those changes—minuscule as they are–NS was at work.” It doesn’t satisfy me. Sorry. You’ll have to do better.

    In the end you’re left saying this: when we examine the chimp and human lineages, we find that the COMBINATION of NGD and NS gives us—after 5 MILLION YEARS OF PUTATIVE EVOLUTION!!!—an average of 1 a.a. difference in a protein consisting of 300 a.a.s. Really? Should I take this seriously? Why?

    Just because you say that “neutral drift” doesn’t preclude NS, isn’t enough to make a logical case for how these mechanisms could POSSIBLY work. Wave your hands all you want, I’m simply not impressed.

    P.S. I noticed you didn’t attack my numbers. So, I guess, you’re going to have to live with them.

  37.
    wd400 says:

    There are many ways to test for positive natural selection from genetic data. Instead of writing the ALL CAPS rants about what you think you know, you should learn some population genetics.

  38.
    PaV says:

    wd400:

    There are many ways to test for positive natural selection from genetic data. Instead of writing the ALL CAPS rants about what you think you know, you should learn some population genetics.

    Are you saying, then, that having ‘identified’ these ‘changes’ attributed to NS, the total number of differences between present-day humans and their MRCA is more than 3 b.p.s? I don’t see how you can come up with “more” differences than a comparison of genomes gives you. All you can do, as I suggested above, is come up with what proportion of the already-determined differences is due to NGD and what proportion to NS. But, again, this adds nothing to the differences shown by genome comparisons.

    So, with NS adding nothing to the total amount of differences between lineages, using a smaller b.p. count for an average protein, and also taking into account synonymous changes of b.p.s, we end up with all of this amounting to but a “single” amino acid change per gene between the two lineages. This extremely small amount of change cannot possibly account for the differences between chimps and humans.

    Again, you have not disputed the numbers. If you, then, don’t dispute the numbers, you must then explain how this value of a single a.a. difference per gene between chimp and human lineages could possibly explain the human/chimp divide.

    Lacking this explanation, it seems to me that you only end up with lots of equations, but no conclusions. Sorry, but the onus is upon you. If Darwinism/evolutionary biology is to be believed, then credible numbers have to be given. And, if the numbers are not convincing, which they are not, then the underlying understanding of macroevolution should be rejected.

    As to population genetics, I’ve read my share, and forgotten a lot of it. I don’t work with it every day, or teach it. But I have accessed and own my share of population genetics texts. And I look at their equations, and I look at their numbers, and none of it adds up. If it did, then I would be an evolutionary biologist ally. But it doesn’t, so I’m a foe.

  39.
    wd400 says:

    I’m sorry, your problem is now that the observed genetic differences between humans and chimps can’t be sufficient to explain other differences between these species? You don’t just deny evolutionary biology but also genetics?

  40.
    wd400 says:

    (with foes like these…)

  41.
    wd400 says:

    I have to say, I find it extraordinary that neither VJ Torley nor any of his fellow travellers have acknowledged the errors that form the basis of this post.

    Is it that hard to say you, and I guess Dr. Kozulic, are wrong about this?

  42.
    PaV says:

    wd400:

    I’m sorry, your problem is now that the observed genetic differences between humans and chimps can’t be sufficient to explain other differences between these species?

    Here’s this from my first post up above:

    Half of this is ONE a.a. difference in a gene that is 300 a.a.s long. Can you really say that this is the cause of the chimp/human divide? Is this really the source of the macroevolution that occurred?

    I simply can’t take these numbers seriously. NS explains “stasis”; it doesn’t explain “macroevolution.” And purely NGD, as gpuccio points out, really doesn’t ‘explain’ much at all.

    This is why ID and UD exist.

    It’s been my complaint for about ten years—not just ‘now.’

    You don’t just deny evolutionary biology but also genetics?

    You’ve got it entirely wrong. What I’m demonstrating—not just saying—is that EITHER one is forced to deny evolutionary biology, OR, one is forced to deny population genetics.

    You can’t have it both ways. The numbers don’t add up—plain and simple.

  43.
    wd400 says:

    Half of this is ONE a.a. difference in a gene that is 300 a.a.s long.

    Remember, this is the observed difference between humans and chimps. Are you really claiming the observed genetic differences aren’t enough?

  44.
    PaV says:

    wd400:

    As I have already posted, I would never have dreamed that the per-base mutation rate for bacteria would be two orders of magnitude lower than for most mammals and eukaryotes. But that seems to be the case. And, yes, with large population sizes, they can afford to have this lower rate.

    I was always given the impression that they had both: larger population sizes, and a higher mutation rate.

    That might be why vjtorley used a 10^-8 figure instead of 10^-10. And, yes, the bacterial genome is two orders of magnitude smaller than mammalian genomes, so that, taken together, that’s four orders of magnitude, giving you 20 mutations instead of 20,000. Point conceded.
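The concession above can be expressed as a ratio of per-genome, per-generation mutation rates. This is only a sketch: the genome sizes below are rough values implied by the comment (a genome two orders of magnitude larger than E. coli’s), not measured figures.

```python
# Comparing the per-genome mutation rate the original figures imply with the
# corrected bacterial figures. Genome sizes are rough values for illustration.
mu_assumed, genome_assumed = 1e-8, 5e8   # rate and genome size as originally assumed
mu_actual, genome_actual = 1e-10, 5e6    # corrected E. coli figures

per_genome_assumed = mu_assumed * genome_assumed   # mutations per genome per generation
per_genome_actual = mu_actual * genome_actual

print(per_genome_assumed / per_genome_actual)  # ~10,000: the four orders of magnitude
```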

    vjtorley is just now getting up to speed on population genetics. And Dr. Kozulic might not be that familiar with it either.

    But, conceding the point, the problem that remains is no longer that there are way too few mutations compared to what should be expected; rather, the problem is that there are simply way ‘too few’ mutations, period.

    To think that one a.a. difference per 300-350 a.a. long protein is, on average, capable of explaining the differences between chimps and humans seems to me to be intellectually dishonest.

    I know that is a strong statement; but I think it is clearly correct.

    If biology is teaching us anything these days, it is that the ‘coding’ portion of the genome is much less important in development than the supposed ‘non-coding’. Now, the nc portions being, on average, longer than the coding portions, one might expect more than one mutation on average. However, we find that a lot of the nc portions of the genome are ‘highly conserved.’ In sum, then, we have a Darwinian ‘double-whammy’ here.

  45.
    PaV says:

    wd400:

    Remember, this is the observed difference between humans and chimps. Are you really claiming the observed genetic differences aren’t enough?

    Surely, you jest.

  46.
    wd400 says:

    I’m just trying to understand – are you claiming the genetic differences between humans and chimps are insufficient to explain other differences between those species?

  47.
    PaV says:

    wd400:

    I’m just trying to understand – are you claiming the genetic differences between humans and chimps are insufficient to explain other differences between those species?

    I didn’t say that, did I? Because the obvious riposte is: “Obviously chimps are different from humans, and how can you explain that difference except that their DNA is different?”

    What is clear from these numbers is that the difference between chimp and human proteins is so minuscule that to account for the differences between chimps and humans on the basis of these differences alone—remember! we’re talking about the ‘coding region’ of the genome—is preposterous.

    Hence, the obvious conclusions are: most, if not all, of the differences between chimps and humans result from changes in the ‘non-coding’ regions of the genome, and changes to the gametes themselves, i.e., a change in the make-up of the basic cell-type for chimps and humans.

  48.
    PaV says:

    In my last post I intended to add the following:

    “IDists have been saying this for years.”

    Here’s something from today’s Phys.org website.

    Planaria deploy an ancient gene expression program in the course of organ regeneration

    Researchers knew previously that during embryogenesis FoxA initiates formation of endoderm-derived organs in species as diverse as mouse and roundworms. The new work suggests that regenerating tissues exploit those evolutionarily ancient gene expression pathways. “Engulfing food is one thing that defines an animal,” says Sánchez Alvarado. “This means that organisms from humans to flatworms use a common toolbox to build a digestive system, one that has been shared since animals became multicellular.”

    Read more at: http://phys.org/news/2014-04-p.....ation.html

    I, personally, have been saying for the last ten years that ‘genes’ (proteins, basically) are nothing more than a “common toolbox” for animals, and that what changes the basic shape/morphology of various animals is the “blueprint” (= bauplan) of the organism, and that this would be FOUND to be present in what was then termed “junk-DNA.”

    I have been vindicated many times over since then. And, to ID’s credit, and to the shame (if they were capable of admitting they were wrong) of Darwinists.

    Oh, and BTW: “. . . evolutionarily ancient gene expression pathways.” Just think: half a billion years of “neutral genetic drift”, and the pathways remain the same. Again, the principal function of NS is to “get rid” of defects, not to bring about innovation.

  49.
    gpuccio says:

    PaV:

    You are obviously right. The general tendency, in speciation, is that protein effectors appear in great abundance at the beginning (in LUCA), and then they continue to appear, but at a constantly slowing rate. In mammals and primates, the appearance of new protein families is extremely reduced.

    On the contrary, the differences between species at higher levels are mainly regulatory, and the role of “non translated” DNA becomes increasingly more important.

    To suggest that the huge differences between chimp and humans can be explained by a few mutations, most of which should be neutral, is real folly.

  50.
    wd400 says:

    I didn’t say that, did I?

    I honestly have no idea what point you were trying to make. The latest seems to be that changes in protein-coding genes aren’t, by themselves, enough to explain differences between humans and chimps. Evolutionary biologists have known this for 40 years, so I can’t imagine the point you are groping for.

  51.
    PaV says:

    wd400:

    I honestly have no idea what point you were trying to make. The latest seems to be that changes in protein-coding genes aren’t, by themselves, enough to explain differences between humans and chimps. Evolutionary biologists have known this for 40 years, so I can’t imagine the point you are groping for.

    So, when you can’t refute numbers, and you can’t refute logic, you then say: “Oh, we’ve known that for 40 years.”

    So why didn’t you say that in the first place???? Why were you asking me about what you ‘already knew’?

    Is it because you want to run and hide—hide from the deficiencies of neutral theory and the whole neo-Darwinian/Darwinian project altogether?

    I suspect the next thing you’ll want to say is: let’s talk about Hox genes and evo-devo.

    If not that, then we’ll start talking about how NS comes along and saves the day.

  52.
    wd400 says:

    So why didn’t you say that in the first place????

    Because I had no idea you were making such a weird argument. If you are saying something like

    “If you only look at protein coding genes, you expect x mutations per gene, x is not enough (because I say so) so there just aren’t enough mutations”

    then I’d say: why only look at protein-coding genes?

    BTW, I failed to include the link to the 40-year-old paper by King and Wilson in my earlier comment:

    http://www.sciencemag.org/cont.....07.extract

  53.
    PaV says:

    So, it appears I reached the right conclusion. You were hoping, I suppose, that I would somehow stumble.

    Well, now that we know that proteins can’t do the job, and that regulatory mechanisms must be invoked, we begin to enter, little by little, the land of Irreducible Complexity, because now DNA sequences must do more than just code for relatively stable molecules: they must ‘integrate’ various levels of functionality in some kind of feedback loop or, more to the point (and a pointer to IC), a series of feedback loops.

    Just how do you propose that NGD can accomplish this?

  54.
    wd400 says:

    So, it appears I reached the right conclusion. You were hoping, I suppose, that I would somehow stumble

    Actually, I don’t think evolutionary biology is a game in which one can score points. I’ve acted in bad faith (‘hope you’d stumble’) I’m being honest when I say it’s very hard to extract any point from your comments.

    For instance, I have no idea where you think I’ve claimed drift is responsible for establishing new regulatory networks. Have you read the OP(s) or my posts, or just flown off on your own tangents? The posts are about accounting for the observed genetic differences between humans and chimps; the neutral theory explains the majority of those differences. That’s all.

  55.
    wd400 says:

    (clearly I mean ‘never acted in bad faith’ above)

  56.
    PaV says:

    wd400:

    It’s very clear, however, that you’re not being forthcoming. You insist on being drawn out. The fact is that if you could demonstrate how these regulatory mechanisms could be built up by NGD—which you did not say could build up such mechanisms (but, of course, this is obvious, and so, of course, you’re not going to go down that path)—you would have done so by now.

    You really have no way of plausibly demonstrating how NS could build up such a mechanism, because NS imposes its own kind of constraints and so does not show a way these mechanisms could be built up (because, of course, you would have shown this to the whole world by now).

    I suppose you will now take Larry Moran’s route and say that it is NOT enough to simply poke holes in population genetics, a positive theory MUST be demonstrated.

    I’m being honest when I say it’s very hard to extract any point from your comments.

    I’m sure this is true. Why? Because the plain import of my comments contradicts your personal viewpoint, and you care neither to look at this nor to acknowledge it. It’s quite simple, isn’t it?

  57.
    PaV says:

    wd400:

    You say you can’t understand the point I am making. You say I am making a “weird” argument.

    Let me simplify: I want to know if, and how, either NGD or NS can explain macroevolution.

    In the case of chimps and humans, you have ‘almost’ conceded that most of the changes come from regulatory networks. So, then, the basic question is: how does either NGD or NS explain these changes?

    But you are also basically saying/admitting that ‘drift’ cannot explain the changes to regulatory networks and such. That leaves NS.

    So, how do you see this happening? If you can lay out a rather plausible scenario, then I’m ready to see things differently. So, as they say, “the ball is on your side of the net.”
