Uncommon Descent Serving The Intelligent Design Community

A short post on fixation

Categories: Intelligent Design

In a recent post, Professor Larry Moran accused me of shifting the goalposts in order to avoid a discussion about whether evolution could account for the fixation of 22.4 million mutations in the human lineage since it diverged from the chimpanzee lineage five million years ago. Not being one to run away from a controversy, I've decided to make this question the topic of today's post.

I’d like to begin by defining the neutral theory of evolution:

“This neutral theory claims that the overwhelming majority of evolutionary changes at the molecular level are not caused by selection acting on advantageous mutants, but by random fixation of selectively neutral or very nearly neutral mutants through the cumulative effect of sampling drift (due to finite population number) under continued input of new mutations.”
(Motoo Kimura, “The neutral theory of molecular evolution: A review of recent evidence,” Japanese Journal of Genetics 66, 367–386 (1991)).

And here’s a handy definition of the term “genetic fixation”:

1. the increase of the frequency of a gene by genetic drift until no other allele is preserved in a specific finite population.
(Stedman’s Medical Dictionary. Copyright 2006 Lippincott Williams & Wilkins.)
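To make "fixation" concrete, here is a minimal Wright-Fisher drift sketch (Python is my choice here; the population size and trial count are illustrative assumptions, not figures from the post). A brand-new neutral allele starts at one copy out of 2N and either drifts to fixation or is lost; its chance of fixing is roughly its starting frequency, 1/(2N).

```python
import random

def wright_fisher_fixation(N, seed):
    """Track one new neutral allele (1 copy among 2N gene copies)
    under pure drift; return True if it reaches fixation."""
    rng = random.Random(seed)
    copies, total = 1, 2 * N
    while 0 < copies < total:
        # Next generation: binomial sampling at the current frequency.
        p = copies / total
        copies = sum(rng.random() < p for _ in range(total))
    return copies == total

# The fraction of new neutral mutations that fix should be close to 1/(2N).
N = 50
trials = 2000
fixed = sum(wright_fisher_fixation(N, seed) for seed in range(trials))
print(fixed / trials, "vs expected", 1 / (2 * N))
```

This is also why, under neutrality, the long-run fixation rate equals the per-individual mutation rate: about 2Nu new mutations enter the population each generation, and each fixes with probability 1/(2N), giving a fixation rate of u, which is the relationship at issue throughout the rest of this post.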

“Evidence, please!”

In a previous post, I asked for some experimental evidence to back up Professor Moran’s claim that 22.4 million nearly neutral alleles could have become fixed in the human genome during the last five million years. Were there any other organisms – bacteria, for instance – exhibiting the fixation rate predicted by evolutionary theory for neutral alleles?

Professor Moran kindly provided an example, in his response to my post:

Fortunately for Torley, there are a number of papers that answer his question. The one that I talk about in class is from Richard Lenski's long-term evolution experiment. Recall that mutation rates are about 10^-10 per generation. If the fixation rate of neutral alleles were equal to the mutation rate (as predicted by population genetics), then this should be observable in the experiment run by Lenski (now 60,000 generations).

The result is just what you expect. The total number of neutral allele fixations is 35 in the bacterial cultures, and this corresponds to a mutation rate of 0.9 × 10^-10, or only slightly lower than what is predicted. There are lots of references in the paper and lots of other papers in the literature.

Wielgoss, S., Barrick, J. E., Tenaillon, O., Cruveiller, S., Chane-Woon-Ming, B., Médigue, C., Lenski, R. E. and D. Schneider (2011) Mutation rate inferred from synonymous substitutions in a long-term evolution experiment with Escherichia coli. G3: Genes, Genomes, Genetics 1, 183-186. [doi: 10.1534/g3.111.000406]

The 12 evolving E. coli populations in Richard Lenski's long-term evolution experiment, on June 25, 2008. Image courtesy of Wikipedia.

Initially, I was very impressed with Lenski's paper, and I was inclined to think that Professor Moran had proved his point. Scientia locuta est, causa finita est ("Science has spoken; the case is closed"). Or so I thought.

A skeptical biochemist

It was then that I was contacted by a scientist who wrote to me arguing that the fixation of 22.4 million mutations in the human lineage during the last five million years, by a combination of selection and genetic drift, was impossible and nonsensical for any population of organisms, especially when we consider the pattern of fixation. Strong words! Who was this mysterious scientist? Readers might be surprised to learn that he is Branko Kozulic, a biochemist with a very impressive track record, whom I introduced to readers in a previous post titled The Edge of Evolution? A short summary of his career achievements is available here. Dr. Kozulic also serves on the editorial board of the Intelligent Design journal Bio-Complexity.

By now I was intrigued. Here was a prominent biochemist disagreeing with the arguments of another prominent biochemist! (Larry Moran is a Professor of Biochemistry at the University of Toronto.) Who was right? I decided to investigate the matter further.

There are three different mutation rates

Dr. Kozulic pointed out that we need to distinguish between three different mutation rates:

(a) the number of mutations per base pair per generation, which is indeed roughly constant across all organisms; and

(b) the number of mutations per individual per generation, which varies widely between different kinds of organisms, for reasons that I’ll discuss below; and

(c) the total number of mutations entering the population per generation, which is equal to “the number of gametes produced each generation, 2N, times the probability of a mutation in any one of them, u.” (John Gillespie, Population Genetics: A Concise Guide, Johns Hopkins University Press, 2004, pp. 32-33.)

Professor Moran does make this distinction in some of his posts – for example, this one, where he states that there is “one mutation in every 10 billion base pairs that are replicated,” and then goes on to say that there are “133 new mutations in every zygote.”
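The three rates can be kept straight with a back-of-the-envelope sketch (Python; the per-base-pair rate is the round human figure that comes up in the comments below, while the genome size and effective population size are my own illustrative assumptions):

```python
# (a) mutations per base pair per generation: roughly constant across organisms.
mu_per_bp = 1.0e-8          # round human per-bp figure quoted in the comments

# (b) mutations per individual per generation: scales with genome size.
genome_bp = 6.4e9           # diploid human genome, illustrative round figure
mu_per_individual = mu_per_bp * genome_bp   # ~64, same order as Moran's "133 per zygote"

# (c) mutations entering the population per generation: Gillespie's 2N * u,
#     where u is the probability of a mutation at the site (or locus) of interest.
N = 10_000                  # hypothetical effective population size
mu_per_population = 2 * N * mu_per_bp       # per site, summed over all gametes

print(mu_per_individual, mu_per_population)
```

The point of the sketch is only that the three numbers differ by many orders of magnitude, so an argument that silently switches between them can go badly wrong.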

Which mutation rate is the fixation rate equal to?

In the passage cited above, Professor Moran referred to Lenski's long-term evolution experiment:

If the fixation rate of neutral alleles were equal to the mutation rate (as predicted by population genetics), then this should be observable in the experiment run by Lenski (now 60,000 generations).

Did you notice the reference to "the mutation rate"? As we saw above, there are three mutation rates. In chapter two of his book, Population Genetics: A Concise Guide (Johns Hopkins University Press, Baltimore, second edition, 2004), which I've recently been perusing, evolutionary biologist John Gillespie repeatedly refers to the mutation rate for a given locus. And in population genetics, altering the numerical relationship between the mutation rate and the (effective) population size can lead to dramatically different results. For example, Gillespie writes in the textbook referred to above:

If 1/u << N, the time scale of mutation is much shorter than that of drift, leading to a population with many unique alleles. If N << 1/u, the time scale of drift is the shorter, leading to a population devoid of variation. (2004, p. 31)

Professor Moran is kindly requested to state whether he agrees with this statement, and if not, to provide some references to support his views.
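Gillespie's comparison of time scales can be sketched numerically (Python; `regime` is a hypothetical helper of my own devising, and the factor-of-10 threshold is an arbitrary stand-in for "<<", not anything from the textbook): mutation introduces variation on a time scale of order 1/u generations, while drift removes it on a time scale of order N generations, and whichever is shorter dominates.

```python
def regime(N, u):
    """Compare the drift time scale (~N generations) with the
    mutation time scale (~1/u generations), per Gillespie's heuristic."""
    if 1 / u < N / 10:          # 1/u << N: mutation outpaces drift
        return "many unique alleles"
    if N < (1 / u) / 10:        # N << 1/u: drift strips variation first
        return "little variation"
    return "intermediate"

print(regime(N=1_000_000, u=1e-3))   # 1/u = 1e3 << N
print(regime(N=100, u=1e-8))         # N << 1/u = 1e8
```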

Fixation in human beings: five orders of magnitude faster than in Lenski’s bacteria!

In the passage cited above, Professor Moran referred to Lenski's results with E. coli bacteria: a mere 35 fixations after 60,000 generations. That's about 0.0006 fixation events per generation for the population as a whole.

By contrast, the fixation rate which Professor Moran claims for human beings (130 per generation) is roughly 200,000 times the rate which Lenski observed for his bacteria: a difference of over five orders of magnitude. This difference in fixation rates requires an explanation. Do we agree on this point, Professor Moran?
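For what it's worth, the arithmetic behind this comparison, using only the figures quoted in this post, runs as follows:

```python
import math

# Lenski's E. coli populations: 35 neutral fixations in 60,000 generations.
lenski_rate = 35 / 60_000          # ~0.0006 fixations per generation

# Fixation rate claimed for the human lineage in this post.
human_rate = 130                   # fixations per generation

ratio = human_rate / lenski_rate   # ~220,000-fold difference
print(round(ratio), round(math.log10(ratio), 1))
```

(Whether the two rates are commensurable at all, given the very different genome sizes and per-genome mutation rates of E. coli and humans, is exactly what is contested in the comments below.)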

Finding the cause that explains the pattern

Now, clearly something was responsible for producing the 22.4 million neutral alleles that distinguish the human lineage from that of chimpanzees. Nobody disputes that. What Dr. Kozulic rejects is the idea that all these mutations could have been fixed by any undirected process (e.g. random mutation combined with natural selection and/or genetic drift) within the time available, especially when we consider the pattern of fixed mutations.

I’d now invite readers to have a look at an article by Rasmus Nielsen et al., titled, A Scan for Positively Selected Genes in the Genomes of Humans and Chimpanzees (PLoS Biology, 3(6): e170. doi:10.1371/journal.pbio.0030170, published May 3, 2005). In particular, I’d like readers to check out Figure 1, showing the distributions of nonsynonymous and synonymous nucleotide differences among genes, for the chimpanzee sequence.

What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years.

In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain. Until Professor Moran comes up with an explanation of his own, and some research to back it up, the ball is squarely in his court.

Over to you, Professor Moran.

Comments
What would Freud say about this fixation with fixation? :) But anyway, has anyone ever validated neutral theory's equations? Has it been done with prokaryotes? Metazoans? How large were the populations? Haldane had 300 generations for a beneficial mutation to become fixed, yet that didn't pan out with fruit flies.

Joe
April 6, 2014, 05:39 PM PDT
Thank you for your thoughtful post. We really need more people with biological experience here!

I think I've been alive long enough to have acquired quite a bit of biological experience!

Mung
April 6, 2014, 04:31 PM PDT
Just to clarify how at sea you are here. You say, in reference to the PLoS Biology paper:

"What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years. In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain."

and (after reading my comment?)

"I was also waiting for him to explain the pattern of fixation we observe, where multiple mutations (up to 21) have become fixed in thousands of different proteins. Alas, no explanation was forthcoming."

But, given that we know the human mutation rate (mu) is around 1E-8 per base, and the average length of a protein-coding gene in the human genome is ~1200 bp (with much variance), under neutrality we'd expect 1E-8 * 1200 bp * 240,000 gen * 2 lineages = ~6 differences per gene. In fact we see on average substantially fewer differences per gene (though some have more; that's the distribution of gene sizes and the Poisson distribution for you). That effect is the result of natural selection, since these are genic sequences and we expect mutations that change amino acids to be selected against. And lo, the paper actually breaks the mutations down into those that change the amino acid in question (non-synonymous mutations) and those that don't. Since ~1/4 of possible nucleotide mutations are synonymous, you'd expect the same 3:1 ratio in the differences between humans and chimps in the absence of selection. In fact, more than half of the observed differences are synonymous, the effect of non-synonymous mutations being removed by selection. So, there are not only fewer differences in the observed data than the neutral theory would predict, but the difference can be explained by the action of natural selection on non-synonymous mutations. So much for

In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain.
wd400
April 6, 2014, 01:46 PM PDT
I had anticipated your criticism

Then why did you not correct your error in conflating the E. coli results and the human-chimp differences?

wd400
April 6, 2014, 01:06 PM PDT
Hi wd400, Thank you for your post. I had anticipated your criticism:
The E. coli genome is more than 600 times smaller than the human genome and has a mutation rate ~ 2 orders of magnitude lower (in part because there are multiple cell divisions per generation in animals).
I was waiting for Professor Moran to come out and say something like that. To my surprise, he didn't. I was also waiting for him to explain the pattern of fixation we observe, where multiple mutations (up to 21) have become fixed in thousands of different proteins. Alas, no explanation was forthcoming. I think the ball is still in Professor Moran's court.

vjtorley
April 6, 2014, 01:03 PM PDT
Yes. That many mutations are expected to fix across the genome.

wd400
April 6, 2014, 12:38 PM PDT
WD400 @ 20

So in your post #4, you meant "5e6 bases per genome" and the calculation yields "mutations per genome", right?

Charles
April 6, 2014, 12:29 PM PDT
The units are mutations: the expected number of mutations to fix in the population over that many generations, given the known genome size and mutation rate.

wd400
April 6, 2014, 12:23 PM PDT
Ok, so what did you intend as the complete units of "5e6 bases"? Per gene, per genome, per pizza, what? What back-of-the-envelope calculation were you illustrating?

Charles
April 6, 2014, 12:19 PM PDT
Hold on, I think that's not quite what you are asking. If you want to know how many mutations fix in a gene, and you know the per-nucleotide mutation rate, then you need to scale up to the size of the gene. If you want to know how many mutations fix in the genome, then you need to scale up further to account for the number of nucleotides in the genome.

wd400
April 6, 2014, 12:13 PM PDT
Yes. Under neutrality the population fixation rate (i.e. the rate at which new variants take over a population) is equal to the individual mutation rate; that's the point at the heart of this whole thing.

wd400
April 6, 2014, 12:11 PM PDT
wd400 @15

in any case I don't think the genome size is a rate per generation.

Ok, basepairs/genome then. I think my mistake was reading mutations/basepair/generation as mutations/(basepair/generation), whereas you intended (mutations/basepair)/generation.

Though I think you have a typo in that: bases * [(mutations/bases)/gen] = mutations/gen, not "bases/gen"; and mutations/gen, when multiplied by generations, indeed yields mutations.

But if the genome size is "basepairs/genome", then your calculation yields 30 "mutations/genome", which would be interpreted as meaning that after 60,000 generations a genome would have acquired 30 mutations (when compared against a genome at generation 1).

You have a similar implication when you wrote "In a gene 1200 bases long you'd expect ... gives an expectation of ~6 changes per gene". I'm assuming that, for the purposes of the calculation, base-pairs/gene and base-pairs/genome are homologous concepts, yes?

Charles
April 6, 2014, 12:08 PM PDT
Charles,

Using your units, I get mutations/generation; in any case I don't think the genome size is a rate per generation. The mutation rate per individual genome should be in bases per generation. So, genome size (bp) x nucleotide mutation rate (mutations per bp per gen) gives bases * [(mutations/bases)/gen] = bases/gen. Multiply that by several generations and our units are indeed mutations.

JDD,

The mutation rate is generally much higher in animals than prokaryotes, but that has precisely no relevance to the questions VJ is raising, since we use observed mutation rates as the basis of these calculations.

wd400
April 6, 2014, 11:16 AM PDT
Thanks for the link Dr JDD!

bornagain77
April 6, 2014, 09:59 AM PDT
wd400 @4 & vjtorley

5e6 bases * a mutation rate of 1e-10 per base * 60 000 generations gives a prediction of 30 mutations.

Your calculation with all units expressed explicitly is: 5x10^6 basepairs/generation * 1x10^-10 mutations/basepair/generation * 6x10^4 generations.

And we see that 10^6 * 10^-10 * 10^4 all cancel => 1; and 5x6 = 30; and 30 x 1 = 30. So I understand how you get "30". But I don't see how you get "mutations":

basepairs/generation * mutations/basepair/generation * generations

which resolves to:

basepairs/generation * mutation-generations/basepair * generations

basepairs/generation cancels generations/basepair, leaving "mutation-generations", not simply "mutations".

This matters because if the units are incorrect, then the calculation is incorrect, or incorrectly explained or stated. So 1) is there something else assumed in your calculation that cancels "generations" from the final result? Or 2) is there some interpretation under which "mutation-generations" equates to just "mutations"?

I'm just trying to understand the units on your calculation. I'm not arguing, I'm asking.

Charles
April 6, 2014, 07:17 AM PDT
Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.youtube.com/watch?v=85kThFEDi8o

bornagain77
April 6, 2014, 07:02 AM PDT
http://www.evolutionnews.org/2014/04/junk_dna_is_a_f084031.html

BA77, you might be interested in this excellent article. Note - it also proves that science can advance much better if we assume that what is present in nature has function (due to design):

Dr JDD
April 6, 2014, 07:02 AM PDT
to somewhat reiterate: The Neutral Theory requires that a large percentage of the genome be considered junk. Yet ENCODE 2012 decisively overturned that notion:
Junk No More: ENCODE Project Nature Paper Finds "Biochemical Functions for 80% of the Genome" - Casey Luskin - September 5, 2012 Excerpt: The Discover Magazine article further explains that the rest of the 20% of the genome is likely to have function as well: "And what's in the remaining 20 percent? Possibly not junk either, according to Ewan Birney, the project's Lead Analysis Coordinator and self-described "cat-herder-in-chief". He explains that ENCODE only (!) looked at 147 types of cells, and the human body has a few thousand. A given part of the genome might control a gene in one cell type, but not others. If every cell is included, functions may emerge for the phantom proportion. "It's likely that 80 percent will go to 100 percent," says Birney. "We don't really have any large chunks of redundant DNA. This metaphor of junk isn't that useful."" http://www.evolutionnews.org/2012/09/junk_no_more_en_1064001.html Scientists go deeper into DNA (Video report) (Junk No More) - Sept. 2012 http://bcove.me/26vjjl5a Quote from preceding video: “It's just been an incredible surprise for me. You say, ‘I bet it's going to be complicated', and then you are faced with it and you are like 'My God, that is mind blowing.'” Ewan Birney - senior scientist - ENCODE 2012
I believe Moran himself raised a big stink with some of the ENCODE scientists, trying to tell them that they were wrong in their conclusion that the vast majority of DNA is functional. But as the following paper points out, the motivation among Darwinists for so staunchly resisting the ENCODE findings was driven primarily by philosophical concerns, not empirical ones.
The extent of functionality in the human genome - John S Mattick and Marcel E Dinger – July 2013 Excerpt of abstract: Finally, we suggest that resistance to these (ENCODE) findings is further motivated in some quarters by the use of the dubious concept of junk DNA as evidence against intelligent design. http://link.springer.com/article/10.1186%2F1877-6566-7-2/fulltext.html
bornagain77
April 6, 2014, 06:29 AM PDT
Moreover, the vast majority of mutations are now known to be 'non-random', i.e. directed. Mutations are not almost all random as is presupposed in neutral theory:
Revisiting the Central Dogma in the 21st Century - James A. Shapiro - 2009 Excerpt (Page 12): Underlying the central dogma and conventional views of genome evolution was the idea that the genome is a stable structure that changes rarely and accidentally by chemical fluctuations (106) or replication errors. This view has had to change with the realization that maintenance of genome stability is an active cellular function and the discovery of numerous dedicated biochemical systems for restructuring DNA molecules.(107–110) Genetic change is almost always the result of cellular action on the genome. These natural processes are analogous to human genetic engineering,,, (Page 14) Genome change arises as a consequence of natural genetic engineering, not from accidents. Replication errors and DNA damage are subject to cell surveillance and correction. When DNA damage correction does produce novel genetic structures, natural genetic engineering functions, such as mutator polymerases and nonhomologous end-joining complexes, are involved. Realizing that DNA change is a biochemical process means that it is subject to regulation like other cellular activities. Thus, we expect to see genome change occurring in response to different stimuli (Table 1) and operating nonrandomly throughout the genome, guided by various types of intermolecular contacts (Table 1 of Ref. 112). http://shapiro.bsd.uchicago.edu/Shapiro2009.AnnNYAcadSciMS.RevisitingCentral%20Dogma.pdf New Research Elucidates Directed Mutation Mechanisms - Cornelius Hunter - January 7, 2013 Excerpt: mutations don’t occur randomly in the genome, but rather in the genes where they can help to address the challenge. But there is more. The gene’s single stranded DNA has certain coils and loops which expose only some of the gene’s nucleotides to mutation. 
So not only are certain genes targeted for mutation, but certain nucleotides within those genes are targeted in what is referred to as directed mutations.,,, These findings contradict evolution’s prediction that mutations are random with respect to need and sometimes just happen to occur in the right place at the right time.,,, http://darwins-god.blogspot.com/2013/01/news-research-elucidates-directed.html WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations - Kevin Kelly - 2014 Excerpt: What is commonly called "random mutation" does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it. On the contrary, there's much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism's predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern. http://edge.org/response-detail/25264
Moreover, The whole idea of neutral mutations simply goes against the 'common sense' of theoretical concerns:
"Moreover, there is strong theoretical reasons for believing there is no truly neutral nucleotide positions. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding, and nucleosome building. If a nucleotide carries absolutely no (useful) information, it is, by definition, slightly deleterious, as it slows cell replication and wastes energy.,, Therefore, there is no way to change any given site without some biological effect, no matter how subtle." - John Sanford - Genetic Entropy and The Mystery of The Genome - pg. 21 - Inventor of the 'Gene Gun'
Moreover, information, which is completely transcendent of energy and mass, is what is constraining the cell to be in such a massive state of thermodynamic disequilibrium. It is not merely the chemical bonding (in fact the chemical bonding is held to be 'passive' to thermodynamic concerns).
Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH - Dr Andy C. McIntosh is the Professor of Thermodynamics (the highest teaching/research rank in U.K. university hierarchy) Combustion Theory at the University of Leeds. Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/paperinfo.asp?pid=420 Information and Thermodynamics in Living Systems - Andy C. McIntosh - May 2013 Excerpt: The third view then that we have proposed in this paper is the top down approach. In this paradigm, the information is non-material and constrains the local thermodynamics to be in a non-equilibrium state of raised free energy. It is the information which is the active ingredient, and the matter and energy are passive to the laws of thermodynamics within the system. As a consequence of this approach, we have developed in this paper some suggested principles of information exchange which have some parallels with the laws of thermodynamics which undergird this approach.,,, http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0008 Quantum Information/Entanglement In DNA - short video http://www.metacafe.com/watch/5936605/ Does DNA Have Telepathic Properties?-A Galaxy Insight - 2009 Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn't be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. 
There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible. http://www.dailygalaxy.com/my_weblog/2009/04/does-dna-have-t.html Quantum entanglement holds together life’s blueprint - 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/
It is very interesting to note that quantum entanglement, which demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints (i.e. it is 'non-local'), should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (Bell, A. Aspect, A. Zeilinger) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by Him, and without Him was not anything made that was made. In Him was life, and that life was the Light of men. Evanescence - The Other Side (Lyric Video) http://www.vevo.com/watch/evanescence/the-other-side-lyric-video/USWV41200024?source=instantsearch
bornagain77
April 6, 2014, 06:06 AM PDT
a few related notes:
Groundbreaking Genetic Discoveries Challenge Ape to Human Evolutionary Theory – June 17, 2013 Excerpt: Ultimately, the study results were contradictory to what evolutionists had theorized. Not only were genetic recombination rates markedly low in areas of human-chimp DNA differences (“rearranged” chromosomes), but the rates were much higher in areas of genetic similarity (“collinear” chromosomes). This is the reverse of what evolutionists had predicted. “The analysis of the most recent human and chimpanzee recombination maps inferred from genome-wide single-nucleotide polymorphism data,” the scientists explained, “revealed that the standardized recombination rate was significantly lower in rearranged than in collinear chromosomes.” Jeffrey Tomkins, a Ph.D. geneticist with the Institute for Creation Research (ICR), told the Christian News Network that these results were “totally backwards” from what evolutionists had predicted, since genetic recombination is “not occurring where it’s supposed to” under current evolutionary theory. Dr. Tomkins further emphasized that evolutionists greatly exaggerate the genetic similarities between humans and chimps, and often ignore areas of DNA where major differences do exist. “It’s called cherry-picking the data,” he explained. “There are many genetic regions between humans and chimps that are radically different. In fact, humans have many sections of DNA that are missing in chimps and vice versa. Recent research is now showing that the genomes are only 70% similar overall.”,,, http://christiannews.net/2013/06/17/groundbreaking-genetic-discoveries-challenge-ape-to-human-evolutionary-theory/
Of related note: Richard Dawkins claimed that the FOXP2 gene was among ‘the most compelling evidences’ for establishing that humans evolved from monkeys, yet, as with all the other evidences offered from Darwinists, once the FOXP2 gene was critically analyzed it fell completely apart as proof for human evolution:
Dawkins Best Evidence (FOXP2 gene) Refuted - video http://www.youtube.com/watch?v=IfFZ8lCn5uU Human brain evolution: From gene discovery to phenotype discovery - Todd M. Preuss - February 2012 Excerpt: It is now clear that the genetic differences between humans and chimpanzees are far more extensive than previously thought; their genomes are not 98% or 99% identical.,,, ,,our understanding of the relationship between genetic changes and phenotypic changes is tenuous. This is true even for the most intensively studied gene, FOXP2,, In part, the difficulty of connecting genes to phenotypes reflects our generally poor knowledge of human phenotypic specializations, as well as the difficulty of interpreting the consequences of genetic changes in species that are not amenable to invasive research. http://www.pnas.org/content/109/suppl.1/10709.full.pdf
As well, the primary piece of evidence at the Dover trial trying to establish chimp-human ancestry from SNP (Single Nucleotide Polymorphism) evidence was overturned:
Dover Revisited: With Beta-Globin Pseudogene Now Found to Be Functional, an Icon of the “Junk DNA” Argument Bites the Dust - Casey Luskin - April 23, 2013 http://www.evolutionnews.org/2013/04/an_icon_of_the_071421.html
It is also important to remember that the neutral theory (and its accompanying abandonment of selection) was forced upon Darwinists by the math itself, not by the empirical evidence:
A Short History Of The Junk DNA Argument Of Darwinists Excerpt: Kimura's Quandary Excerpt: Kimura realized that Haldane was correct,,, He developed his neutral theory in response to this overwhelming evolutionary problem. Paradoxically, his theory led him to believe that most mutations are unselectable, and therefore,,, most 'evolution' must be independent of selection! Because he was totally committed to the primary axiom (neo-Darwinism), Kimura apparently never considered his cost arguments could most rationally be used to argue against the Axiom's (neo-Darwinism's) very validity. John Sanford PhD. - "Genetic Entropy and The Mystery of the Genome" - pg. 161 - 162 https://docs.google.com/document/d/14-TXfGxPu-3YeCHtLmxTmL4UZN90Odt135c59yTIFsw/edit
A graph featuring Kimura's distribution, "properly used", is shown in the following video:
Evolution Vs Genetic Entropy - Andy McIntosh - video https://vimeo.com/91162565
Moreover, the empirical evidence itself (and common sense) tells us that the overwhelming majority of mutations are deleterious, not neutral, in their effects:
Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 - May 2013 Excerpt: It is almost universally acknowledged that beneficial mutations are rare compared to deleterious mutations [1–10].,, It appears that beneficial mutations may be too rare to actually allow the accurate measurement of how rare they are [11]. 1. Kibota T, Lynch M (1996) Estimate of the genomic mutation rate deleterious to overall fitness in E. coli . Nature 381:694–696. 2. Charlesworth B, Charlesworth D (1998) Some evolutionary consequences of deleterious mutations. Genetica 103: 3–19. 3. Elena S, et al (1998) Distribution of fitness effects caused by random insertion mutations in Escherichia coli. Genetica 102/103: 349–358. 4. Gerrish P, Lenski R N (1998) The fate of competing beneficial mutations in an asexual population. Genetica 102/103:127–144. 5. Crow J (2000) The origins, patterns, and implications of human spontaneous mutation. Nature Reviews 1:40–47. 6. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. 7. Imhof M, Schlotterer C (2001) Fitness effects of advantageous mutations in evolving Escherichia coli populations. Proc Natl Acad Sci USA 98:1113–1117. 8. Orr H (2003) The distribution of fitness effects among beneficial mutations. Genetics 163: 1519–1526. 9. Keightley P, Lynch M (2003) Toward a realistic model of mutations affecting fitness. Evolution 57:683–685. 10. Barrett R, et al (2006) The distribution of beneficial mutation effects under strong selection. Genetics 174:2071–2079. 11. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. 
http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006 “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/ Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010 Excerpt: Most mutations in the genes of the Salmonella bacterium have a surprisingly small negative impact on bacterial fitness. And this is the case regardless whether they lead to changes in the bacterial proteins or not.,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed. 
http://www.physorg.com/news/2010-11-unexpectedly-small-effects-mutations-bacteria.html
bornagain77
April 6, 2014 at 6:05 AM PDT
Dr JDD: Thank you for your thoughtful post. We really need more people with biological experience here! I think that, when we try to "explain" the evolution of functional sequences, like proteins, the mutation rate can be safely approximated in favour of the darwinian theory: the theory will fail anyway, and without any possible doubt. I will try to be more clear.

What really counts here is not the mutation rate itself, but the number of states, or sequence configurations, that can really be achieved by the system in the available time. When Dembski proposed his famous universal probability bound, he set the threshold very high (500 bits of complexity, about 10^150 possible configurations), so that he could exclude any possible random search in the whole universe, even using all quantum states from the big bang on as bits for the computation. That is really remarkable, because 500 bits is equivalent to the complexity of a 115-AA sequence (if the target space were made of one single state). Even considering the functional redundancy, we are well beyond that threshold in many complex proteins. For example, in Durston's famous paper, where he analyzes 35 protein families, 12 protein families have functional complexities beyond this universal probability bound, with the highest functional complexity being 2416 bits (Flu PB2).

But I always felt that Dembski was being too generous here. So some time ago I tried to compute a gross threshold which was more appropriate for a biological system. I considered our planet, with a life span of 5 billion years, as though it had been fully covered by prokaryotes from the beginning of its existence to now, and I tried to compute, grossly, the total number of states which could have been tested by such a system, considering a mean bacterial genome, reproduction time, and a very generous estimate of the global bacterial population on our planet.
The result, which may be more or less appropriate, was that 150 bits (10^45) of functional complexity were more than enough to exclude the random generation of a functional sequence in the whole life of our planet. Now, that is even more remarkable, because 150 bits is equivalent to about 35 AAs, and in Durston's paper 29 protein families out of 35 were well beyond that threshold.

Are we still exaggerating in favour of darwinism? Yes, we certainly are. First of all, prokaryotic life certainly did not begin 5 billion years ago (which is even more than earth's real life span). Second, the earth was certainly not fully covered by prokaryotes from the beginning of life. Third, the appearance of new protein families is not restricted to prokaryotes, but goes on in higher beings, up to mammals. And mammals reproduce much more slowly than prokaryotes and, even more important, there are not as many of them on our planet. Therefore, the number of states that can be reached / tested by mammals, or more generally by metazoa, is much smaller than what can be reached / tested by prokaryotes, whatever the mutation rate. And still, new complex functional protein families which never existed before, and are totally unrelated to what existed before, continued to emerge in metazoa, up to mammals.

So maybe 150 bits is still too generous as a biological probability bound. After all, both Behe and Axe, starting from bottom-up considerations, tend to fix the threshold of what random variation can achieve at 3-5 AAs (about 13-22 bits). But I feel that I can safely be generous. We win anyway. So, let it be 150 bits, for now :)

gpuccio
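The arithmetic behind these thresholds is easy to check. Below is a short sketch that converts a count of amino-acid positions into bits of sequence complexity, assuming (as the comment does) 20 equiprobable amino acids per position and a single-state target:

```python
import math

# Convert amino-acid positions into bits of sequence complexity,
# assuming 20 equiprobable amino acids per position and a single-state
# target (the same simplification used in the comment above).

BITS_PER_AA = math.log2(20)  # ~4.32 bits per position

def aa_to_bits(n_positions):
    return n_positions * BITS_PER_AA

def bits_to_states(bits):
    return 2 ** bits

print(round(aa_to_bits(115)))        # ~497 bits, near Dembski's 500-bit bound
print(round(aa_to_bits(35)))         # ~151 bits, near the 150-bit bound
print(f"{bits_to_states(150):.1e}")  # ~1.4e+45 states, i.e. roughly 10^45
```

A 115-AA sequence does come out near the 500-bit universal bound, and a 35-AA sequence near the proposed 150-bit (~10^45 states) biological bound.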
April 6, 2014 at 2:51 AM PDT
Great post gpuccio - I would have to agree, especially in light of events like the Cambrian explosion, which would require a very rapid mutation rate/introduction of novel beneficial mutations of a very high magnitude. Even with environmental pressures/changes, no one can postulate an intellectually satisfying naturalistic explanation of this.

Anyway, putting that aside, I do have a problem with this comparison of bacterial mutation rates to humans. I have not given this a huge amount of time (just relating it to my own experience, which may be a foolish and naive thing to do!), so this may be easy to answer, but I would be interested to hear an answer. I am not saying this is evidence against these experiments, just asking a question.

When you first engage in the act of PCR, you use enzymes to copy and replicate your DNA of interest. The cheapest and most basic approach in labs, which I used many years ago now, is Taq polymerase (still frequently used). However, you soon learn that not all polymerases are equal. You realise that Taq has a higher error rate, and that there are other ancient polymerases, such as Pfu polymerase, which provide proof-reading abilities, thus reducing the error rate dramatically (but Pfu is a little more expensive, so you don't give it to undergrads unless the work is quite important!). That raises a point, though: not all polymerases are equal. If there is such variation in quite closely related ancient polymerases, how much more in eukaryotic polymerases? There is certainly published work showing that human polymerase, with its proof-reading capabilities, is much less prone to error than other eukaryotic replication systems. By the way, as a side point, if you turn off proof-reading capabilities in mice, you get genomic instability and carcinogenesis... anyway, I digress.
My question is: how do researchers take into account the potential differences in the accuracies of human polymerases (and the associated transcription machinery) in these experiments that utilise bacteria? Is a direct comparison made, or are there factors used to make the comparison more likely to relate to the actual in vivo scenario? And if no adjustment is made for improved error rates in humans (assuming this holds true), will these bacterial experiments not overestimate the mutation rates?

Finally, somewhat irrelevant but worth considering - modelling bacteria to relate to advanced multicellular organisms is probably the only way to model this, but it is severely limited. Even in the world of pharmaceutics we can use mice/rats as models, even chimps, and yet cannot fully predict the way drugs will act in humans. Pharma history is littered with drugs that looked safe or very effective in organisms "closely related to humans" (i.e. mice, etc.) yet once they go into the clinic they kill/harm people or show no efficacy. Much of our understanding of even cellular processes is based on animal models, yet there are clear differences in humans, meaning we just do not understand a lot about ourselves and rather assume that what is true for a mouse will hold true for a human. Often true, but often it is more complex than that. Always worth considering. Again, I am not attacking this bacterial work, merely making a few comments and asking questions. JD

Dr JDD
April 6, 2014 at 12:22 AM PDT
VJ: Without going into the technical details that you so patiently outline, I would like to state once more what is a very simple concept, but one that is often overlooked as soon as darwinists start debating their beloved neutral theory.

The idea is simple. The problem for any non-design theory of biological information is to explain how functional information arises. Darwinists usually agree with us (temporarily) that complex functional information cannot arise merely through random processes (that is, through a random search or random walk). That's why they invoke their beloved Natural Selection. But when the dramatic limitations of NS, and the complete absence of any empirical support for its role in macroevolution, are pointed out to them, suddenly they seem to forget what they have said before, and go back to the idea of neutral mutations. But neutral mutations are by definition random, and we are back in the initial scenario (complex functional information cannot arise by random processes alone). Here the magical genetic drift comes to their (apparent) rescue: "Well, neutral mutations can be fixed just the same, by genetic drift!" (that is usually said with a great self-satisfied smile on the darwinist's face). Correct. And so?

Everybody seems to overlook the important point. I will try to state it here as simply as possible: if it is true that a random mutation can become fixed by genetic drift, it is equally true that such a process can happen with the same probability for any random neutral mutation. IOWs, the process is simply a random aspect of Random Variation. The probability of each mutation's being present in the genome remains exactly the same. What does that mean? It means that genetic drift, while remaining an interesting phenomenon, is completely irrelevant to the generation of functional information.
The only important quantities in the computation for the system where functional information arises are:

a) The random search/random walk, which can be defined by two parameters:
a1) How improbable the functional state is versus the search space (the target space / search space ratio).
a2) How many states can be reached / tested by the random variation process.
b) Any non-random components of the process (such as NS), whose role must be understood, tested and verified.

Now, the important point is: genetic drift does not modify any of these parameters. The improbability of the functional state remains absolutely the same. The number of states that the system can reach / test in the given time remains the same. Indeed, it depends only on the variation rate, however computed. Genetic drift only modifies randomly the modalities of random variation, but in no way does it significantly increase the variation rate. The role, if any, of NS remains the same. So I really don't understand all this emphasis on genetic drift. Random genetic drift is totally irrelevant to our problem of how functional information arises, because it is a random process that bears no relationship to function, and does not modify the probabilistic resources of the system.

Well, I suppose that this is a verbose way of saying what Dr. Kozulic and yourself had already clearly stated in the OP: "In short: it is the pattern of fixation which neither the theory of neutral evolution nor the neo-Darwinian theory of natural selection, nor any combination of the two, can adequately explain." I just wanted to be very clear on the reasons why I so fully agree with you :)

gpuccio
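The standard population-genetics result being discussed here, that a neutral allele fixes by drift with probability equal to its current frequency regardless of what the allele does, can be illustrated with a minimal Wright-Fisher simulation. This is a sketch with illustrative parameters (the population size, starting frequency and replicate count are arbitrary choices, not data from any study):

```python
import random

# Minimal Wright-Fisher sketch of neutral drift: an allele with no
# fitness effect fixes with probability equal to its starting frequency,
# so drift is indifferent to what the allele does. All parameters here
# are illustrative, not estimates for any real population.

def fixes(n_chrom, start_copies, rng):
    """Run one population to fixation or loss; True if the allele fixed."""
    copies = start_copies
    while 0 < copies < n_chrom:
        p = copies / n_chrom
        # Binomial resampling of the next generation (stdlib only).
        copies = sum(rng.random() < p for _ in range(n_chrom))
    return copies == n_chrom

rng = random.Random(42)              # fixed seed for reproducibility
n_chrom, start, reps = 50, 10, 400   # 2N = 50 chromosomes, allele at 20%
fixed = sum(fixes(n_chrom, start, rng) for _ in range(reps))
print(fixed / reps)  # close to the starting frequency of 0.2
```

The fraction of runs ending in fixation tracks the starting frequency, whatever "function" one imagines the allele to have.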
April 5, 2014 at 10:05 PM PDT
Every time you've written one of these posts you have added an addendum acknowledging an elementary mistake. Do you think maybe you should learn the first thing about population genetics before you make these claims?

In this case the answer to "Which mutation rate is the fixation rate equal to?" is all of them. The fixation rate per base is equal to the individual mutation rate per base. The fixation rate per genome is the same as the individual mutation rate per genome.

"In the passage cited above, Professor Moran referred to Lenski's results with E. coli bacteria: a mere 35 fixations after 60,000 generations. That's about 0.0006 fixation events per generation, for the population as a whole."

The E. coli genome is more than 600 times smaller than the human genome and has a mutation rate ~2 orders of magnitude lower (in part because there are multiple cell divisions per generation in animals). Using back-of-the-envelope numbers, 5e6 bases * a mutation rate of 1e-10 per base * 60,000 generations gives a prediction of... 30 mutations.

"What the figure shows is that multiple mutations (up to 21) have become fixed in thousands of different proteins, within the relatively short span of five million years."

Yes. In a gene 1,200 bases long you'd expect 1e-8 mutations per base * 1,200 bases * 240,000 generations * 2 independently evolving lineages, giving an expectation of ~6 changes per gene. Taking into account larger genes, and the variance inherent in a Poisson process like mutation, 21 is not very surprising.

"For example Gillespie, in the textbook referred to above, writes... [snip textbook relationship between Ne and heterozygosity]"

I don't know what the point of this quote is. Small populations contain less diversity because variants fix more quickly. What has that to do with this post?

wd400
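These back-of-the-envelope numbers can be reproduced directly from the neutral-theory identity that the fixation rate equals the mutation rate. The following is a sketch; the rates and generation counts are simply the round figures quoted in the comment above:

```python
# Under the neutral theory the per-base fixation rate equals the
# per-base mutation rate, so expected fixations = bases * mu * generations.
# All figures below are the round numbers quoted in the comment.

def expected_fixations(bases, mu_per_base, generations, lineages=1):
    """Expected neutral fixations across a stretch of sequence."""
    return bases * mu_per_base * generations * lineages

# Lenski's E. coli: ~5e6 bp, mu ~1e-10 per base per generation,
# 60,000 generations.
print(expected_fixations(5e6, 1e-10, 60_000))      # ~30, vs ~35 observed

# A 1,200 bp gene over ~5 million years of human/chimp divergence:
# mu ~1e-8, ~240,000 generations, two independently evolving lineages.
print(expected_fixations(1_200, 1e-8, 240_000, 2))  # ~5.8 changes per gene
```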
April 5, 2014 at 8:36 PM PDT
I know I sometimes get overwhelmed by the question of which numbers we are talking about. It seems to me that the 22.4 million mutation difference between human and chimp is the mutations found in the genes that humans and chimps have in common. It seems to me that the calculation of how many mutations should fix due to drift, as measured in the bacteria, is over the entire DNA of the bacteria, including the non-coding DNA. If I am right, he is comparing apples to oranges. The shared genes should be particularly resistant to mutation because they are active regions. We know, for instance, that in critical DNA no mutations stick (ultraconserved regions) because all mutations are detrimental. It is a whole lot easier to find non-detrimental mutations in low-priority DNA than it is in the DNA that actually prescribes the shape of the mechanical parts of the organism.

Moose Dr
April 5, 2014 at 8:21 PM PDT
Of interest regarding the mutation rate of Lenski's E. coli: New Work by Richard Lenski: Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal. http://www.evolutionnews.org/2009/10/new_work_by_richard_lenski.html Lenski's Long-Term Evolution Experiment: 25 Years and Counting - Michael Behe - November 21, 2013 Excerpt: Twenty-five years later the culture -- a cumulative total of trillions of cells -- has been going for an astounding 58,000 generations and counting. As the article points out, that's equivalent to a million years in the lineage of a large animal such as humans.,,, ,,,its mutation rate has increased some 150-fold. As Lenski's work showed, that's due to a mutation (dubbed mutT) that degrades an enzyme that rids the cell of damaged guanine nucleotides, preventing their misincorporation into DNA. Loss of function of a second enzyme (MutY), which removes mispaired bases from DNA, also increases the mutation rate when it occurs by itself. However, when the two mutations, mutT and mutY, occur together, the mutation rate decreases by half of what it is in the presence of mutT alone -- that is, it is 75-fold greater than the unmutated case. Lenski is an optimistic man, and always accentuates the positive. In the paper on mutT and mutY, the stress is on how the bacterium has improved with the second mutation. Heavily unemphasized is the ominous fact that one loss of function mutation is "improved" by another loss of function mutation -- by degrading a second gene. Anyone who is interested in long-term evolution should see this as a baleful portent for any theory of evolution that relies exclusively on blind, undirected processes. http://www.evolutionnews.org/2013/11/richard_lenskis079401.html

bornagain77
April 5, 2014 at 8:15 PM PDT
Anyone who refers to Lenski's work as their main supporting evidence for 'bottom up' evolution is definitely not looking at the Lenski experiment in an unbiased manner: Richard Lenski's Long-Term Evolution Experiments with E. coli and the Origin of New Biological Information - September 2011 Excerpt: The results of future work aside, so far, during the course of the longest, most open-ended, and most extensive laboratory investigation of bacterial evolution, a number of adaptive mutations have been identified that endow the bacterial strain with greater fitness compared to that of the ancestral strain in the particular growth medium. The goal of Lenski's research was not to analyze adaptive mutations in terms of gain or loss of function, as is the focus here, but rather to address other longstanding evolutionary questions. Nonetheless, all of the mutations identified to date can readily be classified as either modification-of-function or loss-of-FCT. (Michael J. Behe, "Experimental Evolution, Loss-of-Function Mutations and 'The First Rule of Adaptive Evolution'," Quarterly Review of Biology, Vol. 85(4) (December, 2010).) http://www.evolutionnews.org/2011/09/richard_lenskis_long_term_evol051051.html Rose-Colored Glasses: Lenski, Citrate, and BioLogos - Michael Behe - November 13, 2012 Excerpt: Readers of my posts know that I'm a big fan of Professor Richard Lenski, a microbiologist at Michigan State University and member of the National Academy of Sciences. For the past few decades he has been conducting the largest laboratory evolution experiment ever attempted. Growing E. coli in flasks continuously, he has been following evolutionary changes in the bacterium for over 50,000 generations (which is equivalent to roughly a million years for large animals). 
Although Lenski is decidedly not an intelligent design proponent, his work enables us to see what evolution actually does when it has the resources of a large number of organisms over a substantial number of generations. Rather than speculate, Lenski and his coworkers have observed the workings of mutation and selection.,,, In my own view, in retrospect, the most surprising aspect of the oxygen-tolerant citT mutation was that it proved so difficult to achieve. If, before Lenski's work was done, someone had sketched for me a cartoon of the original duplication that produced the metabolic change, I would have assumed that would be sufficient -- that a single step could achieve it. The fact that it was considerably more difficult than that goes to show that even skeptics like myself overestimate the power of the Darwinian mechanism. http://www.evolutionnews.org/2012/11/rose-colored_gl066361.html Genetic Entropy Confirmed (in Lenski's e-coli) - June 2011 Excerpt: No increases in adaptation or fitness were observed, and no explanation was offered for how neo-Darwinism could overcome the downward trend in fitness. http://crev.info/content/110605-genetic_entropy_confirmed Mutations : when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations) Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually. 
http://www2.cnrs.fr/en/1867.htm?theme1=7 The Mutational Meltdown in Asexual Populations - Lynch Excerpt: Loss of fitness due to the accumulation of deleterious mutations appears to be inevitable in small, obligately asexual populations, as these are incapable of reconstituting highly fit genotypes by recombination or back mutation. The cumulative buildup of such mutations is expected to lead to an eventual reduction in population size, and this facilitates the chance accumulation of future mutations. This synergistic interaction between population size reduction and mutation accumulation leads to an extinction process known as the mutational meltdown,,, http://www.oxfordjournals.org/our_journals/jhered/freepdf/84-339.pdf

bornagain77
April 5, 2014 at 8:11 PM PDT