
Haldane’s dilemma – what does science really say?


Recently, while reading a post by Professor Larry Moran over at his Sandwalk blog, I stumbled across a lively discussion of Haldane’s dilemma in the comments section. Not being a geneticist, I hadn’t really paid much attention to the dilemma, until now.

For those who are interested in following up the matter, I’m going to post a few links to relevant articles arguing that Haldane’s dilemma remains unsolved (with asterisks placed in front of what I think are the best ones), plus some of the best responses to the dilemma that I’ve seen by evolutionists, before throwing the discussion open to readers.

Articles arguing that Haldane’s dilemma is a real problem for evolution

A Dilemma for Haldane by PaV at Uncommon Descent (2013).
Walter ReMine on Haldane’s Dilemma by Bob Enyart (August 3, 2012).
*** Haldane’s dilemma – the trade secret of evolutionary genetics by Walter ReMine (August 21, 2007).
Haldane’s view of Haldane’s dilemma by Walter ReMine (August 2007).
*** Haldane’s dilemma. Article in CreationWiki.
More Precise Calculations of the Cost of Substitution by Walter ReMine (CRS Quarterly, Vol 43 No 2 pp 111-120 September 2006).
Cost theory and the cost of substitution — a clarification by Walter ReMine (TJ 19(1) 2005, pp. 113-125).
*** Haldane’s dilemma has not been solved by Dr. Don Batten at creation.com.
The Biotic Message: Evolution versus Message Theory by Walter James ReMine reviewed by Dr. Don Batten at creation.com.
Haldane’s dilemma by Laurence D. Smart.
*** Answering Evolutionist Attempts to Dismiss “Haldane’s Dilemma” by Fred Williams (October 2000; updated subsequently).

Articles arguing that Haldane’s dilemma is not a real problem for evolution

*** Haldane’s dilemma. Wikipedia article.
*** Haldane’s non-dilemma (July 1, 2007) and ReMine strikes back (January 27, 2008), by Ian Musgrave at Panda’s Thumb.
*** How fast can evolution work? by Mike Dunford (January 25, 2007)
*** Walter ReMine and Haldane’s Dilemma. Article in EvoWiki.
Claim CB121: Haldane’s dilemma by Mark Isaak at TalkOrigins, 2006.
*** Haldane’s Dilemma by Robert Williams.

Original Sources

  • Haldane, J.B.S., “The Cost of Natural Selection”, J. Genet. 55:511–524, 1957.
  • Van Valen, Leigh, “Haldane’s Dilemma, evolutionary rates, and heterosis”, Amer. Nat. 97:185–190, 1963.
  • Grant, Verne & Flake, Robert, “Solutions to the Cost-of-Selection Dilemma”, Proc. Natl. Acad. Sci. USA 71(10):3863–3865, Oct. 1974.
  • Nunney, Leonard, “The cost of natural selection revisited”, Ann. Zool. Fennici 40:185–194, 2003. (This paper describes computer simulations of small populations with variations in mutation rate and other factors, and produces results that differ dramatically from Haldane’s low substitution limit except in certain limited situations.)

The debate in a nutshell

Here’s an excerpt from ReMine’s August 2007 article:

What is Haldane’s Dilemma?

Briefly, Haldane’s Dilemma establishes a limit of 1,667 beneficial substitutions (where a substitution is almost always one nucleotide) over the past ten million years of the lineage leading to humans. The origin of all the uniquely human adaptations would have to be explained within that limit.(1) That is a serious problem.

The famous evolutionary geneticist, J.B.S. Haldane, showed that for higher vertebrates (species with low reproduction rates), the long-term rate of beneficial substitution cannot plausibly be faster than one substitution per 300 generations.(2)

Ten million years for the evolution of humans from some ape-humanoid ancestor. (This amount of time is a factor of two or three times the alleged date of the chimp-human split, though that date seems to be getting revised recently.)

A 20-year effective generation time. (This is approximately the age of parents when they give birth, averaged over all births that reach mid-parenthood.) My book documents that figure from several evolutionists.

According to evolutionists the substitutions are almost always a single nucleotide (called a point mutation).(3)

All the key data, assumptions, models, and calculations are taken from evolutionists, and put a limit on the rate of beneficial evolution. I call it the “Haldane limit,” or the “1,667 limit.”(4) Since my book came out, there has been no serious dispute that Haldane’s analysis (if correct) places a 1,667 limit on human evolution.

My book identifies many factors further reducing that figure by orders of magnitude.(5) In other words, Haldane’s estimate is overly optimistic in favor of evolution. Yet I focus on the 1,667 limit because it derives directly from Haldane – so evolutionists cannot evade the issue by blaming it on me.

All those matters were known to evolutionary geneticists in 1957 when Haldane published his argument. Yet despite it being interesting, important, and easy to communicate, they did not inform the public. No, there was no conspiracy. But it was a staggering bit of negligence. Haldane’s Dilemma is not just the problem itself, but also the evolutionists’ negligence for not communicating it to the public…

[Footnotes]

(1) Haldane’s Dilemma puts a limit on the rate of beneficial evolution. It does not limit the rate of neutral or harmful evolution, which can be far more rapid. However, my book also contributes a style of argument previously unheard of – a serious limit on the rate of expressed neutral substitutions. The argument involves something routinely left out of evolutionary discussions – error catastrophe. By seeing the connection between error catastrophe and plausible substitution rates, I was able to create a new type of argument.

(2) Haldane’s calculations included the possibility of many substitutions overlapping in time. His argument did not require single substitutions tacked end-to-end.

(3) Sometimes the ‘thing’ being substituted into the population might be larger than a nucleotide, such as: insertion, deletion, gene inversion, gene duplication, or the relative order of genes on a chromosome. Each of these would count as a substitution, and the argument puts a limit on the total number of substitutions.

(4) The math is easy: 1,667 substitutions = 10,000,000 / (20 * 300)
Substitutions = (Years) / [(Years / Generation) * (Generations / Substitution)]

(5) As discussed in my book, several factors could reduce the 1,667 limit significantly. For example, according to Eldredge and Gould’s evolutionary theory, punctuated equilibria, species are in stasis at least 99% of the time, and Gould claimed punc-eq applies to human evolution. According to Gould (in his last book, The Structure of Evolutionary Theory) genetic change would typically cease during stasis. If correct, this factor alone could reduce the Haldane limit by a factor of about 100, to a limit of 17 substitutions. I was the first to bring up this relationship between punc-eq and Haldane’s Dilemma. Evolutionists should have seen this relationship, but if they did, they did not publicize it.
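
(Side note for readers: ReMine’s footnote (4) can be retraced in a couple of lines of Python. The figures below are his quoted assumptions, not independent data.)

```python
# Retracing ReMine's footnote (4) under his own assumptions:
# 10 million years, a 20-year effective generation time, and
# Haldane's limit of one substitution per 300 generations.
years = 10_000_000
years_per_generation = 20
generations_per_substitution = 300

generations = years / years_per_generation                  # 500,000 generations
substitutions = generations / generations_per_substitution  # the "Haldane limit"

print(round(substitutions))  # 1667
```

This is just his formula Substitutions = Years / [(Years/Generation) × (Generations/Substitution)] restated; change any assumption and the limit scales accordingly.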

And here’s an excerpt from Musgrave’s July 2007 article:

What the real problem is: One of the consequences of Haldane’s calculation is that it sets an upper limit to the amount of allelic variation (heterozygosity) in the genome. Under Haldane’s assumptions, if different alleles of genes represent deleterious variants being selected against, too much variation means that the organism’s fitness falls below survivable levels. When the variation in the genomes of several organisms was measured, it was way above the limits that would be survivable if Haldane’s assumptions held. The problem is not that evolution is too slow; the problem is that it is much faster than Haldane’s limit.

Let’s restate that: the amount of measured variation in the genome meant that if Haldane’s assumptions were right, all vertebrates would be dead. So we know that Haldane was wrong. Exactly where he was wrong occupied many pages of journal articles in the 1960s and ’70s. Kimura (Kimura, 1968) used the heterozygosity problem to advance the neutral theory. In neutral theory, most mutations are neutral with respect to fitness, and neutral alleles are fixed by drift. Since the alleles have no effect on fitness, a very large number of allelic variants can be in the population without reducing its fitness, thus solving the heterozygosity problem.

Several others proposed selectionist explanations using assumptions different from Haldane’s that could drive more substitutions. The technical details need not concern us here; suffice it to say that there were a number of models that could exceed Haldane’s “speed limit” (soft selection, truncation selection and gene hitchhiking, for example, all of which have some experimental and observational evidence; see Ewens, 1969, Grant and Flake, 1974, Smith, 1968 and many others in the reference list). The discussions over Haldane’s dilemma rapidly got subsumed into the larger neutralist vs. adaptationist debate. In the end, the evidence came down on the side of the neutralists, and it is accepted that the majority of variation in genomes is due to neutral mutations [2].

How many beneficial mutations? While the majority of variation is neutral, the question remains exactly how much variation is due to selection, and does it break Haldane’s “speed limit”? Recent comparisons of the human and chimp genomes, using the macaque as an outgroup, have given us a good idea of how many genes have been fixed since the last common ancestor of chimps and humans (Bakewell, 2007).

154

Actually, that’s 154 of 13,888 genes. Given that we have around 22,000 genes [3] in our genome (http://www.ensembl.org/Homo_sapiens/index.html), then, if the same percentage of beneficial mutations holds for the rest of the genome, no more than 238 fixed beneficial mutations separate us from the last common ancestor of chimps and humans.

You are probably sitting there astonished that we are around 240 genes away from our last common ancestor with the chimp and saying “this can’t be right”[4] (how much did the guess you wrote down differ from the real thing?). However, this result agrees with previous estimates of the number of positively selected genes (Arbiza, 2006, Yu 2006). You can argue until the cows come home about whether you can get around Haldane’s assumptions using truncation selection, soft selection or whatever, the plain fact is that humans and the last common ancestor of humans and chimps are separated by far fewer fixed beneficial mutations than even Haldane’s limit allows.

Now, it’s likely that the above value is an underestimate and that some weakly selected genes have been missed, but it is in accord with previous studies using smaller gene sets (Arbiza, 2006, Yu 2006). Even if you say we missed half of the genes that underwent selection (very unlikely), the number of beneficial genes fixed by natural selection would be around 480, and the real number is certainly less (Arbiza, 2006).

The above study only covered protein coding genes, not regulatory sequences, and most biologists expect that changes in regulatory sequences played an important role in evolution. Getting at the number of beneficial mutations in regulatory genes that have been fixed by natural selection is a lot harder, but it seems like around 100 regulatory genes may have been selected (Donaldson & Gottgens 2006, Kehrer-Sawatzki & Cooper 2007). Again, even if we set the number of regulatory genes that have been selected as the same number as the most wildly optimistic estimate of protein coding genes fixed by natural selection, then we end up with 960 fixed beneficial mutations, below ReMine’s calculation of Haldane’s limit [5]. This means Haldane’s dilemma is irrelevant to human evolution.

Conclusion: Haldane’s dilemma has never been a problem for evolution, but the technical nature of the arguments involved made it difficult to clearly demonstrate anti-evolutionists’ misuse of the “dilemma”. Also, the difficulty of getting hold of the original papers meant that the distortion of Haldane’s work by anti-evolutionists was not obvious.

Now Walter ReMine’s claim that 1,667 beneficial mutations are too few to generate a philosopher-poet from the common ancestor of chimps and humans is shown to be trivially false by comparison of the human and chimp genomes. As this claim was the keystone of ReMine’s argument, Haldane’s dilemma should disappear as an anti-evolutionist claim.
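
(Side note for readers: Musgrave’s arithmetic in the excerpt above can be retraced under his stated assumptions. The gene counts are the figures he cites; the ~244 this yields, versus his published 238, presumably reflects a slightly different genome-wide gene-count estimate.)

```python
# Retracing Musgrave's arithmetic under his stated assumptions.
# Step 1: scale the Bakewell (2007) count of positively selected genes
# (154 of the 13,888 analysed) up to a genome of roughly 22,000 genes.
selected, analysed, genome_genes = 154, 13_888, 22_000
estimate = selected / analysed * genome_genes   # about 244; Musgrave reports 238

# Step 2: his deliberately generous worst case -- double the
# protein-coding figure (~480) and allow as many regulatory changes again.
worst_case_total = 480 + 480                    # = 960

# Either way, the total sits below ReMine's 1,667 "Haldane limit".
assert worst_case_total < 1_667
print(round(estimate), worst_case_total)
```

Even the padded worst-case tally of 960 is well under 1,667, which is the point of Musgrave’s argument.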

Excerpt from Wikipedia:

Haldane stated at the time of publication “I am quite aware that my conclusions will probably need drastic revision”, and subsequent corrected calculations found that the cost disappears. He had made an invalid simplifying assumption which negated his assumption of constant population size, and had also incorrectly assumed that two mutations would take twice as long to reach fixation as one, while sexual recombination means that two can be selected simultaneously so that both reach fixation more quickly.

Excerpt from a recent MedicalXpress article (Fate of new genes cannot be predicted, September 13, 2013):

…[T]he research team addressed the probability of fixation of a beneficial allele in the population, i.e., of having all individuals of the population carry the new allele. They repeated the competition experiments between the same two lines of C. elegans, but this time used a higher initial number of invading individuals, to mimic a population in which the beneficial allele was already established. The researchers observed that the adaptive value of each allele, i.e., whether it behaves as beneficial or deleterious, depended on its frequency in the population. If its frequency was higher than 5% (i.e., more than five individuals in a population of 100), the allele was perceived as deleterious and started to be eliminated by natural selection. But when the frequency was less than 5%, the allele was beneficial. The result of these complex dynamics is that genetic diversity could be maintained indefinitely, without one allele or the other ever being fixed in the population…

Henrique Teotónio adds: “To our knowledge, this is the first time anyone was able to directly test Haldane’s theory. We have proved it correct for the initial stages, when a new allele appears in a population. But our results show that further empirical work and more theoretical models are required to accurately predict the fate of that allele over long time spans”.

Well, that’s what I’ve dug up. I invite everyone to put in their two cents’ worth. Is Haldane’s dilemma a real one for evolutionists?

27 Replies to “Haldane’s dilemma – what does science really say?”

  1. bornagain77 says:

    Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – “Haldane’s Ratchet” – Christopher L. Rupe and John C. Sanford – 2013
    Excerpt: We then perform large-scale experiments to examine the feasibility of the ape-to-man scenario over a six million year period. We analyze neutral and beneficial fixations separately (realistic rates of deleterious mutations could not be studied in deep time due to extinction). Using realistic parameter settings we only observe a few hundred selection-induced beneficial fixations after 300,000 generations (6 million years). Even when using highly optimal parameter settings (i.e., favorable for fixation of beneficials), we only see a few thousand selection-induced fixations. This is significant because the ape-to-man scenario requires tens of millions of selective nucleotide substitutions in the human lineage.
    Our empirically-determined rates of beneficial fixation are in general agreement with the fixation rate estimates derived by Haldane and ReMine using their mathematical analyses. We have therefore independently demonstrated that the findings of Haldane and ReMine are for the most part correct, and that the fundamental evolutionary problem historically known as “Haldane’s Dilemma” is very real.
    Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon “Haldane’s Ratchet”.
    http://media.wix.com/ugd/a704d.....fa9c20.pdf

  2. scordova says:

    Haldane’s dilemma is real.

For all their complaints that creationists or Haldane made invalid assumptions, Darwinists never come up with the correct model or simulation themselves.

I’ll never forget when an evolutionist PhD candidate named Erik was arguing with Dr. Robert Carter (one of Walter ReMine’s associates) and me. He kept listing all the things that Mendel’s Accountant didn’t model, and Dr. Carter said, “No, you’re mistaken, we model that too!”

Then I asked Erik, “OK, so you complain the creationist model is flawed. What software simulation do you use to prove there isn’t any Haldane’s dilemma?” The look of bewilderment on Erik’s face said it all. He realized he didn’t have any work he could cite that refutes Haldane’s dilemma, not one computer simulation with all the parameters he was demanding, parameters which the creationists do model.

I told him, “See, you guys complain about invalid parameters, but you never specify or model what you think the valid parameters are and then follow the resulting inference.”

    Think I’m kidding? I posed the same question at UD, and you can see for yourself the paucity of answers:

Calling All Darwinists, Where is Your Best Population Genetics Simulation

Darwinists never give the right parameters; they’ll only insist that anti-Darwinian results must necessarily be wrong. That’s because they don’t want to come to terms with the truth: Darwinism is a failed hypothesis many times over.

    FWIW, a related thread:
    Most evolution is free of selection, therefore Darwinism must be false

  3. vjtorley says:

    Hi bornagain77,

    Thank you for your post. I was struck by this paragraph in the paper:

    The ape-to-man scenario requires the fixation of tens of millions of mutations within each lineage. Most such mutations would necessarily have been nearly-neutral in their effect, but none can be assumed to have been perfectly neutral. It is widely agreed that many such fixations would have been slightly deleterious. Yet to enable a net increase in fitness (i.e., allowing increased intelligence in the human lineage, etc.), and even to simply avoid extinction due to accumulating deleterious mutations, the large majority of these tens of millions of fixations would have had to have been beneficial. The scenario clearly demands over ten million beneficial fixations. Yet the fundamental problem of Haldane’s Dilemma only permits the selective fixation of hundreds, or at best, thousands of beneficial mutations in that six million year time period. The ape-to-man scenario falls short of the needed beneficial fixations by a factor of at least three orders of magnitude.

    I’m sure that Ian Musgrave would contest the first sentence I’ve highlighted. How do we know that ten million beneficial fixations would have been required?

  4. vjtorley says:

    Hi Sal,

    I find it very interesting that Nunney won’t release his software into the public domain, while Hey’s program contains a bug. Something doesn’t smell right.

    On the other hand, what about this hypothesis? Suppose that we have been going downhill genetically in terms of overall fitness, over millions of years, but that we’ve been degenerating very slowly, and from a “bug-free” initial state, which is why we haven’t died out yet. Is that possible?

    In my post, The Edge of Evolution, I cited a paper by Dr. Branko Kozulic, which said that every species of living thing has hundreds of unique proteins, each of which is like no other, and that every species of living thing has hundreds of unique genes, too. Dr. Kozulic proposes that the genes and proteins that are unique to a given species can be used to define that species. If he’s right, then by that token, we shouldn’t need millions of beneficial mutations to get us from an ape-like ancestor to a human being. A few hundred should be enough. Or is there something I’m missing?

  5. JGuy says:

The problem will be on steroids when it’s shown that the chimp and human genomes are only 70%-80% similar. Calling the difference junk gets refuted by ENCODE.

That’s 6-10 million years to account for 600,000,000 to 900,000,000 nucleotide differences.

    Fun times. B-)

  6. bornagain77 says:

Dr. Torley, they cite Britten 2002:

Excerpt: The actual functional difference between the chimp and human genomes is not a matter of just a few thousand nucleotides. Minimally, tens of millions of nucleotide substitutions are required (Britten, 2002).
    http://media.wix.com/ugd/a704d.....fa9c20.pdf

    One interesting observation is that the sequence divergence between chimp and human is quite large, in excess of 20% for a few regions. Some of the larger gaps are broken by regions within them that align with appropriate segments of the other species’ DNA sequence but only have distant similarity.
    http://www.pnas.org/content/99/21/13633.full

In the 12 years since Britten, the situation has only gotten far worse for Darwinists.

    Comprehensive Analysis of Chimpanzee and Human Chromosomes Reveals Average DNA Similarity of 70% – by Jeffrey P. Tomkins – February 20, 2013
Excerpt: For the chimp autosomes, the amount of optimally aligned DNA sequence provided similarities between 66 and 76%, depending on the chromosome. In general, the smaller and more gene-dense the chromosomes, the higher the DNA similarity—although there were several notable exceptions defying this trend. Only 69% of the chimpanzee X chromosome was similar to human and only 43% of the Y chromosome. Genome-wide, only 70% of the chimpanzee DNA was similar to human under the most optimal sequence-slice conditions. While chimpanzees and humans share many localized protein-coding regions of high similarity, the overall extreme discontinuity between the two genomes defies evolutionary timescales and dogmatic presuppositions about a common ancestor.
    http://www.answersingenesis.or.....chromosome

and that is not even counting the large differences being found in the ‘regulatory’ regions, which used to be considered junk by Darwinists and were therefore ignored by them:

    Darwinian Logic: The Latest on Chimp and Human DNA – Jonathan Wells – October 2011
Excerpt: Now a research team headed by John F. McDonald at Georgia Tech has published evidence that large segments of non-protein-coding DNA differ significantly between chimps and humans… If the striking similarities in protein-coding DNA point to the common ancestry of chimps and humans, why don’t dissimilarities in the much more abundant non-protein-coding DNA point to their separate origins?
    http://www.evolutionnews.org/2.....52291.html

    Junk No More: ENCODE Project Nature Paper Finds “Biochemical Functions for 80% of the Genome” – Casey Luskin – September 5, 2012
    Excerpt: The Discover Magazine article further explains that the rest of the 20% of the genome is likely to have function as well:
    “And what’s in the remaining 20 percent? Possibly not junk either, according to Ewan Birney, the project’s Lead Analysis Coordinator and self-described “cat-herder-in-chief”. He explains that ENCODE only (!) looked at 147 types of cells, and the human body has a few thousand. A given part of the genome might control a gene in one cell type, but not others. If every cell is included, functions may emerge for the phantom proportion. “It’s likely that 80 percent will go to 100 percent,” says Birney. “We don’t really have any large chunks of redundant DNA. This metaphor of junk isn’t that useful.””
We will have more to say about this blockbuster paper from ENCODE researchers in coming days, but for now, let’s simply observe that it provides a stunning vindication of the prediction of intelligent design that the genome will turn out to have mass functionality for so-called “junk” DNA. ENCODE researchers use words like “surprising” or “unprecedented.” They talk about how “human DNA is a lot more active than we expected.” But under an intelligent design paradigm, none of this is surprising. In fact, it is exactly what ID predicted.
    http://www.evolutionnews.org/2.....64001.html

    Scientists go deeper into DNA (Video report) (Junk No More) – Sept. 2012
    http://bcove.me/26vjjl5a

    Quote from preceding video:
    “It’s just been an incredible surprise for me. You say, ‘I bet it’s going to be complicated’, and then you are faced with it and you are like ‘My God, that is mind blowing.’”
    Ewan Birney – senior scientist – ENCODE

    ENCODE: Encyclopedia Of DNA Elements – video
    http://www.youtube.com/watch?v=Y3V2thsJ1Wc

    Quote from preceding video:
“It’s very hard to get over the density of information (in the genome)… The data says it’s like a jungle of stuff out there. There are things we thought we understood and yet it is much, much more complex. And then (there are) places of the genome we thought were completely silent and (yet) they’re (now found to be) teeming with life, teeming with things going on. We still really don’t understand that.”
    Ewan Birney – senior scientist – ENCODE

  7. Eric Anderson says:

    So, if I may summarize, the primary substantive evolutionist response to Haldane’s dilemma is the assertion that it doesn’t take much to get from a common ancestor to a chimp and a human, so there really isn’t much of a problem anyway.

    Could be.

    Presumably, though, one realizes that that is a whopper of an assumption, unproven, undemonstrated, and perilously close to circular reasoning.

    The problem is that we really have no idea what is actually required to build a human (or a chimp). So any discussion about getting from A to B is largely an exercise in speculation. It seems that no matter how many or how few changes are found between species A and species B, one can always fall back on “Well, there you have it. That is how many changes are needed to get from A to B.”*

    Of course that kind of analysis is utterly unhelpful in answering the real question.

    Probably the best we can do is try to break the problem down. Perhaps focus on, say, one specific morphological feature and try to determine the specific genes, proteins, and molecular machines involved in that limited, specific feature. At that point we might get a sense as to what is involved. Yes, we would still have to extrapolate to the whole organism, particularly with respect to less physical differences like mental ability and intelligence, but at least we could start making an assessment based on actual real-world engineering, rather than circular reasoning along the lines of “Well, that is how many changes there are, so that is what was required.”

    —–

    * The committed materialist has no problem with Haldane’s dilemma. Because such an individual already “knows,” via a priori philosophical fiat, that humans and chimps descended from a common ancestor via a series of accidental copying errors. So such an individual will never look at the data and question whether it explains the arrival of humans. Rather, they will look at the data and say, “OK, that is how many changes were required to get to humans.”

  8. Eric Anderson says:

    Sal, OT. I don’t know if you followed the wrap up on the other thread. I fixed my email address in my profile. Shoot me a note when you get a chance.

    Cheers,

  9. bornagain77 says:

    a few more notes Dr. Torley:

Darwinists simply don’t have any evidence for beneficial mutations (certainly not in the numbers that would be necessary to overcome the ‘Haldane’s Ratchet’ effect of deleterious mutations becoming fixed):

    Critic ignores reality of Genetic Entropy – Dr John Sanford – 7 March 2013
    Excerpt: Where are the beneficial mutations in man? It is very well documented that there are thousands of deleterious Mendelian mutations accumulating in the human gene pool, even though there is strong selection against such mutations. Yet such easily recognized deleterious mutations are just the tip of the iceberg. The vast majority of deleterious mutations will not display any clear phenotype at all. There is a very high rate of visible birth defects, all of which appear deleterious. Again, this is just the tip of the iceberg. Why are no beneficial birth anomalies being seen? This is not just a matter of identifying positive changes. If there are so many beneficial mutations happening in the human population, selection should very effectively amplify them. They should be popping up virtually everywhere. They should be much more common than genetic pathologies. Where are they? European adult lactose tolerance appears to be due to a broken lactase promoter [see Can’t drink milk? You’re ‘normal’! Ed.].
    African resistance to malaria is due to a broken hemoglobin protein [see Sickle-cell disease. Also, immunity of an estimated 20% of western Europeans to HIV infection is due to a broken chemokine receptor—see CCR5-delta32: a very beneficial mutation. Ed.] Beneficials happen, but generally they are loss-of-function mutations, and even then they are very rare!
    http://creation.com/genetic-entropy

The evidence for the detrimental nature of mutations in humans is simply overwhelming. Scientists have already catalogued 148,413 mutational disorders in humans.

    Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
    Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”

    I went to the mutation database website cited by John Avise and found:

HGMD®: 148,413 mutations total (as of March 2014)
    http://www.hgmd.org/

    Moreover, whenever a ‘coordinated mutation’ is required for a new function the waiting time for fixation by Darwinian processes explodes exponentially:

    Human Evolution: A Facebook Dialog – By Ann Gauger – Nov. 12, 2012
    Excerpt: PM:Is it also possible that the mechanism that you refer to in your video clip is not the only/main one at play?
    Biologic: The mechanism I refer to is based on the standard Darwinian model for evolution. Published population genetics estimates for how long it would take to make *and fix* a single base change to a DNA binding site in a 1 kb segment of DNA are prohibitively long—six million years. To get a second mutation in the same DNA binding site would take in excess of 200 million years.
Now to go from hominid to human requires many changes, most of them to gene expression patterns. It is much easier to change the DNA binding site than to change the transcription factor’s specificity. And all these mutations must work together and be beneficial to the evolving organism. The window of time available according to the fossil record and phylogenetic estimates is too short for known mechanisms to be sufficient. So do I think there are other things at play?
    Yes.
    http://www.biologicinstitute.o.....ialog?og=1

    Don’t Mess With ID (Overview of Behe’s ‘Edge’ and Durrett and Schmidt’s paper at the 20:00 minute mark) – Paul Giem – video
    http://www.youtube.com/watch?v=5JeYJ29-I7o

    Waiting Longer for Two Mutations – Michael J. Behe
    Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
    http://www.discovery.org/a/9461

    Science & Human Origins: Interview With Dr. Douglas Axe (podcast on the strict limits found for changing proteins to other very similar proteins) – July 2012
    http://intelligentdesign.podom.....3_53-07_00

    When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe
    Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied.
    http://www.biologicinstitute.o.....nt-collide

  10. 10
    bornagain77 says:

    And we now have evidence that many mutations for the new ORFan genes, which Dr. Kozulic wrote about, (and not counting regulatory regions) would have been required to be ‘coordinated’:

    (A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics – 2012
    Excerpt (Page 4): The Probabilities Get Worse
    This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search.
    http://powertochange.com/wp-co.....Myers_.pdf

    “Why Proteins Aren’t Easily Recombined, Part 2” – Ann Gauger – May 2012
    Excerpt: “So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required.”
    http://www.biologicinstitute.o.....ned-part-2

    Moreover, Darwinists have no evidence that supposed beneficial mutations can ‘coordinate’ so as to build up functional information over and above what is already present in life:

    Mutations : when benefits level off – June 2011 – (Lenski’s e-coli after 50,000 generations)
    Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
    http://www2.cnrs.fr/en/1867.htm?theme1=7

    New Research on Epistatic Interactions Shows “Overwhelmingly Negative” Fitness Costs and Limits to Evolution – Casey Luskin June 8, 2011
    Excerpt: In essence, these studies found that there is a fitness cost to becoming more fit. As mutations increase, bacteria faced barriers to the amount they could continue to evolve. If this kind of evidence doesn’t run counter to claims that neo-Darwinian evolution can evolve fundamentally new types of organisms and produce the astonishing diversity we observe in life, what does?
    http://www.evolutionnews.org/2.....47151.html

  11. 11
    Ho-De-Ho says:

    Pip pip everyone. This Haldane’s spot of bother is dashed interesting. I think I may have got the wrong end of the stick though, if somebody could put me straight.

    The tireless researcher BA77 (a living search engine, who deserves a medal if you ask me) reveals that there is only 69% similarity between chimp and human X chromosomes and 43% on the Y.

    Wouldn’t that mean that there are around 48 million differences in the X’s and 33 million in the Y’s?

    That would be 270 changes per 20-year generation just for those two chromosomes.
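    A quick sanity check of that arithmetic in Python. The chromosome sizes (roughly 155 Mb for X, 58 Mb for Y), the 6-million-year split, and the 20-year generation time are all assumptions for illustration, not figures from the comment:

```python
# Rough check of the comment's figures; sizes and dates are assumed.
X_SIZE = 155_000_000           # approximate human X chromosome, base pairs
Y_SIZE = 58_000_000            # approximate human Y chromosome, base pairs

x_diffs = (1 - 0.69) * X_SIZE  # 69% similarity -> 31% different
y_diffs = (1 - 0.43) * Y_SIZE  # 43% similarity -> 57% different

DIVERGENCE_YEARS = 6_000_000   # assumed human/chimp split
GENERATION_YEARS = 20
per_generation = (x_diffs + y_diffs) / DIVERGENCE_YEARS * GENERATION_YEARS

print(round(x_diffs / 1e6), round(y_diffs / 1e6), round(per_generation))
# -> 48 33 270, matching the ~48 million, ~33 million, and 270-per-generation figures
```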

    That’s an awful lot. Which is why I think I must have sliced my mental shot, so to speak.

  12. 12
    drc466 says:

    I don’t have any answers, but two things jump out at me from the evolutionists’ response:

    Let’s restate that: the amount of measured variation in the genome meant that if Haldane’s assumptions were right, all vertebrates would be dead.

    From a YEC perspective, this statement is very possibly correct. Given our understanding of how genetic load is affecting the human genome, it is quite possible that at current rates of mutation our survivability over millions of years is limited. In other words – we’re all doomed!

    Recent comparisons of Human and Chimp genomes, using the Macaque as an out group, have given us a good idea of how many genes have been fixed since the last common ancestor of chimps and humans (Bakewell, 2007): 154.

    Isn’t this a bit of a bait-and-switch, Dr. Torley? Haldane was talking nucleotides – they’re talking complete genes. This translates to millions of nucleotides, if I’m not mistaken. Basically, they steal a base by assuming (rather implausibly) that an entire gene gets substituted wholesale. As long as we’re scaling up so seamlessly, why not suggest that the entire DNA strand is a single substitution – we’ve just solved Haldane’s Dilemma!

  13. 13
    Ho-De-Ho says:

    drc466 that is a good spot about genes. According to this link:

    http://genetics.thetech.org/ab...../what-gene

    “Genes vary in size, from just a few thousand pairs of nucleotides (or “base pairs”) to over two million base pairs.”

    240 genes multiplied by a “few thousand nucleotide changes” to “two million” would put one in a bracket between 750,000 and 480,000,000 tweaks.
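    A quick check of that bracket. Taking “a few thousand” as 3,000 bp is my assumption (the comment’s 750,000 figure implies roughly 3,125 bp per gene), so the lower bound below is approximate:

```python
# Bracketing the estimate: 240 gene-scale substitutions, with gene sizes
# from "a few thousand" (taken here as 3,000 bp, an assumption) up to
# two million base pairs.
GENES = 240
low = GENES * 3_000        # 720,000 nucleotides, near the comment's 750,000
high = GENES * 2_000_000   # 480,000,000 nucleotides
print(low, high)  # -> 720000 480000000
```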

    I don’t know what the answer is, but I sympathize with that Haldane fellow. He must have scratched his head all night over that one.

  14. 14
    scordova says:

    Eric,

    I was considering starting a new website and I wanted to consult with you about it. Events have forced my hand to proceed since I got invited to provide a presentation to university students who’ll receive homework credit for attending my presentation on ID, Creation, and Christianity! Yay!

    The details are here about three separate websites. This post was pretty much what was in my e-mail.

    1 experimental forum, 2 creationist blogs

    Sorry for the off-topic, so I direct further comments on the matter to that thread.

    Sal

  15. 15
    Joe says:

    Evolutionists will tell you that most of the genetic change was not due to point mutations; rather it was larger changes like insertions and deletions, duplications and recombinations.

    Nick Matzke once told me:

    Indels happen through duplication or loss of big chunks of DNA (sometimes including genes, often not). Each of these can happen in a single step e.g. through unequal crossing over, where the chromosomes don’t line up perfectly.

    So in a single “mutation” you can get a “difference” of 100s or 1000s of DNA base pairs (even if the “difference” is just the fact that the mutant has, say, 3000 bases twice next to each other on the chromosome, and the nonmutant has those 3000 bases once).

    To take the rate of point mutation (the type of mutation from A to T or C to A or whatever) and then make conclusions using the difference statistics which include indels is wildly, hopelessly illegitimate. To then build upon this wild misunderstanding vicious insults and cussing at evolutionists is ridiculous.

    That said, no one knows how long it takes for a mutation to become fixed. Haldane said 300 generations, yet a fruit fly experiment ran over 600 generations (and still nothing became fixed).

    Truncation selection is a pipe-dream. And just how can neutral mutations explain the physiological and anatomical differences between chimps and humans? BTW, Kimura has never been verified, so forget about neutral mutation fixation outside of design or severe bottlenecks.

    However, Haldane’s dilemma is moot, but not for the reason Musgrave provided. Haldane’s dilemma is moot because no one knows if any amount of genetic change can turn a quadruped/knuckle-walker into an upright biped. And that is because no one knows what makes a chimp a chimp nor what makes a human a human. Evolutionists require all that to be in the genome. Yet there isn’t any evidence to support that claim, and there is evidence against it:

    To understand the challenge to the “superwatch” model by the erosion of the gene-centric view of nature, it is necessary to recall August Weismann’s seminal insight more than a century ago regarding the need for genetic determinants to specify organic form. As Weismann saw so clearly, in order to account for the unerring transmission through time with precise reduplication, for each generation of “complex contingent assemblages of matter” (superwatches), it is necessary to propose the existence of stable abstract genetic blueprints or programs in the genes- he called them “determinants”- sequestered safely in the germ plasm, away from the ever varying and destabilizing influences of the extra-genetic environment.

    Such carefully isolated determinants would theoretically be capable of reliably transmitting contingent order through time and specifying it reliably each generation. Thus, the modern “gene-centric” view of life was born, and with it the heroic twentieth century effort to identify Weismann’s determinants, supposed to be capable of reliably specifying in precise detail all the contingent order of the phenotype. Weismann was correct in this: the contingent view of form and indeed the entire mechanistic conception of life- the superwatch model- is critically dependent on showing that all or at least the vast majority of organic form is specified in precise detail in the genes.

    Yet by the late 1980s it was becoming obvious to most genetic researchers, including myself, since my own main research interest in the ‘80s and ‘90s was human genetics, that the heroic effort to find information specifying life’s order in the genes had failed. There was no longer the slightest justification for believing there exists anything in the genome remotely resembling a program capable of specifying in detail all the complex order of the phenotype. The emerging picture made it increasingly difficult to see genes as Weismann’s “unambiguous bearers of information” or view them as the sole source of the durability and stability of organic form. It is true that genes influence every aspect of development, but influencing something is not the same as determining it. Only a small fraction of all known genes, such as the developmental fate switching genes, can be imputed to have any sort of directing or controlling influence on form generation. From being “isolated directors” of a one-way game of life, genes are now considered to be interactive players in a dynamic two-way dance of almost unfathomable complexity, as described by Keller in The Century of The Gene- Michael Denton “An Anti-Darwinian Intellectual Journey”, Uncommon Dissent (2004), pages 171-2

    Oops, so sorry.

  16. 16
    scordova says:

    VJ,

    You said something doesn’t smell right. And it is even more obvious than that.

    Just consider we have 7 billion people on the planet. Ask an evolutionist how long it will take for a novel trait in one part of the human population to become infused into every person on the planet.

    When I pressed them to supply even ONE mutation they think might result in an overtake, the only candidate offered had to do with the ability or inability to digest milk, and even that one is dubious.

    That’s Haldane’s dilemma for all to see.

    Dr. Dawkins, can you name one mutation you expect to overtake the entire human population, and how many generations do you think it will take?

    Any answer? You don’t need fancy math or computer simulations to sense the difficulty.

    Sal

  17. 17
    scordova says:

    VJ,

    Consider the ubiquitous E. coli bacterium. Yes, there are regional variants, but how can Darwinism explain that so many regional variants have fixed characters, especially when the bacterium is global?

    Will ANY Darwinist be willing to predict that a single mutation will overtake the entire global E. coli population in 300 generations?

    A fundamental assumption in Darwinism is the assumption of “well stirred”. But for something like bacteria around the globe, the assumption of “well stirred” doesn’t hold, and thus maybe they have to assume the essentials of a given bacterium evolved, and stopped evolving, after exit from a warm little pond; otherwise the well-stirring problem rears its ugly head.

    That’s an intuitive example of something a little bit beyond Haldane’s dilemma, but which makes the dilemma believable.

  18. 18
    Joe says:

    Sal, That is the reason for the need for population isolations. No need for all 7 billion, just the isolated “tribe”. You also would have to try to limit competing beneficial traits, or perhaps combine them if they arise in both sexes.

    The best bet is a small population with a single selection pressure. Something like a virus that wipes out all but the fortunate that were naturally immune due to a variation.

  19. 19
    littlejohn says:

    What if an environmental cue, or some other stimulus, trips a cascade of genetic switches in the collective genomes of an entire population, thereby enabling beneficial new traits to become fixed within a few generations? If so, it seems it would take only a minimal number of genetic alterations to meet new ecological challenges or opportunities.

    Perhaps organisms come equipped with genomic tool kits capable of cultivating whole or partial genomic adaptive responses, accounting for an intrinsic ability to process a cascade of edits virtually simultaneously, providing life with extreme evolutionary potency?

    Perhaps the dilemma is simply another misunderstanding of how evolution was intended to operate?

  20. 20
    scordova says:

    Sal, That is the reason for the need for population isolations.

    In which case it is fixation because of isolation (accident) not nature selecting the fittest. That’s origin of “species” by accident, not via “natural selection”.

    See: Accidental Origins of Species

    In fact, if just one individual or one couple of the species is geographically isolated, you’ve “fixed” the genome, and it isn’t via selection.

    That’s the problem: one can have lots of “fixation” that has really nothing to do with selection. Accidents of isolation or natural disasters reducing a population can do a better job of fixing traits than selection.

    With small populations, it becomes difficult to tell if selection is the mechanism of evolution or just plain luck, as Salthe points out:

    As populations become smaller, sampling errors conspire to drive their gene pools apart statistically because deterministic forces (as selection is often imagined to be — but see below) cannot function as effectively in small populations. In these models we clearly see that selection is just a bias on randomness, and its effects weaken as the effectiveness of statistical predictive techniques weaken as a population’s size declines.

    Critique of Natural Selection

    Ah the irony! In small populations, selection can’t be proven as the mechanism. In large populations, where it might be proven that selection is the mechanism, selection might be prevented from working to begin with because of the problem of stirring; and worse, with large populations, Haldane’s dilemma becomes increasingly severe.
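    Salthe’s point, that in small populations drift alone can fix or eliminate alleles, is easy to illustrate with a toy Wright-Fisher simulation. This is my own sketch, not anything from Salthe; the population size and starting frequency are arbitrary:

```python
import random

def wright_fisher_fixation(pop_size, start_freq=0.5, seed=1):
    """Pure-drift Wright-Fisher model: a selectively neutral allele
    drifts until it is either fixed (freq 1.0) or lost (freq 0.0)."""
    rng = random.Random(seed)
    copies = int(start_freq * 2 * pop_size)  # allele copies among 2N gene slots
    generations = 0
    while 0 < copies < 2 * pop_size:
        p = copies / (2 * pop_size)
        # binomial resampling of the next generation's 2N gene copies
        copies = sum(1 for _ in range(2 * pop_size) if rng.random() < p)
        generations += 1
    return copies / (2 * pop_size), generations

freq, gens = wright_fisher_fixation(pop_size=50)
print(freq, gens)  # ends at 0.0 or 1.0: fixation or loss with no selection at all
```

Run it with different seeds and the allele sometimes fixes, sometimes vanishes; nothing in the model favors either outcome.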

    It must be pointed out that selection after-the-fact is not the same as selection before-the-fact. Selection for hearts is after-the-fact selection. It’s not the same as selection for traits that don’t yet exist in a fraction of the population or all of the population. Darwinists continually conflate the two concepts.

    Darwinism is constantly invoked because it is falsely believed it creates complexity over time. It doesn’t, and now we realize it can’t even be proven to be a mechanism of fixation versus simple accidents of isolation.

    traits that have been most important in the lives of organisms up to this moment will be least likely to be able to evolve further!

    So hearts are difficult to evolve because they are vital organs. The fact that hearts are needed for survival today doesn’t mean that selection evolved them (the kind of selection we see favoring hearts in humans today is after-the-fact selection, not before-the-fact selection). See:
    Selection before something exists is not the same as selection after something exists

    If hearts evolved, they would have to be non-vital hearts during evolutionary history before becoming vital organs.

    Darwinists seem not to comprehend simple logic. Organs necessary for survival in the present day could not have been selected for on the grounds that they were similarly necessary in the past, because if the organ was non-existent but vital back then, the species line would already be dead!

    And if that vital organ evolved from unnecessary precursors, it becomes questionable whether selection was involved to begin with, since there is no requirement that unnecessary precursors be subject to selection.

    In fact, observation suggests most intermediate precursors would be a liability, and hence selection would actually have to be absent for evolution to happen. As Gould famously asked, “what good is half a wing?”

    I pointed out the problem:
    Selection has to fail for evolution to work.

  21. 21
    wd400 says:

    Consider the ubiquitous E. coli bacterium. Yes, there are regional variants, but how can Darwinism explain that so many regional variants have fixed characters, especially when the bacterium is global?

    Why is this a problem? Population structure is measured by what is called the fixation index… fixed differences between populations are the expected result of population structure (especially for non-recombining species).

    Will ANY Darwinist be willing to predict that a single mutation will overtake the entire global E. coli population in 300 generations?

    In an hour and a half? No.

    A fundamental assumption in Darwinism is the assumption of “well stirred”. But for something like bacteria around the globe, the assumption of “well stirred” doesn’t hold, and thus maybe they have to assume the essentials of a given bacterium evolved, and stopped evolving, after exit from a warm little pond; otherwise the well-stirring problem rears its ugly head.

    Again, I don’t know why population structure is a problem. Extreme population structure (i.e. speciation) is required for evolutionary biology to work.

  22. 22
    scordova says:

    wd400,

    If I recall correctly, you are not a Darwinian even though you are an evolutionary biologist.

    Without well-stirring, it’s hard for natural selection to act on the whole population.

    If we then say isolation causes differences to become emphasized, it raises the question of how much natural selection is involved in the change of the isolated population:

    The question of specialization by an isolated population is:

    1. how much specialization due to natural selection
    2. how much due to random drift
    3. how much due to environmental response such as developmental plasticity (i.e. grasshoppers becoming locusts, which is not the result of random drift or natural selection but of developmental plasticity and environmental response; Mary Jane West-Eberhard thinks this sort of plasticity will even result in genetic changes, since she views the genes as also affected by the epigenome, so it is not necessarily a one-way street)

    The question of Haldane’s dilemma is how many traits, or collected groups of traits, or perhaps even individual nucleotides, can be fixed per generation. I don’t recall you’ve ever provided a figure, and I think you stated you tend to view most evolution as neutral (free of selection), so as far as I can tell, you’re not necessarily in disagreement with Haldane’s estimate.

    I think a basic academic question is:

    1. how many individual nucleotides can be selected for per generation

    2. how many individual amino acid substitutions can be selected for per generation

    3. how many complete genes can be selected for per generation

    4. how many linked traits can be selected for per generation

    etc.

    1/300th per generation seems slow to me, and that was basically the conclusion of Kimura and Ohta. Larry Moran loves Ohta’s work!
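    For concreteness, here is what that 1/300 figure implies over the conventionally assumed 6 million years since the human/chimp split. Both the timespan and the 20-year generation time are assumptions used for illustration:

```python
# What "one substitution per 300 generations" allows over the assumed
# 6 million years since the human/chimp split (20-year generations).
YEARS = 6_000_000
GENERATION_TIME = 20
COST_PER_SUBSTITUTION = 300   # generations per selective substitution, Haldane's figure

generations = YEARS // GENERATION_TIME            # total generations available
substitutions = generations // COST_PER_SUBSTITUTION
print(generations, substitutions)  # -> 300000 1000
```

So under those assumptions the selective budget is on the order of a thousand substitutions, which is why the figure strikes many as slow.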

    If you have a figure for the average number of traits per generation that can be fixed via selection, we’d be interested to hear it, but I’ve not gotten a straight answer from anyone. A straight answer would be something like:

    “1 selectable nucleotide every 10 generations”.

    I don’t get answers that straight; I get “Haldane was wrong”. If Haldane was wrong, then what is the right number?

    PS
    I don’t mean to single you out by asking for numbers. I’ve been just as blunt with my colleagues when I asked them questions like how many bits of CSI are in an object, or what is the amount of entropy in a design, expressed in Joules/Kelvin. It certainly doesn’t make me a winner in popularity contests.

  23. 23
    scordova says:

    VJTorley,

    Something very important: Larry Moran isn’t using natural selection as the mechanism behind the fixation rate for evolution; he is using NEUTRAL EVOLUTION.

    See:
    IF not rupe or Sanford would you believe Wiki

    Compare what I said with what Moran said

    says the rate of new mutations is the rate at which new mutations become features of every member of the population (a process called fixation).

    and that agrees with Moran

    This corresponds to a substitution rate (fixation) of 121 mutations per generation and that’s very close to the mutation rate as predicted by evolutionary theory.
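    That agreement is no accident of the data: under Kimura’s neutral theory the substitution (fixation) rate equals the mutation rate regardless of population size. A toy calculation makes this explicit; the 121 figure is just the number quoted above, used for illustration:

```python
# Toy version of Kimura's neutral-theory identity: 2N*mu neutral mutations
# arise per generation, each fixes with probability 1/(2N), so the
# long-run substitution (fixation) rate is mu, whatever the population size.
def neutral_substitution_rate(pop_size, mu):
    new_mutations = 2 * pop_size * mu        # neutral mutations entering the gene pool
    return new_mutations / (2 * pop_size)    # times fixation prob 1/(2N)

for n in (1_000, 1_000_000):
    print(neutral_substitution_rate(n, mu=121))  # -> 121.0 both times
```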

    The problem for Moran is that the 98% figure is bogus, and creationists should stop supporting it.

    See:
    Jeff Tomkins vs. Evolutionary Biologists who got laughed off stage

    The difference is possibly 70%, the 98% uses a dictionary trick (explained in the link).

  24. 24
    scordova says:

    VJ,

    The NEUTRAL EVOLUTION theory is actually mathematically sound, and is based in part on Haldane’s dilemma. The problem is it results in circularity.

    If we say the sequence identity between humans and chimps is 70%, that means about 1.2 billion bases differ. We can then “calculate” a mutation rate based on this assumption, a human/chimp divergence of 6 million years ago, a generation time of 20 years, and a factor of 1/2 to account for the fact that two species lines are mutating:

    (1,200,000,000 / 6,000,000 / 2) * 20 = 2,000 mutations per generation

    And then they’ll say, “hey look the fixation rate agrees with the mutation rate”. CIRCULAR REASONING.

    Yes, the fixation rate will agree with the mutation rate, because we used the fixation rate to calculate the mutation rate!

    Kimura’s formula is correct only if the differences between humans and chimps really did arise because they evolved from a common ancestor. It is falsified if the true mutation rate (as measured in real-time lab experiments) deviates from the rate Kimura’s theory predicts.
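    Making the back-calculation explicit shows where the circle closes. Every input below is an assumption (the divergence figure, the split date, the generation time), so the derived “mutation rate” cannot then independently confirm the fixation rate:

```python
# The back-calculation from comment 24, step by step. All inputs are
# assumptions, so the derived rate is a restatement of them, not evidence.
DIFFERING_BASES = 1_200_000_000   # assumed ~30% human/chimp difference
SPLIT_YEARS = 6_000_000           # assumed divergence time
LINEAGES = 2                      # both lines accumulate mutations
GENERATION_TIME = 20              # years per generation

per_lineage_per_year = DIFFERING_BASES / SPLIT_YEARS / LINEAGES
per_generation = per_lineage_per_year * GENERATION_TIME
print(per_generation)  # -> 2000.0
```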

    All that is moot because even 127 mutations per generation will lead to inevitable genetic deterioration, a problem I described at the bottom of a long essay I wrote:
    Death of the Fittest

    So:

    1. Moran essentially dumps Darwinism by invoking neutralism to explain the difference between humans and chimps, though he’s not explicit about it. I also know this is what he believes because he’s a big fan of Ohta and Kimura.

    2. even the fixation rate he gives I would agree with only inasmuch as it is a circularly reasoned rate! It cannot be falsified except by real-time observations such as the deep pedigree study I cited in:
    Nachman’s Paradox.

    So then what of the problem of design? Given that he loves “The Logic of Chance” by Koonin, it would appear he’s a multiverse advocate. 😯

    PS
    Tomkins uses divergences in intergenic regions and introns to arrive at the 70% figure. This would make for an interesting study, since many of the molecular clocks are tuned to only specific proteins. If introns and intergenic regions are included, we may see unresolvable contradictions in the molecular clocks, or possibly the problem will be too intractable to resolve.

  25. 25
    tjguy says:

    VJ, you asked this:

    On the other hand, what about this hypothesis? Suppose that we have been going downhill genetically in terms of overall fitness, over millions of years, but that we’ve been degenerating very slowly, and from a “bug-free” initial state, which is why we haven’t died out yet. Is that possible?

    Apart from the millions of years idea, this is what creationists would predict.

    What was it that made you consider this idea?

    Here is a problem that many people believe to be unsolved. Some evolutionists agree. But all they have to do to “solve” it is come up with what they claim is a “plausible” explanation. Others would see it as an ad hoc explanation. But whatever. If some big-name scientist says it is solved, everyone can quote him and ignore the devil in the details.

    Fortunately for them, their idea cannot be directly tested.

    It just goes to show the problem of bias and worldview in interpreting evidence.

  26. 26
    vjtorley says:

    Hi Tjguy,

    I got the idea from considering front-loading, coupled with the evidence that at least some of our DNA appears to be junk (even if it’s far less than what most evolutionists believe). It occurred to me that maybe the Creator designed the DNA of the first living thing and subsequently intervened to implement new irreducibly complex systems in various lineages of organisms, but without fixing the (mostly harmless) junk which accumulated over the course of time.

  27. 27
    vjtorley says:

    Hi Sal,

    Thanks very much for your helpful posts on neutral evolution during the past few days. Sorry for not responding sooner, but I’ve been very busy with lots of work. Thanks once again.

Leave a Reply