
What? Only an “extremely occasional” mutation is beneficial? But Darwinism… ?


From Jessica Hamzelou at New Scientist on a study in Iceland that shows that old fathers pass on more mutations than old mothers:

“If a sequence is not present in the parents but is present in the child, then it’s new,” says Stefánsson. They discovered that 80 per cent of new mutations come from the father, and that the number of mutations increases in line with the age of the parents.

These mutations won’t all be harmful. We’re all born with at least 70 new mutations, and most of these don’t affect the way our bodies and brains work. “The vast majority of mutations don’t matter,” says Leo Schalkwyk at the University of Essex. “There might be the occasional mutation that is deleterious, or the extremely occasional mutation that is beneficial.” More.

But Darwinism is natural selection acting on random mutations, creating the thousands of apparently irreducibly complex structures we see in life forms. Enforced in the academy and legislated for the schools. And probability is one of those subjects that, if you dare address it, marks you out as a doubtful person for even wondering.

Darwinism increasingly resembles a totalitarian political philosophy in that the official communications are rigidly controlled, but then someone inadvertently gives the game away. Better watch your back if you ever think of trying to do the math.

See also: “To what can science appeal if not evidence?” Rob Sheldon responds

58 Replies to “What? Only an “extremely occasional” mutation is beneficial? But Darwinism… ?”

  1.
    Mung says:

    The hidden secrets of Darwinism. You have to be at least level six in the Darwin Cult in order to be taught the mysteries of how an extremely occasional beneficial mutation happens at just the right time in just the right place in just the right organism who just happens to need a better eye.

    No doubt beneficial mutations were much more common in the far distant past.

  2.
    Tom Robbins says:

    MUNG – LOL!!!

  3.
    ET says:

    Not to mention that “beneficial” is a relative term and can include a loss of function, e.g. for those who don’t need better eyes.

  4.
    Florabama says:

    The fact that there is only the “extremely occasional” mutation that is beneficial absolutely falsifies Darwinism, and even if one doesn’t accept this clear reality, “extremely occasional” does nothing for evolution. Darwinism needs thousands, perhaps millions, of “extremely occasional” beneficial mutations to happen in a row across thousands of generations to build a new organ system. Like flipping a million heads in a row, Darwinism is fantasy.

  5.
    Gordon Davisson says:

    Why is this supposed to be a problem for “Darwinism”? A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

    And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual. Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit. If I’ve accounted for everything the overall rate of fixation of beneficial mutations per generation should be: (fraction of mutations that’re beneficial) * (fraction of beneficial mutations that aren’t wiped out by genetic drift) * (# of mutations per individual) * (population).
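    Gordon's back-of-envelope formula can be turned into arithmetic. The numbers below are purely illustrative assumptions chosen for the sketch (except the "at least 70 new mutations" figure from the article), not values from the Iceland study:

    ```python
    # A rough sketch of the rate estimate described above. All inputs except
    # mutations_per_individual are illustrative assumptions, not measured values.

    def fixations_per_generation(frac_beneficial, frac_surviving_drift,
                                 mutations_per_individual, population):
        """Expected beneficial mutations fixed per generation, population-wide."""
        return (frac_beneficial * frac_surviving_drift *
                mutations_per_individual * population)

    rate = fixations_per_generation(
        frac_beneficial=1e-6,         # assume 1 in a million mutations helps
        frac_surviving_drift=0.02,    # assume ~2% escape drift
        mutations_per_individual=70,  # "at least 70 new mutations" per birth
        population=1e6,               # assumed population size
    )
    print(rate)  # ≈ 1.4 beneficial fixations per generation
    ```

    The point of the sketch is only that the population-wide rate scales with population size, so a per-individual rarity need not make the population-wide rate tiny.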

    Florabama’s description is exactly wrong. Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection. You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter. (And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.)

  6.
    Mark from CO says:

    I’m new at this and not as knowledgeable as others who comment. Gordon Davisson’s comment is not clear to me.
    Isn’t it true that even if the step-wise mutations happened over the entire population, the mutations would have to happen at relatively the same time within the entities in which the mutations will combine? And don’t the mutations, let’s say just a two-step mutation, have to occur one in the male and one in the female? And won’t these two individuals have to produce an offspring for these two step-wise mutations to come together? If the population size is very small, I could see where this may be feasible. But assuming a population size of a million, equally divided between male and female, isn’t it just a one in 250,000,000,000 (500,000 x 500,000) chance that the two would be partners? Factoring in the time element and the probability of being in the same vicinity, isn’t even a two-step mutation virtually impossible?

    As I say, I’m new to this and not as knowledgeable as others. Please help me better understand the mathematics.

    Thank you!

  7.
    J-Mac says:

    ET@3
    I was going to write the same thing…

    Furthermore, those who no longer consider themselves “Darwinists”, for obvious reasons, such as Larry Moran, PZ. Myers insist on Neural Theory with the emphasis on random genetic drift. This is how they put it:

    “First thing you have to know: the revolution is over. Neutral and nearly neutral theory won. The neutral theory states that most of the variation found in evolutionary lineages is a product of random genetic drift. Nearly neutral theory is an expansion of that idea that basically says that even slightly advantageous or deleterious mutations will escape selection — they’ll be overwhelmed by effects dependent on population size. This does not in any way imply that selection is unimportant, but only that most molecular differences will not be a product of adaptive, selective changes.”

  8.
    kurx78 says:

    In Darwinian Wonderland you can get banana cupcakes from stones if you “give evolution enough time”

  9.
    ET says:

    Gordon Davisson- You don’t build a vision system by mutating areas that have nothing to do with vision systems. The “mutations in a row” is in response to Dawkins’ “cumulative selection” pipe-dream, which requires the “right” mutations to accumulate within the genetic system responsible for the adaptation under construction. Vision systems are built by the “right” mutations accumulating in the genetic system responsible for constructing them. But first that system had to be constructed from the accumulation of the “right” mutations.

    A developmental organization system just happened to evolve from a system that was doing just fine without one. But hey, on a planet that just happened to pump out living organisms it didn’t want nor need, that shouldn’t sound so surprising or counter-intuitive.

  10.
    Dionisio says:

    What roles do the beneficial mutations play in the evo-devo movie?

    Dev(d) = Dev(a) + Delta(a,d)

    Are they nominated for the (peer-reviewed) academic award?

    In the auto industry, in order to change a car model, one must change the process required to produce it.

    And cars are nothing compared to the simplest biological systems.

    Does the penny drop now?

    Ok, then y’all may continue your discussion.

    🙂

  11.
    gpuccio says:

    Gordon Davisson:

    You say:

    “And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.”

    Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS is to have any role in lowering the probabilistic barriers.

    That means that:

    1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

    If you can, I really admire your imagination.

    2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

    While that is simply impossible, because those “stepwise” mutations simply do not exist and never will exist, even if we imagine that they exist the process requires certainly a lot of time.

    Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all traces of the process itself.

    So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

    Then you say:

    “Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.”

    Absolutely! And it’s not a bit, it’s a lot.

    If you look at the classic paper about rugged landscape:

    http://journals.plos.org/ploso.....ne.0000096

    you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

    You say:

    “Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection.”

    Yes, but only if each individual mutation confers a strong enough reproductive advantage. That must be true for each single specific aminoacid position of each single new functional protein that appears in natural history. Do you really believe that? Do you really believe that each complex functional structure can be deconstructed into simple steps, each conferring reproductive advantage? Do you believe that we can pass from “word” source code to “excel” source code by single byte variations (yes, I am generous here, because a single aminoacid has at most about 4 bits of information, not 8), each of them giving a better software which can be sold better than the previous version?

    Maybe not even “credo quia absurdum” will suffice here. There are limits to the absurd that can be believed, after all!

    You say:

    “You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter.”

    No, the argument of IC, as stated by Behe, is about functions which require the cooperation of many individual complex proteins. That is very common in biology.

    The argument of functional complexity, instead, is about the necessity of having, in each single protein, all the functional information which is minimally necessary to give the function of the protein itself. How many AAs would that be, for example, for dynein? Or for the classic ATP synthase?

    Here, the single functional element is so complex that it requires hundreds of specific aminoacids to be of any utility. If that single functional element also requires to work with other complex single elements to give the desired function (which is also the rule in biology), then the FC of the system is multiplied. That is the argument of IC, as stated by Behe. The argument for FC in a single functional structure is similar, but it is directly derived from the concept of CSI as stated by Dembski (and others before and after him).

    And finally you say:

    “And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.”

    It’s not another matter. It’s simply a wrong matter.

    Both FC and IC are huge problems for any attempt to defend the neo-darwinian theory. I am not surprised at all that “evolutionists” dispute that, however. See Tertullian’s quote above! 🙂

  12.
    aarceng says:

    What is “extremely rare”? If it’s 1 in a billion with today’s population we should get 7 every generation. Even if 6 of those are lost we are left with 1 per generation.
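    aarceng's arithmetic, spelled out. The one-in-a-billion rate is his hypothetical, and note it counts one "chance" per person per generation rather than per mutation:

    ```python
    # aarceng's back-of-envelope: a hypothetical 1-in-a-billion beneficial rate
    # applied across today's population, one chance per person per generation.
    rate_beneficial = 1e-9   # hypothetical: 1 in a billion
    population = 7e9         # roughly today's human population
    per_generation = rate_beneficial * population
    print(per_generation)    # ≈ 7 beneficial mutations per generation
    # Counting per *mutation* instead (at least 70 new mutations per person,
    # per the article) would multiply this by ~70.
    ```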

  13.
    Dionisio says:

    gpuccio @10,

    Please, don’t ask difficult questions or present difficult problems. Be nicer to your interlocutors. Don’t get so serious. This is just about biology. 🙂

    Remember that two years ago a Canadian biochemistry professor stopped discussing with me because I didn’t ask honest questions. I subtly used the ‘tricky’ word ‘exactly’ that was not highlighted in bold characters. Mea culpa. 🙂

    My mistake granted my interlocutor the right to quit the incipient discussion. Note that my questions were very easy compared to yours.

    🙂

  14.
    Mung says:

    A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

    Except when it isn’t. You left that out.

    Improbable Destinies

  15.
    Mung says:

    Furthermore, those who no longer consider themselves “Darwinists”, for obvious reasons, such as Larry Moran, PZ. Myers insist on Neural Theory with the emphasis on random genetic drift.

    I always wondered what was wrong with PZ Myers and now I finally know. It’s his Neural Theory. 🙂

  16.
    Mung says:

    gpuccio:

    Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist…

    I love this convenient feature of evolution.

    The absence of evidence is evidence!

  17.
    gpuccio says:

    Dionisio:

    “Note that my questions were very easy compared to yours.”

    Yes, I am a very bad guy! 🙂

  18.
    J-Mac says:

    Gordon Davisson,

    Why is this supposed to be a problem for “Darwinism”? A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

    Yep! Fits beautifully into the evolutionary process of a few-pound land-walking rat into a 140-ton whale… Can’t argue with that! ;-)

  19.
    Florabama says:

    Extremely rare and occasionally beneficial doesn’t produce the dramatic change across higher taxa that must happen if Neo-Darwinism is real. Lenski’s bacteria have managed nothing in 60,000 generations of extreme selective pressure that wasn’t already present in the genome (citrate consumption), and the inability of NS to meet the claims of Darwinism has been acknowledged by honest evolutionists. That’s why this statement appears on the front page of The Third Way of Evolution: “Moreover, some Neo-Darwinists have elevated Natural Selection into a unique creative force that solves all the difficult evolutionary problems without a real empirical basis.” Without an empirical basis indeed. Neo-Darwinism has been falsified and even evolutionists are admitting as much.

  20.
    EugeneS says:

    Gpuccio

    I am sorry to be off-topic, but I have a question for you. Could you provide some context on the hype about 98% homology between chimp and human? I read somewhere that the 2% difference only accounts for protein-coding regions. Is that the case? If this is correct, then the difference is underestimated.

    As far as I know, this figure is only about substitutions and not about insertions or deletions. So it does not account for a lot of new function in humans. But that is a slightly different matter. My main interest is whether the original comparison was done for protein coding DNA only.

    With this figure of “just a few percent” difference in that context there is a lot that is conveniently untold. It is an underestimate in many respects.

    Many Thanks.

  21.
    Dionisio says:

    @13 error correction:

    gpuccio @11

  22.
    Dionisio says:

    EugeneS @20:

    Very interesting questions. Thank you!

  23.
    gpuccio says:

    EugeneS:

    Well, that is a serious issue indeed. I have not really delved deeply into it, but according to this paper:

    http://genome.cshlp.org/content/15/12/1746.long

    The difference between the two genomes, whole genomes, at least as measured in 2005, should be:

    About 1% difference in SNPs (single nucleotide mutations)

    About 3% for indels

    That is, about 4% total difference in raw nucleotide sequence.

    I think the difference is extremely low for the exome. Human and chimp proteins are really almost identical, in most cases.

    What does that mean?

    First of all, there are differences, especially in non coding DNA, and they could be important, probably are in many cases. But it is really difficult, at present, to understand their meaning.

    Even some of the small differences in proteins could have a functional role, but we must also allow for some neutral variation between the two genomes.

    The really amazing thing, IMO, is that the genomic differences seem really low if compared to the huge functional differences between us and chimps.

    Now, neo-darwinists seem to interpret that fact in the sense that there are really no big differences between us and chimps, or that small genomic differences can, by magic, generate huge differences in nervous system organization, and so on.

    Of course, they are free to believe these things, but I certainly beg to differ.

    For me, the striking similarity between the genomes is evidence of the simple fact that the differences cannot be really explained in terms of genomics only, or of genomics as we understand it today.

    IOWs, I believe that there is a huge functional difference between us and chimps, and therefore a huge informational difference. But, by far, we don’t understand where the information is recorded.

    But we will, I am certain of that. In time we will.

    For the moment, we could derive some inspiration from the opposite fact that, for example, C. elegans and C. briggsae present striking genomic differences, but are very much similar species under many aspects.

  24.
    EugeneS says:

    GP

    Thank you very much indeed. Here is what I have on the issue:

    http://academic.brooklyn.cuny......s/1836.pdf

    https://www.scientificamerican.com/article/tiny-genetic-differences-between-humans-and-other-primates-pervade-the-genome/

    http://sciencerefutesevolution.....tives.html

    The second link has a nice picture (which does say it portrays only protein-coding differences). However, it does not say that these are the only differences.

    The third one is very interesting in that it mentions epigenetic differences. Unfortunately it does not cite any academic sources.

    In Dembski’s and Well’s popular book on design it is pointed out that even 1% difference when it refers to code could be crucial. DNA is not a novel. In the case of two printed copies of a novel that are 1% different we could say they are practically identical, because we as readers can very easily identify those differences as typos. But in the case of DNA, we cannot say that. Using a computer analogy, a single typo in a bootloader file, for example, can be really fatal.

  25.
    Dionisio says:

    gpuccio,

    Thank you for the insightful comment @23.

    Could the following questions somehow relate -at least slightly- to your comment @23 in response to the comment @20 by EugeneS?

    Do different cell types within the human body have the same DNA (both coding and non-coding)?

    If they do, then why are they different?

    Is it because of epigenetic switches?

    Do different cell types have different epigenetic markers/ switches – different by position and/or type?

    Or do they all have the same epigenetic markers/switches (by position and type) but they get turned on/off differently?

    Thanks.

  26.
    Dionisio says:

    EugeneS,

    Very interesting issues you’ve brought up @20 & @24.

    I’m glad gpuccio got involved in this mini discussion.

    It feels strange to be in the middle of a friendly discussion between two doctors (one academic, another medical). I’m sure you both are gracious enough to not require a doctoral degree at least for this time. 🙂

  27.
    gpuccio says:

    Dionisio:

    1) Different cell types in the human body have the same DNA (genomic sequence), as a rule. There are, however, important exceptions:

    a) Gametes

    b) Immune cells, which undergo specific DNA rearrangements in a very limited portion of their DNA.

    Moreover, recent attention has been given to “intraindividual somatic variations”, that is, differences in DNA in cells from the same individual, even if the meaning and extent of those variations are still not well understood.

    2) However, even if the DNA sequence is roughly the same, the DNA state is certainly very different in different cell types, due to constant epigenetic influences that determine the type and state of each cell.

    3) In principle, epigenetic events should be determined by information in the genome, too. Or by epigenetic information pre-existent in the cell (for example, in the original gametes). Therefore, it remains vastly unclear why different cells undergo different epigenetic destinies. It seems to be a huge sum of cascades of events, controlled in some way by some basic information in the cell, but I am afraid that we really don’t understand anything of how the whole process is really controlled (and I suppose you will agree with me! 🙂 ).

    4) In principle, epigenetic markers that are in the genome sequence are the same in all cells. But, again, the state of those markers can be very different in different cells. Moreover, all other epigenetic events which are not connected to the genome sequence, or only to it, like DNA methylation, or histone post-translational modifications, are certainly different in different cells. But, in some way, those modifications must “know” to which DNA sequences they must be applied, and that should depend on the DNA sequence itself, or on its state.

  28.
    Dionisio says:

    gpuccio,

    “it remains vastly unclear why different cells undergo different epigenetic destinies. It seems to be a huge sum of cascades of events, controlled in some way by some basic information in the cell, but I am afraid that we really don’t understand anything of how the whole process is really controlled (and I suppose you will agree with me!).”

    Yes, I agree, but I wouldn’t say that we don’t understand anything, because I think we understand a little more than we did before. 🙂
    Besides, we know that the Darwinists understand it almost completely. And that’s encouraging. 🙂

    Thanks for the explanation.

    Here’s an old paper on alleged DNA differences within the same individual:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2992694/pdf/humu0031-1174.pdf

  29.
    gpuccio says:

    EugeneS:

    Well, I think that the papers you quote essentially agree on the rough percentage of difference. 4% total seems to be realistic, IMO.

    You say:

    “In Dembski’s and Well’s popular book on design it is pointed out that even 1% difference when it refers to code could be crucial.”

    That’s certainly true.

    You say:

    “DNA is not a novel. In the case of two printed copies of a novel that are 1% different we could say they are practically identical, because we as readers can very easily identify those differences as typos. But in the case of DNA, we cannot say that. Using a computer analogy, a single typo in a bootloader file, for example, can be really fatal.”

    True. It is certainly correct that even small differences, if they happen in the right places, can collapse a complex system.

    But the problem, IMO, is: can small differences generate entirely new functionalities?

    I am rather skeptical about that. Of course, small differences in key regulation sites could change a lot: for example, small differences at genetic level could make limbs bigger or smaller, or could change the color of the skin, and so on.

    But could small differences change limbs into wings, with all the detailed new information necessary for the transition? I really believe that it is not possible.

    Taking again your example of the novel, if you want to add a new character in the novel, and give him an important role in the plot, you cannot do that by small changes to a few words. You need to write in detail a lot of new parts in the novel.

    That is true in software too. If you want to add a new functionality, you need to add a lot of code. A collapse can be achieved with a few typos, but a new function needs a lot of new specific bits.

    Now, the problem is: do we really believe that the remarkable, infinitely complex and very efficient structure of the human brain, just to cite the most striking difference between humans and chimps, can be the result of just a small amount of different information, be it in regulation or elsewhere?

    I absolutely believe that this is not the case.

    I suppose that software programmers would be very happy to know that magic procedure by which you can go from an old operating system to a completely renovated one, with amazing new functionalities, greater speed and efficiency, completely new results. But I would not hold my breath, in their place. 🙂

  30.
    EugeneS says:

    Dionisio

    Of course not 🙂 Regardless of this discussion, sometimes, a doctorate creates an unnecessary cognitive bias which brings problems when one is trying to be objective 😉 So, no problem!

    GP

    Thank you very much again for your comment on epigenetics.

    I certainly agree with your remarks regarding functional info in general (that new functionality requires lots of new functional information injections). But can one be sure that the genome is everything we have in terms of code? Perhaps, not. It almost feels (especially considering things like brain organization and functioning) as if the genome must be complemented by something else… I do not know.

  31.
    Mung says:

    gpuccio:

    1) Different cell types in the human body have the same DNA (genomic sequence), as a rule. There are, however, important exceptions:

    a) Gametes

    Is the gamete exception something that applies to all sexually reproducing species or only to some sexually reproducing species?

    https://en.wikipedia.org/wiki/Gamete

  32.
    Dionisio says:

    gpuccio @29:

    “But could small differences change limbs into wings, with all the detailed new information necessary for the transition? I really believe that it is not possible.”

    I share your belief in that too.

    However, after seeing a number of papers lately, perhaps this is a fact.

  33.
    Dionisio says:

    gpuccio,

    “I suppose that software programmers would be very happy to know that magic procedure by which you can go from an old operating system to a completely renovated one, with amazing new functionalities, greater speed and efficiency, completely new results.”

    Well, my former employer would have fired my project leader (who had the main ideas behind the engineering design software we developed) and all the programmers who worked under his direction. But apparently my former employer didn’t read the memo about the magic Darwinian trick, because we had tons of work for years.

    🙂

  34.
    gpuccio says:

    Mung:

    “Is the gamete exception something that applies to all sexually reproducing species or only to some sexually reproducing species?”

    I think it is a general aspect of sexual reproduction, but I am not really sure. Meiotic recombination seems to be a powerful tool to remix existing information, although I think it is practically powerless to generate new relevant functional information.

  35.
    Gordon Davisson says:

    Hi, gpuccio. Sorry about my late reply (as usual, I’m afraid). Before I respond specifically to what you said, I need to make a general point: I still don’t see how the original claim, that beneficial mutations are rare, refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations. Note that all of these would be considered beneficial mutations:

    * Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
    * Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
    * Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).
    * Mutations that create new functional systems.
    * Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

    Your argument is (if I may oversimplify it a bit) essentially that the last two are vanishingly rare. But when we look at the overall rate of beneficial mutations, they’re mixed in with other sorts of beneficial mutations that’re completely irrelevant to what you’re talking about! Additionally, several types of mutations that’re critical in your argument but are not immediately beneficial aren’t going to be counted in the beneficial mutation rate:

    * Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
    * Mutations that produce new functional systems that don’t immediately contribute to fitness.

    Furthermore, one of the reasons the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal: it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

    Now, on to your actual argument:

    And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

    Yes, but any “beneficial” mutation that appears in one individual will have to expand to great part of the population, if NS has to have any role in lowering the probabilistic barriers.

    That means that:

    1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something!

    I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.
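    The relationship described here can be sketched numerically. The following is my own illustration (not from the thread), using the standard diffusion approximation for fixation probability (Kimura 1962): for a new mutant allele in a diploid population of size N, the fixation probability reduces to 1/(2N) for a neutral mutation and to roughly 2s for a slightly beneficial one.

```python
import math

def fixation_probability(s, N):
    """Approximate probability that a single new mutant allele
    (initial frequency 1/(2N)) eventually fixes in a diploid
    population of size N, given selection coefficient s.
    Standard diffusion approximation (Kimura 1962)."""
    if s == 0:
        return 1.0 / (2 * N)  # neutral case: pure drift
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 100_000
for s in (0.0, 0.0001, 0.001, 0.01):
    print(f"s = {s:<8} P(fixation) = {fixation_probability(s, N):.6g}")
```

    For N = 100,000, a mutation with s = 0.01 fixes with probability about 2%, one with s = 0.0001 with probability about 0.02%, and a neutral one with probability 0.0005%: roughly proportional to s, with no sharp threshold, as claimed.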

    Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

    If you can, I really admire your imagination.

    I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

    2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

    While that is simply impossible, because those “stepwise” mutations simply do not exist and never will exist, even if we imagine that they exist the process requires certainly a lot of time.

    This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2.....ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

    At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

    Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all tracks of the process itself.

    So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

    Except we sometimes do find such traces. In the case of atovaquone resistance, many of the intermediates were found in the wild. For another example, in https://uncommondescent.com/intelligent-design/double-debunking-glenn-williamson-on-human-chimp-dna-similarity-and-genes-unique-to-human-beings/, VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

    Then you say:

    Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

    Absolutely! And it’s not a bit, it’s a lot.

    If you look at the classic paper about rugged landscape:

    http://journals.plos.org/ploso…..ne.0000096

    you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

    That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

    The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination. Recombination among neutral or surviving entities may suppress negative mutations and thus escape from mutation-selection-drift balance. Although the importance of recombination or DNA shuffling has been suggested [30], we did not include such mechanisms for the sake of simplicity. However, the obtained landscape structure is unaffected by the involvement of recombination mutation although it may affect the speed of search in the sequence space.

    In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

    Or their model of the fitness landscape might not be completely accurate. I’m far from an expert on the subject, but from my read of the paper:

    * They measured how much infectivity (function) they got vs. population size (larger populations evolved higher infectivity before stagnating), fit their results to a theoretical model of the fitness landscape, and used that to extrapolate to the peak possible infectivity … which matched closely to that of the wild type. But their experimental results only measured relative infectivities between 0.0 and 0.52 (using a normalized logarithmic scale), and the extrapolation from 0.52 to 1.0 is purely theoretical. How well does reality match the theoretical model in the region they didn’t measure?

    * But it’s worse than that, because their measurements were made on one functional “mountain”, and the wild type appears to reside on a different mountain. Do both mountains have the same ruggedness and peak infectivity? They’re not only extrapolating from the base of a mountain to its peak, but from the base of one mountain to the peak of another. The fact that the infectivity of the wild type matches closely with their theoretical extrapolation of the peak is suggestive, but hardly solid evidence.

    So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

    Well, except that there are some conclusions available from the region of the landscape that they did make measurements on: between random sequences and partial function. They say:

    The landscape structure has a number of implications for initial functional evolution of proteins and for molecular evolutionary engineering. First, the smooth surface of the mountainous structure from the foot to at least a relative fitness of 0.4 means that it is possible for most random or primordial sequences to evolve with relative ease up to the middle region of the fitness landscape by adaptive walking with only single substitutions. In fact, in addition to infectivity, we have succeeded in evolving esterase activity from ten arbitrarily chosen initial random sequences [17]. Thus, the primordial functional evolution of proteins may have proceeded from a population with only a small degree of sequence diversity.

    This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can. And they also showed that (as with the atovaquone resistance example) evolution doesn’t require stepwise-beneficial paths either. They found that stepwise-beneficial paths existed up to a relative fitness of 0.4, but they experimentally achieved relative fitnesses up to 0.52! So even with the small populations and limited evolutionary mechanisms they used, they showed it was possible to evolve significantly past the limits of stepwise-beneficial paths.

    I don’t have to imagine this. They saw it happen.

  36. 36
    gpuccio says:

    Gordon Davisson:

    First of all, thank you for your detailed and interesting comments on what I wrote. You raise many important issues that deserve in-depth discussion.

    I will try to make my points in order, and I will split them in a few different posts:

    1) The relevance of the rate of “beneficial” mutations.

    You say:

    Before I comment specifically to what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations.

    I don’t agree. As you certainly know, the whole point of ID is to evaluate the probabilistic barriers that make it impossible for the proposed mechanism of RV + NS to generate new complex functional information. The proposed mechanism relies critically on NS to overcome those barriers, therefore it is critical to understand quantitatively how often RV occurs that can be naturally selected, expanded and fixed.

    Without NS, it is absolutely obvious that RV cannot generate anything of importance. Therefore, it is essential to understand and demonstrate how much of a role NS can have in modifying that obvious fact, and the rate of naturally selectable mutations (not of “beneficial” mutations, because a beneficial mutation which cannot be selected, because it does not confer a sufficient reproductive advantage, is of no use for the model) is of fundamental importance in the discussion.

    2) Types of “beneficial” mutations (part 1).

    You list 5 types of beneficial mutations. Let’s consider the first 3 types:

    Note that all of these would be considered beneficial mutations:

    * Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
    * Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
    * Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).

    Well, I would say that these three groups have two things in common:

    a) They are mutations which change the functional efficiency (or inefficiency) of a specific function that already exists (IOWs, no new function is generated).

    b) The change is a minor change (IOWs, it does not imply any new complex functional information).

    OK, I am happy to agree that, however common “beneficial” mutations may be, they almost always, if not always, are of this type. That’s what we call “microevolution”. It exists, and nobody has ever denied that. Simple antibiotic resistance has always been a very good example of it.

    Of course, while ID does not deny microevolution, ID theory definitely shows its limits. They are:

    a) As no new function is generated, this kind of variation can only tweak existing functions.

    b) While the changes are minor, they can accumulate, especially under very strong selective pressure, as in the case of antibiotic resistance (including malaria resistance). But gradual accumulation of this kind of tweaking takes a long time even under extremely strong pressure, requires a continuous tweaking pathway that does not always exist, and is limited, in any case, by how much the existing function can be optimized by simple stepwise mutations.

    I will say more about those points when I answer about malaria resistance and the rugged landscape experiment. I would already state here, however, that both of those scenarios, which you quote in your discussion, are of this kind; IOWs, they fall under one of these three definitions of “beneficial” mutations.

    3) Types of “beneficial” mutations (part 2).

    The last two types are, according to what you say:

    * Mutations that create new functional systems.
    * Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

    These are exactly those kinds of “beneficial” mutations that do not exist.

    Let’s say for the moment that we have no example at all of them.

    For the first type, are you suggesting that there are simple mutations that “create new functional systems”? Well, let’s add an important word:

    “create new complex functional systems”?

    That word is important, because, as you certainly know, the whole point of ID is not about function, but about complex function. Nobody has ever denied that simple function can arise by random variation.

    So, for this type, I insist: what examples do you have?

    You may say that even if you have no examples, it’s my burden to show that it is impossible.

    But that is wrong. You have to show not only that it is possible, but that it really happens and has real relevance to the problem we are discussing. We are making empirical science here, not philosophy. Only ideas supported by facts count. So, please, give the facts.

    I would say that there is absolutely no reason to believe that a “simple” variation can generate “new complex functional systems”. There is no example of that in any complex system. Can the change of a letter generate a new novel? Can the change of a byte generate a new complex piece of software, with new complex functions? Can a mutation of 1 – 2 aminoacids generate a new complex biological system?

    The answer is no, but if you believe differently, you are welcome: just give facts.

    In the last type of beneficial mutations, you hypothesize, if I understand you correctly, that a mutation can be part of the pathway to a new complex functional system, which still does not exist, but can be selected because it is otherwise beneficial.

    So, let’s apply that to the generation of a new functional protein, like ATP synthase. Let’s say the beta chain of it, which, as we all know, has hundreds of specific aminoacid positions, conserved from bacteria to humans (334 identities between E. coli and humans).

    Now, what you are saying is that we can in principle deconstruct those 334 AA values into a sequence of 334 single mutations, or if you prefer 167 two-AA mutations, each of which is selected not because the new protein is there and works, but because the intermediate state has some other selectable function?

    Well, I say that such an assumption is not reasonable at all. I see no logical reason why that should be possible. If you think differently, please give facts.

    I will say it again: the simple idea that new complex functions can be deconstructed into simple steps, each of them selectable for some unspecified reason, is pure imagination. If you have facts, please give them; otherwise the idea has no relevance in a scientific discussion.
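    For what it’s worth, the arithmetic behind the probabilistic-barrier claim can be made explicit. This is my own back-of-envelope sketch, not from the thread, and it rests on the simplifying assumption that each conserved site must independently hit one specific residue out of 20, which is precisely the kind of assumption the other side of this debate contests:

```python
import math

# Back-of-envelope search-space arithmetic for a protein with 334
# conserved amino-acid positions, assuming (simplistically) that each
# site must independently match one specific residue out of 20.
conserved_sites = 334
bits_per_site = math.log2(20)                  # ~4.32 bits per site
total_bits = conserved_sites * bits_per_site   # functional information
log10_hit = -conserved_sites * math.log10(20)  # log10 of hit probability

print(f"functional information: ~{total_bits:.0f} bits")
print(f"probability of a single random hit: ~10^{log10_hit:.0f}")
```

    Under those assumptions, 334 conserved positions correspond to roughly 1440 bits of functional information and a per-trial hit probability around 10^-435; the whole dispute in this thread is over whether selectable intermediates can break such a target into feasible steps.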

    More in next post.

  37. 37
    gpuccio says:

    Gordon Davisson (second part):

    4) Other types of mutation?

    You add two further variations in your list of mutations. Here they are:

    * Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
    * Mutations that produce new functional systems that don’t immediately contribute to fitness.

    I am not sure that I understand what you mean. If I understand correctly, you are saying that there are mutations which in the end will be useful, but for the moment are not.

    But, then, they cannot be selected as such. Do you realize what that means?

    It means that they can certainly occur, but they have exactly the same probability of occurring as any other mutation. Moreover, as they are not selected, they remain confined to the original individual or clone, unless they are fixed by genetic drift.

    But again, they have exactly the same probability as any other mutation to be fixed by genetic drift.

    That brings us to a very strong conclusion that is often overlooked by darwinists, especially the neutralists:

    Any mutation that does not have the power to be naturally selected is completely irrelevant in regard to the probabilistic barriers, because it has exactly the same probability as any other mutation of occurring or of being fixed by drift.

    IOWs, only mutations that can be naturally selected change the game in regard to the computation of the probabilistic barriers. Nothing else. All variation which cannot be naturally selected is irrelevant, because it is just a new random state, and is already considered when we compute the probabilities for a random search to get the target.

    5) Optimal proteins?

    You say:

    Furthermore, one of the reasons for the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

    OK, I can partially agree. The proteins as we see them now are certainly optimal in most cases. But they were apparently optimal just from the beginning.

    For example, our beloved ATP synthase beta chain already had most of its functional information in LUCA, according to what we can infer from homologies. And, as I have shown in my OPs about the evolution of information in vertebrates, millions of bits of new functional information have appeared at the start of the vertebrate branch, rather suddenly, and then remained the same for 400+ million years of natural history. So, I am not sure that the optimal state of protein sequences is any help for neo-darwinism.

    Moreover, I should remind you that protein coding genes are only a very small part of genomes. Non coding DNA, which according to darwinists is mostly useless, can certainly provide ample space for beneficial mutations to occur.

    But I will come back to that point in the further discussion.

    I would like to specify that my argument here is not to determine exactly how common beneficial mutations are in absolute terms, but rather to show that rare beneficial mutations are certainly a problem for neo-darwinism, a very big problem indeed, especially considering that (almost) all the examples we know of are examples of micro-evolution, and do not generate any new complex functional information.

    More in next post.

  38. 38
    gpuccio says:

    Gordon Davisson (third part):

    5) The threshold for selectability.

    You say:

    I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.

    I don’t think we disagree here. Let’s say that very low reproductive advantages will not be empirically relevant, because they will not significantly raise the probability of fixation above the generic one from genetic drift.

    On the other hand, even if there is a higher probability of fixation, the lower it is, the lower will be the effect on probabilistic barriers. Therefore, only a significant reproductive advantage will really lower the probabilistic barriers in a relevant way.

    6) The argument from incredulity.

    You say:

    I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

    I really don’t understand this misuse of the “argument from incredulity” objection (you are, of course, not the only one to use it improperly).

    The scenario is very simple: in science, I definitely am incredulous of any explanation which is not reasonable, has no explanatory power, and especially is not supported by any fact.

    This is what science is. I am not a skeptic (I definitely hate that word), but I am not a credulous person who believes in things only because others believe in them.

    You can state any possible theory in science. Some of them will be logically inconsistent, and those we can reject from the start. But others will be logically possible, yet unsupported by observed facts and by sound reasoning. We have the right and the duty to ignore those theories as devoid of any true scientific interest.

    This is healthy incredulity. The opposite of blind faith.

    I will discuss the rugged landscape issue in detail, later.

    More in next post.

  39. 39
    gpuccio says:

    Gordon Davisson (fourth part):

    7) Malaria resistance.

    In the end, the only facts you provide in favour of the neo-darwinist scenario are those about malaria resistance and the rugged landscape experiment. I will deal with the first here, and with the second in next post.

    You say:

    This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

    At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

    Now, let’s clarify. In brief, my point is that malaria resistance, like simple antibiotic resistance in general, is one of the few known cases of microevolution.

    As I have already argued in my post #36, microevolutionary events are characterized by the following:

    a) No new function is generated, but only a tweaking of some existing function.

    b) The changes are minor. Even if more than one mutation accumulates, the total functional information added is always small.

    I will discuss those two points for malaria resistance in the next point, but I want to clarify immediately that you are misreading what I wrote when you say:

    “This is simply wrong.”

    Indeed, you quote my point 2) from post #11:

    “2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.”

    But you don’t quote the premise, in point 1:

    “1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

    I have emphasized the relevant part, that you seem to have ignored. Point 2 is referring to that scenario.

    It is rather clear that I am speaking of the generation of new complex functional information, and I even give an example, dynein.

    So, I am not saying that no beneficial mutation can be selected, or that when that happens, like in microevolution, we cannot find the intermediate states.

    What I am saying is that such a model cannot be applied to the generation of new complex functional information, like dynein, because it is impossible to deconstruct a new complex functional unit into simple steps, each of them naturally selectable, while the new protein still does not even exist.

    So, what I say is not wrong at all, and my challenge to imagine such a pathway for dynein, or for ATP synthase beta chain, or for any of the complex functional proteins that appear in the course of natural history, or to find intermediates of that pathway, remains valid.

    But let’s go to malaria.

    I have read the Moran page, and I am not sure of your interpretation that 7 mutations (4 + 3) are necessary to give the resistance. Indeed, Moran says:

    “It takes at least four sequential steps with one mutation becoming established in the population before another one occurs.”

    But the point here is not if 4 or 7 mutations are needed. The point is that this is a clear example of microevolution, although probably one of the most complex that have been observed.

    Indeed:

    a) There is no generation of a new complex function. Indeed, there is no generation of a new function at all, unless you consider becoming resistant to an antibiotic, because a gene loses the ability to take up the antibiotic, a new “function”. Of course, we can define function as we like, but the simple fact is that here there is a useful loss of function, what Behe calls “burning the bridges to prevent the enemy from coming in”.

    b) Whatever our definition of function, the change here is small. It is small if it amounts to 4 AAs (16 bits at most), and it is small if it amounts to 7 aminoacids (28 bits at most).

    OK, I understand that Behe puts the edge at two AAs in his book. Axe speaks of 4, from another point of view.

    Whatever. The edge is certainly thereabout.

    When I have proposed a threshold of functional complexity to infer design for biological objects, I have proposed 120 bits. That’s about 35 AAs.

    Again, we must remember that all known microevolutionary events have in common a very favourable context which makes optimization easier:

    a) They happen in rapidly reproducing populations.

    b) They happen under extreme environmental pressure (the antibiotic).

    c) The function is already present and it can be gradually optimized (or, like in the case of resistance, lost).

    d) Only a few bits of informational change are enough to optimize or lose the function.

    None of that applies to the generation of new complex functional information, where the function does not exist, the changes are informationally huge, and environmental pressure is reasonably much less than reproducing under the effect of a powerful antibiotic.

    8) VJ’s point:

    You say:

    VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

    It’s funny that you quote a point that I consider a very strong argument for ID.

    First of all, VJ’s arguments are a refutation of some statements by Cornelius Hunter, with whom I often disagree.

    Second, I am not sure that ZNF843 is a good example, because I BLASTed the human protein and found some protein homologs in primates, with high homology.

    Third, there are however a few known human proteins which have no protein counterpart in other primates, as VJ correctly states. These seem to have very good counterparts in non coding DNA of primates.

    So, if we accept these proteins as real and functional (unfortunately not much is known about them, as far as I know), then what seems to happen is that:

    a) The sequence appears in some way in primates as a non coding sequence. That means that no NS can act on the sequence as a protein-coding sequence.

    b) In some way, the sequence acquires a transcription start in humans, and becomes an ORF. So the protein appears for the first time in humans and, if we accept the initial assumption, it is functional.

    Well, if that kind of process is confirmed, it will be very strong evidence of design. The sequence is prepared in primates, where it seems to have no function at all, and is activated in humans, when needed.

    The origin of functional proteins from non coding DNA, which has been gaining recognition in recent years, is definitive evidence of design. NS cannot operate on non coding sequences, least of all make them into good protein coding genes. So the darwinian mechanism is out, in this case.

    More in next post.

  40. 40
  41. 41
    gpuccio says:

    Gordon Davisson (fifth part):

    9) The rugged landscape experiment

    OK, this is probably the most interesting part.

    For the convenience of anyone who may be reading this, I give the link to the paper:

    http://journals.plos.org/ploso.....=printable

    First of all, I think we can assume, for the following discussion, that the wild-type version of the protein they studied is probably optimal, as you suggested yourself. In any case, it is certainly the most functional version of the protein that we know of.

    Now, let’s try to understand what this protein is, and how the experiment was carried out.

    The protein is:

    G3P_BPFD (P03661).

    Length: 424 AAs.

    Function (from UniProt):

    “Plays essential roles both in the penetration of the viral genome into the bacterial host via pilus retraction and in the extrusion process. During the initial step of infection, G3P mediates adsorption of the phage to its primary receptor, the tip of host F-pilus. Subsequent interaction with the host entry receptor tolA induces penetration of the viral DNA into the host cytoplasm. In the extrusion process, G3P mediates the release of the membrane-anchored virion from the cell via its C-terminal domain”

    I quote from the paper:

    Infection of Escherichia coli by the coliphage fd is mediated by the minor coat protein g3p [21,22], which consists of three distinct domains connected via flexible glycine-rich linker sequences [22]. One of the three domains, D2, located between the N-terminal D1 and C-terminal D3 domains, functions in the absorption of g3p to the tip of the host F-pilus at the initial stage of the infection process [21,22]. We produced a defective phage, ‘‘fdRP,’’ by replacing the D2 domain of the fd-tet phage with a soluble random polypeptide, ‘‘RP3-42,’’ consisting of 139 amino acids [23].

    So, just to be clear:

    1) The whole protein is involved in infectivity

    2) Only the central domain has been replaced by a random sequence

    So, what happens?

    From the paper:

    The initial defective phage fd-RP showed little infectivity, indicating that the random polypeptide RP3-42 contributes little to infectivity.

    Now, infectivity (fitness) was measured on a logarithmic scale, in particular as:

    W = ln(CFU) (CFU = colony forming units/ml)

    As we can see in Fig. 2, the fitness of the mutated phage (fd-RP) is 5, that is:

    CFU = about 148 (e^5)

    Now, always from Fig. 2, we can see that the fitness of the wildtype protein is about 22.3, that is:

    CFU = about 4.8 billion

    So, the random replacement of the D2 domain certainly reduces infectivity a lot, and it is perfectly correct to say that the fd-RP phage “showed little infectivity”.

    Indeed, infectivity has been reduced by a factor of about 32 million!

    But still, it is there: the phage is still infective.
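    As a quick sanity check on the log scale used here (a sketch; the W values are the approximate readings of the paper’s Fig. 2 quoted above, with the wild-type W back-computed from the ~4.8 billion CFU figure):

```python
import math

# W = ln(CFU), so CFU = e^W.  W values are approximate Fig. 2 readings.
def cfu(W):
    return math.exp(W)

print(round(cfu(5.0)))              # fd-RP (randomized D2): ~148 CFU/ml
print(round(cfu(22.3)))             # wild type: ~4.8 billion CFU/ml
print(round(cfu(22.3) / cfu(5.0)))  # reduction factor: ~32.6 million
```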

    What has happened is that by replacing part of the g3p protein with random sequences, we have “damaged” the protein, but not to the point of completely erasing its function. The protein is still there, and in some way it can still work, even with the heavy damage/deformation induced by our replacement.

    IOWs, the experiment is about retrieving an existing function which has been artificially reduced, but not erased. No new function is generated, but an existing reduced function is tweaked to retrieve as much as possible of its original functionality.

    This is an important point, because the experiment is indeed one of the best contexts to measure the power of RM + NS in the most favorable conditions:

    a) The function is already there.

    b) Only part of the protein has been altered

    c) Phages are obviously a very good substrate for NS

    d) The environmental pressure is huge and directly linked to reproductive success (a phage which loses infectivity cannot simply reproduce).

    IOWs, we are in a context where NS should really operate at its best.

    Now, what happens?

    OK, some infectivity is retrieved by RM. How much?

    At the maximum of success, and using the most numerous library of mutations, the retrieved infectivity is about 14.7 (see again Fig. 2). Then the adaptive walk stops.

    Now, that is a good result, and the authors are certainly proud of it, but please don’t be fooled by the logarithmic scale.

    An infectivity of 14.7 corresponds to:

    about 2.4 million CFU

    So, we have an increase of:

    about 17,000 times, as stated by the authors.

    But, as stated by the authors, fitness would still have to increase by a factor of about 2000 (a further 7.6 on the W scale) to reach the functionality of the wild type. That means passing from:

    2.4 million CFU

    to

    4.8 billion CFU

    So, even if some good infectivity has been retrieved, we are still 2000 times lower than the value in the wild type!

    And that’s the best they could achieve.
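    The same log-scale arithmetic, as a sketch, for the two fold-changes just quoted (differences in W = ln(CFU) correspond to fold changes in CFU; values again read off Fig. 2):

```python
import math

# Differences on the W = ln(CFU) scale are log fold-changes.
W_start, W_final, W_wild = 5.0, 14.7, 22.3  # approximate Fig. 2 readings

gain = math.exp(W_final - W_start)  # improvement achieved by the walk
gap = math.exp(W_wild - W_final)    # remaining shortfall vs the wild type

print(f"improvement: ~{gain:,.0f}x")   # roughly the "about 17,000 times"
print(f"remaining gap: ~{gap:,.0f}x")  # roughly the "2000 times" shortfall
```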

    Now, why that limit?

    The authors explain that the main reason for that is the rugged landscape of protein function. That means that RM and NS achieve some good tweaking of the function, but starting from different local optima in the landscape, and those local optima can go only that far.
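    A toy sketch (my own illustration, not the paper’s model) of why a strictly uphill adaptive walk stalls on a rugged landscape: if many independent local peaks exist, a greedy walk halts at whichever peak is nearest, which need not be the global one.

```python
import random

# Toy model: every 10-bit genotype gets an independent random fitness,
# i.e. a maximally rugged landscape.  A greedy adaptive walk (accept the
# best single-mutation neighbor while it improves) must stop at a local
# optimum.
random.seed(1)
L = 10
fitness = {g: random.random() for g in range(2 ** L)}

def neighbors(g):
    """Genotypes one point mutation away."""
    return [g ^ (1 << k) for k in range(L)]

def adaptive_walk(g):
    """Greedy hill climb; returns the local optimum it gets stuck on."""
    while True:
        best = max(neighbors(g), key=fitness.get)
        if fitness[best] <= fitness[g]:
            return g
        g = best

local_opt = adaptive_walk(0)
global_peak = max(fitness, key=fitness.get)
# The walk always ends at a local optimum; on a rugged landscape this is
# usually not the global peak (the "different mountains" of the paper).
print(fitness[local_opt] <= fitness[global_peak])  # True
```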

    The local optimum corresponding to the wildtype has never been found. See the paper:

    “The sequence selected finally at the 20th generation has ~W = 0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains”

    The authors conclude that:

    “The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination”.

    Now, having tried to describe in some detail the experiment itself, I will address your comments in next post.

  42. 42
    Mung says:

    First of all, I think we can assume, for the following discussion, that the wild-type version of the protein they studied is probably optimal, as you suggested yourself.

    A miracle.

    Natural selection is not supposed to give optimal outcomes.

  43. 43
    gpuccio says:

    Mung:

    “A miracle.

    Natural selection is not supposed to give optimal outcomes.”

    Don’t be so pessimistic!

    Give it enough time in the multiverse, and you will see… 🙂

  44. 44
    Mung says:

    > And that’s the best they could achieve.

    So they just disproved that the optimal protein could happen by RM+NS.

  45. 45
    Mung says:

    NS finds optimal solutions to problems posed by the environment. Except when it doesn’t. It’s a great theory.

  46. 46
    gpuccio says:

    Gordon Davisson (sixth part):

    10) Your comments about the rugged landscape paper

    You say:

    That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

    But it is exactly what they say!

    Let’s see what I wrote:

    “you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS.”

    (emphasis added)

    Now let’s see what they said:

    “By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

    (I have kept your emphasis).

    So, the point is that, according to the authors, a library of 10^70 sequences would be necessary to find the wildtype by random substitutions only (plus, I suppose, NS).

    That’s exactly what I said. Therefore, your comment, that “That’s not exactly what they say” is simply wrong.

    Let’s clarify better: 10^70 is a probabilistic resource that is beyond the reach not only of our brilliant researchers, but of nature itself!
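    Just to make the scale concrete (a back-of-the-envelope sketch; the ~10^30 figure for prokaryotic cells on Earth is a commonly cited order-of-magnitude estimate, not from the paper):

```python
import math

library = 1e70         # library size the authors estimate would be needed
cells_on_earth = 1e30  # rough order-of-magnitude estimate (assumption)

print(f"10^70 is about 2^{math.log2(library):.0f}")  # ~233 bits of search
print(f"factor beyond all Earth's prokaryotes: 10^{math.log10(library / cells_on_earth):.0f}")
```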

    It seems that your point is that they also add that, given that “such a huge search is impractical” (what a politically correct adjective here! 🙂 ), that should:

    “imply that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

    which is the part you emphasized.

    As if I had purposefully left out such a clarifying statement!

    Well, of course I have purposefully left out such a clarifying statement, but not because I was quote-mining, but simply because it is really pitiful and irrelevant. Let’s say that I wanted to be courteous to the authors, who have written a very good paper, with honest conclusions, and only in the end had to pay some minimal tribute to the official ideology.

    You see, when you write a paper, and draw the conclusions, you are taking responsibilities: you have to be honest, and to state only what can be reasonably derived from the facts you have given.

    And indeed the authors do that! They correctly draw the strong conclusion that, according to their data, RM + NS only cannot find the wildtype in their experiment (IOWs, the real, optimal function), unless we can provide a starting library of 10^70 sequences, which, as said, is beyond the reach of nature itself, at least on our planet. IOWs, let’s say that it would be “impractical”. 🙂

    OK, that’s the correct conclusion according to their data. They should have stopped here.

    But no, they cannot simply do that! So they add that such a result:

    implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

    Well, what is that statement? Just an act of blind faith in neo-darwinism, which must be true even when facts falsify it.

    Is it a conclusion derived in any way from the data they presented?

    Absolutely not! There is nothing in their data that suggests such a conclusion. They did not test recombination, or other mechanisms, and therefore they can say absolutely nothing about what those can or cannot do. Moreover, they don’t even offer any real support from the literature for that statement. They just quote one single paper, saying that “the importance of recombination or DNA shuffling has been suggested”. And yet they go well beyond a suggestion: they say that their “conclusion” is implied, IOWs logically necessary.

    What a pity! What a betrayal of scientific attitude.

    If they really needed to pay homage to the dogma, they could have just said something like “it could be possible, perhaps, that recombination helps”. But “imply”? Wow!

    But I must say that you too take some serious responsibility in debating that point. Indeed, you say:

    In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

    Well, they didn’t use “a simplified model of evolution”. They tested the official model: RM + NS. And it failed!

    Since it failed, they must offer some escape. Of course, some imaginary escape, completely unsupported by any facts.

    But the failure of RM + NS, that is supported by facts, definitely!

    I would add that I cannot see how one can think that recombination can work any miracle here: after all, the authors themselves have said that the local optimum of the wildtype has not been found. The problem here is how to find it. Why should recombination of existing sequences, which share no homology with the wildtype, help at all in finding the wildtype? Mysteries of blind faith.

    And have the authors, or anyone else, made new experiments that show how recombination can solve the limit they found? Not that I know. If you are aware of that, let me know.

    Then you say:

    Or their model of the fitness landscape might not be completely accurate.

    Interesting strategy. So, if the conclusions of the authors, conclusions drawn from facts and reasonable inferences, are not those that you would expect, you simply doubt that their model is accurate. Would you have had the same doubts, had they found that RM + NS could easily find the wildtype? Just wondering…

    And again:

    So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

    Well, like you, I am not an expert in that kind of model. I accept the conclusions of the authors, because it seems that their methodology and reasoning are accurate. You doubt them. But I should remind you that they are mainstream authors, certainly not IDists, and that their conclusions must have surprised them first of all. I don’t know, but when serious researchers publish results that are probably not what they expected, and that are not what others expect, they must be serious people (except, of course, for the final note about recombination, but anyone can make mistakes after all! 🙂 ).

    Then your final point:

    This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can.

    No, for a lot of reasons:

    a) We are in a scenario of tweaking an existing, damaged function to retrieve part of it. We are producing no new functional protein, just “repairing” as much as possible some important damage.

    b) That’s why the finding of lower levels of function is rather easy: it is not complex at all, it is in the reach of the probabilistic resources of the system.

    I will try to explain it better. Let’s say that you have a car, and that its body has been seriously damaged in a car accident. That’s our protein with its D2 domain replaced by a random sequence of AAs.

    Now, you don’t have the money to buy the new parts that would bring back the old body in all its splendor (the wildtype).

    So, you choose the only solution you can afford: you take a hammer, and start giving gross blows to the body, to reduce the most serious deformations, at least a little.

    The blows you give need not be very precise or specific: if there is some part which is definitely too far out of line, a couple of gross blows will make it less prominent. And so on.

    Of course, the final result is very far from the original: let’s say 2000 times less beautiful and functional.

    However, it is better than what you started with.

    IOWs, you are trying a low information fixing: a repair which is gross, but somewhat efficient.

    And, of course, there are many possible gross forms that you can achieve by your hammer, and that have more or less the same degree of “improvement”.

    On the contrary, there is only one form that satisfies the original request: the perfect parts of the original body.

    So, a gross repair has low informational content. A perfect repair has very high informational content.

    That’s what the rugged landscape paper tells us: the conclusion, derived from facts, is perfectly in line with ID theory. Simple function can easily be reached with modest probabilistic resources, by RV + NS, provided that the scenario is one of tweaking an existing function, and not of generating a new complex one.

    It’s the same scenario of malaria resistance, or of other microevolutionary events.

    But the paper tells us something much more important: complex function, the kind with high informational content, cannot realistically be achieved by those mechanisms, not even in the most favorable NS scenario, with an existing function, the opportunity to tweak it with high mutation rates and highly reproducing populations, and a direct link between the function and reproduction.

    Complex function cannot be found, not even in those conditions. The wildtype remains elusive and, if the authors’ model is correct, which I do believe, will remain elusive in any non-design context.

    And, if RV and NS cannot even do that, how can they hope to just start finding some new, complex, specific function, like the sequence of ATP synthase beta chain, or dynein, or whatever you like, starting not from an existing, damaged but working function, but just from scratch?

    OK, this is it. I think I have answered your comments. It was some work, I must say, but you certainly deserved it! 🙂

  47. 47
    gpuccio says:

    Mung:

    “So they just disproved that the optimal protein could happen by RM+NS.”

    Yes. Simply yes.

    “NS finds optimal solutions to problems posed by the environment. Except when it doesn’t. It’s a great theory.”

    The best you can dream of. 🙂

  48. 48
    gpuccio says:

    Gordon Davisson and Mung:

    Addendum:

    By the way, in that paper we are dealing with a 139-AA sequence (the D2 domain).

    ATP synthase beta chain is 529 AAs long, and has 334 identities between E. coli and humans, for a total homology of 663 bits.

    Cytoplasmic dynein 1 heavy chain 1 is 4646 AAs long, and has 2813 identities between fungi and humans, for a total homology of 5769 bits.

    These are not the 16 – 28 bits of malaria resistance. Not at all.
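    A quick consistency check on the figures just quoted (a sketch; the roughly 2 bits per identical residue is only an empirical rule of thumb for BLAST-style bit scores, not an exact conversion):

```python
# Bits per identical residue, from the numbers quoted above.
atp = 663 / 334       # ATP synthase beta chain: ~1.99 bits/identity
dynein = 5769 / 2813  # dynein heavy chain:     ~2.05 bits/identity
print(round(atp, 2), round(dynein, 2))
```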

  49. 49
    Mung says:

    Some really nice posts in response to Gordon Davisson, gpuccio.

    Definitely going to bookmark this thread.

    I have a question about the statistical nature of fitness (and therefore of natural selection).

    What is wrong with calling something that is probabilistic “chance based”?

    To put it another way, why do evolutionists get so upset when you point out that evolution is chance based when at its heart it is statistical?

  50. 50
    Origenes says:

    GPuccio

    Thank you for these great posts. Poetry. Very informative. Maybe they can be transformed into an OP.

  51. 51
    Dionisio says:

    gpuccio’s latest OP written as comments within this discussion thread: @36-39, 41, 46.

  52. 52
    gpuccio says:

    Origenes:

    Thank you! 🙂

    “Maybe they can be transformed into an OP.”

    Good idea! Done.

  53. 53
    gpuccio says:

    Mung:

    Thank you! 🙂

    “I have a question about the statistical nature of fitness (and therefore of natural selection).”

    OK.

    “What is wrong with calling something that is probabilistic “chance based”?”

    Nothing is wrong.

    I would say, however, that probability (not in the quantum sense) is just a way to describe events of necessity that cannot really be described in terms of necessity, because the system is too complex, there are too many variables or unknown states, and so on.

    IOWs, what we call “chance” is really only our ignorance: we choose to describe the system by an appropriate probability distribution because we cannot describe it in detail as the result of necessity operating on many unrelated variables.

    But, if a system is described by a probability distribution, it’s perfectly correct to say that our understanding of the system is “chance based”.

    “To put it another way, why do evolutionists get so upset when you point out that evolution is chance based when at its heart it is statistical?”

    Because they don’t like that idea! 🙂

    OK, the only correct objection that they can really make is that the whole neo-darwinist model is based on two components: RV + NS.

    RV is obviously probabilistic. But NS is an explanatory algorithm which is based on some form of explicit necessity: the idea that a reproductive advantage will usually lead to the expansion and fixation of the trait. Even if there is some chance in that too (see competition with genetic drift, and so on), the main idea, that there is a coefficient of selectability, is a necessity idea.

    But the problem is that this supposed element of necessity acts only on certain things that arise by mere chance.

    Therefore, the correct attitude is that RV must be evaluated by probability analysis, as ID does, and NS must be analyzed for its real ability to act on real substrates that have a realistic probability of existing. Then, the possible contribution of NS can be entered into the analysis.

    That’s what I mean when I say that NS can lower the probabilistic barriers (but only partially, and we can and must evaluate in what degree), but only in those cases where NS can realistically be demonstrated, or at least hypothesized with some good support from facts.

  54. 54
    gpuccio says:

    Dionisio:

    Now an OP of its own! 🙂

  55. 55
    EugeneS says:

    GPuccio

    I am sorry if you have already addressed it above (I will need time to read it all at my pace). Could you elaborate on gene duplication a bit more in light of probabilistic barriers? In theory, gene duplication allows the neo-Darwinian model to traverse areas of the configuration space where function does not exist. The idea is that a duplicate (paralog) can change without being restrained by natural selection. As soon as it becomes functional, it is immediately subject to natural selection, but with a different function.

    I know this is too speculative, but could you say a bit more? People do mention gene duplication in discussing the capabilities of RV+NS.

    Thanks!

  56. 56

    gpuccio @ 46: “Well, what is that statement? Just an act of blind faith in neo-darwinism, which must be true even when facts falsify it.”

    Neo-Darwinism requires blind faith in RV+NS just like the original Darwinism. A/mats enjoy the self-delusion of having science on their side when the truth is quite different. Theirs is a faith-based philosophy very thinly veiled as science. Everyone sees through the veil at this point.

  57. 57
    Mung says:

    > Theirs is a faith-based philosophy very thinly veiled as science. Everyone sees through the veil at this point.

    Calling it faith-based gives it too much credit. It doesn’t even rise to the level of faith. More like wishful thinking. Fantasizing.

  58. 58
    gpuccio says:

    EugeneS:

    Maybe you have seen that I have moved my discussion with Gordon Davisson to a new OP about the limits of NS.

    I hope you don’t mind if I copy your last post there and answer it there. So, maybe the discussion can go on in a fresh and independent thread.
