
What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson


An interesting discussion, still absolutely open, has taken place in the last few days between Gordon Davisson and me on the thread:

Some very good friends, like Dionisio, Mung and Origenes, seem to have appreciated the discussion, which indeed has touched on important issues. Origenes has also suggested that it could be transformed into an OP.

Well, I thought that it was probably a good idea, and luckily it did not require much work. 🙂   So, here it is. Gordon Davisson’s posts are in italics. It’s a bit long, and I am sorry for that!

I thank Gordon Davisson in advance for the extremely good contribution he has already given, and for any further contributions he may give. He is certainly invited to continue the discussion here, if he likes (and I do hope he does!). Of course, anyone else who could be interested is warmly invited to join.  🙂

Gordon Davisson (post #5):

Why is this supposed to be a problem for “Darwinism”? A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual. Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit. If I’ve accounted for everything, the overall rate of fixation of beneficial mutations per generation should be: (fraction of mutations that’re beneficial) * (fraction of beneficial mutations that aren’t wiped out by genetic drift) * (# of mutations per individual) * (population).
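
To make the arithmetic of that formula concrete, here is a minimal numeric sketch; every parameter value is an illustrative assumption, not a measurement:

```python
# Minimal numeric sketch of the fixation-rate formula above.
# Every parameter value here is an illustrative assumption, not a measurement.
mutations_per_individual = 100   # new mutations per genome per generation (assumed)
fraction_beneficial = 1e-6       # fraction of mutations that are beneficial (assumed)
fraction_escaping_drift = 0.02   # survivors of drift, roughly 2s for s = 0.01 (assumed)
population_size = 1e6            # breeding population (assumed)

rate = (fraction_beneficial * fraction_escaping_drift
        * mutations_per_individual * population_size)
print(rate)  # 2.0 -> about two beneficial mutations per generation start to spread
```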

Florabama’s description is exactly wrong. Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection. You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter. (And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.)

gpuccio (post #11):

Gordon Davisson:

You say:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS has to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) must in any case escape drift and be selected and expanded by NS, so that it is present in most, or all, of the population. Only then does the following mutation have some vague probability of being added. That must happen for each single step.

That is simply impossible, because those “stepwise” mutations simply do not exist and never will; but even if we imagine that they exist, the process certainly requires a lot of time.

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, was complete, because only this could erase all traces of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 sequences would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

You say:

Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection.

Yes, but only if each individual mutation confers a strong enough reproductive advantage. That must be true for each single specific aminoacid position of each single new functional protein that appears in natural history. Do you really believe that? Do you really believe that each complex functional structure can be deconstructed into simple steps, each conferring reproductive advantage? Do you believe that we can pass from “Word” source code to “Excel” source code by single-byte variations (yes, I am generous here, because a single aminoacid has at most about 4 bits of information, not 8), each of them producing a better software version which sells better than the previous one?

Maybe not even “credo quia absurdum” will suffice here. There are limits to the absurd that can be believed, after all!

You say:

You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter.

No, the argument of IC, as stated by Behe, is about functions which require the cooperation of many individual complex proteins. That is very common in biology.

The argument of functional complexity, instead, is about the necessity of having, in each single protein, all the functional information which is minimally necessary to give the function of the protein itself. How many AAs would that be, for example, for dynein? Or for the classic ATP synthase?

Here, the single functional element is so complex that it requires hundreds of specific aminoacids to be of any utility. If that single functional element also needs to work with other complex single elements to give the desired function (which is also the rule in biology), then the FC of the system is multiplied. That is the argument of IC, as stated by Behe. The argument for FC in a single functional structure is similar, but it is directly derived from the concept of CSI as stated by Dembski (and others before and after him).

And finally you say:

And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.

It’s not another matter. It’s simply a wrong matter.

Both FC and IC are huge problems for any attempt to defend the neo-darwinian theory. I am not surprised at all that “evolutionists” dispute that, however. See Tertullian’s quote above!

Gordon Davisson (post #35):

Hi, gpuccio. Sorry about my late reply (as usual, I’m afraid). Before I comment specifically on what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations. Note that all of these would be considered beneficial mutations:

* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).
* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

Your argument is (if I may oversimplify it a bit) essentially that the last two are vanishingly rare. But when we look at the overall rate of beneficial mutations, they’re mixed in with other sorts of beneficial mutations that’re completely irrelevant to what you’re talking about! Additionally, several types of mutations that’re critical in your argument but are not immediately beneficial aren’t going to be counted in the beneficial mutation rate:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

Furthermore, one of the reasons the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

Now, on to your actual argument:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS has to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something!

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.
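
As a rough illustration of the “proportional” point, here is a minimal sketch using the classic approximation that a new mutation with small advantage s fixes with probability about 2s in a large population; the s values are arbitrary assumptions:

```python
# Haldane-style approximation: a new mutation with small advantage s fixes
# with probability about 2s in a large population. The s values are arbitrary.
for s in (0.1, 0.01, 0.001):
    print(f"s = {s}: P(fix) ~ {2 * s}")
# Even a 10% advantage is lost to drift about 80% of the time;
# a 0.1% advantage is almost always lost.
```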

Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) must in any case escape drift and be selected and expanded by NS, so that it is present in most, or all, of the population. Only then does the following mutation have some vague probability of being added. That must happen for each single step.

That is simply impossible, because those “stepwise” mutations simply do not exist and never will; but even if we imagine that they exist, the process certainly requires a lot of time.

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, was complete, because only this could erase all traces of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Except we sometimes do find such traces. In the case of atovaquone resistance, many of the intermediates were found in the wild. For another example, in https://uncommondescent.com/intelligent-design/double-debunking-glenn-williamson-on-human-chimp-dna-similarity-and-genes-unique-to-human-beings/, VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 sequences would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination. Recombination among neutral or surviving entities may suppress negative mutations and thus escape from mutation-selection-drift balance. Although the importance of recombination or DNA shuffling has been suggested [30], we did not include such mechanisms for the sake of simplicity. However, the obtained landscape structure is unaffected by the involvement of recombination mutation although it may affect the speed of search in the sequence space.

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Or their model of the fitness landscape might not be completely accurate. I’m far from an expert on the subject, but from my read of the paper:

* They measured how much infectivity (function) they got vs. population size (larger populations evolved higher infectivity before stagnating), fit their results to a theoretical model of the fitness landscape, and used that to extrapolate to the peak possible infectivity … which matched closely to that of the wild type. But their experimental results only measured relative infectivities between 0.0 and 0.52 (using a normalized logarithmic scale), and the extrapolation from 0.52 to 1.0 is purely theoretical. How well does reality match the theoretical model in the region they didn’t measure?

* But it’s worse than that, because their measurements were made on one functional “mountain”, and the wild type appears to reside on a different mountain. Do both mountains have the same ruggedness and peak infectivity? They’re not only extrapolating from the base of a mountain to its peak, but from the base of one mountain to the peak of another. The fact that the infectivity of the wild type matches closely with their theoretical extrapolation of the peak is suggestive, but hardly solid evidence.

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, except that there are some conclusions available from the region of the landscape that they did make measurements on: between random sequences and partial function. They say:

The landscape structure has a number of implications for initial functional evolution of proteins and for molecular evolutionary engineering. First, the smooth surface of the mountainous structure from the foot to at least a relative fitness of 0.4 means that it is possible for most random or primordial sequences to evolve with relative ease up to the middle region of the fitness landscape by adaptive walking with only single substitutions. In fact, in addition to infectivity, we have succeeded in evolving esterase activity from ten arbitrarily chosen initial random sequences [17]. Thus, the primordial functional evolution of proteins may have proceeded from a population with only a small degree of sequence diversity.

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can. And they also showed that (as with the atovaquone resistance example) evolution doesn’t require stepwise-beneficial paths either. They found that stepwise-beneficial paths existed up to a relative fitness of 0.4, but they experimentally achieved relative fitnesses up to 0.52! So even with the small populations and limited evolutionary mechanisms they used, they showed it was possible to evolve significantly past the limits of stepwise-beneficial paths.

I don’t have to imagine this. They saw it happen.

gpuccio (posts #36–39, #41, #46, #48):

Gordon Davisson:

First of all, thank you for your detailed and interesting comments on what I wrote. You raise many important issues that deserve in-depth discussion.

I will try to make my points in order, and I will split them into a few different posts:

1) The relevance of the rate of “beneficial” mutations.

You say:

Before I comment specifically on what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations.

I don’t agree. As you certainly know, the whole point of ID is to evaluate the probabilistic barriers that make it impossible for the proposed mechanism of RV + NS to generate new complex functional information. The proposed mechanism relies critically on NS to overcome those barriers, therefore it is critical to understand quantitatively how often RV occurs that can be naturally selected, expanded and fixed.

Without NS, it is absolutely obvious that RV cannot generate anything of importance. Therefore, it is essential to understand and demonstrate how much NS can have a role in modifying that obvious fact, and the rate of naturally selectable mutations (not of “beneficial” mutations, because a beneficial mutation which cannot be selected, because it does not confer a sufficient reproductive advantage, is of no use for the model) is of fundamental importance in the discussion.

2) Types of “beneficial” mutations (part 1).

You list 5 types of beneficial mutations. Let’s consider the first 3 types:

Note that all of these would be considered beneficial mutations:
* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).

Well, I would say that these three groups have two things in common:

a) They are mutations which change the functional efficiency (or inefficiency) of a specific function that already exists (IOWs, no new function is generated).

b) The change is a minor change (IOWs, it does not imply any new complex functional information).

OK, I am happy to agree that, however common “beneficial” mutations may be, they almost always, if not always, are of this type. That’s what we call “microevolution”. It exists, and nobody has ever denied that. Simple antibiotic resistance has always been a very good example of that.

Of course, while ID does not deny microevolution, ID theory definitely shows its limits. They are:

a) As no new function is generated, this kind of variation can only tweak existing functions.

b) While the changes are minor, they can accumulate, especially under very strong selective pressure, as in the case of antibiotic resistance (including malaria resistance). But gradual accumulation of this kind of tweaking takes a long time even under extremely strong pressure, requires a continuous tweaking pathway that does not always exist, and is limited, in any case, by how much the existing function can be optimized by simple stepwise mutations.

I will say more about those points when I answer about malaria resistance and the rugged landscape experiment. I will already state here, however, that both of the scenarios you quote in your discussion are of this kind; IOWs, they fall under one of these three definitions of “beneficial” mutations.

3) Types of “beneficial” mutations (part 2).

The last two types are, according to what you say:

* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

These are exactly those kinds of “beneficial” mutations that do not exist.

Let’s say for the moment that we have no example at all of them.

For the first type, are you suggesting that there are simple mutations that “create new functional systems”? Well, let’s add an important word:

“create new complex functional systems”?

That word is important, because, as you certainly know, the whole point of ID is not about function, but about complex function. Nobody has ever denied that simple function can arise by random variation.

So, for this type, I insist: what examples do you have?

You may say that even if you have no examples, it’s my burden to show that it is impossible.

But that is wrong. You have to show not only that it is possible, but that it really happens and has real relevance to the problem we are discussing. We are doing empirical science here, not philosophy. Only ideas supported by facts count. So, please, give the facts.

I would say that there is absolutely no reason to believe that a “simple” variation can generate “new complex functional systems”. There is no example of that in any complex system. Can the change of a letter generate a new novel? Can the change of a byte generate a new complex software, with new complex functions? Can a mutation of 1 – 2 aminoacids generate a new complex biological system?

The answer is no, but if you believe differently, you are welcome: just give facts.

In the last type of beneficial mutations, you hypothesize, if I understand you well, that a mutation can be part of the pathway to a new complex functional system, which still does not exist, but can be selected because it is otherwise beneficial.

So, let’s apply that to the generation of a new functional protein, like ATP synthase. Let’s take its beta chain, which, as we all know, has hundreds of specific aminoacid positions, conserved from bacteria to humans (334 identities between E. coli and humans).

Now, what you are saying is that we can in principle deconstruct those 334 AA values into a sequence of 334 single mutations, or, if you prefer, 167 two-AA mutations, each of which is selected not because the new protein is there and works, but because the intermediate state has some other selectable function?

Well, I say that such an assumption is not reasonable at all. I see no logical reason why that should be possible. If you think differently, please give facts.

I will say it again: the simple idea that new complex functions can be deconstructed into simple steps, each of them selectable for some unspecified reason, is pure imagination. If you have facts, please give them; otherwise that idea has no relevance in a scientific discussion.

4) Other types of mutation?

You add two further variations in your list of mutations. Here they are:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

I am not sure that I understand what you mean. If I understand correctly, you are saying that there are mutations which in the end will be useful, but for the moment are not useful.

But, then, they cannot be selected as such. Do you realize what that means?

It means that they can certainly occur, but they have exactly the same probability to occur as any other mutation. Moreover, as they are not selected, they remain confined to the original individual or clone, unless they are fixed by genetic drift.

But again, they have exactly the same probability as any other mutation to be fixed by genetic drift.

That brings us to a very strong conclusion that is often overlooked by darwinists, especially the neutralists:

Any mutation that does not have the power to be naturally selected is completely irrelevant in regard to the probabilistic barriers, because it has exactly the same probability as any other mutation of occurring or of being fixed by drift.

IOWs, only mutations that can be naturally selected change the game in regard to the computation of the probabilistic barriers. Nothing else. All variation which cannot be naturally selected is irrelevant, because it is just a new random state, and is already considered when we compute the probabilities for a random search to get the target.
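
This point can be illustrated with a toy Wright-Fisher simulation (a minimal sketch under assumed parameters): a single new neutral copy fixes with probability about 1/N, no matter what the mutation is.

```python
# Toy Wright-Fisher model (haploid, no selection): a single new neutral copy
# in a population of N fixes with probability about 1/N, whatever the mutation
# happens to be. N and the number of trials are arbitrary assumptions.
import random

def neutral_fixation_frequency(N=100, trials=20000):
    fixed = 0
    for _ in range(trials):
        count = 1                    # one new neutral copy among N individuals
        while 0 < count < N:
            p = count / N
            # each of the N offspring picks its parent at random
            count = sum(1 for _ in range(N) if random.random() < p)
        fixed += (count == N)
    return fixed / trials

print(neutral_fixation_frequency())  # ~0.01, i.e. about 1/N
```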

5) Optimal proteins?

You say:

Furthermore, one of the reasons for the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

OK, I can partially agree. The proteins as we see them now are certainly optimal in most cases. But they were apparently optimal just from the beginning.

For example, our beloved ATP synthase beta chain already had most of its functional information in LUCA, according to what we can infer from homologies. And, as I have shown in my OPs about the evolution of information in vertebrates, millions of bits of new functional information have appeared at the start of the vertebrate branch, rather suddenly, and then remained the same for 400+ million years of natural history. So, I am not sure that the optimal state of protein sequences is any help for neo-darwinism.

Moreover, I should remind you that protein coding genes are only a very small part of genomes. Non coding DNA, which according to darwinists is mostly useless, can certainly provide ample space for beneficial mutations to occur.

But I will come back to that point in the further discussion.

I would like to specify that my argument here is not to determine exactly how common beneficial mutations are in absolute terms, but rather to show that rare beneficial mutations are certainly a problem for neo-darwinism, a very big problem indeed, especially considering that (almost) all the examples we know of are examples of microevolution, and do not generate any new complex functional information.

6) The threshold for selectability.

You say:

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.

I don’t think we disagree here. Let’s say that very low reproductive advantages will not be empirically relevant, because they will not significantly raise the probability of fixation above the generic one from genetic drift.

On the other hand, even if there is a higher probability of fixation, the lower it is, the lower will be the effect on probabilistic barriers. Therefore, only a significant reproductive advantage will really lower the probabilistic barriers in a relevant way.
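
One hedged way to put numbers on “significant” is Kimura’s classic approximation for the fixation probability of a new mutant (a sketch; the population size and s values below are assumptions): the advantage only matters once Ns is well above 1.

```python
# Kimura's fixation probability for a new mutant with advantage s in a haploid
# population of size N. The values of N and s are illustrative assumptions.
import math

def p_fix(s, N):
    if s == 0:
        return 1.0 / N               # neutral case: fixation by drift alone
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

N = 10_000
for s in (0.0, 1e-6, 1e-4, 1e-2):
    print(f"s = {s:g}: P(fix) = {p_fix(s, N):.6f}")
# s = 0:     0.000100 (the 1/N drift baseline)
# s = 1e-6:  0.000101 (Ns = 0.01: indistinguishable from neutral)
# s = 1e-4:  0.000231 (Ns = 1: selection barely matters)
# s = 0.01:  0.019801 (Ns = 100: ~2s, selection dominates)
```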

7) The argument from incredulity.

You say:

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

I really don’t understand this misuse of the “argument from incredulity” issue (you are, of course, not the only one to use it improperly).

The scenario is very simple: in science, I definitely am incredulous of any explanation which is not reasonable, has no explanatory power, and especially is not supported by any fact.

This is what science is. I am not a skeptic (I definitely hate that word), but I am not a credulous person who believes in things only because others believe in them.

You can state any possible theory in science. Some of them will be logically inconsistent, and we can reject them from the start. But others will be logically possible, yet unsupported by observed facts and by sound reasoning. We have the right and the duty to ignore those theories as devoid of any true scientific interest.

This is healthy incredulity. The opposite of blind faith.

I will discuss the rugged landscape issue in detail, later.

8) Malaria resistance.

In the end, the only facts you provide in favour of the neo-darwinist scenario are those about malaria resistance and the rugged landscape experiment. I will deal with the first here, and with the second in the next post.

You say:

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

Now, let’s clarify. In brief, my point is that malaria resistance, like simple antibiotic resistance in general, is one of the few known cases of microevolution.

As I have already argued in my post #36, microevolutionary events are characterized by the following:

a) No new function is generated, but only a tweaking of some existing function.

b) The changes are minor. Even if more than one mutation accumulates, the total functional information added is always small.

I will discuss those two points for malaria resistance in the next point, but I want to clarify immediately that you are misreading what I wrote when you say:

This is simply wrong.

Indeed, you quote my point 2) from post #11:

“2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) must in any case escape drift and be selected and expanded by NS, so that it is present in most, or all, of the population. Only then does the following mutation have some vague probability of being added. That must happen for each single step.”

But you don’t quote the premise, in point 1:

“1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?”

I have emphasized the relevant part, that you seem to have ignored. Point 2 is referring to that scenario.

It is rather clear that I am speaking of the generation of new complex functional information, and I even give an example, dynein.

So, I am not saying that no beneficial mutation can be selected, or that when that happens, like in microevolution, we cannot find the intermediate states.

What I am saying is that such a model cannot be applied to the generation of new complex functional information, like dynein, because it is impossible to deconstruct a new complex functional unit into simple steps, each of them naturally selectable, while the new protein still does not even exist.

So, what I say is not wrong at all, and my challenge to imagine such a pathway for dynein, or for ATP synthase beta chain, or for any of the complex functional proteins that appear in the course of natural history, or to find intermediates of that pathway, remains valid.

But let’s go to malaria.

I have read the Moran page, and I am not sure of your interpretation that 7 mutations (4 + 3) are necessary to give the resistance. Indeed, Moran says:

“It takes at least four sequential steps with one mutation becoming established in the population before another one occurs.”

But the point here is not if 4 or 7 mutations are needed. The point is that this is a clear example of microevolution, although probably one of the most complex that have been observed.

Indeed:

a) There is no generation of a new complex function. Indeed, there is no generation of a new function at all, unless you count it as a new “function” when an organism becomes resistant to an antibiotic because a gene loses the ability to take up the antibiotic. Of course, we can define function as we like, but the simple fact is that here there is a useful loss of function, what Behe calls “burning the bridges to prevent the enemy from coming in”.

b) Whatever our definition of function, the change here is small. It is small if it amounts to 4 AAs (16 bits at most), and it is small if it amounts to 7 aminoacids (28 bits at most).

OK, I understand that Behe puts the edge at two AAs in his book. Axe speaks of 4, from another point of view.

Whatever. The edge is certainly thereabout.

When I proposed a threshold of functional complexity to infer design for biological objects, I proposed 120 bits. That’s about 35 AAs.
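
To make the unit conversions explicit, here is a sketch; it uses the theoretical maximum of log2(20) ≈ 4.3 bits per fully specified position, so the rounder figures in this post are on the conservative side:

```python
# Bits-to-aminoacid conversions, using the theoretical maximum of log2(20)
# bits per fully specified position. The rounder figures in the post (16 and
# 28 bits "at most", ~35 AAs) use slightly more conservative per-site values
# and allow for positions that are only partially specified.
import math

bits_per_aa = math.log2(20)
print(f"max bits per position: {bits_per_aa:.2f}")                 # 4.32
print(f"4 AAs: at most {4 * bits_per_aa:.0f} bits")                # ~17
print(f"7 AAs: at most {7 * bits_per_aa:.0f} bits")                # ~30
print(f"120 bits: at least {120 / bits_per_aa:.0f} specified AAs") # ~28
```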

Again, we must remember that all known microevolutionary events have in common a very favourable context which makes optimization easier:

a) They happen in rapidly reproducing populations.

b) They happen under extreme environmental pressure (the antibiotic)

c) The function is already present and it can be gradually optimized (or, like in the case of resistance, lost).

d) Only a few bits of informational change are enough to optimize or lose the function.

None of that applies to the generation of new complex functional information, where the function does not exist, the changes are informationally huge, and environmental pressure is reasonably much less than reproducing under the effect of a powerful antibiotic.

9) VJ’s point:

You say:

VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

It’s funny that you quote a point that I consider a very strong argument for ID.

First of all, VJ’s arguments are a refutation of some statements by Cornelius Hunter, with whom I often disagree.

Second, I am not sure that ZNF843 is a good example, because I blasted the human protein and found some protein homologs in primates, with high homology.

Third, there are however a few known human proteins which have no protein counterpart in other primates, as VJ correctly states. These seem to have very good counterparts in non coding DNA of primates.

So, if we accept these proteins as real and functional (unfortunately not much is known about them, as far as I know), then what seems to happen is that:

a) The sequence appears in some way in primates as a non coding sequence. That means that NS cannot act on the sequence as the representation of a protein.

b) In some way, the sequence acquires a transcription start in humans, and becomes an ORF. So the protein appears for the first time in humans and, if we accept the initial assumption, it is functional.

Well, if that kind of process is confirmed, it will be very strong evidence of design. The sequence is prepared in primates, where it seems to have no function at all, and is activated in humans, when needed.

The origin of functional proteins from non coding DNA, which has been gaining recognition in recent years, is definitive evidence of design. NS cannot operate on non coding sequences, least of all make them good protein coding genes. So, the darwinian mechanism is out, in this case.

10) The rugged landscape experiment

OK, this is probably the most interesting part.

For the convenience of anyone who may be reading this, I give the link to the paper:

http://journals.plos.org/ploso…..=printable

First of all, I think we can assume, for the following discussion, that the wild-type version of the protein they studied is probably optimal, as you suggested yourself. In any case, it is certainly the most functional version of the protein that we know of.

Now, let’s try to understand what this protein is, and how the experiment was realized.

The protein is:

G3P_BPFD (P03661).

Length: 424 AAs.

Function (from UniProt):

“Plays essential roles both in the penetration of the viral genome into the bacterial host via pilus retraction and in the extrusion process. During the initial step of infection, G3P mediates adsorption of the phage to its primary receptor, the tip of host F-pilus. Subsequent interaction with the host entry receptor tolA induces penetration of the viral DNA into the host cytoplasm. In the extrusion process, G3P mediates the release of the membrane-anchored virion from the cell via its C-terminal domain”

I quote from the paper:

Infection of Escherichia coli by the coliphage fd is mediated by the minor coat protein g3p [21,22], which consists of three distinct domains connected via flexible glycine-rich linker sequences [22]. One of the three domains, D2, located between the N-terminal D1 and C-terminal D3 domains, functions in the absorption of g3p to the tip of the host F-pilus at the initial stage of the infection process [21,22]. We produced a defective phage, ‘‘fdRP,’’ by replacing the D2 domain of the fd-tet phage with a soluble random polypeptide, ‘‘RP3-42,’’ consisting of 139 amino acids [23].

So, just to be clear:

1) The whole protein is involved in infectivity.

2) Only the central domain has been replaced by a random sequence.

So, what happens?

From the paper:

The initial defective phage fd-RP showed little infectivity, indicating that the random polypeptide RP3-42 contributes little to infectivity.

Now, infectivity (fitness) was measured on a logarithmic scale, in particular as:

W = ln(CFU) (CFU = colony forming units/ml)

As we can see in Fig. 2, the fitness of the mutated phage (fd-RP) is 5, that is:

CFU = about 148 (e^5)

Now, again from Fig. 2, we can see that the fitness of the wildtype protein is about 22.5, that is:

CFU = about 4.8 billion

So, the random replacement of the D2 domain certainly reduces infectivity a lot, and it is perfectly correct to say that the fd-RP phage “showed little infectivity”.

Indeed, infectivity has been reduced by about 32.6 million times!

But still, it is there: the phage is still infective.

What has happened is that by replacing part of the g3p protein with random sequences, we have “damaged” the protein, but not to the point of erasing its function completely. The protein is still there, and in some way it can still work, even with the heavy damage/deformation induced by our replacement.

IOWs, the experiment is about retrieving an existing function which has been artificially reduced, but not erased. No new function is generated, but an existing reduced function is tweaked to retrieve as much as possible of its original functionality.

This is an important point, because the experiment is indeed one of the best contexts to measure the power of RM + NS in the most favorable conditions:

a) The function is already there.

b) Only part of the protein has been altered

c) Phages are obviously a very good substrate for NS

d) The environmental pressure is huge and directly linked to reproductive success (a phage which loses infectivity simply cannot reproduce).

IOWs, we are in a context where NS should really operate at its best.

Now, what happens?

OK, some infectivity is retrieved by RM. How much?

At the maximum of success, and using the largest library of mutants, the retrieved infectivity is about 14.7 (see again Fig. 2). Then the adaptive walk stops.

Now, that is a good result, and the authors are certainly proud of it, but please don’t be fooled by the logarithmic scale.

An infectivity of 14.7 corresponds to:

about 2.4 million CFU

So, we have an increase of:

about 17,000 times, as stated by the authors.

But, as stated by the authors, the fitness would still have to increase by about 2,000 times (a fitness gap of 7.6) to reach the functionality of the wild type. That means passing from:

2.4 million CFU

to

4.8 billion CFU

So, even if some good infectivity has been retrieved, we are still 2000 times lower than the value in the wild type!

And that’s the best they could achieve.
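
Here is a quick numeric check of those conversions (a sketch: the W values are approximate readings from Fig. 2, so the ratios only match the rounded figures above approximately):

```python
# Back-of-envelope check of the fitness-to-CFU conversions above. The W values
# are approximate readings from Fig. 2; W_wt = 22.3 is chosen to match
# ln(4.8e9) (the figure reads as roughly 22.5).
import math

W_rp, W_best, W_wt = 5.0, 14.7, 22.3   # fd-RP start, best evolved, wild type
cfu = lambda W: math.exp(W)            # inverting W = ln(CFU)

print(f"fd-RP:     {cfu(W_rp):.0f} CFU")              # ~148
print(f"best walk: {cfu(W_best):.2e} CFU")            # ~2.4 million
print(f"wild type: {cfu(W_wt):.2e} CFU")              # ~4.8 billion
print(f"recovered: {cfu(W_best) / cfu(W_rp):.0f}x")   # ~16,000x ("about 17,000")
print(f"shortfall: {cfu(W_wt) / cfu(W_best):.0f}x")   # ~2,000x below wild type
```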

Now, why that limit?

The authors explain that the main reason for that is the rugged landscape of protein function. That means that RM and NS achieve some good tweaking of the function, but they climb different local optima in the landscape, and those local optima can take the function only so far.

The local optimum corresponding to the wildtype has never been found. See the paper:

The sequence selected finally at the 20th generation has ~W = 0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains

The authors conclude that:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Now, having tried to describe in some detail the experiment itself, I will address your comments.

11) Your comments about the rugged landscape paper

You say:

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

But it is exactly what they say!

Let’s see what I wrote:

“you will see that the authors conclude that a starting library of 10^70 sequences would be necessary to find the wild-type form of the protein they studied by RM + NS.”

(emphasis added)

Now let’s see what they said:

By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

(I have kept your emphasis).

So, the point is that, according to the authors, a library of 10^70 sequences would be necessary to find the wildtype by random substitutions only (plus, I suppose, NS).

That’s exactly what I said. Therefore, your comment, that “That’s not exactly what they say” is simply wrong.

Let’s clarify better: 10^70 is a probabilistic resource that is beyond the reach not only of our brilliant researchers, but of nature itself!
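
To see just how far beyond reach, here is an order-of-magnitude sketch; both parameters are generous, commonly cited rough estimates, not measurements:

```python
# Order-of-magnitude sketch of why a 10^70-sequence library is out of reach.
# Both figures below are generous, commonly cited rough estimates (assumptions).
bacteria_on_earth = 1e30      # standing prokaryote population, roughly
generations_per_year = 1e5    # a very fast-reproducing microbe, roughly
years = 5e9                   # about the age of the Earth

total_tries = bacteria_on_earth * generations_per_year * years
print(f"{total_tries:.0e}")   # 5e+44 -- still ~25 orders of magnitude short of 1e70
```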

It seems that your point is that they also add that, given that “such a huge search is impractical” (what a politically correct adjective here!), this should:

“imply that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

which is the part you emphasized.

As if I had purposefully left out such a clarifying statement!

Well, of course I purposefully left out that statement, not because I was quote-mining, but simply because it is really pitiful and irrelevant. Let’s say that I wanted to be courteous to the authors, who have written a very good paper, with honest conclusions, and only at the end had to pay some minimal tribute to the official ideology.

You see, when you write a paper, and draw the conclusions, you are taking responsibilities: you have to be honest, and to state only what can be reasonably derived from the facts you have given.

And indeed the authors do that! They correctly draw the strong conclusion that, according to their data, RM + NS alone cannot find the wildtype in their experiment (IOWs, the real, optimal function), unless we can provide a starting library of 10^70 sequences, which, as said, is beyond the reach of nature itself, at least on our planet. IOWs, let’s say that it would be “impractical”.

OK, that’s the correct conclusion according to their data. They should have stopped here.

But no, they cannot simply do that! So they add that such a result:

implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Well, what is that statement? Just an act of blind faith in neo-darwinism, which must be true even when facts falsify it.

Is it a conclusion derived in any way from the data they presented?

Absolutely not! There is nothing in their data that suggests such a conclusion. They did not test recombination, or other mechanisms, and therefore they can say absolutely nothing about what it can or cannot do. Moreover, they don’t even offer any real support from the literature for that statement. They just quote one single paper, saying that “the importance of recombination or DNA shuffling has been suggested”. And yet they go well beyond a suggestion, they say that their “conclusion” is implied. IOWs logically necessary.

What a pity! What a betrayal of scientific attitude.

If they really needed to pay homage to the dogma, they could have just said something like “it could be possible, perhaps, that recombination helps”. But “imply”? Wow!

But I must say that you too take some serious responsibility in debating that point. Indeed, you say:

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Well, they didn’t use “a simplified model of evolution”. They tested the official model: RM + NS. And it failed!

Since it failed, they must offer some escape. Of course, some imaginary escape, completely unsupported by any facts.

But the failure of RM + NS, that is supported by facts, definitely!

I would add that I cannot see how one can think that recombination can work any miracle here: after all, the authors themselves have said that the local optimum of the wildtype has not been found. The problem here is how to find it. Why should recombination of existing sequences, which share no homology with the wildtype, help at all in finding the wildtype? Mysteries of blind faith.

And have the authors, or anyone else, made new experiments that show how recombination can solve the limit they found? Not that I know. If you are aware of that, let me know.

Then you say:

Or their model of the fitness landscape might not be completely accurate.

Interesting strategy. So, if the conclusions of the authors, conclusions drawn from facts and reasonable inferences, are not those that you would expect, you simply doubt that their model is accurate. Would you have had the same doubts, had they found that RM + NS could easily find the wildtype? Just wondering…

And again:

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, like you, I am not an expert on these kinds of models. I accept the conclusions of the authors, because it seems that their methodology and reasoning are accurate. You doubt them. But I should remind you that they are mainstream authors, certainly not IDists, and that their conclusions must have surprised them first of all. I don’t know, but when serious researchers publish results that are probably not what they expected, and that are not what others expect, they must be serious people (except, of course, for the final note about recombination, but anyone can make mistakes after all!).

Then your final point:

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can.

No, for a lot of reasons:

a) We are in a scenario of tweaking an existing, damaged function to retrieve part of it. We are producing no new functional protein, just “repairing” as much as possible some important damage.

b) That’s why the finding of lower levels of function is rather easy: it is not complex at all, and it is within the reach of the probabilistic resources of the system.

I will try to explain it better. Let’s say that you have a car, and that its body has been seriously damaged in a car accident. That’s our protein with its D2 domain replaced by a random sequence of AAs.

Now, you do not have the money to buy the new parts that would bring back the old body in all its splendor (the wildtype).

So, you choose the only solution you can afford: you take a hammer, and start giving gross blows to the body, to reduce the most serious deformations, at least a little.

The blows you give need not be very precise or specific: if some part is definitely too far out of line, a couple of gross blows will make it less prominent. And so on.

Of course, the final result is very far from the original: let’s say 2000 times less beautiful and functional.

However, it is better than what you started with.

IOWs, you are attempting a low-information fix: a repair which is gross, but somewhat effective.

And, of course, there are many possible gross forms that you can achieve by your hammer, and that have more or less the same degree of “improvement”.

On the contrary, there is only one form that satisfies the original request: the perfect parts of the original body.

So, a gross repair has low informational content. A perfect repair has very high informational content.

That’s what the rugged landscape paper tells us: the conclusion, derived from facts, is perfectly in line with ID theory. Simple function can easily be reached with modest probabilistic resources, by RV + NS, provided that the scenario is one of tweaking an existing function, and not of generating a new complex one.

It’s the same scenario of malaria resistance, or of other microevolutionary events.

But the paper tells us something much more important: complex function, the kind with high informational content, cannot realistically be achieved by those mechanisms, not even in the most favorable NS scenario, with an existing function, the opportunity to tweak it with high mutation rates and highly reproducing populations, and direct relevance of the function to reproduction.

Complex function cannot be found, not even in those conditions. The wildtype remains elusive, and, if the authors’ model is correct, which I do believe, it will remain elusive in any non-design context.

And, if RV and NS cannot even do that, how can they hope to just start finding some new, complex, specific function, like the sequence of ATP synthase beta chain, or dynein, or whatever you like, starting not from an existing, damaged but working function, but just from scratch?

OK, this is it. I think I have answered your comments. It was some work, I must say, but you certainly deserved it!

Addendum:

By the way, in that paper we are dealing with a 139 AAs sequence (the D2 domain).

ATP synthase beta chain is 529 AAs long, and has 334 identities between E. coli and humans, for a total homology of 663 bits.

Cytoplasmic dynein 1 heavy chain 1 is 4646 AAs long, and has 2813 identities between fungi and humans, for a total homology of 5769 bits.

These are not the 16 – 28 bits of malaria resistance. Not at all.
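
As a rough sanity check on those bitscores (a sketch; exact BLAST bitscores depend on the scoring matrix and gaps, but for long, highly conserved alignments they come out near 2 bits per identical position):

```python
# Bitscore per identical position for the two alignments quoted above.
# Exact BLAST bitscores depend on the scoring matrix and gaps; this is a
# sanity check of the order of magnitude only.
for name, identities, bits in [("ATP synthase beta chain", 334, 663),
                               ("dynein heavy chain", 2813, 5769)]:
    print(f"{name}: {bits / identities:.2f} bits per identity")
# ~1.98 and ~2.05 bits per identity, for totals of hundreds to thousands of
# bits -- versus the 16-28 bits estimated above for malaria resistance.
```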

OK, that’s all for the moment. Again, I apologize for the length of it all!  🙂

Comments
Eugene S (March 21, 2018, 2:38 AM PDT):

GPuccio, thank you very much for your time. I don't have the derivation in front of me just now. You are probably right, and I am wrong, by the look of it.

On rarity, what I was saying was that opponents could say: we agree that only a small fraction of the search space has actually been visited, and nonetheless we observe all this bio-complexity. We need more information to assess evolvability (i.e., how much a random walk could reasonably be expected to traverse). Without it, it is more or less guesswork. There are some good ideas though.

I actually had a chat with someone about the rarity of function in protein sequence space. They pointed me to what they consider evidence against rarity. I am not qualified to judge it, but it would be interesting to hear your opinion. As far as I understood from our opponent, one of the current explanations is neutral evolution. To repeat, this example was put forward as evidence against the rarity of protein functions in sequence space. It appears there are some very dense clusters of solutions in it which can be traversed by random walk/neutral drift. I don't know what evidence (if any) they have supporting the claim that "neutral drift did it". It would be nice to have an expert look into this.
March 21, 2018, 02:38 AM PDT
EugeneS: Sorry for answering your interesting comment at #349 so late, I had not seen it!

"Regarding the estimate (your comment 209), I actually tried to do the derivation myself with pen and paper. What I am getting is roughly 10^43 states available to evolution (and correspondingly, 143 bits of functional info, not 140 bits). The difference stems from one extra order of magnitude that creeps in when assessing the number of days in 5 billion years = 1.825E+13"

Why? Just to understand. I repeated the computation, and it still gives me 1.825E+12. I thought the difference was due to the fact that I had used 365 days instead of the more precise 365.25. But even using that, what I get is 1.82625E+12.

"What I want to say is that the estimate of the (minuscule) fraction of the search space that can be visited by an evolutionary random walk is not enough by itself. It must be supported by an estimate of the rarity of functional states in the search space (such as the one produced by D. Axe, which is 1 functional polypeptide in every 10^77 on average). These two estimates together are a statistical argument against the "grand show of evolution", as R. Dawkins put it describing the R. Lenski experiment."

You are right. But in all my reasonings I use the functional complexity of proteins as measured by their sequence conservation through long evolutionary times. That is in itself a measure of the target space/search space ratio for that protein, so it is a measure, certainly approximate, of the rarity of its functional state. And the results are in good accord with Axe's results, which however are a measure of the probability of folding, not of the specific function of each protein. As I have often said, the rugged landscape experiment, discussed in detail in this thread, gives very good empirical support to Axe's results. Measuring functional complexity by sequence conservation through very long evolutionary periods, using a universally accepted parameter (the BLAST bit score), is a very simple, flexible and reliable method to approximately measure the target space/search space ratio for any individual protein. The method, even if differently implemented, is based on the same ideas used by Durston in his fundamental paper: Measuring the functional sequence complexity of proteins https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2217542/pdf/1742-4682-4-47.pdf

gpuccio
March 20, 2018, 01:37 AM PDT
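The factor-of-ten question in this exchange comes down to one multiplication. A minimal check in Python (nothing assumed beyond a 365.25-day year):

```python
import math

days = 5e9 * 365.25                 # days in 5 billion years
print(f"{days:.5e}")                # 1.82625e+12, i.e. ~1.8E+12, not 1.8E+13

# The corresponding bit counts behind the "140 vs 143 bits" difference:
print(f"10^42 states ~ {math.log2(1e42):.1f} bits")   # ~139.5
print(f"10^43 states ~ {math.log2(1e43):.1f} bits")   # ~142.8
```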
GP, Regarding the estimate (your comment 209), I actually tried to do the derivation myself with pen and paper. What I am getting is roughly 10^43 states available to evolution (and correspondingly, 143 bits of functional info, not 140 bits). The difference stems from one extra order of magnitude that creeps in when assessing the number of days in 5 billion years = 1.825E+13 ;) Anyway... I posted a relevant comment in a different thread and I'd like to copy it here as well. The criticism I came across when getting the same ballpark figures is: OK, evolution only had time to explore a tiny fraction of the search space, and yet it could build up all the observed biocomplexity! What I want to say is that the estimate of the (minuscule) fraction of the search space that can be visited by an evolutionary random walk is not enough by itself. It must be supported by an estimate of the rarity of functional states in the search space (such as the one produced by D. Axe, which is 1 functional polypeptide in every 10^77 on average). These two estimates together are a statistical argument against the "grand show of evolution", as R. Dawkins put it describing the R. Lenski experiment.

EugeneS
November 2, 2017, 04:05 AM PDT
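EugeneS's two estimates combine into a one-line calculation. A minimal sketch (it treats Axe's 1-in-10^77 figure as a uniform hit rate over the visited states, which is a deliberate simplification):

```python
states_visited = 1e43   # EugeneS's upper bound on states available to evolution
hit_rate = 1e-77        # Axe's estimate: functional polypeptides per sequence

expected_hits = states_visited * hit_rate
print(f"expected functional sequences found by blind search: {expected_hits:.0e}")
# ~1e-34: under these two assumptions, effectively zero.
```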
GPuccio @320
All these well known cases can be defined as cases of loss of function mutations which indirectly confer an advantage in a specific selective environment. Let’s call this kind of mutations: beneficial loss-of-function mutations. The point is very simple: loss of function mutations are very simple, even those which are in a sense “beneficial”. That’s why their “target size” is large. That’s why they are in the range of RV.
There are many ways to destroy a thing, but only a few ways to build it. You have provided us with many insights. This seems to be an important one among many.

Origenes
October 31, 2017, 10:05 AM PDT
Mung @345: At least gpuccio asks honest questions. Maybe that's why he is supported by a distinguished Canadian biochemistry professor who (sometimes) comments here. :) BTW, did you notice this discussion thread has been visited so many times since October 5?
Popular Posts (Last 30 Days): What are the limits of Natural Selection? An interesting… (2,628); Violence is Inherent in Atheist Politics (2,031); Of course: Mathematics perpetuates white privilege (1,143); Sweeping the Origin of Life Under the Rug (1,012); Has the missing matter of our universe finally been found? (850)
Dionisio
October 30, 2017, 09:34 AM PDT
Mung: "Maybe gpuccio should say less." That's probably a good idea! :) However, I would be happy to have someone engage in my challenge. I really believe that the non-deconstructability of complex functions remains the strongest, simplest, definitive argumetn against NS and all reasonings based on it, as far as any fucntion with any relevant functional complexity is concerned. They can keep all their simple functions and all their precious microevolutionary scenarios, as much as they like: I really invite them to build even the simplest biuological environment with those minimal capabilites. If anyone wants to reason in terms of anything that is important in biolpogy, enzymes, biochemical networks, far from equilibrium states, membranes, transports, regulations, and so on, what is immediately and badly needed is: hundreds or thousands of different specialized proteins, each of them hundreds of AAs long, each of them based on tons of specific functional information. Each of them not desconstructable into simple naturally selectable steps, and of course well beyond the range of any realistic probabilistic scenario involving mere RV.gpuccio
October 30, 2017, 09:16 AM PDT
It certainly appears that the more gpuccio has to say on the matter the less any critic is willing to step up and defend the Darwinian scenario. Maybe gpuccio should say less. :D

Mung
October 30, 2017, 08:51 AM PDT
To all interested: Now, what do we know about continuous optimization spaces?

a) We know that they have been observed in a few simple cases of simple initial function, almost always of the kind "beneficial loss of function" in an already existing complex and functional structure.

b) We know that they are small: a few aminoacids at most, in all observed cases.

c) We know that the process must be continuous, usually proceeding by single, stepwise beneficial mutations.

But we also know that the space between initial starting functions is not continuous, not even for the same function. It is a rugged space. In the case of chloroquine resistance, we have seen that it is impossible to pass from one of the two possible starting states to the other: a mini rugged landscape from all points of view, in a very simple system of two starting states of two AAs each, with one aminoacid in common! That is really amazing evidence for a discontinuous space.

But let's look at another well documented model: the rugged landscape experiment itself. I quote a very important statement in that paper:
More than one such mountain exists in the fitness landscape of the function for the D2 domain in phage infectivity. The sequence selected finally at the 20th generation has W = 0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains.
Emphasis mine. Now, that's really amazing. The best sequence they got as a result of RV + NS and the wildtype sequence showed no homology at all!. IOWs, their functional islands are completely separated in the ocean of the search space. There is no way to pass from one to the other. Because it is too unlikely. But also for another good reason. The reason is that the "mounts" used even in the paper to represent local functional islands are not mount at all. They are holes. The rugged landscape is a landscape made of huge, almost infinite flat surfaces (the vast parts of the search space that are neutral to the function) and a number of distant, completely separated holes. Holes are the islands of local optima, where some optimization can be achieved by small stepwise mutations. IOWs, the only parts of the system where the functional landscape is in some way continuous. If you arrive, by RV, in the small space of one hole, you will most likely fall to the bottom of it (the local optimum). You have almost no chance to climb again out of the hole. So, the functional landscape between starting states is completely discontinuous: they are isolated in the search space, and negative selection acts against any chance of going from one to the other, even if we are speaking of starting states for the same function.gpuccio
October 30, 2017, 02:12 AM PDT
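The "holes in a flat plain" picture can be made concrete with a toy simulation. The sketch below is purely illustrative: the sequence length, basin radius and step counts are invented for the illustration, not taken from the paper. An adaptive walk that never accepts a loss of fitness drifts freely on the neutral plain, climbs to whichever local optimum it starts near (gpuccio's "hole"), freezes there, and never reaches the other one.

```python
import random

random.seed(1)
L = 40          # toy sequence length (invented for this illustration)
RADIUS = 4      # basin radius around each optimum

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fitness(s, optima):
    # A flat neutral plain (fitness 0) everywhere, except two small
    # basins where fitness rises toward each optimum.
    return max(max(0, RADIUS - hamming(s, o)) for o in optima)

def adaptive_walk(start, optima, steps=100_000):
    s = start[:]
    for _ in range(steps):
        t = s[:]
        t[random.randrange(L)] ^= 1                   # one random mutation
        if fitness(t, optima) >= fitness(s, optima):  # NS never accepts a loss
            s = t                # drift on the plain, climb inside a basin
    return s

opt_a = [random.randint(0, 1) for _ in range(L)]
opt_b = [1 - x for x in opt_a]   # a second optimum, far away in the space
optima = [opt_a, opt_b]

# Start just inside A's basin: the walk climbs to A and freezes there.
start = opt_a[:]
for i in random.sample(range(L), 3):
    start[i] ^= 1
end = adaptive_walk(start, optima)
print("from inside a basin: distance to A =", hamming(end, opt_a),
      "| distance to B =", hamming(end, opt_b))

# Start on the open plain: the walk drifts and finds neither basin.
end = adaptive_walk([random.randint(0, 1) for _ in range(L)], optima)
print("from the open plain: fitness =", fitness(end, optima))
```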
To all interested: Now, let's try to imagine how such a perfect machine could originate by RV + NS. Just to understand, compare it with Szostak's ATP binding protein, which, even in its strong version, requires only a few specific aminoacids to work. There is the same difference, indeed a much greater difference, than between a Formula 1 prototype and a very simple and primitive cart!

So, how could this big, complex, specific structure originate? Even if it derived from some existing structure which did something else (and of which we have absolutely no idea, least of all evidence), there must have been an initial form which could perform the actual function, and was then optimized by NS. That initial form must have been in the range of pure RV: a few AAs at most. So, can you conceive of an RV of, say, 5 AAs that generates a structure, even if simpler than the one we observe today (and which was however ready almost at the beginning of natural history), which can assume the three conformations as a result of the rotation of an inner stalk, coupled to the movements of a proton gradient through a membrane? If you can, you certainly have a much greater imagination than I have!

But that initial 5 AAs random state must be the starting point for a state that, from bacteria to humans, has been based on about 600 conserved aminoacids! Now, do you know how specific a target a 600+ conserved aminoacid sequence is? We are discussing one state out of 20^600, here. I will not insist on that: it should be completely self-evident.

Now, to reach that state starting from a five AAs initial state generated by RV (which is in itself a huge accomplishment, but you can make it ten aminoacids, I feel generous today!), we should believe that we had about 600 single aminoacid mutations, each of them conferring a definite improvement of the initial ATP synthase activity, IOWs a ladder of 600 events similar to the two or three events we found in the optimization of chloroquine resistance! We should believe that, surrounding that initial 10 AAs variation and starting function, there is for some reason not a small optimization space of a few aminoacids, as we have observed in all real examples, but a huge and continuous optimization space of 600 gradual steps!

Now, is there any conceptual reason why that idea should be true? And is there any empirical evidence that it is true? Oh, I realize that I have expressed my challenge again. With a specific example, just to help those who want to answer. :)

gpuccio
October 30, 2017, 01:26 AM PDT
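The size of the target described above is easy to quantify. A short sketch of the arithmetic, using only the 600-residue figure quoted in the comment:

```python
import math

conserved = 600                            # conserved residues, bacteria to humans
log10_space = conserved * math.log10(20)   # ~780.6
bits = conserved * math.log2(20)           # ~2593
print(f"20^{conserved} ~ 10^{log10_space:.0f} sequences (~{bits:.0f} bits)")
# ~10^780 sequences: compare with the ~10^43 states estimated as available
# to evolution earlier in this thread.
```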
To all interested: But let's go back to natural selection, and to the protein functional space. I think we all agree, even our kind darwinist interlocutors, that RV alone can do little: and yet we have seen that it has the fundamental role, in the algorithm, of finding the initial starting function. And the initial starting function must always be simple: it's not important if it is 1, or 2, or 3, or 4 AAs. It's always simple. And the improbability of finding it by RV alone increases exponentially with each new aminoacid that is added to the initial functional nucleus.

Now, just to illustrate the reasoning, let's refer again to an old friend: ATP synthase. In particular, the two alpha and beta chains which, in three copies each, make up the bulk of the F1 subunit. You can see a good picture of the whole molecule here: https://en.wikipedia.org/wiki/ATP_synthase#/media/File:Atp_synthase.PNG The alpha and beta chains are the part in red and pink, in the lower part of the picture. That is exactly the part that binds ADP and phosphate, then changes its conformation and squeezes the two molecules together, synthesizing ATP and storing in its molecule the biochemical energy that comes from the work of the rest of the molecule. You can see a good video explaining the main concepts here: https://www.youtube.com/watch?v=b_cp8MsnZFA and another one here: https://www.youtube.com/watch?v=39UKSfsc9Z0

Now, we are discussing here only the sub-part which operates the final function: a) binds ADP and phosphate b) squeezes them together c) releases ATP, returning to the original conformation. Therefore, this part undergoes three conformational changes: those changes are effected by the rotation of a motor, made of the three remaining chains, gamma, delta and epsilon, which in turn rotates because it is coupled to the energy derived from the passage of protons from one side of the membrane (the intermembrane space) to the other side (the mitochondrial matrix).

Now, why are we discussing the alpha-beta part of the F1 subunit? Because it is by far the most conserved part of the whole molecule. As I have said many times, the alpha and beta chains are two rather long protein chains (553 and 529 AAs), extremely conserved in all forms of life. Just to exemplify, the alpha chain presents: 290 identities, 373 positives and 561 bits of homology between humans and E. coli. The beta chain presents: 334 identities, 383 positives and 663 bits of homology between humans and E. coli. So, we can certainly argue that the specific geometry and biochemical functionality of this part of the F1 subunit must be extremely fine-tuned and precise, to make the molecule capable of assuming the three specific conformations that allow it to do what it does. More than 1000 aminoacids are necessary to get the structure that is repeated three times in the hexamer. 624 of them have been conserved for billions of years! More in next post.

gpuccio
October 30, 2017, 01:08 AM PDT
Origenes: Thank you again for the references and the summary. Of course, Axe is perfectly right! :)

gpuccio
October 29, 2017, 10:53 AM PDT
gilthill: Thank you for your kind words and attention! :) Defending Behe's argument is really a pleasure and an honour. I have not read that paper, but I will do so as soon as possible. I just wonder, how did they get it published? :) The abstract is already very interesting:
To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years.
That is simply what we expect when we transfer Behe's results about chloroquine resistance to a human scenario! And it's not different from the results of the "waiting for two mutations" paper, by Durrett and Schmidt:
Consistent with recent experimental observations for Drosophila, we find that a few million years is sufficient, but for humans with a much smaller effective population size, this type of change would take >100 million years.
OK, so Sanford is still optimistic with his 84 million years for two nucleotides: Durrett and Schmidt go for >100 million years! However, the two estimates are quite similar. It seems that we are in the right order of magnitude, for that specific problem. Good luck to all my darwinist friends! :)

gpuccio
October 29, 2017, 10:52 AM PDT
GPuccio @333
GPuccio: Another huge probabilistic barrier lies in the search space ocean which surrounds functional islands, preventing any pathway from one island to a different one, and even between separated islands which implement, with different levels of efficiency, the same function (the rugged landscape).
Axe provides several arguments as to why the search space for proteins is enormous and function is scarce. A summation:

(1) Proteins are large: for stability, for being able to have important interactions with the substrate some distance away from the place where the actual chemical conversion occurs, and for being able to perform simultaneous processes occurring at different sites on the same enzyme (see p. 3 and p. 4).

(2) The rarity of functional folds: each type of fold has a unique complex tertiary structure; the SCOP classification of protein structures currently has 2,008 different structural categories for protein domains (more on p. 5 and p. 6).

(3) No structural modularity: "the highly cooperative nature of protein folding [44] means that stable structure forms all at once in whole chunks—domains—rather than in small pieces. Consequently, self-contained structural modules only become a reality at the domain level, which makes them unhelpful for explaining new folds at that level" (see p. 8 and p. 9).

Origenes
October 29, 2017, 05:56 AM PDT
Many thanks gpuccio for all your hard work and wonderful writings here; they really represent an invaluable resource for anyone interested in the evolution debate. Regarding the waiting time problem, I am so pleased with your analysis that so clearly vindicates Behe's argument! On the same topic, did you read the article by Sanford et al. entitled "The waiting time problem in a model hominin population"? It is really a must read; indeed, using a different approach than Behe, namely numerical simulation, Sanford et al. demonstrate that it is absolutely impossible to go from ape to man with the RV + NS algorithm.

gilthill
October 29, 2017, 03:50 AM PDT
Origenes: OK, let's assume that. :)

gpuccio
October 28, 2017, 03:38 PM PDT
Let's assume that Larry Moran acted on trust in his colleagues: 'they all say that Behe assumes simultaneous mutations, so that must be correct.' This scenario doesn't picture Larry as an independent thinker, but it saves him from having argued in bad faith.

Origenes
October 28, 2017, 03:29 PM PDT
Origenes: Thank you for the documentation of darwinist errors and misrepresentations. The same bad faith is in all those criticisms. Thank you also for quoting Behe's words exactly (I did not have the book readily available to look it up). Of course, there is no reference at all to "simultaneous mutations".

To quote Paul Gross: "It would be difficult to imagine a more breathtaking abuse of statistical genetics", and, more in general, of statistical concepts, than what we can find in those "criticisms" by Gross himself, Coyne, Matzke, Carroll, and Moran. I would expect nothing better from the likes of Coyne and Matzke, if I have to be sincere, but I must confess that I am a little disappointed in Moran. I thought better of him. This is really bad reasoning, and probably bad faith reasoning, of the worst kind.

If these people had just stopped a moment to consider the basics of probability theory, they would have avoided those gross and foolish statements. What has simultaneity to do with a product of probabilities? Do you have to toss two coins simultaneously to have 1 probability out of 4 of getting two heads? Isn't it the same if you toss one coin twice? I suppose Coyne and Moran and others are fond of their scenarios where coins are tossed in the air at the same time, and dice are rolled simultaneously: to quote them again, "If it looks impossible, this is only because of their bizarre and unrealistic assumptions". I am really amazed at the arrogance of those people.

Behe's reasoning is right, humble, pertinent, realistic, correct, scientific and in perfect accord with probability theory. Behe is a true scientist, and a very good man.

gpuccio
October 28, 2017, 01:54 PM PDT
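The coin analogy is easy to verify numerically. A minimal Monte Carlo sketch (trial count and seed are arbitrary): two coins tossed at once, and one coin tossed twice at different moments, give the same 1-in-4 frequency of double heads; what matters is independence, not simultaneity.

```python
import random

random.seed(0)
TRIALS = 10**6

def toss():
    return random.random() < 0.5   # heads with probability 1/2

# "Simultaneous": two coins thrown at once.
simultaneous = sum(toss() and toss() for _ in range(TRIALS))

# "Sequential": one coin now, the second at any later time.
sequential = 0
for _ in range(TRIALS):
    first = toss()                 # ...an arbitrary delay here changes nothing...
    sequential += first and toss()

print(simultaneous / TRIALS, sequential / TRIALS)   # both ~0.25 = 1/2 * 1/2
```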
On Moran's claim that Behe asks for two simultaneous mutations. Moran isn't the one who came up with that 'criticism':
Paul Gross: “Behe assumes simultaneous mutations at two sites in the relevant gene, but there is no such necessity and plenty of evidence that cumulativeness, rather than simultaneity, is the rule. As Nature’s reviewer (Kenneth R. Miller) notes, ‘It would be difficult to imagine a more breathtaking abuse of statistical genetics.'” (The New Criterion, 2007) Jerry Coyne: “What has Behe now found to resurrect his campaign for ID? It’s rather pathetic, really. … Behe requires all of the three or four mutations needed to create such an interaction to arise simultaneously. … If it looks impossible, this is only because of Behe’s bizarre and unrealistic assumption that for a protein-protein interaction to evolve, all mutations must occur simultaneously, because the step-by-step path is not adaptive.” (The New Republic, 2007) Nick Matzke: “Here is the flabbergasting line of argument. First, Behe admits that CQR evolves naturally, but contends that it requires a highly improbable simultaneous double mutation, occurring in only 1 in 10^20 parasites. … The argument collapses at every step.” (Trends In Ecology and Evolution, 2007) Sean Carroll: “Behe makes a new set of explicit claims about the limits of Darwinian evolution, claims that are so poorly conceived and readily dispatched that he has unwittingly done his critics a great favor in stating them. … Behe’s main argument rests on the assertion that two or more simultaneous mutations are required for increases in biochemical complexity and that such changes are, except in rare circumstances, beyond the limit of evolution. .. Examples of cumulative selection changing multiple sites in evolving proteins include … pyrimethamine resistance in malarial parasites — a notable omission given Behe’s extensive discussion of malarial drug resistance. … [T]he argument for design has no scientific leg to stand on.” (Science, 2007)
Where did they get that from? From Behe's book The Edge of Evolution. Which part? This part:
Recall that the odds against getting two necessary, independent mutations are the multiplied odds for getting each mutation individually. What if a problem arose that required a cluster of mutations that was twice as complicated as a CCC? (Let’s call it a double CCC.) For example, what if instead of the several amino acid changes needed for chloroquine resistance in malaria, twice that number were needed? In that case the odds would be that for a CCC times itself. Instead of 10^20 cells to solve the evolutionary problem, we would need 10^40 cells. Workers at the University of Georgia have estimated that about a billion billion trillion (10^30) bacterial cells are formed on the earth each and every year. … If that number has been the same over the entire several-billion-year history of the world, then throughout the course of history there would have been slightly fewer than 10^40 cells, a bit less than we’d expect to need to get a double CCC. The conclusion, then, is that the odds are slightly against even one double CCC showing up by Darwinian processes in the entire course of life on earth. [Michael Behe, The Edge of Evolution: The Search for the Limits of Darwinism, p. 135]
Where does it say "simultaneous"? I don't know.

Origenes
October 28, 2017, 12:55 PM PDT
To all interested: Let's finish this analysis of Larry Moran's statements. In his more recent page, he says:
The interesting part of his book was the correct claim that there was an edge of evolution and the incorrect claim that you can't get chloroquine resistance by a stepwise, sequential route.
Why incorrect? It is perfectly correct! Of course, as we have already said, and as is clearly shown in the Summers paper, the pathway to chloroquine resistance is made up of two parts:

a) a first step, which requires two neutral mutations that must happen in the same individual or clone, and that must be present at the same time for NS to act. Each mutation is independent of the other one, and neither of the two mutations can be selected if isolated. IOWs, both mutations are neutral, if isolated, in regard to chloroquine resistance.

b) successive steps, which are stepwise selectable (IOWs each single mutation confers an optimization of the existing resistance).

Now, while the steps in b) are certainly stepwise selectable, chloroquine resistance cannot happen at all if a) does not take place as a first step. The two mutations that make up step a), therefore, cannot be obtained by a "stepwise, sequential route". There is no route to them, least of all a stepwise, sequential one. They must just happen, independently of one another. Of course, one will happen before the other. It may be one, it may be the other one. It has no importance at all. Their probability is the probability of two independent events: the product of the two probabilities. This is the scenario if they are both neutral, as seems to be the case in our example. Of course, if even one of the two were deleterious, the scenario would be much more catastrophic! But there is no need for either of them to be deleterious, or to be cancelled by negative selection, as Moran seems to believe. The probabilities we have computed (the same stated by Behe and accepted by Moran) are the probabilities of the two events when the first one is completely neutral, and is perfectly retained in the individual clone where it happens.

So, again, Behe is perfectly right: chloroquine resistance cannot be obtained by a "stepwise, sequential route". Not at all. Because, to be initiated, the pathway requires two independent mutations, not stepwise, not sequential, in any sense. So, what is Moran saying? That after the first two independent mutations, which already confer a good level of chloroquine resistance, other optimizing mutations are added by RV+NS? Yes, that is true, and so? That changes nothing. Behe is right all the same. And Moran is wrong all the same. Behe has always spoken of two mutations, two independent events that are necessary to reach chloroquine resistance. His probabilistic considerations are about that scenario. And they are completely right. And Moran himself admits that Behe could not know the details in the Summers paper at the time he wrote his book. Therefore, he could not know that, beyond the two initial independent mutations that he had correctly predicted, a few additional mutations could optimize the function by RV+NS.

The important conclusion, again, is that the first and basic barrier to the RV+NS algorithm is the complexity of the initial variation which generates the selectable function. That is a completely insurmountable barrier for any function with a minimal complexity. Adding a few optimizing mutations is rather easy, for simple functions with a modest continuous functional landscape surrounding them. But that optimization quickly reaches a roof.

Another huge probabilistic barrier lies in the search space ocean which surrounds functional islands, preventing any pathway from one island to a different one, and even between separated islands which implement, with different levels of efficiency, the same function (the rugged landscape).

gpuccio
October 28, 2017, 12:18 PM PDT
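The probabilistic core of the argument above fits in a few lines. This sketch simply restates the arithmetic (the 1-in-10^10 rate is Behe's published figure; the 10^40-cell comparison comes from the Behe passage quoted further down this thread):

```python
p_single = 1e-10            # probability of one specific mutation, per cell
p_pair = p_single ** 2      # two independent, individually neutral mutations
print(f"P(both in the same lineage) = {p_pair:.0e}")    # 1e-20
print(f"cells needed on average     = {1 / p_pair:.0e}")  # ~1e20, a CCC

# A "double CCC" squares the requirement again:
print(f"double CCC: ~{1 / p_pair**2:.0e} cells "
      f"(vs ~1e40 cells in the whole history of life)")
```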
Dionisio: Yes, but did the changes happen simultaneously? :)

gpuccio
October 28, 2017, 11:01 AM PDT
gpuccio @329: Please, don't be so strict, those two names spell almost identically: chloroquine (11 letters), atovaquone (10 letters); 6 common letters: 2 'o' + 1 'q' + 1 'u' + 1 'n' + 1 'e'; only 5 different letters :)

Dionisio
October 28, 2017, 08:23 AM PDT
To all interested: Now, the important part: where is Larry Moran wrong?

Moran makes some precise statements about the Summers paper, and about Behe's ideas. While I am ready to admit that he is in general respectful enough of Behe as a fellow biochemist, that does not prevent him from criticizing his ideas explicitly. I will refer here to Moran's page about the Summers paper, already quoted here many times: http://sandwalk.blogspot.it/2014/07/michael-behe-and-edge-of-evolution.html but also to a previous page of Moran's about Behe's ideas: http://sandwalk.blogspot.it/2010/10/edge-of-evolution.html

In his more recent page, Moran summarizes Behe's ideas rather vaguely:
Behe uses the example of drug resistance in Plasmodium falciparum (malaria parasite). Resistance to atovaquone occurs quite often so that's probably due to a single mutation. Resistance to chloroquine, on the other hand, is rare so it's probably due to multiple mutations in the relevant gene (PfCRT, a gene that encodes a transporter protein)
Indeed, in his older page, Moran had described Behe's argument with greater precision and accuracy:
Behe points out that it is sometimes very difficult for the malaria-causing parasite, Plasmodium falciparum, to develop resistance to some drugs used to treat malaria. That's because the resistance gene has to acquire two specific mutations in order to become resistant. A single mutation does not confer resistance and, in many cases, the single mutation is actually detrimental. P. falciparum can become resistant because the population of these single-cell organisms is huge and they reproduce rapidly. Thus, even though the probability of a double mutation is low it will still happen. If the probability of a single mutation is about 10^-10 per generation then the probability of a double mutation is 10^-20. He refers to this kind of double mutation as CCC, for "chloroquine-complexity cluster," named after mutation to chloroquine resistance in P. falciparum. Behe's calculation is correct. If two simultaneous [mutations] are required then the probability will, indeed, be close to 1 in 10^20.
The emphasis on "simultaneous" is mine. I will soon explain the reason for that. Let's go again to the more recent page. Moran goes on:
The interesting part of his book was the correct claim that there was an edge of evolution and the incorrect claim that you can't get chloroquine resistance by a stepwise, sequential route.
Well, here is a very obvious error: Behe's argument is not that "you can't get chloroquine resistance by a stepwise, sequential route". Not at all. It was, rather, that chloroquine resistance required two independent mutations, each of them not selectable, and that therefore its probability was the product of the two individual mutation probabilities. As Moran himself had correctly described in his older page: "That's because the resistance gene has to acquire two specific mutations in order to become resistant. A single mutation does not confer resistance and, in many cases, the single mutation is actually detrimental." Again, the emphasis on "detrimental" is mine, and I will explain the reason for that soon.

Now, I believe that Moran here is equivocating, more or less intentionally, on some important points. Let's try to make them clear:

1) As well described in the Summers paper, we can say that the pathway to chloroquine resistance is made of two parts: a) a first step, which requires two neutral mutations; b) successive steps, which are stepwise selectable (IOWs each single mutation confers an optimization of the existing resistance). That is what we know now, but Behe did not know all those details at the time he wrote his book. However, his argument, as recognized by Moran himself, was that chloroquine resistance required two independent mutations, each of them not individually beneficial, and therefore not selectable. That is the reason that explains why it is much rarer: in Behe's (and Moran's) words, if the probability of one mutation is 1:10^10, then the probability of two independent mutations is 1:10^20. So, Behe was completely right, and the Summers paper completely confirms his prediction.

2) But then, why is Moran so critical of Behe's statements? The simple truth is that he, more or less intentionally, misrepresents them. And then he criticizes his own misrepresentation. The misrepresentation is based on common misunderstandings of both probability theory and evolution theory. And it is realized through the introduction of two inappropriate and misleading words: a) "simultaneously" b) "detrimental"

a) Why does Moran say: "If two simultaneous [mutations] are required then the probability will, indeed, be close to 1 in 10^20"? Where does the idea that the two mutations must be "simultaneous" come from? It is not true that the two mutations must be simultaneous, in the sense of occurring at the same time, or in the same individual. The simple requirement is that the two mutations must, at some time, be present "at the same time" in some individual in the population. The difference is huge. Let's say that there is an individual in the population, let's call it "a", where one of the two mutations happens. The probability of that event is 1:10^10. There is no need for the other mutation to happen in the same individual, at the same time (IOWs, that it be "simultaneous"). What is needed is that, while the first mutation is passed on by "a" to its direct descendants, let's call them "the a clone", at some time the second mutation happens in one individual of that clone. A lot of time can pass before the second event. There is no need for simultaneity. What is needed is that, at some time, we have at least one individual with both mutations. Then, and only then, the new trait becomes selectable, because each of the two mutations in itself is neutral. This is exactly the scenario described by Summers for chloroquine resistance.

And in this scenario, the probabilities multiply: if the probability of the first event is 1:10^10, the probability of having the two independent events in the same individual clone, whatever the time needed for that, is about 1:10^20. No need for simultaneity. That is a false concept introduced by Moran, nothing else. The only probabilistic requirement is that the two events must be independent. That means that the first event does not improve the probability of the second one. And that is true, because the first event is not selectable; therefore it does not influence the probabilities of the second event, because it has no effect on the probabilistic resources.

b) Why does Moran say: "That's because the resistance gene has to acquire two specific mutations in order to become resistant. A single mutation does not confer resistance and, in many cases, the single mutation is actually detrimental."? Why the attention to the possibility that one or even both mutations can be "detrimental"? That is certainly possible, but it is not part of the main argument. The main argument is that the two mutations are neutral. If the two mutations are neutral, then the probability of both occurring is p*p, 1:10^20 in our example. If even one of the two events were detrimental, the probabilities would be much lower. But that's not Behe's argument. Behe's argument is that the probability of two neutral mutations is the product of each individual probability.

Now, in his older page, Moran does not explicitly state his argument against Behe's conclusions. He just opens a debate in the comments. But, if we look at his personal comment labeled "Wednesday, October 06, 2010 4:11:00 PM", we can understand what he is proposing:
Steve LaBonne says, Allowing multiple rolls of only ONE of the dice will still make Behe's number way off. This doesn't help him much unless BOTH mutations are SUFFICIENTLY detrimental to be subject to strong purifying selection. Only that constraint would be sufficient to require that they be (nearly) simultaneous. I think you're close to understanding the main problem with Behe's argument. He assumes that deleterious mutations will always be rapidly eliminated from the population. That's consistent with the common understanding of evolution so it appears to set up an insoluble problem. However, you and I (and many others) know that evolution doesn't work that way. There's a lot of sloppiness and accident so it's quite possible for inefficient proteins to hang around for a long time. You could get the same effect by gene duplication and messing with the spare copy.
Again, the emphasis is mine. As you can see, he is criticizing Behe for "assuming that deleterious mutations will always be rapidly eliminated from the population". It's the "deleterious" misrepresentation. But Behe is not assuming that. Not at all. His computation of the probabilities refers to two neutral mutations, not to two deleterious mutations. OK, no more time now. I will add some more comments about Moran in the next post, as soon as I can.

gpuccio
October 28, 2017, 07:56 AM PDT
To all interested: OK, now let's go to Gordon Davisson and Larry Moran. In my OP, I quote Gordon Davisson as saying:
This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.
The discussion is, of course, about this page by Larry Moran: http://sandwalk.blogspot.it/2014/07/michael-behe-and-edge-of-evolution.html where he makes comments about the Summers paper that I have discussed here in detail. Most of the discussion is about Fig. 3 (A and B) from that paper. Now, you know how much I appreciate Gordon Davisson, but I must say that he makes a few errors here. I will try to clarify:

1) The discussion is about chloroquine resistance, not atovaquone resistance. That's only a minor mistake, of no importance, and I mention it only for the sake of clarity.

2) The real problem is when he says: "In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible)" He is speaking of the K1 variant. Now, it is absolutely not true that the K1 variant "had to pass at least 3 steps unassisted". He is referring, of course, to the 3 neutral mutations that are found in the K1 variant if compared to the wildtype. Which, as we will see, are really only two, indeed one and a half. Let's see: there are, indeed, 7 mutations. 5 of them are beneficial: 75E + 76T, which, as we know, are beneficial only as a couple, and appear by RV; and 220S, 74I, 326S, which are added by RV + NS. If you follow one of the possible pathways for K1 in Fig. 3 A, for example: HB3 - D39 - D32 - D30 - D20 - D10 - GB4 - K1, we can see that the other two mutations present in K1, 371I and 271E, are neutral in that pathway. However, 371I is beneficial in another pathway, so we can consider it potentially beneficial in the right context. But 271E is always neutral.

Now, my point is the following: if some mutation that we find in a final functional target is completely neutral to the function, there is absolutely no reason to state that the protein "had to pass that step unassisted" in its evolution. The simple truth is that that particular step is a mere accident. Indeed, the 271E mutation is present only in three natural variants, K1, Dd2 and GB4, which are of course related (they come from the same pathway), and in none of them does it seem to contribute to the function. There is also no reason to believe that the neutral mutation had to be fixed by genetic drift. That is absolutely unnecessary. A neutral mutation that happens to be found in an evolutionary pathway was probably simply "hitchhiked". I quote from the paper: The spectrum of adaptive mutations in experimental evolution https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2271163/
Whole genome sequencing has the potential to reveal all mutations in an evolved genome. Among these are the beneficial mutations, but also neutral or deleterious mutations that happen to arise in the same background and hitchhike to high frequency. This is particularly an issue in asexual populations where the entire genome is one linkage group
IOWs, if some neutral, or even deleterious, mutation arises in the same individual where a beneficial mutation arises, it can be hitchhiked by the NS that acts on the beneficial mutation, and therefore fixed only because it is linked to the beneficial mutation. Mere accident. It can happen or not happen, nothing changes. Neutral genetic drift has no role here.

So, Gordon Davisson gives unnecessary relevance to the presence of neutral mutations in natural variants: if they do not contribute to the function, they mean nothing. But let's go to Gordon Davisson's last "error": 3) He believes too much in what Larry Moran says! :) But that brings us to the question of Larry Moran's statements, which will be the subject of the next post.

gpuccio
October 28, 2017, 05:39 AM PDT
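Hitchhiking of a linked neutral mutation is simple to demonstrate. Below is a toy Wright-Fisher-style simulation (population size, selection coefficient and seed are invented for the illustration): in an asexual population, a neutral passenger that arises in the same genome as a beneficial mutation is swept to fixation with it, with no independent drift to fixation required.

```python
import random

random.seed(2)
N, s = 1000, 0.05   # toy population size and selection coefficient

def run_to_fixation():
    # One founder carries a beneficial mutation plus a linked neutral
    # passenger; the rest carry neither. Asexual, so no recombination
    # ever separates the two mutations.
    pop = [(True, True)] + [(False, False)] * (N - 1)
    while len(set(pop)) > 1:
        weights = [1 + s if beneficial else 1.0 for beneficial, _ in pop]
        pop = random.choices(pop, weights=weights, k=N)  # selection + drift
    return pop[0]

attempts = 0
while True:
    attempts += 1
    beneficial, passenger = run_to_fixation()
    if beneficial:   # most single copies are lost to drift; retry until one fixes
        break

print(f"beneficial mutation fixed on attempt {attempts}; "
      f"linked neutral passenger fixed with it: {passenger}")
```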
Origenes and Mung: Yes, that would be a super-miracle, indeed. But you know, when miracles happen, they become visible. They become facts. The simple problem with the idea of a miraculous protein space is that it's nowhere to be seen, while resurrected Lazarus could certainly be seen!

So, if the functional protein space is so miraculous, how is it that there was no progression from the mutated PfCRT with good affinity for chloroquine to, say, some enzyme that can degrade chloroquine? The scenario is favourable, after all. We have this complex functional protein, a 424 AAs long protein which does something that we still don't understand well, and which has already acquired, by a few mutations, a good affinity for a new substrate, chloroquine. The protein already has some good folding: after all, it is a functional protein. So, there is no reason why it should not reach some distant functional island that confers the ability to degrade chloroquine, instead of simply transporting it out of the vacuole. What's the problem? The protein functional space is so miraculous, after all! What's one more little miracle? Why does that kind of thing simply not happen?

gpuccio
October 28, 2017, 04:05 AM PDT
Dionisio: Of course, dynein! How could I not think of it! And yet, it is a very good candidate for one of my future OPs. It is so obvious that dynein arose from RV+NS. Perhaps we should really switch sides! :) (For all neo-darwinists who may be devoid of a sense of humour: we are just joking! :) )

gpuccio
October 28, 2017, 03:54 AM PDT
Perhaps this is a biology case that answers gpuccio's challenge @103 better than Mung's sound-sensitive spot example? Here's a set of three consecutive video presentations of proteins that apparently resulted from RV+NS producing new complex functional specified information (at least the presenter seems to imply that?): 1. https://www.youtube.com/embed/9RUHJhskW00 2. https://www.youtube.com/embed/lVwKiWSu8XE 3. https://www.youtube.com/embed/FRtqfpO8THU It seems like this might persuade me to switch sides in this debate? :)

Dionisio
October 28, 2017, 12:12 AM PDT
Mung @324 The quote is from Kauffman, Investigations, p. 9.
Mung: This same question applies to the work by Andreas Wagner. If the search space is constructed in a miraculous manner, how does that possibly exclude the miraculous?
Indeed. It is an attempt to relocate the miraculous (the source of information) where it is less conspicuous: the environment, or, in the case of Wagner, a miraculous hyperdimensional cube search space.

Origenes
October 27, 2017, 06:54 PM PDT
Origenes:
...where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
Can you tell me where this quote comes from? This same question applies to the work by Andreas Wagner. If the search space is constructed in a miraculous manner, how does that possibly exclude the miraculous?

Mung
October 27, 2017, 06:20 PM PDT
GPuccio provides us with some excellent arguments as to why there is no continuous functional landscape, and the paper by Douglas Axe (see #314) contains some more, but let's suppose, for the sake of argument, that there is a continuous landscape. How would we explain that fact?
Stuart Kauffman: If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?
If the fitness landscape were such that it steers blind processes to discoveries far beyond our comprehension, what would explain that scenario?
Dembski: The fitness landscape supplies the evolutionary process with information. Only finely tuned fitness landscapes that are sufficiently smooth, don't isolate local optima, and, above all, reward ever-increasing complexity in biological structure and function are suitable for driving a full-fledged evolutionary process. So where do such fitness landscapes come from? ... Okay, so the environment supplies the information needed to drive biological evolution. But where did the environment get that information? From itself? The problem with such an answer is this: conservation of information entails that, without added information, biology's information problem remains constant (breaks even) or intensifies (gets worse) the further back in time we trace it. ... The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity -- the information, absent external input, had to be there from the start. — [source]
Origenes
October 27, 2017, 03:28 PM PDT
To all interested: Still two important points:

5) The function that arises as the first random step, and is then optimized, must be directly and strongly related to survival and/or reproduction. In all well known cases of microevolution, the function is directly related to survival: antibiotic resistance, the rugged landscape experiment. That brings us to the final point:

6) The environmental pressure must be extreme: antibiotic resistance arises so easily because the antibiotic that pervades the system is a direct cause of extinction for all non-resistant forms of the population.

The advantages of points 5) and 6) are rather obvious: the selection coefficient of the new trait is extremely high. IOWs, the new trait can be selected and fixed with great efficiency, and the time to fixation is relatively short. IOWs, this is the neo-darwinian algorithm at its best: huge populations, high reproduction rate, extreme environmental pressure, very simple functions that arise from 1 or 2 mutational events and can easily be optimized by single mutation events, by a definite ladder of increasing function. A piece of cake, indeed! And yet, CR is not so easy at all, as we have seen. It already imposes severe restrictions on the algorithm, and the simple bottleneck of waiting for two mutations makes it a rather rare event, even in those favorable settings.

Now, a few words about optimization by a continuous functional landscape. We have seen that, in well known cases of microevolution, the optimization proceeds for a few steps, and then stops. There are, I believe, two strong reasons for that: a) The continuous landscape is limited, it is just a small neighbourhood of the simple starting functional island. Moreover, such a continuous neighbourhood is probably more easily found in very simple functions. b) The optimization can only go so far. Starting from that functional island, after a few optimizing mutations, any new change can only be neutral or deleterious.

So, we can reasonably believe that, in the case of CR, the few natural forms of resistant molecules represent the best that can be done. It is no accident, I believe, that those rare cases of spontaneous resistance, arising separately from 2 different starting islands, have similar levels of functionality. If you look at Fig. 2A of the Summers paper, you can see that all the "natural" forms deriving from the ET route (Dd2, 783, K1, GB4, China e) have resistance rather comparable to that of Dd2. The natural forms arising from the TD route (Ecu, 7G8, Ph1, Ph2) have comparable values of resistance too, although lower than the values in the ET route. What does that mean? It means that those forms have probably reached the best optimization possible, starting from their respective initial islands. And it means that the ET initial island has better possibilities of optimization than the TD initial island.

So, we can learn some important concepts from these data: a) The routes of optimization of these simple starting functions seem to be rather short: a roof is quickly reached, and nothing better can be done. b) The initial island is important: it conditions the roof of optimization that can be reached. c) Even for a very simple function like CR, we have two different initial islands. d) The functional space between those two islands is not continuous. You have to start either from the ET route, or from the TD route, and follow the respective pathways. You cannot mix the two routes. I quote from the paper:
These two mutational routes are referred to henceforth as “ET” (referring to 75E and 76T) and “TD” (referring to 76T and 326D). Somewhat surprisingly, the combination of N75E and N326D resulted in a decrease, rather than an increase, in CQ transport activity; the addition of N75E to PfCRTEcu1110 (C9) or S326D to PfCRTDd2 (C14) significantly reduced CQ uptake.
We have here the simplest form of rugged landscape: different functional islands, minimally isolated, that can implement the same function with different levels of optimization. Well, in the next post I will discuss some partially wrong (IMO) interpretations of these data by Gordon Davisson and Larry Moran.

gpuccio
October 27, 2017, 02:55 PM PDT
