Uncommon Descent Serving The Intelligent Design Community

What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson


An interesting discussion, still absolutely open, has taken place in the last few days between Gordon Davisson and me on the thread:

Some very good friends, like Dionisio, Mung and Origenes, seem to have appreciated the discussion, which indeed has touched important issues. Origenes has also suggested that it could be transformed into an OP.

Well, I thought that it was probably a good idea, and luckily it did not require much work. 🙂   So, here it is. Gordon Davisson’s posts are in italics. It’s a bit long, and I am sorry for that!

I thank in advance Gordon Davisson for the extremely good contribution he has already given, and for any other contribution he will give. He is certainly invited to continue the discussion here, if he likes (and I do hope he does!). Of course, anyone else who could be interested is warmly invited to join.  🙂

Gordon Davisson (post #5):

Why is this supposed to be a problem for “Darwinism”? A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual. Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit. If I’ve accounted for everything, the overall rate of fixation of beneficial mutations per generation should be: (fraction of mutations that’re beneficial) * (fraction of beneficial mutations that aren’t wiped out by genetic drift) * (# of mutations per individual) * (population).
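Gordon’s back-of-envelope product can be sketched in a few lines of Python; every input below is an illustrative placeholder, not a measured value:

```python
# Sketch of the fixation-rate estimate quoted above.
# All four inputs are illustrative placeholders, not measured values.
frac_beneficial = 1e-6          # fraction of mutations that are beneficial
frac_escaping_drift = 0.02      # fraction of those not wiped out by drift
mutations_per_individual = 100  # new mutations per individual per generation
population = 1e6                # population size

fixations_per_generation = (frac_beneficial
                            * frac_escaping_drift
                            * mutations_per_individual
                            * population)
print(fixations_per_generation)  # ~2 fixations per generation with these inputs
```

Plugging in different placeholder values shows how strongly the estimate depends on the population term, which is the point being made.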

Florabama’s description is exactly wrong. Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection. You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter. (And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.)

gpuccio (post #11):

Gordon Davisson:

You say:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS is to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

While that is simply impossible, because those “stepwise” mutations simply do not exist and never will, even if we imagine that they exist, the process certainly requires a lot of time.

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all tracks of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.
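For a sense of scale, the raw size of protein sequence space can be computed directly; this is simple illustrative arithmetic, not the paper’s own estimate (their 10^70 figure concerns the library size needed for an adaptive walk, a different quantity):

```python
import math

def sequence_space(n_residues, alphabet=20):
    """Number of distinct sequences of length n_residues over the
    20-letter amino-acid alphabet."""
    return alphabet ** n_residues

n = 35  # the number of substitutions quoted from the paper
print(f"20^{n} ≈ 10^{math.log10(sequence_space(n)):.1f}")  # ~10^45.5
```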

You say:

Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection.

Yes, but only if each individual mutation confers a strong enough reproductive advantage. That must be true for each single specific aminoacid position of each single new functional protein that appears in natural history. Do you really believe that? Do you really believe that each complex functional structure can be deconstructed into simple steps, each conferring reproductive advantage? Do you believe that we can pass from “word” source code to “excel” source code by single byte variations (yes, I am generous here, because a single aminoacid has at most about 4 bits of information, not 8), each of them giving a better software which can be sold better than the previous version?

Maybe not even “credo quia absurdum” will suffice here. There are limits to the absurd that can be believed, after all!

You say:

You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter.

No, the argument of IC, as stated by Behe, is about functions which require the cooperation of many individual complex proteins. That is very common in biology.

The argument of functional complexity, instead, is about the necessity of having, in each single protein, all the functional information which is minimally necessary to give the function of the protein itself. How many AAs would that be, for example, for dynein? Or for the classic ATP synthase?

Here, the single functional element is so complex that it requires hundreds of specific aminoacids to be of any utility. If that single functional element also requires to work with other complex single elements to give the desired function (which is also the rule in biology), then the FC of the system is multiplied. That is the argument of IC, as stated by Behe. The argument for FC in a single functional structure is similar, but it is directly derived from the concept of CSI as stated by Dembski (and others before and after him).

And finally you say:

And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.

It’s not another matter. It’s simply a wrong matter.

Both FC and IC are huge problems for any attempt to defend the neo-darwinian theory. I am not surprised at all that “evolutionists” dispute that, however. See Tertullian’s quote above!

Gordon Davisson (post #35):

Hi, gpuccio. Sorry about my late reply (as usual, I’m afraid). Before I comment specifically on what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations. Note that all of these would be considered beneficial mutations:

* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).
* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

Your argument is (if I may oversimplify it a bit) essentially that the last two are vanishingly rare. But when we look at the overall rate of beneficial mutations, they’re mixed in with other sorts of beneficial mutations that’re completely irrelevant to what you’re talking about! Additionally, several types of mutations that’re critical in your argument but are not immediately beneficial aren’t going to be counted in the beneficial mutation rate:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

Furthermore, one of the reasons the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

Now, on to your actual argument:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS is to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something!

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.
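Gordon’s description matches the standard population-genetic result (Kimura’s diffusion approximation; this formula is textbook material, not something stated in the thread): the fixation probability of a new mutation rises smoothly with its selection coefficient s, with no sharp threshold.

```python
import math

def p_fix(s, N):
    """Kimura's approximate fixation probability for a new mutation with
    selection coefficient s in a diploid population of N individuals
    (initial frequency 1/(2N)).  s = 0 gives the neutral drift value."""
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
for s in (0.0, 0.001, 0.01, 0.1):
    print(f"s = {s:g}: P_fix ≈ {p_fix(s, N):.4g}")
# No cutoff: weakly beneficial mutations fix at roughly 2s,
# a lower but still nonzero rate, exactly as described above.
```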

Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

While that is simply impossible, because those “stepwise” mutations simply do not exist and never will, even if we imagine that they exist, the process certainly requires a lot of time.

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.
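The drift mechanism invoked here can be illustrated with a toy Wright-Fisher simulation; this is my own sketch, independent of the atovaquone data. A new neutral allele, starting as a single copy, fixes with probability about 1/(2N): small, but not zero.

```python
import random

def neutral_fixes(N, seed=None):
    """Wright-Fisher drift of a single new neutral allele copy in a diploid
    population of N individuals (2N gene copies).  Returns True if the
    allele reaches fixation, False if it is lost."""
    rng = random.Random(seed)
    copies, total = 1, 2 * N
    while 0 < copies < total:
        p = copies / total
        # binomial sampling: each of the next generation's 2N copies
        # is drawn independently at the current allele frequency p
        copies = sum(rng.random() < p for _ in range(total))
    return copies == total

N, trials = 50, 4000
fixed = sum(neutral_fixes(N, seed=i) for i in range(trials))
print(f"observed fixation rate {fixed / trials:.4f} vs expected 1/(2N) = {1 / (2 * N)}")
```

With N = 50 the observed rate hovers around the expected 0.01, showing that unselected steps can indeed fix, just rarely.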

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all tracks of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Except we sometimes do find such traces. In the case of atovaquone resistance, many of the intermediates were found in the wild. For another example, in https://uncommondescent.com/intelligent-design/double-debunking-glenn-williamson-on-human-chimp-dna-similarity-and-genes-unique-to-human-beings/, VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination. Recombination among neutral or surviving entities may suppress negative mutations and thus escape from mutation-selection-drift balance. Although the importance of recombination or DNA shuffling has been suggested [30], we did not include such mechanisms for the sake of simplicity. However, the obtained landscape structure is unaffected by the involvement of recombination mutation although it may affect the speed of search in the sequence space.

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Or their model of the fitness landscape might not be completely accurate. I’m far from an expert on the subject, but from my read of the paper:

* They measured how much infectivity (function) they got vs. population size (larger populations evolved higher infectivity before stagnating), fit their results to a theoretical model of the fitness landscape, and used that to extrapolate to the peak possible infectivity … which matched closely to that of the wild type. But their experimental results only measured relative infectivities between 0.0 and 0.52 (using a normalized logarithmic scale), and the extrapolation from 0.52 to 1.0 is purely theoretical. How well does reality match the theoretical model in the region they didn’t measure?

* But it’s worse than that, because their measurements were made on one functional “mountain”, and the wild type appears to reside on a different mountain. Do both mountains have the same ruggedness and peak infectivity? They’re not only extrapolating from the base of a mountain to its peak, but from the base of one mountain to the peak of another. The fact that the infectivity of the wild type matches closely with their theoretical extrapolation of the peak is suggestive, but hardly solid evidence.

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, except that there are some conclusions available from the region of the landscape that they did make measurements on: between random sequences and partial function. They say:

The landscape structure has a number of implications for initial functional evolution of proteins and for molecular evolutionary engineering. First, the smooth surface of the mountainous structure from the foot to at least a relative fitness of 0.4 means that it is possible for most random or primordial sequences to evolve with relative ease up to the middle region of the fitness landscape by adaptive walking with only single substitutions. In fact, in addition to infectivity, we have succeeded in evolving esterase activity from ten arbitrarily chosen initial random sequences [17]. Thus, the primordial functional evolution of proteins may have proceeded from a population with only a small degree of sequence diversity.

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can. And they also showed that (as with the atovaquone resistance example) evolution doesn’t require stepwise-beneficial paths either. They found that stepwise-beneficial paths existed up to a relative fitness of 0.4, but they experimentally achieved relative fitnesses up to 0.52! So even with the small populations and limited evolutionary mechanisms they used, they showed it was possible to evolve significantly past the limits of stepwise-beneficial paths.

I don’t have to imagine this. They saw it happen.

gpuccio (posts 36–39, 41, 46, 48):

Gordon Davisson:

First of all, thank you for your detailed and interesting comments on what I wrote. You raise many important issues that deserve in-depth discussion.

I will try to make my points in order, and I will split them in a few different posts:

1) The relevance of the rate of “beneficial” mutations.

You say:

Before I comment specifically on what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations.

I don’t agree. As you certainly know, the whole point of ID is to evaluate the probabilistic barriers that make it impossible for the proposed mechanism of RV + NS to generate new complex functional information. The proposed mechanism relies critically on NS to overcome those barriers, therefore it is critical to understand quantitatively how often RV occurs that can be naturally selected, expanded and fixed.

Without NS, it is absolutely obvious that RV cannot generate anything of importance. Therefore, it is essential to understand and demonstrate how much NS can have a role in modifying that obvious fact, and the rate of naturally selectable mutations (not of “beneficial” mutations, because a beneficial mutation which cannot be selected because it does not confer a sufficient reproductive advantage is of no use for the model) is of fundamental importance in the discussion.

2) Types of “beneficial” mutations (part 1).

You list 5 types of beneficial mutations. Let’s consider the first 3 types:

Note that all of these would be considered beneficial mutations:
* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).

Well, I would say that these three groups have two things in common:

a) They are mutations which change the functional efficiency (or inefficiency) of a specific function that already exists (IOWs, no new function is generated).

b) The change is a minor change (IOWs, it does not imply any new complex functional information).

OK, I am happy to agree that, however common “beneficial” mutations may be, they almost always, if not always, are of this type. That’s what we call “microevolution”. It exists, and nobody has ever denied that. Simple antibiotic resistance has always been a very good example of that.

Of course, while ID does not deny microevolution, ID theory definitely shows its limits. They are:

a) As no new function is generated, this kind of variation can only tweak existing functions.

b) While the changes are minor, they can accumulate, especially under very strong selective pressure, like in the case of antibiotic resistance (including malaria resistance). But gradual accumulation of this kind of tweaking takes a long time even under extremely strong pressure, requires a continuous tweaking pathway that does not always exist, and is limited, in any case, by how much the existing function can be optimized by simple stepwise mutations.

I will say more about those points when I answer about malaria resistance and the rugged landscape experiment. I would already state here, however, that both the scenarios you quote in your discussion are of this kind, IOWs they fall under one of these three definitions of “beneficial” mutations.

3) Types of “beneficial” mutations (part 2).

The last two types are, according to what you say:

* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

These are exactly those kinds of “beneficial” mutations that do not exist.

Let’s say for the moment that we have no example at all of them.

For the first type, are you suggesting that there are simple mutations that “create new functional systems”? Well, let’s add an important word:

“create new complex functional systems”?

That word is important, because, as you certainly know, the whole point of ID is not about function, but about complex function. Nobody has ever denied that simple function can arise by random variation.

So, for this type, I insist: what examples do you have?

You may say that even if you have no examples, it’s my burden to show that it is impossible.

But that is wrong. You have to show not only that it is possible, but that it really happens and has real relevance to the problem we are discussing. We are making empirical science here, not philosophy. Only ideas supported by facts count. So, please, give the facts.

I would say that there is absolutely no reason to believe that a “simple” variation can generate “new complex functional systems”. There is no example of that in any complex system. Can the change of a letter generate a new novel? Can the change of a byte generate a new complex software, with new complex functions? Can a mutation of 1 – 2 aminoacids generate a new complex biological system?

The answer is no, but if you believe differently, you are welcome: just give facts.

In the last type of beneficial mutations, you hypothesize, if I understand you well, that a mutation can be part of the pathway to a new complex functional system, which still does not exist, but can be selected because it is otherwise beneficial.

So, let’s apply that to the generation of a new functional protein, like ATP synthase. Let’s say the beta chain of it, which, as we all know, has hundreds of specific aminoacid positions, conserved from bacteria to humans (334 identities between E. coli and humans).

Now, what you are saying is that we can in principle deconstruct those 334 AA values into a sequence of 334 single mutations, or, if you prefer, 167 two-AA mutations, each of which is selected not because the new protein is there and works, but because the intermediate state has some other selectable function?
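The scale of the probabilistic claim can be made explicit with some deliberately crude arithmetic: if every one of the 334 conserved positions were treated as fully specified, a single random draw would succeed with probability 20^-334. Since real proteins tolerate substitutions at many positions, this is an upper bound on the constraint, not a measured value:

```python
import math

conserved = 334                    # identities quoted for the beta chain
bits_per_position = math.log2(20)  # ~4.32 bits for a fully specified residue

# Crude upper bound: assumes zero tolerance for substitution at each position.
total_bits = conserved * bits_per_position
orders_of_magnitude = total_bits * math.log10(2)
print(f"~{total_bits:.0f} bits, i.e. 1 chance in ~10^{orders_of_magnitude:.0f}")
```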

Well, I say that such an assumption is not reasonable at all. I see no logical reason why that should be possible. If you think differently, please give facts.

I will say it again: the simple idea that new complex functions can be deconstructed into simple steps, each of them selectable for some unspecified reason, is pure imagination. If you have facts, please give them; otherwise that idea has no relevance in a scientific discussion.

4) Other types of mutation?

You add two further variations in your list of mutations. Here they are:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

I am not sure that I understand what you mean. If I understand correctly, you are saying that there are mutations which in the end will be useful, but for the moment are not.

But, then, they cannot be selected as such. Do you realize what that means?

It means that they can certainly occur, but they have exactly the same probability of occurring as any other mutation. Moreover, as they are not selected, they remain confined to the original individual or clone, unless they are fixed by genetic drift.

But again, they have exactly the same probability as any other mutation to be fixed by genetic drift.

That brings us to a very strong conclusion that is often overlooked by darwinists, especially the neutralists:

Any mutation that does not have the power to be naturally selected is completely irrelevant in regard to the probabilistic barriers, because its probability of occurring, or of being fixed by drift, is exactly the same as that of any other mutation.

IOWs, only mutations that can be naturally selected change the game in regard to the computation of the probabilistic barriers. Nothing else. All variation which cannot be naturally selected is irrelevant, because it is just a new random state, and is already considered when we compute the probabilities for a random search to get the target.

5) Optimal proteins?

You say:

Furthermore, one of the reasons the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

OK, I can partially agree. The proteins as we see them now are certainly optimal in most cases. But they were apparently optimal just from the beginning.

For example, our beloved ATP synthase beta chain already had most of its functional information in LUCA, according to what we can infer from homologies. And, as I have shown in my OPs about the evolution of information in vertebrates, millions of bits of new functional information have appeared at the start of the vertebrate branch, rather suddenly, and then remained the same for 400+ million years of natural history. So, I am not sure that the optimal state of protein sequences is any help for neo-darwinism.

Moreover, I should remind you that protein coding genes are only a very small part of genomes. Non coding DNA, which according to darwinists is mostly useless, can certainly provide ample space for beneficial mutations to occur.

But I will come back to that point in the further discussion.

I would like to specify that my argument here is not to determine exactly how common beneficial mutations are in absolute terms, but rather to show that rare beneficial mutations are certainly a problem for neo-darwinism, a very big problem indeed, especially considering that (almost) all the examples we know of are examples of micro-evolution, and do not generate any new complex functional information.

5) The threshold for selectability.

You say:

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.

I don’t think we disagree here. Let’s say that very low reproductive advantages will not be empirically relevant, because they will not significantly raise the probability of fixation above the generic one from genetic drift.

On the other hand, even if there is a higher probability of fixation, the lower it is, the lower will be the effect on probabilistic barriers. Therefore, only a significant reproductive advantage will really lower the probabilistic barriers in a relevant way.
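The notion of a “significant” advantage can be quantified with standard textbook machinery: comparing a mutation’s fixation probability (Kimura’s approximation) with the neutral baseline 1/(2N) shows that selection changes the odds appreciably only once 4Ns is well above 1. This is a standard population-genetics rule of thumb, not a figure from the thread:

```python
import math

def advantage_over_neutral(s, N):
    """Ratio of the fixation probability of a mutation with selection
    coefficient s (Kimura's approximation) to the neutral value 1/(2N).
    Near 1 when 4*N*s << 1 (effectively neutral);
    approaches 4*N*s when 4*N*s >> 1."""
    p_sel = (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))
    return p_sel * 2 * N

N = 10_000
for s in (1e-6, 1e-5, 1e-4, 1e-3):
    ratio = advantage_over_neutral(s, N)
    print(f"s = {s:g}  4Ns = {4 * N * s:g}  fixation odds vs neutral: {ratio:.3g}x")
# When 4Ns is near or below 1, the mutation behaves as if neutral,
# which is the sense in which a very small advantage is "not relevant".
```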

6) The argument from incredulity.

You say:

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

I really don’t understand this misuse of the “argument from incredulity” issue (you are, of course, not the only one to use it improperly).

The scenario is very simple: in science, I definitely am incredulous of any explanation which is not reasonable, has no explanatory power, and especially is not supported by any fact.

This is what science is. I am not a skeptic (I definitely hate that word), but neither am I a credulous person who believes things only because others believe them.

You can state any possible theory in science. Some of them will be logically inconsistent, and we can reject them from the start. But others will be logically possible, yet unsupported by observed facts and by sound reasoning. We have the right and the duty to ignore those theories as devoid of any true scientific interest.

This is healthy incredulity. The opposite of blind faith.

I will discuss the rugged landscape issue in detail, later.

7) Malaria resistance.

In the end, the only facts you provide in favour of the neo-darwinist scenario are those about malaria resistance and the rugged landscape experiment. I will deal with the first here, and with the second in the next post.

You say:

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

Now, let’s clarify. In brief, my point is that malaria resistance, like simple antibiotic resistance in general, is one of the few known cases of microevolution.

As I have already argued in my post #36, microevolutionary events are characterized by the following:

a) No new function is generated, but only a tweaking of some existing function.

b) The changes are minor. Even if more than one mutation accumulates, the total functional information added is always small.

I will discuss those two points for malaria resistance in the next point, but I want to clarify immediately that you are misrepresenting what I wrote when you say:

This is simply wrong.

Indeed, you quote my point 2) from post #11:

“2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.”

But you don’t quote the premise, in point 1:

“1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a proteins thousands of aminoacid long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

I have emphasized the relevant part, that you seem to have ignored. Point 2 is referring to that scenario.

It is rather clear that I am speaking of the generation of new complex functional information, and I even give an example, dynein.

So, I am not saying that no beneficial mutation can be selected, or that when that happens, like in microevolution, we cannot find the intermediate states.

What I am saying is that such a model cannot be applied to the generation of new complex functional information, like dynein, because it is impossible to deconstruct a new complex functional unit into simple steps, each of them naturally selectable, while the new protein does not even exist yet.

So, what I say is not wrong at all, and my challenge to imagine such a pathway for dynein, or for ATP synthase beta chain, or for any of the complex functional proteins that appear in the course of natural history, or to find intermediates of that pathway, remains valid.

But let’s go to malaria.

I have read the Moran page, and I am not sure of your interpretation that 7 mutations (4 + 3) are necessary to give the resistance. Indeed, Moran says:

“It takes at least four sequential steps with one mutation becoming established in the population before another one occurs.”

But the point here is not if 4 or 7 mutations are needed. The point is that this is a clear example of microevolution, although probably one of the most complex that have been observed.

Indeed:

a) There is no generation of a new complex function. Indeed, there is no generation of a new function at all, unless you consider becoming resistant to an antibiotic, because a gene loses its ability to take up the antibiotic, a new “function”. Of course, we can define function as we like, but the simple fact is that here there is a useful loss of function, what Behe calls “burning the bridges to prevent the enemy from coming in”.

b) Whatever our definition of function, the change here is small. It is small if it amounts to 4 AAs (16 bits at most), and it is small if it amounts to 7 aminoacids (28 bits at most).

OK, I understand that Behe puts the edge to two AAs in his book. Axe speaks of 4, from another point of view.

Whatever. The edge is certainly thereabout.

When I have proposed a threshold of functional complexity to infer design for biological objects, I have proposed 120 bits. That’s about 35 AAs.

Again, we must remember that all known microevolutionary events have in common a very favourable context which makes optimization easier:

a) They happen in rapidly reproducing populations.

b) They happen under extreme environmental pressure (the antibiotic)

c) The function is already present and it can be gradually optimized (or, like in the case of resistance, lost).

d) Only a few bits of informational change are enough to optimize or lose the function.

None of that applies to the generation of new complex functional information, where the function does not exist, the changes are informationally huge, and environmental pressure is reasonably much less than reproducing under the effect of a powerful antibiotic.
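The role of points a) and b) above can be made concrete with a back-of-envelope origin-fixation estimate (all parameter values below are hypothetical, chosen only to illustrate the scale of the effect): roughly N·u new copies of a needed mutation arise per generation, and each fixes with probability about 2s, so the expected wait per sequential step is about 1/(N·u·2s) generations.

```python
# Back-of-envelope origin-fixation estimate (hypothetical parameters):
# wait per step ≈ 1 / (N * u * 2s) generations, since ~N*u copies of the
# needed mutation arise per generation and each fixes with probability ~2s.

u = 1e-9   # per-site mutation rate per generation (assumed)
s = 0.05   # selective advantage of each step (assumed)

def generations_per_step(N, u, s):
    """Expected generations to establish one sequential beneficial step."""
    return 1.0 / (N * u * 2 * s)

for N in (1e10, 1e4):   # huge microbial population vs small vertebrate one
    print(f"N = {N:.0e}: ~{generations_per_step(N, u, s):,.0f} generations per step")
```

With these (assumed) numbers, each step takes on the order of a single generation in a huge, rapidly reproducing parasite population, but on the order of a million generations when N is small, which is the contrast point a) is making.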

8) VJ’s point:

You say:

VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

It’s funny that you quote a point that I consider a very strong argument for ID.

First of all, VJ’s arguments are in confutation of some statements by Cornelius Hunter, with whom I often disagree.

Second, I am not sure that ZNF843 is a good example, because I blasted the human protein and found some protein homologs in primates, with high homology.

Third, there are however a few known human proteins which have no protein counterpart in other primates, as VJ correctly states. These seem to have very good counterparts in non coding DNA of primates.

So, if we accept these proteins as real and functional (unfortunately not much is known about them, as far as I know), then what seems to happen is that:

a) The sequence appears in some way in primates as a non coding sequence. That means that no NS can act on the sequence as a protein-coding one.

b) In some way, the sequence acquires a transcription start in humans, and becomes an ORF. So the protein appears for the first time in humans and, if we accept the initial assumption, it is functional.

Well, if that kind of process is confirmed, it will be very strong evidence of design. The sequence is prepared in primates, where it seems to have no function at all, and is activated in humans, when needed.

The origin of functional proteins from non coding DNA, which is gaining recognition in recent years, is definitive evidence of design. NS cannot operate on non coding sequences, least of all make them good protein coding genes. So, the darwinian mechanism is out, in this case.

9) The rugged landscape experiment

OK, this is probably the most interesting part.

For the convenience of anyone who may be reading this, I give the link to the paper:

http://journals.plos.org/ploso…..=printable

First of all, I think we can assume, for the following discussion, that the wild-type version of the protein they studied is probably optimal, as you suggested yourself. In any case, it is certainly the most functional version of the protein that we know of.

Now, let’s try to understand what this protein is, and how the experiment was realized.

The protein is:

G3P_BPFD (P03661).

Length: 424 AAs.

Function (from Uniprot):

“Plays essential roles both in the penetration of the viral genome into the bacterial host via pilus retraction and in the extrusion process. During the initial step of infection, G3P mediates adsorption of the phage to its primary receptor, the tip of host F-pilus. Subsequent interaction with the host entry receptor tolA induces penetration of the viral DNA into the host cytoplasm. In the extrusion process, G3P mediates the release of the membrane-anchored virion from the cell via its C-terminal domain”

I quote from the paper:

Infection of Escherichia coli by the coliphage fd is mediated by the minor coat protein g3p [21,22], which consists of three distinct domains connected via flexible glycine-rich linker sequences [22]. One of the three domains, D2, located between the N-terminal D1 and C-terminal D3 domains, functions in the absorption of g3p to the tip of the host F-pilus at the initial stage of the infection process [21,22]. We produced a defective phage, ‘‘fdRP,’’ by replacing the D2 domain of the fd-tet phage with a soluble random polypeptide, ‘‘RP3-42,’’ consisting of 139 amino acids [23].

So, just to be clear:

1) The whole protein is involved in infectivity.

2) Only the central domain has been replaced by random sequences.

So, what happens?

From the paper:

The initial defective phage fd-RP showed little infectivity, indicating that the random polypeptide RP3-42 contributes little to infectivity.

Now, infectivity (fitness) was measured on a logarithmic scale, specifically as:

W = ln(CFU) (CFU = colony forming units/ml)

As we can see in Fig. 2, the fitness of the mutated phage (fd-RP) is 5, that is:

CFU = about 148 (e^5)

Again from Fig. 2, we can see that the fitness of the wildtype protein is about 22.5, that is:

CFU = about 4.8 billion

So, the random replacement of the D2 domain certainly reduces infectivity a lot, and it is perfectly correct to say that the fd-RP phage “showed little infectivity”.

Indeed, infectivity has been reduced by about 32.6 million times!
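Since W = ln(CFU), these conversions can be checked directly (a quick sketch; the Fig. 2 fitness values are read off a graph, so everything here is approximate):

```python
import math

# W = ln(CFU), so CFU = e^W.  Fig. 2 values are read off a graph,
# hence all numbers below are approximate.
W_fdRP = 5.0
CFU_fdRP = math.exp(W_fdRP)      # ≈ 148 colony-forming units/ml

CFU_wildtype = 4.8e9             # wild type, i.e. W ≈ ln(4.8e9) ≈ 22.3
fold_reduction = CFU_wildtype / CFU_fdRP

print(f"fd-RP: ~{CFU_fdRP:.0f} CFU/ml")
print(f"reduction vs wild type: ~{fold_reduction / 1e6:.0f} million-fold")
```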

But still, it is there: the phage is still infective.

What has happened is that by replacing part of the g3p protein with random sequences, we have “damaged” the protein, but not to the point of completely erasing its function. The protein is still there, and in some way it can still work, even with the heavy damage/deformation induced by our replacement.

IOWs, the experiment is about retrieving an existing function which has been artificially reduced, but not erased. No new function is generated, but an existing reduced function is tweaked to retrieve as much as possible of its original functionality.

This is an important point, because the experiment is indeed one of the best contexts to measure the power of RM + NS in the most favorable conditions:

a) The function is already there.

b) Only part of the protein has been altered

c) Phages are obviously a very good substrate for NS

d) The environmental pressure is huge and directly linked to reproductive success (a phage which loses infectivity cannot simply reproduce).

IOWs, we are in a context where NS should really operate at its best.

Now, what happens?

OK, some infectivity is retrieved by RM. How much?

At the maximum of success, and using the largest library of mutants, the retrieved infectivity is about 14.7 (see again Fig. 2). Then the adaptive walk stops.

Now, that is a good result, and the authors are certainly proud of it, but please don’t be fooled by the logarithmic scale.

An infectivity of 14.7 corresponds to:

about 2.4 million CFU

So, we have an increase of:

about 17,000 times, as stated by the authors.

But, as stated by the authors, the fitness would still need to increase by about 2000 times (a fitness difference of 7.6) to reach the functionality of the wild type. That means passing from:

2.4 million CFU

to

4.8 billion CFU

So, even if some good infectivity has been retrieved, we are still 2000 times lower than the value in the wild type!

And that’s the best they could achieve.
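The same log-scale arithmetic reproduces the two ratios quoted above (again, approximate fitness values read from Fig. 2):

```python
import math

# Approximate fitness values read from Fig. 2 of the paper:
W_start, W_end = 5.0, 14.7   # fd-RP before and after the adaptive walk
W_wild = W_end + 7.6         # the authors' stated remaining shortfall

gain = math.exp(W_end - W_start)   # fold-improvement achieved by RM + NS
gap = math.exp(W_wild - W_end)     # fold-increase still missing vs wild type

print(f"gain achieved: ~{gain:,.0f}-fold")   # on the order of the ~17,000 quoted
print(f"remaining gap: ~{gap:,.0f}-fold")    # ≈ 2,000
```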

Now, why that limit?

The authors explain that the main reason for this is the rugged landscape of protein function. That means that RM and NS achieve some good tweaking of the function, but the walks climb different local optima in the landscape, and those local optima can take them only so far.

The local optimum corresponding to the wildtype has never been found. See the paper:

The sequence selected finally at the 20th generation has ~W = 0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains

The authors conclude that:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Now, having tried to describe in some detail the experiment itself, I will address your comments.

10) Your comments about the rugged landscape paper

You say:

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

But it is exactly what they say!

Let’s see what I wrote:

“you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS.”

(emphasis added)

Now let’s see what they said:

By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

(I have kept your emphasis).

So, the point is that, according to the authors, a library of 10^70 sequences would be necessary to find the wildtype by random substitutions only (plus, I suppose, NS).

That’s exactly what I said. Therefore, your comment, that “That’s not exactly what they say” is simply wrong.

Let’s clarify better: 10^70 is a probabilistic resource that is beyond the reach not only of our brilliant researchers, but of nature itself!
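For a sense of scale (a rough back-of-envelope, not from the paper; the 10^30 figure below is a commonly cited order-of-magnitude estimate for all prokaryotes on Earth, used here only for comparison):

```python
import math

# Orders of magnitude only (rough, illustrative comparison):
library = 1e70           # the authors' extrapolated required library size
space_35 = 20.0 ** 35    # every possible 35-residue replacement sequence
prokaryotes = 1e30       # commonly cited rough estimate, all prokaryotes on Earth

print(f"20^35 ≈ 10^{math.log10(space_35):.1f}")                          # ≈ 10^45.5
print(f"library / prokaryotes ≈ 10^{math.log10(library / prokaryotes):.0f}")  # 10^40
```

IOWs, 10^70 exceeds even the entire 35-residue sequence space, and outnumbers a rough estimate of every prokaryotic cell on the planet by some forty orders of magnitude.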

It seems that your point is that they also add that, given that “such a huge search is impractical” (what a politically correct adjective!), this should:

“imply that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

which is the part you emphasized.

As if I had purposefully left out such a clarifying statement!

Well, of course I have purposefully left out such a clarifying statement, but not because I was quote-mining, but simply because it is really pitiful and irrelevant. Let’s say that I wanted to be courteous to the authors, who have written a very good paper, with honest conclusions, and only in the end had to pay some minimal tribute to the official ideology.

You see, when you write a paper, and draw the conclusions, you are taking responsibilities: you have to be honest, and to state only what can be reasonably derived from the facts you have given.

And indeed the authors do that! They correctly draw the strong conclusion that, according to their data, RM + NS only cannot find the wildtype in their experiment (IOWs, the real, optimal function), unless we can provide a starting library of 10^70 sequences, which, as said, is beyond the reach of nature itself, at least on our planet. IOWs, let’s say that it would be “impractical”.

OK, that’s the correct conclusion according to their data. They should have stopped here.

But no, they cannot simply do that! So they add that such a result:

implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Well, what is that statement? Just an act of blind faith in neo-darwinism, which must be true even when facts falsify it.

Is it a conclusion derived in any way from the data they presented?

Absolutely not! There is nothing in their data that suggests such a conclusion. They did not test recombination, or other mechanisms, and therefore they can say absolutely nothing about what it can or cannot do. Moreover, they don’t even offer any real support from the literature for that statement. They just quote one single paper, saying that “the importance of recombination or DNA shuffling has been suggested”. And yet they go well beyond a suggestion, they say that their “conclusion” is implied. IOWs logically necessary.

What a pity! What a betrayal of scientific attitude.

If they really needed to pay homage to the dogma, they could have just said something like “it could be possible, perhaps, that recombination helps”. But “imply”? Wow!

But I must say that you too take some serious responsibility in debating that point. Indeed, you say:

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Well, they didn’t use “a simplified model of evolution”. They tested the official model: RM + NS. And it failed!

Since it failed, they must offer some escape. Of course, some imaginary escape, completely unsupported by any facts.

But the failure of RM + NS, that is supported by facts, definitely!

I would add that I cannot see how one can think that recombination can work any miracle here: after all, the authors themselves have said that the local optimum of the wildtype has not been found. The problem here is how to find it. Why should recombination of existing sequences, which share no homology with the wildtype, help at all in finding the wildtype? Mysteries of blind faith.

And have the authors, or anyone else, made new experiments that show how recombination can solve the limit they found? Not that I know. If you are aware of that, let me know.

Then you say:

Or their model of the fitness landscape might not be completely accurate.

Interesting strategy. So, if the conclusions of the authors, conclusions drawn from facts and reasonable inferences, are not those that you would expect, you simply doubt that their model is accurate. Would you have had the same doubts, had they found that RM + NS could easily find the wildtype? Just wondering…

And again:

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, like you, I am not an expert in that kind of model. I accept the conclusions of the authors, because their methodology and reasoning seem accurate. You doubt them. But I should remind you that they are mainstream authors, certainly not IDists, and that their conclusions must have surprised them first of all. I don’t know, but when serious researchers publish results that are probably not what they expected, and that are not what others expect, they must be serious people (except, of course, for the final note about recombination, but anyone can make mistakes, after all!).

Then your final point:

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can.

No, for a lot of reasons:

a) We are in a scenario of tweaking an existing, damaged function to retrieve part of it. We are producing no new functional protein, just “repairing” as much as possible some important damage.

b) That’s why the finding of lower levels of function is rather easy: it is not complex at all, it is in the reach of the probabilistic resources of the system.

I will try to explain it better. Let’s say that you have a car, and that its body has been seriously damaged in a car accident. That’s our protein with its D2 domain replaced by a random sequence of AAs.

Now, you do not have the money to buy the new parts that would restore the old body in all its splendor (the wildtype).

So, you choose the only solution you can afford: you take a hammer, and start giving gross blows to the body, to reduce the most serious deformations, at least a little.

The blows you give need not be very precise or specific: if some part is definitely too far out of line, a couple of gross blows will make it less prominent. And so on.

Of course, the final result is very far from the original: let’s say 2000 times less beautiful and functional.

However, it is better than what you started with.

IOWs, you are trying a low information fixing: a repair which is gross, but somewhat efficient.

And, of course, there are many possible gross forms that you can achieve by your hammer, and that have more or less the same degree of “improvement”.

On the contrary, there is only one form that satisfies the original request: the perfect parts of the original body.

So, a gross repair has low informational content. A perfect repair has very high informational content.

That’s what the rugged landscape paper tells us: the conclusion, derived from facts, is perfectly in line with ID theory. Simple function can be easily reached by some probabilistic resources, by RV + NS, provided that the scenario is one of tweaking an existing function, and not of generating a new complex one.

It’s the same scenario of malaria resistance, or of other microevolutionary events.

But the paper tells us something much more important: complex function, that with a high informational content, cannot be realistically achieved with those mechanisms, not even in the most favorable NS scenario, with an existing function, and the opportunity to tweak it with high mutation rates and highly reproducing populations, and direct relevance of the function to reproduction.

Complex function cannot be found, not even in those conditions. The wildtype remains elusive, and, if the authors’ model is correct, which I do believe, will remain elusive in any non design context.

And, if RV and NS cannot even do that, how can they hope to just start finding some new, complex, specific function, like the sequence of ATP synthase beta chain, or dynein, or whatever you like, starting not from an existing, damaged but working function, but just from scratch?

OK, this is it. I think I have answered your comments. It was some work, I must say, but you certainly deserved it!

Addendum:

By the way, in that paper we are dealing with a 139 AAs sequence (the D2 domain).

ATP synthase beta chain is 529 AAs long, and has 334 identities between E. coli and humans, for a total homology of 663 bits.

Cytoplasmic dynein 1 heavy chain 1 is 4646 AAs long, and has 2813 identities between fungi and humans, for a total homology of 5769 bits.

These are not the 16 – 28 bits of malaria resistance. Not at all.
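The identity counts above come from pairwise alignments; as a minimal sketch (toy sequences, made up purely for illustration; note the bit values quoted are BLAST-style scores, which weight matches by a substitution matrix rather than simply counting them), here is how identity counts translate into percentages:

```python
def percent_identity(a, b):
    """Percent identity over an already-aligned pair of sequences (gaps as '-')."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(x == y and x != '-' for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Toy aligned fragment (illustrative only, not real sequence data):
print(percent_identity("MKT-LLV", "MKTALLI"))   # 5 of 7 positions match

# The ratios quoted above:
print(f"ATP synthase beta chain: {334 / 529:.1%} identity")
print(f"dynein heavy chain:      {2813 / 4646:.1%} identity")
```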

OK, that’s all for the moment. Again, I apologize for the length of it all!  🙂

Comments
"Natural selection is a force against evolution" Interesting, because I'd wager that every evolutionary biology professor will at some point during the semester say something along the lines of: "natural selection is the main driving force behind evolution" Now why would your understanding of natural selection and an evolutionary biologist's understanding be complete opposites? ...Hmm I wonder.....

You break natural selection into two forms, but these two forms are actually one and the same. Immediate removal (negative selection) of deleterious changes, while allowing neutral (or nearly neutral) and potentially beneficial changes to remain is what, over time, enriches the gene pool for beneficial changes (positive selection). You seem to have (subconsciously?) realized this because immediately after you copy/pasted from your previous post, you go on to say that "positive selection is eliminative too." Yes it is, because negative and positive selection are two parts of a whole.

ET says "you don't get complex adaptations by merely eliminating the less fit" and you think he/she is referring to the fact that random variation is required as well. I read what ET said and it seems to me that he/she is trying to refute evolution by redefining it as only "eliminating the less fit" and then saying "that's not enough." (Is this what's called a strawman argument?) I assumed this largely because it's ET's first comment on this post and it wasn't directed to any one in particular. Either way, that is how I interpreted it and others could do the same just as easily.

Now, I'm not sure exactly who "neo-darwinists" are, but "natural selection, by the process of expansion of simpler beneficial mutations, can help in the process, and lower the probabilistic barriers that stand against RV as an engine of complex novelty" is a good explanation of the process of evolution, in my opinion.

I think that you disagree with it because you severely underestimate the types and complexities of "random variation". Random variation is NOT just point mutations, which you probably already know. But do you know the variety of changes that are actually encompassed by "random variation"? It is quite astounding. And even relatively simple changes can have huge effects on organisms at both the cellular and the organismal levels. (Both potentially positive and negative effects I might add.)

Corey Delvine
October 6, 2017, 10:51 AM PST
gpuccio:
Evolution according to the neo-darwinian algorithm is base on two different processes: random variation, which is probabilistic, and natural selection, which is in a way a necessity process.
I disagree with this. It is natural selection that is probabilistic. Random variation is just magic.

Mung
October 6, 2017, 10:35 AM PST
Corey Delvine: "Isn’t evolution more complex than that? Isn’t evolution dependent on constant small changes which are then selected for based on their effects in the organism or between the organism and it’s environment?" Evolution according to the neo-darwinian algorithm is based on two different processes: random variation, which is probabilistic, and natural selection, which is in a way a necessity process. The main topic in this OP and in the comments is exactly NS: how it works, and what are its limits. Have you read the OP? ET's statement, to which you objected, was the following: "Natural selection is an eliminative process and you don’t get complex adaptations by merely eliminating the less fit." Now, there can be no doubt that NS is an eliminative process. Indeed, there are two kinds of NS, as I have discussed in my comment #5 here. I paste here the relevant part:
The reason is simple: NS is of two kinds. 1) Negative NS is the strongest form, the form that is easily recognizable in nature. It is the selection against variation that reduces the function of an existing protein. Another technical term for it is “purifying selection”. Negative selection is a strong, universal force. It is the force that keeps the functional sequence of proteins rather constant, operating against deleterious mutations. So, this powerful force has the effect of keeping the existing functional information as it is. It can only tolerate neutral or quasi-neutral variation. Of course, if a functional gene is supposed to change into something different, either in its original form or in a duplicate functional form, negative selection will act against that change. So, it is in general a force against neo-darwinian evolution. 2) The aspect of NS which should act in favor of it is positive selection: the fixation, by expansion to the whole population or a significant part of it, of a beneficial mutation which confers a reproductive advantage. Now, this type of mechanism is certainly much rarer than negative selection: indeed, it is very difficult to document it in most cases, even if there are clear cases of positive selection in action, like the cases of microevolution we have discussed in the OP.
Now, there can be no doubt that negative selection is eliminative. But positive selection is eliminative too, in a sense, because it eliminates the previous form of the allele, which has become less fit, allowing the new form to expand. So, I think that the first part of ET's statement is certainly true. The simple truth is that, according to the neo-darwinian algorithm, the only real engine that generates new functional information is random variation (RV). That's why ET states, in the second part of his comment, that "you don't get complex adaptations by merely eliminating the less fit." And he is right! Practically everyone agrees that RV does not have the probabilistic power to generate even a tiny part of the huge complex functional information we observe in biological beings. Of course, neo-darwinists believe that NS, by the process of expansion of simpler beneficial mutations, can help in the process, and lower the probabilistic barriers that stand against RV as an engine of complex novelty. I am absolutely convinced that that idea is not true. And my long OP here is almost completely dedicated to explaining why I believe that. So, if you have the time, please read it. And, if you want, please comment on it. However, welcome to the discussion.

gpuccio
October 6, 2017, 09:37 AM PST
"Natural selection is an eliminative process and you don’t get complex adaptations by merely eliminating the less fit." Isn't evolution more complex than that? Isn't evolution dependent on constant small changes which are then selected for based on their effects in the organism or between the organism and its environment?

Corey Delvine
October 6, 2017, 09:07 AM PST
I like the seriousness of this discussion between GP and GD. Looking forward to seeing what transpires from it.

Dionisio
October 6, 2017, 07:58 AM PST
Natural selection is an eliminative process and you don't get complex adaptations by merely eliminating the less fit.

ET
October 6, 2017, 07:08 AM PST
--Evolution cannot produce new complex, functional systems (e.g. proteins) without being led to them via step-by-step-beneficial paths.-- Wouldn't a better way of saying this be "Evolution has not been shown to be able to produce new complex, functional systems (e.g. proteins) without being led to them via step-by-step-beneficial paths"? Saying "cannot" is claiming to prove a negative. It's the one who says "can" or "did" from whom it is reasonable to demand an explanation.

tribune7
October 6, 2017, 07:07 AM PST
Great points gpuccio --Note that all of these would be considered beneficial mutations: * Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly. * Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism's current environment better. * Changes that decrease function of something that's overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).-- Few deny this and it is not controversial. Someone defending neo-Darwinianism as an axiomatic explanation for all biodiversity should not even bring them up as it just muddies the water.

tribune7
October 6, 2017, 07:04 AM PST
Gordon Davisson: Thank you very much for coming back to the discussion! :) I really appreciate what you say in this last post. I agree with all. The purpose of moving our discussion here was exactly to give it some more proper space, to invite everyone interested to join, and to give you a chance, as you wished, to give further contributions with your pace. As I already said, I am very grateful to you for the way you raised so many relevant points, allowing me to express some ideas about them. There is nothing better than a good interlocutor in intellectual confrontation.
* Evolution cannot produce new complex, functional systems (e.g. proteins) without being led to them via step-by-step-beneficial paths. * Such paths don’t exist. Is that a fair summary of your claim? And if so, would you agree that this should be the central topic here?
Absolutely! That is the core of all the discussion, and of ID theory itself, IMO.
Specifically, I think we need to at-least-mostly agree on what evolution is (i.e. whether RM + NS is the “official model”), and on how to evaluate evidence as supporting one or another view (i.e. the similarity between Human orphan proteins and chimp non-coding sequences).
As you said very well, for me the problem is not "evolution", but the explanation of how it happens, IOWs of how complex functional information is generated in biological beings. Therefore, the debate is, IMO, between:

a) A design explanation, requiring definite intervention from some conscious intelligent agent to input new complex functional information

b) Any other non design explanation that has any potential explanatory power

I usually debate neo-darwinism, in particular the RV + NS algorithm, because in essence I am convinced that there is nothing else in the non design field that really deserves debate. But, of course, I am ready to discuss anything else that is suggested, by you or others. It could be useful to recall (but I am sure that you are aware of that) that I fully accept common descent, and that my idea of ID is a model of guided common descent. So, there is no reason to debate common descent, because I suppose we agree on that point. Again, the only problem is how the new complex functional information comes into existence. So, you are really welcome to contribute as you like, and with the pace that you like. Anything will be deeply appreciated. I really love your intellectual honesty! :)
gpuccio
October 5, 2017, 10:19 PM PST
Hi, gpuccio; thanks for your replies! I'm going to try to work through your discussion, but I think I should set a few expectations. First off, I am neither a prompt nor a reliable correspondent. Basically, I find the actual process of writing slow and a bit painful; on the other hand, I'm really good at procrastinating! So I tend to mull over what I should say for quite a long time before I manage to convince myself to actually start typing in a reply. If I ever do convince myself to start... (& then I post and inevitably find a half-dozen mistakes that I should have noticed while I was proofreading, and frantically try to fix them within the edit window without adding any new mistakes in the process...) Anyway, I'll try to be at least semi-reliable about getting back to you, but I'll almost certainly not be at all prompt. Sorry about that. On the other hand, I do think a well-mulled-over discussion is generally better quality than something quickly dashed off. Second, I'm not going to try to address all of the points you've raised. There's a tendency in these discussions (and already in this one) for people to start arguing about topic A, then find that they also disagree about related topics B, C, and D, and start arguing about those, which leads to E through Z... and by the time everyone's given up on the discussion, we're well into the Greek alphabet, nothing ever got settled, and all anyone learned is that everyone on the other side is wrong about everything. To avoid the problem of having to settle everything in order to settle anything, I'd like to try to keep the subject of discussion relatively contained. So I'll try to avoid going after every interesting digression that comes along (and try to avoid raising too many side topics myself). But that means we'd better agree, before getting too far, on what we're actually trying to talk about. 
I think the central topic of discussion is your contention that: * Evolution cannot produce new complex, functional systems (e.g. proteins) without being led to them via step-by-step-beneficial paths. * Such paths don't exist. Is that a fair summary of your claim? And if so, would you agree that this should be the central topic here? BTW, having said this, the first two specific topics I plan to address aren't actually directly related to that. But they're foundational issues that IMO we have to come to some sort of agreement about, before we have any real chance of meaningful discussion of that central issue. Specifically, I think we need to at-least-mostly agree on what evolution is (i.e. whether RM + NS is the "official model"), and on how to evaluate evidence as supporting one or another view (i.e. the similarity between Human orphan proteins and chimp non-coding sequences). Ok, last item for the moment: I need to own up to a mistake. Concerning the rugged landscape paper, you said:
1) The whole protein is implied in infectivity 2) Only the central domain has been replaced by random sequences
I missed this in my read through the paper, and was thinking that they'd started with an entirely random polypeptide. This does significantly weaken some of the conclusions that I drew from the paper (although I think some also remain intact). (AIUI there are also some other studies that did start from at or near entirely random sequences and evolve function, but I'd have to do a fair bit of research before I'd be familiar enough with them to argue from them. So I'll put that off for later maybe.)
Gordon Davisson
October 5, 2017, 09:04 PM PST
Florabama: Thank you! :)
gpuccio
October 5, 2017, 03:11 PM PST
Origenes: Yes, that's the paper I meant. Thank you. Petrushka has always been rather stubborn, but in general he believed in what he said, which is not always true of all our kind interlocutors.
gpuccio
October 5, 2017, 03:11 PM PST
Thank you, gpuccio. Fascinating OP and responses.
Florabama
October 5, 2017, 03:04 PM PST
GPuccio @2
Strangely, it’s more or less the number given a lot of time ago by Axe for the number of sequences needed to find a folding sequence, if I remember well.
Were you perhaps alluding to the following?
The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences. [Axe, Gauger 'Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds', link]
GPuccio: I must say that the paper was signaled to me by Petrushka (or was it Zachriel? No, I think it was Petrushka) who strongly believed that it was hard evidence for the power of NS. As soon as I read it, I immediately thought that it was the opposite: absolute evidence of its limits.
Petrushka must blame the authors of the paper for his confusion. As things are presented these days, one has to be an expert to understand what is not said in sections such as 'introduction' and 'discussion'.
Origenes
October 5, 2017, 03:02 PM PST
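The 1-in-10^77 prevalence quoted from Axe and Gauger above can be put in rough numerical context with a few lines of Python. The library sizes below are illustrative choices (not figures from the paper): 10^12 is roughly a large lab screening library, and 10^45 is a deliberately absurd upper bound.

```python
import math

# Expected number of functional sequences found when screening a random
# library of size n, if functional sequences have prevalence p = 1e-77.
# Since n*p << 1 in every case, this is also ~P(at least one hit).
p = 1e-77
for n in (1e12, 1e15, 1e45):
    expected_hits = n * p
    print(f"library of 10^{math.log10(n):.0f}: expected functional hits ~ {expected_hits:.0e}")
```

Even the planetary-scale library leaves the expected hit count at about 10^-32, which is the quantitative content of calling such folds "highly extraordinary sequences".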
Excellent post and comments. Thank you all.
Truth Will Set You Free
October 5, 2017, 11:22 AM PST
EugeneS: The most interesting case is, IMO, when a gene duplication is followed by the inactivation of the gene, which becomes a pseudogene. As you say: "The idea is that a duplicate (paralog) can change without being restrained by natural selection." Indeed, an inactivated gene isn't subject any more to the restraining effects of negative selection: it is free to change in all possible ways. On the other hand, as long as it remains inactivated, it isn't any more subject to the possible expansion of positive selection, in case some beneficial mutation should happen. The point is simple: a pseudogene should behave exactly like any other non coding DNA sequence that is not functional. It is not subject to any form of NS, and its expansion can only happen because of neutral genetic drift. You say: "In theory, gene duplication allows the neo-Darwinian model to traverse areas of the configuration space where function does not exist." But the simple fact is that those areas cannot be traversed, because of the probability barriers that, as we well know, act against that possibility. As soon as a gene is inactivated, it is fully subject to neutral variation. In the absence of any negative selection, it will soon lose any relation to the original functional sequence of the original gene. After some time, it can be considered at the same level as any random nucleotide sequence. And that must happen, if the purpose is to reach another functional island, a new functional gene that is completely unrelated, at sequence level, to the original one. For example, a new superfamily. But the probability that a random sequence finds a new functional island is practically zero. Do not be confused by the rugged landscape paper: there, we have an existing, damaged function. No new function is found, only some fixing of the damage is attained, as I have argued in the OP. 
To get to functional sequences from random sequences by random variation means to find functional information by chance alone. Even darwinists recognize that it is impossible. Moreover, everyone in the darwinian field seems to ignore the problem in this other statement: "As soon as it becomes functional, it is immediately subject to natural selection, but with a different function." As soon as??? But we are discussing a non coding gene. OK, it changes. Maybe its new sequence could be vaguely functional. Let's ignore for the moment that it needs a promoter, one or more enhancers, a regulation system, an integration with what already exists, to be really "functional", let alone naturally selectable. But let's be serious: that sequence is not transcribed and translated. Even if it is transcribed, it is certainly not translated. How can the living being know that it has become functional? I think that neo-darwinists imagine a cell, or organism, where all non coding sequences are constantly transcribed and translated, filling the cell with junk of all kinds, so that the rare peptide that, as soon as it is translated, really helps may be expanded by NS! Or a scenario where non coding sequences not only, by magic, acquire a configuration that can correspond, by symbolic translation, to a functional peptide sequence, and at the right moment, but not before, acquire a starting sequence and become translated ORFs, ready with their promoter, enhancers, TFs, and so on, to be of immediate relevant help to the cell and be expanded and fixed, before any new neutral variation can change the precious result. OK, I think this is folly, utter folly. I do believe that new genes come from non coding DNA, be it a pseudogene or any other sequence, but that process is a design process. The gene is prepared so that, once activated, it will be a functional protein. Then, and only then, can NS maybe be of some collateral help. 
And, in many cases, the configuration of the future gene at the level of non coding DNA is realized by transposon activity. We have some evidence of that. Transposons are the most likely tool of biological design.
gpuccio
October 5, 2017, 11:02 AM PST
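gpuccio's point that a pseudogene "is not subject to any form of NS, and its expansion can only happen because of neutral genetic drift" has a well-known quantitative face: in a Wright-Fisher population, a single neutral copy fixes with probability about 1/N, whatever its sequence. A toy simulation sketch (population size, trial count, and seed are arbitrary choices):

```python
import random

def fix_prob_neutral(pop_size, trials=2000, seed=1):
    """Monte Carlo estimate of the fixation probability of one neutral
    mutant copy in a haploid Wright-Fisher population of size pop_size.
    Theory predicts approximately 1/pop_size."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # the mutation starts in a single individual
        while 0 < count < pop_size:
            p = count / pop_size
            # binomial resampling of the next generation
            count = sum(1 for _ in range(pop_size) if rng.random() < p)
        if count == pop_size:
            fixed += 1
    return fixed / trials

est = fix_prob_neutral(50)
print(f"estimated P(fix) = {est:.4f}  (theory: {1/50:.4f})")
```

The estimate lands near 1/50 = 0.02; the simulation never looks at what the sequence is, which is exactly why drift cannot prefer functional outcomes over non-functional ones.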
EugeneS has posted about another aspect of NS in the old thread from which this OP is derived. I copy and answer his post here:
GPuccio I am sorry if you have already addressed it above (I will need time to read it all at my pace). Could you elaborate on gene duplication a bit more in light of probabilistic barriers? In theory, gene duplication allows the neo-Darwinian model to traverse areas of the configuration space where function does not exist. The idea is that a duplicate (paralog) can change without being restrained by natural selection. As soon as it becomes functional, it is immediately subject to natural selection, but with a different function. I know this is too speculative, but could you say a bit more? People do mention gene duplication in discussing the capabilities of RV+NS. Thanks!
OK, gene duplication. I would say that, while the role of gene duplication is certainly important in evolutionary history, there is great confusion about its relevance in a neo-darwinian theory. To simplify, I will describe two different scenarios: a) A gene is duplicated, remains functional, and the functional copy undergoes RV + NS so that a new gene is obtained. b) A gene is duplicated and inactivated. IOWs, it becomes a pseudogene. Then it undergoes the process of RV + NS. The two situations are completely different. While there may be intermediate scenarios, I think that those two can clarify well the possible role, or non role, of NS. I will say immediately that b) is probably the important issue. However, let's say something about a). Eugene, you mention in your post an aspect of NS that I have not really covered in my OP: that, indeed, NS can and does work, in many cases, against neo-darwinian evolution. As you say, it can restrain evolution. The reason is simple: NS is of two kinds. 1) Negative NS is the strongest form, the form that is easily recognizable in nature. It is the selection against variation that reduces the function of an existing protein. Another technical term for it is "purifying selection". Negative selection is a strong, universal force. It is the force that keeps the functional sequence of proteins rather constant, operating against deleterious mutations. So, this powerful force has the effect of keeping the existing functional information as it is. It can only tolerate neutral or quasi-neutral variation. Of course, if a functional gene is supposed to change into something different, either in its original form or in a duplicate functional form, negative selection will act against that change. So, it is in general a force against neo-darwinian evolution. 
2) The aspect of NS which should act in favor of it is positive selection: the fixation, by expansion to the whole population or a significant part of it, of a beneficial mutation which confers a reproductive advantage. Now, this type of mechanism is certainly much rarer than negative selection: indeed, it is very difficult to document it in most cases, even if there are clear cases of positive selection in action, like the cases of microevolution we have discussed in the OP. I think that the only reasonable scenario where a gene duplication could perhaps generate a new functional gene by the neo-darwinian mechanism is the following: A gene is duplicated, and remains functional. While the original gene ensures that the old function is satisfied, the new gene undergoes small variations, 1 - 5 AAs, at the active site, and is transformed into a similar gene, with more or less different biochemical activity. This could be a mechanism that generates diversification in an existing protein family, for example. The point is: most of the sequence and structure of the old gene, here, will be conserved. That's why it is a good thing that the duplicated gene remains functional, so that negative selection can preserve that bulk of sequence, structure and functionality. On the other hand, while the folding and the general structure of the protein remain the same (IOWs, we remain in the original island of the original protein family) the active site can undergo some small variation that changes its biochemical affinity for substrates, and so in the end provides a different range of activity and function. This variation at the active site is usually in the microevolutionary range (as I said, 1 - 5 AAs), so it could be potentially in the range of very good biological probabilistic resources. So, what do I believe about this scenario? 
I believe that those cases are borderline: they could be extreme cases of neo-darwinian microevolution, or very simple cases of designed macroevolution. Therefore, it is wise IMO not to focus a design inference on that type of process: we have indeed a lot of scenarios where the informational jump is hundreds of times bigger, beyond any possible reach of the neo-darwinian theory. For example, all cases of appearance of a new protein superfamily, or in general of a huge quantity of new information in a protein. You can take a look at my OPs about the informational jump in the vertebrate proteome to find a lot of such examples. OK, I will discuss the b) scenario in the next post.
gpuccio
October 5, 2017, 10:39 AM PST
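The difference in scale between the "1 - 5 AAs" active-site changes discussed above and a jump like the 35 substitutions of the rugged landscape paper can be sketched with bare combinatorics. This assumes 20 equally likely amino acids per position and ignores all biological structure; it is only an order-of-magnitude illustration, not a biological model.

```python
# Size of the sequence space that must be searched to hit one specific
# combination of n substituted positions, at 20 amino acids per position.
for n_subs in (1, 2, 3, 4, 5, 35):
    space = 20 ** n_subs
    print(f"{n_subs:2d} specific substitutions: search space ~ {space:.2e}")
```

Five coordinated substitutions give a space of about 3.2 million sequences, plausibly within reach of large populations; 35 give about 10^45, which is the kind of gap the paper's 10^70 library-size extrapolation reflects.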
Florabama: I think you are right, Haldane's considerations certainly add further difficulties to the "usefulness" of NS. The idea, if I understand it well, is that if many new beneficial traits have to be expanded simultaneously in a population, starting from different individuals and controlled by different genes, they will compete for the expansion. I don't know exactly how to add that factor to probability computations, I am not an expert in population genetics. I think, however, that NS is already defunct enough for the arguments that I have tried to detail, and Haldane's problem can be the final blow. :) It is interesting that we often reason as though one single beneficial trait expanded at a time. Probably because that is what happens in some of the microevolutionary "models" from the lab or from highly specific situations, that are indeed the only models for NS and so come naturally to the mind of people when we speak of it. For example, consider simple antibiotic resistance, including malaria resistance, a situation where the environmental pressure (the antibiotic) is so strong that it can efficiently kill in a very short time all or almost all the individuals that have no feature of resistance, however small. In that context, resistance becomes naturally the only important beneficial trait, and it can expand undisturbed in the population, and in a rather short time. The same is true for the rugged landscape experiment. For a phage, infectivity is survival, because phages cannot reproduce unless they infect the host cell. So here, again, loss of infectivity represents almost certain extinction, and any improvement in the damaged function, however small, becomes a passport for rapid expansion. But these are extreme cases, and somewhat artificial ones. In "normal" evolutionary history, a lot of new "traits" should be evolving at the same time, according to the theory. So, competition for expansion, Haldane's problem, becomes a very real problem indeed. 
An important point that is often misunderstood is the importance of the expansion. The expansion of a mutation in the original population is critical to the theory. Indeed, each new variation, if it arises as a random event, arises in one individual, and will be confined to the descendants of that individual (let's call it "the original clone") unless it expands to great part of the population (or to all of it, if intermediates really must be cancelled in the process). It's this expansion that can "lower" the probabilistic barriers that, as we well know, make the generation of complex functional information well beyond any universal threshold of impossibility. But the expansion can happen by two different mechanisms: a) Genetic drift. That happens to neutral or quasi neutral traits, which are expanded by this random mechanism. No reproductive advantage is needed here. The problem is that all mutations have the same probability of being expanded by genetic drift. Therefore, genetic drift does not in any way lower the probabilistic barriers for anything. It is absolutely neutral for our reasoning, therefore essentially irrelevant. b) Natural selection. Here the expansion is linked to a reproductive advantage. So, there is a necessity factor in action. As said, NS can potentially lower the probabilistic barriers in some very specific cases, and it certainly does that in known microevolutionary events. But it is a hugely limited process, which can only be invoked in very specific cases, and for the generation of a few bits of functional information, not more than that. As I have tried to debate in my long OP! :) And you are right, Haldane's dilemma certainly applies to restrict even more its role in natural evolutionary history.
gpuccio
October 5, 2017, 06:10 AM PST
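The contrast gpuccio draws between expansion by drift and expansion by selection has a standard quantitative form: under Kimura's diffusion approximation, a new neutral mutation fixes with probability 1/(2N), while a beneficial one fixes with probability close to Haldane's classic ~2s. A sketch (the population size N and the s values are arbitrary illustrative choices):

```python
import math

def kimura_fix_prob(N, s):
    """Kimura's diffusion approximation for the fixation probability of a
    new mutation with selection coefficient s in a diploid population of
    size N, starting from a single copy (frequency 1/(2N))."""
    if s == 0:
        return 1 / (2 * N)  # neutral limit: drift alone
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
for s in (0.0, 0.001, 0.01, 0.05):
    print(f"s = {s:5.3f}: P(fix) ~ {kimura_fix_prob(N, s):.5f}")
```

Two things fall out directly: drift gives every mutation, good or bad alike, the same tiny 1/(2N) fixation chance; and even a clearly beneficial mutation (s = 0.01) is still lost to drift about 98% of the time, which is why each "expansion" is itself a probabilistic hurdle.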
I would be interested in how Haldane's Dilemma plays into the question of beneficial mutation rates. It would seem to me that the cost of reproduction, combined with "extremely rare" mutation rates, would make radical evolutionary change nearly impossible.
Florabama
October 5, 2017, 05:32 AM PST
Origenes: Thank you for your first comment! After all, the existence of this OP is your merit. :) "Surely there is enough here for several OP's." Yes, I suppose there is a lot of stuff here. Maybe too much! :) The merit, or responsibility, for that is shared by me with Gordon Davisson, who raised so many good points with his posts that I had to win my inherent laziness and answer all of them. "I find it interesting to see that, as you point out, that the writers of the paper are not straightforward about the meaning of the number 10^70." Yes, that's a big number, isn't it? Strangely, it's more or less the number given a lot of time ago by Axe for the number of sequences needed to find a folding sequence, if I remember well. I am not saying that there is a relationship, but it's a funny coincidence, isn't it? I must say that I have loved the rugged landscape paper since the first time I read it. That 10^70 is so appealing, so much ID style! I must say that the paper was signaled to me by Petrushka (or was it Zachriel? No, I think it was Petrushka) who strongly believed that it was hard evidence for the power of NS. As soon as I read it, I immediately thought that it was the opposite: absolute evidence of its limits. You are right about the use of words. I think that the authors realized all too well that they had a hot potato in their hands. So, they were honest and published the right facts and the right conclusions, because they obviously believe in their methodology and results, but I think they just tried to "smoothen" the impact by being smart with the words: "enormous range", "impractical", and, in the end, that completely unwarranted invocation of recombination as a convenient deus ex machina. However, I admire them and am really grateful to them for a very good research, one that really means a lot.
gpuccio
October 5, 2017, 05:30 AM PST
GPuccio, Surely there is enough here for several OP's. Take for instance your elucidating take on the paper "Experimental Rugged Fitness Landscape in Protein Sequence Space" — 'The rugged landscape experiment' (part 9 & 10). I find it interesting to see that, as you point out, that the writers of the paper are not straightforward about the meaning of the number 10^70. From the paper (emphasis added):
Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.
"Enormous" is correct of course, but the unsuspecting reader can easily think that there is an "enormous" amount of phages out there. But the reality is that the number under discussion, 10^70, is, as you say, a probabilistic resource that is beyond the reach of nature. As a comparison, the sun contains 10^57 atoms of hydrogen. Later in the paper the reader is again required to read between the lines (emphasis added):
By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical ....
"Impractical in the lab?" some readers might think. No, here by impractical is meant "impractical in this universe" or "never going to happen".
Origenes
October 5, 2017, 02:57 AM PST
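Origenes' reading of "impractical" can be checked with a quick order-of-magnitude calculation. The inputs below are deliberately generous outside assumptions, not figures from the paper: ~10^31 phage particles on Earth (a commonly cited rough estimate) and a one-hour generation time, sustained for the entire age of the universe.

```python
import math

# How many phage "trials" could nature ever run, versus the 10^70
# library size the paper extrapolates for 35 substitutions?
library         = 10 ** 70   # paper's extrapolated library size
phages_on_earth = 10 ** 31   # rough global phage count (assumption)
age_universe_s  = 4.3e17     # ~13.6 billion years in seconds
generation_s    = 3600       # optimistic one-hour generation (assumption)

total_trials = phages_on_earth * (age_universe_s / generation_s)
print(f"total phage trials ever possible ~ 10^{math.log10(total_trials):.0f}")
print(f"shortfall vs 10^70 library       ~ 10^{math.log10(library / total_trials):.0f}")
```

Even on these generous assumptions the total comes to roughly 10^45 trials, about 25 orders of magnitude short of 10^70, which is the sense in which "impractical" means "never going to happen" rather than "hard in the lab".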