Uncommon Descent Serving The Intelligent Design Community

What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson


An interesting discussion, still absolutely open, has taken place in the last few days between Gordon Davisson and me on the thread:

Some very good friends, like Dionisio, Mung and Origenes, seem to have appreciated the discussion, which indeed has touched important issues. Origenes has also suggested that it could be transformed into an OP.

Well, I thought that it was probably a good idea, and luckily it did not require much work. 🙂   So, here it is. Gordon Davisson’s posts are in italics. It’s a bit long, and I am sorry for that!

I thank in advance Gordon Davisson for the extremely good contribution he has already given, and for any other contribution he will give. He is certainly invited to continue the discussion here, if he likes (and I do hope he does!). Of course, anyone else who could be interested is warmly invited to join.  🙂

Gordon Davisson (post #5):

Why is this supposed to be a problem for “Darwinism”? A low rate of beneficial mutations just means that adaptive evolution will be slow. Which it is.

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual. Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit. If I’ve accounted for everything, the overall rate of fixation of beneficial mutations per generation should be: (fraction of mutations that’re beneficial) * (fraction of beneficial mutations that aren’t wiped out by genetic drift) * (# of mutations per individual) * (population).
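As a quick numerical sketch of this factorization (every value below is a hypothetical placeholder, not a measured rate):

```python
# Gordon Davisson's factorization of the population-wide fixation rate of
# beneficial mutations, with made-up illustrative numbers.
frac_beneficial = 1e-6        # fraction of mutations that are beneficial
frac_escape_drift = 0.02      # fraction not wiped out by drift (~2s for s = 0.01)
mutations_per_individual = 100
population = 1_000_000

rate = (frac_beneficial * frac_escape_drift
        * mutations_per_individual * population)
print(rate)  # beneficial fixations per generation, ~2 with these numbers
```

With these placeholder numbers the population as a whole fixes a couple of beneficial mutations per generation, even though any given individual almost never carries one — which is why the rate is computed population-wide rather than per individual.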

Florabama’s description is exactly wrong. Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection. You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter. (And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.)

gpuccio (post #11):

Gordon Davisson:

You say:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS has to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

That is simply impossible, because those “stepwise” mutations simply do not exist and never will; but even if we imagine that they exist, the process certainly requires a lot of time.

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all tracks of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

You say:

Beneficial mutations don’t have to happen “in a row”, they can happen entirely independently of each other, and spread independently via selection.

Yes, but only if each individual mutation confers a strong enough reproductive advantage. That must be true for each single specific aminoacid position of each single new functional protein that appears in natural history. Do you really believe that? Do you really believe that each complex functional structure can be deconstructed into simple steps, each conferring reproductive advantage? Do you believe that we can pass from “word” source code to “excel” source code by single byte variations (yes, I am generous here, because a single aminoacid has at most about 4.3 bits of information, not 8), each of them giving a better software which can be sold better than the previous version?

Maybe not even “credo quia absurdum” will suffice here. There are limits to the absurd that can be believed, after all!

You say:

You may be thinking of the argument from irreducible complexity, but that’s an argument that evolution depends on mutations that are only beneficial in combination, which is a different matter.

No, the argument of IC, as stated by Behe, is about functions which require the cooperation of many individual complex proteins. That is very common in biology.

The argument of functional complexity, instead, is about the necessity of having, in each single protein, all the functional information which is minimally necessary to give the function of the protein itself. How many AAs would that be, for example, for dynein? Or for the classic ATP synthase?

Here, the single functional element is so complex that it requires hundreds of specific aminoacids to be of any utility. If that single functional element must also work with other complex single elements to give the desired function (which is also the rule in biology), then the FC of the system is multiplied. That is the argument of IC, as stated by Behe. The argument for FC in a single functional structure is similar, but it is directly derived from the concept of CSI as stated by Dembski (and others before and after him).

And finally you say:

And FYI evolutionists dispute how much of a problem this actually is. But again, that’s another matter.

It’s not another matter. It’s simply a wrong matter.

Both FC and IC are huge problems for any attempt to defend the neo-darwinian theory. I am not surprised at all that “evolutionists” dispute that, however. See Tertullian’s quote above!

Gordon Davisson (post #35):

Hi, gpuccio. Sorry about my late reply (as usual, I’m afraid). Before I comment specifically to what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations. Note that all of these would be considered beneficial mutations:

* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).
* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

Your argument is (if I may oversimplify it a bit) essentially that the last two are vanishingly rare. But when we look at the overall rate of beneficial mutations, they’re mixed in with other sorts of beneficial mutations that’re completely irrelevant to what you’re talking about! Additionally, several types of mutations that’re critical in your argument but are not immediately beneficial aren’t going to be counted in the beneficial mutation rate:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

Furthermore, one of the reasons the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

Now, on to your actual argument:

And not as slow as it might appear, since the limiting rate is the rate of beneficial mutations over the entire population, not per individual.

Yes, but any “beneficial” mutation that appears in one individual will have to expand to a great part of the population, if NS has to have any role in lowering the probabilistic barriers.

That means that:

1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something!

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.
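The “roughly proportional” claim is a standard population-genetics result. A sketch using Kimura’s diffusion approximation (a textbook formula; the population size and selection coefficients below are illustrative only):

```python
import math

def p_fix(s, N):
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutation with selection coefficient s in a diploid
    population of size N (initial frequency 1/(2N))."""
    if s == 0:
        return 1 / (2 * N)                       # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# For small positive s this is close to Haldane's 2s, i.e. roughly
# proportional to how beneficial the mutation is:
for s in (0.001, 0.01, 0.1):
    print(s, p_fix(s, 10_000))
```

For s = 0.01 the result is about 0.02: even a clearly beneficial mutation is lost to drift roughly 98% of the time, which is why slightly beneficial mutations fix at a lower, but still nonzero, rate.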

Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?

If you can, I really admire your imagination.

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.

That is simply impossible, because those “stepwise” mutations simply do not exist and never will; but even if we imagine that they exist, the process certainly requires a lot of time.

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.
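One way to see the quantitative gap between selection-assisted and drift-only steps is a toy “expected tries” calculation (the parameters are hypothetical illustrations, not a model of P. falciparum):

```python
# Toy comparison: expected number of independent mutant lineages needed
# before one fixes, for a neutral step vs. a beneficial step.
N = 100_000          # haploid population size (hypothetical)
s = 0.01             # selection coefficient of a beneficial step (hypothetical)

p_neutral = 1 / N            # neutral fixation probability (haploid case)
p_beneficial = 2 * s         # Haldane's approximation for a beneficial step

tries_neutral = 1 / p_neutral        # on the order of N tries
tries_beneficial = 1 / p_beneficial  # on the order of 1/(2s) tries
print(tries_neutral, tries_beneficial)
```

Neutral steps are enormously slower per step, but in very large populations (such as malaria parasites) the supply of new mutants is also enormous, which is why paths containing some unselected steps are not automatically ruled out.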

Moreover, as the process seems not to leave any trace of itself in the proteomes we can observe today, because those functionally intermediate forms simply do not exist, we must believe that each time the expansion of the new trait, with its “precious” single aminoacid mutation, must be complete, because it seems that it can erase all tracks of the process itself.

So, simple imagination is not enough here: you really need blind faith in the impossible. Credo quia absurdum, or something like that.

Except we sometimes do find such traces. In the case of atovaquone resistance, many of the intermediates were found in the wild. For another example, in https://uncommondescent.com/intelligent-design/double-debunking-glenn-williamson-on-human-chimp-dna-similarity-and-genes-unique-to-human-beings/, VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

Then you say:

Although many beneficial mutations are wiped out by genetic drift before they have a chance to spread through the population, so that decreases the effective rate a bit.

Absolutely! And it’s not a bit, it’s a lot.

If you look at the classic paper about rugged landscape:

http://journals.plos.org/ploso…..ne.0000096

you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS. Just think about the implications of that simple fact.

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination. Recombination among neutral or surviving entities may suppress negative mutations and thus escape from mutation-selection-drift balance. Although the importance of recombination or DNA shuffling has been suggested [30], we did not include such mechanisms for the sake of simplicity. However, the obtained landscape structure is unaffected by the involvement of recombination mutation although it may affect the speed of search in the sequence space.

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Or their model of the fitness landscape might not be completely accurate. I’m far from an expert on the subject, but from my read of the paper:

* They measured how much infectivity (function) they got vs. population size (larger populations evolved higher infectivity before stagnating), fit their results to a theoretical model of the fitness landscape, and used that to extrapolate to the peak possible infectivity … which matched closely to that of the wild type. But their experimental results only measured relative infectivities between 0.0 and 0.52 (using a normalized logarithmic scale), and the extrapolation from 0.52 to 1.0 is purely theoretical. How well does reality match the theoretical model in the region they didn’t measure?

* But it’s worse than that, because their measurements were made on one functional “mountain”, and the wild type appears to reside on a different mountain. Do both mountains have the same ruggedness and peak infectivity? They’re not only extrapolating from the base of a mountain to its peak, but from the base of one mountain to the peak of another. The fact that the infectivity of the wild type matches closely with their theoretical extrapolation of the peak is suggestive, but hardly solid evidence.

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, except that there are some conclusions available from the region of the landscape that they did make measurements on: between random sequences and partial function. They say:

The landscape structure has a number of implications for initial functional evolution of proteins and for molecular evolutionary engineering. First, the smooth surface of the mountainous structure from the foot to at least a relative fitness of 0.4 means that it is possible for most random or primordial sequences to evolve with relative ease up to the middle region of the fitness landscape by adaptive walking with only single substitutions. In fact, in addition to infectivity, we have succeeded in evolving esterase activity from ten arbitrarily chosen initial random sequences [17]. Thus, the primordial functional evolution of proteins may have proceeded from a population with only a small degree of sequence diversity.

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can. And they also showed that (as with the atovaquone resistance example) evolution doesn’t require stepwise-beneficial paths either. They found that stepwise-beneficial paths existed up to a relative fitness of 0.4, but they experimentally achieved relative fitnesses up to 0.52! So even with the small populations and limited evolutionary mechanisms they used, they showed it was possible to evolve significantly past the limits of stepwise-beneficial paths.

I don’t have to imagine this. They saw it happen.

gpuccio (posts 36–39, 41, 46, 48):


Gordon Davisson:

First of all, thank you for your detailed and interesting comments to what I wrote. You raise many important issues that deserve in depth discussion.

I will try to make my points in order, and I will split them in a few different posts:

1) The relevance of the rate of “beneficial” mutations.

You say:

Before I comment specifically to what you said, I need to make a general comment that I still don’t see how the original point — that beneficial mutations are rare — refutes evolution. The arguments you’re making against evolution’s ability to create complex functional systems don’t seem to have a very close connection to the rate of beneficial mutations.

I don’t agree. As you certainly know, the whole point of ID is to evaluate the probabilistic barriers that make it impossible for the proposed mechanism of RV + NS to generate new complex functional information. The proposed mechanism relies critically on NS to overcome those barriers, therefore it is critical to understand quantitatively how often RV occurs that can be naturally selected, expanded and fixed.

Without NS, it is absolutely obvious that RV cannot generate anything of importance. Therefore, it is essential to understand and demonstrate how much NS can have a role in modifying that obvious fact, and the rate of naturally selectable mutations (not of “beneficial” mutations, because a beneficial mutation which cannot be selected, since it does not confer a sufficient reproductive advantage, is of no use for the model) is of fundamental importance in the discussion.

2) Types of “beneficial” mutations (part 1).

You list 5 types of beneficial mutations. Let’s consider the first 3 types:

Note that all of these would be considered beneficial mutations:
* Minor changes to an existing functional thing (protein, regulatory region, etc) that improve its function slightly.
* Minor changes to an existing functional thing that change its function slightly, in a way that makes it fit the organism’s current environment better.
* Changes that decrease function of something that’s overdoing its role (e.g. the mutation discussed here, which winds up giving people unusually strong bones).

Well, I would say that these three groups have two things in common:

a) They are mutations which change the functional efficiency (or inefficiency) of a specific function that already exists (IOWs, no new function is generated).

b) The change is a minor change (IOWs, it does not imply any new complex functional information).

OK, I am happy to agree that, however common “beneficial” mutations may be, they almost always, if not always, are of this type. That’s what we call “microevolution”. It exists, and nobody has ever denied that. Simple antibiotic resistance has always been a very good example of that.

Of course, while ID does not deny microevolution, ID theory definitely shows its limits. They are:

a) As no new function is generated, this kind of variation can only tweak existing functions.

b) While the changes are minor, they can accumulate, especially under very strong selective pressure, as in the case of antibiotic resistance (including malaria resistance). But gradual accumulation of this kind of tweaking takes a long time even under extremely strong pressure, requires a continuous tweaking pathway that does not always exist, and is limited, in any case, by how much the existing function can be optimized by simple stepwise mutations.

I will say more about those points when I answer about malaria resistance and the rugged landscape experiment. I would already state here, however, that both those scenarios, that you quote in your discussion, are of this kind, IOWs they fall under one of these three definitions of “beneficial” mutations.

3) Types of “beneficial” mutations (part 2).

The last two types are, according to what you say:

* Mutations that create new functional systems.
* Mutations that are partway along a path to new functional systems, and are beneficial by themselves.

These are exactly those kinds of “beneficial” mutations that do not exist.

Let’s say for the moment that we have no example at all of them.

For the first type, are you suggesting that there are simple mutations that “create new functional systems”? Well, let’s add an important word:

“create new complex functional systems”?

That word is important, because, as you certainly know, the whole point of ID is not about function, but about complex function. Nobody has ever denied that simple function can arise by random variation.

So, for this type, I insist: what examples do you have?

You may say that even if you have no examples, it’s my burden to show that it is impossible.

But that is wrong. You have to show not only that it is possible, but that it really happens and has real relevance to the problem we are discussing. We are making empirical science here, not philosophy. Only ideas supported by facts count. So, please, give the facts.

I would say that there is absolutely no reason to believe that a “simple” variation can generate “new complex functional systems”. There is no example of that in any complex system. Can the change of a letter generate a new novel? Can the change of a byte generate a new complex software, with new complex functions? Can a mutation of 1 – 2 aminoacids generate a new complex biological system?

The answer is no, but if you believe differently, you are welcome: just give facts.

In the last type of beneficial mutations, you hypothesize, if I understand you well, that a mutation can be part of the pathway to a new complex functional system, which still does not exist, but can be selected because it is otherwise beneficial.

So, let’s apply that to the generation of a new functional protein, like ATP synthase. Let’s say the beta chain of it, which, as we all know, has hundreds of specific aminoacid positions, conserved from bacteria to humans (334 identities between E. coli and humans).

Now, what you are saying is that we can in principle deconstruct those 334 AA values into a sequence of 334 single mutations, or if you prefer 167 two AAs mutations, each of which is selected not because the new protein is there and works, but because the intermediate state has some other selectable function?

Well, I say that such an assumption is not reasonable at all. I see no logical reason why that should be possible. If you think differently, please give facts.

I will say it again: the simple idea that new complex functions can be deconstructed into simple steps, each of them selectable for some not specified reason, is pure imagination. If you have facts, please give them; otherwise that idea has no relevance in a scientific discussion.
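For scale, the functional-information arithmetic behind this argument can be made explicit. A rough sketch, which treats each conserved position as requiring one specific residue out of 20 (an upper bound, since many positions tolerate some substitutions):

```python
import math

bits_per_aa = math.log2(20)      # ~4.32 bits for a fully specified position
conserved_positions = 334        # ATP synthase beta chain identities,
                                 # E. coli vs. human (figure from the text)
total_bits = conserved_positions * bits_per_aa
print(bits_per_aa, total_bits)   # ~4.32 bits/position, ~1443 bits total
```

Whatever one thinks the true per-position figure is, the product over hundreds of conserved positions is what the probabilistic-barrier argument is about.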

4) Other types of mutation?

You add two further variations in your list of mutations. Here they are:

* Mutations that move closer to a new functional system (or higher-functioning version of an existing system), but aren’t actually there yet.
* Mutations that produce new functional systems that don’t immediately contribute to fitness.

I am not sure that I understand what you mean. If I understand correctly, you are saying that there are mutations which in the end will be useful, but for the moment are not useful.

But, then, they cannot be selected as such. Do you realize what that means?

It means that they can certainly occur, but they have exactly the same probability to occur as any other mutation. Moreover, as they are not selected, they remain confined to the original individual or clone, unless they are fixed by genetic drift.

But again, they have exactly the same probability as any other mutation to be fixed by genetic drift.

That brings us to a very strong conclusion that is often overlooked by darwinists, especially the neutralists:

Any mutation that does not have the power to be naturally selected is completely irrelevant in regard to the probabilistic barriers because its probability is exactly the same as any other mutation to occur or to be fixed by drift.

IOWs, only mutations that can be naturally selected change the game in regard to the computation of the probabilistic barriers. Nothing else. All variation which cannot be naturally selected is irrelevant, because it is just a new random state, and is already considered when we compute the probabilities for a random search to get the target.
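The drift part of this claim matches a classic neutral-theory identity: a diploid population of size N produces 2Nμ new copies of a neutral mutation per generation, each fixing with probability 1/(2N), so the neutral substitution rate is simply μ, independent of population size. A sketch with illustrative values:

```python
# Neutral substitution rate = mutation rate, independent of N (Kimura).
N = 10_000       # population size (any value gives the same result)
mu = 1e-8        # per-site neutral mutation rate per generation (hypothetical)

new_mutants_per_gen = 2 * N * mu     # new neutral copies arising each generation
p_fix_neutral = 1 / (2 * N)          # fixation probability of each copy
substitution_rate = new_mutants_per_gen * p_fix_neutral
print(substitution_rate)             # equals mu (up to float rounding)
```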

5) Optimal proteins?

You say:

Furthermore, one of the reasons for the rate of beneficial mutations may be low is that there may simply not be much room for improvement. For example, the experiment you cited about evolution on a rugged fitness landscape suggests that the wild-type version of the protein they studied may be optimal — it cannot be improved, whether by evolution or intelligent design or whatever. If that’s correct, the rate of beneficial mutations to this protein will be exactly zero, but that’s not because of any limitation of what mutations can do.

OK, I can partially agree. The proteins as we see them now are certainly optimal in most cases. But they were apparently optimal just from the beginning.

For example, our beloved ATP synthase beta chain already had most of its functional information in LUCA, according to what we can infer from homologies. And, as I have shown in my OPs about the evolution of information in vertebrates, millions of bits of new functional information have appeared at the start of the vertebrate branch, rather suddenly, and then remained the same for 400+ million years of natural history. So, I am not sure that the optimal state of protein sequences is any help for neo-darwinism.

Moreover, I should remind you that protein coding genes are only a very small part of genomes. Non coding DNA, which according to darwinists is mostly useless, can certainly provide ample space for beneficial mutations to occur.

But I will come back to that point in the further discussion.

I would like to specify that my argument here is not about determining exactly how common beneficial mutations are in absolute terms, but rather about showing that rare beneficial mutations are certainly a problem for neo-darwinism, a very big problem indeed, especially considering that (almost) all the examples we know of are examples of micro-evolution, and do not generate any new complex functional information.

6) The threshold for selectability.

You say:

I’d disagree slightly here. There isn’t any particular “strong enough” threshold; the probability that a beneficial mutation will “escape genetic drift” is roughly proportional to how beneficial it is. Mutations that’re only slightly beneficial thus become fixed at a lower (but still nonzero) rate.

I don’t think we disagree here. Let’s say that very low reproductive advantages will not be empirically relevant, because they will not significantly raise the probability of fixation above the generic one from genetic drift.

On the other hand, even if there is a higher probability of fixation, the lower it is, the lower will be the effect on probabilistic barriers. Therefore, only a significant reproductive advantage will really lower the probabilistic barriers in a relevant way.

7) The argument from incredulity.

You say:

I’ll discuss some of these points more below, but just two quick things here: first, this is just an argument from incredulity, not an argument from actual knowledge or evidence. Second, the article you cited about a rugged fitness landscape showed that they were able to evolve a new functional protein starting from a random polypeptide (the limit they ran into wasn’t getting it to function, but in optimizing that function).

I really don’t understand this misuse of the “argument from incredulity” issue (you are, of course, not the only one to use it improperly).

The scenario is very simple: in science, I definitely am incredulous of any explanation which is not reasonable, has no explanatory power, and especially is not supported by any fact.

This is what science is. I am not a skeptic (I definitely hate that word), but I am not a credulous person who believes in things only because others believe in them.

You can state any possible theory in science. Some of them will be logically inconsistent, and we can reject from the start. But others will be logically possible, but unsupported by observed facts and by sound reasoning. We have the right and the duty to ignore those theories as devoid of any true scientific interest.

This is healthy incredulity. The opposite of blind faith.

I will discuss the rugged landscape issue in detail, later.

8) Malaria resistance.

In the end, the only facts you provide in favour of the neo-darwinist scenario are those about malaria resistance and the rugged landscape experiment. I will deal with the first here, and with the second in the next post.

You say:

This is simply wrong. Take the evolution of atovaquone resistance in P. falciparum (the malaria parasite). Unless I’m completely misreading the diagram Larry Moran gives in http://sandwalk.blogspot.com/2…..ution.html, one of the resistant variants (labelled “K1”) required 7 mutations in a fairly specific sequence, and at most 4 of them were beneficial. In order for this variant to evolve (which it did), it had to pass at least 3 steps unassisted by selection (which you claim here is impossible) and all 4 beneficial mutations had to overcome genetic drift.

At least in this case, beneficial intermediates are neither as rare nor as necessary as you claim.

Now, let’s clarify. In brief, my point is that malaria resistance, like simple antibiotic resistance in general, is one of the few known cases of microevolution.

As I have already argued in my post #36, microevolutionary events are characterized by the following:

a) No new function is generated, but only a tweaking of some existing function.

b) The changes are minor. Even if more than one mutation accumulates, the total functional information added is always small.

I will discuss those two points for malaria resistance in the next point, but I want to clarify immediately that you are misreading what I wrote when you say:

This is simply wrong.

Indeed, you quote my point 2) from post #11:

“2) Each of those “beneficial mutations” (non existing, IMO, but let’s suppose they can exist) has anyway to escape drift and be selected and expanded by NS, so that it is present in most, or all the population. That’s how the following mutation can have some vague probability to be added. That must happen for each single step.”

But you don’t quote the premise, in point 1:

“1) The “beneficial” mutation must not only be “beneficial” in a general sense, but it must already, as it is, confer a reproductive advantage to the individual clone where it was generated. And the reproductive advantage must be strong enough to significantly engage NS (against the non-mutated form, IOWs all the rest of the population), and so escape genetic drift. That is something! Can you really think of a pathway to some complex new protein, let’s say dynein, a pathway which can “find” hundreds of specific, highly conserved aminoacids in a protein thousands of aminoacids long, whose function is absolutely linked to a very complex and global structure, a pathway where each single new mutation which changes one aminoacid at a time confers a reproductive advantage to the individual, by gradually increasing, one step at a time, the function of a protein which still does not exist?”

I have emphasized the relevant part, that you seem to have ignored. Point 2 is referring to that scenario.

It is rather clear that I am speaking of the generation of new complex functional information, and I even give an example, dynein.

So, I am not saying that no beneficial mutation can be selected, or that when that happens, like in microevolution, we cannot find the intermediate states.

What I am saying is that such a model cannot be applied to the generation of new complex functional information, like dynein, because it is impossible to deconstruct a new complex functional unit into simple steps, each of them naturally selectable, while the new protein still does not even exist.

So, what I say is not wrong at all, and my challenge to imagine such a pathway for dynein, or for ATP synthase beta chain, or for any of the complex functional proteins that appear in the course of natural history, or to find intermediates of that pathway, remains valid.

But let’s go to malaria.

I have read the Moran page, and I am not sure of your interpretation that 7 mutations (4 + 3) are necessary to give the resistance. Indeed, Moran says:

“It takes at least four sequential steps with one mutation becoming established in the population before another one occurs.”

But the point here is not if 4 or 7 mutations are needed. The point is that this is a clear example of microevolution, although probably one of the most complex that have been observed.

Indeed:

a) There is no generation of a new complex function. Indeed, there is no generation of a new function at all, unless you consider becoming resistant to an antibiotic, because a gene loses the function to take up the antibiotic, a new “function”. Of course, we can define function as we like, but the simple fact is that here there is a useful loss of function, what Behe calls “burning the bridges to prevent the enemy from coming in”.

b) Whatever our definition of function, the change here is small. It is small if it amounts to 4 AAs (16 bits at most), and it is small if it amounts to 7 aminoacids (28 bits at most).

OK, I understand that Behe puts the edge at two AAs in his book. Axe speaks of 4, from another point of view.

Whatever. The edge is certainly thereabout.

When I have proposed a threshold of functional complexity to infer design for biological objects, I have proposed 120 bits. That’s about 35 AAs.
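For reference, the bit values used here can be reproduced with a one-line calculation. A fully specified amino acid position carries log2(20) ≈ 4.32 bits; the figures above (16 and 28 bits, and the 35-AA equivalent of 120 bits) assume slightly fewer bits per position, presumably allowing for some redundancy. A sketch of the exact values:

```python
import math

BITS_PER_AA = math.log2(20)   # ~4.32 bits if a position must be one
                              # specific residue out of 20

# Upper bounds for the malaria-resistance changes discussed above
# (the post uses roughly 4 bits per AA, hence its 16- and 28-bit figures):
four_aa = 4 * BITS_PER_AA     # ~17.3 bits
seven_aa = 7 * BITS_PER_AA    # ~30.2 bits

# The proposed 120-bit design threshold, expressed in fully specified AAs
# (the post's "about 35 AAs" allows some redundancy per position):
threshold_aa = 120 / BITS_PER_AA   # ~27.8 AAs
```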

Again, we must remember that all known microevolutionary events have in common a very favourable context which makes optimization easier:

a) They happen in rapidly reproducing populations.

b) They happen under extreme environmental pressure (the antibiotic).

c) The function is already present and it can be gradually optimized (or, like in the case of resistance, lost).

d) Only a few bits of informational change are enough to optimize or lose the function.

None of that applies to the generation of new complex functional information, where the function does not exist, the changes are informationally huge, and environmental pressure is reasonably much less than reproducing under the effect of a powerful antibiotic.

8) VJ’s point:

You say:

VJTorley found that supposedly-novel genes in the human genome actually have very near matches in the chimp genome.

It’s funny that you quote a point that I consider a very strong argument for ID.

First of all, VJ’s arguments were a rebuttal of some statements by Cornelius Hunter, with whom I often disagree.

Second, I am not sure that ZNF843 is a good example, because I blasted the human protein and found some protein homologs in primates, with high homology.

Third, there are however a few known human proteins which have no protein counterpart in other primates, as VJ correctly states. These seem to have very good counterparts in non coding DNA of primates.

So, if we accept these proteins as real and functional (unfortunately not much is known about them, as far as I know), then what seems to happen is that:

a) The sequence appears in some way in primates as a non coding sequence. That means that no NS can act on the sequence as the representation of a protein.

b) In some way, the sequence acquires a transcription start in humans, and becomes an ORF. So the protein appears for the first time in humans and, if we accept the initial assumption, it is functional.

Well, if that kind of process is confirmed, it will be very strong evidence of design: the sequence is prepared in primates, where it seems to have no function at all, and is activated in humans, when needed.

The origin of functional proteins from non coding DNA, which has been gaining recognition in recent years, is definitive evidence of design. NS cannot operate on non coding sequences, least of all turn them into good protein coding genes. So, the darwinian mechanism is out, in this case.

9) The rugged landscape experiment

OK, this is probably the most interesting part.

For the convenience of anyone who may be reading this, I give the link to the paper:

http://journals.plos.org/ploso…..=printable

First of all, I think we can assume, for the following discussion, that the wild-type version of the protein they studied is probably optimal, as you suggested yourself. In any case, it is certainly the most functional version of the protein that we know of.

Now, let’s try to understand what this protein is, and how the experiment was realized.

The protein is:

G3P_BPFD (P03661).

Length: 424 AAs.

Function (from Uniprot):

“Plays essential roles both in the penetration of the viral genome into the bacterial host via pilus retraction and in the extrusion process. During the initial step of infection, G3P mediates adsorption of the phage to its primary receptor, the tip of host F-pilus. Subsequent interaction with the host entry receptor tolA induces penetration of the viral DNA into the host cytoplasm. In the extrusion process, G3P mediates the release of the membrane-anchored virion from the cell via its C-terminal domain”

I quote from the paper:

Infection of Escherichia coli by the coliphage fd is mediated by the minor coat protein g3p [21,22], which consists of three distinct domains connected via flexible glycine-rich linker sequences [22]. One of the three domains, D2, located between the N-terminal D1 and C-terminal D3 domains, functions in the absorption of g3p to the tip of the host F-pilus at the initial stage of the infection process [21,22]. We produced a defective phage, ‘‘fdRP,’’ by replacing the D2 domain of the fd-tet phage with a soluble random polypeptide, ‘‘RP3-42,’’ consisting of 139 amino acids [23].

So, just to be clear:

1) The whole protein is involved in infectivity

2) Only the central domain has been replaced by random sequences

So, what happens?

From the paper:

The initial defective phage fd-RP showed little infectivity, indicating that the random polypeptide RP3-42 contributes little to infectivity.

Now, infectivity (fitness) was measured on a logarithmic scale, in particular as:

W = ln(CFU) (CFU = colony forming units/ml)

As we can see in Fig. 2, the fitness of the mutated phage (fd-RP) is 5, that is:

CFU = about 148 (e^5)

Now, again from Fig. 2, we can see that the fitness of the wildtype protein is about 22.5, that is:

CFU = about 4.8 billion

So, the random replacement of the D2 domain certainly reduces infectivity a lot, and it is perfectly correct to say that the fd-RP phage “showed little infectivity”.

Indeed, infectivity has been reduced by a factor of about 32.6 million!

But still, it is there: the phage is still infective.

What has happened is that by replacing part of the g3p protein with random sequences, we have “damaged” the protein, but not to the point of erasing its function completely. The protein is still there, and in some way it can still work, even with the heavy damage/deformation induced by our replacement.

IOWs, the experiment is about retrieving an existing function which has been artificially reduced, but not erased. No new function is generated, but an existing reduced function is tweaked to retrieve as much as possible of its original functionality.

This is an important point, because the experiment is indeed one of the best contexts to measure the power of RM + NS in the most favorable conditions:

a) The function is already there.

b) Only part of the protein has been altered.

c) Phages are obviously a very good substrate for NS.

d) The environmental pressure is huge and directly linked to reproductive success (a phage which loses infectivity simply cannot reproduce).

IOWs, we are in a context where NS should really operate at its best.

Now, what happens?

OK, some infectivity is retrieved by RM. How much?

At the maximum of success, and using the largest library of mutants, the retrieved infectivity is about 14.7 (see again Fig. 2). Then the adaptive walk stops.

Now, that is a good result, and the authors are certainly proud of it, but please don’t be fooled by the logarithmic scale.

An infectivity of 14.7 corresponds to:

about 2.4 million CFU

So, we have an increase of:

about 17,000 times, as stated by the authors.

But, as stated by the authors, the fitness would still have to increase by about 2000 times (a fitness difference of 7.6) to reach the functionality of the wild type. That means passing from:

2.4 million CFU

to

4.8 billion CFU

So, even if some good infectivity has been retrieved, we are still 2000 times lower than the value in the wild type!

And that’s the best they could achieve.
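Since the paper's fitness measure is W = ln(CFU), these multiples can be checked directly. A sketch (the W values are read off Fig. 2 and are therefore approximate):

```python
import math

def cfu(W):
    """Invert the paper's fitness measure, W = ln(CFU)."""
    return math.exp(W)

# Fitness values read off Fig. 2 of the paper (approximate):
fd_rp = cfu(5.0)     # defective phage fd-RP: ~148 CFU/ml
best  = cfu(14.7)    # best evolved variant:  ~2.4 million CFU/ml

# Improvement achieved by the adaptive walk
# (~16,000-fold here; the paper states ~17,000):
gain = best / fd_rp

# The remaining gap to the wild type corresponds to a fitness
# difference of about 7.6, i.e. roughly a 2000-fold shortfall:
shortfall = math.exp(7.6)
```

On a logarithmic scale small reading differences change the multiples noticeably, which is why the exact factors quoted vary slightly; the orders of magnitude are what matter.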

Now, why that limit?

The authors explain that the main reason for that is the rugged landscape of protein function. That means that RM and NS achieve some good tweaking of the function, but starting from different local optima in the landscape, and those local optima can go only that far.

The local optimum corresponding to the wildtype has never been found. See the paper:

The sequence selected finally at the 20th generation has ~W = 0.52 but showed no homology to the wild-type D2 domain, which was located around the fitness of the global peak. The two sequences would show significant homology around 52% if they were located on the same mountain. Therefore, they seem to have climbed up different mountains

The authors conclude that:

The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Now, having tried to describe in some detail the experiment itself, I will address your comments.

10) Your comments about the rugged landscape paper

You say:

That’s not exactly what they say. Here’s the relevant paragraph of the paper (with my emphasis added):

But it is exactly what they say!

Let’s see what I wrote:

“you will see that the authors conclude that a starting library of 10^70 mutations would be necessary to find the wild-type form of the protein they studied by RM + NS.”

(emphasis added)

Now let’s see what they said:

By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

(I have kept your emphasis).

So, the point is that, according to the authors, a library of 10^70 sequences would be necessary to find the wildtype by random substitutions only (plus, I suppose, NS).

That’s exactly what I said. Therefore, your comment, that “That’s not exactly what they say” is simply wrong.

Let me clarify further: 10^70 is a probabilistic resource that is beyond the reach not only of our brilliant researchers, but of nature itself!

It seems that your point is that they also add that, given that “such a huge search is impractical” (what a politically correct adjective here!), that should:

“imply that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

which is the part you emphasized.

As if I had purposefully left out such a clarifying statement!

Well, of course I have purposefully left out such a clarifying statement, but not because I was quote-mining, but simply because it is really pitiful and irrelevant. Let’s say that I wanted to be courteous to the authors, who have written a very good paper, with honest conclusions, and only in the end had to pay some minimal tribute to the official ideology.

You see, when you write a paper, and draw the conclusions, you are taking responsibilities: you have to be honest, and to state only what can be reasonably derived from the facts you have given.

And indeed the authors do that! They correctly draw the strong conclusion that, according to their data, RM + NS only cannot find the wildtype in their experiment (IOWs, the real, optimal function), unless we can provide a starting library of 10^70 sequences, which, as said, is beyond the reach of nature itself, at least on our planet. IOWs, let’s say that it would be “impractical”.

OK, that’s the correct conclusion according to their data. They should have stopped here.

But no, they cannot simply do that! So they add that such a result:

implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.

Well, what is that statement? Just an act of blind faith in neo-darwinism, which must be true even when facts falsify it.

Is it a conclusion derived in any way from the data they presented?

Absolutely not! There is nothing in their data that suggests such a conclusion. They did not test recombination, or other mechanisms, and therefore they can say absolutely nothing about what it can or cannot do. Moreover, they don’t even offer any real support from the literature for that statement. They just quote one single paper, saying that “the importance of recombination or DNA shuffling has been suggested”. And yet they go well beyond a suggestion, they say that their “conclusion” is implied. IOWs logically necessary.

What a pity! What a betrayal of scientific attitude.

If they really needed to pay homage to the dogma, they could have just said something like “it could be possible, perhaps, that recombination helps”. But “imply”? Wow!

But I must say that you too take some serious responsibility in debating that point. Indeed, you say:

In other words, they used a simplified model of evolution that didn’t include all actual mechanisms, and they think it likely that’s why their model says the wild type couldn’t have evolved with a reasonable population size. So it must’ve been intelligent design… or maybe just homologous recombination. Or some other evolutionary mechanism they didn’t include.

Well, they didn’t use “a simplified model of evolution”. They tested the official model: RM + NS. And it failed!

Since it failed, they must offer some escape. Of course, some imaginary escape, completely unsupported by any facts.

But the failure of RM + NS is definitely supported by facts!

I would add that I cannot see how one can think that recombination can work any miracle here: after all, the authors themselves have said that the local optimum of the wildtype has not been found. The problem here is how to find it. Why should recombination of existing sequences, which share no homology with the wildtype, help at all in finding the wildtype? Mysteries of blind faith.

And have the authors, or anyone else, made new experiments that show how recombination can solve the limit they found? Not that I know. If you are aware of that, let me know.

Then you say:

Or their model of the fitness landscape might not be completely accurate.

Interesting strategy. So, if the conclusions of the authors, conclusions drawn from facts and reasonable inferences, are not those that you would expect, you simply doubt that their model is accurate. Would you have had the same doubts, had they found that RM + NS could easily find the wildtype? Just wondering…

And again:

So between the limitations of their simulation of actual evolutionary processes and the limitations of the region of the landscape over which they gathered data, I don’t see how you can draw any particularly solid conclusions from that study.

Well, like you, I am not an expert on this kind of model. I accept the conclusions of the authors, because their methodology and reasoning seem accurate. You doubt them. But should I remind you that they are mainstream authors, certainly not IDists, and that their conclusions must have surprised them first of all? I don’t know, but when serious researchers publish results that are probably not what they expected, and that are not what others expect, they must be serious people (except, of course, for the final note about recombination, but anyone can make mistakes after all!).

Then your final point:

This seems to directly refute your claim that stepwise-beneficial mutations cannot produce functional proteins. They showed that it can.

No, for a lot of reasons:

a) We are in a scenario of tweaking an existing, damaged function to retrieve part of it. We are producing no new functional protein, just “repairing” as much as possible some important damage.

b) That’s why finding lower levels of function is rather easy: it is not complex at all, and it is within the reach of the probabilistic resources of the system.

I will try to explain it better. Let’s say that you have a car, and that its body has been seriously damaged in a car accident. That’s our protein with its D2 domain replaced by a random sequence of AAs.

Now, you do not have the money to buy the new parts that would bring back the old body in all its splendor (the wildtype).

So, you choose the only solution you can afford: you take a hammer, and start giving gross blows to the body, to reduce the most serious deformations, at least a little.

The blows you give need not be very precise or specific: if there is some part which is definitely too far out of line, a couple of gross blows will make it less prominent. And so on.

Of course, the final result is very far from the original: let’s say 2000 times less beautiful and functional.

However, it is better than what you started with.

IOWs, you are attempting a low-information fix: a repair which is gross, but somewhat efficient.

And, of course, there are many possible gross forms that you can achieve by your hammer, and that have more or less the same degree of “improvement”.

On the contrary, there is only one form that satisfies the original request: the perfect parts of the original body.

So, a gross repair has low informational content. A perfect repair has very high informational content.

That’s what the rugged landscape paper tells us: the conclusion, derived from facts, is perfectly in line with ID theory. Simple function can easily be reached with some probabilistic resources, by RV + NS, provided that the scenario is one of tweaking an existing function, and not of generating a new complex one.

It’s the same scenario of malaria resistance, or of other microevolutionary events.

But the paper tells us something much more important: complex function, the kind with a high informational content, cannot realistically be achieved by those mechanisms, not even in the most favorable NS scenario, with an existing function, the opportunity to tweak it with high mutation rates and highly reproducing populations, and the direct relevance of the function to reproduction.

Complex function cannot be found, not even in those conditions. The wildtype remains elusive, and, if the authors’ model is correct, which I do believe, will remain elusive in any non design context.

And, if RV and NS cannot even do that, how can they hope to just start finding some new, complex, specific function, like the sequence of ATP synthase beta chain, or dynein, or whatever you like, starting not from an existing, damaged but working function, but just from scratch?

OK, this is it. I think I have answered your comments. It was some work, I must say, but you certainly deserved it!

Addendum:

By the way, in that paper we are dealing with a 139 AAs sequence (the D2 domain).

ATP synthase beta chain is 529 AAs long, and has 334 identities between E. coli and humans, for a total homology of 663 bits.

Cytoplasmic dynein 1 heavy chain 1 is 4646 AAs long, and has 2813 identities between fungi and humans, for a total homology of 5769 bits.

These are not the 16 – 28 bits of malaria resistance. Not at all.
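The homology figures above are BLAST-style bit scores. As a rough sanity check (an illustration only, not the actual BLAST computation, which also depends on the scoring matrix and gap penalties), they work out to about 2 bits per identical residue, which puts them far above the malaria-resistance numbers:

```python
# BLAST-style bit scores quoted above, per identical residue (rough check):
atp_bits_per_identity    = 663 / 334      # ~2.0 bits per identity
dynein_bits_per_identity = 5769 / 2813    # ~2.1 bits per identity

# Scale relative to the 28-bit upper bound for malaria resistance:
atp_ratio    = 663 / 28      # ~24x
dynein_ratio = 5769 / 28     # ~206x
```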

OK, that’s all for the moment. Again, I apologize for the length of it all!  🙂

Comments
Origenes: Again, I think that we agree on everything, but probably I have not made clear enough the thread of my reasoning, which however is still incomplete. I do not have the time now, but later in the day I will try to explain why we essentially agree, and if possible to complete my reasoning with the part about functional protein space. 🙂
gpuccio
October 23, 2017, 11:18 PM PDT
The politely dissenting interlocutors seem practically gone from this interesting technical discussion thread, but apparently gpuccio and his follow-up commenters (Origenes, Mung,...) are "cooking" something tasty here... This thread keeps attracting readers in relatively large numbers:
Popular Posts (Last 30 Days): What are the limits of Natural Selection? An interesting… (2,234) [since October 5]; Violence is Inherent in Atheist Politics (2,016); Selective Horrid Doubt (1,319); Sweeping the Origin of Life Under the Rug (1,004); Has the missing matter of our universe finally been found? (845)
Dionisio
October 23, 2017, 08:50 PM PDT
gpuccio, Please, would you mind commenting on this? Thanks. "...all mutational types have worked in concert with evolutionary forces to generate the current human brain..." https://link.springer.com/content/pdf/10.1186%2Fs12915-017-0409-z.pdf "May the [evolutionary] force[s] be with you" [Star Wars] 🙂 [emphasis added]
Dionisio
October 23, 2017, 08:36 PM PDT
GPuccio @287
Gpuccio: … the role of NS is to expand a selectable step so that it is no more confined to the initial individual, but spreads to the population. So, the probabilities of having a new “beneficial” mutation that can be added to the first are much higher.
That is true of course, but what if there is no second new beneficial mutation that can be added to the first? IOWs what if the first selectable step has no follow-up and leads to a dead end? Or what if the original sequence was only a few mutations away from a breakthrough evolutionary discovery, but was led astray by NS due to a rather trivial selectable step?
Gpuccio: … So, while it is absolutely true that it is always RV which generates the novelty, it is also true that NS, in this extreme scenario, has a fundamental role, because it provides more realistic probabilistic resources. That’s what I mean when I say that NS, if and when it can act, “lowers the probabilistic barriers.”
I am sorry, but I am not convinced. It seems to me that probabilistic barriers are only lowered if, by sheer dumb luck, the area ‘chosen’ by NS happens to contain, nicely lined up, further beneficial mutations. Counting on this is obviously a huge gamble. With a varied population you have a broad search, which is narrowed down by NS, but, again, that’s quite a gamble. It is comparable to searching for an Easter egg on an island. One can have several small groups searching all over the island or one can take a huge gamble by forming one single large group and focus the search on a specific part of the beach. What is the better strategy? Which method “lowers the probabilistic barriers”?
Gpuccio: Probably I should have said: “Both AS and NS, cooperating with RV, can contribute to add some functional information to some existing scenario.”
Sure it can. Anything is possible if you are incredibly lucky — if selectable steps just happen to be nicely lined up.
Origenes
October 23, 2017, 04:55 PM PDT
Origenes: Yes, what you say is right. Of course, it is always RV that adds information. What I meant is that the process of RV+NS can add functional information, in the contexts that I have specified. IOWs, the role of NS is to expand a selectable step so that it is no longer confined to the initial individual, but spreads to the population. So, the probabilities of having a new "beneficial" mutation that can be added to the first are much higher.

IOWs, if we are discussing bacteria, and the final object has two beneficial AA mutations, and we assume that the first mutation is selectable, what happens is more or less:

a) we have a population of, say, 10^12 bacteria with the original allele

b) a single mutation happens by RV in one individual

c) it is beneficial, so in time x it is fixed (expanded to the whole population)

d) a second single mutation happens in the same allele, and the final form with two beneficial mutations is reached.

Now, the role of NS in that kind of scenario is important, because if no NS acted on the first mutation, we would have one individual (or at least its individual clone) where the second mutation should happen. The probabilities here are extremely low (p1*p2). But if the first mutation is expanded to 10^12 individuals, then the second mutation has an increase in probability of about 10^12, because now there are 10^12 bacteria with the first mutation which can, by RV, receive the second mutation and reach the target.

So, while it is absolutely true that it is always RV which generates the novelty, it is also true that NS, in this extreme scenario, has a fundamental role, because it provides more realistic probabilistic resources. That's what I mean when I say that NS, if and when it can act, "lowers the probabilistic barriers".

Probably I should have said: "Both AS and NS, cooperating with RV, can contribute to add some functional information to some existing scenario."

Indeed, AS too cannot generate new information, in itself: it is always RV that generates new information in all scenarios based on variation + selection.
gpuccio
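The probabilistic role of fixation described in this comment can be sketched numerically. A minimal illustration (the per-site mutation rate and population size are assumptions in the range the comment uses, not measured values):

```python
# Illustrative numbers only:
mu = 1e-9        # probability of one specific point mutation per replication
N  = 1e12        # bacterial population size

# Without selection, both specific mutations must arise in the same lineage
# before either spreads:
p_both_together = mu * mu        # ~1e-18 per replication

# With the first mutation fixed by NS, every one of the N individuals is a
# candidate for the second mutation, so per generation we expect:
expected_hits_per_generation = mu * N    # ~1000 second-step mutants
```

The contrast between 1e-18 per replication and roughly a thousand candidate second-step mutants per generation is exactly the ~10^12 gain in probabilistic resources described above.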
October 23, 2017, 03:59 PM PDT
GPuccio @285
GPuccio: Both AS and NS can add some functional information to some existing scenario.
I have a problem with this. Perhaps you can clarify what you mean by “adding information”? My line of thinking is this: An individual organism can only receive information by a mutation, never by selection (NS or AS). Selection only acts on what already exists, so in what sense does it add functional information? The idea is, if I understand it correctly, that selection (AS or NS) can add functional information to a population — not an individual organism. Selection can spread existing information throughout a population over time. Does this equate to “adding information to a population”? One thing is for sure: selection does not create any information. What selection does is make a population more homogeneous by subtracting information. Selection is in fact elimination. After each successful round of selection there is less variation in a population, and in an important sense less information.
Origenes
October 23, 2017, 11:38 AM PDT
To all interested: So, let's continue our discussion of point c): c) How is it that NS can generate functional information, even if in small quantities? Can AS do more, and how much more? We have already seen that AS can select some specific function which is naturally present in some random repertoire, and in some way, through rounds of variation and repeated AS, add some functional information to that function. I will come back to that in a moment. But can NS do the same? Yes, it can. But it is strictly confined to functions that affect reproductive success, giving some advantage. Now, those cases are exceedingly rare. While low level functions in general (like weak ATP binding, or any other weak and trivial biochemical affinity) are moderately represented in random sequences (see the 40 bits of functional information for weak ATP binding), functions that can give some definite reproductive advantage are certainly, as a rule, much more complex, and cannot easily be found in a random library. The few clear examples that we have of NS in action, indeed, are about slightly modifying some existing functional structure to get an advantage in particular contexts (see simple antibiotic resistance) or retrieving an existing function which has been intentionally "damaged" and maintained at very low levels (see the rugged landscape experiment), or something like that. For example, penicillin resistance can be acquired thorugh mutations of PBPs, membrane-bound D,D-peptidases. Penicillins are substrate analogues, and they bind to the enzyme and inactivate it because they are similar to its natural substrate, peptidoglycan. So, the accumulation of one or more mutations that alter the enzyme's binding site can confer antibiotic resistance, even if in principle they are desctructive mutations. Behe has clearly identified that mechanism as the "burning bridges to prevent the enemy from coming in" strategy. 
Of course, degrading an existing structure is extremely easy compared to building a new structure. We can say that the function "degrading an existing structure, so that antibiotics can no longer target it" is a function with very low functional information. A lot of variations can degrade the existing structure, and the target space is very big. That's exactly the reason why the almost powerless NS can be effective in these scenarios: the functional information to be found is very low. I quote here my final statement from my post #87, to the precious Corey Delvine:
I will continue not to be amazed at the power of destructive random variation: whoever has destroyed a house of cards with a very slight movement knows that concept all too well. On the other hand, whoever has built a house of cards with a single, slight movement, is certainly a remarkable individual!
So, our first conclusion is that: Both AS and NS can add some functional information to some existing scenario. However, the range of AS is certainly much wider. NS, instead, can only act on functions that confer a reproductive advantage, so its scope is extremely limited. So limited that the existing models are almost always about degrading an existing function or partially retrieving an existing damaged function. But there is another important factor that should be discussed: the protein space landscape. I will do that in my next post.
gpuccio
October 23, 2017, 10:10 AM PDT
To all interested:

c) How is it that NS can generate functional information, even if in small quantities? Can AS do more, and how much more?

Two questions, but strictly related. Some in the ID field find it difficult to believe that RV and NS can generate any functional information at all. But that is not surprising. In my old OP about functional information:

https://uncommondescent.com/intelligent-design/functional-information-defined/

I have said explicitly that we can accept any definition of function for the object we are observing. Any function will do, provided that we compute the complexity linked to it, IOWs, the minimal information that is required to implement that function. In that post, I have given the following example:
I am a conscious observer. At the beach, I see various stones. In my consciousness, I represent the desire to use a stone as a chopping tool to obtain a specific result (to chop some kind of food). And I choose one particular stone which seems to be good for that.
In this case, the function is very simple, and the specific information in the configuration of the object that we can use to implement it is very low: many stones will be good for the function. If I choose a stone, and I can perform the function with it, the stone certainly has the functional information required for that function. But the stone was not designed by anyone for that specific purpose: it is just one among many similar stones, the result of random natural forces.

In the same way, if we consider, again, the Szostak experiment, he starts with some protein sequences from a random library that "naturally" (if we accept the random library as a given resource) have the property of binding weakly to ATP (and therefore can be artificially selected by ATP binding columns). Now, we can certainly define "binding to ATP, even with a very weak binding" as a function. But, with that definition, it is a function with low levels of complex information. For example, in the random library used by Szostak, there were 4 sequences, out of 6x10^12 random sequences 80 AAs long, which showed weak binding to ATP, so that they could be selected by ATP binding columns. So, according to those data, we can conclude that the complexity of the function: "Any sequence 80 AAs long that can bind ATP, even with a very weak binding" is about 40 bits. Which is not a low complexity, but certainly not very high.

So we can conclude that functions at that level of complexity can be found in realistic random repertoires. Those functions are the result of mere RV, like the stone on the beach. The simpler the function, the lower the complexity of specific information necessary to implement it. So, let's go back to our "low-medium" complexity function: weakly binding ATP. The second question is: can further functional information be added to it by a process of selection? Let's see.
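The 40-bit figure follows directly from the numbers quoted above, using the standard functional-information measure I = -log2(target space / search space):

```python
import math

# Szostak's numbers as quoted in the post: 4 weak ATP binders found
# among 6 x 10^12 random 80-AA sequences.
target_space = 4
search_space = 6e12

# Functional information: the improbability, in bits, of hitting the
# target space by a single random draw from the search space.
functional_information = -math.log2(target_space / search_space)
print(f"{functional_information:.1f} bits")  # about 40 bits, as stated
```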
We have the space of selection procedures, S, and we have generated a binary partition in it (see my previous posts in this thread, especially #275), so that we can recognize two subsets: NS (very small) and AS (very big). In the case of our function, weak ATP binding, can those selection procedures intervene, acting on the few natural sequences exhibiting that function in our random library?

For NS, the answer is: no. There is no way that a weak capacity of binding ATP can give some reproductive advantage in any reasonable biological scenario. If someone does not agree, that someone is invited to explain why. So, NS is out of discussion in this case.

But AS can certainly intervene: after all, AS can define its function as it likes, and select and expand it as it likes. So, if we use ATP binding columns, we can certainly select those natural sequences. But can the process of AS add new functional information to the sequences? Again, the answer is yes. After all, Szostak did exactly that, transforming the original weak binding into some strong ATP binding, even with some basic folding of the molecule. We cannot deny that a function defined as: "any sequence which can bind ATP with a strong binding, for example at least of such and such" must be more complex than the original function: "any sequence that can bind ATP at all", because for the first function the target space is certainly smaller. So, we can conclude that in this case AS can add some functional information to an original random low-level function. OK, the discussion is longer than I expected. I will continue in my next post.
gpuccio
October 23, 2017, 05:17 AM PDT
GPuccio @282
GPuccio: But there is more: even artificial selection has severe limits, when acting on random variation, if new functional complex information has to be found. I will discuss that in my next post, maybe tomorrow.
That would be crucial stuff, because, if artificial selection is severely limited, then, obviously, natural selection most certainly is.
Origenes
October 22, 2017, 03:15 PM PDT
Origenes: "Admittedly, sometimes mutations give a clear reproductive advantage, as you point out, but for most mutations this doesn’t seem to be the case. And I suspect that for those unclear cases the selection coefficient can only be determined by fixation rates — and not vice versa."

I can agree. My point is not that neo-darwinists are right in evaluating selection coefficients. I have no interest in that. My point is simply that, if a reproductive advantage really exists, like in the few cases I have mentioned, then NS can work. That's why I have argued that the strongest argument against NS as a possible explanation for complex functional information is that complex functional information cannot be deconstructed into simple naturally selectable steps. From that, my challenge and all the rest. IOWs, NS can work in some very limited and simple cases, but is powerless in almost all the rest of what we observe in natural history.

I believe that we cannot deny that NS works in simple and extreme cases of microevolution: that is the only small brick in the neo-darwinian castle which is supported by evidence. To deny it would be useless, indeed deleterious. It is simply true. And it is equally true that there is absolutely no support from evidence that NS can do anything more than that: explain very limited and simple cases of microevolution. If someone on the other side had any empirical support for the idea that complex functions can be deconstructed into naturally selectable steps, they would certainly have answered my challenge by now. Moreover, well before making my challenge here, I have been stating time and again in my comments here, for years, that complex functions cannot be deconstructed into simple naturally selectable steps. Nobody has ever been able to argue that it's the other way round, or to show any evidence in that sense.
But there is more: even artificial selection has severe limits, when acting on random variation, if new functional complex information has to be found. I will discuss that in my next post, maybe tomorrow.
gpuccio
October 22, 2017, 02:25 PM PDT
GPuccio: Correct me if I am wrong, but I think that "selection coefficient" is synonymous with fitness — a notoriously troublesome term.
Lewontin: A zebra having longer leg bones that enable it to run faster than other zebras will leave more offspring only if escape from predators is really the problem to be solved, if a slightly greater speed will really decrease the chance of being taken and if longer leg bones do not interfere with some other limiting physiological process. Lions may prey chiefly on old or injured zebras likely in any case to die soon, and it is not even clear that it is speed that limits the ability of lions to catch zebras. Greater speed may cost the zebra something in feeding efficiency, and if food rather than predation is limiting, a net selective disadvantage might result from solving the wrong problem. Finally, a longer bone might break more easily, or require greater developmental resources and metabolic energy to produce and maintain, or change the efficiency of the contraction of the attached muscles.
and T. Dobzhansky wrote:
... no biologist ‘can judge reliably which ‘characters’ are useful, neutral, or harmful in a given species.’
[Quotes from this Barry Arrington article]

Admittedly, sometimes mutations give a clear reproductive advantage, as you point out, but for most mutations this doesn't seem to be the case. And I suspect that for those unclear cases the selection coefficient can only be determined by fixation rates — and not vice versa. And that seems to me to be identical with the same old tautology:

Arrington: As we all know, Darwinian theory “predicts” that the “fittest” organisms will survive and leave more offspring. And what makes an organism “fit” under the theory? Why, the fact that it survived and left offspring.
Origenes
October 22, 2017, 11:25 AM PDT
Origenes: "Can one determine the selection coefficient of a variety independent from the fixation in a population? I’m asking, because if there is no independent method to determine the selection coefficient — if the fixation in the population informs the selection coefficient — then I don’t see a valid basis for a law."

As usual, you ask a very good question. I had wondered about that too, while I was writing my last comments. OK, I am not really an expert in population genetics, but here is what I think. In general, the selection coefficient is probably derived indirectly from observations of the fixation in some particular case. For example, here is a brief statement in Wikipedia at the "selection coefficient" page:

"For example, the lactose-tolerant allele spread from very low frequencies to high frequencies in less than 9000 years since farming with an estimated selection coefficient of 0.09-0.19 for a Scandinavian population. Though this selection coefficient might seem like a very small number, over evolutionary time, the favored alleles accumulate in the population and become more and more common, potentially reaching fixation."

And there is a reference to a paper: Bersaglieri, T. et al. Genetic signatures of strong recent positive selection at the lactase gene. Am. J. Hum. Genet. 74, 1111-1120 (2004).

Moreover, I think that the Hayashi paper clearly shows that, in some scenarios, we can certainly measure directly the reproductive advantage of some mutational event (in that particular case by measuring infectivity). Measuring the reproductive advantage should be related to measuring directly the selection coefficient, although I could not say exactly how. Moreover, we have the few classic clear examples of microevolution that demonstrate how in extreme scenarios some simple mutation, if it can give an extreme reproductive advantage, is quickly fixed.
In that case, we can certainly assume that the selection coefficient is very high, and the probability of fixation is near to 1. That's the case, for example, for simple antibiotic resistance. It happens all the time, both in the wild and in the lab. So, we can know for certain that NS exists and that, in some specific cases, it works very well.

That said, all the arguments about the limits of NS in this thread remain absolutely valid. Recognizing the few and simple things that NS can really do is one more strong argument against the imaginary things that it is supposed to do, and that it can never do.
gpuccio
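The link between selection coefficient and fixation probability discussed above has a standard quantitative form in population genetics: Kimura's diffusion approximation. A minimal sketch (for a new mutation at initial frequency 1/(2N) in a diploid population; the population size is an arbitrary example value):

```python
import math

def p_fix(s, N):
    """Kimura's approximation for the fixation probability of a new
    mutation with selection coefficient s in a diploid population of
    effective size N. A neutral mutation fixes with probability 1/(2N)."""
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
print(p_fix(0.0, N))    # neutral: 1/(2N) = 5e-05
print(p_fix(0.01, N))   # mildly beneficial: close to 2s = 0.02
print(p_fix(-0.01, N))  # deleterious: vanishingly small
print(p_fix(1.0, N))    # extreme advantage (e.g. antibiotic resistance
                        # under treatment): fixation is close to certain
```

This matches both halves of the argument: most beneficial mutations are still lost to drift (a 1% advantage fixes only about 2% of the time), while an extreme reproductive advantage makes fixation nearly inevitable.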
October 22, 2017, 10:39 AM PDT
GPuccio @272
GPuccio: … variations with a positive selection coefficient have higher probability to be fixed, and the higher the coefficient, the higher the probability. It’s not chance alone. It’s a causal relationship. A probabilistic law.
Can one determine the selection coefficient of a variety independent from the fixation in a population? I’m asking, because if there is no independent method to determine the selection coefficient — if the fixation in the population informs the selection coefficient — then I don’t see a valid basis for a law.
Origenes
October 22, 2017, 09:55 AM PDT
Mung: "How does this differ from natural selection?"

It is completely different. In your example, the color of the marble is related to the probability of being drawn simply because of the relative frequency of each color in the population. If the drawing is really random, then the relative frequency will be the only factor in fixing the probability of drawing some specific color. Each individual marble has the same probability of being drawn: as there are more green marbles, the color green has a higher probability of being drawn than the other colors. OK?

Now, let's try to apply your example to mutations. Let's say that in a population you have 2997 neutral mutations, and 3 beneficial mutations, at some moment. We can ask: what is the probability of fixation of a beneficial mutation vs the probability of a neutral mutation? If the probability of each mutation to be fixed is the same, independently of the type of mutation, the general probability of having a beneficial mutation fixed will be 1:1000. IOWs, much lower than having a neutral mutation fixed. There is no selection for type of mutation. In this case, we are in the same situation as in your example with colored marbles: the probability of each color to be drawn depends only on the relative frequencies of each color, because color itself has no effect on the probability of being drawn. There is no selection for colors.

But, if each beneficial mutation has a greater probability to be fixed than each neutral mutation, then the scenario is different. Let's say that beneficial mutations have a probability of being fixed of 2:1000, which is still a low probability. That will make the probability of each neutral mutation to be fixed only slightly lower: 0.999:1000. However, now the probability of each beneficial mutation to be fixed is more than twice the probability of each neutral mutation. Here, the two variables are no longer independent.
The nature of the mutation influences the probability of each mutation to be fixed, while in the first scenario the probability was the same for each mutation, whatever the type, and the final probability of a beneficial mutation to be fixed depended only on the number of beneficial mutations available. In the second scenario, the probability of each type of mutation to be fixed always depends (obviously) on the number of available mutations of that type (that is always true), but it also depends on the type of mutation: if we refer to one single mutation, the probability of being fixed will be more than double if it is a beneficial mutation. Here the two variables are dependent, and there is a process of selection where the value of the first variable (not the number of available items) changes the probability of the second variable.
gpuccio
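The second scenario above can be checked numerically. Using the numbers from the post (per-mutation fixation probabilities of 2:1000 for beneficial and 0.999:1000 for neutral mutations), a simple Monte Carlo estimate recovers the roughly twofold ratio, i.e. the dependence between mutation type and fixation:

```python
import random

random.seed(1)

# Per-mutation fixation probabilities from the post's second scenario.
P_BENEFICIAL = 2 / 1000
P_NEUTRAL = 0.999 / 1000

TRIALS = 1_000_000
fixed_b = sum(random.random() < P_BENEFICIAL for _ in range(TRIALS))
fixed_n = sum(random.random() < P_NEUTRAL for _ in range(TRIALS))

rate_b = fixed_b / TRIALS
rate_n = fixed_n / TRIALS
print(f"per-mutation fixation rate, beneficial: {rate_b:.5f}")
print(f"per-mutation fixation rate, neutral:    {rate_n:.5f}")
# The ratio is about 2: knowing the type changes the fixation
# probability, so the two variables are dependent (selection is acting).
print(f"ratio: {rate_b / rate_n:.2f}")
```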
October 22, 2017, 08:18 AM PDT
To all interested:

b) Can NS be simulated in the lab?

I have already answered that question in the previous discussion, but I would like to emphasize some points. The answer is: yes, but the simulation must be formally appropriate, if we want to draw realistic conclusions about how NS works in the wild. To clarify that point, I have compared two different experiments:

1) Szostak's experiment about the generation of ATP binding sequences.
2) Hayashi's experiment about the rugged landscape.

Both are lab simulations. Both seem to be, more or less explicitly, about what NS can do. But my simple conclusion is that:

1. is not a simulation of NS at all. It is only an example of AS.
2. is an appropriate simulation of NS, from which we can draw some cautious but realistic conclusions about how it works.

Why is that the case? It's simple. In Szostak's work, all of the properties of NS are lacking:

1) The defined function is not reproductive success.
2) The coupling between function and selection is not implicit in the defined function: it is indirect and symbolic: the connection is established by the specific settings of the experiment.
3) The measurement of function is not implicit in the defined function: it is realized by columns of ATP-derivatized agarose beads, which are certainly not available in a natural scenario. Moreover, the only lower threshold of detectable function was the limit of sensitivity of that technique, which is indeed capable of detecting even very low levels of ATP binding. We have also discussed the important concept that ATP binding, in itself, is not a naturally selectable function. I will say something more about that in a later post.
4) The selecting procedure is completely artificial, and depends on the specific procedure used in the experiment: the sequences are isolated by the ATP binding columns, as described at 3), and then expanded by PCR or mutagenic PCR.
Therefore, the procedure used by Szostak is not a simulation of NS: a simulation is certainly different from what happens in the wild in some aspects, but must retain some basic similarities to the process it is trying to simulate, if any valid inferences are to be drawn from it about the process itself. The Szostak procedure is about AS, and has none of the features of NS. It has nothing to do with NS, and certainly it is no simulation of it.

The Hayashi experiment, instead, is very different. Let's see:

1) The only defined function is reproductive success, which is expressed here as infectivity. Indeed, for phages, the two concepts are practically the same thing.
2) and 3) and 4) The coupling between function and selection, the measurement of the function and the expansion of selected sequences are not implicit: they are realized by the system. However, what is selected and measured is infectivity, that is reproductive success. And the sequences that are expanded are those with higher infectivity.

So, what can we say? It is of course an artificial procedure (a simulation). Steps 2, 3 and 4 are simulated, and of course they do not happen like they would in the wild. However, we can assume that the general form of the process has some good resemblance to what would happen in the wild, because we are measuring, selecting and expanding infectivity, reproductive success. And that is what is supposed to happen in the wild, too. The measurement itself is a good simulation of NS, because the increase in infectivity must be detectable as the number of infected colonies, which guarantees that such a level could probably be detected by NS in the wild.

So, this is a good simulation of NS: some important aspects are simulated, but there are good reasons to think that the form of the observed process has good similarities to real NS, and that therefore good and valid inferences about NS can be drawn from the results.
The important point, again, is: they are using in their simulation the same property (reproductive success) which is the object of NS in the wild. Therefore, this is a valid simulation, and it has relevance. Of course, it is not necessarily perfect, and has potential flaws, but at least it makes sense!

The final conclusion is: NS can be simulated in the lab, but the simulation must make sense: the basic requirement is that only reproductive success can be used as a selectable property in the simulation. Anything else will be a simulation of AS, or simply an example of AS, and will have no connection with NS.
gpuccio
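The stated criterion, that a valid NS simulation selects only on reproductive success, can be sketched in code. The following Wright-Fisher-style toy is purely illustrative (the genome encoding and the fitness function, which stands in for "infectivity", are invented; this is not Hayashi's actual protocol): the only thing the procedure ever selects on is expected offspring number.

```python
import random

random.seed(42)

GENOME_LEN = 20
POP_SIZE = 200

def fitness(genome):
    # Expected offspring number (the stand-in for infectivity):
    # more 1s means more offspring, up to a factor of 2.
    return 1.0 + sum(genome) / GENOME_LEN

def next_generation(pop, mu=0.01):
    # Differential reproduction: parents are drawn in proportion to
    # fitness, then random variation (per-site mutation rate mu).
    children = random.choices(pop, weights=[fitness(g) for g in pop],
                              k=POP_SIZE)
    return [[1 - b if random.random() < mu else b for b in g]
            for b_unused, g in enumerate(children)]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
start = sum(fitness(g) for g in pop) / POP_SIZE

for _ in range(50):
    pop = next_generation(pop)

end = sum(fitness(g) for g in pop) / POP_SIZE
print(f"mean fitness: {start:.2f} -> {end:.2f}")
```

Nothing in the loop measures or rewards any property other than reproductive output itself, which is exactly the restriction the post demands of a legitimate NS simulation.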
October 22, 2017, 07:53 AM PDT
gpuccio:
In the situation you describe, the system is completely random, but the probability distribution that describes the system is not uniform. That is not a problem, uniform distribution is only one of the probability distributions that describe physical systems. Green marbles have simply a higher probability in the distribution, because there are more of them in the system. Of course, the sum of all probabilities, in discrete distributions, must be 1.
How does this differ from natural selection?
Mung
October 22, 2017, 07:01 AM PDT
To all interested:

There are indeed a few aspects of NS that probably have not been touched in detail in our discussion. So, I would like to say something about them. Here is the first:

a) The differences between Natural Selection (NS) and Artificial Selection (AS, aka Intelligent Selection, IS).

I have dedicated a whole OP to this issue:

https://uncommondescent.com/intelligent-design/natural-selection-vs-artificial-selection/

However, I would like to summarize here the main differences, and add a few comments. First of all, I paste here the final conclusions of my OP:
1) AS can define any function, and select for it. NS works only on one function: reproductive success. 2) In NS, the coupling between function and selection is direct: it’s the function itself which confers the reproductive advantage, which is the reason for the selection itself. In AS, the coupling between the defined function and the selection process is indirect and symbolic: the connection is established by the designer, by definite procedures designed by him. 3) NS has a definite threshold of measurement: it can only act if enough reproductive success is present as to ensure the fixation of the trait. AS can measure and select any desired level of the defined function. 4) In NS, the only selecting procedure is tied to the reproductive success, and is in essence differential reproduction. In AS, any intelligent procedure can be used to isolate, expand and fix the desired function.
That said, I would like to emphasize a special aspect of the issue: NS is a subset of the general set of Selections (possible forms of selection, S). Indeed, an extremely small subset. In fact, NS can be defined as a form of S where:

1) The only defined function is reproductive success in some system.
2) The coupling between function and selection is implicit in the defined function itself, and need not be implemented explicitly in the system in some indirect and symbolic way.
3) The measurement of function is implicit in the defined function, and has a somewhat fixed threshold (the function must be strong enough to give a detectable reproductive advantage, expressed as a relevant selection coefficient).
4) The selecting procedure is, again, implicit in the defined function, and need not be implemented in the system.

As anyone can see, these specific criteria are hugely restrictive. So, if we imagine the set of all possible forms of selection, NS will be an extremely tiny subset. All the rest will be forms of AS. This explains the different power of NS and AS in generating functional information. I will say more about that later.
gpuccio
October 22, 2017, 03:42 AM PDT
Mung:

In the situation you describe, the system is completely random, but the probability distribution that describes the system is not uniform. That is not a problem, uniform distribution is only one of the probability distributions that describe physical systems. Green marbles have simply a higher probability in the distribution, because there are more of them in the system. Of course, the sum of all probabilities, in discrete distributions, must be 1. But here we cannot gain any further knowledge about the probabilities of the draws from another variable. There is no causal relationship between two variables, therefore no scenario where, knowing the value of A, you gain additional information about probabilities in B.

For example, let's say that some disease, in a population, has a prevalence of 3%. That means that, if you draw by chance some sample from the population, you will have approximately 3% of the sample with the disease (and that will be more precise as the size of the sample increases). Now, let's say that we have another categorical variable which can be observed in the same system, for example sex. Let's say that 60% of the population is male, and 40% of the population is female. So, the distribution of the sex variable is not uniform in the population. Now, there are two possibilities:

a) The sex variable and the disease variable are independent. That means that not only 3% of the population in general has the disease, but also 3% of the male population and 3% of the female population. So, knowing if an individual is male or female does not change his/her probability of having the disease. There is no relationship between the two variables.

b) While the prevalence of the disease is 3% in the whole population, its prevalence is higher in males. Let's say that it is 4% in males, and 1.5% in females. With the male/female ratio we have given, that would result in exactly 3% of the disease in the general population.
But now, if we draw a sample from the male population, we have a probability of disease which is about 2.67 times higher than if we draw a sample from the female population. Therefore, if we know in advance if an individual is male or female, we have added information about his/her probability of having the disease. Why is that? Because the relationship between the categorical variable sex and the categorical variable disease is not a relationship of independence. The two variables are dependent. And, as the sex variable is probably established before the onset of the disease, the best explanation for that scenario is that being male has a causal role in favouring the disease.

Therefore, the distribution of the variable "disease" is not random in relation to the variable sex: it is influenced by it, by a causal relationship. Of course, the variable "disease" can at the same time be completely random in relation to other variables in the system: for example, race. Or race can also have a causal relationship with the disease. Both scenarios are possible. Only a correct statistical analysis of the observed facts can tell us what the best explanation is, and the level of confidence we can have in our conclusions. I hope that helps.
gpuccio
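The arithmetic of scenario b) checks out, and is a direct application of the law of total probability:

```python
# Numbers from the example above: 60% males, 40% females;
# disease prevalence 4% in males and 1.5% in females.
p_male, p_female = 0.60, 0.40
p_dis_male, p_dis_female = 0.04, 0.015

# Law of total probability: overall prevalence.
p_disease = p_male * p_dis_male + p_female * p_dis_female
print(f"overall prevalence: {p_disease:.1%}")  # 3.0%, as stated

# Knowing the sex changes the probability of disease, so the two
# variables are dependent: the risk ratio is 4% / 1.5%, about 2.67.
print(f"risk ratio male/female: {p_dis_male / p_dis_female:.2f}")
```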
October 21, 2017, 10:59 AM PDT
Very fine posts gpuccio! Thank you! We do agree on so much. So if you have an urn containing 20 red marbles, 30 blue marbles and 50 green marbles and you draw a marble "at random" from the urn, what other force or cause is operating other than "chance alone" when it comes to the color of marble that is drawn from the urn? So say you draw a green marble. The chance of a green vs a not-green is 50/50. Why not call that "chance alone"? :)
Mung
October 21, 2017, 10:24 AM PDT
Mung:

Now, if the two variables are really independent, then knowing the value of A will have no effect on the probability of B. IOWs, the levels of growth hormone will have no relationship at all with the final height. But that is probably not the case. Children with low levels of growth hormone will have low probabilities of being tall. That means that there is a statistical relationship between the two variables, what we call an effect. Which, in the right methodological context, can be interpreted as a causal relationship between levels of growth hormone and final height. So, would it be correct to say that height is due to mere chance? No. And yet, our model is probabilistic.

Let's make another example. A is the exposure of a population of individuals to influenza virus: some of them have been in contact with people with the disease, some have not. B is the probability of developing respiratory symptoms in the following 3 days. Some will, some will not. Of course, not all exposed people will develop respiratory symptoms, and some non-exposed people will develop them. So, is there a relationship of strict, absolute necessity here? No. Not in your sense of the word. The probability of developing symptoms in exposed people is not 1. We have a probabilistic model, again.

But can we say that the development of respiratory symptoms is due to chance alone? No, we can't. Exposed people are much more likely to develop symptoms. That's what I call "a necessity component" in the system. Exposure to influenza is a cause of respiratory symptoms. Not an absolute cause, and not the only cause. But it is a cause.

In physics, we had that: A causes B, with probability (practically) 1. We called that a law. Now, in biology, we have that: A causes a detectable variation in the probability distribution of B. That is a law, too. Or at least a definite causal relationship, which expresses itself as a definite, detectable effect on the probability distribution of B.
So, my point is that in that situation we cannot say that we are in front of "chance alone". We are in front of chance + detectable causes. If you don't want to call that "necessity", or, as I did, "a necessity component", I have no problems with that. We can simply call it "a causal relationship". But I don't believe that we can call it "chance alone".

Finally, does that scenario apply to what we were discussing, in particular NS? I believe it does. A, here, is the nature of the random variation, as assessed by its effects on reproduction. It can be expressed in categorical form (deleterious, neutral, beneficial), or in continuous form, as a selection coefficient. B is the probability of fixation. Again, the system is probabilistic. Not all beneficial variations are fixed, and many neutral variations are fixed by genetic drift, even in competition with NS.

So, can we say that chance alone is acting in the system? No, because we can find a statistically detectable relationship between A and B: variations with a positive selection coefficient have a higher probability of being fixed, and the higher the coefficient, the higher the probability. It's not chance alone. It's a causal relationship. A probabilistic law. That's what I meant by "a necessity component". However we decide to call it.
gpuccio
October 21, 2017, 04:34 AM PDT
Mung:

First of all, I want to say that I have for you the greatest esteem (and a huge appreciation for your sense of humour!) Moreover, I am certain that we agree on all important things. For those reasons, I think that it is important for me to clarify my views on this point, about which we seem to differ. Of course, if after all the necessary clarifications we still have different ideas, there's no problem at all. :)

I say that we "seem to differ" because, after reading carefully what you said in #270, I believe that part of our "divergence" is due only to a problem of words. But part of it could still be a true difference. Let's see.

A necessary premise to all the following discussion is that all the concepts I will discuss are always related to models of reality: our models, and in particular our scientific models. That's why the word "model" will recur often. The first problem is that we seem to use the word "necessity" in a different way. There is no surprise in that: the word itself is many-faceted, in philosophy and in science. But words are not really a problem, if we clarify what we mean with them. From what you say, I understand (please, correct me if I am wrong) that you use the word in a perfectly correct, but rather strict, sense. For the sake of clarity, I will call that sense of the word, at least in this comment: "absolute necessity". I will also try to define it explicitly, so that there may be no confusion.

Let's say that we have a model that includes two variables, A and B. Let's also remember that we are discussing empirical models, not pure logic. So, A and B are observable things. Facts. Let's say that absolute necessity means a model where, if A happens, B must happen (or not happen, which is the same). We could also say that, if A happens, the probability that B happens is 1 (or 0, in the opposite case).
In most cases, we interpret that kind of observation, if all the methodological cautions are well satisfied, as a causal relationship between A and B: we say that A is the cause of B. For the sake of this discussion, we can assume that we find that kind of scenario in many contexts of physics: that is not completely true, but true enough for our discussion.

So, the law of gravitation according to Newton states that the masses of two objects and their distance are the cause of the gravitational attraction between them, or at least explain that attraction very well, according to a precise mathematical relationship. A (the two masses and the distance) explains B (the gravitational attraction). As that relationship can be observed easily, in practically all contexts, and with great precision, we call that (supposedly) causal relationship a law. In particular, a law of necessity, because, given the right masses and distance, the probability of having a specific force of attraction is practically 1. Especially if we can control well, in our experiments or observations, all disturbing variables that could interfere. Let's call a model that describes (well enough) some physical system using only strict necessity a fully deterministic model. OK with that?

So, in the laws of physics, or at least in some of them, we can find a very good approximation of the concept of empirical strict necessity. Absolute necessity, according to my conventional name here.

Now, what happens in other sciences, like biology or medicine? In those fields, strict necessity in that sense is really a rare thing. The systems we are considering are too complex, the variables are too many, and of many of them we are not aware, or cannot measure any value. That's why we use, in almost all cases, probabilistic models to describe biological reality and biological data. That means that we cannot speak any more of absolute necessity. Does that mean that we cannot say anything about laws in biology?
Does that mean that biology is the kingdom of chance alone? Not at all.

Let's say, again, that we have A and B. But now A and B are biological data, in particular biological variables. For the reasons I have said, both those variables behave as random variables, IOWs variables that can assume different values, and whose value distribution can be described with some appropriate probability distribution.

Let's say, for example, that A is the set of values of the blood levels of growth hormone at some age (for example, at 5 years) in some population, and B is the final height of those individuals. Both A and B can be considered random variables, because both can be well described by some probability distribution, in this case probably the normal distribution.

But science does not stop there. We ask ourselves specific questions, like the following: is there any causal relationship between the values of A and the values of B? To assess that, we need to perform a statistical analysis, in the correct methodological context. From a statistical point of view, we need to assess whether the two variables are independent, or if there is some specific relationship between them. Well, I will continue in my next post.gpuccio
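The hormone/height example can be made concrete with a toy statistical analysis. A hedged sketch in Python (all data here are synthetic, invented purely to contrast dependence with independence; the numbers are not real clinical values):

```python
import random
import statistics

rng = random.Random(0)
n = 500

# Synthetic data: hormone level A partly determines final height B
# (a causal link plus noise from the many unmeasured variables),
# while C is generated independently of A.
A = [rng.gauss(10, 2) for _ in range(n)]
B = [120 + 4 * a + rng.gauss(0, 5) for a in A]
C = [rng.gauss(160, 8) for _ in range(n)]

def pearson(x, y):
    """Pearson correlation: a simple statistical measure of linear dependence."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

print(pearson(A, B))  # strongly positive: A and B are not independent
print(pearson(A, C))  # near zero: consistent with independence
```

Both B and C are "random variables" in the sense above, yet the statistical analysis cleanly separates the pair that share a causal relationship from the pair that do not — which is exactly the kind of detectable relationship the comment appeals to.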
October 21, 2017 at 03:45 AM PDT
gpuccio:
I agree only in part. There is a random component in the eventual fixation of a naturally selectable trait, but while the fixation of a neutral trait is completely random (IOWs, all neutral traits have the same probability to be fixed, and the selection coefficient is zero), the fixation of a trait that gives some reproductive advantage is in part due to a necessity component.
I'd like you to re-read what you wrote and note how you switched terms in mid stream. :) You are willing to say that fixation of a neutral trait is "completely random", but don't seem willing to say that fixation of a non-neutral trait is "partly random". But it most certainly is, at a minimum, partly random. Wouldn't you agree? Instead you insist that fixation of a non-neutral trait is due to some mysterious "necessity component". And here you have lost me.

Say you have many traits in the population with equal selective values. Do they not also all have the same probability to be fixed, just as in the neutral case? I don't mean THE SAME probability as in the neutral case. I mean equal probability among the cases that share the same selective value. Will all of them be fixed? So we can say, using your definition, that since they all have the same probability, they are "completely random". There is certainly a probability that not all of them will be fixed. Don't you agree? So what "necessity component"?

Both neutral and non-neutral spread is probabilistic. They are both stochastic. Both are random. Unless we are going to redefine stochastic. Much respect sir! Hope this makes you think. It is not Chance and Necessity as Monod claimed. It is Chance and Chance. With some having better chances than others, lol.Mung
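The point that even a clearly beneficial trait's spread stays stochastic can be illustrated by following individual mutations one at a time. A toy Wright-Fisher run in Python (population size, selection coefficient, and run count are arbitrary illustrative choices):

```python
import random

rng = random.Random(7)
N, s = 100, 0.05  # population size and a clearly positive selection coefficient

def beneficial_mutation_fixes():
    """Track one new beneficial allele until it is either fixed or lost."""
    k = 1  # start with a single copy
    while 0 < k < N:
        p = k * (1 + s) / (k * (1 + s) + (N - k))
        k = sum(rng.random() < p for _ in range(N))  # drift each generation
    return k == N

outcomes = [beneficial_mutation_fixes() for _ in range(20)]
# Despite the identical advantage, most runs are lost to drift;
# theory suggests only a minority (roughly 2s, i.e. ~10%) reach fixation.
print(outcomes.count(True), "of 20 fixed")
```

Each of the 20 mutations has the same selective value and the same fixation probability, yet individual outcomes differ run by run — chance with better chances, as the comment puts it.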
October 20, 2017 at 05:59 PM PDT
Popular Posts (Last 30 Days):
What are the limits of Natural Selection? An interesting… (2,023) [since October 5]
Violence is Inherent in Atheist Politics (2,007)
Howling Darwinists (1,473)
Selective Horrid Doubt (1,316)
Sweeping the Origin of Life Under the Rug (1,001)Dionisio
October 20, 2017 at 03:48 PM PDT
gpuccio, That's an interesting research topic: wingear which perhaps eventually evolved into the 'win gear' - the mechanism Neo-Darwinian folks use to always win the discussions. Wing ear ---> wingear ---> win gear! :)Dionisio
October 20, 2017 at 03:47 PM PDT
Dionisio: That could be an interesting conflation of two fundamental themes: the evolution of the ear and the evolution of wings! :) Convergent evolution, again?gpuccio
October 20, 2017 at 12:55 PM PDT
Dionisio: Ah, Dumbo! :)gpuccio
October 20, 2017 at 12:52 PM PDT
Origenes @258:
Obviously, an elephant has no use for a pair of wings.
Are you sure? http://es.web.img2.acsta.net/pictures/14/03/20/09/28/045404.jpg :)Dionisio
October 20, 2017 at 11:28 AM PDT
Popular Posts (Last 30 Days):
Violence is Inherent in Atheist Politics (2,005)
What are the limits of Natural Selection? An interesting… (2,001) [since October 5]
Howling Darwinists (1,472)
Selective Horrid Doubt (1,316)
Sweeping the Origin of Life Under the Rug (1,001)

The NS topic seems to attract attention.Dionisio
October 20, 2017 at 11:20 AM PDT
Origenes: "Including protein sequences with (unfitting) function and even including protein sequences with potential fitting function, but without proper regulation!"

Of course! For example, Szostak's ATP binding protein is completely useless in a biological context, indeed deleterious. What is the utility of a protein that just binds ATP? It can only subtract precious ATP from the system! The whole idea of ATP is that it is a repository of chemical energy, to be spent for the various biochemical necessities of a cell. So, ATP synthase builds ATP from proton gradients (which, in turn, derive from cell metabolism), and other proteins use ATP to get the energy for other things.

Let's take, for example, this brief description of dynein from Wikipedia:
Cytoplasmic dynein, which has a molecular mass of about 1.5 megadaltons (MDa), is a dimer of dimers, containing approximately twelve polypeptide subunits: two identical "heavy chains", 520 kDa in mass, which contain the ATPase activity and are thus responsible for generating movement along the microtubule; two 74 kDa intermediate chains which are believed to anchor the dynein to its cargo; two 53–59 kDa light intermediate chains; and several light chains.
In general, we have to have four different functions to do something useful with ATP:

a) ATP binding
b) ATPase activity, which releases chemical energy
c) The rest of the molecule, which implements the real function
d) An efficient coupling between b) and c), IOWs an efficient way to transfer the energy to the final function.

So, in dynein, ATPase activity is coupled to the rest of the protein, which uses the released energy to move along the microtubule, transporting the appropriate cargo to the appropriate target point. Of course, a designer can understand that, to design the whole complex, he needs first of all ATP binding. A designer can understand that. NS cannot.

"Of course you are right. Why keep beating a dead horse?"

That's exactly my idea! :)gpuccio
October 20, 2017 at 08:00 AM PDT
GPuccio @259
naturally selectable steps ... NS requires a change that gives a real reproductive advantage versus the previous allele, to have a chance to act. Nothing else will do.
Including protein sequences with (unfitting) function and even including protein sequences with potential fitting function, but without proper regulation!
But again, why focus on those aspects, when the role of NS for complex functional information is, just from the beginning, completely inexistent?
Of course you are right. Why keep beating a dead horse?Origenes
October 20, 2017 at 07:43 AM PDT