Uncommon Descent Serving The Intelligent Design Community

What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world

Categories
Intelligent Design

Coming from a long and detailed discussion about the limits of Natural Selection, here:

I realized that some attention could be given to the other great protagonist of the neo-darwinian algorithm: Random Variation (RV).

For the sake of clarity, as usual, I will try to give explicit definitions in advance.

Let’s call an RV event any random event that, in the course of Natural History, acts on an existing organism at the genetic level, so that the genome of that individual organism changes in its descendants.

That’s more or less the same as the neo-darwinian concept of descent with modifications.

A few important clarifications:

a) I use the term variation instead of mutation because I want to include in the definition all possible kinds of variation, not only single point mutations.

b) Random here means essentially that the mechanisms that cause the variation are in no way related to function, whatever it is: IOWs, the function that may arise or not arise as a result of the variation is in no way related to the mechanism that effects the change, but only to the specific configuration which arises randomly from that mechanism.

In all the present discussion we will not consider how NS can change the RV scenario: I have discussed that in great detail in the previous thread quoted above, and those who are interested in that aspect can refer to it. In brief, I will remind readers here that NS does not act on the sequences themselves (IOWs, the functional information) but, if and when and to the measure that it can act at all, it acts by modifying the probabilistic resources.

So, an important concept is that:

All new functional information that may arise by the neo-darwinian mechanism is the result of RV.

Examining the Summers paper about chloroquine resistance:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4035986/

I have argued in the old thread that the whole process of generation of the resistance in natural strains can be divided into two steps:

a) The appearance of an initial new state which confers the initial resistance. In our example, that corresponds to the appearance of one of two possible resistant states, both of which require two neutral mutations. IOWs, this initial step is the result of mere RV, and NS has no role in that. Of course, the initial resistant state, once reached, can be selected. We have also seen that the initial state of two mutations is probably the critical step in the whole process, in terms of time required.

b) From that point on, a few individual steps of one single mutation, each of them conferring greater resistance, can optimize the function rather easily.

Now, point a) is exactly what we are discussing in this new thread.

So, what are the realistic powers of mere RV in the biological world, in terms of functional information? What can it really achieve?

Another way to ask the same question is: how functionally complex can the initial state be that implements a new function for the first time, arising from mere RV?

And now, let’s define the probabilistic resources.

Let’s call probabilistic resources, in a system where random events take place, the total number of different states that can be reached by RV events in a certain window of time.

In a system where two dice are tossed each minute, and the numbers deriving from each toss are the states we are interested in, the probabilistic resources of the system in one day amount to 1440 states.

The greater the probabilistic resources, the easier it is to find some specific state, which has some specific probability of being found in one random attempt.
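As a sketch of the bookkeeping, the resources of this toy dice system can be computed directly (the figures are the ones from the example above):

```python
import math

# Toy system from the text: the two dice are tossed once per minute,
# and each toss is one new state reached.
tosses_per_day = 24 * 60               # 1440 states in one day

# Probabilistic resources expressed in bits (log2 of the state count).
bits = math.log2(tosses_per_day)
print(tosses_per_day, round(bits, 1))  # 1440 states, ~10.5 bits
```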

So, what are the states generated by RV? They are, very simply, all different genomes that arise in any individual of any species by RV events, or if you prefer by descent with modification.

Please note that we are referring here to heritable variation only; we are not interested in somatic genetic variation, which is not transmitted to descendants.

So, what are the probabilistic resources in our biological world? How can they be estimated?

I will use here a top-down method. So, I will not rely on empirical data like those from Summers or Behe or others, but only on what is known about the biological world and natural history.

The biological probabilistic resources derive from reproduction: each reproduction event is a new state reached, if its genetic information is different from the previous state. So, the total number of states reached in a system in a certain window of time is simply the total number of reproduction events where the genetic information changes. IOWs, where some RV event takes place.

Those resources depend essentially on three main components:

  1. The population size
  2. The number of reproductions of each individual (the reproduction rate) in a certain time
  3. The time window

So, I have tried to compute the total probabilistic resources (total number of different states) for some different biological populations, in different time windows, appropriate for the specific population (IOWs, for each population, from the approximate time of its appearance up to now). As usual, I have expressed the final results in bits (log2 of the total number).

Here are the results:

 

Population   Size         Repro. rate/day   Mutation rate   Time window         Time (days)   States      Bits    + 5 sigma   Specific AAs
Bacteria     5.00E+30     24                0.003           4 billion years     1.46E+12      5.26E+41    138.6   160.3       37
Fungi        1.00E+27     24                0.003           2 billion years     7.3E+11       5.26E+37    125.3   147.0       34
Insects      1.00E+19     0.2               0.06            500 million years   1.825E+11     2.19E+28    94.1    115.8       27
Fish         4E+12        0.1               5               400 million years   1.46E+11      2.92E+23    78.0    99.7        23
Hominidae    5.00E+09     0.000136986       100             15 million years    5.48E+09      3.75E+17    58.4    80.1        19

The mutation rate is expressed as mutations per genome per reproduction.
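A minimal sketch of how each row's state count and bit value follow from the four inputs, using the bacteria row's own figures (the helper name here is mine, introduced only for illustration):

```python
import math

def reached_states(pop_size, repro_per_day, mut_rate, days):
    # States reached = individuals x reproductions per day x mutations
    # per genome per reproduction x days in the time window.
    return pop_size * repro_per_day * mut_rate * days

# Bacteria row: 5e30 individuals, 24 reproductions/day, 0.003
# mutations per genome per reproduction, 4 billion years.
days = 4e9 * 365                 # ~1.46E+12 days
n = reached_states(5e30, 24, 0.003, days)
print(f"{n:.2e}")                # ~5.26e+41 states
print(f"{math.log2(n):.1f}")     # ~138.6 bits
```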

This is only a tentative estimate, and of course a gross one. I have tried to get the best reasonable values from the sources I could find, but of course many values could be somewhat different, and sometimes it was really difficult to find any good reference, and I just had to make an educated guess. Of course, I will be happy to acknowledge any suggestion or correction based on good sources.

But, even if we consider all those uncertainties, I would say that these numbers do tell us something very interesting.

First of all, the highest probabilistic resources are found in bacteria, as expected: this is due mainly to the huge population size and high reproduction rate. The numbers for fungi are almost comparable, although significantly lower.

So, the first important conclusion is that, in these two basic classes of organisms, the probabilistic resources, with this hugely optimistic estimate, are still under 140 bits.

The penultimate column just adds 21.7 bits (the margin corresponding to the 5 sigma standard used for fundamental inferences in physics). What does that mean?

It means, for example, that any sequence with 160 bits of functional information is, by far, beyond any reasonable probability of being the result of RV in the system of all bacteria in 4 billion years of natural history, even with the most optimistic assumptions.

The last column gives the number of specific AAs that correspond to the bit value in the penultimate column (based on a maximum information value of 4.32 bits per AA).

For bacteria, that corresponds to 37 specific AAs.

IOWs, a sequence of 37 specific AAs is already well beyond the probabilistic resources of the whole population of bacteria in the whole world reproducing for 4 billion years!

For fungi, 147 bits and 34 AAs are the upper limit.

Of course, values become lower for the other classes. Insects still perform reasonably well, with 116 bits and 27 AAs. Fish and Hominidae have even lower values.
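The last two columns can be sketched as a simple conversion step: add the 21.7-bit 5 sigma margin, then divide by log2(20) ≈ 4.32 bits per fully specified amino acid (the function name is mine, not from the original):

```python
import math

FIVE_SIGMA_BITS = 21.7            # safety margin used in the table
BITS_PER_AA = math.log2(20)       # ~4.32 bits per specific AA site

def aa_limit(resource_bits):
    # Whole amino acid sites corresponding to resources + margin.
    return round((resource_bits + FIVE_SIGMA_BITS) / BITS_PER_AA)

print(aa_limit(138.6))  # bacteria row -> 37
print(aa_limit(58.4))   # Hominidae row -> 19
```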

We can notice that Hominidae gain something in the mutation rate, which, as is known, is higher, and which I have considered here to be 100 new mutations per genome per reproduction (a reasonable estimate for Homo sapiens). Moreover, I have considered here a very generous population of 5 billion individuals, again taking a recent value for Homo sapiens. These are not realistic choices, but again generous ones, just to make my darwinist friends happy.

Another consideration: I have given here total populations (or at least generous estimates for them), and not effective population sizes. Again, the idea is to give the highest chances to the neo-darwinian algorithm.

So, these are very simple numbers, and they should give an idea of what I would call the upper threshold of what mere RV can do, estimated by a top down reasoning, and with extremely generous assumptions.

Another important conclusion is the following:

All the components of the probabilistic resources have a linear relationship with the total number of states.

That is true for population size, for reproduction rate, mutation rate and time.

For example, everyone can see that the different time windows, ranging from 4 billion years to 15 million years, which seems a very big difference, correspond to only 3 orders of magnitude in the total number of states. Indeed, the highest variations are probably in population size.

However, the search space of a sequence grows exponentially with its complexity in terms of necessary AA sites: a range from 19 to 37 AAs (only 18 AAs) corresponds to a range of 24 orders of magnitude in the probabilistic resources.
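This asymmetry can be checked numerically; a small sketch under the figures above:

```python
import math

# Linear side: doubling any single component (population, rate, or
# time) adds only one bit to the probabilistic resources.
print(math.log2(2))               # 1.0

# Exponential side: going from 19 to 37 specific AA sites multiplies
# the size of the search space by 20^18.
ratio = 20 ** (37 - 19)
print(f"{ratio:.1e}")             # ~2.6e+23, roughly 23-24 orders
```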

Can I remind here briefly, without any further comments, that in my OP here:

I have analyzed the informational jump in human conserved information at the appearance of vertebrates? One important result is that 10% of all human proteins (about 2000) have an information jump from pre-vertebrates to vertebrates of at least (about) 500 bits (corresponding to about 116 AAs)!

Now, some important final considerations:

  1. I am making no special inferences here, and I am drawing no special conclusions. I don’t think it is really necessary. The numbers speak for themselves.
  2. I will be happy to receive any suggestion, correction, or comment, especially if based on facts or reasonable arguments. The discussion is open.
  3. Again, this is about mere RV. This is about the neutral case. NS has nothing to do with these numbers.
  4. For those interested in a discussion about the possible role of NS, I can suggest the thread linked at the beginning of this OP.
  5. I will be happy to answer any question about NS too, of course, but I would be even more happy if someone tried to answer my two questions challenge, given at post #103 of the other thread, and that nobody has answered yet. I paste it here for the convenience of all:

Will anyone on the other side answer the following two simple questions?

1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?

2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?

Comments
Anaxagoras, Your post #1 made AL (not Laugh Out Loud, but Audible Laughter) reading for me, from start to finish, simply because those things should not even need to be articulated. Though I'm a little surprised that GP of that ilk (though he might be a hospital doctor, rather than a GP) had to explain to you that he was on your side, and explain his rationale. But then again, I get impatient when you boffins on here patiently explain the most basic and most commonsensical truths to our materialist friends. There is a certain sense in which this very forum is a surreal venture. How do people with a tertiary education in scientific fields manage to harbour crazy assumptions and conjectures that need to be eliminated from consideration? But to revert to the humour, these perfectly measured and courteous analyses of their madness remind me of the devastating analyses of the arguments of the rigorists and legalistic pedants by Papa Francesco, and indeed other cardinals of similar acuity; the humour residing in large part in their being no more than accurate statements of the truth, while they sound like mean satires, authorship of which Evelyn Waugh or Joseph Heller might have coveted. Of course, the fact that they read as if deliberately satirical might, on occasion, not be unwelcome to authors such as Pope Francis and prelate cohorts.Axel
November 1, 2017 09:19 AM PDT
daveS: I have made simple simulations of Markov chains, both including synonymous events (no variation) and excluding them. They always seem to perform worse than the mere probability in a random search. But I could have made some errors, I am not completely sure.gpuccio
November 1, 2017 09:15 AM PDT
Anaxagoras: Again, I fully agree with you. Two important points, on which we seem to agree: a) In empirical sciences, a good theory is not a deduction, but an inference to the best explanation. I am surprised that Sober "admits that the outcomes of evolution have “extremely small probabilities” but (he says) “they are not impossible"". That is not a scientific argument at all. There are a lot of things "not impossible" that have no relevance in science. b) ID theory is much more than simply rejecting the neo-darwinian theory. It is about positive reasons for considering complex functional information as a reliable and safe marker of design. I have discussed those positive aspects of ID many times, in my OPs and in my discussions on others' threads. But ID theory, indeed, does also reject the neo-darwinian theory of RV + NS: the RV part by probabilistic reasoning (as is appropriate, the RV part being a probabilistic argument), and the NS part by conceptual, methodological and empirical considerations. I have tried to sum up, in my 2 OPs, the best arguments for both aspects of that rejection. I am sorry that I had to use two separate OPs, which "according to Sober" should be, for some reason that I certainly miss, "a tricky thing". Maybe next time I will write a single long post! :) However, now the discussion is open on both aspects.gpuccio
November 1, 2017 09:07 AM PDT
gpuccio,
I think they should be comparable. The main difference is that in a random walk you can easily reach a nearby point. For example, to compare with your dice example, if you start from a state which is 199 sixes and one five, it’s rather likely to reach 200 sixes, while in the tossing dice scenario each result is independent from the previous one. In a Markov chain, each result depends on the previous state, but not on the history which led to the previous state. In a dice-tossing scenario, each result is completely independent. However, when you start from any unrelated state, a random walk should be more or less equivalent to a random search. Indeed, from a few simulations I have done, it seems to perform worse than a random search, but I am not really sure. Maybe someone who knows the mathematical theory better could give us some confirmation of that.
Interesting. My reasoning was that, referring to my toy example, using "realistic" parameters, most of the dice would remain fixed from one stage to the next---that is, assuming "mutations" are fairly rare. In that scenario, it's not absurdly unlikely that in two consecutive transitions, the first die switches from 1 to 2, and then back to 1 again, with all other dice fixed (for example). Hence repetitions of states might be a little more likely. I do think you are right in that whatever happens, the two scenarios would be comparable and perhaps very close.daveS
November 1, 2017 09:06 AM PDT
Mung: "Does Wagner calculate the probabilities of his library?" I don't know, but his library seems to exist only in his head, certainly not out there! I am all for Borges' truly random "Library of Babel"!gpuccio
November 1, 2017 08:54 AM PDT
Probabilistic arguments have been around for decades. But they don´t seem to have impressed evolutionists too much. Elliott Sober addresses the likelihood formulation of the design argument in his book “Evolution and Evidence”. He admits that the outcomes of evolution have “extremely small probabilities” but (he says) “they are not impossible”. In principle, “monkeys pounding at random on typewriters CAN produce the works of Shakespeare, and a hurricane whirling through a junkyard CAN produce a functioning airplane”. As a corollary, he insists, evolution CAN produce adaptive features that are irreducibly complex. Gpuccio has presented his arguments in two separate posts and that is a tricky thing, according to Sober. The argument on RV alone can beat Epicureanism, that is, a purely random process, but not darwinism, because darwinism implies a combination of RV plus NS. Basically he summarizes his criticism concluding that ID arguments don´t hold because they are not testable against the evolutionary hypothesis that RV plus NS did the work. A key point in Sober´s discourse is that he misrepresents what ID arguments are from the beginning of his explanation. That is, he starts by assuming that ID is a probabilistic argument on the grounds of an unimportant sentence he quotes from Paley´s Natural Theology. He purports that this sentence implies that Paley, “in principle”, admits that random processes could have done the work, and therefore his argument is not a deductive formulation but a probabilistic argument. True, Paley´s argument, like all ID arguments presented, starting from the philosophers of antiquity to the more modern proponents of the ID movement, shouldn´t be understood as an apodictic conclusion of a deductive syllogism. But they are not probabilistic; they are “the inference to the best explanation” (that includes to the only possible explanation) according to Peirce´s abductive logical method.
And the fact that they are not presented as apodictic conclusions doesn´t imply that the contrary is accepted in principle as a possible reasonable explanation.Anaxagoras
November 1, 2017 08:54 AM PDT
daveS: "And I take it that under a Markov chain model, the probabilistic resources would actually decrease compared to my assumption of independent trials." I think they should be comparable. The main difference is that in a random walk you can easily reach a nearby point. For example, to compare with your dice example, if you start from a state which is 199 sixes and one five, it's rather likely to reach 200 sixes, while in the tossing dice scenario each result is independent from the previous one. In a Markov chain, each result depends on the previous state, but not on the history which led to the previous state. In a dice-tossing scenario, each result is completely independent. However, when you start from any unrelated state, a random walk should be more or less equivalent to a random search. Indeed, from a few simulations I have done, it seems to perform worse than a random search, but I am not really sure. Maybe someone who knows the mathematical theory better could give us some confirmation of that.gpuccio
November 1, 2017 08:51 AM PDT
Sadly, Wagner goes into full-fledged fantasy-mode about a library housed in a 5000 dimensional hypercube, in order to facilitate the search ….
Sounds to me like the library, the hypercube and the paths to get from one shelf to another must have been designed. :) Does Wagner calculate the probabilities of his library?Mung
November 1, 2017 07:47 AM PDT
forexhr @19: Interesting comment. Thanks. PS. "For e.g. since the DNA of first self replicating organism didn’t contain DNA sequences(nucleotide arrangements) for visual function, than..." then? "The radio between sequences for non-visual and visual function,..." ratio?Dionisio
November 1, 2017 07:16 AM PDT
Here's an off topic question: At the bottom of this OP it reads: "(Visited 265 times, 278 visits today)" What do those number stand for? ThanksDionisio
November 1, 2017 06:55 AM PDT
I said it before and this looks like a good place to say it again: Why is it that only in the field of biology are we to accept that random hits to an existing functioning system do not degrade it? Heck, not only do they not degrade it, they made it! And all without evidentiary support? Really?ET
November 1, 2017 06:37 AM PDT
Thanks, gpuccio. And I take it that under a Markov chain model, the probabilistic resources would actually decrease compared to my assumption of independent trials. I think that's correct, anyway. In any case, I thought it would be interesting to use a physical illustration for the 200-dice example. Suppose we wanted to visualize the fraction of the state space explored by rolling the 200 dice for 4 billion years in terms of pixels on a large high-definition screen (say with 200 pixels per inch). Well, it seems that in order to construct this HD display so that even a single pixel corresponded to the searched part of the space, the display would have to be vastly larger than the known universe!daveS
November 1, 2017 06:16 AM PDT
forexhr: By the way, the Dryden paper you quote is simply ridiculous. I would really like to see the authors produce a working version of ATP synthase with only two amino acids!gpuccio
November 1, 2017 06:12 AM PDT
daveS: Yes, it is right. And repeated states can indeed be factored, using for example the binomial distribution to get the probability of having at least one success of probability p in n attempts. The only formal difference between your example and the biological scenario is that the biological setting is better modeled as a random walk, usually a Markov chain. However, the general concepts remain comparable. I have used the concept of total number of states that can be reached because it is simple, intuitive and effective.gpuccio
November 1, 2017 06:04 AM PDT
forexhr: You have provided a very good and clear summary of the main arguments I have given in my last two OPs! Thank you. :)gpuccio
November 1, 2017 05:45 AM PDT
gpuccio, Your post is certainly very clear, but I am one of those who needs a "for dummies" presentation. Could I try to restate some of your points in terms of a toy example to make sure I have it right? We could consider the system where 200 fair, distinguishable, 6-sided dice are repeatedly rolled, let's say once per second. Then the probabilistic resources of this system over a period of 4 billion years would be the number of distinct states we would expect to be reached in that time period. I think the chance of getting a repeated state is small, so it's likely about 1.26 × 10^17 states would be reached. Hence the probabilistic resources in this example, would be a bit less than 57 bits. This is far less than the log_2 of the total possible number of states, which is about 517 bits. I take it the conclusion is that these probabilistic resources have had time to "search" only a miniscule portion of the state space (in fact, about 2.8 × 10^−139 of it). Does that sound about right?daveS
November 1, 2017 05:35 AM PDT
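daveS's 200-dice arithmetic above can be verified with a short script (a sketch; daveS's exact year-length assumption is not stated, so the last figure lands near, not exactly at, his 2.8 × 10^-139):

```python
import math

# 200 six-sided dice rolled once per second for 4 billion years.
rolls = 4e9 * 365.25 * 24 * 3600   # ~1.26e17 states reached
space_bits = 200 * math.log2(6)    # ~517 bits for the full space
resource_bits = math.log2(rolls)   # ~56.8 bits of resources
fraction = rolls / 6**200          # tiny searched fraction, ~3e-139

print(f"{rolls:.2e} {space_bits:.0f} {resource_bits:.1f} {fraction:.1e}")
```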
Biological structures, just like everything in nature, are made of large number of various different kinds of particles. In evolutionary sense, that means that in order for a population to adapt to a particular environment, the mutation process must extract adaptive arrangement of particles(nucleotides in this case) from a pool of all possible arrangements(adaptive and non-adaptive). For e.g. since the DNA of first self replicating organism didn't contain DNA sequences(nucleotide arrangements) for visual function, than the only way for such sequences to appear is by extracting them from a pool of all possible DNA sequences of some duplicated gene. But here's where the problem comes in for the theory of evolution(ToE). The radio between sequences for non-visual and visual function, is so large that the total numbers of mutations in the history of life - 10^43(1), fails by many orders of magnitude to succeed in this extraction process. With the absurd assumption that only one average eukaryotic gene codes for some simple proto visual function, and that 10^500 different sequences from its pool of 10^810(2) possible sequences are functional proto visual sequences, it follows that it is mathematically impossible for evolution to extract proto visual function because the number of sequences that won’t code for this function is 267 orders of magnitude greater than the total numbers of mutations in the history of life. In other words, due to the lack of mutational resources it is impossible for a proto visual adaptation to enter the gene pool of a population and increase its frequency through natural selection. This mathematical problem is completely ignored by the ToE. Whenever I present this argument, the majority of evolutionists would immediately make an appeal to natural selection(NS). For example, we all know that humans are unable to breathe under water. We also know that there's tons of mutations in the human gene pool in every generation. 
Thus, large amounts of mutations have been spent in the last 5 million years and no trait for breathing underwater has entered the human gene pool. It can be shown mathematically that this will not happen even if we spent all the mutations that occurred in the history of life. When such calculations are presented, an average evolutionist would completely ignore them and instantly make an appeal to NS. But obviously, NS cannot change the fact that no trait for breathing under water exists in the human gene pool. NS can act only when such a trait has entered the gene pool, by spreading this trait in the population. So, NS is completely unrelated to the question of mutational resources required to find adaptive traits, but evolutionists have repeatedly shown that they cannot comprehend the difference between these two instances. (1) http://rsif.royalsocietypublishing.org/content/5/25/953.full (2) The length of an average eukaryotic gene is 1346 bp. A gene consists of four different bases. Any base can assume one of four values (ATCG). A sequence of L bases can therefore assume one out of 4^L values, which gives 4^1346 or 10^810 potential sequences.forexhr
November 1, 2017 05:31 AM PDT
Dionisio: "The biological probabilstic resources derive from reproduction" Yes. That is an important concept. In a sense, the whole RV+NS algorithm can be considered as a side effect of the functional information already present in the organisms, which allows their reproduction. So, it is functional information modifying itself through its existing information. That's why the scope of RV+NS is so limited. In any case, it cannot really or significantly go beyond the limits implicit in the already existing information, and the computational powers implicit in that information.gpuccio
November 1, 2017 05:10 AM PDT
Anaxagoras A fair point. However, everything can be viewed probabilistically on condition that we should return from the world of mathematics to reality when we formulate the outcomes. E.g. for all intents and purposes, a probability of 10^-300 is a practical zero, which means practical impossibility.EugeneS
November 1, 2017 05:05 AM PDT
Never mind. You may disregard #15. There's another game in town: the third way!!! :)Dionisio
November 1, 2017 05:04 AM PDT
"What else could he do? Admitting design?" Is there another game in town? :)Dionisio
November 1, 2017 05:03 AM PDT
Origenes: "That sounded profoundly reasonable, right?" Yes, it is. "Sadly, Wagner goes into full-fledged fantasy-mode about a library housed in a 5000 dimensional hypercube, in order to facilitate the search …." What else could he do? Admitting design? :) Frankly, I prefer that he remains on the other side, he and his 5000 dimensional hypercubes...gpuccio
November 1, 2017 04:31 AM PDT
The biological probabilstic resources derive from reproduction The biological probabilistic resources derive from reproductionDionisio
November 1, 2017 04:21 AM PDT
Andreas Wagner:
The first vertebrates to use crystallins in lenses did so more than five hundred million years ago, and the opsins that enable the falcon’s vision are some seven hundred million years old. They originated some three billion years after life first appeared on earth. That sounds like a helpfully long amount of time to come up with these molecular innovations. But each one of those opsin and crystallin proteins is a chain of hundreds of amino acids, highly specific sequences of molecules written in an alphabet of twenty amino acid letters. If only one such sequence could sense light or help form a transparent cameralike lens, how many different hundred-amino-acid-long protein strings would we have to sift through? The first amino acid of such a string could be any one of the twenty kinds of amino acids, and the same holds for the second amino acid. Because 20 × 20 = 400, there are 400 possible strings of two amino acids. Consider also the third amino acid, and you have arrived at 20 × 20 × 20, or 8,000, possibilities. At four amino acids we already have 160,000 possibilities. For a protein with a hundred amino acids (crystallins and opsins are much longer), the numbers multiply to a 1 with more than 130 trailing zeroes, or more than 10^130 possible amino acid strings. To get a sense of this number’s magnitude, consider that most atoms in the universe are hydrogen atoms, and physicists have estimated the number of these atoms as 10^90, or 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000,000,000. This is “only” a 1 with 90 zeroes.
The number of potential proteins is not merely astronomical, it is hyperastronomical, much greater than the number of hydrogen atoms in the universe.11 To find a specific sequence like that is not just less likely than winning the jackpot in the lottery, it is less likely than winning a jackpot every year since the Big Bang.12 In fact, it’s countless billions of times less likely. If a trillion different organisms had tried an amino acid string every second since life began, they might have tried a tiny fraction of the 10^130 potential ones. They would never have found the one opsin string. There are a lot of different ways to arrange molecules. And not nearly enough time. ... The power of natural selection is beyond dispute, but this power has limits. Natural selection can preserve innovations, but it cannot create them. And calling the change that creates them random is just another way of admitting our ignorance about it.
That sounded profoundly reasonable, right? Sadly, Wagner goes into full-fledged fantasy-mode about a library housed in a 5000 dimensional hypercube, in order to facilitate the search ....Origenes
November 1, 2017 04:15 AM PDT
Dionisio: If they like it hot, perhaps they should try the challenge. It's the spiciest part! :)gpuccio
November 1, 2017 02:49 AM PDT
The more I read this two-volume RV+NS dissertation by gpuccio, the more I think he understands evolution better than many Neo-Darwinian folks. At least he explains it more precisely and clearly. :)Dionisio
November 1, 2017 02:44 AM PDT
gpuccio @7:
Unfortunately, I am very bad at cooking!
The Neo-Darwinian folks apparently agree with that, because they've found the whole RV+NS enchilada you have cooked very disgusting. :) But sometimes taste is a very relative thing. Some folks here, including myself, have found both OP dishes very well prepared and tasteful. It's very difficult to please everybody. :)Dionisio
November 1, 2017 02:36 AM PDT
Anaxagoras @1: As a continuation of my comment @4, note the sentences at the beginning of the OP:
I realized that some attention could be given to the other great protagonist of the neo-darwinian algorithm: Random Variation (RV). For the sake of clarity, as usual, I will try to give explicit definitions in advance. Let’s call RV event any random event that, in the course of Natural History, acts on an existing organism at the genetic level, so that the genome of that individual organism changes in its descendants. That’s more or less the same as the neo-darwinian concept of descent with modifications.
Note that gpuccio is carefully giving the Neo-Darwinian folks their own medicine, while playing in their own terrain under their own rules, so they see that their concepts really don't work as they loudly proclaim everywhere.Dionisio
November 1, 2017 02:22 AM PDT
Dionisio: "Now we got the whole Neo-Darwinian RV+NS enchilada served on the table!" Ah, I love mexican cuisine! Unfortunately, I am very bad at cooking! :)gpuccio
November 1, 2017 01:58 AM PDT
Dionisio: "This is unfair… you open this new discussion thread while your previous discussion thread –less than a month old– is at the top of the hit parade" Success is a drug! :)gpuccio
November 1, 2017 01:55 AM PDT