Uncommon Descent Serving The Intelligent Design Community

No evidence that there is enough time for evolution


No evidence that there is enough time for evolution[*]

Lee M Spetner

Redoxia Israel, Ltd. 27 Hakablan St., Jerusalem, Israel

Abstract: A recent attempt was made to resolve the heretofore unaddressed issue of the estimated time for evolution, concluding that there was plenty of time. This would have been a very significant result had it been correct. It turns out, however, that the assumptions made in formulating the model of evolution were faulty and the conclusion of that attempt is therefore unsubstantiated.

[This post will remain at the top of the page until 00 hours Tuesday May 31. For reader convenience, other coverage continues below. – UD News]

 

The standard neo-Darwinian theory accounts for evolution as the result of long sequences of random mutations each filtered by natural selection. The random nature of this basic mechanism makes evolutionary events random. The theory must therefore be tested by estimating the probabilities of those events. This probability calculation has, however, not yet been adequately addressed.

Wilf & Ewens [2010] (W&E) recently attempted to address this issue, but their attempt was unsuccessful. Their model of the evolutionary process omitted important features of evolution, invalidating their conclusions. They considered a genome consisting of L loci (genes), and an evolutionary process in which each allele at these loci would eventually mutate so that the final genome would be of a more “superior” or “advanced” type. They let 1/K be the fraction of potential alleles at each gene locus that would contribute to the “superior” genome. They modeled the evolutionary process as a random guessing of the letters of a word. The word has L letters in an alphabet of K letters. In each round of guessing, each letter can be changed and could be converted to a “superior” letter with probability 1/K.

At the outset they stated the two goals of their study, neither of which they achieved. Their first goal was “to indicate why an evolutionary model often used to ‘discredit’ Darwin, leading to the ‘not enough time’ claim, is inappropriate.” Their second goal was “to find the mathematical properties of a more appropriate model.” They described what they called the “inappropriate model” as follows:

“The paradigm used in the incorrect argument is often formalized as follows: Suppose that we are trying to find a specific unknown word of L letters, each of the letters having been chosen from an alphabet of K letters. We want to find the word by means of a sequence of rounds of guessing letters. A single round consists in guessing all of the letters of the word by choosing, for each letter, a randomly chosen letter from the alphabet. If the correct word is not found, a new sequence is guessed, and the procedure is continued until the correct sequence is found. Under this paradigm the mean number of rounds of guessing until the correct sequence is found is indeed K^L.”
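To make the K^L figure concrete, here is a minimal simulation sketch of this all-or-nothing guessing game (the code, the function name, and the small illustrative values of K and L are mine, not W&E's). Every round redraws all L letters, so the per-round success probability is K^-L and the mean number of rounds is K^L.

```python
import random

def rounds_until_whole_word(K, L):
    """Guess all L letters afresh each round; stop only when the entire word is right.
    The per-round success probability is K**(-L), so the mean number of rounds is K**L."""
    target = [random.randrange(K) for _ in range(L)]
    rounds = 0
    while True:
        rounds += 1
        if [random.randrange(K) for _ in range(L)] == target:
            return rounds

# Illustrative values only (kept tiny so the run finishes quickly); expected mean is 4**3 = 64.
K, L, trials = 4, 3, 20_000
mean = sum(rounds_until_whole_word(K, L) for _ in range(trials)) / trials
print(f"simulated mean rounds: {mean:.1f}, predicted K**L: {K ** L}")
```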

They gave no reference for such a model and, to my knowledge, no responsible person has ever proposed such a model of the evolutionary process to “discredit” Darwin. Such a model has indeed been suggested by many, not for the evolutionary process but for abiogenesis (e.g., [Hoyle & Wickramasinghe 1981]), where it is appropriate. Their first goal was therefore not achieved.

They then described their own model, which they called “a more appropriate model.” On the basis of their model, they concluded that the mean time for evolution increases as K log L, in contrast to K^L of the “inappropriate” model. They called the first model “serial” and said that their “more correct” model of evolution was “parallel”. Their characterization of “serial” and “parallel” for the above two models is mistaken. Evolution is a serial process, not a parallel one, and their model of the first, or “inappropriate”, process is better characterized as “simultaneous” than “serial” because the choosing of the sequence (either nucleotides or amino acids) is simultaneous. What they called their “more appropriate” model is the following:

“After guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained. The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth. This procedure mimics the ‘in parallel’ evolutionary process.”
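For comparison, here is a similar sketch of the retention scheme W&E describe (again my own illustrative code and parameter values, not theirs): letters guessed correctly in any round are frozen, and only the remaining letters are re-guessed. The mean number of rounds then grows on the order of K log L, which is the scaling W&E report.

```python
import math
import random

def rounds_with_retention(K, L):
    """W&E-style guessing: correctly guessed letters are retained, and each later
    round re-guesses only the letters that are still wrong."""
    unsolved = L
    rounds = 0
    while unsolved > 0:
        rounds += 1
        # each unsolved letter is independently guessed right with probability 1/K
        unsolved = sum(1 for _ in range(unsolved) if random.randrange(K) != 0)
    return rounds

# Illustrative values only; K*ln(L) is printed purely to show the order of magnitude.
K, L, trials = 20, 200, 2_000
mean = sum(rounds_with_retention(K, L) for _ in range(trials)) / trials
print(f"simulated mean rounds: {mean:.1f}, K*ln(L) for scale: {K * math.log(L):.1f}")
```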

W&E were mistaken in thinking the evolutionary process is an in-parallel one; it is an in-series one. A rare adaptive mutation may occur at one locus in the genome of a gamete of some individual; it then becomes manifest in the genome of a single individual of the next generation and is heritable by future generations. If this mutation grants the individual an advantage that leads to its having more progeny than its nonmutated contemporaries, the new genome’s representation in the population will tend to increase exponentially, and it may eventually take over the population.

Let p be the probability that in a particular generation, (1) an adaptive mutation will occur in some individual in the population, and (2) the mutated genome will eventually take over the population. If both these should happen, then we could say that one evolutionary step has occurred. The mean number of generations (waiting time) for the appearance of such a mutation and its subsequent population takeover is 1/p. (I am ignoring the generations needed for a successful adaptive mutation to take over the population. These generations must be added to the waiting time for a successful adaptive mutation to occur.)  After the successful adaptive mutation has taken over the population, the appearance of another adaptive mutation can start another step.

In L steps of this kind, L new alleles will be incorporated into the mean genome of the population. These steps occur in series and the mean waiting time for L such steps is just L times the waiting time for one of them, or L/p. Thus the number of generations needed to modify L alleles is linear in L and not logarithmic as concluded from the flawed analysis of W&E.
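A minimal sketch of the serial picture just described, with illustrative values of L and p of my own choosing: each step waits a geometrically distributed number of generations with mean 1/p, and, as noted above, the generations needed for the takeover itself are ignored.

```python
import random

def generations_for_L_steps(L, p):
    """Serial evolutionary steps: each step waits for an adaptive mutation that also
    goes on to take over the population (probability p per generation), and the next
    step cannot begin until the previous one has completed."""
    generations = 0
    for _ in range(L):
        while random.random() >= p:   # geometric waiting time with mean 1/p
            generations += 1
        generations += 1              # count the generation in which the step succeeds
    return generations

# Illustrative values only; the predicted mean is L/p = 5000 generations.
L, p, trials = 50, 0.01, 2_000
mean = sum(generations_for_L_steps(L, p) for _ in range(trials)) / trials
print(f"simulated mean generations: {mean:.0f}, predicted L/p: {L / p:.0f}")
```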

The flaws in the analysis of W&E lie in the faulty assumptions on which their model is based. The “word” that is the target of the guessing game is meant to play the role of the set of genes in the genome and the “letters” are meant to play the role of the genes. A round of guessing represents a generation. Guessing a correct letter represents the occurrence of a potentially adaptive mutation in a particular gene in some individual in the population. There are K letters in their alphabet, so that the probability of guessing the correct letter is 1/K. They wrote that

1 - (1 - 1/K)^r

is the probability that the first letter of the word will be correctly guessed in no more than r rounds of guessing. It is also, of course, the probability that any other specific letter would be guessed. Then they wrote that

[1 - (1 - 1/K)^r]^L

is the probability that all L letters will be guessed in no more than r rounds. The event whose probability is the first of the above two expressions is the occurrence in r rounds of at least one correct guess of a letter. This corresponds to the appearance of an adaptive mutation in some individual in the population. That of the second expression is the occurrence of L of them. From these probability expressions we see that according to W&E each round of guessing yields as many correct letters as are lucky enough to be guessed. The correct guesses in a round remain thereafter unchanged, and guessing proceeds in successive rounds only on the remaining letters.
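As a check on how these two expressions behave, the mean number of rounds they imply can be computed directly, since the expectation of a non-negative integer random variable is the sum of its upper-tail probabilities. The sketch below is my own; the (K, L) pairs are illustrative, the second pair being the K = 40, L = 20,000 figures discussed in the comment thread. It simply shows the roughly K log L growth that W&E report for their guessing game.

```python
import math

def expected_rounds(K, L, horizon=1_000_000):
    """Mean number of rounds implied by W&E's expressions: the probability that all L
    letters are guessed within r rounds is [1 - (1 - 1/K)**r]**L, so
    E[rounds] = sum over r >= 0 of P(rounds > r) = sum of (1 - that CDF)."""
    total = 0.0
    for r in range(horizon):
        tail = 1.0 - (1.0 - (1.0 - 1.0 / K) ** r) ** L
        total += tail
        if tail < 1e-12:          # remaining terms are negligible
            break
    return total

# Illustrative (K, L) pairs; the second matches figures quoted in the comments below.
for K, L in [(20, 100), (40, 20_000)]:
    print(K, L, round(expected_rounds(K, L), 1), "vs K*ln(L):", round(K * math.log(L), 1))
```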

Their model does not mimic natural selection at all. In one generation, according to the model, some number of potentially adaptive mutations may occur, each most likely in a different individual. W&E postulate that these mutations remain in the population and are not changed. Contrary to their intention, this event is not yet evolution, because the mutations have occurred only in single individuals and have not become characteristic of the population. Moreover, W&E have ignored the important fact that a single mutation, even if it has a large selection coefficient, has a high probability of disappearing through random effects [Fisher 1958]. They allow further mutations only at those loci that have not mutated into the “superior” form. It is not clear whether they intended mutations to be forbidden at the already-mutated loci only in the individuals carrying the mutation, or in other individuals as well. They have ignored the fact that evolution does not occur until an adaptive mutation has taken over the population and thereby become a characteristic of the population. Their letter-guessing game is more a parody of the evolutionary process than a model of it. They have not achieved their second goal either.
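The point attributed to Fisher can be made concrete with a toy branching-process sketch (my own construction, not Fisher's calculation): a new mutant with selection coefficient s survives drift with probability of only about 2s, the classical Haldane approximation, so even strongly beneficial mutations are usually lost.

```python
import random

def lineage_survives(s, cap=1000):
    """Toy branching process for a new mutant lineage: each copy leaves either 0 or 2
    descendants, doubling with probability (1 + s) / 2, so the mean offspring number
    is 1 + s. The extinction probability works out to (1 - s) / (1 + s), i.e. the
    lineage survives with probability 2s / (1 + s), close to 2s for small s."""
    copies = 1
    while 0 < copies < cap:        # cap: treat the lineage as established once it is large
        copies = sum(2 for _ in range(copies) if random.random() < (1 + s) / 2)
    return copies >= cap

# Illustrative selection coefficient; even a 10% advantage is usually lost to drift.
s, trials = 0.1, 5_000
survived = sum(lineage_survives(s) for _ in range(trials)) / trials
print(f"simulated survival: {survived:.3f}  2s/(1+s): {2 * s / (1 + s):.3f}  2s: {2 * s:.3f}")
```

With s = 0.1, roughly four out of five such mutants disappear despite a ten percent selective advantage, which is the loss mechanism Spetner says W&E leave out.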

Thus their conclusion that “there’s plenty of time for evolution” is unsubstantiated. The probability calculation to justify evolutionary theory remains unaddressed.

References

Fisher, R. A. (1958). The Genetical Theory of Natural Selection. Second revised edition. New York: Dover. [First published Oxford: Clarendon Press, 1930.]

Hoyle, F. & Wickramasinghe, N. C. (1981). Evolution from Space. London: Dent.

Wilf, H. S. & Ewens, W. J. (2010). There’s plenty of time for evolution. Proc Natl Acad Sci USA 107(52): 22454–22456.


[*] This paper is a critique of a paper that appeared recently in the Proceedings of the National Academy of Sciences USA and rightfully should have been published there. It was submitted there and was rejected without review; the reason given was that the Board did not find it “to be of sufficient interest for publication.” When I noted how unreasonable this reply was, the editor replied that the paper “makes some obvious and elementary points of no relevance to the paper, and in my opinion does not warrant publication.” The Board then refused to comment further on the matter. It was clear that the Board’s rejection was not on the merit of the substance of the paper but for some other, undisclosed reason.

Comments
heh. I was born in 1952, PaV.
He did say 13 years, lol. I wonder if he meant 30. I've been debating evolution on the net since the early 90's. I remember when there were no blogs or forums and the days when we used newsgroups instead. Ah, talk.origins. Those were the days.Mung
June 1, 2011 at 04:49 PM PDT
heh. I was born in 1952, PaV. Turning 60 next year. But my mind has been pretty open from birth, and I hope will remain that way for a couple more decades still. How about you? I haven't read Signature in the Cell, although I did borrow a copy when it first came out and read some of it. I'm afraid I didn't find what I read convincing. I guess that might seem like evidence that my mind isn't as open as I think it is. From where I'm standing, though, it seemed like Stephen Meyer's mind wasn't very open. I guess my problem with arguments like the one he seemed to be making, which seemed to be, essentially, the Irreducible Complexity argument applied to the genetic code, is exactly the problem I see with Behe's version, namely that just because a structure or function is Irreducibly Complex (will break if you take any part away) doesn't mean that it was assembled from its constituent parts (this is leaving aside the other big problem for IC, which is that IC functions are in fact evolvable by incremental unselected, even deleterious, steps). A clunky structure can be pared down "decrementally" until only the essentials are in place. And there is a fair bit of reason to suppose that clunkier ancestors of self-replicating DNA containing cells could have been "pared down" to leave the DNA code that we know. There's a paper here, for instance: http://www.springerlink.com/content/b418465308240147/fulltext.pdf Now, its thesis may or may not be true, but it seems to me plausible, and a problem with making an ID inference from "there isn't another plausible explanation" is that you only need a plausible explanation to appear, and you lose your inference. It doesn't mean your inference is wrong, but it does mean that you have to check out each alternative hypothesis extremely thoroughly, because unlike most competing hypotheses, ID makes comparatively few differential predictions (at least ones that I've seen).Elizabeth Liddle
June 1, 2011 at 01:20 PM PDT
EL:
I think that’s a fair point.
It's good that you're trying to remain open-minded.
But I think it’s premature to assume that whatever it is, it is too complex to have arisen from a series of simpler self-replicating stages, themselves subject to natural selection.
I don't know how old you are, but I've been looking for Darwinian answers for nigh thirteen years. I bet you were in grade school then. And what have I found? Nothing that gives any kind of realistic answer for how highly improbable informational events can be accounted for. I don't think it's premature. There have been a number of great thinkers who have concluded that Darwinian mechanisms simply can't explain observed biological phenomena. So, don't hold your breath.
This is, of course speculation, but at the heart of the Darwinian argument is the idea that once you have something that self-replicates with variance, and where the variants differ in the efficiency with which they self-replicate, then you have the “recipe” if you like for increasing complexity (if complexity is beneficial), and no more “frontloading” is required than those key initial conditions.
To produce a protein you need a ribosome. To divide, you need cytochrome c. What you're envisioning is way too stripped down. I would recommend Stephen Meyer's Signature in the Cell as a way of understanding just what a big problem "origin of life" is.PaV
June 1, 2011 at 12:38 PM PDT
Lizzie:
...but once you have a gene that does something that allows its possessor to replicate more efficiently, then the probability of variants that are also viable is much greater than the probability of finding that original,
Isn't it true, by definition, that some entity which replicates more efficiently is more viable than one that doesn't replicate at all? Basically, what I think you are saying is, once you have a functional system, changing that system and having it still function is much more likely than coming up with a functional system in the first place. Now frankly, I'm not sure that's even true. I'm also not clear on its relevance to the current discussion. But maybe I just don't understand the argument you're making. How does having a functional system increase the chances of finding a functional variant of that system?Mung
June 1, 2011 at 12:32 PM PDT
I think that's a fair point. I don't have an answer, because I don't think there is one (yet). We don't yet know what the minimum viable genome might be. But I think it's premature to assume that whatever it is, it is too complex to have arisen from a series of simpler self-replicating stages, themselves subject to natural selection. Nonetheless, I agree that the problem of how the simplest DNA genome formed has not been solved (although not with the idea that it isn't, in principle, solvable). Where I probably do disagree with you is over just how simple that first genome had to be. It's possible that it was contained within a simple lipid membrane and it had one gene, with one codon, and that codon resulted in the production of an amino acid that, for example, made the membrane slightly more (or less) elastic, and that this enhanced the probability that it would divide successfully. This is, of course, speculation, but at the heart of the Darwinian argument is the idea that once you have something that self-replicates with variance, and where the variants differ in the efficiency with which they self-replicate, then you have the "recipe" if you like for increasing complexity (if complexity is beneficial), and no more "frontloading" is required than those key initial conditions.Elizabeth Liddle
June 1, 2011 at 12:29 PM PDT
EL:
In other words, the probability space is nested, as it were (i.e. is Bayesian in structure); the probability of a single “original” gene might be small (but we are going back to abiogenesis arguments now) but once you have a gene that does something that allows its possessor to replicate more efficiently, then the probability of variants that are also viable is much greater than the probability of finding that original, and, indeed, the probability of finding variants of, for example, the word “elephant” that are also viable.
You're assuming here a breezy style, and it all seems quite straightforward. But let's take a closer look. Here is the critical claim: " . . . the probability of a single “original” gene might be small (but we are going back to abiogenesis arguments now) . . . " Can we simply assume that the problem of gene origin is relegated only to the time of abiogenesis? I think not. Here's what I mean. To assume that all genes were fabricated all at once would be the case of complete "front-loading" of the genome. Yet, this would mean that all the needed genes throughout the evolution of various life forms were already present from the beginning. How, then, can this be accounted for other than by a Creator? If, then, we abandon the complete "front-loading" hypothesis, then we're left to conjecture that during the unfolding of various life forms inevitably new genes will have had to arise. Now we have the problem of how full-length gene sequences could develop when the odds of such a thing happening are so staggeringly low. I, too, look forward to your rebuttal. Cheers.PaV
June 1, 2011 at 07:45 AM PDT
You say “not all combos produce anything”. But the reality is that all but an infinitely few produce anything. And this is so even when we include polymorphisms.
Well, not "infinitely few", and of those "few" (a large number, in fact), many variants are perfectly viable, which is not true of English text. When a genotype is replicated, or shuffled with another genotype, as in sexual reproduction for example, many of the resulting variants do what they always did, but slightly differently (are expressed slightly more readily; produce a slightly different protein; are expressed under slightly different chemical conditions) and all these variations may result in selectable phenotypes. Importantly of course, stretches can be duplicated and either do what one did twice as much, or become redundant, if the stretch in question is "switched off" by a certain concentration of product. At that point, potentially, the copies are "free" as it were, to "explore" a new search space. In other words, the probability space is nested, as it were (i.e. is Bayesian in structure); the probability of a single "original" gene might be small (but we are going back to abiogenesis arguments now) but once you have a gene that does something that allows its possessor to replicate more efficiently, then the probability of variants that are also viable is much greater than the probability of finding that original, and, indeed, the probability of finding variants of, for example, the word "elephant" that are also viable. And if, of those viable variants, some confer greater reproductive success than others, you have the beginnings of evolution; and if you also have a system that means that duplication is one of the variants, then you have the potential for finding not just a variant, but a variant that performs some additional pro-reproductive function to the one you started from. But I assume you know this argument :) So I look forward to the rebuttal. Cheers LizzieElizabeth Liddle
June 1, 2011 at 03:34 AM PDT
EL: "Between those two extremes, I suggest, lies DNA – not all combos produce anything at all, but the ones that produce nothing don’t actually get in the way, and the ones that produce something can do it in a number of ways (witness the large number of polymorphisms that still result in a functional gene)." You say "not all combos produce anything". But the reality is that all but an infinitely few produce anything. And this is so even when we include polymorphisms.PaV
June 1, 2011 at 03:14 AM PDT
My more serious point was that English is very “brittle” and highly redundant
To me, that statement sounds self-contradictory. Redundancy is just the sort of thing one would want to cure brittleness. Information Theory 101.Mung
June 1, 2011 at 12:50 AM PDT
No, I don't think it is "very impressive", PaV, which was part of my point! It's a fairly trivial achievement. With enough time and effort I might even make a paragraph, but of course it wouldn't be worth reading, even if I achieved it. My more serious point was that English is very "brittle" and highly redundant - only a tiny set of possible combinations actually make pronounceable syllables, let alone recognisable words or phrases. Whereas digits are quite different - any combination of digits represents a number. Between those two extremes, I suggest, lies DNA - not all combos produce anything at all, but the ones that produce nothing don't actually get in the way, and the ones that produce something can do it in a number of ways (witness the large number of polymorphisms that still result in a functional gene). As for the rest of your post, I need to read it more closely (it's 3 in the morning here, and I've been working on a studentship proposal!) but if I read your last sentence aright, I most emphatically agree! You can't model natural selection without, um, selection. Sometime tomorrow I'll try to figure out who isn't doing it ....Elizabeth Liddle
May 31, 2011 at 07:07 PM PDT
Elizabeth: EL: "Eventually recognisably English sentences, of a sort, evolved." " . . . of a sort . . ." Humm. EL: "As you’d expect,including some “irreducibly complex” words – text strings that were not selected until several letters were in place." " . . . several letters were in place . . ." Humm. None of this sounds very impressive. I would fully expect that using a computer you could (by a "biased reproduction") develop a "sort of" sentence. What I would not expect is that you could develop a paragraph using your program---which could only happen, I would suspect, if you were using a target paragraph. It's the difference between micro and macro evolution. EL: "I don’t think it’s a terribly relevant model for DNA, for many reasons, so I think that arguing about WEASEL programs is a bit pointless, but there is simply no doubt that grammatical English sentence can evolve by random mutation and natural selection, without fixing “correct” placements. as long as you have a fitness function that selects for features of English text." I've already commented on the sentence-part of your comment. However, you seem to have missed my point entirely. The logic is quite simple. (1) Patently---on the face of it---Wilf and Ewens model can't be right: it makes evolution too easily had---something no one has seen, and something that would easily be seen. So we know they're wrong. (2) They make a serious error. The equations they develop can only work for "simultaneous" evolution; and "simultaneous" evolution cannot take place at the individual level. It can only happen if the whole population (a million-fold) is involved. But, then W&E have to make clear how newly arrived "correct letters" can sweep through the population: which, of course, is exactly what they hope their equations would obviate. (3) Wilf & Ewens' model sounds like, and WORKS like, Dawkins' "Methinks it is like a weasel" program in BW. But Dawkins clearly states that his program does not realistically represent how natural selection operates. Thus W&E's model is not what they think it is. Have you noticed that W&E's equations don't include a 'selection factor'? How can you present a model of "natural selection" that contains no selection factor whatsoever? It doesn't make sense.PaV
May 31, 2011 at 06:59 PM PDT
Mung: I'm here to entertain!!!PaV
May 31, 2011 at 06:18 PM PDT
Sorry for all the disconnections: when I pasted it from Notepad, I didn’t bother reading it through since I had already done that on Notepad.
And here I thought you were just waxing poetic and finding an interesting way to emphasize certain points. lolMung
May 31, 2011 at 04:09 PM PDT
It is trivially easy to write a "weasel" program in which the "correct" letters are not fixed once they appear, and the sentence still evolves. There are many on the web, and I believe at least one is authored by Dawkins. I've certainly made one myself. I've also made one in which I did not specify the sentence in advance, rather I biased reproduction (which is what natural selection is) in favour of pronounceable syllables, real words, and grammatical combinations. Eventually recognisably English sentences, of a sort, evolved. As you'd expect, including some "irreducibly complex" words - text strings that were not selected until several letters were in place. I don't think it's a terribly relevant model for DNA, for many reasons, so I think that arguing about WEASEL programs is a bit pointless, but there is simply no doubt that grammatical English sentences can evolve by random mutation and natural selection, without fixing "correct" placements, as long as you have a fitness function that selects for features of English text.Elizabeth Liddle
May 31, 2011 at 04:04 PM PDT
Noting that there are only 13 kinds of characters actually used in Dawkins' example of METHINKS IT IS LIKE A WEASEL (including the 'space'), and understanding this 'alphabet' to be equivalent to the number of different 'alleles' for each locus (i.e., K=40 for each locus), adjusting the formula and using the value of 13 (actual letters used=alleles) for K gives the following number of generations: 41PaV
May 31, 2011 at 03:39 PM PDT
Sorry for all the disconnections: when I pasted it from Notepad, I didn't bother reading it through since I had already done that on Notepad.PaV
May 31, 2011 at 02:39 PM PDT
It's a shame that very little of the discussion so far has anything to do with Dr. Spetner's post. First, I remember seeing this paper and looking briefly at its math. It looked straightforward enough. I didn't see any glaring error. But, of course, their conclusion had to be wrong since, otherwise, the force of their conclusion would have been already noticed in the lab. That is, it's too rosy a scenario for evolution. But I didn't want to take the time to think it through, thinking that eventually someone with a population genetic background would address its errors. So, thank you Dr. Spetner for pointing out some of their errors; viz., that assuming that change is only "beneficial" and never "harmful"---i.e., when the correct letter is chosen, nothing more happens---does not realistically reflect what happens to biological populations. However, now forced to think a little bit more about their reasoning, I believe there is another, perhaps more fatal, error present. W&E's scenario, per Spetner's quote above, let's us know that: "[a]fter guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained. The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth. This procedure mimics the ‘in parallel’ evolutionary process.” As Dr. Spetner points out above, their 'first case' is not "serial", but "simultaneous". And, it turns out, so is their second case. W&E assume that all L loci are involved 'simultaneously' in each "round" of "guessing". But, how is that possible? Here's what I mean: We can assume that W&E are thinking here of a 'genome' that is made up of L loci=genes. No problem there. And we can think of each "gene" as being composed of a large number of different configurations; K, in fact. We can assume that 'mutations,' in the form of nucleotide substitutions, allows 'evolution' to produce these K alleles at each of the L loci/genes. Now here's the problem: how can all L (=20,000) loci provide a "new letter" after each "round"? W&E write: "The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth." Per W&E, their K = 40 is arrived at in this fashion: (1) 5 "letters"/L = 5 "letters"/20,000 loci/ individual x 10^6 individuals = 250 "letters"/loci/population (2) Only 1 in 10,000 "letters" (mutations) are beneficial So, (3) 10,000 "letters"/ 'correct letter' x "rounds" (generation)/250 "letters" per population = 40 "rounds"/'correct letter'-population. So, L=20,000 and K = 40. These numbers are true only if we are considering the entire population. IOW, somewhere in this million-fold population, there are "rounds" taking place at EACH of the 20,000 loci. But, for any individual in the population, only 5 of the 20,000 loci are having "letters" changed. Considering that only 1 in 10,000 "letters" is the correct one, it would take 2,000 rounds, = generations, for any one individual to find the correct "letter" (10,000/5 'letter's per generation) ASSUMING that the 5 'letters' were always occurring in the same 5 loci. If they were to happen anywhere among the 20,000 loci as each individual replicated, then it would require (20,000 loci x 10,000 letters/ correct letter)/ 5 letters/ generation = 4 x 10^7 generations, on average, for just ONE of the loci to get the right "letter" (=allele=gene). Obviously this is a whole lot more than W&E's value of 390 "rounds"=generations. So, what is going wrong here? 
Well, the model that W&E use is looking at all of the million-fold individuals in the population simultaneously. And, apparently, implicit in their model, they've simply assumed that once a "correct" letter is obtained, ANYWHERE in the population, that it is "instantaneously" 'fixed'. And, further, that because there is "instantaneous fixing" taking place, they in effect are also assuming that none of the already accumulated mutations are lost to the population via this fixing event. As Dr. Spetner points out, this is not how "natural selection" operates. He points out that when the "correct" letter is found, there is no reason to believe that in the next "round" it won't be lost by substitution. We can further point out that the assumptions implicit in their calculations are valid ONLY for the entire population, and that to assume that nothing is lost to the population and that everything is just instantaneously fixed, are not realistic assumptions. Let me put it another way: the model they're assuming is comparable to Dawkins' model of the monkeys banging away at typewriters hoping to type out the phrase: "Methinks it is like a weasel". In that model, as soon as the correct "letter" is typed, it, too, is instantaneously saved, and, thereafter, only the "wrong letters" are to be newly typed. If this is true, then the equation that W&E give us should also apply to Dawkins' model. As I'm typing this, I am opening up Dawkins' Blind Watchmaker to see what Dawkins came up with. In his model, K=27 and L=28. W&E's equation is: log 28/log(27/26) = 88 generations. Dawkins starts with three different strings of letters and his generation times are: 43, 64 and 41 using a computer that mimics W&E's model. These are very similar results. In fact, on close inspection, his starting string for the 41 generation case contained two correct letters. Hence, for that case, L=26. Then, W&E's equation would give us: log 26/log(27/26) = 86. The discrepancies are probably due to stochastic effects that are present when L is so small, and thus, "r" being so small. Also, each of the 20,000 loci (=genes) in W&E's model are different, whereas in Dawkins' model the "letters" can be the same (this should have the effect of changing the probability distribution towards a smaller running time). Averaging the times, we have 49.3 generations versus a calculated value of 87 (average). Obviously, we're very much in the same ballpark as far as models go. And what does Dawkins say about his computer model? "Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL." (p. 50 paperback) In the case of W&E, they're assuming the instantaneous fixation of the "correct" letter. This just isn't how natural selection works out in nature. It would be nice to see other population geneticists criticize this work. The rejection of Dr. Spetner's paper gives the impression that those who would criticize it will not receive a very friendly reception. Too bad.PaV
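[For readers who want to check the arithmetic quoted in the comment above, the formula log L / log(K/(K-1)) can be evaluated directly. The short sketch below is illustrative only and takes the comment's formula at face value; the base of the logarithm cancels out.]

```python
import math

def approx_rounds(K, L):
    """log(L) / log(K / (K - 1)), the approximation quoted in the comment above."""
    return math.log(L) / math.log(K / (K - 1))

print(round(approx_rounds(27, 28)))   # about 88, as quoted for K = 27, L = 28
print(round(approx_rounds(27, 26)))   # about 86, for the run starting with two correct letters
```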
May 31, 2011 at 02:25 PM PDT
Heinrich,
To this point his analysis has been spot-on.
I said to this point. The paragraph I was quoting appears two paragraphs above the one you quote. So yes, really.Mung
May 31, 2011 at 11:35 AM PDT
Again I agree with Spetner. To this point his analysis has been spot-on.
Really? See my comment at 2 - it only recently appeared thanks to the joys of being moderated over a holiday weekend.Heinrich
May 31, 2011 at 01:09 AM PDT
CY, I just thought the Simpson quote fit in well with the overall subject of the thread. I'm hoping someone can show me the maths, lol. Because I think it's garbage. 1. What does it take to program a genome even once, much less ten times over? 2. For most of those 5000MY life was, as he says, microbes (as far as we know). So don't we need to take that into account rather than counting on it to give us the figures we want? Don't we need to take into account what it takes to program a human genome, and don't we need to take into account generation times other than those of microbes? He's just tossing figures out with no basis in fact for believing that they are even relevant. Pretty much what he appears to be accusing those mathematicians of doing. Pot. Kettle. Black.Mung
May 30, 2011 at 08:19 PM PDT
smidlee, there is not enough time, not just on earth but on every planet in the whole universe, for evolution to work. there are only 10^150 events that have happened in our universe's history. that's 10^80 particles, 10^26 seconds and 10^43 divisions of a second, put that all together and you get 10^150 events. well, just to form one protein 200 aa long, that's 10^260 possible sequences.noam_ghish
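[A quick arithmetic check of the exponents quoted in this comment; the figures themselves are the commenter's, and this sketch only adds them up.]

```python
import math

# 10^80 particles x 10^26 seconds x 10^43 time divisions per second:
print(80 + 26 + 43)                    # 149, i.e. roughly the 10^150 events quoted above
# number of sequences of 200 amino acids drawn from 20 kinds, as a power of ten:
print(round(200 * math.log10(20), 1))  # about 260.2, matching the 10^260 figure
```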
May 30, 2011 at 08:11 PM PDT
El, "CY: You should look into the controversy surrounding Dr Dawkins' selfish gene notion. I've personally had discussions with a PhD level biologist who thinks Dr Dawkins' reductionist approach is rubbish. He's had to defend and argue his position over the decades. And, like a good scientist, he's given way on some aspects of his idea. And his support of the meme concept has also taken some heat." This is what I really like about you. Sometimes you're with us and sometimes you're not. And the times you're not really with us, you're still pleasant. It takes guts to pull that off consistently. That bit about Dawkins and Gould I'm quite aware of. I think Dawkins ultimately disagreed with Gould because of his rather arbitrary non-overlapping magisterium rule. For Dawkins, the only magisterium is "science" according to his definition. Any other magisterium is irrelevant and/or non-existent. No sense in placating those who are wrong about the existence of gods. I personally don't believe Dawkins' selfish gene hypothesis is that significant, and I don't find many people actually pushing it, save those on the fringes of science. It really is pop science, or pseudo science, but Dawkins would rather me not go there. :) Isn't it interesting how we can sort of agree with Dawkins but really not? We can agree with him in his consistency with his materialistic position, and that this is how Gould should have behaved, but we can disagree ultimately with his materialistic presuppositions.CannuckianYankee
May 30, 2011 at 06:49 PM PDT
Mung, John Maynard Smith's statement sounds a lot like begging the question too. You have to assume first of all that "if one estimates, however roughly, the quantity of information in the genome" is something that developed out of the naturalistic processes you're attempting to show there's enough time for. Therefore, "and the quantity that could have been programmed by selection in 5000 MY, there has been plenty of time," doesn't follow. Why? You have to rule out that the quantity of the information in the genome did not arise by an intelligence placing it there without the constraints of probabilistic resources. I hope Elizabeth is reading this. And then we have old faithful PZ stating that it's wrong but useful against the alternative - creationists. As if that isn't laughable.CannuckianYankee
May 30, 2011 at 06:29 PM PDT
@ noam-ghish Don't forget to use your "Imagination." With your imagination 5 billion years is plenty of time.Smidlee
May 30, 2011 at 04:38 PM PDT
mung, that JM Smith quote proves nothing. Smith offers no reasons why he thinks 5 billion years is enough time.noam_ghish
May 30, 2011 at 04:05 PM PDT
Elizabeth, Thanks for your answers. I realize I was asking a lot, and perhaps a little off topic. "However, I think that is entirely valid, and has nothing to do with whether or not you are an atheist or not. “Supernatural”, essentially, means “unexplained”. It may well mean more than that too, but something cannot be, as I see it (and I think in the sense Dawkins is saying it) both “supernatural” and “an explanation”" I'm not sure I would agree. It seems that if one is already committed to naturalistic explanations, then anything outside of them is inexplicable. To me this sounds a lot like begging the question.CannuckianYankee
May 30, 2011 at 03:56 PM PDT
Mung: Well, after that John Maynard Smith quote I can sleep quite easily! ;-) Night all! I really AM going to bed now.ellazimm
May 30, 2011 at 03:55 PM PDT
Occasionally someone, often a mathematician, will announce that there has not been time since the origin of the earth for natural selection to produce the astonishing diversity and complexity of life we see ... The only way I know to give a quantitative answer is to point out that if one estimates, however roughly, the quantity of information in the genome, and the quantity that could have been programmed by selection in 5000 MY, there has been plenty of time. If, remembering that for most of the time our ancestors were microbes, we allow an average of 20 generations a year, there has been time for selection to program the genome ten times over. - John Maynard Smith
Mung
May 30, 2011 at 03:51 PM PDT
There are D amino acids and L amino acids. (I'm not exactly sure why we use the latin name for D and the english name for L). I'm pretty sure this difference only matters for origin of life scenarios because the genome only codes for L amino acids. Is that true? Can L amino acids and D amino acids bind together?noam_ghish
May 30, 2011 at 02:14 PM PDT
Well dang it mung, how can I please you if I don't let you see them first???bornagain77
May 30, 2011 at 02:04 PM PDT