# Lee Spetner responds (briefly) to Tom Schneider

Tom Schneider, “Mr. Information Theory” for the pro-Darwin side, criticized Lee Spetner (author of Not By Chance) for a probability calculation characterizing evolutionary processes. Here is a reply by Spetner that I’m posting with his permission:

Someone just brought to my attention the website http://www.lecb.ncifcrf.gov/~toms/paper/ev/AND-multiplication-error.html
which criticizes a probability calculation I made. . . .

Schneider is mistaken. He evidently did not take the trouble to understand what I was calculating. My calculation is correct. The probability 1/300,000 is the probability that a particular mutation will occur in a population and will survive to take over that population. If that mutation occurred it would have to have had a positive selective value to take over the population. If that occurred, then all members of the new population will have that mutation. Then the probability of another particular adaptive mutation occurring in the new population is again 1/300,000 and is independent of what went before – I have already taken account of the occurrence and take-over of the first mutation.

Therefore, the correct probability of both these mutations occurring and taking over their populations is the product of these two probabilities. And, as I wrote, the probability of 500 of them occurring is the probability 1/300,000 multiplied by itself 500 times. My calculation is correct and Schneider is mistaken. He is similarly mistaken about what he wrote about the article in Chance – Probability Alone Should End the Debate, http://www.windowview.org/science/06f.html, since that article relied on my calculation.
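Spetner's arithmetic is straightforward to check. Since (1/300,000)^500 underflows ordinary floating point, one works in logarithms; a minimal sketch in Python (illustrative only, taking the post's independence assumption at face value):

```python
import math

# Per-step probability from the post: a particular adaptive mutation
# occurs and survives to take over the population.
p_step = 1.0 / 300_000

# Chance of 500 such steps in a row, assuming independence as argued
# above. The raw product underflows a float, so use logarithms.
log10_p = 500 * math.log10(p_step)
print(f"log10 of combined probability: {log10_p:.1f}")  # about -2738.6
```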

## 48 Replies to “Lee Spetner responds (briefly) to Tom Schneider”

1. 1
Joseph says:

Dr Spetner must be mistaken because we know evolution is a fact and all extant organisms owe their collective common ancestry to some unknown population(s) of single-celled organisms which just happened to have the ability to asexually reproduce. Therefore anyone doing any probability calculations needs to remind themselves of those facts.

(for those who do not know me, the above is sarcasm)

For anyone who doesn’t have or hasn’t read Not By Chance I highly recommend it (for whatever that is worth). And if you live in the New England area you may borrow my copy.

2. 2
Smidlee says:

Thanks to Tom Schneider I now understand how Darwinists deal with these incredible odds… they glue their dice.

3. 3
JGuy says:

From the website Lee links to, the guy wrote this analogy….

“We then find the card that has the most coins with heads up and we throw away all the other cards. So if even one card has an extra head, it will be found. We reproduce that card 100 times (with errors) and repeat the selection. Suppose that we make an error in copying a coin state about 1 time in 100. Then almost every other generation we will get another head. Starting from about 50% heads, it will only take 10 generations to get a card with all heads. That is what happens in nature. ”

Now, if this were true, then 3,000 generations later the fruit fly should have made many new beneficial proteins.

…. or a mouse should have already been bred into another kind of rat with new proteins after 10 to 100 generations.

An experiment is better than a thousand scientific opinions. In this case – in my opinion – not so scientific.
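Schneider's card analogy is easy to put to the test in code. A rough simulation sketch (Python; the 10-coin card size and the fixed seed are my assumptions, since the quoted passage doesn't give the card size):

```python
import random

random.seed(1)  # fixed seed so the run repeats

N_COINS = 10    # coins per card (assumed)
N_COPIES = 100  # copies made each generation
ERR = 0.01      # chance each coin state is copied wrongly

# Start with a card that is 50% heads (True = heads).
card = [i % 2 == 0 for i in range(N_COINS)]

generation = 0
while not all(card):
    generation += 1
    # Copy the card with errors, then keep only the copy with the
    # most heads (Schneider's selection step).
    copies = [[(not c) if random.random() < ERR else c for c in card]
              for _ in range(N_COPIES)]
    card = max(copies, key=sum)

print(f"all heads after {generation} generations")
```

On typical runs this reaches all-heads within roughly ten generations, consistent with the quoted claim; whether that selection regime is a fair model of nature is, of course, exactly what this thread disputes.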

4. 4
JGuy says:

5. 5
JGuy says:

“While this may be true for random strings, it does not directly apply to proteins found in living organisms. Why? Because individual mutations accumulate one-at-a-time and there is amplification (replication) between steps. That is, if one starts with a given amino acid string, the mutations in the genome (from which the string is derived) are sequential. A mutation occurs, perhaps changing the amino acid string. If the change is bad, which is true for the majority of changes, the organism dies and its genes are gone.”

This guy is entirely full of it. Andmethinksitsoundsfamiliar. Maybe he was frantically trying to find a way around the probability issues at too late an hour, and hence rediscovered Dawkins’ pre-school “logic”.

6. 6
DLH says:

Schneider’s tutorial gets even more interesting. See:
http://www.lecb.ncifcrf.gov/~t.....error.html

“A mutation occurs, perhaps changing the amino acid string. If the change is bad, which is true for the majority of changes, the organism dies and its genes are gone.” Oh that life were so simple. Schneider may find it instructive to read John C. Sanford’s Genetic Entropy & the Mystery of the Genome, 2005, ISBN 1599190028. Sanford reviews major population models showing that many mutations are near neutral or not harmful enough to cause immediate death. Instead they accumulate until they eventually cause species death. “Beneficial” mutations are so rare that they do not accumulate in practice.

7. 7
DLH says:

Tom Schneider’s probability tutorial appears to have been written in haste. See: http://www.lecb.ncifcrf.gov/~t.....error.html

A concrete example. Suppose we have 10 coins that land as ‘heads’ or ‘tails’ after they are all flipped at once in parallel. The probability of getting all heads is (1/2)^10 = 1/1024. The probability of not getting any head in a parallel flip is 1-1/1024 so the probability of getting no heads after F parallel flips is (1-1/1024)^F. After a number of flips F, the probability of finally getting all heads is

1 – (1-(1/1024))^F.

For example, after 1024 tries the chance of getting all heads at least once is only 1 – (1023/1024)^1024 ≈ 63.2%. So it could take quite a while to get all heads!

My understanding of “The probability of not getting ANY HEAD” is “the probability of getting ALL TAILS” which is identical to “the probability of getting ALL HEADS.” in a parallel flip. i.e., 1/1024, not 1-1/1024.

I expect Schneider intended to say “probability of NOT getting ALL heads” in which case his calculations would appear to be correct for random trials etc.

Consequently, I think the “probability of NOT getting ALL heads in ALL F sets of 10 parallel flips” would be 1-(1/1024)^F.

Schneider’s conclusions thus appear a bit dubious from my small understanding of probability. Maybe Schneider could clarify.

His following application to heads/tails cards assumes all “mutations” or errors of one kind are positive and are kept, apparently compounded by his probability calculation error.

Schneider may find it helpful to correct or clarify the above – OR correct his conclusion as follows:
“So it indeed does end the debate, [Spetner and http://www.WindowView.org have] Tom Schneider has made a fatal error.”

(My comments above are from memory of ancient classes on probability. Hopefully some of the math whizzes can verify/clarify/correct them.)
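For what it's worth, the “at least one all-heads result in F tries” figure can be checked directly, both in closed form and by simulation (a quick sketch; the Monte Carlo check is my own addition):

```python
import random

random.seed(0)

p_all_heads = 0.5 ** 10   # one parallel flip of 10 coins: 1/1024
F = 1024

# Closed form: chance of at least one all-heads result in F flips.
p_at_least_once = 1 - (1 - p_all_heads) ** F
print(f"closed form: {p_at_least_once:.3f}")  # about 0.632

# Monte Carlo check: getrandbits(10) == 1023 means all ten coins
# came up heads on that parallel flip.
TRIALS = 2_000
hits = sum(
    any(random.getrandbits(10) == 1023 for _ in range(F))
    for _ in range(TRIALS)
)
print(f"simulated:   {hits / TRIALS:.3f}")
```

So 1 – (1023/1024)^1024 ≈ 63.2% does hold for “at least one all-heads flip”; the dispute above is over the wording “not getting any head,” which, read literally, would be (1/2)^10 rather than 1 – 1/1024.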

8. 8
DLH says:

Thanks Zachriel for explaining Schneider’s calculations. That makes more sense now.

Schneider argues:

If such a string were to be generated using independent selection of the amino acids, then the probability of generating any particular string is 20^-300, a very small number indeed. While this may be true for random strings, it does not directly apply to proteins found in living organisms. Why? Because individual mutations accumulate one-at-a-time and there is amplification (replication) between steps. That is, if one starts with a given amino acid string, the mutations in the genome (from which the string is derived) are sequential.

Aside from the Genetic Entropy issue detailed by John Sanford, Schneider appears to beg the question of abiogenesis by assuming mutations to large populations of already existing cells. That appears to be a far cry from the calculation as to how the self-reproducing cell originally came together. How do you get the first self-replicating cell if there is no self-reproduction before then?

e.g., The smallest self-replicating genome found so far, SAR11, appears to have about 1,354 genes, each encoding numerous amino acids.

Genome Streamlining in a Cosmopolitan Oceanic Bacterium, Stephen J. Giovannoni et al., Science 19 August 2005: Vol. 309, no. 5738, pp. 1242–1245
http://www.sciencemag.org/cgi/...../5738/1242

Lean Gene Machine: Ocean bacterium has the most streamlined genome
http://www.sciam.com/article.c.....38;colID=5

This gives an example of how “natural selection” works to minimize the genome’s length – not increase it with extra DNA of no immediate value.

9. 9
idnet.com.au says:

Probability theory alone destroys non intelligently designed theories on the origin of life in our cosmos.

According to J.B.S. Haldane “How small must the first natural organism have been? If this minimum involves 500 bits (of specified information), one could conclude either that terrestrial life had had an extraterrestrial origin (with Nagy and Braun) or a supernatural one.”

see Steve Jones’ quotes on http://creationevolutiondesign.....ne_30.html

10. 10
franky172 says:

Zachriel
Consider an analogy with a population of words. Suppose a word can evolve by random point-mutation or by random recombination with other words in the population. If a mutant forms a valid word, it is added to the population. If not, it is ruthlessly eliminated. The population is limited to a few hundred of the longest words.

I ran this experiment, but the dictionary I found only had words up to 8 letters 🙁 The results can be seen here:
http://www.duke.edu/~pat7/public/htm/sample.html

11. 11
JGuy says:

Zachriel, DLH:
You two should check out the latest issue of Creation Research Society Quarterly. There is a good article titled “The Elimination of Mutations by the Cell’s Elaborate Protein Quality Control Factory: A Major Problem for Neo-Darwinism”.

Not that this is the only reason Schneider’s article is faulty, but it (the article I posted the title to) simply presents powerful arguments that suggest everything Schneider hopes for is moot on the subject presented in this CRSQ article alone. I.e., if the cell can’t make a new protein without simultaneously changing the quality control functions in cells (to accommodate the specific new protein(s)), then you can’t have new proteins just waltzing into the mix to increase fitness.

12. 12
Andrea says:

From the quotes provided here and at Tom Schneider’s site it’s hard to understand how the original argument went. How is the number 1/300,000 derived?

13. 13
franky172 says:

It’s probably worth noting in the link I posted earlier that the crossover procedure used defaults to “crossover and grow” – so any crossover between two words of four letters each will be a word of longer than four letters; the growing can be implemented as a mutation as well, but I thought it worth noting that’s the way it was implemented.

The M-code is available here:
http://www.duke.edu/~pat7/publ.....rdLength.m

14. 14
jzs says:

My thought process when I read interesting articles and comments like this is always:

If ID proponents use chance to infer that something is unlikely, and Darwinism proponents use chance to infer that something is actually likely, how can chance be a “minor ingredient” (The Blind Watchmaker, p. 49) like Dawkins claims?

15. 15
tinabrewer says:

People like Dawkins make a big deal out of how chance is a “minor” component of their schema because they recognize that chance is the biggest obstacle to the intuitive acceptance of their theory by the lay public. To dim the natural incredulity most people feel when they are told that the miracle of life is a result of a gazillion accidents all piled up on each other, they make much noise about the NON-random action of natural selection. Of course, the creative part of NDE is provided not by the passive working of natural selection, but by the allegedly random fodder provided to natural selection by mutations.

16. 16
Atom says:

[Slightly Off-Topic]

Has anyone else read the exchange between Spetner and Max?

Lee Spetner/Edward Max Dialogue

It goes over some issues relevant here, such as probabilities and increasing information contents. Good read. (I think I’ll soon purchase his book Not By Chance…)

17. 17
Chris Hyland says:

“From the quotes provided here and at Tom Schneider’s site it’s hard to understand how the original argument went. How is the number 1/300,000 derived?”

I’d also be interested to know what organism we’re talking about.

“If the cell can’t make a new protein without changing the quality control functions in cells simultaneously (to accommodate the specific new protein(s)), then you can’t have new proteins just waltzing into the mix to increase fitness.”

I don’t know how you’d define ‘new protein’ exactly, but pretty large changes have been observed occurring in populations in culture (acquisition of a new binding domain for example) so I don’t think it’s true that the cell can’t make new proteins. To be fair I haven’t read the article though.

“Of course, the creative part of NDE is provided not by the passive working of natural selection, but by the allegedly random fodder provided to natural selection by mutations.”

I guess the argument is that what variation is actually selected for is not random even though the underlying variation is. I guess what the variation determines is the possible directions that selection is able to take.

“chance is the biggest obstacle to the intuitive acceptance of their theory by the lay public.”

Something like ‘the miracle of life is a result of a gazillion accidents all piled up on each other’ seems an unwarranted philosophical extrapolation from evolutionary theory to me, and of all the evolutionary biologists I have heard speak I have not heard one of them come close to saying anything like that. On the other hand something like ‘the processes we have observed that we have evidence to show that they have been a major force in evolution have no apparent direction or goal’ doesn’t really sound as catchy. I’d be happy with ‘evolution appears to be unguided’ but we shouldn’t be teaching kids scientism so the ‘appears’ shouldn’t really be necessary when speaking in a scientific context.

18. 18
Andrea says:

Chance is a minor component because for most populations, a very large sequence space is sampled at each generation. For instance, each newborn human is generally estimated to carry about 1 new gene mutation. That means that 300,000,000 Americans will display 10,000 new mutations/gene/generation (ballparking at 30,000 genes). 6 billion humans will sample 2×10^5 variants/gene/generation. That’s a lot of variation!

That’s also why the 1/300,000 number claimed by Spetner seems way too low, by the way. (I wonder whether he confused mutation rate with the chance of a mutation appearing in a population.) For instance, as mentioned above, essentially every possible single amino acid substitution appears de novo at every human generation. The chance of fixation of a new mutation with even a small selective advantage (say, 1%) is 2s, i.e. 2%. That means that it would take only 50 generations on average for a new selectively advantageous aa substitution to appear and “sweep” a human-size free-breeding population. And when that happens, the time to fixation is rather short – a few dozen generations or so.

19. 19
shaner74 says:

“Americans will display 10,000 new mutations/gene/generation (ballparking at 30,000 genes). 6 billion humans will sample 2×10^5 variants/gene/generation. That’s a lot of variation!”

Even if this is correct (I just don’t know the stats) it’s amazing that we all somehow remain human. Think we would have seen some hip new macro-change by now. New body plan maybe?

20. 20
DLH says:

Mea culpa
Following Zachriel’s explanation, I formally withdraw my comments at 8 and apologize to Tom for misreading his example. Better get some more sleep and a good probability book.

(P.S. I still believe his application to nature does not follow, for reasons such as the probability of selection as reviewed in Sanford’s Genetic Entropy, etc.)

21. 21
Joseph says:

Andrea:
Chance is a minor component because for most populations, a very large sequence space is sampled at each generation. For instance, each newborn human is generally estimated to carry about 1 new gene mutation. That means that 300,000,000 Americans will display 10,000 new mutations/gene/generation (ballparking at 30,000 genes). 6 billion humans will sample 2×10^5 variants/gene/generation. That’s a lot of variation!

But variation just leads to wobbling stability, which isn’t a good thing for evolutionism.

Andrea:
That’s also why the 1/300,000 number claimed by Spetner seems way too low, by the way. (I wonder whether he confused mutation rate with the chance of a mutation appearing in a population.)

It is probably too high. Becoming fixed takes quite a bit of luck or intention.

For larger populations, i.e. those over 1,000, NS is practically nil. For populations under 1,000, Mayr assures us any mutation will get lost just by random effects.
————————————————————

Zachriel:
I note that no one has yet attempted to use Spetner’s methodology to calculate how word evolution should progress.

And for good reason. And that reason would be very apparent to anyone who has read his book.

Also “evolution” isn’t the issue. It is the mechanism that is being debated.

22. 22
PaV says:

Andrea:

Chance is a minor component because for most populations, a very large sequence space is sampled at each generation. For instance, each newborn human is generally estimated to carry about 1 new gene mutation. That means that 300,000,000 Americans will display 10,000 new mutations/gene/generation (ballparking at 30,000 genes). 6 billion humans will sample 2×10^5 variants/gene/generation. That’s a lot of variation!

I’m not sure how you arrived at your numbers.

The mutation rate of genomes is generally considered to be about 10^-8/nucleotide. There’s about 10^10 nucleotides in the human genome. That means about 100 mutations/generation. But the percentage of the genome that is “coding” for genes is about 3%. That means about 3 mutations occurring, on average, in “genes”. There are about 30,000 genes/genome in humans. So the number I get is: 3/30,000, or 10^-4 mutations/gene/generation. For a population of 3 x 10^8, that means about 30,000 mutations/gene/generation. Taking 20 years as an average time per generation, that means 1,500 mutations per year/gene. (How many of these are “beneficial”?)
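The chain of arithmetic above can be written out step by step (a sketch that simply takes the comment's figures at face value, without vetting them):

```python
# Figures as given in the comment above.
mut_rate = 1e-8          # mutations per nucleotide per generation
genome_nt = 1e10         # nucleotides in the genome (the comment's figure)
coding_frac = 0.03       # fraction of the genome coding for genes
n_genes = 30_000
population = 3e8
years_per_gen = 20

mut_per_birth = mut_rate * genome_nt            # ~100 per genome
mut_in_genes = mut_per_birth * coding_frac      # ~3 land in genes
per_gene = mut_in_genes / n_genes               # ~1e-4 per gene per birth
per_gene_pop = per_gene * population            # ~30,000 per gene per generation
per_gene_year = per_gene_pop / years_per_gen    # ~1,500 per gene per year
print(per_gene_pop, per_gene_year)
```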

As to Spetner’s number of 1/300,000, he has a unique educational background, and with that background comes up with an impressive way of getting a realistic number for “fixation” (not appearance) of a mutation in a population.

23. 23
PaV says:

Zachriel:

“Illuminating work, Franky172. Apparently, the evolutionary algorithm is far more efficient than random search. ”

This shouldn’t be surprising since most, if not all, evolutionary algorithms “sneak in” information; and “information” is always directed towards a target. And only knowing where a target is will, per NFL theorems, improve your chances over a simple random search.

24. 24
Andrea says:

“For a population of 3 x 10^8, that means about 30,000 mutations/gene/generation.”
Right, but those are nucleotide substitutions. Assuming (conservatively) that 1/3 of nucleotide substitutions result in amino acid substitutions (hence potential phenotypic changes), it comes to 10^4 mutations/gene/generation in the US, 2×10^5 mutations/gene/generation in the world, as I said.

As to Spetner’s number of 1/300,000, he has a unique educational background, and with that background comes up with an impressive way of getting a realistic number for “fixation” (not appearance) of a mutation in a population.
The chance of fixation of a selectively advantageous mutation is, regardless of the educational background of the proponent and impressiveness of their claims, 2s (2 x the selection coefficient). An only slightly advantageous mutation, with a selective coefficient of 0.01 (i.e. an increase in 1% in transmission – essentially experimentally undetectable in humans) has a chance of fixation of 2%.

For most organisms, as the example above shows, there is an enormous range of mutations sampled at each generation. The favorable ones have a very good chance of fixation. Of course, unfavorable mutations won’t get fixed, but no one would not expect the formation of a new species to require 500 selectively disfavored mutations.

Finally, there is the usual error of painting the target around the arrow. That is, even assuming 500 mutations are required to make a new species, it is only in retrospect that it had to be that species. Going forward, evolution could have just as easily generated a different species using different mutations.

In other words, while the chances of evolving one specific species are indeed low, the number of species that evolved is a minuscule fraction of the number of potential new species which could have evolved. (It’s like saying that the chance of an individual with my – or your – DNA sequence coming from the mating of my or your parents is essentially zero, but once they got to it, someone had to be born.) One should of course correct for that when talking probabilities.

Basically, I am just trying to understand how Spetner got to 1/300,000, because I can’t find a solid justification in the links provided.

25. 25
Andrea says:

That should have been:
“no one would expect the formation of a new species to require 500 selectively disfavored mutations.”

26. 26
Joseph says:

Andrea:
The chance of fixation of a selectively advantageous mutation is, regardless of the educational background of the proponent and impressiveness of their claims, 2s (2 x the selection coefficient). An only slightly advantageous mutation, with a selective coefficient of 0.01 (i.e. an increase in 1% in transmission – essentially experimentally undetectable in humans) has a chance of fixation of 2%.

How did you figure that?

27. 27
Patrick says:

Someone previously banned a while back tried to add this comment:

Joe, Crow & Kimura (pp 418-422) show how that is figured.

The assumption is that the number of offspring of a mutant follows a Poisson distribution with mean 1+s. A branching process argument then shows that the probability p of ultimate survival (meaning there is a non-zero number of mutants after a very long time) is the solution of

1-p=exp[-(s+1)p]

The solution is approximately p=2s-(5/3)s^2+(7/9)s^3-(131/540)s^4 and so on, or approximately p=2s for small s.
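The p ≈ 2s rule quoted above can be reproduced numerically by solving the survival equation directly (a sketch; the bisection solver is mine, not from Crow & Kimura):

```python
import math

def survival_probability(s: float, tol: float = 1e-12) -> float:
    """Nonzero root of 1 - p = exp(-(1 + s) * p), found by bisection."""
    f = lambda p: 1.0 - p - math.exp(-(1.0 + s) * p)
    lo, hi = 1e-9, 1.0  # f(lo) > 0 and f(hi) < 0 for s > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# s = 1% gives roughly a 2% chance of ultimate survival...
print(survival_probability(0.01))   # ~0.0198, close to 2s = 0.02
# ...and s = 0.1% gives roughly 0.2%, Spetner's 1-in-500 figure.
print(survival_probability(0.001))  # ~0.002
```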

28. 28
PaV says:

Andrea:
“‘For a population of 3 x 10^8, that means about 30,000 mutations/gene/generation.’
Right, but those are nucleotide substitutions. Assuming (conservatively) that 1/3 of nucleotide substitutions result in amino acid substitutions (hence potential phenotypic changes), it comes to 10^4 mutations/gene/generation in the US, 2×10^5 mutations/gene/generation in the world, as I said.”

My number wasn’t 10^4, but 10^-4. Per your indication, 1/3 of nucleotides produce an amino acid change, so that means 3×10^-5 gene mutations (aa substitutions)/generation.

Spetner’s numbers are 1/500 for fixation, and 1/600 for appearance. I think he was using a 1/10^5 nucleotide mutation rate, which makes my number above 1/300 – about what Spetner uses. He uses a 0.1 selection coefficient – very realistic. You multiply 1/600 (for the mutation to appear) by 1/500 (chance of being fixed) = 1/300,000 (for each mutation).

“That means that it would take only 50 generations on average for a new selectively advantageous aa substitution to appear and ‘sweep’ a human-size free-breeding population. And when that happens, the time to fixation is rather short – a few dozen generations or so.”

Along with:
“For most organisms, as the example above shows, there is an enormous range of mutations sampled at each generation. The favorable ones have a very good chance of fixation. ”

Putting these two statements together, the only conclusion you can come to is that either (1) beneficial mutations are extremely rare, or (2) all kinds of “beneficial” mutations have become fixed in the human population since the time of Darwin. Would you like to point out to us these “beneficial mutations”, or do you want to live with the notion that “beneficial mutations” are rare? (And probably swamped by deleterious ones.)

29. 29
Andrea says:

“My number wasn’t 10^4, but 10^-4. Per your indication, 1/3 of nucleotides produce an amino acid change, so that means 3×10^-5 gene mutations (aa substitutions)/generation.”
If you are talking per individual. But what matters is the population, so, as you stated correctly yourself the first time around:
“So the number I get is: 3/30,000 , or 10^-4 mutations/gene/generation. For a population of 3 x 10^8, that means about 30,000 mutations/gene/ generation.”
which, corrected for aa substitutions instead of nucleotide substitutions, comes to 10,000 gene/generation (in the US).
“Spetner’s numbers are 1/500 for fixation, and 1/600 for appearance. I think he was using a 1/10^5 nucleotide mutation rate, which makes my number above 1/300 – about what Spetner uses. He uses a 0.1 selection coefficient – very realistic. You multiply 1/600 (for the mutation to appear) by 1/500 (chance of being fixed) = 1/300,000 (for each mutation).”
Still doesn’t make sense to me. First of all, a 10^-5 mutation rate is way too high. If we had that mutation rate, we would carry 10^5 new nucleotide substitutions/generation, of which ~3% (=3,000) would be within genes, that is 1,000 new amino acid substitutions/generation (0.03/gene). That would mean that the population in the US would sample a whopping 10^7 mutations/gene/generation. You’d pretty much saturate the possible single point mutant space each generation (a typical gene of 300 aa has ~60,000 possible single aa mutants). You’d be pretty much assured that every mutant would appear at every generation. And 0.001 is a selection coefficient that is pretty much close to neutral (it would mean, in human terms and assuming 2 children/generation, that the carriers would, on average, have an extra descendant every 500 generations – essentially negligible).

But even if we assume (out of whatever calculation Spetner did) a rate of appearance of a specific mutation in a population of 1/600, that means that every few generations one of the 500 mutations he presupposes are necessary for the speciation would likely appear. And every new generation would be another roll, with another chance for another of the gene mutations to appear, and all of them would travel through the population in parallel, working toward fixation.

And that again is assuming you expect precisely and only those specific mutations, as opposed to any of the innumerable alternative mutations that would give rise to any of the other possible innumerable alternative species. (Which is of course the obvious conceptual flaw in the calculation.)

It would be good to have the actual derivation of the numbers from Spetner’s book.

30. 30
Joseph says:

Zachriel:
The mechanism in the word experiment is random mutation/recombination along with simple selection.

To follow Spetner’s argument all you get to mutate is one bit, not a whole letter which is comprised of 8 bits. If you want to say recombination is random then the onus is on you to demonstrate that.

Zachriel:
As Spetner’s argument is strictly arithmetic, it appears it should apply to any evolutionary search algorithm.

When you demonstrate any search algorithm arising without intelligence please let us know.

31. 31
DLH says:

29 Joseph and 34 Andrea. (To Support 30 Patrick)

Spetner (p 102) selects 0.001 (0.1%) as the fraction of mutations having a selective advantage, citing a “frequent value” used by George Gaylord Simpson 1953 p 119 (an NDT architect and the “dean of evolutionists”).

Then Spetner states on p 101: “Fisher’s analysis shows that a mutant with a selective value of one percent has a two percent chance of survival in a large population. . . . If the selective value were a tenth of a percent, the chance of survival would be about 0.2%, or one in 500.” Citing Ronald A. Fisher (1958) The Genetical Theory of Natural Selection, Oxford.
On p 102 he summarizes: “For large populations, the chance of survival turns out to be about twice the selective value.” (In populations > 10,000.)

32. 32
DLH says:

33 Andrea:
“It would be good to have the actual derivation of the numbers from Spetner’s book.”
See: Lee M. Spetner, PhD, Not By Chance: Shattering the Modern Theory of Evolution. 1998 Judaica Press, ISBN 1-88-582-24-4, JudaicaPr@aol.com

Steps per speciation:
Spetner p 97

G. Ledyard Stebbins, one of the architects of the NDT, has estimated that to get to a new species would take about 500 steps [Stebbins 1966].

500 steps. P 97 (citing Stebbins 1966).

Acceptable probability of speciation:
Spetner p 100:

Richard Lewontin of Harvard University has estimated that for each species alive today there are about 1000 that went extinct [Lewontin 1978]. . . . Some species go for a long time without changing. . . . So let’s throw in another factor of a thousand for this effect.
. . . Thus we adopt the criterion that evolution can work if the chance of achieving a new species in 500 steps is at least one in a million.

Needed probability per step:
Spetner p 100

The chance of a single step has to be so large that when we multiply it by itself 500 times we get at least 1/1,000,000. The smallest number that will do that is close to 0.9727, which is a chance of about 36 out of 37.

Spetner calculates the chance of one mutation appearing and then taking over the population as 1/600 x 1/500 = 1/300,000.

Spetner p 103

You can now see what it will take to complete one successful step in a chain of 500. An adaptive mutation has to occur and it has to survive to take over the population. But the chance is small that a specific copying error will appear and survive. The chance that it will appear is 1/600. For a selective value of a tenth of a percent the chance that the mutation will survive, if it appears, is 1/500. The chance that the mutation will both appear and survive to take over the population is 1/600 x 1/500, or one in three hundred thousand (1/300,000). That’s less than the chance of flipping 18 coins and having them all come up heads.

See previous post for 1/500 or 0.2% as 2x the 0.1% selective advantage.

Basis for 1/600 for a mutation to appear:
Number of births per step:
Spetner p 122 ref 3

Number of births per evolutionary step. . . . George Gaylord Simpson . . .estimated that the whole of the horse evolution took about 65 million years. He estimated there were about 1.5 trillion births in the horse line. . . .The experts say the modern horse has evolved through some 10 to 15 genera. If we say the horse line, from Hyracotherium to the modern horse, went through about five species in each genus, then the horse line with its 1.5 trillion births went through about 60 species. . . .That would make about 25 billion births per species. If I divide 25 billion births per species by the 500 steps per species transition, I get 50 million births per step.

Mutation rate/nucleotide/birth in animals:
Spetner p 92

For organisms other than bacteria, the mutation rate is between 0.01 and 1 per billion [Grosse et al. 1984]. The geometric-mean* is one per billion (10^-9) in bacteria and one per ten billion (10^-10) in other organisms.

Spetner p 100

Note that I have taken the mutation rate at each step to be a change in a single nucleotide.[6] I don’t know if there is always, at each stage, a single nucleotide that can change to give the organism a positive selective value and to add information to it. No one really knows. But I have to assume it if I am to get on with this study of cumulative selection.
That’s a pretty strong assumption to make, and there’s no evidence for it. But if the assumption doesn’t hold, the NDT surely won’t work. Although we don’t know if it holds, let’s see if the NDT can work even with the assumption.

Net mutation rate of 1/600:
Spetner p 100

The chance of a mutation in a specific nucleotide in one birth is 10^-10, and there are 50 million births in an evolutionary step. The chance of getting at least one such mutation in the whole step is about 50,000,000 times 10^-10, or one in two hundred. There is an equal chance that the base will change to any one of the other three.* Then the chance of getting a specific change in a specific nucleotide is a third of that, or one in six hundred.

* The chances aren’t really equal, but assuming they are will give us a result that is close enough for our purposes.
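
Combining the mutation rate with the births per step gives the 1/600; a sketch of that arithmetic (the expected-count approximation for "at least one" is the one the text itself uses, valid because the result is much smaller than 1):

```python
mutation_rate = 1e-10    # per nucleotide per birth, non-bacterial geometric mean
births_per_step = 50e6   # from the horse-line estimate above

# Chance of at least one mutation at a specific nucleotide during the step,
# approximated by the expected count since it is << 1
p_any_change = births_per_step * mutation_rate   # 1/200
# Only one of the three possible base changes is the specific one wanted
p_specific = p_any_change / 3                    # 1/600
print(p_specific)
```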

Copying errors needed per step Spetner states on his page 104:

How many potential adaptive copying errors must there be to raise the chance of a successful step from 1/300,000 to 0.9727? A calculation shows that there must be about a million of them.7

(sic)
(Reference 5 on page 123 apparently provides this detail:)

5. There have to be a million potential adaptive mutations to make the theory work. Actually the number turns out to be about 1,080,000. We can check this by verifying that at least one out of 1,080,000 possibilities will occur with probability 0.9727. We found that the chance for a particular mutation to occur and take over the population in one step is one in 300,000, or a probability of 0.000,003,333. The chance that it will not occur is one minus this number, or 0.999,996,667. The chance that none of the 1,080,000 potential mutations will occur and take over is 0.999,996,667^1,080,000. This works out to be 0.0273. The chance that at least one of the potential adaptive mutations will occur and survive is one minus this number, or 0.9727.
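
The footnote's 0.9727 check can be reproduced directly; a sketch:

```python
p_step = 1 / 300_000     # one particular mutation occurs and takes over
n_potential = 1_080_000  # Spetner's count of potential adaptive mutations

# Chance that none of the potential adaptive mutations occurs and takes over
p_none = (1 - p_step) ** n_potential
print(round(p_none, 4))  # ~0.0273

# Chance that at least one occurs and survives
p_at_least_one = 1 - p_none
print(round(p_at_least_one, 4))  # ~0.9727
```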

33. 33
DLH says:

Schneider notes that the combined probability of independent events is the product of their probabilities, P(a) x P(b). He says that this does not apply to biology: http://www.lecb.ncifcrf.gov/~t.....error.html

It is inappropriate to multiply probabilities unless the two events are independent. One must account for all of the events (in other words, honor the dead). The functional amino acids in a protein are not obtained independently since many organisms die for the few that survive to reproduce. Each change to an amino acid occurs in the context of the current protein and therefore depends on the previous history of the protein. Although the amino acids may be functionally independent (allowing, for example, the computation of a sequence logo), the appearance of the selected amino acids is sequential during evolution and is, therefore, dependent on previous steps. It is invalid to directly apply the multiplication rule to computing the probability that proteins came into existence.

Schneider claims Spetner makes "the AND-multiplication error", citing Spetner p 130: "The chance of 500 of these steps succeeding is 1/300,000 multiplied by itself 500 times." etc.

Spetner responds: "The probability 1/300,000 is the probability that a particular mutation will occur in a population and will survive to take over that population." . . . "the probability of 500 of them occurring is the probability 1/300,000 multiplied by itself 500 times."
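
Whatever one makes of the independence dispute, the sheer magnitude of the 500-fold product is easy to compute; a sketch using logarithms (the direct product would underflow a floating-point number):

```python
import math

p_step = 1 / 300_000
steps = 500

# log10 of (1/300,000)^500 -- only the exponent, not the disputed
# independence assumption, is computed here
log10_total = steps * math.log10(p_step)
print(round(log10_total))  # about -2739, i.e. roughly 10^-2739
```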

From the previous post, it appears to me that Spetner has addressed both the population & probability issues necessary to make NDT work, taking major evolutionists' assumptions. Are there any errors in Spetner's overall argument of what would be needed for NDT to work via those assumptions taken from evolutionists?

1) Has Schneider anywhere addressed Spetner's overall argument and each of Spetner's assumptions?

2) What support/objection is there for Spetner's basis for independence between calculations if assuming a mutation takes over the population for each of the 500 steps per speciation?

3) Is Schneider correct in his AND-multiplication error critique of Spetner p130?

34. 34
Joseph says:

Joseph: "To follow Spetner's argument all you get to mutate is one bit, not a whole letter which is comprised of 8 bits."

Zachriel:
Genomes are base-4 mapped to base-64. Letters can be mapped in a similar fashion.

Did you have a point or do you just like to type?

Joseph: "If you want to say recombination is random then the onus is on you to demonstrate that."

Zachriel:
In our model, recombination *is* random.

Zachriel:
However, this is an important point. If Spetner argues that point-mutation is insufficient to account for biological diversity, then he is correct. Even simple recombination is not sufficient.

As far as I can tell no one knows if anything is sufficient.

Zachriel:
The question remains. How long would it take such an algorithm to evolve ten-letter words when such words represent only 1 in 14 billion of the possible sequences of ten-letters? How long would it take even if we use only point-mutation?

It all depends on the programmer, i.e. the parameters set and the efficiency of the algorithm. IOW it all depends on the design.

Joseph: "When you demonstrate any search algorithm arising without intelligence please let us know."

Zachriel:
The origin of such an algorithm is irrelevant to Spetner's claim which concerns already existing evolutionary algorithms.

The origins are very relevant. Dr Spetner is only arguing against unintelligent causes.

What existing “evolutionary algorithms” are you talking about?

35. 35
Andrea says:

DLH:
thanks for posting that – what a mess. Statistics and the post-hoc target issue are the least of it.

Not sure we want to take this apart here, since this thread is disappearing anyway, but if the site owners are willing to start a new thread, it could be fun.

36. 36
franky172 says:

The website provided above has been updated to include the generation of 9- and 10-letter words. In at least some cases it appears 10 letter words can be found in about 10^4.5 calculations…

37. 37
DaveScot says:

Get lost Zachriel. I gave you a second chance to mend your ways but you’re still running about on the net posting trash talk about our site here. I consider that duplicitous and don’t want your two-faced kind around here. Hasta la vista. I’ll be deleting your previous comments along with you. Call it taking out the trash.

38. 38
DLH says:

See Schneider’s: The AND-Multiplication Error at:
http://www.lecb.ncifcrf.gov/~t.....error.html
Section:

The multiplication rule does not apply to biological evolution.
. . .That is, if one starts with a given amino acid string, the mutations in the genome (from which the string is derived) are sequential. A mutation occurs, perhaps changing the amino acid string. If the change is bad, which is true for the majority of changes, the organism dies and its genes are gone. (In diploids, recessive defects will be removed more slowly since they are only exposed when an organism becomes homozygous for the mutation.) If a rare lucky change occurs that has some advantage (or at best is neutral or only slightly deleterious) then the organism may survive to produce offspring. The possibility of appearance and acceptance (by natural selection processes) of mutations in the offspring therefore depends strongly on whether the previous generation survived and on the number of progeny.

Schneider appears to be describing the equivalent of a “bang-bang” controller. i.e., if a mutation has any positive selectivity then select it, if any negative selectivity, then it dies. That makes for simple calculations, but it seems to me that Schneider throws the baby out with the bath water with that statement. Realistic modeling needs realistic selection factors AND a realistic ratio of beneficial to harmful mutations.

Spetner appears to have selected what evolutionists say is a realistic selection factor of 0.1%. However, I think Spetner is being overly generous in his calculations by ignoring the harmful mutations with small negative selectivity.
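
To make the contrast concrete, here is a toy sketch of the two selection treatments under discussion: the "bang-bang" rule described above versus the standard 2s fixation chance that appears later in this thread. The function names are illustrative only, not taken from Schneider's Ev:

```python
def bang_bang_fixes(s: float) -> bool:
    """Deterministic caricature: any positive selectivity fixes, any negative dies."""
    return s > 0

def fixation_probability(s: float) -> float:
    """Approximate chance a new beneficial mutation fixes in a large population: 2s."""
    return 2 * s if s > 0 else 0.0

# A mutation with a 0.1% selective value always fixes under bang-bang,
# but only ~1 time in 500 under the 2s approximation.
print(bang_bang_fixes(0.001))       # True
print(fixation_probability(0.001))  # 0.002, i.e. 1/500
```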

Sanford, Genetic Entropy (2005) p 24 notes: "The best estimates seem to be one million to one (Gerrish and Lenski 1998, Genetica 102/103:127-144)."

The Basic Problem – Princess and the Nucleotide Paradox
See Sanford, Genetic Entropy (2005) p 47. In realistic conditions, there are few positive mutations and numerous negative mutations (the ratio of positive to negative is very small). Then the negative mutations swamp the positive. Sanford highlights this:

The problem involves the enormous chasm that exists between genotypic change (a molecular mutation) and phenotypic selection (a whole organism's reproduction). . . . We start to see what a great leap of faith is required to believe that by selecting or rejecting a whole organism, Mother Nature can precisely control the fate of billions of individual misspellings within the assembly manual.

Schneider appears to ignore this effect on his page. This Princess and the Nucleotide Paradox alone I expect is "catastrophic" to Schneider's argument, his calculations, and his Ev program.

—————

DaveScot – Andrea has proposed starting a new thread to pursue these issues. Propose taking my last four posts, reformatting to start a new thread: Schneider vs Spetner & Sanford

PS Assume my quotes of Spetner come under fair copying as they are to justify his position. Please verify with him.

39. 39
Patrick says:

franky, you might find this interesting:

http://user.tninet.se/~ecf599g.....index.html

http://www.uncommondescent.com/archives/1224

For fun see if your program can hit upon pseudopseudohypoparathyroidism (30 letters) or aequeosalinocalcalinoceraceoaluminosocupreovitriolic (52 letters). I'm glad to see your program doesn't "sneak in too much information" considering the fitness function only checks for a 10-character string, although the target is very large considering you're looking for ANY 10-letter word. If we're just considering 8-bit single-byte coded graphic character sets I'd only find your results interesting if the generated word (or set of words) came close to 500 informational bits.

40. 40
franky172 says:

http://user.tninet.se/~ecf599g.....index.html

This is interesting, and it does a good job of showing that blind search is infeasible as an approach to generating complicated text.

For fun see if your program can hit upon pseudopseudohypoparathyroidism (30 letters) or aequeosalinocalcalinoceraceoaluminosocupreovitriolic (52 letters).

I can pretty much guarantee you that the odds of finding any one of these particular words is vanishingly small. Of course, the odds of finding any particular 10 letter word is also very small, but not nearly as small as for the other words you suggested.

I'm glad to see your program doesn't "sneak in too much information" considering the fitness function only checks for a 10-character string, although the target is very large considering you're looking for ANY 10-letter word.

Actually, the fitness function itself is defined as:
Fit(word) =
length(word) iff word is in dictionary
0 otherwise

I did implement a stopping criterion of “let me know when you hit 10 letter words” but that’s just so I could analyze the process of the GA up to that point. It doesn’t add anything to the GA itself.

Also, despite the fact that I'm looking for any 10-letter word, there are only on the order of 10-20k of them (depending on how you count), and there are 26^10 or 1.4*10^14 possible 10-letter combinations, so the odds of randomly hitting any valid 10-letter word is still very low (about 1.4*10^-10).
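
That estimate can be reproduced directly, together with a runnable version of the fitness function described above; the 20,000-word count and the tiny stand-in dictionary below are illustrative assumptions, not exact values:

```python
# Probability that a uniformly random 10-letter string is a dictionary word,
# taking ~20,000 such words (the upper estimate given above)
n_words = 20_000
n_strings = 26 ** 10        # 141,167,095,653,376 possible strings
p_hit = n_words / n_strings
print(f"{p_hit:.2e}")       # about 1.4e-10

# Runnable version of the fitness function sketched above,
# with a stand-in dictionary for illustration
DICTIONARY = {"cat", "horse", "weasel"}

def fitness(word: str) -> int:
    """Length of the word if it is in the dictionary, else 0."""
    return len(word) if word in DICTIONARY else 0

print(fitness("weasel"))    # 6
print(fitness("xqzzy"))     # 0
```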

If we're just considering 8-bit single-byte coded graphic character sets I'd only find your results interesting if the generated word (or set of words) came close to 500 informational bits.

I’m sorry you don’t find these results interesting :). Making the assumptions I’ve used, do you know about how many letters would be equivalent to 500 informational bits?

41. 41
PaV says:

Andrea:

But even if we assume (out of whatever calculation Spetner did) a rate of appearance of a specific mutation in a population of 1/600, that means that every few generations one of the 500 mutations he presupposes are necessary for the speciation would likely appear. And every new generation would be another roll, with another chance of another of the gene mutations to appear, and all of them would travel through the population in parallel, working toward fixation.

The problem here, Andrea, is that if you really believe that these mutations travel forward in parallel, that means that in about 500 generations a new species will appear. That's 500 years for most animals. Are you aware of a new species of cat, or dog, or horse, or... well, fill in the blank. I thought evolution takes place too slowly for us to see it in action. Descriptions of cats from the Egyptian dynasties are the same as for current-day species. And, please, don't appeal to this being attributable to artificial selection, because the selection pressure of artificial selection is much higher than that found in nature.

And, if it takes 500 years for a new "species" to come about, then why don't we see them in the fossil record?

42. 42
Patrick says:

I suppose I'll need to take a second look (did you put the actual code up anywhere?)

Also, despite the fact that I'm looking for any 10 letter word, there are only on the order of 10-20k of them (depending on how you count), and there are 26^10 or 1.4*10^14 possible 10-letter combinations, so the odds of picking any particular word of length 10 at random is still very low (about 1.4*10^-10).

That's actually why I don't find the results interesting. 1.4*10^-10 doesn't even approach the Universal Probability Bound of 1*10^-50 proposed by French mathematician Emile Borel.

As for calculating the informational bits, here is an example: "ME THINKS IT IS LIKE A WEASEL" is only 133 bits of information (when calculated as a whole sentence; the complexity of the individual items of the set is 16, 48, 16, 16, 32, 8, 48 plus 8 bits for each space). So aequeosalinocalcalinoceraceoaluminosocupreovitriolic would be 416 informational bits. Even though that's not 500 I'd still be surprised if that showed up with the way your GA is designed right now.
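
The per-word figures follow the simple 8-bits-per-character convention used in this comment; a sketch:

```python
def info_bits(text: str) -> int:
    """Informational bits under the 8-bits-per-character convention used above."""
    return 8 * len(text)

# The 52-letter word works out to 52 x 8 = 416 bits
print(info_bits("aequeosalinocalcalinoceraceoaluminosocupreovitriolic"))  # 416
print(info_bits("WEASEL"))  # 48
```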

43. 43
Andrea says:

“The problem here, Andrea, is that if you really believe that these mutations travel forward in parallel, that means that in about 500 generations, a new species will appear.”
You are confusing the chance of appearance of a mutation with its time of fixation. The time of fixation for a new favorable mutation (in generations) is (2/s)ln(2N) (assuming a large enough, freely breeding population). For a mutation with a lowish s, say 0.01, in a population of reasonable size (1,000,000), you are talking a couple thousand generations on average.

(This should also answer PaV’s previous comment about favorable mutations becoming fixed since Darwin’s times.)

44. 44
PaV says:

The chance of fixation of a selectively advantageous mutation is, regardless of the educational background of the proponent and impressiveness of their claims, 2s (2 x the selection coefficient). An only slightly advantageous mutation, with a selective coefficient of 0.01 (i.e. an increase in 1% in transmission – essentially experimentally undetectable in humans) has a chance of fixation of 2%.

I’m afraid this doesn’t obviate your problem. You’re now saying it will take 2,000 years to generate a new “species”. The Egyptians lived over 3,000 years ago, and the wild cats that lived then, are still the same today.

And, yes, you've finally gotten the right formula for time to fixation, but that's not what you said before:

The chance of fixation of a selectively advantageous mutation is, regardless of the educational background of the proponent and impressiveness of their claims, 2s (2 x the selection coefficient). An only slightly advantageous mutation, with a selective coefficient of 0.01 (i.e. an increase in 1% in transmission – essentially experimentally undetectable in humans) has a chance of fixation of 2%.

If you'll read the first post that Allen MacNeil wrote in the "We is Junk" thread, http://www.uncommondescent.com/archives/1777, you'll see that evolutionary biologists have pretty much given up population genetics as a way of explaining evolution.

45. 45
46. 46
Andrea says:

"I'm afraid this doesn't obviate your problem. You're now saying it will take 2,000 years to generate a new "species". The Egyptians lived over 3,000 years ago, and the wild cats that lived then, are still the same today.

And, yes, you've finally gotten the right formula for time to fixation, but that's not what you said before:

"The chance of fixation of a selectively advantageous mutation is, regardless of the educational background of the proponent and impressiveness of their claims, 2s (2 x the selection coefficient).""
PaV, seriously, man: the chance of fixation (=2s for a beneficial mutation in a large, free-breeding population) is different from time of fixation (=(2/s)ln(2N)) which is the number of generations that it takes, on average, for a mutation that reaches fixation to do so. Go back and read what I wrote.

To be explicit, just in case: a new mutation with a selection coefficient of 0.01 (small: in human terms it would mean an average of 1 more descendant over 50 generations) in a large population (say, 1,000,000 individuals) has a chance of fixation of 2%. That is, it will have to appear on average 50 times before one gets fixed. OK? Good. Now, when it gets fixed, the time it takes to reach fixation (i.e. to sweep the population) will be, on average, (2/0.01)ln(2,000,000)=~2,900 generations.
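
The two numbers in this comment follow from the formulas cited (chance of fixation 2s; mean time to fixation (2/s)ln(2N)); a sketch:

```python
import math

s = 0.01        # selection coefficient
N = 1_000_000   # population size

p_fix = 2 * s                      # chance a new beneficial mutation fixes: 2%
appearances = 1 / p_fix            # ~50 appearances, on average, before one fixes
t_fix = (2 / s) * math.log(2 * N)  # mean generations to sweep: ~2,900

print(p_fix, appearances, round(t_fix))
```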

Now, the evolution of cats was certainly driven by humans, and it certainly involved the fixation of many mutations affecting reproductive and behavioral features of the animal. Many genetic differences, for instance, are known to exist between domestic cats and the Ethiopian wild cat, which is thought to be their wild ancestor. That said, neither the Egyptians nor anyone else since was trying to “evolve a new species”. Of course, the selection coefficients during artificial selection are much stronger. If you work on a small enough population, you can reach fixation of certain alleles in two generations. I do it in my own lab to generate purely mutant mouse strains.

"If you'll read the first post that Allen MacNeil wrote in the "We is Junk" thread, http://www.uncommondescent.com/archives/1777, you'll see that evolutionary biologists have pretty much given up population genetics as a way of explaining evolution."
I am sure that will come as a surprise to Dr. MacNeil, and all biologists for that matter. I suggest you go read what you wrote again, paying attention to his words.

47. 47
franky172 says:

Hello,

(did you put the actual code up anywhere)?
The code for the original version is here:
http://www.duke.edu/~pat7/publ.....rdLength.m

All the code used in the second version is here:
http://www.duke.edu/~pat7/public/htm/source/

The vast majority of that code is for handling the more complicated dictionary searches. The only GA-related function is here:

http://www.duke.edu/~pat7/publ.....bination.m

enjoy!

48. 48