They withered under study.

There’s been a lively discussion between geneticists Dennis Venema and Richard Buggs about whether the human race must have had more than one pair of ancestors (Venema yes, Buggs no).

From Evolution News and Science Today:

Earlier, we saw that evolutionary genomicist Richard Buggs has been engaged in a dialogue with Venema about the latter’s arguments against a short bottleneck of two individuals in human history. Buggs is skeptical that methods of measuring human genetic diversity cited by Venema can adequately test such an “Adam and Eve” hypothesis. Buggs’s initial email to Venema thus concluded, “I would encourage you to step back a bit from the strong claims you are making that a two person bottleneck is disproven.”

Buggs agreed with Venema that one particular metric — human allelic diversity — might be capable of testing the issue. But he wanted to know more details about the population genetics models that Venema was relying on. In reply to Venema’s response to his initial email, Buggs asked Venema to provide a citation. He requested some backup for the repeated claims that human allelic diversity indicates we evolved from an ancestral population of about 10,000 individuals.

Ultimately Dr. Venema was unable to provide a scientific citation to substantiate his claim. To be fair to Venema, he says he believes that he has provided an adequate citation. And no doubt he sincerely does believe it. There is no accusation of bad faith here. But Buggs has clearly shown that Venema did not provide adequate backup. This means that Venema’s claims against Adam and Eve are scientifically suspect and intellectually unpersuasive. In fact, Buggs has shown that some of Venema’s citations don’t even address the question of the ancestral population size of humans. This gives the appearance of “citation bluffing,” however unwitting. More.

*See also:* Are Adam and Eve genetically possible? The latest: Richard Buggs (yes) replies to Dennis Venema (no)

Over the years I’ve noticed that if you want to maximize the speed with which mutations arise and become ‘fixed,’ the optimal population size for animal populations is right around 10,000. I suspect that is the true origin of this number: that is, Venema won’t find, and will be unable to give Buggs, any citation that documents this number. It’s just a convenient number that, repeated enough times, becomes ‘fact.’

Oh, how wonderful the scientific methods in the hands of some!

It seems like the reality is, rather than the data forcing anyone into the idea of a population of 10,000, the real fact is that 10,000 is the working model of a lot of individual biologists. That’s a *huge* difference for someone trying to make the argument that the science dictates that there were more than two individuals. Saying “science dictates X” is drastically different than “scientists generally prefer X as a working hypothesis”.

I have not read the book, but, based on what I’ve heard about it, the idea that science dictates that there were more than two people is pretty much the foundational starting point that the rest of the book works from. And it turns out that the starting point may not even be controversial, just wrong.

Oh, that’s interesting. Do you have a citation for it?

You cannot use standard evolutionary claims to test for Adam and Eve who would have been designed to evolve/adapt.

Source for Venema?: Recent human effective population size estimated from linkage disequilibrium, Albert Tenesa, Pau Navarro, […], and Peter M. Visscher

“Overall, the estimates of Ne appear to be much lower than the usually quoted value of 10,000 (Takahata 1993). Earlier studies using mtDNA data suggested an Ne in the range of 1000–6000 (Rogers and Harpending 1992; Harpending et al. 1993; Sherry et al. 1994), for a population ~200,000 yr ago (~10,000 generations ago). Erlich et al. (1996) estimated a recent population size of ~10,000 from HLA polymorphisms. Sherry et al. (1997) estimated an ancestral population size of ~17,800 during the last one to two million yr from Alu repeats evolution.”

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832099/

Takahata N. Allelic genealogy and human evolution. Mol. Biol. Evol. 1993;10:2–22. [PubMed]

Erlich H.A., Bergstrom T.F., Stoneking M., Gyllensten U. HLA sequence polymorphism and the origin of humans. Science. 1996;274:1552–1554. [PubMed]

Bob O’H:

You want a citation for something I’ve noticed on my own? Am I missing something? What are you looking for?

I just reread my post. Are you being sly? Are you saying that I have no citation, just as Venema has no citation; ergo, a ‘Mexican standoff’?

PaV – the result you claim is surprising so I would expect that you have something more formal than a comment in a blog post. I would expect a scientist making that claim to have either a model or data to back it up, so they would be able to point to a paper or manuscript reporting it.

Bob O’H:

My claim should not be surprising. The smaller the population, the fewer mutations arise; and the larger the population, the longer it takes (4Ne replications) for a neutral mutation to become fixed. There’s a trade-off. And, if you crunch enough numbers enough times, you zoom in on the figure of around 10,000 as the ideal population size.

Why would there be a citation? Why would evolutionists want it to come out that the numbers they throw around are just favorable estimates they like to use?

And, if you “expect a scientist making that claim to have either a model or data to back it up,” then why can’t Venema, when telling Christians that science tells us that the minimum population size for the human lineage is 10,000, give us a citation?

I think I’ve given the real reason why.

PaV @ 8 –

Yes, and the effect of population size on the probability of fixation cancels out: this is undergrad stuff.

Can you show me an example of the figures being crunched?

Because if you have something more solid than “I say it is like this”, it becomes much easier to accept your claims, and might also provide some more insight into what’s going on.
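The cancellation Bob O’H describes can be illustrated with a toy simulation. This is my own sketch, not anyone’s published model: it uses Wright-Fisher binomial resampling to estimate the probability that a single new neutral mutant copy (out of 2N gene copies) drifts to fixation, which theory says is 1/(2N). Since about 2Nμ new mutants arise per generation, the substitution rate comes out near 2Nμ × 1/(2N) = μ, independent of N.

```python
import random

def fixation_prob(N, reps, seed=1):
    """Estimate the chance that one new neutral mutant copy (out of 2N
    gene copies) drifts to fixation, via Wright-Fisher resampling."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        count = 1                       # one fresh mutant gene copy
        while 0 < count < 2 * N:
            p = count / (2 * N)         # current mutant frequency
            # next generation: binomial draw of 2N gene copies
            count = sum(rng.random() < p for _ in range(2 * N))
        if count == 2 * N:
            fixed += 1
    return fixed / reps

p_small = fixation_prob(10, 2000)       # theory: 1/(2*10) = 0.05
p_large = fixation_prob(50, 2000)       # theory: 1/(2*50) = 0.01
print(p_small, p_large)
```

Because each of the ~2Nμ mutants per generation fixes with probability ≈ 1/(2N), the long-run rate of neutral substitutions is ≈ μ for any population size, which is the cancellation in question.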

Bob O’H:

I’ve misplaced my copy of Kimura’s “Neutral Theory of Evolution.” He made a calculation for a population of elephants using standard selection theory, I believe, and I simply followed his calculation.

I don’t have time now to scour my room; but, if I get my hands on the book, I can review it and respond.

IIRC, population size did matter. It didn’t just simply cancel out.

So, once found, and assuming I remember things correctly, I’ll have a ‘citation.’ But, not really, since I’m ‘citing’ my own experience.

I haven’t been a “scientist” since I left graduate school in 1973. I quickly became an engineer; and then even moved on from there. (It’s hard to keep up with me. 🙂 Posting at UD is simply an avocational interest, one that’s waned substantially over the last five or six years. )

That’s Kimura’s argument for the neutral theory from genetic load. It can’t be the basis for the claim you made in 8, as Kimura assumes many mutations are not neutral.

As Bob says, undergrads learn that the fixation rate is independent of Ne for neutral variants. If you want to increase the rate of adaptive substitutions then larger populations are always better. (With very large populations you will run into ‘soft sweeps’ where alleles that are identical in terms of their genetic sequence actually arrive from independent mutations. You could argue these are not true substitutions since the alleles that fix are not identical by descent. But that would be some next-level quibbling and not really relevant to the biology).

Bob O’H:

I found the book. On my bookshelf. Was looking at it, but didn’t see it.

The calculation I’m referring to is on p. 101-102. I no longer remember what I used for the probability of fixation, but I’m guessing it was likely 1/2N, which, as you indicate, should just cancel out against the 2Nmu mutations/generation. I just don’t remember now where that trade-off came from at this point. I’ve done a Google search looking for a post where I may have carried out a calculation, and I don’t find any. I’ve looked at a number of my earlier posts (171 of them) and can’t pinpoint one. I just can’t waste more time looking around.

But, I hope making all those calculations back then wasn’t a big waste of time. I’m wondering if I left out the 2N factor when calculating the time needed for the mutation to occur. If that’s so, then my 10,000 number is of no relevance.

With that said, however, I have no idea, then, where this figure of 10,000 comes from that Venema insists upon. Where does it come from, exactly?

BTW, I went back to a 2007 post, and there you were!!

Bob O’H:

I’ve just remembered how I did the calculations, and how I arrived at my conclusion. Details to follow when I have time.

I am not entirely sure about PaV’s personal observation that if one wants to maximize the speed at which mutations arise and become fixed, then a population of 10,000 is around optimal. I say this on the basis of two things. First, a paper looking at the spread of mutations over 300 years in Quebec indicates this process happens faster in small populations at the frontier of the advancing wave front of civilization (Science 334, 1148 (2011); Claudia Moreau, et al.). Second, I am currently working on a program that simulates evolution, with several variables that can be played with.

It seems to me, from preliminary runs, that if you want to maximize the speed at which mutations arise and become fixed, then you want a lot of small (100), independently evolving populations, similar to what we probably had as human settlement spread as small tribal groups out across Asia, Europe, Africa and, eventually, the Americas. Essentially, a lot of small, independently evolving populations can take off in all sorts of directions, unimpeded by a large population mass. Eventually, when they coalesce into one large population, you have enormous variation well established in the overall population.

I have a lot more work, and a lot of runs to do, each requiring about 24 hours or more of run time for just one data point, so I can’t show this yet in the form of graphs, but I plan to, and plan to publish the results. I’m just speaking from preliminary runs as I fine-tune the program to more closely model reality. But I have run population sizes up to 100,000, and the larger the population and the fewer independently evolving populations, the slower the rate of accumulation of mutations if I use a reproduction rate of 10% (which can also be varied).

Bob O’H:

The size of the population becomes a factor when you compute the time for the fixation of a particular mutation; that is, a nucleotide change at a particular locus leading to an a.a. change. If evolution is to take place randomly, this is just a basic step.

Here’s my numbers, and assumptions

Arrival Time = (3 x 3 x 10^9)/(2Nv) ~ 10^10/(200N) = 5 x 10^7/N

Fixation Time = 1/(probability of fixation) = 1/(1/4N) = 4N (*)

v=mutation rate= 100/haploid chromosome/generation

(This is my best stab at a ‘table’)

Pop Size: N=100 // N=10,000 // N=100,000 // N=1,000,000

Arrival Time: 500,000 // 5,000 // 500 // 50

Fixation Time: 400 // 40,000 // 400,000 // 4,000,000

Total Time: 500,400 // 45,000 // 400,500 // 4,000,050

[*] 1/4Ne is Kimura’s number for the ‘average time for fixation of a mutation.’
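The arithmetic behind this quasi-table is easy to check. A short sketch of my own, using only the assumptions stated above (arrival time ≈ 5 x 10^7/N generations, fixation time 4N generations):

```python
# PaV's stated assumptions, as given above (not a textbook model):
def arrival_time(N):
    return 5e7 / N      # generations until the target mutation arises

def fixation_time(N):
    return 4 * N        # generations for that mutation to fix

def total_time(N):
    return arrival_time(N) + fixation_time(N)

for N in (100, 10_000, 100_000, 1_000_000):
    print(N, arrival_time(N), fixation_time(N), total_time(N))
```

Under these assumptions the totals come out to 500,400; 45,000; 400,500; and 4,000,050 generations, matching the table, with the smallest of the four totals at N = 10,000.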

Very clearly, the optimal population size is around 10,000.

When you’re not concerned about a particular location, then all mutations are the same, and you get a generic mutation flow rate through a population. When you ask about a particular location, then population size does come into play.

Really not sure what you are aiming at with this calculation. Why is the expected arrival time of a SNP dependent on the mutation rate per chromosome (or actually per genome, if you’re talking about human mutation rates)?

cornu:

“haploid genome” instead of “haploid chromosome.”

We are still left with the question of why this number is part of the calculation.

KD:

Happy you’ve posted.

If a small population of 100 generates mutations slowly, then a grouping of such populations would generate the needed mutation sooner.

So, if you had 20 such populations–and we’re assuming they’re genetically very similar–then the ‘500,000’ generations needed, taken from the above quasi-table, would be divided by 20. IOW, it would only take 25,000 generations. Once fixed in just one of those populations, the mutation would, through interbreeding with the other populations, quickly become fixed in them as well.

So, I would think your simulation is in keeping with my numbers.
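The 20-deme arithmetic above is a one-liner. A sketch assuming, as PaV does, that the demes accumulate mutations independently (so their arrival rates simply add):

```python
# From the quasi-table: one population of N=100 waits ~500,000
# generations for the target mutation to first arrive.
single_deme_wait = 5e7 / 100     # = 500,000 generations
demes = 20                       # 20 independent populations of 100
combined_wait = single_deme_wait / demes
print(combined_wait)             # 25000.0 generations
```

Note that under this arrival-time formula, 20 demes of 100 give the same expected waiting time as one population of 2,000: only the total number of genomes matters for when the mutation first appears.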

cornu:

It is a number that is often used for the number of mutations arising per duplication in eukaryotes.

But it’s not the SNP mutation rate, which is between 1E-8 and 1E-12 in euks. I’m trying to understand where these numbers have come from and what you think you are calculating.

Very little of it makes sense. Even if we could believe the results table, you seem to ignore the fact that the target allele arises 80,000 more times while you are waiting for the first copy of it to fix.

cornu:

Your numbers are your numbers. Kimura used a mutation rate of 10^-6. I’m using 10^-7, roughly. That is: 10^-9 x 100.

And, as to the 80,000 more times, if these didn’t arise, then the mutation would never fix. Simple as that.

You’re not really understanding.

The particular number is irrelevant to the conclusion you are trying to reach, so it doesn’t really matter that you are using a bad one. But the fact that you think the per-genome mutation rate is relevant to your per-site calculation should be an indication that something is wrong…

And FWIW, the minimum value for the function you have is found at 3536, quite a lot less than 10,000.
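The 3536 figure follows from elementary calculus applied to the function implicit in the quasi-table above. A sketch (taking t(N) = 5 x 10^7/N + 4N from the table’s assumptions):

```python
import math

# Minimising t(N) = 5e7/N + 4N:
#   dt/dN = -5e7/N**2 + 4 = 0   =>   N = sqrt(5e7 / 4)
N_opt = math.sqrt(5e7 / 4)
print(N_opt)   # ~3535.5, i.e. about 3,536 rather than 10,000
```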

I don’t think you understand. I’m talking about independent mutations creating the same target allele, not identical-by-descent copies of the first mutant.

cornu:

Are you overlooking the fact that it is very likely that one particular mutation in a large population is much more likely to be lost than to be passed on?

And where do you get the 80,000 figure. Could you show your math?

PaV,

I’m not sure you understand how fixation works under neutrality (or that Kimura’s calculations use an infinite alleles model, so every mutation is ‘new’ in his calculation, but that wouldn’t be the case in your targeted mutation).

What I’m really trying to say in these comments is that your calculation is very unclear. Because the steps are opaque, it’s hard to see what you have actually calculated (it’s certainly not the time to fixation of a particular allele under neutrality).

80,000 is taken from your extreme case: 4 million generations of an allele drifting toward fixation, with new copies of the same SNP (by state) arising every 50 gens.

cornu:

How realistic is the “infinite allele” model? My calculation is very realistic and simple.

Look at Kimura’s calculation on p. 83 of *Neutral Theory of Molecular Evolution*. My calculation is similar to his, as is my thinking. He makes this calculation based on the fact that there is a new form of the alpha chain of hemoglobin that arises every 7 million years. Realistic stuff.

Do you want to use 1/2N, or 1/4N, or what, for the probability of fixation? Is that what we’re quibbling about?

cornu: @11

I’m afraid it’s not. Are you thinking of Haldane?

It comes from a section titled “Constancy of Molecular Evolutionary Rates”.

But he wasn’t dealing with a neutral mutation in the section where his calculation appears.

This doesn’t tell us much, does it? It tells us ‘neutral’ mutations occur quite often throughout the population: that is, “somewhere”. IOW, we don’t know where they are. And, of course, they’re neutral.

Can’t a neutral mutation change back to another neutral mutation? Then you have a flow rate, mu, going in both directions. What is that? An equilibrium, kind of like the Hardy-Weinberg Equilibrium.

[Navel-gazing warning: I have no intention of going round and round on these points. I have better things to do with my time.]

Indeed. But if you want to decrease the number of generations needed for the ‘adaptive substitution’ to fix in the population, then a smaller population is better. This is the trade-off I was talking about, and which shows up in my ‘quasi-table.’

It’s very hard to talk to you about this, because you seem to lack a grounding in this topic. Rather than relitigate the errors above, perhaps it is easier to show the correct calculation?

You start by asking for “the time for fixation of a particular mutation”. Implicitly, you seem to assume the population is monomorphic for a non-target allele at the beginning of the process (you never calculate starting heterozygosity).

Let’s start by thinking about fixation. The probability that a neutral allele is fixed is equal to its current frequency in the population. For a mutant allele in a diploid population that is 1/2Ne (Ne being the effective population size, and two of that because there are two gene copies in each individual). If each mutant has only a 1/2Ne chance of fixing, then we expect the target mutation to be created 1/(1/2Ne) = 2Ne times before it goes to fixation.

How long will it take to see 2Ne mutations? Well, we can expect 2Ne*v mutations (where v is the mutation rate toward the target allele) to occur in each generation. That will usually be less than one, so let’s say 1/(2Ne*v) generations per mutation.

1/(2Ne*v) generations/mutation * 2Ne mutations = 1/v generations

So, it will take an average of 1/v generations for the version of the mutation that is destined for fixation to arrive, and 4Ne generations for the fixation to happen. Assuming we start from a monomorphic population that gives

t_fix = 1/v + 4Ne

It should be obvious that this is lowest when the population size is low. If we drop the assumption that the starting population is monomorphic then we expect 4Nev copies of the target allele to exist at the onset, and the waiting time doesn’t depend on population size at all.

Hope that’s clear enough.
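Plugging numbers into this waiting time makes the point concrete. A sketch assuming v = 10^-8 per site per generation (the low end of the SNP-rate range quoted elsewhere in this thread):

```python
v = 1e-8   # assumed per-site mutation rate toward the target allele

def t_fix(Ne):
    """Expected generations to get and then fix the target neutral
    mutation, starting from a monomorphic population: 1/v + 4*Ne."""
    return 1 / v + 4 * Ne

for Ne in (100, 10_000, 1_000_000):
    print(Ne, t_fix(Ne))
```

The 1/v term is 10^8 generations and swamps 4Ne unless Ne is enormous, so the waiting time is nearly flat in Ne, and monotone increasing: there is no interior optimum at 10,000 or anywhere else.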

First demonstrate that what have been coined “mutations” are what they are claimed to be: genetic errors. We do know that the major part is recent, however. But we simply do not know what the genomes of Adam and Eve looked like. How much variability (interpreted as “mutations”) was already present in these genomes? As with all historical sciences, we will never know. What we do know is that most assumptions underlying current evolutionary thinking are wrong, scientifically untenable.

cornu:

That is an untestable assumption. No one has ever validated it.

I’ve heard a lot of creationists objections to evolutionary biology, but this might be the strangest. Good luck with it I guess…

cornu:

Agreed.

This is essentially the equation I’ve used. Where I quibble with this is over the understanding of v being independent of population size. As your equation is written, 1/v stands alone.

This is true if we don’t concern ourselves with any particular location along the length of the genome; however, if we ask the question: how long will it take to get a desired mutation at a particular location along the length of the genome, then everything changes.

You’ve stated:

.

But I’m interested in something else. I’m interested in a very particular mutation occurring, and then becoming fixed. 4Ne is the number of generations that it will take, on average, for any mutation to become fixed. So, there’s no quibble there.

However, for the needed (i.e., very particular) mutation to arise, then the equation to be used is:

[3 (mutations/correct nucleotide) x 3 x 10^9 nucleotide/genome]

Devastating refutation!

cornu:

I’ve just lost a long reply. I’m not going to go into detail again.

It takes 4Ne generations to ‘fix’ any given mutation. We’re agreed on that.

However, your equation deals with “mutations destined for fixation.” That can happen anywhere along the entire length of the genome. Hence the cancellation of the number of genomes, 2N.

But, if we’re looking for a very particular mutation to arise, then the equation is different. 1/v won’t do. Instead, for any one of the 2N genomes, the number of mutations is v, and so any of the 2N genomes can be the source of the new, particular mutation; and, for EACH genome, per generation, they get v mutations.

Now the odds of a particular mutation (A, T, C, or G) happening at a particular location is (1/3) x 1/(3 x 10^9) = approx. 10^-10.

So, any of the 2N genomes has v number of attempts to overcome these odds each generation. Hence, 1/(2Nv x 10^-10) per generation. This is far different than 1/v, and is what I used for the “arrival” time in the above post.

It’s a different look at things. It’s not textbook.

OK, but note this is very different than your calculation. There you seem to assume the first copy of the mutation will go to fixation, so you don’t need to see (on average) 2Ne copies of it.

This would be the probability of a given mutation having occurred at a pre-specified site (multiplied by three for some reason). But that’s not what you are trying to calculate; you are “interested in a very particular mutation occurring, and then becoming fixed”. If we are interested in a very particular mutation then we need only consider the site at which this very particular mutation could occur. Mutations will occur there with some rate per individual (v), which does not depend on the effective population size (how could it?). That being the case, you can put whatever number you want in ‘v’ and go through my post in 28 and work out where you went wrong.

In general, when you start concluding that the expected outcome of one specific run of a stochastic process will behave differently than the long-term average, you should start to get suspicious.

It’s also not correct.

Here’s the simplest way to understand your error.

In my calculation I used v as the mutation rate *per site* per generation. You are using v to mean the mutation rate *per genome* per generation. Let’s call yours v_genome and mine v_site.

It should be obvious that if we call genome size ‘g’, then v_site = v_genome/g (that is, the mutation rate per site is simply the mutation rate per genome divided by the total number of sites in the genome).

In your expression “1/(2Nv x 10^-10)” the ‘v’ is v_genome and 10^-10 is ~ 1/g (‘~’ because you make it more complex by picking the nucleotide, but we can find the general result by calling ‘v’ the mutation rate to the correct nucleotide). Since v_genome = v_site * g, we can say

1/(2Nv_genome x 1/g) = 1/(2N v_site*g * 1/g) = 1/(2Nv_site)

So, we are back to 1/(2Nv_site) * 2N = 1/v_site generations for the lucky mutation to arise.
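The algebra is easy to verify numerically. A sketch with illustrative values (g = 3 x 10^9 sites and v_site = 10^-8 are assumptions consistent with figures used in the thread):

```python
g = 3e9                    # genome size in sites (assumed)
v_site = 1e-8              # per-site mutation rate (assumed)
v_genome = v_site * g      # per-genome rate: locked to v_site by g
N = 10_000

lhs = 1 / (2 * N * v_genome * (1 / g))   # the per-genome form, rewritten
rhs = 1 / (2 * N * v_site)               # the per-site form
print(lhs, rhs)            # same quantity either way
```

Whatever values are chosen, the g in v_genome and the 1/g target factor cancel, so the two forms cannot disagree.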

cornu:

It was long ago that I found myself doing these calculations, but, IIRC, I was, again, not selecting a “textbook” example, but one, rather, that would give evolutionary theory the best shot at explaining things. So, IIRC, the simplest, most probable event in favor of evolution was to have the mutation arise, and then go to fixation immediately. To me, this would involve the least amount of time, hence giving evolution its best opportunity.

I ‘divide’ by 3 since the mutation must involve a ‘change’ in the nucleotide, of course, and there are 3 to choose from. It has to be a ‘particular’ mutation.

Yes, my ‘v’ is the actual number of mutations that occur through duplication, and is for the entire genome, and, is, as you say, v_site x ‘g’, the genome size.

These equations don’t properly reflect my calculation. To find the precise location we’re interested in, ‘g’ mutations are needed. Each genome receives v_genome mutations per generation. There are 2N genomes. The total number of mutations for the entire population, then, is 2Nv_genome per generation. The total number of generations needed, on average, to arrive at this particular mutation somewhere in the population of 2N genomes is:

‘g'[total # of mutations needed]/2Nv_genome [mutations/generation].

(Or, in your terminology, 1/(2Nv_site))

I wrote this:

The 2N is already factored into my equation, and so the additional 2N you show is not called for.

So, the total time is then:

t = 1/(2Nv_site) + 4N = [1 + 8(N^2)v]/(2Nv)

Here, v = v_site, or around 10^-8. If N is large, then the r.h.s. reduces to approximately 4N.

If N is small, say N=100, then t = [1 + (8 x 10^4)(10^-8)]/(200 x 10^-8) = 1.0008/(2 x 10^-6), which is approximately equal to 5 x 10^5, which is very much larger than 4N = 400.

t = 1/v + 4N here would equal 10^8 + 400, which is much larger than 5 x 10^5.

I think you would come off a lot better if you could simply admit your mistakes. Hopefully, you can take a bit of time, read these posts and see where you went wrong.

cornu:

Please point out my error in the following:

I have a genome that has L number of “sites.” What are the odds of a mutation occurring at a particular site, let’s call it S?

The answer has to be 1/L. The mutations are random.

How many mutations will occur along the length of this genome?

Usually around 100 per duplication.

So, for each duplication (generation), there are 100 opportunities for the mutation to occur at S.

The odds of a mutation occurring at the particular site, S, is 100/L. (This is comparable to having 100 opportunities to pull out of a sack that contains L numbered balls, the one which has the sought out [particular] number on it.)

In the next generation, there will be another 100 opportunities for the mutation to occur at S. (Another opportunity to select 100 balls out of a sack full of L numbered balls and pull out the correct number). And so forth for each generation.

[Minor digression: What are the odds that the mutation that occurs at S is the right nucleotide to bring about the needed amino acid change?

Roughly, one in three.

(This is like pulling out of a sack full of 3L numbered balls the one with the right number and the right color, each numbered ball coming in three different colors. This is true for each 100 selected balls, i.e., each “generation.”)]

Back to the main analysis:

Now there are 2N genomes (I.e., 2N persons pulling out 100 balls from each of 2N sacks of L numbered/colored balls).

The odds of getting the right mutation at the same site, S, in each of these genomes is the same and occurs each ‘generation.’

So, for each ‘generation,’ there are 2N*100/L (or substitute v for 100/L) chances for the mutation to occur at site S. (i.e., 2Nv opportunities/generation)

How many times do all 2N genomes need to duplicate for the odds of a mutation occurring at S in any of the 2N genomes to equal 1?

Let T be the number of generations needed for the likelihood of the mutation occurring at S in any of the 2N genomes to become equal to 1.

(How many times do all 2N persons pulling out balls in 2N sacks of numbered/colored balls have to go through this procedure before one of the 2N persons finally gets the [right colored, and] right numbered ball? If each time all 2N persons do this there are 2N*100/L (=2Nv) opportunities, then let T equal the number of times this has to be repeated. T is determined by the equation: T * 2Nv = 1. Thus, T = 1/2Nv [This assumes that N << L, making 2N*100/L very small])

Since L is normally so much larger than N (that is, we haven’t arrived at 1 yet, 1 signifying the likelihood the event has happened), then we solve by writing this equation:

T * 2Nv = 1, or T = 1/2Nv.

The time for fixation for this mutation is 2N generations.

Total time to fixation: 1/2Nv + 2N (v=100/L).

You’ve taken a very round-about way to say what I did in 28. If the mutation rate per site is v, then we expect to wait 1/(2Ne*v) generations to see the first mutation.

Your mistake is to think this mutation will go to fixation. You might listen to yourself in 24: “Are you overlooking the fact that it is very likely that one particular mutation in a large population is much more likely to be lost than to be passed on?” I showed in 28 the expected time to fix a specific mutation, and how it is not very dependent on Ne.

Rather than admit this mistake, you now want us to believe you were trying to calculate the time it would take for a mutation to go to fixation if the first mutant always fixed. Perhaps you were really trying to do this, but it’s an utterly meaningless thing to calculate. If we can arbitrarily decide the first mutation goes to fixation, then why can’t we also arbitrarily decide it does so in much fewer than 4N generations (that is, remember, just the expected time to fixation, conditioned on the fact that the allele actually fixes)? And then, why only the first mutation? If all mutations that arise are destined to fix then we have to think about the distribution of times to fixation and not just the expected time…

It should be no surprise that if you remove the dependency on Ne of the probability that a mutation will fix, but not of the time to fixation, then the two no longer cancel each other out.

But the idea that population geneticists have conducted a conspiracy to keep the standard value for human Ne at 10,000 in order to maximise the meaningless value of your calculation is a bit of a joke. More so because the value is actually optimised around 3,500.

There is also the problem that if you just decide every mutation that arises goes to fixation, then the non-dependence on effective population size would also vanish for the whole genome (and not only the special case of one pre-determined change, which you claim).

It’s quite clear from the way that you have written about this that you don’t have a grounding in pop gen. That’s OK, but you would have an easier time understanding calculations and the principles underlying genetic models if you didn’t try and start in the middle.

“Your mistake is to think this mutation will go to fixation.”

Indeed, the other mistake is to simply take fixed differences as fixed genetic errors.

The whole term “fixation” is wrong, since it implies and assumes we are dealing with selectable or drifted errors.

The whole Darwinian framework is nonsense, based on ignorance of genomes, genetics and mutations.

cornu:

The error is this: you’re looking at the expectation value for a neutral mutation to arise and become fixed at a site. Then the chance of a mutation arising is 2Nv, and the chance of this mutation arising at this site somewhere in the 2N genomes is 1/2N, which, when multiplied, gives v as the expected value.

Again, I’ve looked at this in a different way. The expected value of a mutation arising is 2Nv, as you define v. Then, the most optimistic, fastest way to arrive at fixation is for this first mutation that arises at the desired site to go immediately to fixation, which is, again, 1/2N.

Instead of multiplying these two, which renders a rate of fixation of v per generation, I’ve added the two events. So, I’m treating them as two completely different events.

And the reason I’m doing it (actually ‘was’ doing it) was, again, to give evolution the fastest way to arrive at a needed fixed mutation (neutral in the sense that it will only be needed in the future, likely in conjunction with some other mutational event).

This is simply a case of ‘apples and oranges.’ I was not interested in average, or expected, values; rather, I was interested in evolution’s best chance at overcoming enormous probabilistic barriers.

As to the ‘best’ chance: in the example I used, v is basically about 10^-8 per generation, using standard terminology. [You stated, “the SNP mutation rate . . . is between 1E-8 and 1E-12 in euks.”]

Thus, if the population size is 100, or 1 million, it will take, on average, 100 million generations to arrive at the fixed mutation.

Now look at my table. For a population size of 10,000, the total time is 45,000 generations. Isn’t this giving evolution an extra hand?

Do you see the point I’m making? This is NOT a textbook example. It is/was not meant to be.

OTOH, my error was in thinking that this is where the 10,000 figure came from regarding the progenitor human population figure that Venema gave. He bases this figure, he says, on bottleneck effects (which have their own set of problems).

However, as another poster, KD pointed out:

And, as you pointed out:

So, there is a maximal population size.

The “enormous probabilistic barriers” to fixing a particular mutation that is completely interchangeable with the one that is already fixed? And you calculate this by ignoring the effects of population size on the probability of fixing a mutation, but not ignoring this effect on time to fixation? Perhaps you have really convinced yourself this is what you set out to calculate (though it’s not in keeping with your other comments in this thread), but why would you calculate such a meaningless thing? Let alone think it was important for understanding evolutionary change?

And do you still think this behavior is different if you talk about one allele, as compared to the whole genome?

FWIW, the ~10,000 mostly comes from the fact that the expected heterozygosity of a population is 4Ne·v; humans have an average heterozygosity per base of about 0.0004. You don’t need a bottleneck to get this number (though, of course, it can let us learn about historical bottlenecks). There are many other ways to estimate historical Ne, and all of them end up pretty close to 10,000.
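The heterozygosity arithmetic behind that ~10,000 is a one-liner. A sketch assuming v = 10^-8 per base per generation (the SNP-rate figure quoted earlier in the thread) and the H value given here:

```python
H = 0.0004      # average human heterozygosity per base (from the comment)
v = 1e-8        # assumed per-base mutation rate
# At mutation-drift equilibrium, H ~ 4*Ne*v, so:
Ne = H / (4 * v)
print(Ne)       # ~10,000
```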

cornu:

Because evolutionary explanations fly in the face of improbabilities, and so I was finding ways of reducing them. That is, none of the explanations I’ve read about, or heard about, can reasonably reduce the utterly unimaginable improbabilities that have to be overcome when positing random mechanisms.

Think of the Wistar Conference in the ’60s. Neither ID nor Creationism existed at the time. Mathematicians said that Darwinian mechanisms weren’t sufficient to match the complexity found in the cell.

You mention heterozygosity, and, of course, the very high degree of protein heterozygosity discovered in the ’60s is what led Kimura to propose his Neutral Theory, which was once denounced but is now mainstream.

You realise your calculation assumes the pre-specified mutation is neutral, right? It has no fitness consequence at all. So, when you say “the utterly unimaginable improbabilities”, you are talking about the improbability of a specific molecular change that has no effect on the population. From a fitness standpoint, the population is in exactly the same position whether the ‘improbabilities’ pay off or not.

Even then, you’ve yet to explain why it is useful and/or reasonable to ignore the fact that population size affects the probability a neutral mutation will fix, but not ignore the fact that it also influences the expected time to fixation. Or why losing the dependence on population size of the probability of fixation has the same effect whether you talk about a whole-genome rate or a per-site rate.