Uncommon Descent Serving The Intelligent Design Community

Adam and Eve: Some of those just-a-myth citations turned out to be fig leaves


They withered under study.

There’s been a lively discussion between geneticists Dennis Venema and Richard Buggs about whether the human race must have had more than one pair of ancestors (Venema: yes; Buggs: no).

From Evolution News and Science Today:

Earlier, we saw that evolutionary genomicist Richard Buggs has been engaged in a dialogue with Venema about the latter’s arguments against a short bottleneck of two individuals in human history. Buggs is skeptical that methods of measuring human genetic diversity cited by Venema can adequately test such an “Adam and Eve” hypothesis. Buggs’s initial email to Venema thus concluded, “I would encourage you to step back a bit from the strong claims you are making that a two person bottleneck is disproven.”

Buggs agreed with Venema that one particular metric — human allelic diversity — might be capable of testing the issue. But he wanted to know more details about the population genetics models that Venema was relying on. In reply to Venema’s response to his initial email, Buggs asked Venema to provide a citation. He requested some backup for the repeated claims that human allelic diversity indicates we evolved from an ancestral population of about 10,000 individuals.

Ultimately Dr. Venema was unable to provide a scientific citation to substantiate his claim. To be fair to Venema, he says he believes that he has provided an adequate citation. And no doubt he sincerely does believe it. There is no accusation of bad faith here. But Buggs has clearly shown that Venema did not provide adequate backup. This means that Venema’s claims against Adam and Eve are scientifically suspect and intellectually unpersuasive. In fact, Buggs has shown that some of Venema’s citations don’t even address the question of the ancestral population size of humans. This gives the appearance of “citation bluffing,” however unwitting. More.

See also: Are Adam and Eve genetically possible? The latest: Richard Buggs (yes) replies to Dennis Venema (no)

Comments
You realise your calculation assumes the pre-specified mutation is neutral, right? It has no fitness consequence at all. So, when you say "the utterly unimaginable improbabilities", you are talking about the improbability of a specific molecular change that has no effect on the population. From a fitness standpoint, the population is in exactly the same position whether the 'improbabilities' pay off or not. Even then, you've yet to explain why it is useful and/or reasonable to ignore the fact that population size affects the probability a neutral mutation will fix, but not ignore the fact that it also influences the expected time to fixation. Or the fact that dropping the dependence of the probability of fixation on population size has the same effect whether you talk about a whole-genome rate or a per-site rate.
cornu
February 17, 2018 at 03:52 PM PDT
cornu:
. . . but why would you calculate such a meaningless thing? Let alone think it was important for understanding evolutionary change?
Because evolutionary explanations fly in the face of improbabilities, and so I was finding ways of reducing them. That is, none of the explanations I've read about, or heard about, can reasonably reduce the utterly unimaginable improbabilities that have to be overcome when positing random mechanisms. Think of the Wistar Conference in the 60's. Neither ID nor Creationism existed at the time. Mathematicians said that Darwinian mechanisms weren't sufficient to match the complexity found in the cell. You mention heterozygosity, and, of course, the very high degree of protein heterozygosity discovered in the 60's is what led Kimura to propose his Neutral Theory, which was once denounced but is now mainstream.
PaV
February 17, 2018 at 03:22 PM PDT
This is simply a case of ‘apples and oranges.’ I was not interested in average, or expected, values; rather, I was interested in evolution’s best chance at overcoming enormous probabilistic barriers.
The "enormous probabilistic barriers" to fixing a particular mutation that is completely interchangeable with the one that is already fixed? And you calculate this by ignoring the effects of population size on the probability of fixing a mutation, but not ignoring this effect on time to fixation? Perhaps you have really convinced yourself this is what you set out to correct (though it's not in keeping with your other comments in this thread), but why would you calculate such a meaningless thing? Let alone think it was important for understanding evolutionary change? And do you still think this behavior is different if you talk about one allele, as compared to the whole genome? FWIW, the ~10,000 mostly comes from the fact that the expected heterozygosity of a population is 4Ne·v, and humans have an average heterozygosity per base of about 0.0004. You don't need a bottleneck to get this number (though, of course, it can let us learn about historical bottlenecks). There are many other ways to estimate historical Ne, and all of them end up pretty close to 10,000.
cornu
February 16, 2018 at 04:14 PM PDT
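The 4Ne·v relation cornu cites above can be sketched in a couple of lines of arithmetic. This is an illustrative sketch, not cornu's own working; the heterozygosity (0.0004) and per-site mutation rate (1e-8) are the round numbers used in this thread.

```python
# Sketch of the Ne estimate described above (illustrative values from the thread).
H = 0.0004  # average human heterozygosity per base, as quoted in the comment
v = 1e-8    # assumed neutral mutation rate per site per generation

# Under neutrality the expected heterozygosity is roughly 4*Ne*v,
# so an effective-population-size estimate falls out directly:
Ne = H / (4 * v)
print(Ne)  # -> 10000.0
```

With these round numbers the estimate lands exactly on the ~10,000 figure under dispute, with no bottleneck assumption anywhere in the arithmetic.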
cornu: The error is this: you're looking at the expectation value for a neutral mutation to arise and become fixed at a site. Then the chance of a mutation arising is 2Nv, and the chance of that mutation going to fixation is 1/2N, which, when multiplied, gives v as the expected value.

Again, I've looked at this in a different way. The expected value of a mutation arising is 2Nv, as you define v. Then the most optimistic, the fastest way to arrive at fixation, is for this first mutation that arises at the desired site to go immediately to fixation, which is, again, 1/2N. Instead of multiplying these two, which renders a rate of fixation of v per generation, I've added the two events. So I'm treating them as two completely different events. And the reason I'm doing--actually 'was' doing--it was, again, to give evolution the fastest way to arrive at a needed fixed mutation (neutral in the sense that it will only be needed in the future, likely in conjunction with some other mutational event).

This is simply a case of 'apples and oranges.' I was not interested in average, or expected, values; rather, I was interested in evolution's best chance at overcoming enormous probabilistic barriers.

As to the 'best' chance: In the example I used, v is basically about 10^-8 per generation using standard terminology. [You stated, "the SNP mutation rate . . . is between 1E-8 and 1E-12 in euks."] Thus, whether the population size is 100 or 1 million, it will take, on average, 100 million generations to arrive at the fixed mutation. Now look at my table. For a population size of 10,000, the fixation time is 45,000 generations. Isn't this giving evolution an extra hand? Do you see the point I'm making? This is NOT a textbook example. It is/was not meant to be.

OTOH, my error was in thinking that this is where the 10,000 figure came from regarding the progenitor human population figure that Venema gave. He bases this figure, he says, on bottleneck effects (which have their own set of problems). However, as another poster, KD, pointed out:
I have a lot more work, and a lot of runs to do, each requiring about 24 hours or more of run time for just one data point, so I can’t show this yet in the form of graphs, but I plan to, and plan to publish the results. I’m just speaking from preliminary runs as I fine-tune the program to more closely model reality. But I have run population sizes up to 100,000, and the larger the population and the fewer independently evolving populations, the slower the rate of accumulation of mutations if I use a reproduction rate of 10% (which can also be varied).
And, as you pointed out:
More so because the value [of the maximal population size] is actually maximised around 3 500.
So, there is a maximal population size.
PaV
February 16, 2018 at 10:01 AM PDT
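The quantity PaV's "quasi-table" tracks, a waiting time of 1/(2Nv) plus an assumed 4N generations of drift, and the ~3,500 figure cornu mentions, can both be reproduced numerically. This is my own sketch of the thread's arithmetic, with v = 10^-8 as assumed above, not either commenter's actual code.

```python
import math

# PaV-style total: waiting time for the target mutation to arise, plus 4N
# generations to drift to fixation (the thread's framing, not a textbook quantity).
v = 1e-8  # per-site mutation rate per generation (value assumed in the thread)

def t_total(N):
    return 1 / (2 * N * v) + 4 * N

# Setting the derivative to zero gives the N at which t_total is smallest:
# d/dN [1/(2Nv) + 4N] = -1/(2*N**2*v) + 4 = 0  =>  N = 1/sqrt(8v)
N_star = 1 / math.sqrt(8 * v)
print(round(N_star))           # -> 3536, the figure cornu cites
print(round(t_total(10_000)))  # -> 45000 generations, PaV's table entry for N = 10,000
```

The extremum at N ≈ 3,536 and the 45,000-generation entry for N = 10,000 both drop straight out of this one formula, which suggests it is indeed the expression behind the thread's numbers.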
The whole term Fixation is wrong, since it implies and assumes we are dealing with selectable or drifted errors. The whole Darwinian framework is nonsense, based on ignorance of genomes, genetics and mutations.
Peer
February 16, 2018 at 03:16 AM PDT
"Your mistake is to think this mutation will go to fixation." Indeed, the other mistake is to simply take fixed differences as fixed genetic errors.
Peer
February 16, 2018 at 12:46 AM PDT
You've taken a very round-about way to say what I did in 28. If the mutation rate per site is v, then we expect to wait 1/2Nev generations to see the first mutation. Your mistake is to think this mutation will go to fixation. You might listen to yourself in 24: "Are you overlooking the fact that it is very likely that one particular mutation in a large population is much more likely to be lost than to be passed on?" I showed in 28 the expected time to fix a specific mutation, and how it is not very dependent on Ne.

Rather than admit this mistake, you now want us to believe you were trying to calculate the time it would take for a mutation to go to fixation if the first mutant always fixed. Perhaps you were really trying to do this, but it's an utterly meaningless thing to calculate. If we can arbitrarily decide the first mutation goes to fixation, then why can't we also arbitrarily decide it does so in much fewer than 4N generations (that is, remember, just the expected time to fixation, conditioned on the fact that the allele actually fixes)? And then, why only the first mutation? If all mutations that arise are destined to fix, then we have to think about the distribution of times to fixation and not just the expected time...

It should be no surprise that if you remove the dependency on Ne of the probability that a mutation will fix, but not of the time to fixation, then the two no longer cancel each other out. But the idea that population geneticists have conducted a conspiracy to keep the standard value for human Ne at 10,000 in order to maximise the meaningless value of your calculation is a bit of a joke. More so because the value is actually maximised around 3,500. There is also the problem that if you just decide every mutation that arises goes to fixation, then the non-dependence on effective population size would also vanish for the whole genome (and not only the special case of one pre-determined change, which you claim).

It's quite clear from the way that you have written about this that you don't have a grounding in pop gen. That's OK, but you would have an easier time understanding calculations and the principles underlying genetic models if you didn't try and start in the middle.
cornu
February 15, 2018 at 07:35 PM PDT
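The 1/2N fixation probability this exchange keeps returning to is easy to check by simulation. The following is a toy Wright-Fisher sketch of my own (not code from either commenter), with a deliberately small N so it runs quickly.

```python
import random

def new_mutant_fixes(N, rng):
    """Track one new neutral mutant (1 of 2N gene copies) until loss or fixation."""
    count = 1
    while 0 < count < 2 * N:
        p = count / (2 * N)
        # Each generation, resample all 2N gene copies binomially from the
        # current allele frequency (the Wright-Fisher model).
        count = sum(1 for _ in range(2 * N) if rng.random() < p)
    return count == 2 * N

rng = random.Random(1)
N = 50          # toy population size
trials = 20_000
p_fix = sum(new_mutant_fixes(N, rng) for _ in range(trials)) / trials
print(p_fix)    # should land near 1/(2N) = 0.01
```

Most runs lose the mutant within a few generations, which is exactly the point quoted above: a single new neutral mutation is far more likely to be lost than to fix, and it fixes with probability about 1/(2N).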
cornu: Please point out my error in the following: I have a genome that has L number of “sites.” What are the odds of a mutation occurring at a particular site, let’s call it S? The answer has to be 1/L. The mutations are random. How many mutations will occur along the length of this genome? Usually around 100 per duplication. So, for each duplication (generation), there are 100 opportunities for the mutation to occur at S. The odds of a mutation occurring at the particular site, S, is 100/L. (This is comparable to having 100 opportunities to pull out of a sack that contains L numbered balls the one which has the sought-out [particular] number on it.) In the next generation, there will be another 100 opportunities for the mutation to occur at S. (Another opportunity to select 100 balls out of a sack full of L numbered balls and pull out the correct number.) And so forth for each generation.

[Minor digression: What are the odds that the mutation that occurs at S is the right nucleotide to bring about the needed amino acid change? Roughly, one in three. (This is like pulling out of a sack full of 3L numbered balls the one with the right number, and the right color, each numbered ball coming in three different colors. This is true for each 100 selected balls, i.e., each “generation”.)]

Back to the main analysis: Now there are 2N genomes (i.e., 2N persons pulling out 100 balls from each of 2N sacks of L numbered/colored balls). The odds of getting the right mutation at the same site, S, in each of these genomes is the same and occurs each ‘generation.’ So, for each ‘generation,’ there are 2N*100/L (or substitute v for 100/L) chances for the mutation to occur at site S (i.e., 2Nv opportunities/generation).

How many times do all 2N genomes need to duplicate for the odds of a mutation occurring at S in any of the 2N genomes to equal 1? Let T be the number of generations needed for the likelihood of the mutation occurring at S in any of the 2N genomes to become equal to 1. (How many times do all 2N persons pulling out balls in 2N sacks of numbered/colored balls have to go through this procedure before one of the 2N persons finally gets the [right colored, and] right numbered ball? If each time all 2N persons do this there are 2N*100/L (= 2Nv) opportunities, then let T equal the number of times this has to be repeated. T is determined by the equation: T * 2Nv = 1. Thus, T = 1/2Nv. [This assumes that N << L, making 2N*100/L very small.]) Since L is normally so much larger than N (that is, we haven’t arrived at 1 yet, 1 signifying the likelihood the event has happened), then we solve by writing this equation: T * 2Nv = 1, or T = 1/2Nv.

The time for fixation for this mutation is 2N generations. Total time to fixation: 1/2Nv + 2N (v = 100/L).
PaV
February 15, 2018 at 06:48 PM PDT
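The ball-drawing analogy above is straightforward to run as a Monte Carlo experiment. This is my own toy sketch; L, N and the trial count are scaled well down from the comment's 3 x 10^9-site genome so it finishes quickly, but the T = 1/(2Nv) waiting time it tests is the one derived in the comment.

```python
import random

# Monte Carlo rendering of the ball-drawing analogy (toy parameters for speed).
rng = random.Random(0)
L = 1_000_000   # toy genome length, standing in for 3e9
N = 100         # population size
v = 100 / L     # chance per genome per generation of hitting the target site S

# Probability that at least one of the 2N genomes hits S in a given generation:
p_hit = 1 - (1 - v) ** (2 * N)

def generations_until_hit():
    g = 1
    while rng.random() >= p_hit:
        g += 1
    return g

trials = 2_000
mean_wait = sum(generations_until_hit() for _ in range(trials)) / trials
print(mean_wait)  # close to T = 1/(2*N*v) = 50 generations
```

The simulated mean waiting time comes out near 1/(2Nv), as the comment's algebra predicts, though note this only covers the arrival of the mutation, not its subsequent fate in the population.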
I think you would come off a lot better if you could simply admit your mistakes. Hopefully, you can take a bit of time, read these posts and see where you went wrong.
cornu
February 15, 2018 at 05:46 PM PDT
cornu:
OK, but note this is very different than your calculation. There you seem to assume the first copy of the mutation will go to fixation, so you don’t need to see (on average) 2Ne copies of it.
It's been a long time since I found myself doing these calculations, but, IIRC, I was, again, not selecting a "textbook" example, but rather one that would give evolutionary theory the best shot at explaining things. So, IIRC, the simplest, most probable event in favor of evolution was to have the mutation arise, and then go to fixation immediately. To me, this would involve the least amount of time; hence, giving evolutionary theory its best opportunity.
This would be the probability of a given mutation having occurred at a pre-specified site (multiplied by three for some reason).
I 'divide' by 3 since the mutation must involve a 'change' in the nucleotide, of course, and there are 3 to choose from. It has to be a 'particular' mutation.
But that’s not what you are trying to calculate; you are “interested in a very particular mutation occurring, and then becoming fixed”. If we are interested in a very particular mutation then we need only consider the site at which this very particular mutation could occur. Mutations will occur there with some rate per individual (v), which does not depend on the effective population size (how could it?). That being the case, you can put whatever number you want in ‘v’ and go through my post in 28 and work out where you went wrong.
Yes, my 'v' is the actual number of mutations that occur through duplication, and is for the entire genome; it is, as you say, v_site x 'g', where 'g' is the genome size.
1/(2Nv_genome x 1/g) = 1/(2N v_site*g * 1/g) = 1/(2Nv_site). So, we are back to 1/(2Nv_site) * 2N = 1/v_site generations for the lucky mutation to arise.
These equations don't properly reflect my calculation. To find the precise location we're interested in, 'g' mutations are needed. Each genome receives v_genome number of mutations per generation. There are 2N genomes. The total number of mutations for the entire population, then, is 2Nv_genome per generation. The total number of generations needed, on average, to arrive at this particular mutation somewhere in the population of 2N genomes is: 'g' [total # of mutations needed]/2Nv_genome [mutations/generation]. (Or, in your terminology, 1/2Nv_site.) I wrote this:
So, any of the 2N genomes has v number of attempts to overcome these odds each generation. Hence, 1/(2Nv x 10^-10) per generation. This is far different than 1/v, and is what I used for the “arrival” time in the above post.
The 2N is already factored into my equation, and so the additional 2N you show is not called for. So, the total time is then: t = 1/(2Nv_site) + 4N = [1 + 8(N^2)v]/(2Nv). Here, v = v_site, or around 10^-8. If N is large, then the r.h.s. reduces to 4N. If N is small, say N = 100, then t = [1 + (8 x 10^4)(10^-8)]/(200 x 10^-8) = [1 + 0.0008]/(2 x 10^-6), which is approximately equal to 5 x 10^5, which is very much larger than 4N = 400. t = 1/v + 4N here would equal 10^8 + 400, which is much larger than 5 x 10^5.
PaV
February 15, 2018 at 05:01 PM PDT
Here's the simplest way to understand your error. In my calculation I used v as the mutation rate per site per generation. You are using v to mean the mutation rate per genome per generation. Let's call yours v_genome and mine v_site. It should be obvious that if we call genome size 'g', then v_site = v_genome/g (that is, the mutation rate per site is simply the mutation rate per genome divided by the total number of sites in the genome). In your expression "1/(2Nv x 10^-10)" the 'v' is v_genome and 10^-10 is ~1/g ('~' because you make it more complex by picking the nucleotide, but we can find the general result by calling 'v' the mutation rate to the correct nucleotide). Since v_genome = v_site * g, we can say 1/(2Nv_genome x 1/g) = 1/(2N v_site*g * 1/g) = 1/(2Nv_site). So, we are back to 1/(2Nv_site) * 2N = 1/v_site generations for the lucky mutation to arise.
cornu
February 15, 2018 at 03:29 PM PDT
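The bookkeeping point in cornu's comment, that the per-genome and per-site expressions describe the same waiting time in different units, can be checked with plain arithmetic. This is my own sketch; the 100-mutations-per-genome figure and the 3 x 10^9-site genome are the numbers used in the thread.

```python
import math

g = 3e9           # sites per genome, as used in the thread
v_genome = 100    # mutations per genome per generation (PaV's figure)
v_site = v_genome / g

N = 10_000

# PaV-style expression: per-genome rate times a 1/g "right site" factor...
t_pav = 1 / (2 * N * v_genome * (1 / g))
# ...and the per-site version of the same waiting time:
t_cornu = 1 / (2 * N * v_site)

print(math.isclose(t_pav, t_cornu), round(t_pav))  # the same number, ~1500 generations
```

The two expressions agree to floating-point precision, which is cornu's point: switching between v_genome and v_site is a change of units, not a change in the quantity being calculated.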
It’s a different look at things. It’s not textbook.
It's also not correct.
cornu
February 15, 2018 at 01:33 PM PDT
How long will it take to see 2Ne mutations? Well, we can expect 2Ne*v (where v is the mutation rate toward the target allele) to occur in each generation. That will usually be less than one, so let’s say 1/(2Ne*v) generations per mutation. 1/(2Ne*v) generations/mutation * 2Ne mutations = 1/v generations
Agreed
OK, but note this is very different than your calculation. There you seem to assume the first copy of the mutation will go to fixation, so you don't need to see (on average) 2Ne copies of it.
However, for the needed (i.e., very particular) mutation to arise, then the equation to be used is: [3 (mutations/correct nucleotide) x 3 x 10^9 nucleotide/genome]
This would be the probability of a given mutation having occurred at a pre-specified site (multiplied by three for some reason). But that's not what you are trying to calculate; you are "interested in a very particular mutation occurring, and then becoming fixed". If we are interested in a very particular mutation then we need only consider the site at which this very particular mutation could occur. Mutations will occur there with some rate per individual (v), which does not depend on the effective population size (how could it?). That being the case, you can put whatever number you want in 'v' and go through my post in 28 and work out where you went wrong. In general, when you start concluding that the expected outcome of one specific run of a stochastic process will behave differently than the long-term average, you should start to get suspicious.
cornu
February 15, 2018 at 01:31 PM PDT
cornu: I've just lost a long reply. I'm not going to go into detail again. It takes 4Ne generations to 'fix' any given mutation. We're agreed on that. However, your equation deals with "mutations destined for fixation." That can happen anywhere along the entire length of the genome. Hence the cancellation of the number of genomes, 2N. But, if we're looking for a very particular mutation to arise, then the equation is different. 1/v won't do. Instead, for any one of the 2N genomes, the number of mutations is v, and so any of the 2N genomes can be the source of the new, particular mutation, and, for EACH genome, per generation, they get v mutations. Now the odds of a particular mutation (A, T, C or G) happening at a particular location is 1/3 x 1/(3 x 10^9) = approx. 10^-10. So, any of the 2N genomes has v number of attempts to overcome these odds each generation. Hence, 1/(2Nv x 10^-10) per generation. This is far different than 1/v, and is what I used for the "arrival" time in the above post. It's a different look at things. It's not textbook.
PaV
February 15, 2018 at 01:02 PM PDT
Devastating refutation!
ET
February 15, 2018 at 12:56 PM PDT
cornu:
How long will it take to see 2Ne mutations? Well, we can expect 2Ne*v (where v is the mutation rate toward the target allele) to occur in each generation. That will usually be less than one, so let’s say 1/(2Ne*v) generations per mutation. 1/(2Ne*v) generations/mutation * 2Ne mutations = 1/v generations
Agreed.
So, it will take an average of 1/v generations for the version of the mutation that is destined for fixation to arrive, and 4Ne generations for the fixation to happen. Assuming we start from a monomorphic population that gives t_fix = 1/v + 4Ne
This is essentially the equation I've used. Where I quibble with this is over the understanding of v being independent of population size. As your equation is written, 1/v stands alone. This is true if we don't concern ourselves with any particular location along the length of the genome; however, if we ask the question: how long will it take to get a desired mutation at a particular location along the length of the genome, then everything changes. You've stated:
So, it will take an average of 1/v generations for the version of the mutation that is destined for fixation to arrive, . . .
But I'm interested in something else. I'm interested in a very particular mutation occurring, and then becoming fixed. 4Ne is the number of generations that it will take, on average, for any mutation to become fixed. So, there's no quibble there. However, for the needed (i.e., very particular) mutation to arise, the equation to be used is: [3 (mutations/correct nucleotide) x 3 x 10^9 nucleotides/genome]
PaV
February 15, 2018 at 12:52 PM PDT
That is an untestable assumption. No one has ever validated it.
I've heard a lot of creationist objections to evolutionary biology, but this might be the strangest. Good luck with it, I guess...
cornu
February 15, 2018 at 12:16 PM PDT
cornu:
The probability that a neutral allele is fixed is equal to its current frequency in the population.
That is an untestable assumption. No one has ever validated it.
ET
February 15, 2018 at 06:16 AM PDT
First demonstrate that what have been coined "mutations" are what they are said to be: genetic errors. We do know that the major part is recent, however. But we simply do not know what the genomes of Adam and Eve looked like. How much variability (interpreted as "mutations") was already present in these genomes? As with all historical sciences, we will never know. What we do know is that most assumptions underlying current evolutionary thinking are wrong, scientifically untenable.
Peer
February 14, 2018 at 11:58 PM PDT
It's very hard to talk to you about this, because you seem to lack a grounding in this topic. Rather than relitigate the errors above, perhaps it is easier to show the correct calculation? You start by asking for "the time for fixation of a particular mutation". Implicitly, you seem to assume the population is monomorphic for a non-target allele at the beginning of the process (you never calculate starting heterozygosity).

Let's start by thinking about fixation. The probability that a neutral allele is fixed is equal to its current frequency in the population. For a mutant allele in a diploid population that is 1/2Ne (Ne being the effective population size, and twice that because there are two gene copies in each individual). If each mutant has only a 1/2Ne chance of fixing, then we expect the target mutation to be created 1/(1/2Ne) = 2Ne times before it goes to fixation.

How long will it take to see 2Ne mutations? Well, we can expect 2Ne*v (where v is the mutation rate toward the target allele) to occur in each generation. That will usually be less than one, so let's say 1/(2Ne*v) generations per mutation. 1/(2Ne*v) generations/mutation * 2Ne mutations = 1/v generations.

So, it will take an average of 1/v generations for the version of the mutation that is destined for fixation to arrive, and 4Ne generations for the fixation to happen. Assuming we start from a monomorphic population, that gives t_fix = 1/v + 4Ne. It should be obvious that this is highest when the population size is large. If we drop the assumption that the starting population is monomorphic, then we expect 4Nev copies of the target allele to exist at the onset, and the waiting time doesn't depend on population size at all. Hope that's clear enough.
cornu
February 14, 2018 at 09:10 PM PDT
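Plugging the thread's numbers into cornu's t_fix = 1/v + 4Ne makes the scale of the two terms concrete. This is a sketch; v = 10^-8 is the per-site rate assumed earlier in the discussion.

```python
# t_fix = 1/v + 4Ne with the thread's round numbers (illustrative only).
v = 1e-8  # mutation rate toward the target allele, per generation

for Ne in (100, 10_000, 1_000_000):
    t_fix = 1 / v + 4 * Ne
    print(Ne, t_fix)

# The 1/v = 1e8 waiting term dwarfs 4Ne for any of these sizes, which is why
# the substitution rate ends up nearly independent of the population size.
```

Even at Ne of a million, the drift term (4 x 10^6) adds only a few percent to the 10^8-generation waiting term, illustrating why the two 2Ne factors cancelling matters so much to the argument.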
cornu: @11
That’s Kimura’s argument for the neutral theory from genetic load.
I'm afraid it's not. Are you thinking of Haldane? It comes from a section titled "Constancy of Molecular Evolutionary Rates".
It can’t be the basis for the claim you made in 8, as Kimura assumes many mutations are not neutral.
But he wasn't dealing with a neutral mutation in the section where his calculation appears.
As Bob says, undergrads learn that the fixation rate is independent of Ne for neutral variants.
This doesn't tell us much, does it? It tells us 'neutral' mutations occur quite often throughout the population: that is, "somewhere". IOW, we don't know where they are. And, of course, they're neutral. Can't a neutral mutation change back to another neutral mutation? Then you have a flow rate, mu, going in both directions. What is that? An equilibrium, kind of like the Hardy-Weinberg Equilibrium. [Navel-gazing warning: I have no intention of going round and round on these points. I have better things to do with my time.]
If you want to increase the rate of adaptive substitutions then larger populations are always better.
Indeed. But if you want to decrease the number of generations needed for the 'adaptive substitution' to fix in the population, then a smaller population is better. This is the trade-off I was talking about, and which shows up in my 'quasi-table.'
PaV
February 14, 2018 at 11:52 AM PDT
cornu: How realistic is the "infinite alleles" model? My calculation is very realistic and simple. Look at Kimura's calculation on p. 83 of The Neutral Theory of Molecular Evolution. My calculation is similar to his, as is my thinking. He makes this calculation based on the fact that there is a new form of the alpha chain of hemoglobin that arises every 7 million years. Realistic stuff. Do you want to use 1/2N, or 1/4N, or what, for the probability of fixation? Is that what we're quibbling about?
PaV
February 14, 2018 at 07:57 AM PDT
PaV, I'm not sure you understand how fixation works under neutrality (or that Kimura's calculations use an infinite alleles model, so every mutation is 'new' in his calculation, but that wouldn't be the case in your targeted mutation). What I'm really trying to say in these comments is that your calculation is very unclear. Because the steps are opaque, it's hard to see what you have actually calculated (it's certainly not the time to fixation of a particular allele under neutrality). 80,000 is taken from your extreme case: 4 million generations of an allele drifting toward fixation, with new copies of the same SNP (by state) arising every 50 gens.
cornu
February 14, 2018 at 02:53 AM PDT
cornu: Are you overlooking the fact that it is very likely that one particular mutation in a large population is much more likely to be lost than to be passed on? And where do you get the 80,000 figure? Could you show your math?
PaV
February 14, 2018 at 02:39 AM PDT
You're not really understanding. The particular number is irrelevant to the conclusion you are trying to reach, so it doesn't really matter that you are using a bad one. But the fact that you think the per-genome mutation rate is relevant to your per-site calculation should be an indication that something is wrong... And FWIW, the minimum value for the function you have is found at 3,536, quite a lot less than 10,000.
And, as to the 80,000 more times, if these didn’t arise, then the mutation would never fix. Simple as that.
I don't think you understand. I'm talking about independent mutations creating the same target allele, not identical-by-descent copies of the first mutant.
cornu
February 14, 2018 at 02:31 AM PDT
cornu: Your numbers are your numbers. Kimura used a mutation rate of 10^-6. I'm using 10^-7, roughly. That is, 100 x 10^-9. And, as to the 80,000 more times: if these didn't arise, then the mutation would never fix. Simple as that.
PaV
February 14, 2018 at 02:13 AM PDT
But that's not the SNP mutation rate, which is between 1E-8 and 1E-12 in euks. I'm trying to understand where these numbers have come from and what you think you are calculating. Very little of it makes sense. Even if we could believe the results table, you seem to ignore the fact that the target allele arises 80,000 more times while you are waiting for the first copy of it to fix.
cornu
February 14, 2018 at 01:59 AM PDT
cornu: It is a number that is often used for the number of mutations arising in the duplication of eukaryotes.
PaV
February 14, 2018 at 01:55 AM PDT
KD: Happy you've posted. If a small population of 100 generates mutations slowly, then a grouping of such populations would generate the needed mutation sooner. So, if you had 20 such populations--and we're assuming they're genetically very similar--then the '500,000' generations needed, taken from the above quasi-table, would be divided by 20. IOW, it would only take 25,000 generations. Once fixed in just one of those populations, interbreeding with the other populations would quickly fix it in them as well. So, I would think your simulation is in keeping with my numbers.
PaV
February 14, 2018 at 01:54 AM PDT
We are still left with the question of why this number is part of the calculation.
cornu
February 14, 2018 at 01:52 AM PDT
