Uncommon Descent Serving The Intelligent Design Community

Why do we need to make a decision about common descent anyway?


In response to “Why, exactly, should we believe that humans are descended from simpler forms of life?”, Mark Frank writes:

This is really very simple. Either:

1. We descended from a simpler form of life.

2. We descended from an equally complicated form of life which has left no trace.

3. We didn’t descend from any form of life but somehow sprang into existence (as adults I guess as human babies can’t survive by themselves).

Be honest – which seems the most plausible?

Actually, it is even simpler than Mark Frank makes out. Nothing is at issue if I just decline to offer an opinion.

His 1. would seem plausible except for the people shouting that we are 98 percent chimpanzee. And they’re the strongest supporters of common descent. They want it rammed down everyone’s throat from kindergarten to the retirement home.

Yet not only is their claim implausible on its face (anyone can tell the difference between a human and a chimpanzee), it is unsatisfactory. It leaves unaccounted for everything of which we would like an account.

His 2. is hardly implausible. It would be a familiar situation to any adopted child who can’t trace birth parents. As an account, it is unsatisfactory principally because it amounts to saying that there is no information available. That might be true, but I don’t know that it is.

His 3. is really not much different from 2., in that no further information about origins is likely to be available.

So the actual choice, assuming Frank’s list is exhaustive, is between an account offered by people whose judgement can be seriously questioned and accounts that point to the futility of seeking further information.

It’s a good thing Thomas Huxley coined the term agnostic (“it is wrong for a man to say he is certain of the objective truth of any proposition unless he can produce evidence which logically justifies that certainty”). That just about characterizes what I consider the wisest position just now on common descent.

See also: What can we responsibly believe about human evolution?

Follow UD News at Twitter!

Comments
Oh, and I don't think it's up to ID advocates to try to prove anything. In my opinion, ID is a paradigm, not a theory. However, amassing evidence for and against the theory of evolution might very well lead to concluding that the mechanism of mutation is not adequate to explain macro-evolution (i.e. TOE is over the edge), and might spur a hunt for another, heretofore hidden, mechanism. -Q

Querius
August 14, 2014 at 06:01 PM PDT
wd400, Yes, and after going to a bunch of scientific blog sites critical of Behe's 2007 estimate, I found that they all gave different reasons or estimates for their objections. Hmmm. It seemed reasonable to me that calculating the expected prevalence of CQR involves a lot of complicating factors, major and minor, possibly offsetting. So, it seemed to me that relying on statistical data would be a more direct way of estimating CQR rates, since hundreds of millions of individuals are suffering from around a trillion parasites each. While I found a lot of fascinating publications, the one that stuck out the most was one published in 2010, titled "Prospective strategies to delay the evolution of anti-malarial drug resistance: weighing the uncertainty." http://www.malariajournal.com/content/9/1/217 Tell me what you think of it. -Q

Querius
August 14, 2014 at 05:54 PM PDT
You should maybe read the paper, Querius. Behe's error was ~1e4-fold for an estimate that was ~1e7. Finally, it's obviously true there is some 'edge' to evolution, and that ultra-specific mutational pathways that involve neutral or deleterious intermediates will rarely get fixed by selection in small populations. The thing IDers need to prove is that such pathways have been, or are required to be, traversed by such population lineages.

wd400
August 14, 2014 at 09:50 AM PDT
Mung, Try each method a hundred times. You'll find that it makes no statistically significant difference. Hmmm, what do you know. Our resident biologist and statistician seems to have vanished. ;-) -Q

Querius
August 13, 2014 at 06:16 PM PDT
In my experience almost all Darwinists and fellow travelers (Professor Moran doesn't consider himself a Darwinist) simply don't think quantitatively about what their theory asks of nature in the way of probability. When prodded to do so, they quickly encounter numbers that are, to say the least, bleak. They then seem to lose all interest in the problem and wander away. The conclusion that an unbiased observer should draw is that Darwinian claims simply don't stand up to even the most cursory calculations. - M. Behe
Mung
August 13, 2014 at 06:14 PM PDT
kairosfocus and bornagain77---great posts! Also, note that whether one of the required mutations is indeed strictly neutral might be testable because that would make a significant difference in Behe's calculation. If that were the case, then the apparent experimental results that seem to bear out his calculated estimates would need to be attributed to some other mechanism. wd400---yes, there's only about a 42% chance of rolling at least one 5-spot in 3 rolls. And that's the same binomial math that Behe and others use to estimate the probability of malaria overcoming a drug, antibiotic resistance, and the time it would take for a multi-step evolutionary advantage to emerge. 10^20 is such a large number that being off by 1000 doesn't make that much difference. Ok, so it's 10^17. With hundreds of trillions of these protozoans reproducing wildly, we can estimate the rate of micro-evolution. For humans with a relatively tiny population of single-digit billions and a pathetic reproduction rate, micro-evolution will be many millions of times slower. And that's what Behe is pointing out as a problem with the current macro-evolution model. If evolution is true, then a different mechanism---don't ask me what---must be responsible. -Q

Querius
August 13, 2014 at 06:11 PM PDT
Q:
If you roll three 6-sided dice, one after the other, what are the odds that you will roll at least one 5 spot?
What if I roll one six-sided die three times simultaneously?

Mung
August 13, 2014 at 05:26 PM PDT
As per the problem. This is made much easier as we are really calculating the probability of there being no '5's, and that's (5/6)^3. Of course the probability of one or more 5s is therefore 1-(5/6)^3. BTW, discussion of the malaria case would be much more precise if you used the expected waiting time for two mutations or specified the time period (number of trials) across which the probability was calculated.

wd400
August 13, 2014 at 08:54 AM PDT
The idea that you must be right, and I can't do high-school maths problems, is.... strange. Read Durrett and Schmidt, who explain one of Behe's calculations was out by a factor of > 1000, and try again. As per the problem: this is made much easier as we are really calculating the probability of there being no '5's, and that's (5/6)^3. The general solution for getting more than any k occurrences is 1 - [Sum(i=0, k) nCi p^i (1-p)^(n-i)], which you'll see agrees here.

wd400
August 13, 2014 at 08:51 AM PDT
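The closed form wd400 gives, and the general binomial tail it specializes, are easy to check numerically. A minimal sketch in Python (the function name `p_more_than` is mine, purely illustrative):

```python
from math import comb

# Probability of at least one 5 in three rolls of a fair die:
# the complement of "no 5s", i.e. 1 - (5/6)^3.
p_at_least_one = 1 - (5/6)**3

# General tail: P(more than k successes in n Bernoulli(p) trials)
# = 1 - Sum_{i=0..k} C(n,i) p^i (1-p)^(n-i)
def p_more_than(k, n, p):
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# "At least one" is "more than zero", so the two forms agree:
assert abs(p_more_than(0, 3, 1/6) - p_at_least_one) < 1e-12
print(round(p_at_least_one, 4))  # 0.4213, i.e. roughly the 42% quoted above
```

The same complement trick (compute the probability of zero successes, subtract from 1) is what makes the dice problem "much easier" than summing cases directly.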
A challenger finally steps into the ring with Dr. Behe (and promptly leaves the ring mumbling incoherently) Laurence Moran's Sandwalk Evolves Chloroquine Resistance - Michael Behe August 13, 2014 Excerpt: First, a bit of background. As I discussed previously (see here, here, and here) in a new paper Summers et al. show that a minimum of two mutations to the malarial protein PfCRT are needed to confer an ability to pump the antibiotic chloroquine (which is necessary but may not be sufficient for chloroquine resistance in the wild). That result agrees with my discussion in The Edge of Evolution and goes a long way toward quantitatively explaining the rarity of the development of resistance. Over at PZ Myers's blog Pharyngula, he and Kenneth Miller disagreed strongly with me in words, but cited no numbers. I then invited them, since they don't like mine, to show us their calculations for how frequently chloroquine resistance should arise in the malarial parasite. The bad news is that so far neither has responded. The good news is that Laurence Moran, Professor of Biochemistry at the University of Toronto, has done so. Professor Moran is an intelligent, informed, direct, and relatively civil critic of intelligent design who maintains a popular blog, Sandwalk, on evolution-related matters. So his response gives us a great opportunity to see what the best alternative explanations might be. Moran begins his own calculation by assuming that the first required mutation is strictly neutral and spreads in the growing population before the second one arises. His straightforward computation leads him to conclude that "What this means is that if you start with an infection by a cell that has none of the required mutations then you will only get the right pair of mutations once in one million infected people." Once in one million infected people.... 
Since there are a trillion malarial cells in one sick person, then according to Moran's own calculation there are a million times a trillion malaria cells needed for resistance to arise, which in scientific notation is 10^18. On a log scale that's a stone's throw from Nicholas White's estimate of 10^20 cells per origin of resistance that I have been citing, literally an astronomically large number (there are only a paltry hundred billion, 10^11, stars in our galaxy). So let me just say thank you and welcome aboard to Professor Moran. Unfortunately, he seems not to have realized the import of his calculation at the time, and has shown no enthusiasm for exploring it much after it was brought to his attention by a commenter. Right after his calculation Moran writes "We know that the right pair of mutations ... is not sufficient to confer resistance to chloroquine so the actual frequency of chloroquine resistance is far less." Far less? Far less than 1 in 10^18? Now, it's true that at least four mutations have been found in all known resistant strains of malaria. And it's true that, although Summers et al. showed two mutations are necessary for pumping chloroquine at a low level, they might not be sufficient for chloroquine resistance in the wild. Nonetheless, a need for further mutations would only make the problem for Darwinism much worse. It wouldn't make it better. Let me emphasize: Professor Moran's own reasoning would make the problem much more severe than I myself have ever argued. Yet he doesn't take any time on his blog to explore the ramifications of his own reckoning. Why doesn't he think that's an interesting result? Why not ponder it a bit? Moran doesn't seem to actually have much confidence in his own numbers. He asks the readers of his blog to help him correct his calculations -- which is a commendable attitude but makes one wonder, if he's so unsure of the likelihood of helpful combinations of mutations, whence his trust in mutation/selection? 
In response to the commenter who alerted him to the huge number of parasites in a million people he writes, "This is why meeting the Behe challenge is so difficult. There are too many variables and too many unknowns. You can't calculate the probability because real evolution is much more complicated than Behe imagines." But, again, if he thinks everything is so darn complicated and incalculable, on what basis does he suppose he's right? That's the reason I issued the challenge in the first place. In my experience almost all Darwinists and fellow travelers (Professor Moran doesn't consider himself a Darwinist) simply don't think quantitatively about what their theory asks of nature in the way of probability. When prodded to do so, they quickly encounter numbers that are, to say the least, bleak. They then seem to lose all interest in the problem and wander away. The conclusion that an unbiased observer should draw is that Darwinian claims simply don't stand up to even the most cursory calculations. Another commenter at Sandwalk didn't like Moran's calculation, so came up with his own. Great! The more the merrier! He also assumed the first mutation to be neutral, but kept a more careful accounting of its accumulation through the generations and ended up with a result of one necessary double-mutation per 420 patients. That actually strikes me as a more realistic value for a neutral mutation than Professor Moran's. Now, at first blush 420 may seem much smaller than Moran's number of a million patients, but that's only because we haven't yet considered the factor of a trillion parasites per patient. When we multiply by 10^12 to get the total number of parasites per double mutation, the commenter's odds turn out to be 1 in 10^14.6 versus Moran's 1 in 10^18, again not all that far on a log scale. 
Either or both of these values can easily be reconciled to White's calculation of 1 in 10^20 by tweaking selection coefficients or by inferring that a further mutation is needed for effective chloroquine resistance in the wild, as Professor Moran noted. What if the first necessary mutation isn't neutral? What if -- as seems very likely from the failure of malaria cells with one required mutation (K76T) to thrive in the lab -- the first mutation is rather deleterious? The commenter estimated that, too (and also added another consideration, a selection coefficient), and came up with a value of one new double mutant per 818,500 patients. Let's relax the admirable precision a bit and round the number up to a million. That's the same count Professor Moran got in his (supposedly neutral) calculation, which we saw means there is one new origin per 10^18 malarial parasites -- not far at all on a log scale from White's number that I cited. The bottom line is that numbers can be tweaked and a few different scenarios can be floated, but there's no escaping the horrendous improbability of developing chloroquine resistance in particular, or of getting two required mutations for any biological feature in general, most especially if intermediate mutations are disadvantageous. If a (selectable) step has to be skipped, the wind goes out of Darwin's sails. http://www.evolutionnews.org/2014/08/laurence_morans088811.htmlbornagain77
August 13, 2014 at 08:35 AM PDT
Q: Let's spot them a hint: it is often helpful to think in terms of P(x) = 1 - P(NOT-x). KF

kairosfocus
August 13, 2014 at 12:51 AM PDT
PS: BTW, taking mosquito bites as samples/searches, we are also looking at the dynamics of needles in haystacks. If 100 bites is a ml, 1,000 bites is 10 ml, out of a blood volume of, say, 4 - 5 l. 10^12 parasites is 250 * 10^6 per ml for 4 l. At 10^4 copies of each possible mut in the 10^12, that is 2.5 per ml, or it takes coming on 100 bites to pass on any given mut if in proportion . . . and here I am applying the random sampling principle that boils down to: there must be a sufficient sample size to make capture of a rare item likely enough -- which also holds for random walks. These are feasible numbers for a serious epidemic. It also points out that keeping the mosquitoes out of biting range is likely to be effective. When one goes instead to very large spaces that are sampled by proportionately tiny searches, then finding needles in the haystack becomes a serious challenge. For FSCO/I on the gamut of our solar system that is like a single straw-sized sample from a cubical haystack 1,000 LY across, comparable to the thickness of our galaxy's central bulge. The implausibility of capturing something that is naturally rare in the space of possible configs then jumps out. At least, if you are not challenged to grasp the implications of samples and populations with rare items in them. We already saw the issue of islands of function, where the challenge is to get to shorelines of function. But then, there are two distinct types of difficulty with understanding: primary ignorance, and interference from a conflicting conceptual system that, if held onto with a death grip, frustrates understanding.

kairosfocus
August 13, 2014 at 12:47 AM PDT
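The per-ml arithmetic in the PS can be reproduced directly. A sketch using the comment's own round figures (4 l of blood, 10^12 parasites, ~10^4 copies of any given single mutant, and ~0.01 ml drawn per bite -- all assumptions taken from the comment, not measured values):

```python
# Round numbers from the comment above.
parasites = 1e12     # parasites in one very sick patient
blood_ml = 4000      # ~4 l of blood
mut_copies = 1e4     # copies of a given single mutation among the 10^12

per_ml = parasites / blood_ml        # 2.5e8 parasites per ml
mut_per_ml = mut_copies / blood_ml   # 2.5 copies of a given mutant per ml

# If "100 bites is a ml", one bite samples about 0.01 ml:
bite_ml = 0.01
expected_mut_per_bite = mut_per_ml * bite_ml    # 0.025 mutant copies per bite

# Bites needed before the expected number of sampled mutant copies reaches 1:
bites_for_one_expected = 1 / expected_mut_per_bite
print(per_ml, mut_per_ml, bites_for_one_expected)
```

The expectation works out to one mutant copy per ~40 bite-sized samples, the same order of magnitude as the comment's "coming on 100 bites" -- the point being that the sample size must be large enough to make capture of a rare item likely.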
Box, I should thank you for your patience. Someone needs to appreciate it. All I will add is that if anyone does not realise that Behe was talking about multiple muts leading to drug resistance s/he did not read his original work with understanding, or even his recent letter. As I recall, his "edge" was about seven muts, and he spoke of the taxonomic category, the family, as a rough practical threshold. Malaria came in as a pivotal study, where because of reproduction rate, pop size and the long-term interaction with humans (spell that sickle-cell) we have some highly relevant dynamics. 10^20 reproduction events probably covers how many mammals have ever been born. So, blend in mut rate 10^-8 as a sort of generous upper estimate . . . and one is working to a few ords of mag here [an extension to Fermi's approach of estimation by reasonable numbers, informed by epidemiology] . . . and we are looking at the patterns as already laid out. That is, a double mut model without elim or dominance of mut A leading to joint action of A and B to give resistance becomes reasonable. Once that happens in a context where Chloroquine is in widespread use, additional muts that reinforce the resistance then come in as hill climbing. The double mut challenge is to get to the shoreline of function. And recall, with the mut rates, pop size in ONE patient [10^12 . . . comparable to the number of HUMAN cells in the body, being what 1%, we have a large no of ordinary gut bacteria too], and having the double in hand, increments become very plausible. At this stage, I doubt there will be an explicit admission that P(A AND B) is distinct from P(B | A), that this distinction is relevant to the odds of two dice summing to 7, and that onwards it speaks to the double mut challenge. I will say, that the incident is quite revealing of what we are up against. Sadly revealing. KF

kairosfocus
August 13, 2014 at 12:26 AM PDT
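The P(A AND B) versus P(B | A) distinction, and the rough orders of magnitude in the comment, can be put into a toy calculation. This is a sketch under the comment's stated assumptions only (per-site mutation rate ~1e-8, independent sites, no selection, drift, or clonal expansion of single mutants -- exactly the factors the thread is debating), not a population-genetic model:

```python
mu = 1e-8   # per-site, per-replication mutation rate (the comment's generous estimate)

# For independent mutations, P(A AND B) = P(A) * P(B) = mu^2 per replication.
# This is distinct from P(B | A), which is just mu once A is already present.
p_double = mu * mu                 # 1e-16 per replication

# Expected de novo double mutants among the ~1e12 parasites in one patient:
per_patient = p_double * 1e12      # ~1e-4, i.e. about 1 per 10,000 patients

# Across ~1e20 cumulative replication events (the figure cited in the thread):
total = p_double * 1e20            # ~1e4 origins in this naive model
print(p_double, per_patient, total)
```

The naive product overstates how often resistance actually arises (observed: on the order of once per 10^20), which is why the thread keeps returning to fitness costs, selection coefficients, and whether further mutations are required.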
wd400,
You were shown to be wrong and then just going on about being within a few orders of magnitude and repeated a bunch more wrong stuff (you’ve not even grasped what the “binomial theorem” is, or what the new paper presented, for instance). So why would anyone bother?
Oh puleeze! Neither you nor anyone else showed anything of the sort! Just take a class in probability or get a math major to explain it to you. You have no credibility at all with anyone who's taken even one math course dealing with probability. And now you proceed to demonstrate your profound ignorance of significant digits in measured values. For example, what's Avogadro's number + 1? But since you claim that you understand the binomial theorem, go ahead and try to answer my easy question about the three dice---and without help, I should add. Here it is again: If you roll three 6-sided dice, one after the other, what are the odds that you will roll at least one 5 spot? -Q

Querius
August 13, 2014 at 12:05 AM PDT
footnote on bacterial flagellum from the Long-Term Evolution Experiment: Lenski's Long-Term Evolution Experiment: 25 Years and Counting - Michael Behe - November 21, 2013 Excerpt: Twenty-five years later the culture -- a cumulative total of trillions of cells -- has been going for an astounding 58,000 generations and counting. As the article points out, that's equivalent to a million years in the lineage of a large animal such as humans.,,, ,,,its mutation rate has increased some 150-fold. As Lenski's work showed, that's due to a mutation (dubbed mutT) that degrades an enzyme that rids the cell of damaged guanine nucleotides, preventing their misincorporation into DNA. Loss of function of a second enzyme (MutY), which removes mispaired bases from DNA, also increases the mutation rate when it occurs by itself. However, when the two mutations, mutT and mutY, occur together, the mutation rate decreases by half of what it is in the presence of mutT alone -- that is, it is 75-fold greater than the unmutated case. Lenski is an optimistic man, and always accentuates the positive. In the paper on mutT and mutY, the stress is on how the bacterium has improved with the second mutation. Heavily unemphasized is the ominous fact that one loss of function mutation is "improved" by another loss of function mutation -- by degrading a second gene. Anyone who is interested in long-term evolution should see this as a baleful portent for any theory of evolution that relies exclusively on blind, undirected processes. ,,,for proponents of intelligent design the bottom line is that the great majority of even beneficial mutations have turned out to be due to the breaking, degrading, or minor tweaking of pre-existing genes or regulatory regions (Behe 2010). There have been no mutations or series of mutations identified that appear to be on their way to constructing elegant new molecular machinery of the kind that fills every cell. 
For example, the genes making the bacterial flagellum are consistently turned off by a beneficial mutation (apparently it saves cells energy used in constructing flagella). The suite of genes used to make the sugar ribose is the uniform target of a destructive mutation, which somehow helps the bacterium grow more quickly in the laboratory. Degrading a host of other genes leads to beneficial effects, too.,,, - http://www.evolutionnews.org/2013/11/richard_lenskis079401.html also of note for your favored model of genetic drift wd400: The consequences of genetic drift for bacterial genome complexity - Howard Ochman - 2009 Excerpt: The increased availability of sequenced bacterial genomes allows application of an alternative estimator of drift, the genome-wide ratio of replacement to silent substitutions in protein-coding sequences. This ratio, which reflects the action of purifying selection across the entire genome, shows a strong inverse relationship with genome size, indicating that drift promotes genome reduction in bacteria. http://genome.cshlp.org/content/early/2009/06/05/gr.091785.109

bornagain77
August 12, 2014 at 06:53 PM PDT
wd400, I know that you pride yourself on understanding evolution and scoffing at us poor IDiots who don't have a clue as to how evolution REALLY works. So, if I could beg your patience for poor ole ignorant me, could you please show us the empirical evidence of unguided processes creating a molecular machine that far outclasses anything man has ever made?
Bacterial Flagellum - A Sheer Wonder Of Intelligent Design – video http://tl.cross.tv/61771 Biologist Howard Berg at Harvard calls the Bacterial Flagellum “the most efficient machine in the universe." Michael Behe on Falsifying Intelligent Design - video http://www.youtube.com/watch?v=N8jXXJN4o_A Orr maintains that the theory of intelligent design is not falsifiable. He’s wrong. To falsify design theory a scientist need only experimentally demonstrate that a bacterial flagellum, or any other comparably complex system, could arise by natural selection. If that happened I would conclude that neither flagella nor any system of similar or lesser complexity had to have been designed. In short, biochemical design would be neatly disproved.- Dr Behe in 1997
Or wd400, if you are a bit short on empirical evidence, (which you are), could you at least show us the math as to how long we will have to wait for a flagellum to appear de novo? Perhaps a ballpark figure? A year? two years? 100 million years? 2 billion years? 20 billion years? 20 trillion years???? etc... Never??? The numbers I'm getting don't look good for you.
Nobel Prize-Winning Physicist Wolfgang Pauli on the Empirical Problems with Neo-Darwinism - Casey Luskin - February 27, 2012 Excerpt: "In discussions with biologists I met large difficulties when they apply the concept of 'natural selection' in a rather wide field, without being able to estimate the probability of the occurrence in a empirically given time of just those events, which have been important for the biological evolution. Treating the empirical time scale of the evolution theoretically as infinity they have then an easy game, apparently to avoid the concept of purposesiveness. While they pretend to stay in this way completely 'scientific' and 'rational,' they become actually very irrational, particularly because they use the word 'chance', not any longer combined with estimations of a mathematically defined probability, in its application to very rare single events more or less synonymous with the old word 'miracle.'" Wolfgang Pauli (pp. 27-28) - per ENV HISTORY OF EVOLUTIONARY THEORY - WISTAR DESTROYS EVOLUTION Excerpt: A number of mathematicians, familiar with the biological problems, spoke at that 1966 Wistar Institute,, For example, Murray Eden showed that it would be impossible for even a single ordered pair of genes to be produced by DNA mutations in the bacteria, E. coli,—with 5 billion years in which to produce it! His estimate was based on 5 trillion tons of the bacteria covering the planet to a depth of nearly an inch during that 5 billion years. He then explained that,, E. coli contain(s) over a trillion (10^12) bits of data. That is the number 10 followed by 12 zeros. *Eden then showed the mathematical impossibility of protein forming by chance. Per pathlights “So there we have it. The amount of time currently available for life to evolve is of the order of time N (billions of years), but according to Chaitin’s toy model, Darwinian evolution should take at least time N^2, or quintillions of years. That fact troubles Chaitin, and it should. 
But at least he has the honesty to admit there is a problem.” Dr. VJ Torley https://uncommondescent.com/intelligent-design/a-hypothetical-question-for-neo-darwinists-on-the-age-of-the-earth/ A review of The Edge of Evolution: The Search for the Limits of Darwinism Excerpt: The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-michael-behe-edge-of-evolution Don't Mess With ID by Paul Giem (Durrett and Schmidt paper)- video https://www.youtube.com/watch?v=5JeYJ29-I7o Waiting Longer for Two Mutations – Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. 
http://www.discovery.org/a/9461 When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied. http://biologicinstitute.org/2011/04/16/when-theory-and-experiment-collide/ “Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundredth of a second; without it, it would take a trillion years!” Jonathan Sarfati http://www.pnas.org/content/100/10/5607.abstract William Lane Craig - If Human Evolution Did Occur It Was A Miracle - video http://www.youtube.com/watch?v=GUxm8dXLRpA Quote from preceding video - In Barrow and Tippler’s book The Anthropic Cosmological Principle, they list ten steps necessary in the course of human evolution, each of which, is so improbable that if left to happen by chance alone, the sun would have ceased to be a main sequence star and would have incinerated the earth. They estimate that the odds of the evolution (by chance) of the human genome is somewhere between 4 to the negative 180th power, to the 110,000th power, and 4 to the negative 360th power, to the 110,000th power. Therefore, if evolution did occur, it literally would have been a miracle and evidence for the existence of God. William Lane Craig
bornagain77
August 12, 2014 at 06:05 PM PDT
Querius, You were shown to be wrong and then just went on about being within a few orders of magnitude and repeated a bunch more wrong stuff (you've not even grasped what the "binomial theorem" is, or what the new paper presented, for instance). So why would anyone bother?

wd400
August 12, 2014 at 05:33 PM PDT
[More crickets chirping] Well, I suppose if I had a desire to annoy and humiliate AB and wd400, I could simply bring in some more probability calculations to any controversy. Or even worse, I can just imagine it now . . . AB and wd400 are standing at the pearly gates, hoping for admittance. St. Peter finally comes and tells them, "Ok, I have just one question to ask you that will determine whether you go to heaven or hell. Here it is. If you roll three 6-sided dice, one after the other, what are the odds that you will roll at least one 5 spot?" ;-) -Q

Querius
August 12, 2014 at 05:10 PM PDT
Purposeful Design at the Foundation of Life - Michael Behe, PhD - video https://www.youtube.com/watch?v=I7pRD73PAaE This is Michael Behe's presentation at Christ's Church of the Valley from Sunday, July 6, 2014.

bornagain77
August 11, 2014 at 07:33 PM PDT
wd400, Ironically, a friend of mine and I were discussing yesterday how in astrophysics, "close to within a couple of orders of magnitude," is actually pretty good! This is true when dealing with very large and very small numbers. For example, the temperature of the core of the sun is given as 27 million degrees. When asked whether that's in degrees Celsius or degrees Kelvin, the correct answer is, "It doesn't matter." 1. The objection to Behe's math was that he calculated the probability of multiple mutations on the assumption that the mutations were of necessity "simultaneous." In fact, the binomial theorem doesn't make a distinction between simultaneous and sequential events (as long as they are independent). 2. Behe claimed only that the mutations needed to coexist in the same organism at the same time to confer immunity. The mutations themselves didn't have to be "simultaneous." The complications from adding a few generations when compared to trillions don't make a significant difference. Whether each of the mutations has the same probability and whether immunity requires 2, 3, or 4 mutations does make a difference. 3. Behe's estimates were confirmed empirically by an independent group. The math was easy: the product of the probability of event A and the probability of event B is the probability of both occurring (squaring one of them works only when A and B are equal). You wrote:
I never thought the idea that some traits require two mutations was the major claim Behe was making, but if it is then I’ll happily grant. It’s true in this case, and there are probably many other traits that couldn’t arise by single mutations each with a slight fitness increase
Yes, the probability of multiple mutations being required to produce a trait was indeed exactly what Behe was arguing, and you are happily granting him this point. The problem Behe pointed out in The Edge of Evolution wasn't that he rejects microevolution (he doesn't), but that traits requiring multiple mutations are far too improbable (when the probabilities are multiplied out) to account for macroevolution within the time available. Behe was brave enough to make a prediction, and the later observations seem to support him. And that's the point of the hysterical outcry against him. You might want to read his book sometime. It's not as outrageous as some would lead you to believe. -Q
Querius
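The "simultaneous" versus "sequential" point made above is easy to demonstrate numerically. In the sketch below (an illustration, not anyone's published model), two independent events with probabilities 1/6 and 1/7 are sampled both ways; the joint frequency converges on 1/6 x 1/7 = 1/42 regardless of ordering:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRIALS = 200_000
P_JOINT = 1 / 42  # 1/6 * 1/7 for independent events

# "Simultaneous": draw both outcomes in the same step.
both_at_once = sum(
    1 for _ in range(TRIALS)
    if random.randint(1, 6) == 4 and random.randint(1, 7) == 3
)

# "Sequential": draw the first, then (separately) the second.
sequential = 0
for _ in range(TRIALS):
    first = random.randint(1, 6)
    second = random.randint(1, 7)
    if first == 4 and second == 3:
        sequential += 1

# Both empirical frequencies should sit near 1/42 (about 0.0238).
print(both_at_once / TRIALS, sequential / TRIALS)
```

The ordering of the draws never enters the calculation; only independence matters.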
August 11, 2014, 06:48 PM PDT
WD400:
"(...) so it's silly to claim (as Behe has) the 10^20 number is empirical observation."
WD400, since you stubbornly refuse to read the article I linked to in post #59, I will quote from it here:
Behe cites his source that spontaneous resistance to chloroquine occurs in one in every 10^20 malaria cells. It's from a review article published in the prestigious Journal of Clinical Investigation entitled, "Antimalarial drug resistance" (Vol. 113(8) (April 2004)). The author, Nicholas J. White, holds two doctorates and is an esteemed researcher in his field. As White's bio states: Professor White has contributed to over 500 peer-reviewed scientific publications and has written over 30 book chapters. He is a full Professor at Mahidol University and also Oxford University. He is a member of several WHO advisory panels, and is on the International Editorial Advisory boards of several international journals including The Lancet and the Journal of Infectious Diseases. White's article states precisely what Behe claims it does: "the per-parasite probability of developing resistance de novo is on the order of 1 in 10^20 parasite multiplications." Suffice to say, this kind of author wouldn't print such a statement in this type of article in this journal if it were a "mere guess." Behe roughly outlines how White performs this calculation as follows: "Nicholas White of Mahidol University in Thailand points out that if you multiply the number of parasites in a person who is very ill with malaria times the number of people who get malaria per year times the number of years since the introduction of chloroquine, then you can estimate the odds of a parasite developing resistance to chloroquine is roughly one in a hundred billion billion. In shorthand scientific notation, that's one in 10^20." (Behe, Edge of Evolution, pg. 57.) To reproduce the calculation:

Instances of chloroquine resistance in the past 50 years: fewer than 10 (White, 2004). To be generous, we'll say 10 per 50 years, or 1 instance of chloroquine resistance per 5 years.

Total malaria cells that exist each year: approximately 10^20 cells per year (White, 2004; White & Pongtavornpinyo, 2003).

Dividing the second figure by the first gives 1 instance of chloroquine resistance per 5 x 10^20 malaria cells, or roughly speaking, 1 instance of chloroquine resistance per 10^20 malaria cells.

Even science writing that has been simplified for public consumption in The New Criterion cannot fairly characterize the 1 in 10^20 statistic as "a mere guess." It's the result of real-world studies of malaria behavior in response to chloroquine and reproducible calculations, as reported in review articles by leaders in the field in one of the world's top medical journals. It was anything but "a mere guess."
Box
August 11, 2014, 06:04 PM PDT
To be honest, I'm dumbfounded by the inability of either of you to read what I wrote, and by the way you can actually agree with my point while pretending the difference can be ignored ("good enough for government work", "insignificant" given the new criterion that the first mutation be strongly deleterious). So I give up. If you read Durrett and Schmidt you can see they derive the expected waiting time for two mutations given a mutation rate, and indeed, it's considerably shorter than one over the mutation rate squared. As for Behe's claims: none of us knows the actual rate at which resistance arises in P. falciparum. Estimating it requires many parameters (the rate of transmission between host -> vector -> host, the fitness of the first mutation within the host, the rate of growth within the host, the proportion of cells that are treated...), so it's silly to claim (as Behe has) the 10^20 number is empirical observation. In any case, I'll repeat what I said in the first post about the PNAS paper:
I never thought the idea that some traits require two mutations was the major claim Behe was making, but if it is then I’ll happily grant. It’s true in this case, and there are probably many other traits that couldn’t arise by single mutations each with a slight fitness increase
wd400
August 11, 2014, 05:24 PM PDT
[The sound of crickets chirping] KF, I guess A-B and wd400 musta had a talk with a math professor. Apparently, the supposed difference between "simultaneous" versus "sequential" probability calculations is not determined by the number of trials (past one) after all. Oh dear. :-) -Q
Querius
August 11, 2014, 05:00 PM PDT
F/N: Did a Bing search; hit no. 3 is the SEP entry on Bayes' Theorem, and nos. 4 and 5 (first screenful) are dictionaries. I clip SEP:
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes' Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of subjectivist position. Indeed, the Theorem's central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology . . . . The probability of a hypothesis H conditional on a given body of data E is the ratio of the unconditional probability of the conjunction of the hypothesis with the data to the unconditional probability of the data alone. (1.1) Definition. The probability of H conditional on E is defined as PE(H) = P(H & E)/P(E), provided that both terms of this ratio exist and P(E) > 0.[1] . . . . Here are some straightforward consequences of (1.1): Probability. PE is a probability function.[2] Logical Consequence. If E entails H, then PE(H) = 1. Preservation of Certainties. If P(H) = 1, then PE(H) = 1. Mixing. P(H) = P(E)PE(H) + P(~E)P~E(H).[3] The most important fact about conditional probabilities is undoubtedly Bayes' Theorem, whose significance was first appreciated by the British cleric Thomas Bayes in his posthumously published masterwork, "An Essay Toward Solving a Problem in the Doctrine of Chances" (Bayes 1764). Bayes' Theorem relates the "direct" probability of a hypothesis conditional on a given body of data, PE(H), to the "inverse" probability of the data conditional on the hypothesis, PH(E). (1.2) Bayes' Theorem. 
PE(H) = [P(H)/P(E)] PH(E) In an unfortunate, but now unavoidable, choice of terminology, statisticians refer to the inverse probability PH(E) as the "likelihood" of H on E. It expresses the degree to which the hypothesis predicts the data given the background information codified in the probability P . . . . Though a mathematical triviality, Bayes' Theorem is of great value in calculating conditional probabilities because inverse probabilities are typically both easier to ascertain and less subjective than direct probabilities. People with different views about the unconditional probabilities of E and H often disagree about E's value as an indicator of H. Even so, they can agree about the degree to which the hypothesis predicts the data if they know any of the following intersubjectively available facts: (a) E's objective probability given H, (b) the frequency with which events like E will occur if H is true, or (c) the fact that H logically entails E. Scientists often design experiments so that likelihoods can be known in one of these "objective" ways. Bayes' Theorem then ensures that any dispute about the significance of the experimental results can be traced to "subjective" disagreements about the unconditional probabilities of H and E. When both PH(E) and P~H(E) are known an experimenter need not even know E's probability to determine a value for PE(H) using Bayes' Theorem. (1.3) Bayes' Theorem (2nd form).[4] PE(H) = P(H)PH(E) / [P(H)PH(E) + P(~H)P~H(E)] In this guise Bayes' theorem is particularly useful for inferring causes from their effects since it is often fairly easy to discern the probability of an effect given the presence or absence of a putative cause. For instance, physicians often screen for diseases of known prevalence using diagnostic tests of recognized sensitivity and specificity. The sensitivity of a test, its "true positive" rate, is the fraction of times that patients with the disease test positive for it. 
The test's specificity, its "true negative" rate, is the proportion of healthy patients who test negative. If we let H be the event of a given patient having the disease, and E be the event of her testing positive for it, then the test's specificity and sensitivity are given by the likelihoods PH(E) and P~H(~E), respectively, and the "baseline" prevalence of the disease in the population is P(H). Given these inputs about the effects of the disease on the outcome of the test, one can use (1.3) to determine the probability of disease given a positive test.
Good FYI. KF
kairosfocus
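The diagnostic-screening use of Bayes' Theorem (2nd form, (1.3) in the SEP excerpt above) can be made concrete. The prevalence, sensitivity, and specificity below are made-up round numbers for illustration only:

```python
# Bayes' Theorem, 2nd form: P(H|E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]
# H = patient has the disease, E = patient tests positive.

prevalence = 0.01    # P(H): baseline rate of the disease (assumed)
sensitivity = 0.99   # P(E|H): true-positive rate (assumed)
specificity = 0.95   # P(~E|~H): true-negative rate (assumed)

false_positive = 1 - specificity  # P(E|~H)
posterior = (prevalence * sensitivity) / (
    prevalence * sensitivity + (1 - prevalence) * false_positive
)

# Despite a 99%-sensitive test, a positive result here implies only about
# a 1-in-6 chance of disease, because the disease is rare to begin with.
print(round(posterior, 4))  # 0.1667
```

This is the standard illustration of why the prior (prevalence) matters: most positives come from the large healthy population, not the small diseased one.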
August 11, 2014, 06:38 AM PDT
BTW, why is Wikipedia so dominant in Google rankings -- and what does that point to, given its own significant bias and misrepresentation problems? Mind you, in all fairness, on basic math, physics, etc., Wiki can be quite good enough. But once PC issues come up, watch out. KF
kairosfocus
August 11, 2014, 06:30 AM PDT
F/N: As a test of the hypothesis of utter ignorance in a Google era, let us do a search under the provided clue, Bayes. The first five hits (filling my screen without scrolling) are all Wiki:

Bayes' Theorem
Thomas Bayes [includes visible ref that he was a Presbyterian Minister]
Bayesian Probability
Bayesian
Bayes [a disambiguation page]

In addition, the RH column provides context and initial remarks. WD400's remarks, intended to dismiss mine as incomprehensible a la MF's tactic, fall apart on even basic due diligence. KF
kairosfocus
August 10, 2014, 11:46 PM PDT
Mung (HT Q and attn WD400 and A_B):

With 2 x 6-sided fair dice tossed, odds of A = 4, B = 3 [let's suppose A red and B green] = 1/6 * 1/6 = 1/36. This holds whether or not we hold both in the cup and toss "at once," or toss one then the other.

Now suppose instead it is A and C, C being 7-sided. Odds (speaking loosely; strictly, "probability," per the Bernoulli-Laplace indifference principle) of A = 4 and C = 3 are now 1/6 * 1/7. In neither case have we "squared" the odds of tossing A with value 4.

Now, let us shift to: IF A = 4, then we toss C, not B. We have now imposed a sequence with a conditionality. Event E2 is NOT independent of E1. P(E2 = 3 | E1 = 4) = 1/7, and P(E2 = 3 | E1 != 4) = 1/6. That is, we have a situation where E1 affects E2, and E2 is NOT independent of E1.

We can make a second shift, tossing C first and defining the sum of values as the outcome S; then we toss A. Here, we can have a situation with C = 7 (odds 1/7), and on that happening P(S = 7 | C = 7) = 0, but P(S = 7 | C != 7) = 1/6 for each of the six diverse cases C = 1, 2 . . . 6.

This toy situation will help us understand conditional probabilities and independent events. Also, how a step-by-step, goal-directed procedure (an algorithm) may incorporate designed-in randomness. That is, chance and design can appear in the same situation. And in fact, games of chance in a House are carefully designed to cumulatively give profit to the house, based on the theory of sampling a population; here, in a situation where we have reasonably good probability models.

Actuarial math, the math of insurers, deals first with uncertainty and tries to convert it into risk, with statistical techniques designed to capture the patterns and properties of relevant populations. But as this implies assumptions of stability, there are always walk-away clauses. Here, in a suspiciously timed event, insurance companies repudiated their coverage just about two weeks before the fatal hot volcanic ash flows of June 25, 1997.
It turns out that at just about the same time, the lead scientists at the Observatory communicated a letter of warning to the civil authorities; a letter that (leaked to a key activist) later featured in the Commission of Inquiry. I infer design, with 50-50 epistemic, subjective conditional probability that one of the scientists or someone in the Emergency Dept had a retainer with an insurance company. Those odds can then be re-weighted based on further info through Bayesian revision. And weights may be acquired through calibrated expert elicitation. (Which was introduced here AFTER June 25, 1997, along with a lot of other things. Let's just say, HM Governor at the time later testified to the HoC that HMG was a year behind the curve of events, and that had GBP 10 mn been spent on building shelters etc. in the North, and had SHV not gone on to full dome-building and destructive Merapian pyroclastic flow events, there could easily have been a different inquiry, on the wasting of GBP 10 mn. In 1976-77, next door in Guadeloupe, the W lobe was evacuated for about a year for an eruption that did not significantly go beyond phreatic explosions, which have much less hazardous potential, at a cost of (IIRC) US$ 700 mn and a major public scientific quarrel on, inter alia, whether the island lobe would blow up and sweep the region with tidal waves from Venezuela to Miami or thereabouts; they had to have a big conference to settle the quarrel, which led to rescinding the evacuation, successfully. That possibility popped up again in our case, but the emphasis was now on the "consensus" of the scientists. Saved GBP 10 mn at first, but 14 lives by official count were lost, with contributory negligent responsibility of HMG and GoM. And I can go on and on; the lessons of history are written in blood, sweat and tears.) KF
kairosfocus
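KF's toy dice setups can be checked exhaustively rather than by sampling. An exact enumeration over all outcomes (a sketch, with the 7-sided die treated as faces 1-7):

```python
from fractions import Fraction
from itertools import product

SIX = range(1, 7)    # faces of a fair 6-sided die (A)
SEVEN = range(1, 8)  # faces of a fair 7-sided die (C)

# Independent case: P(A = 4 and C = 3) over all 6 * 7 = 42 outcomes.
outcomes = list(product(SIX, SEVEN))
p_joint = Fraction(sum(a == 4 and c == 3 for a, c in outcomes), len(outcomes))
assert p_joint == Fraction(1, 6) * Fraction(1, 7)  # 1/42, not (1/6) squared

# Conditional case: toss C first, then A, with S = C + A.
# P(S = 7 | C = 7) = 0, while P(S = 7 | C = c) = 1/6 for c = 1..6.
for c in SEVEN:
    favourable = sum((c + a) == 7 for a in SIX)
    p_s7_given_c = Fraction(favourable, len(SIX))
    assert p_s7_given_c == (Fraction(0) if c == 7 else Fraction(1, 6))

print("both checks pass:", p_joint)
```

The enumeration makes the two regimes explicit: independence gives a plain product of probabilities, while the sum S is conditioned on the first toss.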
August 10, 2014, 10:34 PM PDT
Apparently not, Mung. But I'm afraid you have a lot of arguing and abuse ahead of you! ;-) -Q
Querius
August 10, 2014, 07:06 PM PDT
kf:
...then kindly tell us why you speak in terms of “squaring” when it is actually such a multiplication, under relevant circumstances that makes roughly independent events...
You mean P(A) * P(B) is not equal to P(A)^2?
Mung
August 10, 2014, 06:23 PM PDT
Nicely and patiently explained, Kairosfocus! A-B and wd400, please consult the following: http://www.mathsisfun.com/data/probability-events-independent.html And notice where they state
You can calculate the chances of two or more independent events by multiplying the chances.
And that's all that Behe did. His hypothesis was that two or more mutations were necessary. The following paper examines historical evidence that seems to prove Behe right: http://www.pnas.org/content/111/17/E1759 Thus, the scientific method indicates that the science is settled until someone can come up with a scientifically plausible alternative explanation for the observed data. This has not been done.

The objection regarding the mathematical difference between calculating simultaneous versus sequential events is not credible: there is no difference, as even A-B now admits. And I'd hope that wd400 would be willing to admit that you multiply rather than square probabilities (thus, the probability of rolling a 6 on a 6-sided die followed by a 6 on a 7-sided die is 1/6 * 1/7). I'll leave the arithmetic as an exercise for wd400. ;-)

One can argue that one of the deleterious mutations can be passed to subsequent generations and might survive until a second one (or a third or fourth) appears that together confer chloroquine resistance, but a few generations of malaria is not significant when dealing with a period of ten years of malarial reproduction! Behe's point in all this was to show where several changes (not one-by-one, as Darwin proposed) were needed to confer an advantage, and that these multiple coordinated changes are extremely unlikely to produce an irreducibly complex structure by chance.

Mung, the reason that casinos pay out 4 to 1 for rolling a seven is that this is how they shave the payouts in their favor. They should be paying 5 to 1 if they were fair (the chance of rolling a seven with two dice is 6/36 = 1/6). And if casinos were seeing the kind of miraculous probabilities that Darwinists depend on, they'd quickly toss the person out as a cheater. http://nypost.com/2012/08/21/atlantic-city-casino-refuses-to-pay-out-1-5m-jackpot-claims-deck-of-unshuffled-cards-to-blame/ -Q
Querius
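The craps aside is easy to verify: of the 36 equally likely outcomes with two dice, six sum to seven, so the true odds against are 5 to 1. A quick enumeration (illustrative only):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes for two fair 6-sided dice.
rolls = list(product(range(1, 7), repeat=2))
sevens = [r for r in rolls if sum(r) == 7]  # (1,6), (2,5), ..., (6,1): six ways

p_seven = Fraction(len(sevens), len(rolls))
assert p_seven == Fraction(1, 6)

# Fair odds against an event of probability p are (1 - p) : p.
odds_against = (1 - p_seven) / p_seven
print(f"fair payout: {odds_against} to 1")  # 5 to 1, versus the 4 to 1 the house pays
```

The gap between the fair 5-to-1 and the paid 4-to-1 is the house edge on that bet.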
August 10, 2014, 06:11 PM PDT