Uncommon Descent Serving The Intelligent Design Community

Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley


Kevin Padian’s review in NATURE of several recent books on the Dover trial says more about Padian and NATURE than it does about the books under review. Indeed, the review and its inclusion in NATURE are emblematic of the new low to which the scientific community has sunk in discussing ID. Bigotry, cluelessness, and misrepresentation don’t matter so long as the case against ID is made with sufficient vigor and vitriol.

Judge Jones, who headed the Pennsylvania Liquor Control Board before assuming a federal judgeship, is now a towering intellectual worthy of multiple honorary doctorates on account of his Dover decision, which he largely cribbed from the ACLU’s and NCSE’s playbook. Kevin Padian, for his yeoman’s service in the cause of defeating ID, is no doubt looking at an endowed chair at Berkeley and membership in the National Academy of Sciences. And that for a man who betrays no more sophistication in critiquing ID than Archie Bunker.

For Padian’s review, see NATURE 448, 253-254 (19 July 2007), doi:10.1038/448253a, published online 18 July 2007 and available online here. For a response by David Tyler to Padian’s historical revisionism, go here.

One of the targets of Padian’s review is me. Here is Padian’s take on my work: “His [Dembski’s] notion of ‘specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”

Well, actually, my work on the explanatory filter first appeared in my book THE DESIGN INFERENCE, which was a peer-reviewed monograph with Cambridge University Press (Cambridge Studies in Probability, Induction, and Decision Theory). This work was also the subject of my doctoral dissertation from the University of Illinois. So the pretense that this work was not properly vetted is nonsense.

As for “the withering criticism” of my work “from actual mathematicians,” which mathematicians does Padian have in mind? Does he mean Jeff Shallit, whose expertise is in computational number theory, not probability theory, and who, after writing up a ham-fisted critique of my book NO FREE LUNCH, has explicitly notified me that he henceforth refuses to engage my subsequent technical work (see my technical papers on the mathematical foundations of ID at www.designinference.com as well as the papers at www.evolutionaryinformatics.org)? Does Padian mean Wesley Elsberry, Shallit’s sidekick, whose PhD is from the wildlife fisheries department at Texas A&M? Does Padian mean Richard Wein, whose 50,000-word response to my book NO FREE LUNCH is widely cited, even though Wein holds no more than a bachelor’s degree in statistics? Does Padian mean Elliott Sober, who is a philosopher and whose critique of my work along Bayesian lines is itself deeply problematic (for my response to Sober, go here)? Does he mean Thomas Schneider, who is a biologist who dabbles in information theory, and not very well at that (see my “withering critique” with Bob Marks of his work on the evolution of nucleotide binding sites here)? Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them. But as I indicated in that book, it was about sketching an intellectual program rather than filling in the details, which would await further work (as is being done at Robert Marks’s Evolutionary Informatics Lab, www.evolutionaryinformatics.org).

The record of mathematical criticism of my work remains diffuse and unconvincing. On the flip side, there are plenty of mathematicians and mathematically competent scientists who have found my work compelling and whose stature exceeds that of my critics:

John Lennox, who is a mathematician on the faculty of the University of Oxford and is debating Richard Dawkins in October on the topic of whether science has rendered God obsolete (see here for the debate), has this to say about my book NO FREE LUNCH: “In this important work Dembski applies to evolutionary theory the conceptual apparatus of the theory of intelligent design developed in his acclaimed book The Design Inference. He gives a penetrating critical analysis of the current attempt to underpin the neo-Darwinian synthesis by means of mathematics. Using recent information-theoretic “no free lunch” theorems, he shows in particular that evolutionary algorithms are by their very nature incapable of generating the complex specified information which lies at the heart of living systems. His results have such profound implications, not only for origin of life research and macroevolutionary theory, but also for the materialistic or naturalistic assumptions that often underlie them, that this book is essential reading for all interested in the leading edge of current thinking on the origin of information.”

Moshe Koppel, an Israeli mathematician at Bar-Ilan University, has this to say about the same book: “Dembski lays the foundations for a research project aimed at answering one of the most fundamental scientific questions of our time: what is the maximal specified complexity that can be reasonably expected to emerge (in a given time frame) with and without various design assumptions.”

Frank Tipler, who holds joint appointments in mathematics and physics at Tulane, has this to say about the book: “In No Free Lunch, William Dembski gives the most profound challenge to the Modern Synthetic Theory of Evolution since this theory was first formulated in the 1930s. I differ from Dembski on some points, mainly in ways which strengthen his conclusion.”

Paul Davies, a physicist with solid math skills, says this about my general project of detecting design: “Dembski’s attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I’m concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” Apparently Padian disagrees.

Finally, Texas A&M awarded me the Trotter Prize jointly with Stuart Kauffman in 2005 for my work on design detection. The committee that recommended the award included individuals with mathematical competence. By the way, other recipients of this award include Charlie Townes, Francis Crick, Alan Guth, John Polkinghorne, Paul Davies, Robert Shapiro, Freeman Dyson, Bill Phillips, and Simon Conway Morris.

Do I expect a retraction from NATURE or an apology from Padian? I’m not holding my breath. It seems that the modus operandi of ID critics is this: imagine what you would most like to be wrong with ID and its proponents, and then simply, bald-facedly accuse ID and its proponents of being wrong in that way. It’s called wish-fulfillment. Would it help to derail ID to characterize Dembski as a mathematical klutz? Then characterize him as a mathematical klutz. As for providing evidence for that claim, don’t bother. If NATURE requires no evidence, then certainly the rest of the scientific community bears no such burden.

Comments
P.O.: "Yes, that is it. Looking forward to your explanation." The explanation is pretty much: that's just the way it is. When you look at White's paper, there is a discussion in it of the various factors involved in the development of drug resistance. Quite a number of factors are involved, and he gives a brief discussion of some of them. But the bottom line is this: the in vivo rate (basically what happens in the organism itself, what is actually seen in the live organism) is 1 in 10^14, while the in vitro rate (basically what happens in the lab, in Petri dishes, etc.) is between 1 in 10^8 and 1 in 10^10. I still find it a little bit puzzling, but those are the rates. We had a very long discussion about it all here at UD. Here's the thread. I don't think that will be very useful, though. The best thing is White's paper itself here.
PaV
August 8, 2007, 08:50 PM PST
PO: "I think estimated mutation rates are not only less than perfect but probably not very good at all as they depend on population size." I'm not sure how to interpret that. The eukaryote mutation rate is generally given as one base-pair error per 10^9 base-pair replications. IIRC it's an order of magnitude or two higher for prokaryotes. Population numbers don't affect that rate. That's not to say it's a constant; it varies for reasons known and unknown. Many chemicals are carcinogens, for example: they increase the rate of mistakes in DNA replication. Background radiation is another environmental factor that varies the rate. It also varies by locus within the same genome. So one should indeed take the observed average rate of 1 in 10^9 with a grain of salt. However, the grain pretty much goes away in Plasmodium, as the predicted rate given above roughly matches the observed rate of acquiring the single point mutation conferring atovaquone resistance and the two point mutations needed for chloroquine resistance.
DaveScot
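As a rough back-of-envelope check of the arithmetic above (not White's or Behe's actual calculation; it simply assumes a per-base, per-replication error rate of about 10^-9 and treats the two substitutions as independent), a short Python sketch:

# Back-of-envelope only: assumes a 1e-9 per-base, per-replication error
# rate and independence between the two substitutions, as stated above.
rate_per_base = 1e-9

one_substitution = 1 / rate_per_base        # ~1e9 replications for one specific change
two_substitutions = 1 / rate_per_base**2    # ~1e18 replications if both must co-occur

print(f"one specific substitution:  about 1 in {one_substitution:.0e} replications")
print(f"two specific substitutions: about 1 in {two_substitutions:.0e} replications")
# The second figure is the same ballpark as the 1-in-10^20 rate quoted in
# this thread for chloroquine resistance, which requires two point mutations.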
August 8, 2007, 07:54 PM PST
And what exactly is intelligent chance?
tribune7
August 8, 2007, 07:27 PM PST
PO: "No mutation that occurs in humans can ever come even remotely close to Behe's CCC regardless of how useful it is, simply because of population sizes." And rate of reproduction and generational time span. Which is the point: if Darwinian evolution were occurring at such a rate as to do what its proponents claim it can, it would be seen more readily in more prolific creatures like bacteria than in man. "That's why I fail to see why CCC - 1 in 10^20 - is a valid benchmark for the plausibility of Darwinian evolution." OK, so you are saying that man, with a smaller population (and lower reproductive rates and longer generations), is more likely to have the simultaneous beneficial mutations than the malarial parasite?
tribune7
August 8, 2007, 07:26 PM PST
PaV, Sure, there is no difference between moving inward or outward in the probability distribution. Each significance level corresponds to a particular cut-off point, the start of the rejection region. In statistics, significance levels are typically 5% or 1% or something similar. The smaller the better, but if they're too small we cannot reject anything (which is what we want to do). So we compromise, and the choice is really quite arbitrary. The UPB is simply a drastic significance level (see my post 166). As for your proteins, later!
olofsson
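To illustrate the point about cut-off points and rejection regions, a minimal Python sketch using the Caputo-style null model discussed in this thread (41 independent 50/50 draws); this is an editorial sketch, not Olofsson's own computation:

# A significance level is just a cut-off in the tail of the null
# distribution; the UPB is an extremely small one.
from math import comb

n = 41  # Caputo-style null model: 41 draws, each Democrat-first with probability 1/2

def upper_tail(k):
    """P(X >= k) when X ~ Binomial(41, 1/2)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

def cutoff(alpha):
    """Smallest k whose upper-tail probability is <= alpha, or None if no such k exists."""
    for k in range(n + 1):
        if upper_tail(k) <= alpha:
            return k
    return None

for alpha in (0.05, 0.01, 1e-150):
    print(alpha, cutoff(alpha))
# At 5% and 1% there is an ordinary rejection region, but at a UPB-sized
# level there is none: even the most extreme outcome (all 41 one way) has
# probability 2**-41, roughly 4.5e-13, nowhere near 1e-150.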
August 8, 2007, 07:02 PM PST
PaV, Yes, that is it. Looking forward to your explanation (keeping in mind that I don't think it's a big deal). As for your earlier post, for one who loathes statistics, you're not doing too bad! :) More later!
olofsson
August 8, 2007, 06:39 PM PST
P.O. As to the suspected error on p. 59 of EoE, are you pointing out that Behe speaks of two mutations and that for the first mutation he has odds of 1 in 10^12, and for the second, he uses 1 in 10^8? I believe I can explain this, if this is the error you suspect.
PaV
August 8, 2007, 06:12 PM PST
tribune7 [258], No particular mutation, any mutation. No mutation that occurs in humans can ever come even remotely close to Behe's CCC regardless of how useful it is, simply because of population sizes. That's why I fail to see why CCC - 1 in 10^20 - is a valid benchmark for the plausibility of Darwinian evolution.
olofsson
August 8, 2007, 05:50 PM PST
tribune7 [257], I'm here to discuss design inference, not personal beliefs. But I have to say, considering how incredibly complicated the flagellum is with all those little parts, it is most likely due to chance. Whether it is intelligent chance or not is another matter.
olofsson
August 8, 2007, 05:38 PM PST
"OK, you actually asked a question. None that I am aware of, but again, I'm no biologist." As you keep pointing out. But assumptions about biology seem to be coloring your mathematical analysis. For instance, you ask, "Why should a mutation that has only appeared once in humans be deemed a trillion times less useful than one that has appeared only once in bacteria?" What mutation is that?
tribune7
August 8, 2007, 05:08 PM PST
"The EF does not allow us to ask these questions." So you don't have an opinion?
tribune7
August 8, 2007, 04:49 PM PST
P.O.: "As I have pointed out many times, if you put Caputo and the flagellum side by side, you notice that the specification is absent in the latter (see my post 106, near the end). We can identify E in both examples, but in the flagellum, there is no E*. I have asked you before to no avail."

Maybe the answer to your dilemma lies in asking the question: how do we form the "rejection region"? What I mean is this. Implicit in the Caputo case, and in our discussion (both yours and ours) about it, is that as one moves along the probability distribution, i.e., as one moves further and further away from the center (peak) of the distribution and towards the edges, the probabilities get exceedingly smaller. Let's face it: in the end, there is no rule or law that establishes the "rejection region"; rather, it is established inductively, or intuitively. In the Caputo case, e.g., 38% is close to the peak, and, obviously, 1 in 50 billion is out toward the edges. But what is the rule for establishing the "rejection region"? As far as I can see, there isn't any. We simply agree that a "chance" event that is highly improbable (1 in 50 billion) didn't come about by 'chance'. We do this, it appears to me, intuitively, inductively; it's something we infer. And, as your paper points out, it is an inference you came to quickly yourself.

Well, what if instead our journey along the probability distribution doesn't begin from the center? What if, instead, we begin from the extreme (almost infinite) end of the probability distribution and work our way to the center? Wouldn't it be likewise true that at some point along this distribution we could no longer 'rule out chance' as a cause for some event? Wouldn't we then mark this as the end of the "rejection region", wherein, on passing beyond this point on our journey to the center of the probability distribution, "chance" is now a plausible explanation for an event?

I don't see any difference between these two scenarios: both involve the same probability distribution, both establish a "rejection region", and in both cases what 'establishes' the "rejection region" is a somewhat inductive, intuitive sense of improbability, i.e., a probabilistic inference. Intelligent beings are comfortable doing this; we have some native sense of improbabilities. What WD has done is simply to establish the UPB as the point along the probability distribution that defines the "rejection region". Thus defined, what does it matter whether one moves from the center to the edge (as you do, P.O., in the Caputo case, comparing the 38% pattern versus the 1 in 50 billion pattern) or one begins at the extreme end of the probability distribution and moves toward the center, with the proviso that once a specification/pattern exceeds the UPB in probability, "chance" can no longer be 'ruled out'?

Why is it necessary to "know" what each possible specification looks like? In the Caputo case, we know that, given the rules, the pattern is going to contain nothing but D's and R's, that they will total 41 (or was it 42?) such determinations, and that each such determination has a 50% chance of occurring naturally. Well, as I pointed out in an earlier post, we know the "rules" for proteins: (1) they're made up of amino acids; (2) each amino acid has a chance of occurring of 1 in 22; (3) specified proteins are of lengths as many as 300 a.a. long.***[see below] Here the a.a. are equivalent to the D's and R's.

Here the total number of determinations is 300, whereas in the Caputo case it was 41. We run the numbers and we get 22^300, which is way beyond the UPB in improbability, assuming a chance hypothesis. In terms of statistical theory, I just don't see how this approach violates the canons of statistics.

***[from above] (I might add that we also know that outside of the cell, human intervention notwithstanding, proteins don't exist. That is, outside of biological life, the probability of running into a protein is zero. So, in a way, according to "chances", proteins have zero chance of existing. This, it would seem, should rule out chance as a cause for proteins.)
PaV
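A minimal sketch of the arithmetic in the comment above, taking PaV's stipulations at face value (each of 300 residues treated as an independent 1-in-22 draw; this is his counting model, not a biochemical one) and comparing against Dembski's universal probability bound of 1 in 10^150:

from math import log10

UPB_EXPONENT = 150               # Dembski's universal probability bound, 1 in 10^150
protein_space = 22**300          # 22 possibilities at each of 300 positions, per PaV

print(f"22^300 is roughly 10^{log10(protein_space):.0f}")       # about 10^403
print("beyond the UPB:", log10(protein_space) > UPB_EXPONENT)   # True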
August 8, 2007, 04:45 PM PST
tribune7 [250], OK, you actually asked a question. None that I am aware of, but again, I'm no biologist.
olofsson
August 8, 2007, 04:41 PM PST
Anybody who is still interested in the EF and my objections: see post [241]. There is a question at the end.
olofsson
August 8, 2007, 04:37 PM PST
Kairosfocus as a debate referee is like Roger Federer being a referee at Wimbledon! :)
olofsson
August 8, 2007, 04:36 PM PST
tribune7 [250], I will think it through, to the best of my capability. Off the top of my head, though, I think estimated mutation rates are not only less than perfect but probably not very good at all, as they depend on population size. Why should a mutation that has only appeared once in humans be deemed a trillion times less useful than one that has appeared only once in bacteria? Just because there is nothing better, it might not be a good idea to use something that is flawed. Note that this problem is not restricted to the ID/Darwinism debate but ought to be of interest to evolutionary biologists as well.
olofsson
August 8, 2007, 04:35 PM PST
tribune7 [249], The EF does not allow us to ask these questions. See my post 249.
olofsson
August 8, 2007, 04:30 PM PST
PO: "I question why ranking mutation rates is equivalent to ranking 'evolutionary challenge'." Now, think this through. Mutation rates may not be perfect, but what other objective criterion is there to rank "evolutionary challenge"?
tribune7
August 8, 2007, 04:27 PM PST
Kairosfocus, Interesting that you submit, on excellent grounds, that agency is the best explanation, as this is precisely the kind of conclusion you cannot reach in the Fisherian eliminative paradigm. To talk about "best explanation" requires a comparative approach, which puts you in Elliott Sober's camp. I and Dr D maintain that only elimination is possible for design inference. Or do we? Ah, but Dr D himself succumbs to the temptation of comparison in the "Elimination vs Comparison" paper, on page 5: "If we can spot an independently given pattern [...] then it's more plausible that some end-directed agent or process produced the outcome [...] than that it simply by chance ended up conforming to the pattern." The boldfacing of "more plausible" is mine, to point out how extremely difficult it is to stay on the straight and narrow Fisherian path. Prof PO, The Last Fisherian
olofsson
August 8, 2007, 04:24 PM PST
PO: "We can never reject the design hypothesis." PO, what is more reasonable: the design hypothesis or any other?
tribune7
August 8, 2007, 04:03 PM PST
Atom [243], I may be wrong on a number of points, and I have written Prof Behe to clear up the situation. It is true that more mutations are required for chloroquine resistance than for atovaquone resistance (2 vs 1, according to Behe), but that is not really my point, as I question why ranking mutation rates is equivalent to ranking "evolutionary challenge" or "usefulness of mutations." Anyway, I'll keep thinking about it.
olofsson
August 8, 2007, 03:54 PM PST
Atom [242], We can never reject the design hypothesis because that would require us to state it and compute the likelihood of the data under the design hypothesis. Remember that the EF is eliminative, not comparative (which is something that Elliott Sober has a problem with but I don't). I suppose that you might mean "fail to reject the chance hypothesis."
olofsson
August 8, 2007, 03:49 PM PST
PS: Sorry, 10^310 Q-states overall.
kairosfocus
August 8, 2007, 03:41 PM PST
Still going strong . . . A few quick comments, not in any particular order:

1] WD and "right" of rebuttal: I thought earlier to keep what was not public private, especially as the request Prof PO makes, apart from unusual influence and openness, will most likely make but little difference to the well-known situation [cf. the fate of Sternberg etc.]; i.e., IMHCO the operative issue is that WD would be petitioning for access at sufferance, and on long track record that is unlikely given the general state of journal politics and polarisation relative to design issues. (Of course, I did not foresee that that could be "turned" into an insinuation of bad faith on my part. I would love to be happily surprised, but I ain't holdin' my breath waiting for it.)

2] Atom: A bright light indeed. Bon voyage . . .

3] Caputo: Actually, the Caputo case is an illustration of configuration space at work, and of the issue of likely outcomes on reasonable probabilistic resources. We in effect have a space of 2^41 ~ 2.2*10^12 cells, with 41 clusters from 41 R to 20 R/21 D to 41 D. At the peak, the near 50-50 splits, we have two clusters of 2.7*10^11 cells each, or about 12.2%. One step away from that, and on to the end of the distribution, we have about 38% of the cells. In short, this is a classic inverse-T distribution typical of statistical thermodynamics and related situations, i.e. sharply peaked. Relative to these numbers, WD's E*, the last two configs on the right taken as an extreme, is about 1 in 52 billion, indeed. On these numbers and with a reasonable sample of cases, we would not expect to see the sort of distribution Caputo saw, as the most likely outcomes would cluster near the middle of the distribution. So, when we see something on the right leg of the inverted T and it also fits an independent pattern (advantage for Mr C's party), we are right to infer that design is the likeliest explanation. Worse, the report that the actual drawings were unobserved adds opportunity to motive and means. Equally, an expansion of the reasonable rejection region to 38% of the distribution is not defensible, as this is well within the reasonable expectation of observation. I guess this is for the record, but it will allow onlookers to judge for themselves whether I am simply trying to win a debate by any tactics that come to mind, or actually care about the balance of the case on the merits. (Let's just say my views on that wicked art we call "debate" are not exactly a secret; they are why I recently refused to sit as a judge in an inter-island debate competition.)

4] Flagella and proteins: I take on board the new information (note that word) on specific proteins, though of course they make no material difference to the underlying issue raised in my openly acknowledged crude calculations. BTW, as a practical matter, with the 4-state elements in DNA, if we have a string of DNA of over 500-1,000 base pairs, i.e. double to quadruple the length that gives 10^150 states, we are very reasonably in the region where the configuration space is so large that islands of functionality are comfortably sparse, beyond the reach of island-hopping strategies such as RM + NS starting at arbitrary states. For 4^500 ~ 1.07*10^301 and 4^1,000 ~ 1.15*10^602. To give an idea of what even the first of these numbers means, take our observed cosmos of ~10^80 atoms and expand it so that every atom becomes a new sub-universe of the same order; this gives 10^80 sub-universes of 10^80 atoms, or 10^160 atoms. (At 10^150 quantum states per universe across its lifetime, that would give us 10^230 quantum states, total.) Thus, the config-space approach is both a generalisation of the statistical-distribution approach and quite capable of taking on the flagellum issue, even just the first corner of it we outlined above being enough to make the material point: 4 proteins at ~300 base pairs each gets us to 1200 base pairs. (Also, as I and others have pointed out above, a motor without a steering wheel and driver is a recipe for trouble. That is, the system is complex beyond the reach of any reasonable probabilistic resources, is functionally specified, and is information-based.) I see too that someone aptly noted that the issue is not whether other locks exist with combinations, but this particular lock and its combination. So, the formation of an acid-ion powered, controlled, forward/reverse-drive outboard motor in bacteria, in the face of Behe's edge, is to be explained. I submit, on excellent grounds, that agency is the best explanation. Moretime, GEM of TKI
kairosfocus
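A quick numerical check of the Caputo and configuration-space figures quoted in the comment above; a rough Python sketch under the same simplified model (41 independent 50/50 D/R draws, and 4-state DNA strings), not kairosfocus's own calculation:

from math import comb, log10

cells = 2**41                          # ~2.2e12 possible D/R sequences
central = comb(41, 21)                 # size of one of the two central clusters
extreme = 1 + 41                       # all 41 D, plus the 41 ways to get 40 D

print(f"total configurations:  {cells:.3e}")                 # ~2.199e12
print(f"central cluster share: {central / cells:.1%}")        # ~12.2%
print(f"extreme right tail:    1 in {cells / extreme:.2e}")   # ~1 in 5.2e10 (52 billion)

# Config-space sizes for the DNA string lengths mentioned above:
print(f"4^500  ~ 10^{500 * log10(4):.0f}")    # ~10^301
print(f"4^1000 ~ 10^{1000 * log10(4):.0f}")   # ~10^602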
August 8, 2007, 03:36 PM PST
"There is no a priori reason that resistance to chloroquine is that much harder to develop than resistance to atovaquone (without any expertise in Chemistry, I think the molecules are very similar in size, composition, etc)."
It has been a month or so since I finished EOE, so this is from memory, but I think you may be mistaken on this. I think one type of resistance takes more mutations than does the other (lurkers with the book handy can correct or confirm my point). So theoretically we expect the "multi-step" one to take longer to find through blind search. When we look at the empirical data, this is what we see, and our theoretical estimate is confirmed. If the two types of resistance were caused by an identical number of mutated bases, it would be odd indeed for one to take much longer than the other to come about by chance. So this also leads me to believe that you may have made a wrong assumption in your reading of Behe's ideas.
Atom
August 8, 2007, 03:04 PM PST
PO, Lol at the distraction. To everyone who sent congratulations and compliments, thank you. I did post a comment with links to my mug, but they seem lost. So sorry Aq.
PO (again), OK, so you agree that in principle we could use my approach to apply the EF, even if our numbers are so small that we reject the design hypothesis. (Which is fine.) We can reject design 30 times, keep sharpening our methods, then on the 31st try see that a design inference can be made. This isn't a problem for the filter, since it is designed to have a tendency to reject design (it is conservative) rather than have false positives.
"You just can't get anywhere near the UPB with a relative-frequency estimate so you could never reject a chance hypothesis."
Again, I haven't run the numbers, so I'm not sure of this. We would need to calculate how many bases long our N would have to be to get a one-in-10^150 isolation ratio. Then we'd have to see how many sequences of that length we have examined (to check for a BF-type device), and run the numbers. We could do the same with the second method I outlined (dealing with proteins and sub-components, not DNA bases). But either way, it is just a matter of calculation at that point, not one of theoretical issues. Would you agree?
Atom
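As a rough answer to the first calculation Atom mentions (how long a DNA string would have to be before a single specified sequence is isolated to 1 part in 10^150 of its configuration space), a sketch assuming only 4 equally likely bases per position:

from math import ceil, log10

target_exponent = 150                    # want 4^N >= 10^150
N = ceil(target_exponent / log10(4))     # smallest N satisfying this

print(N)                     # 250 bases
print(N * log10(4) >= 150)   # True; 249 bases would fall just short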
August 8, 2007, 02:43 PM PST
Kairosfocus [219], "2. the flagellum is a specific type" Yes, and in applying the EF, we need to form specifications, that is, sets of specific types. Whether or not the "evo mat paradigm" is deeply challenged is not the issue; the issue is how to apply the EF. So, I repeat my hitherto unanswered question: what is E* in the flagellum example?
olofsson
August 8, 2007, 02:34 PM PST
Atom, If you need an update on the Bayes issue, see my post 214.
olofsson
August 8, 2007, 02:31 PM PST
Atom [230], I meant that if an event is defined to have some certain property based on its observed relative frequency f alone, obviously you will not see it in a population that is much smaller than 1/f. So even if we were to discover a great mutation that helped the caveman suddenly be able to solve differential equations, it would not qualify as Behe's "CCC" event because there have not been enough humans. In other words, I think it is suspect to quantify and rank mutation events by their observed relative frequency. Even if we had a way to independently quantify the "usefulness" of a mutation or the "evolutionary challenge", there is no reason it would coincide with observed mutation rates. For example, the only reason we can say that chloroquine resistance is 10^8 times more complex than atovaquone resistance is based on estimated mutation rates. There is no a priori reason that resistance to chloroquine is that much harder to develop than resistance to atovaquone (without any expertise in chemistry, I think the molecules are very similar in size, composition, etc.). The argument that "because the malaria parasite needed a 1-in-10^20 mutation event to become resistant to chloroquine, humans must have experienced mutations of the same probability" just does not seem very persuasive to me. By the way, there is an error in Behe's book, on p. 59, regarding the numbers 10^20, 10^12, and 10^8. It is not important, but see if you can spot it (and ask Kairosfocus if you talk to him). Finally, I do not know anything about biochemistry, so I have no way to comment on protein-protein binding sites. I suppose that any probability calculations there are based on assumptions rather than data, though.
olofsson
August 8, 2007, 02:11 PM PST
Congrats Atom!
scordova
August 8, 2007, 01:37 PM PST
