Uncommon Descent Serving The Intelligent Design Community

Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley

Categories: Intelligent Design

Kevin Padian’s review in NATURE of several recent books on the Dover trial says more about Padian and NATURE than it does about the books under review. Indeed, the review and its inclusion in NATURE are emblematic of the new low to which the scientific community has sunk in discussing ID. Bigotry, cluelessness, and misrepresentation don’t matter so long as the case against ID is made with sufficient vigor and vitriol.

Judge Jones, who headed the Pennsylvania Liquor Control Board before assuming a federal judgeship, is now a towering intellectual worthy of multiple honorary doctorates on account of his Dover decision, which he largely cribbed from the ACLU’s and NCSE’s playbook. Kevin Padian, for his yeoman’s service in the cause of defeating ID, is no doubt looking at an endowed chair at Berkeley and membership in the National Academy of Sciences. And that for a man who betrays no more sophistication in critiquing ID than Archie Bunker.

Kevin Padian and Archie Bunker

For Padian’s review, see NATURE 448, 253-254 (19 July 2007) | doi:10.1038/448253a; Published online 18 July 2007, available online here. For a response by David Tyler to Padian’s historical revisionism, go here.

One of the targets of Padian’s review is me. Here is Padian’s take on my work: “His [Dembski’s] notion of ‘specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”

Well, actually, my work on the explanatory filter first appeared in my book THE DESIGN INFERENCE, which was a peer-reviewed monograph with Cambridge University Press (Cambridge Studies in Probability, Induction, and Decision Theory). This work was also the subject of my doctoral dissertation from the University of Illinois. So the pretense that this work was not properly vetted is nonsense.

As for “the withering criticism” of my work “from actual mathematicians,” which mathematicians does Padian have in mind? Does he mean Jeff Shallit, whose expertise is in computational number theory, not probability theory, and who, after writing up a ham-fisted critique of my book NO FREE LUNCH, has explicitly notified me that he henceforth refuses to engage my subsequent technical work (see my technical papers on the mathematical foundations of ID at www.designinference.com as well as the papers at www.evolutionaryinformatics.org)? Does Padian mean Wesley Elsberry, Shallit’s sidekick, whose PhD is from the wildlife fisheries department at Texas A&M? Does Padian mean Richard Wein, whose 50,000-word response to my book NO FREE LUNCH is widely cited, though Wein holds no more than a bachelor’s degree in statistics? Does Padian mean Elliott Sober, who is a philosopher and whose critique of my work along Bayesian lines is itself deeply problematic (for my response to Sober go here)? Does he mean Thomas Schneider, who is a biologist who dabbles in information theory, and not very well at that (see my “withering critique” with Bob Marks of his work on the evolution of nucleotide binding sites here)? Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them. But as I indicated in that book, it was about sketching an intellectual program rather than filling in the details, which would await further work (as is being done at Robert Marks’s Evolutionary Informatics Lab — www.evolutionaryinformatics.org).

The record of mathematical criticism of my work remains diffuse and unconvincing. On the flip side, there are plenty of mathematicians and mathematically competent scientists who have found my work compelling and whose stature exceeds that of my critics:

John Lennox, who is a mathematician on the faculty of the University of Oxford and is debating Richard Dawkins in October on the topic of whether science has rendered God obsolete (see here for the debate), has this to say about my book NO FREE LUNCH: “In this important work Dembski applies to evolutionary theory the conceptual apparatus of the theory of intelligent design developed in his acclaimed book The Design Inference. He gives a penetrating critical analysis of the current attempt to underpin the neo-Darwinian synthesis by means of mathematics. Using recent information-theoretic “no free lunch” theorems, he shows in particular that evolutionary algorithms are by their very nature incapable of generating the complex specified information which lies at the heart of living systems. His results have such profound implications, not only for origin of life research and macroevolutionary theory, but also for the materialistic or naturalistic assumptions that often underlie them, that this book is essential reading for all interested in the leading edge of current thinking on the origin of information.”

Moshe Koppel, an Israeli mathematician at Bar-Ilan University, has this to say about the same book: “Dembski lays the foundations for a research project aimed at answering one of the most fundamental scientific questions of our time: what is the maximal specified complexity that can be reasonably expected to emerge (in a given time frame) with and without various design assumptions.”

Frank Tipler, who holds joint appointments in mathematics and physics at Tulane, has this to say about the book: “In No Free Lunch, William Dembski gives the most profound challenge to the Modern Synthetic Theory of Evolution since this theory was first formulated in the 1930s. I differ from Dembski on some points, mainly in ways which strengthen his conclusion.”

Paul Davies, a physicist with solid math skills, says this about my general project of detecting design: “Dembski’s attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I’m concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” Apparently Padian disagrees.

Finally, Texas A&M awarded me the Trotter Prize jointly with Stuart Kauffman in 2005 for my work on design detection. The committee that recommended the award included individuals with mathematical competence. By the way, other recipients of this award include Charlie Townes, Francis Crick, Alan Guth, John Polkinghorne, Paul Davies, Robert Shapiro, Freeman Dyson, Bill Phillips, and Simon Conway Morris.

Do I expect a retraction from NATURE or an apology from Padian? I’m not holding my breath. It seems that the modus operandi of ID critics is this: Imagine what you would most like to be wrong with ID and its proponents and then simply, bald-facedly accuse ID and its proponents of being wrong in that way. It’s called wish-fulfillment. Would it help to derail ID to characterize Dembski as a mathematical klutz? Then characterize him as a mathematical klutz. As for providing evidence for that claim, don’t bother. If NATURE requires no evidence, then certainly the rest of the scientific community bears no such burden.

Comments
All: The thread that will not die indeed. A few points, pardon selectiveness, being summary and thematic [save for no 1] rather than detailed on points – insomnia can carry one only so far: 1] PaV, on Likelihood etc: While there are distinctions between the two relevant schools, for our purposes [Caputo etc] it seems they speak with more or less one voice, nuances and technicalities aside. Wiki, that ever handy 101 reference, gives a first rough-cut slice on the idea of “Likelihood”: . . . consider a model which gives the probability density function of observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function L(θ | x) = P(X=x | θ) is a likelihood function of θ: it gives a measure of how "likely" any particular value of θ is, if we know that X has the value x. Two likelihood functions are equivalent if one is a scalar multiple of the other . . . Boiling down, via PaVian simmering [perhaps for “onlookers” wanting a simplified summary of the “simple” presentation above!], the idea here is that we have some alleged random variable that takes an observed value x from a set of possible values X. We then wish to get at how likely a given value of θ is, given the observation of the value x. The likelihood of θ on observing x is the conditional probability P(X=x | θ), which then gets us into the terror-fitted depths of Bayes' theorem and its practical applications. BT is of course: P[A|B] = (P[B|A]*P[A])/P[B], with the reversed conditional probability P[B|A] being, in this case, “the likelihood of A given B.” (A conditional probability P[A|B] is in effect the ratio of the probability of A and B happening jointly to the probability of B; or, the probability of A given the condition that B, another event of more or less probability, has occurred. The rest falls out algebraically, once we make that basic substitution.) We then read the BT eqn just now as: posterior probability is likelihood times prior probability, all divided by a normalising constant. Problems start to come in with the “need” to know P[A] and P[B] directly – hence part of why prof PO was talking about such priors. The contrast is that on elimination approaches, we are in effect saying that [a] here is a credible distribution on a variable, on the relevant chance hyp. Then, we sample/observe real-world cases, and in such a case where we have a sharp peak holding most of the distribution, extreme cases are unlikely to be met with IF the chance hyp holds, cf. my illustration in 68 above, and the follow-up on it. What is happening is that had Caputo done as he testified, he would not have been likely to see the 1-in-52-billion chance outcome. So it is most reasonable that he did not do as he declared. And since in such a contingent situation, natural regularity is not material, we see that agency and intent better explain the outcome than chance. This is of course the typical way in which most statistical inference is done in science and management decision making, and even as brought out in the courtroom. It is harder to justify theoretically, but has such a track record of empirical success that it is generally regarded as sufficiently reliable to be used. Going back to the underlying issue of why the EF works, we can see that we know that things are caused through chance, necessity and/or agency. In situations where multiple-valued outcomes are reasonable [e.g. on tossing a die], then we see that the effective choice is chance or agency.
Thence, we see that on taking chance as null, using a credible model of what chance can do, e.g. the Caputo coin-tossing model, or the stat default Laplacian equiprobable assumption, etc, we compare what we see to what we would expect from chance. If there is a sufficiently unlikely outcome, we, for excellent and reliable reasons, revert to agency as the explanation. We do it in day-to-day life all the time, and in science all the time. It also sends a loud and clear message on the most likely cause of the flagellum, etc. Therein lieth the rub. The debate here IMHCO comes up in the main because of selective hyper-skepticism triggered by the possible worldview implications, not because of some serious and substantial defect in what is being done. . . .kairosfocus
August 14, 2007, 12:58 AM PDT
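A minimal sketch of the computation behind the 1-in-52-billion figure, assuming the standard reading of the Caputo case used throughout this thread (40 Ds in 41 independent drawings) and the fair-drawing null hypothesis:

```python
# Tail probability: P(at least 40 Ds in 41 independent fair drawings).
from math import comb

n = 41
tail = sum(comb(n, k) for k in (40, 41)) / 2**n
print(tail)  # ~1.9e-11, i.e. roughly 1 in 52 billion
```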
Darn html! ...values p less than 1/2. I see awful typos above, sorry about those. Anyway, the above was all about different chance hypotheses, each one corresponding to a value of p. When Dr D analyzes the Caputo case, he starts by ruling out all chance hypotheses except p=1/2. When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred (but only in the sense that chance is ruled out; there is no alternative design hypothesis of how he cheated). If this can be done, by inspecting his equipment or taking his word for it (Capone-Bluto's word???), we can argue that our initial chance model was not correct and conclude that chance was not involved at all. So, the EF neither addeth nor taketh away from what a statistician would do. We're all in agreement here. I haven't seen the arguments the prosecution used in the original case but I'm sure they had statisticians on their payroll. But when it comes to applications to evolutionary biology, it is in my opinion impossible to form a design hypothesis. Elliott Sober claims that we must but I claim that we can't. Regardless of whether the data is the flagellum or Atom's fiancee, how would we compute its probability under a design hypothesis? I'm not "distancing myself" from Sober now as I have never held his position. I agree with Dr D insofar as it is not a logical error to reject a hypothesis without superseding it (moon-made-of-cheese, which Wallace and Gromit worked hard at!). As for your calculations, let's say (for the sake of argument!) that you manage to rule out the uniform chance hypothesis (Caputo p=1/2). But how do you rule out other chance hypotheses (Caputo p=37/38)? Recall that these chance hypotheses would be formed by considering billions of years of evolution, not an easy task. I know we may perhaps talk past each other to some extent, but hopefully there is a little more understanding each time! Going from song quotes to movie quotes: "Die you b...."! Prof POolofsson
August 13, 2007, 11:13 PM PDT
PaV, If it doesn't die on its own, we'll beat it to death. Anyway, by likelihood we mean the probability of the observed data as a function of the parameter p, here L(p)=p^40*(1-p). Thus, with p=1/2, the likelihood is (1/2)^41, about 1 in 2 trillion (the oft-quoted 1-in-50-billion figure is the probability of the whole rejection region). Now, Caputo could have "cheated by chance," for example by spinning a roulette wheel and only choosing R when unlucky 13 came up. Then the likelihood is L(37/38)=0.009, not that low anymore. The likelihood approach would now choose the value of p that maximizes the likelihood, which turns out to be p=40/41 (the so-called maximum likelihood estimator). So, now that I've tried to explain better what I mean by likelihood, you see that it is easy to find the hypothesis that confers the highest likelihood on the data. The hypothesis test only rules out H0:p=1/2 in favor of HA:p>1/2, thus, a combination of elimination and likelihood. Note that it is reasonable to have HA; the data speaks against H0 but not in favor of values polofsson
August 13, 2007, 11:00 PM PDT
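A minimal sketch of the likelihood values discussed in the comment above, using the stated L(p) = p^40(1-p) for 40 Ds and 1 R:

```python
# Likelihood of the Caputo data (40 Ds, 1 R) as a function of p.
def L(p):
    return p**40 * (1 - p)

print(L(1/2))    # ~4.5e-13: the fair-drawing hypothesis
print(L(37/38))  # ~0.009: the roulette-wheel "cheating by chance" hypothesis
print(L(40/41))  # ~0.0091: the likelihood is maximized at the MLE p = 40/41
```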
P.O. Thank you for your response. It is elucidating. Let me put a few of your thoughts together and so work toward a question/statement. First, you say, "The likelihood approach picks the hypothesis that confers the highest probability (likelihood) on the data." And then you say, "If you read my article, you will see no mention of competing hypotheses, likelihood, or prior/posterior probabilities. I am entirely within the eliminative paradigm." Now, the method you used in computing the Caputo case isn't something, it seems to me, that is consistent with the "likelihood" approach. In fact, it seems to me that it is impossible to "confer the highest probability (likelihood) on the data" when you find yourself in the 'rejection regions'. What I mean is this: it would be easy, for example, to verify the probabilities associated with the method Caputo used when you are at, or near, the peak of the distribution: e.g., you could run 300 experiments, i.e., come up with 300 samples, and calculate the odds "over/[on] the data". I'm sure they would be close to the peak. Thus, this particular "likelihood" could be calculated "on the data" (I'm hoping I'm understanding this last phrase correctly here). But what about the example of the Caputo case itself, where, if we are to believe Caputo, his method turned up 40 D's and 1 R? The odds are 1 in 50 billion of that happening. How many "samples" would have to be run to come up with even "one" such instance of 40 D's and 1 R? Theoretically, 50 billion. It seems to me, then, that this "likelihood" would be very difficult (really, it would be impossible) to calculate "on the data". Perhaps you've already sensed this inadequacy, and, for that reason, now distance yourself from Sober. Having said that, though, it also seems to me that the kind of calculation I proposed in my last post should satisfy, if you do indeed find yourself "entirely within the eliminative paradigm", your misgivings about the "elimination" method that WD employs. I'm wondering about your reaction to what I propose. Could you comment? (And, indeed, this really IS the "thread that won't die"!)PaV
August 13, 2007, 09:41 PM PDT
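A hypothetical Monte Carlo sketch along the lines PaV proposes above; the sample size of one million is an arbitrary illustration:

```python
# Draw many 41-pick sequences under the fair hypothesis p = 1/2 and
# count how often 40 or more Ds turn up.
import random

trials = 10**6
hits = sum(sum(random.random() < 0.5 for _ in range(41)) >= 40
           for _ in range(trials))
print(hits / trials)  # almost surely 0: the true rate is ~1.9e-11, so on
                      # the order of 50 billion samples are needed per hit
```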
PaV, Yes, there are three major approaches: elimination, likelihood, and Bayesian. For "intelligent design inference," which we are discussing here, only elimination is at all possible. The other approaches require us to compute the probability of data under each hypothesis considered. The likelihood approach picks the hypothesis that confers the highest probability (likelihood) on the data. Bayesian analysis, in addition, assigns prior probabilities to the various hypotheses and then computes the posterior probabilities once data are observed (thus, only the Bayesian approach lets us talk about how probable the hypotheses themselves are). Clearly, there is no way of doing either, so for the point of "elimination vs comparison" the distinction likelihood/Bayes is not material (as Dr D also says in his "chapter 33"). As I said, I don't criticize from the same vantage point as Sober (I have read his criticism, not just Dr D's account of it(!), and discussed it with him). If you read my article, you will see no mention of competing hypotheses, likelihood, or prior/posterior probabilities. I am entirely within the eliminative paradigm. By the way, in theoretical and applied statistics, there is no pure eliminative ("Fisherian") approach, but a combination with the likelihood approach due to Neyman and Pearson. The Bayesian approach is gaining ground; as it tends to be computationally heavy, it was not feasible a few decades ago. These days it is used in email spam filters, Google's search engine, in clinical trials, and, on occasion, in court cases. Thus spake Prof PO.olofsson
August 13, 2007, 10:25 AM PDT
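For concreteness, a sketch of what the Bayesian alternative mentioned above would look like on the Caputo data; the uniform Beta(1,1) prior is an illustrative assumption, not anything proposed in the thread:

```python
# With a Beta(1,1) (uniform) prior on p and data of 40 Ds and 1 R,
# conjugacy gives a Beta(41, 2) posterior.
a, b = 1 + 40, 1 + 1
posterior_mean = a / (a + b)
print(posterior_mean)  # ~0.953, far from the fair value p = 0.5
```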
Since it's time, apparently, for final notes, here's this one. In looking through Dembski's NFL, in section 2.9 he discusses Sober's criticisms of WD's Design Inference. Sober, whom P.O. mentions right from the start, uses what he, Sober, terms a "likelihood approach" to statistics. What is done is that any hypothesis that can be formed is considered a "chance hypothesis" (even one that says something is designed); the probabilities that these "chance hypotheses" confer on the data are then compared, and an inference is made as to the best explanation. So that is why, it appears, the good professor refused to be described as a Bayesian, although Dembski's reasons for rejecting Sober's approach are much the same as those for rejecting the strictly Bayesian approach. This also explains the good professor's insistence on wanting to know what the probabilities associated with the bacterial flagellum are. They can be computed in the Caputo case, but not with the flagellum. Nonetheless, I think the analysis I presented certainly begins to get to the heart of any such probabilities. It strikes me that if one were to calculate the total number of proteins that exist at any one moment in time---those present in every cell of every creature that exists---then one could take this total number of proteins in existence and divide it by the 50 or so proteins that make up the flagellum, and then 'assert' that the number so calculated represents the total number of, in WD terminology, replicational opportunities for the flagellar proteins to exist. Then one would divide this rather large number by the probability space generated by each of the proteins in the flagellum multiplied together---which would end up beyond imagination. This would be the realistic approach. But the ultimately conservative approach is to simply divide the above calculated number of proteins that exist by the probability space of just ONE 300-residue protein, and, I'm confident, we would be well above the UPB.PaV
August 13, 2007, 07:21 AM PDT
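A quick check of the arithmetic behind PaV's conservative approach, assuming equiprobable amino acids at each position of a single 300-residue protein:

```python
# Sequence space of one 300-residue protein over 20 amino acids,
# compared on a log10 scale with Dembski's UPB of 10^150.
from math import log10

log_space = 300 * log10(20)  # ~390.3, i.e. 20^300 is about 10^390
log_upb = 150
print(log_space, log_space > log_upb)  # 390.3... True
```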
PO -- One last thought for a near-dead thread. Increasing the rejection region to a certain degree seems to require us to step out of the realm of what can be revealed by mathematics, into the realm of philosophy -- i.e. "why should a bacterium develop a flagellum to move?" which should lead us to "why should a bacterium develop at all?" Why not something else to fill the bacterial niche? And why should man be the only creature with the ability to create? Why didn't super-intelligent asexually reproducing beings evolve, or egg-layers with strong exoskeletons? And most significantly, why does anything -- mathematically speaking -- have to be? It seems the natural state of the universe is heat death -- toward which the 2nd Law of Thermodynamics says we are inevitably headed.tribune7
August 13, 2007, 06:51 AM PDT
Atom: Amen! God bless you both! GEM of TKIkairosfocus
August 13, 2007, 12:28 AM PDT
An observation about the mod policy -- near the beginning of the thread PO stated that ID was not creationism, and maybe that provided a clue he was here to debate in good faith. When I first glanced at his paper I thought for sure he was just going to be another name-calling troll who refuses to argue on the merits. Anyway, he turned out to be a great addition and was the prime force behind a great and history-making thread. PO, I hope you take KF's points about style to heart. Your paper would have stood (or not ;-) ) on its merits without a mention of creationist or creationism.tribune7
August 12, 2007, 04:06 PM PDT
GEM, Thanks for your recap...it helped me to understand what some of the "small" issues really were in your discussion. I think overall, everyone is still friends and the thread has been very educational. I know it has forced me to think on some issues and clarified my thinking as well. Thank you for the compliments on Luz, she is a light in this dark world. The countdown stands at roughly 26 days, which in my opinion is 26 days too many. :) "Beauty is fleeting and charm is deceptive, but a woman who fears the L-RD is to be praised." - Though she has the first two, it is the latter that made me want to keep her forever. And BTW, I shared with her all the nice comments you guys made about her, and her reaction? Quickly, with a laugh: "Post more pictures!!! LOL." I love my baby. She appreciates your gentlemanly compliments.Atom
August 12, 2007, 08:39 AM PDT
8] PO, 344: Dr D’s paper on elimination vs comparison presents the Bayesian arguments (from page 6 onward, nothing on the referenced page 4!) . . . In the above excerpted 2005 paper, WD begins on p. 1 by identifying the issue of Fisherian vs Bayesian inference, and addresses all critiques in that context, pointing out that the EF is a way to formalise and undergird what was already implicit in the Fisherian approach, as the excerpt above from p. 4 already notes. Thus, the underlying context of the discussion is Fisher vs Bayes on the issue of inference by elimination, with Elliott Sober as the leading critic, on a Bayesian premise. In that context, the discussion on p. 4 sits in that underlying context, and given that, as I have noted, PO begins his paper's discussion by introducing Sober by name [without citing the other side for balance], and then proceeds to introduce Caputo in that general context of the Bayesian side of the debate, the inference that he is using a Bayesian critique is quite natural. Indeed, the “biased vs fair coin” model as a Bayesian view on a case similar to Caputo is explicitly addressed on p. 2, bridges into the issue of probabilistic resources, and it is in that context that the expansion of rejection regions is raised on p. 4, as the connecting words and sequence will show at once. Then, on p. 5 he broadens the issue that specification leads to the inference that a specified and extremely improbable outcome is most likely intentional not accidental, and as he leads into p. 6, notes how Bayesians wish to block this “slippery slope” to design inference by insisting that one must produce a comparative hyp that specifically has better evidence for it before one may infer to design. But of course this leads to the problem of evaluating prior hypotheses and undermines the whole process of inference. Indeed WD says that, in the end, one adverts to Bayesian inference in contexts where the very improbability of the occurrence is what alerts you to the need to account for what has happened, i.e. an implicit, often intuitive, Fisherian-style inference, and more; cf WD for details. (In effect we can thus see on the “natural interpretation” model that PO, 2007 was discussing p[Caputo|Fair coin] vs p[Caputo|biased coin], and his dismissal of WD's use of the Court's note that the claimed selection process was fair is on a first look an implicit insistence on the comparative rather than the eliminative approach. But of course too, as I have noted, in the Caputo case, such a strong run to D sustained over decades -- i.e. even with an initially inadvertently biased coin -- soon becomes design by self-serving negligence.) Now, too, on his explicitly announcing that he was not a Bayesian, I accepted that claim, and specified to prof PO that my point in the main was, and BTW is, that the question is on the substance of the critique [which is in a Bayesian context . . .] and that the arbitrary expansion of the RR without reference to the issue of probabilistic resources -- as I have again cited -- is the issue that has to be answered to. Relative to that, given that reference to the academic debate starts on p. 1 [specific discussion of Bayesian claims on p. 6 onward notwithstanding], this latest claim above is, sadly, simply yet another distraction with unfortunate and unnecessary ad hominem overtones. 9] PO, 344: my claim has been that we cannot just consider the flagellum (Dr D’s E) but must consider it as an outcome in a set of many possible outcomes (Dr D’s E*).
I don’t know how to do this, and do not believe that it can be done satisfactorily. Again, specification is, as WD pointed out in both his 2005 papers referenced in this thread, far broader than RRs relative to statistical inferences on probability distributions. In particular, functionally specified, complex information is a valid type of specification, and one that we routinely infer to as a sign of agency – we do not believe the posts in this thread are simply lucky noise absent demonstrative proof otherwise. For in context we know that agents are possible and that they routinely create FSCI. So, on encountering FSCI, we infer to agents. [In effect this surfaces the underlying worldview-level question-begging that too often lies under objections on the flagellum etc, i.e. a ruling out – on no evidence! -- that agents could have been active at the relevant time. But, if we accept the possibility of agents, and then observe the significance of the observed FSCI, we can easily see that this now provides actual empirical evidence of agent action at the time and place in question.] 10] PO, 344: . . . I point out you had a problem by my mentioning it, whereas Dr D has encouraged his followers to perform their own sokalesque hoaxes and even get paid for it. At least you know the "right" beer to choose! [Though I am not a beer drinker.] Checking my email . . . Nope, not in the inbox, nor in the bulk box. Try sending again. On the main issue, I think that WD is probably not advocating that people misrepresent the relevant technical issues to an experimental non-peer-reviewed journal in which one is being trusted to play above board. This last is what Sokal did. [I have no objection in principle to playing devil's advocate or spoofing to make the point that a peer or editorial review process is manifestly improper, especially when, on a track record of unfairness, straight submissions have a negligibly-different-from-zero probability of being published.] 11] Banning policy: Having seen and been a victim of the sort of abuse that often takes over blog threads on this general topic, I sympathise with a strong policy on abuse and willful obtuseness or mere empty regurgitation of a party line. In some cases I think WD has gone overboard, and judging by a recent reversal of a ban, he agrees with me too. [NB: I note here that, even through our strong disagreements, I miss Pixie. Don't know why he was pulled.] GEM of TKIkairosfocus
August 12, 2007, 04:31 AM PDT
6] PO, 344: I am sorry we had to spend so much time on Caputo. If I had known, I’d have chosen another example, believe me! I think most of you understand that I am not using it to criticize the filter, quite the opposite . . . Mr Kf got stuck on “expanding the rejection region” and repeats it to this day despite many attempts by me and others to explain how I used the Caputo example. H'mm, let's recap again: the article begins with an unfortunately loaded term -- Creationists -- and an inappropriate example, Hoyle's 747 in a Junkyard; in effect simply dismissing the implied issues of the statistics of getting to extremely improbable and functional configurations by chance and necessity without agency. It then proceeds to a one-sided summary of the literature and issues, and the discussion of the Caputo case runs like this, in key part:
. . . In contrast [to the EF approach], a statistical hypothesis test of the data would typically start by making a few assumptions, thus establishing a model. If presented with Caputo’s sequence and asked whether it is likely to have been produced by a fair drawing procedure, a statistician [in context, as opposed to a design thinker, and omitting reference to WD's relevant qualifications] would first assume that the sequence was obtained by each time independently choosing D or R, such that D has an unknown probability p and R has probability 1 – p. The statistician would then form the null hypothesis that p = 1/2, which is the hypothesis of fairness. In this case, Caputo would be suspected of cheating in favor of Democrats so the alternative hypothesis would be that p > 1/2 [in context dismissing the on-the-record-since-1996 WD point that the Court, on Caputo's own testimony, accepted that the claimed selection process, if actually used, would have been fair], indicating that Ds were more likely to be chosen. [2007, p. 7.]
NB, he then infers to the rejection of the p = 1/2 hyp, and holds [dismissing, and indeed in context criticising design thinkers for adverting to, the actual context of a claimed fair selection process at work, as documented by WD since 1996] that only the inference to the aux hyp p > 1/2 is warranted. BTW, this also underscores the point that PO is here plainly critiquing the use of the EF in this case, contrary to what he has said above, cf. my comments in 20 – 21 on, and in 154 etc. That sets a very different context than we would pick up from PO, 344, for evaluating:
It is important to note that it is the probability of the rejection region, not of the individual outcome, that warrants rejection of a hypothesis. A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance. [2007, p. 7 again.]
Now, of course, the first sentence here excerpted is in effect what WD said in defining E* as the upper extremum from 1 R/40 D on, or about a 1-in-50-billion slice of the curve at an extreme, precisely the basic approach of Fisher in rejecting the null hyp that a given sample came from a chance population. IMHCO -- and pardon my turnaround of the rhetorical devices above to make the next point [I am illustrating how the rhetoric works, not making a personal attack] -- no “statistician” who properly understands the issue that a relatively small sample of a population is unlikely to be in whole or in part at its extreme would then glide straight into the second sentence. For, to suggest in effect that a sample falling in a proposed “rejection region” encompassing 38% of the curve -- i.e. odds of nearly 2 in 5 -- could be viewed by any person with even basic exposure to inferential statistics as credible evidence of the sample's being not from the relevant claimed distribution, is to set up a strawman. Far better would have been to directly address the point that WD makes on p. 4 of his 2005 paper on Fisher vs Bayes, that while some critics [coming from Bayesian approaches] raise the issue of arbitrary expansion of the RR,
what’s to prevent . . . [so expanding the RR] that any sample will always fall in some one of these rejection regions and therefore count as evidence against any chance hypothesis whatsoever? The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice). Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity [in context, they are of sufficiently low probability to be beyond the reasonable reach of the available probabilistic resources]. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity . . . [NB too how specification is broader than RRs]
7] PO, 344: Mr Kf repeated over and over that my criticism was Bayesian My consistent observation has been that prof PO first cited Mr Sober as if he were the final word on the subject, and then in addressing Caputo used the above-cited criticism, which is, on the evidence of WD's 2005 paper [cf. below!], a criticism of WD's reasoning by Bayesians. Of course, the actual material issue is that whoever has put the criticism, and from whatever background, it is invalid for reasons as I have also in brief part excerpted above from WD, 2005. This too, I have underscored, and it seems that prof PO in the end must agree on the merits, or he would have addressed me on the merits instead of as noted above. . . .kairosfocus
August 12, 2007, 04:24 AM PDT
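As a numerical check on the 38% figure quoted from the 2007 article, the binomial tail probability of at least 22 Ds in 41 fair drawings:

```python
# P(at least 22 Ds in 41 independent fair drawings), the probability
# of the expanded "rejection region" under discussion.
from math import comb

tail = sum(comb(41, k) for k in range(22, 42)) / 2**41
print(tail)  # ~0.378, matching the "about 38%" in the quoted passage
```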
Hi PaV, Prof PO, Jerry, Atom et al: The thread that refuses to die . . . 1] PaV, 340: As noted, your material point stands, minor errors and slips of memory notwithstanding. BTW, recall that, given the role of enzymes, there are dozens and dozens of those DNA-coded proteins involved in the cell's processes! [Indeed, as TBO's TMLO summarises, Hoyle and Wickramasinghe's calculation on odds of 1 in 10^40,000 against forming life based on cells by chance had to do with the odds against getting to the required cluster of enzymes for life.] On my response to Prof PO, that is not so much driven by pique as by analysis of the rhetorical pattern he has used, starting with my responses in 20 – 21 to his linked in 19. Sadly, he has not acknowledged the problems that stem from that pattern of loaded language, biased summary/survey of the literature, failure to cite and engage WD on what he said on the record long since, and more. It comes out in his analysis of Caputo, and in his onward handling of the flagellum. Then, even more unfortunately, it comes out in his handling of issues in this thread. I hope – and daresay, pray – that the experience will help him in the longer term as he reflects on how he has reasoned and argued. 2] Jerry, 341: Why should the calculations be restricted to 20 amino acids when there are 39 alternatives given the right hand and left hand versions. One, glycine, has no handedness. Fair point. On origin of life [OOL], the question of the other handedness and the many non-protein-forming amino acids comes in, but within life systems, the situation is in effect confined to the 20, with a few oddball exceptions. 3] J, 341: the formation of proteins as a chain of amino acids must have come about only after the information was assembled to construct them. In other words there isn’t any chicken or egg issue here. It couldn’t have been just polymers lying around, ready to be selected. They must have been prescribed or specified because the chances of the self assembly of a single strand of 100 of the same chirality is also way past the UPB let alone a whole suite of them. This directly follows on OOL, and you may find TBO's discussion, as onward linked through my always linked, interesting. 4] J, 341: I find it rather interesting how one could just say he believes this could happen by chance without addressing these issues. It does rival the belief in resurrection as faith-based. If you look at the parallel current [6th August] thread on the Image of Pots and Kettles, you will see that in my exchange with Prof Carl Sachs there, I argue that reason and faith are inextricably intertwined in the core of all worldviews and thus scientific research programmes – which are deeply embedded with worldview commitments. 5] Atom, 343: GEM, I agree with PaV in that you seem to have become cross with PO and vice versa, but I thank you both nonetheless for your contributions. . . . As noted above, to PaV, not so much cross as saddened and concerned. I do confess that the series of assertions in 305 and 310 came across as un-necessary, ad hominemish, atmosphere-clouding and fight-picking; thus, quite irritating. I responded, accordingly, with a few balancing remarks. The primary intent of those remarks was to highlight how the sort of comments in 305 and 310 can be persuasive without being actually cogent and sound on the issues. I hope I was not too annoyed in how I responded; if that was so, I am sorry for that. BTW, all the very best as your days of bachelorhood count down!
[On the evidence, you have chosen well indeed – there is a radiance there that more than lives up to that apt name, “light.” And though from moment to moment there will doubtless be times of challenge ahead, on balance the exchange of those ever so sobering vows is more than worth it! (Speaking from nearly 17 years on the other side of such a vow.)] . . .kairosfocus
August 12, 2007, 04:21 AM PDT
Hello yall, I guess we're closing the thread, which is probably about time. I am responsible for lengthening it by referring to Behe's Edge in passing when I really only came here to discuss the EF. In Dr D's original post, he notes that Jeff Shallit, arguably the most qualified mathematician on the list, has no expertise in probability. Thus, Dr D thinks that such expertise is necessary to understand his writings. Two points: (1) nobody on Dr D's "pro list" has such expertise either and (2) I do, and I have criticised the filter but was not mentioned on the "con list." So I introduced myself and posted a link to my article and then it took off to an apparently record-breaking session. Thanks to all for being interested in a factual and respectful debate (I never heard back from Michaels7, [191, 197]). I am sorry we had to spend so much time on Caputo. If I had known, I'd have chosen another example, believe me! I think most of you understand that I am not using it to criticize the filter, quite the opposite. If I had used it as criticism, I would have said: "Here is how the Caputo example can be used to argue against the EF..." and presented my arguments. Unfortunately, Mr Kf got stuck on "expanding the rejection region" and repeats it to this day despite many attempts by me and others to explain how I used the Caputo example. We also spent much time discussing Bayesianism, which is more interesting than Caputo although we perhaps discussed it for the wrong reasons. Again, if I actually wanted to criticize the EF from the Bayesian point of view, I would have said so. Mr Kf repeated over and over that my criticism was Bayesian (and PaV got on that wagon for a while although I hope I set him straight!) which I don't find very constructive. The basis for his criticism seems to be connected to Caputo and the supposed "expansion." Dr D's paper on elimination vs comparison presents the Bayesian arguments (from page 6 onward, nothing on the referenced page 4!) so everybody can read for themselves and see if they find any such arguments in my article. As I have said many times, feel free to contact me directly if you have questions. As for my promised reply to PaV and Atom, I have not forgotten but I don't want to go into details. My two main problems with the EF are (a) how do we determine the rejection region ("specification") and (b) how do we rule out chance hypotheses other than the uniform? We have mostly spent time on (a) and my claim has been that we cannot just consider the flagellum (Dr D's E) but must consider it as an outcome in a set of many possible outcomes (Dr D's E*). I don't know how to do this, and do not believe that it can be done satisfactorily. Anybody can try though, and why not write a real scholarly article on it and submit it for publication? I know there is a presumption that ID-friendly articles will not be published by the regular academic press, but there are other outlets you can use. Anything must be judged on its own merits. Finally, Mr Kairosfocus, I harbor no hard feelings. As for my post 305, the smiley didn't register due to a double parenthesis. It's there in 310 though. I say these things as if I were slapping you on the back whilst we're clinking bottles of Red Stripe. [On a side note, I sent you an email regarding Sokal's Hoax in which I point out you had a problem by my mentioning it, whereas Dr D has encouraged his followers to perform their own sokalesque hoaxes and even get paid for it. Just wonder what you think, that's all.]
With regards to Atom's last note, I am sorry to hear that people are bounced so easily. I decided to go here for a more interesting, although less sympathetic, exchange than I would get at an anti-ID blog. It has been a good experience. And I have finally seen one piece of evidence of intelligent design: Atom's fiancee! :D Cheers yall, Prof POolofsson
August 11, 2007, 12:56 PM PDT
A few thoughts on this thread: First, this has been my favorite thread at UD. Some have come close, but this one is the most fun and informative. Second, I think the mod policy should be relaxed a tad, to allow threads like this to happen more often. It is no surprise that this thread contains comments by someone who openly questions some aspects of ID. That is where the interesting turns come from. Usually people will get bounced for (what seems in my eyes) simply disagreeing and remaining obstinate in their disagreement. As long as they are not insulting IDers or saying "ID=Religion" (which at this point in the debate can only be due to negligence) I think we should allow them to stay. As PO has shown, the payoff is more interesting threads. Lastly, thanks Kairosfocus, PO, PaV, Tribune7 and the others. You've made it fun. GEM, I agree with PaV in that you seem to have become cross with PO and vice versa, but I thank you both nonetheless for your contributions. You are always a source of information and a definite asset to UD.Atom
August 11, 2007, 08:26 AM PDT
I meant to say "the self assembly of a single strand of 300 of the same chirality is also way past the UPB let alone a whole suite of them." The improbability of a strand of 100 polymers of the same chirality is also extremely high, but not at the UPB.jerry
August 11, 2007, 07:37 AM PDT
I have a couple of side issues. Why should the calculations be restricted to 20 amino acids when there are 39 alternatives given the right-hand and left-hand versions? One, glycine, has no handedness. There are also several non-proteinogenic amino acids. Thus, any discussion of amino acids would have to factor these into the calculations. By limiting the calculations to left-handed amino acids we should recognize that the calculations are extremely conservative. There is no known necessity to limit polymers to one handedness or the other or to eliminate all of the non-proteinogenic amino acids. Thus, the formation of proteins as a chain of amino acids must have come about only after the information was assembled to construct them. In other words there isn't any chicken or egg issue here. It couldn't have been just polymers lying around, ready to be selected. They must have been prescribed or specified because the chances of the self assembly of a single strand of 100 of the same chirality is also way past the UPB let alone a whole suite of them. So what the issue is about here is the origin of the instructions and machinery to construct proteins of a unique capability. And by the way this machinery to construct proteins (made of RNA) requires proteins for its construction. Which is curious: where did these proteins originate? So to believe in the whole process of life as it works today one has to hypothesize some unknown, incredibly complicated other form of life that preceded it. If this form of life existed it had to have a different methodology for constructing proteins. And then why should this incredibly complicated set of life systems evolve into what we have today? This non-protein system would have had to randomly construct all the proteins that were necessary to replace the machinery it already had to make a new system of RNA that is necessary to make the proteins that we see today. (Remember mRNA and tRNA are both constructed by proteins and these could not have existed in the original system.) It sounds convoluted to say what I mean, let alone to actually self-assemble by chance. I find it rather interesting how one could just say he believes this could happen by chance without addressing these issues. It does rival the belief in resurrection as faith-based. I just wish they would admit it. It defies reason when one chooses chance, because there is no reason for it other than blind faith.jerry
August 11, 2007, 07:07 AM PDT
kairosfocus: Thanks for having cleaned up some of my mistakes along the way. My memory, alas, is not very good, and I can make incidental errors here and there. Along these lines: (1) I've said a number of times in this thread "Upper Probability Bound" for UPB; of course, UPB is "Universal Probability Bound". (2) I've--again, from memory--used 10^180 for the UPB, when, in fact, Dembski has the figure of 10^150. (Interestingly, I sort of stumbled upon the UPB using Planck time, having forgotten---it's been two years since I've read about this stuff---that that is in fact how WD calculated it. Just saw this this morning.) kairosfocus, you're usually quite gracious (more so than I), but it seems the good professor has rubbed you the wrong way. What I have appreciated about P.O. has been his tone: neither dismissive nor overly dogmatic (you might disagree on this last point). Anyway, thanks for correcting as we went along the way. I honestly believe that post [313] adequately addresses the main issue that P.O. was raising, and succeeds in reasserting that the UPB is exceeded (even using 20 a.a. instead of 22) by a simple 300-residue protein. I suspect the good professor is chewing on that right now.PaV
August 11, 2007, 05:46 AM PDT
All: I see the thread continues. And, while I would have loved to be able to simply chime in with Trib and PaV just above [leaving the thread to stand on its merits], Prof PO – while his time of interaction here is appreciated – has by making some unfortunate, unwarranted and atmosphere-clouding remarks overnight [cf. 305 and 310] left me little alternative but to make a few balancing remarks on his consistent rhetorical tactics. This will also underscore the original point. Also, a few remarks on the points on the merits will be useful: 1] On rhetoric, the art of persuasion, not analysis Onlookers will see that from his original linked article at 19 above, Prof PO begins with the term “creationist,” prejudicing the mind of his likely audience: A classic creationist argument against Darwinian evolution is that it is as likely as a tornado in a junkyard creating a Boeing 747. In fact, the argument originates with the distinguished astronomer Sir Fred Hoyle [hardly a Creationist!], is rooted in the underlying statistical thermodynamics of the generation of bio-functional molecules by the known random forces at work, and stands unanswered on the merits to this day, nearly thirty years later – including from prof PO. Then, sadly, the rest of his introductory remarks run downhill, as I noted in 20 – 21 above, and since: biased summary of the literature, citation of experts on one side of a disagreement as though that were the be-all and end-all [especially of Mr Sober], and so on. Of particular note among these was his handling of the Caputo case, using the approach last analysed with appropriate excerpts in 179 – 182 above. In particular, on reading PO [cf. 180], one would not realise that WD has been on the record since 1996 on the issue of an inadvertently biased selection process: the court held that from the outset, on Mr C's testimony, the process he claimed to be using was fair; the serious question being whether he used the claimed process. Nor would one see that WD used the issue of exhaustion of available probabilistic resources in his reasoning as to why a 1 in 50 bn chance that fits a simply describable pattern [cf. p. 4 in WD's 2005 paper on Fisher and Bayes, also excerpted in 180] is well warranted as a basis for inferring to design, and why it is qualitatively different from PO's arbitrary choice of an expanded rejection region that would enclose 38% of the curve. Now, overnight we see Prof PO unfortunately again mischaracterising my argument [which he has never cogently responded to on the merits] and my person, dismissing what he has not answered, and then exhorting others not to follow that "bad example." (Sadly, such strawman and ad hominem rhetoric is precisely reminiscent of the approach used from the outset in the critique paper linked at 19 with WD, Behe and even the Creationists. Dembski's complaint in his original post is well-warranted, and Prof PO -- sadly -- gives a further instance of why.) Let us turn to happier matters of substance . . . 2] PaV, 308: 20 or 22 amino acids While there are some “oddball” cases, the vast majority of proteins are made up from a set of 20 acids. And, as that humble source Wiki notes: The sequence of amino acids in a protein is defined by a gene and encoded in the genetic code. Although this genetic code specifies 20 "standard" amino acids, the residues in a protein are often chemically altered in post-translational modification: either before the protein can function in the cell, or as part of control mechanisms.
Proteins can also work together to achieve a particular function, and they often associate to form stable complexes. And, in discussing protein structure, Wiki notes: the current estimate for the average protein length is around 300 residues . . . Thus, while a calculation relative to 22 acids and 150 monomers is okay, a calculation relative to 300-length chains with 20 acids is conservative [cf. supra]; and also it underestimates the known complexity, given the modifications that such monomers may undergo subsequent to chaining. The material point remains as PaV has it. 3] DS, 311: look around you at all the manmade objects. Whether you know what the function of each is for or not doesn’t make much difference in knowing that it’s manmade because there are no known undirected physical processes that could have assembled it. Of course, this underscores that specifications by pattern-matching are wider than functional specifications. We can recognise CSI and assess its likely origin even without knowing the function. Right back to the tornado in a junkyard, or my small-scale version as linked [which is at a scale where molecular agitation is the driving force for spontaneous change]. In my always linked, I have focussed on the subset of functionally specified complex information [FSCI] that works in information-based systems, because that is IMHCO the most clear case in point. The silence that has usually greeted that focus tells me that there is no serious answer on the merits to it. Dave is also right to note that “[t]heoretically possible and practically possible are two quite different things.” That is, we cannot reasonably revert to “lucky noise” when that would exhaust the available probabilistic resources, and would be beyond the edge of chance. Theoretically and empirically, both materialistic abiogenesis and NDT-based macroevolution at body-plan level [the flagellum being a case in point] are far, far beyond the edge of evolution. [BTW, though the flagellum is an example of IC, it is also an example of CSI, and WD made a calculation that the odds on its formation by chance are something like . . .] 4] DS, 31: Is there any possible means other than chance or agency? It seems to me and many others that this is a true dichotomy - if it isn’t chance it is design. From Plato on there has been a trichotomy: necessity, chance, agency. But in situations where the system comprises variable-state components and assembly options -- e.g. a discrete-state chain of “characters” that stores information like DNA – necessity is plainly ruled out as the predominant cause, leaving DS' dichotomy of material causal forces. And, we know directly that agency is capable of creating FSCI. 5] Back-forth on triple mutations, with genetic entropy and Haldane's dilemma lurking. Interesting . . . Okay, trust the above is helpful as a balancing contribution. GEM of TKIkairosfocus
August 11, 2007, 01:23 AM PDT
Prof. Olofsson, you’ve been most gracious. Thank you for your time here. Ditto thattribune7
August 10, 2007, 07:55 PM PDT
Prof. Olofsson, you've been most gracious. Thank you for your time here.PaV
August 10, 2007, 06:18 PM PDT
Atom [327], Probably. I meant that if each base pair has a probability of 10^-9, then any set of three specific base pairs (that one, that one, and that one) has probability 10^-27 to all mutate. I'm vague on the correct terminology because, tribune7, I am no biologist. ;)olofsson
August 10, 2007, 05:21 PM PDT
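The arithmetic behind the 10^-27 figure, assuming the three point mutations are independent:

```python
# Three specific base pairs each mutating independently with
# probability 1e-9 (the per-site rate used in the comment above).
p_triple = (1e-9) ** 3
print(p_triple)  # 1e-27
```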
Patrick [324], I have no problems with the parallel malaria parasite/human. Thanks still for making a clarifying statement. I know a lot of biologists study yeast as a model system for humans.olofsson
August 10, 2007, 05:17 PM PDT
tribune7 [323], You seem to be saying because we can’t refute him, we should assume he’s wrong No, I'm saying that his argument is hard to refute due to its inner logic.olofsson
August 10, 2007, 05:16 PM PDT
I mentioned earlier that Haldane's dilemma and Nachman's U-Paradox are arguments against Darwinian evolution independent of using the EF. Even on the generous assumption that Natural Selection could in principle create specified complexity, we can assess the population and mutational resources required to make this amazing feat possible. There simply aren't enough population resources, and there are too many opportunities for bad things to happen. In sum, there is too much Genetic Entropy.scordova
August 10, 2007, 04:04 PM PDT
which means that each person harbors about three new deleterious mutations.
Actually, humans can't afford to harbor much more than about 3, because 3 would imply human females need to give birth to 40 offspring just to sustain the clean-up. The number could be much higher, but the number 3 is the maximum tolerable rate of bad mutations the human race can sustain and still expect to live over geological timescales. I'm so glad Patrick mentioned this. We talked about it at UD. See: Nachman's U-Paradox.scordova
August 10, 2007, 03:52 PM PDT
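A sketch of the arithmetic behind the figure of 40 offspring, assuming the standard mutation-load model (mean fitness e^-U) that Nachman and Crowell's estimate feeds into:

```python
# With U = 3 new deleterious mutations per genome per generation, the
# fraction of offspring free of new deleterious mutations is e^-3; a
# stable population then needs roughly 2/e^-3 births per female.
from math import exp

U = 3
fitness = exp(-U)      # ~0.0498
births = 2 / fitness   # ~40.2
print(fitness, births)
```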
I just reread [320] and [322]. I think I see what you're saying, Patrick; P.O. might have misunderstood your U rate information.PaV
August 10, 2007, 02:31 PM PDT
Atom: Whoever has the first chance to explain can do it. I'll be plenty busy over the next 24 hours.PaV
August 10, 2007, 02:26 PM PDT
...the genomic deleterious mutation rate (U) is at least 3... I don’t see how that information could ever be used against Behe’s premise in EoE.
I guess "One man's garbage..." :)Atom
August 10, 2007, 02:00 PM PDT
Sorry, the confusion is my fault since I was referencing Nachman/Crowell's estimate that the genomic deleterious mutation rate (U) is at least 3. See post 320. I don't see how that information could ever be used against Behe's premise in EoE.Patrick
August 10, 2007, 01:51 PM PDT