
Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley


Kevin Padian’s review in NATURE of several recent books on the Dover trial says more about Padian and NATURE than it does about the books under review. Indeed, the review and its inclusion in NATURE are emblematic of the new low to which the scientific community has sunk in discussing ID. Bigotry, cluelessness, and misrepresentation don’t matter so long as the case against ID is made with sufficient vigor and vitriol.

Judge Jones, who headed the Pennsylvania Liquor Control Board before assuming a federal judgeship, is now a towering intellectual worthy of multiple honorary doctorates on account of his Dover decision, which he largely cribbed from the ACLU’s and NCSE’s playbook. Kevin Padian, for his yeoman’s service in the cause of defeating ID, is no doubt looking at an endowed chair at Berkeley and membership in the National Academy of Sciences. And that for a man who betrays no more sophistication in critiquing ID than Archie Bunker.

Kevin Padian and Archie Bunker

For Padian’s review, see NATURE 448, 253-254 (19 July 2007) | doi:10.1038/448253a; Published online 18 July 2007, available online here. For a response by David Tyler to Padian’s historical revisionism, go here.

One of the targets of Padian’s review is me. Here is Padian’s take on my work: “His [Dembski’s] notion of ‘specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”

Well, actually, my work on the explanatory filter first appeared in my book THE DESIGN INFERENCE, which was a peer-reviewed monograph with Cambridge University Press (Cambridge Studies in Probability, Induction, and Decision Theory). This work was also the subject of my doctoral dissertation from the University of Illinois. So the pretense that this work was not properly vetted is nonsense.

As for “the withering criticism” of my work “from actual mathematicians,” which mathematicians does Padian have in mind? Does he mean Jeff Shallit, whose expertise is in computational number theory, not probability theory, and who, after writing up a ham-fisted critique of my book NO FREE LUNCH, has explicitly notified me that he henceforth refuses to engage my subsequent technical work (see my technical papers on the mathematical foundations of ID at www.designinference.com as well as the papers at www.evolutionaryinformatics.org)? Does Padian mean Wesley Elsberry, Shallit’s sidekick, whose PhD is from the wildlife fisheries department at Texas A&M? Does Padian mean Richard Wein, whose 50,000-word response to my book NO FREE LUNCH is widely cited — Wein holds no more than a bachelor’s degree in statistics? Does Padian mean Elliott Sober, who is a philosopher and whose critique of my work along Bayesian lines is itself deeply problematic (for my response to Sober go here)? Does he mean Thomas Schneider, who is a biologist who dabbles in information theory, and not very well at that (see my “withering critique” with Bob Marks of his work on the evolution of nucleotide binding sites here)? Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them. But as I indicated in that book, it was about sketching an intellectual program rather than filling in the details, which would await further work (as is being done at Robert Marks’s Evolutionary Informatics Lab — www.evolutionaryinformatics.org).

The record of mathematical criticism of my work remains diffuse and unconvincing. On the flip side, there are plenty of mathematicians and mathematically competent scientists who have found my work compelling and whose stature exceeds that of my critics:

John Lennox, who is a mathematician on the faculty of the University of Oxford and is debating Richard Dawkins in October on the topic of whether science has rendered God obsolete (see here for the debate), has this to say about my book NO FREE LUNCH: “In this important work Dembski applies to evolutionary theory the conceptual apparatus of the theory of intelligent design developed in his acclaimed book The Design Inference. He gives a penetrating critical analysis of the current attempt to underpin the neo-Darwinian synthesis by means of mathematics. Using recent information-theoretic “no free lunch” theorems, he shows in particular that evolutionary algorithms are by their very nature incapable of generating the complex specified information which lies at the heart of living systems. His results have such profound implications, not only for origin of life research and macroevolutionary theory, but also for the materialistic or naturalistic assumptions that often underlie them, that this book is essential reading for all interested in the leading edge of current thinking on the origin of information.”

Moshe Koppel, an Israeli mathematician at Bar-Ilan University, has this to say about the same book: “Dembski lays the foundations for a research project aimed at answering one of the most fundamental scientific questions of our time: what is the maximal specified complexity that can be reasonably expected to emerge (in a given time frame) with and without various design assumptions.”

Frank Tipler, who holds joint appointments in mathematics and physics at Tulane, has this to say about the book: “In No Free Lunch, William Dembski gives the most profound challenge to the Modern Synthetic Theory of Evolution since this theory was first formulated in the 1930s. I differ from Dembski on some points, mainly in ways which strengthen his conclusion.”

Paul Davies, a physicist with solid math skills, says this about my general project of detecting design: “Dembski’s attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I’m concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” Apparently Padian disagrees.

Finally, Texas A&M awarded me the Trotter Prize jointly with Stuart Kauffman in 2005 for my work on design detection. The committee that recommended the award included individuals with mathematical competence. By the way, other recipients of this award include Charlie Townes, Francis Crick, Alan Guth, John Polkinghorne, Paul Davies, Robert Shapiro, Freeman Dyson, Bill Phillips, and Simon Conway Morris.

Do I expect a retraction from NATURE or an apology from Padian? I’m not holding my breath. It seems that the modus operandi of ID critics is this: Imagine what you would most like to be wrong with ID and its proponents and then simply, bald-facedly accuse ID and its proponents of being wrong in that way. It’s called wish-fulfillment. Would it help to derail ID to characterize Dembski as a mathematical klutz? Then characterize him as a mathematical klutz. As for providing evidence for that claim, don’t bother. If NATURE requires no evidence, then certainly the rest of the scientific community bears no such burden.

Comments
Hi Atom: I just love that “we.” Greet our lovely “Light” for us all! PO will be missed, indeed. The weather is finally clearing up [still windy, power came back late afternoon – a lot of poles were knocked down], and indeed let us pray that we have a milder season than is feared. GEM of TKI

kairosfocus
August 18, 2007 at 02:10 AM PST

And by Spet I mean Sept. :)

Atom
August 17, 2007 at 07:46 AM PST

GEM, G-d be with you dealing with Dean. I too am hoping for a storm that either misses land, or softens before it does. (We're visiting the region in Spet, and are praying for a LACK of hurricanes and hurricane devastation.) PO, you'll be missed. Just know there are IDers who are willing to discuss the difficult questions at length without ending dialogue. Hopefully some of us demonstrated that.

Atom
August 17, 2007 at 07:46 AM PST

Hi KF. Hope things are well. restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. Here is what I was thinking. The example I gave of the proteins being placed in proximity and waiting for chance to form them into a flagellum is silly (which PO seems to recognize, i.e., 500 years etc). BUT that seems to be what PO is asking that we consider in his chance calculations (tornado in a junkyard and all that). UNTIL PO says things such as "we need to consider what objections an evolutionary biologist would have to the particular chance hypothesis that Dr D chooses" (cf 170). Which seems to mean that PO is the one who is then putting limitations on what can be considered, including the rejection region.

Now, evolutionary biologists don't posit proteins randomly winging themselves together into a flagellum. They start with a flagellum-less bacterium and say the DNA code is added to it somehow, which then directs RNA to make the proper proteins in the proper sequence to give it its flagellum. Now, granted, Dembski's famous calculation involves proteins lining themselves up by chance (and shows that it is pretty silly to think that they did), but -- if memory serves -- he also had a probability calculation of the undirected formation of the DNA code specifying such a line-up that ended up being the same as the winging-the-proteins-together scenario. Anyway, I think he conceded that if the code was in the DNA the flagellum was a certainty.

As far as I can tell, the rebuttal against Dembski is that he hasn't calculated for some unknown natural force that might cause DNA code to expand or change to program for things like a flagellum. That, of course, is a faith statement. I said it before, it puts them in the same category as YEC.

tribune7
August 17, 2007 at 06:32 AM PST

All:

First, weather situation M'rat: Seems Dean is cutting into the region just N of Barbados, so so far just wind here, maybe up to 40 – 50 in gusts. (Now my concern is that it may have done a number on the farmers in Dominica and St Lucia. But moreso, projections put it very near Jamaica at Cat 4 Sunday. Let's hope and pray it does an Ivan if so – ducks away from Jamaica by a mysterious swerve. And onward let's hope it does not do a Katrina etc.)

On a few notes:

1] Prof PO: It seems he has been excluded, and from the situation, maybe what was intended as a light-hearted remark was taken a little strongly. He expressed appreciation to me and to the blog. As perhaps his most strongly objecting interlocutor, I think his time here was a contribution of value, on balance, and hope that we will be able to hear from him again.

2] PaV, 379: “PaVian Simmerisation” Chuckle, chuckle, chuckle . . . . There, you got three whole chuckles!

3] I really would like you to respond to the question of why any other chance hypothesis ‘needs’ to be considered in the Caputo case. PO is not here, so I will note that in 361, I showed where the Bayesian algebra leads. Taking a ratio across alternative hypotheses, if one knows the probabilities of the hyps, one can then infer whether or not the evidence supports one over the other. Fine if you have that data and computational power. But, we need not look there to see that, as Stove points out, by far and away most subsets of a population are similar to the pop, i.e., reflect its average; cf my darts and charts example in 68 above. Thus, very rare patterns such as the one that shows up in Caputo are utterly unlikely to be in a sample, raising suspicion of “cooking” the data. Thence we can see that this is an instance of a search on a configuration space where the islands of functionality are sparse/isolated; sufficiently so that random walks are unlikely to hit them. Thence, tornadoes in junkyards assembling 747s and/or microjets in vats [App 1, always linked, point 6] assembled by diffusion are seen as utterly improbable. I will keep away from the headaches of multinomial distributions, thank you – I ain't a “statistrician” ;-) .

4] Trib: restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. Could you kindly explain/expand a bit what you mean here?

GEM of TKI

kairosfocus
August 17, 2007 at 02:25 AM PST

OK, Sal. I could have continued this discussion for a little while longer, I think. PO, if you peek back, restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. The example I provided was a pure chance one. Well, almost pure chance. I did cheat a little.

tribune7
August 16, 2007 at 08:00 PM PST

Atom, PaV, tribune7, jerry, Patrick, and Kairosfocus, etc. P.O. and I communicated on other matters outside of UD since he is a mathematician and I have an interest in math. He expresses his thanks to everyone here for a stimulating exchange. He will not be returning, and he is sorry he won't be posting here anymore. He asked me to convey his appreciation for you all.

scordova
August 16, 2007 at 07:34 PM PST

P.O. Alert!!! P.O. Alert!!! There's an article in this week's Science magazine you'll want to look up at the University. It's entitled "Crystal Structure of an Ancient Protein:..." Here's an excerpt; they write:

Here we report the empirical structures of an ancient protein, which we “resurrected” (12) by phylogenetically determining its maximum likelihood sequence from a large database of extant sequences, biochemically synthesizing a gene coding for the inferred ancestral protein, expressing it in cultured cells, and determining the protein’s structure by x-ray crystallography. Specifically, we investigated the mechanistic basis for the functional evolution of the glucocorticoid receptor (GR), a hormone-regulated transcription factor present in all jawed vertebrates (13). GR and its sister gene, the mineralocorticoid receptor (MR), descend from the duplication of a single ancient gene, the ancestral corticoid receptor (AncCR), deep in the vertebrate lineage ~450 million years ago (Ma) (Fig. 1A) (13).

How timely, no?!

PaV
August 16, 2007 at 05:13 PM PST

P.O. [363]: "Before we kill the thread, let's kill Caputo! I was talking about your flagellum calculations and only meant that your chance hypothesis is similar to Caputo p=1/2, and asked how you'd rule out other chance hypotheses (similar to Caputo p=37/38). I was very unclear, sorry! Shouldn't have mentioned him at all!"

Frankly, I was working the other way around. That is, I thought it would be better to get some clarification as to how the 'likelihood' approach actually works in a simple case before we try to tackle a more complicated one. So, in using the example of the stock market closing price as the determiner of D's and R's, surely a random happening, I was trying to get at what other chance hypotheses might need to be eliminated to satisfy a 'likelihood' statistician. IOW, I really would like you to respond to the question of why any other chance hypothesis 'needs' to be considered in the Caputo case.

Changing the subject a little bit, and in a way that perhaps anticipates what might come next, I would ask you if you think a multinomial distribution could be used for statistical analysis in the case of proteins. What are your thoughts about that? I have to rush off. See you all later.

P.S. BTW, kairosfocus, I knew what you were getting at with the "PaVian Simmerisation"; I just thought I'd get a chuckle from you! ;)

PaV
August 16, 2007 at 03:30 PM PST

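[Since PaV raises multinomial distributions for proteins, here is a minimal sketch in Python of what such a calculation looks like. The uniform 1/20 residue probabilities and the 100-residue chain are toy assumptions for illustration, not a biological model.]

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(exact composition) = n!/(k1!...km!) * p1^k1 * ... * pm^km."""
    n = sum(counts)
    coef = factorial(n)
    for k in counts:
        coef //= factorial(k)          # exact integer division at each step
    return coef * prod(p**k for p, k in zip(probs, counts))

# Toy example: a 100-residue chain over 20 amino acids, each residue drawn
# uniformly (p = 1/20): the probability of the exactly even composition.
counts = [5] * 20
probs = [1 / 20] * 20
print(f"P(exactly 5 of each residue) = {multinomial_pmf(counts, probs):.3e}")  # ~2e-14
```
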
Prof PO (and all . . .)

Seems there is life yet in this thread that keeps going and going. Okay, a few observations:

1] David Stove's point on induction, empirical support to a claim and sampling

On another thread, I ran across the name and followed it up fruitfully – UD is good at stimulating that sort of thing. In taking on Hume, Kuhn, Popper et al, he makes some points that jump off the page at me on the stuff discussed above, and at least for the onlookers I think the ideas he raised are well worth at least an excerpt or two. So, PaVian style, the “simmerised” core stuff [quite relevant to the underlying context of the above], courtesy that ever so humble and too often biased on ID Wiki (which has onward links that get to whole books online; drop me a line through contacts via my always linked . . .):
[Negative task] Consider a claim such as “All ravens are black”. Hume argued that we don’t know this a priori and that it cannot be entailed from necessary truths. Nor can it be deduced from our observations of ravens . . . . Stove argued that Hume was presuming “deductivism” . . . the view, explicitly or implicitly accepted by many modern philosophers, that the only valid and sound arguments are ones that entail their conclusions. But if we accept that premises can support a conclusion to a greater (or lesser) degree without entailing it, then we have no need to add a premise to the effect that the observed will be like the unobserved - the observational premises themselves can provide strong support for the conclusion, and make it likely to be true. Stove argued that nothing in Hume’s argument shows that this cannot be the case and so Hume’s argument does not go through, unless one can defend deductivism. This argument wasn’t entirely original with Stove but it had never been articulated so well before. Since Stove put it forward some philosophers have come to accept that it defeats Hume’s argument . . . .
So, it comes down to defeatable but credible warrant, where we may know reliably enough for real-world purposes, but only provisionally. [Thus, “all men live by faith; the issue is which one, why?”]
[positive task] it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the subsets which contain 3000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequen[tl]y, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified [NB:following Plantinga, I would use “warranted”] in concluding that it is likely that this subset ‘matches’ the population reasonably closely . . .
Thus we see that sampling [esp. if random or nearly so] tends to reflect the population's “average” – i.e., the same basic point that statistical thermodynamics is premised on, and which is very relevant to – here it comes again – Caputo. Thus too, unless we have a large enough sample, we are unlikely to see the extreme strongly represented, on a chance basis. Further to this, generalising to a configurational space and the task of searching more or less at random, we are far more likely to see states reflecting clusters of configs that are common than clusters that are rare, absent feeding in enough probabilistic resources to make it likely to climb Mt Improbable. And, random walks in a space where the criterion of inclusion in the specified set is functionality are vastly unlikely to ever arrive at a functional state to begin with. This is for reasons as identified in my always linked, App 1, point 6, on the micro-version of Hoyle's “tornado in a junkyard builds a 747 by chance.”

2] PO, 377: I assumed Kf was just joking about the bacteria in the dish; no evolutionary biologist has to my knowledge claimed that all the proteins were sitting in the bacterium and suddenly assembled by chance

H'mm, if you look back at 360, you will see that the example was set up by Tribune7, not me. But, too, you will see by comparing the just-referenced point in my always linked that this is essentially the same thing as my microjets example addresses; cf the exchange with Trib at 364 (pt 3) and 366, on the issue of the thermodynamics of diffusion, which is what would dominate. In a nutshell: the number of scattered microstates – the cells in the locational space would be of order 10^-9 to 10^-8 m or so [1 cc would have ~10^18 locational cells at the 10^-8 m scale, and you are here dealing with dozens to hundreds of parts] – so overwhelms the clumped, much less the functionally configured, that we probably will have to wait longer than the observed cosmos exists for even EXISTING, known-to-be-functional proteins to assemble into a flagellum by chance-dominated processes. [Tornado-in-a-junkyard statistics again.] And that is before you get to the issue of forming the functional proteins by chance and co-option, requiring a lot of DNA coding as discussed above, 27,000 to 45,000 base pairs' worth! (Not to mention the underlying issue of forming the life forms in the first place out of prebiotic chemistry in realistic environments.)

3] I promise not to use the C-word unnecessarily

That is where a lot of the trouble on rhetoric started; cf just above to see the force of the Hoylean point that HM et al picked up, probably because of their thermodynamics training in Chemistry and/or Engineering Sciences.

4] let's go look for some E. coli and get started!

E. coli are rather easy to find – being a major life form in sewage. But I think biological supply houses have all sorts of “pet” strains used in studies. [I think there was a flap about producing and using lab strains that would not thrive in the “wild”; for obvious reasons.] On the challenges of the expt's design, cf. just above. Funding will be a bear, in that light.

GEM of TKI

kairosfocus
August 16, 2007 at 12:39 AM PST

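[Stove's positive point above is easy to check numerically. A minimal sketch in Python, standard library only; the Gaussian population, the 3000-item sample size from the raven example, and the 0.05 "similarity" cutoff are illustrative assumptions, not anything from Stove.]

```python
import random

# Stove's claim: the great majority of reasonably sized subsets of a
# population resemble the population itself. Try it on an arbitrary one.
random.seed(1)
population = [random.gauss(0, 1) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

trials = 2_000
close = 0
for _ in range(trials):
    sample = random.sample(population, 3000)       # Stove's 3000 "ravens"
    sample_mean = sum(sample) / len(sample)
    if abs(sample_mean - pop_mean) < 0.05:         # "similar" = means within 0.05
        close += 1

print(f"fraction of samples resembling the population: {close / trials:.3f}")
# ~0.99 -- almost every sample mirrors the population, as Stove argued.
```
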
...and I'm back! tribune7, Yes, I just pulled a number for fun. I assumed Kf was just joking about the bacteria in the dish; no evolutionary biologist has to my knowledge claimed that all the proteins were sitting in the bacterium and suddenly assembled by chance (not sure what Stuart Kauffman thinks though). I thought we'd end the thread on a light note. As for wording, I promise not to use the C-word unnecessarily (but some actually call themselves C-ists). So, let's go look for some E. coli and get started! Cheers, Prof PO

olofsson
August 15, 2007 at 07:25 AM PST

Maybe 500 was optimistic. Let's say 600 and you prove me wrong! PO, you have a strong background in statistics. You throw out a number. I make the reasonable assumption you might have some mathematics behind it. OK, maybe you don't. Maybe you pulled that number out of the blue to make a debate point (i.e., we can never know the truth in our lifetimes). Of course, dontcha think that line of reasoning kinda makes your paper pointless? Actually, that line of reasoning makes the entire evolution debate pointless, i.e., we can never be sure of the answer, so it's all a matter of faith, so let's not worry about the science. Actually that comes close to what evolutionists sometimes appear to believe :-) Anyway, work on your word choices even in blog discussions. Or maybe you do have math to back up your point. I'd be interested in seeing it.

tribune7
August 15, 2007 at 06:05 AM PST

PO, The malaria parasite evolved precisely as IDists expected it to, given the average mutation rate of eukaryotes. It isn't "doing just fine" either. Its range is severely restricted by climate and there's a man-made drug killing it by the billions of trillions. Except for point mutations, which statistical probability excludes from building into chained interdependent structures, it hasn't evolved anything at all. If it's something that requires more than one or two changes in the genetic code, the odds are virtually impossible of it getting done. 10^20 replications is orders of magnitude more than all the mammals that ever lived. At this point in time you appear to be in denial. If that persists I'll be asking you to leave. People in denial of plain evidence aren't welcome here.

DaveScot
August 15, 2007 at 04:38 AM PST

All: It seems plain we have reached a reasonable consensus at length. GEM of TKI

PS: Re PO, 373: Let's say 600 and you prove me wrong! (BTW, thanks on the Algebra.) --> Cf. Trib at 366, esp. his comment on Appendix 1, always linked, point 6.

kairosfocus
August 15, 2007 at 01:04 AM PST

tribune7 [368], Maybe 500 was optimistic. Let's say 600 and you prove me wrong! Seriously guys, I'm out of here. Die thread, die! :D :D :D

olofsson
August 14, 2007 at 11:13 PM PST

DaveScot, The malaria parasite seems to be doing just fine without "novel new structures." Getting those mutations that gave chloroquine resistance was quite an achievement, kind of like when Team USA beat the Soviet Union in hockey in 1980.

olofsson
August 14, 2007 at 11:11 PM PST

Kf, Your algebraic manipulations look just fine. I already told Michaels7, "No More Mr Nice Guy" is an Alice Cooper song, and I think each scholarly paper should include the title of a rock song.

olofsson
August 14, 2007 at 11:01 PM PST

Kf [365], I'd start right now if I only knew how to find some E. coli...

olofsson
August 14, 2007 at 10:56 PM PST

tribune7 [349], Do I need to go into rehab...? PO

olofsson
August 14, 2007 at 10:55 PM PST

Dave, For the life of me I can't figure where PO is coming up with his 500-year estimate. He should find someone to bet with. I think it's more likely he lives 500 years than the proteins form a flagellum.

tribune7
August 14, 2007 at 07:59 PM PST

tribune7, We watched the malaria parasite for 10^20 replications and it didn't evolve any new structures at all. All it did was find a single point mutation (quite often) that confers atovaquone resistance and a two-point mutation (very rarely) that confers chloroquine resistance. It was unable to defeat each of two different human hemoglobin mutations, it was unable to find a different host, and it was unable to find a way to live in temperatures under 64F. Either falciparum is exceedingly atypical of eukaryote evolution or random mutation is an utter failure at building novel new structures.

DaveScot
August 14, 2007 at 05:31 PM PST

KF!!! You do have an answer in your Appendix 1, always linked, point 6!!! Excellent!

tribune7
August 14, 2007 at 04:39 PM PST

How long do you think we'd have to watch before we get a flagellum? . . . I'd say about 500 years. When do we start? That was my question :-) Somebody should be encouraged to start as soon as possible, if you really believe it would take just 500 years. It would make more sense than SETI. (And it would make the history books if it should succeed.) And just curious -- I'm really not trying to get to 400 -- why do you say 500 years?

tribune7
August 14, 2007 at 04:29 PM PST

All: Looks like the thread is really coming to an end, i.e., a point of definite consensus – and stated by no less than . . .
Prof PO, 363: I’m trying to convince people here that there is nothing extraordinary about the EF which is clear from Dr D’s constant references to concepts from mathematical statistics. There is also nothing repulsive about the EF . . .
I agree: the problem is not with the concept of the EF or its utility [both pretty straightforward and successful in loads of cases all across science, statistics and management, even courtrooms], but with what happens when it runs head on into the assumptions and assertions of core ideas, icons and cases in the reigning evolutionary materialist paradigm. Then, all the objections and debate tactics we know so well come into play. BTW, Prof PO, no points to pick on my always-suspect algebraic substitutions above on conditional probabilities and likelihoods, etc.? [I normally have someone else review any Math I do/outline/summarise, for serious reasons, before putting it out in public!]

Now on a few closing [?] notes:

1] PaV, 362: Simmerisation . . .

You have presented yet another fine example of boiling the matter down to essentials in 362. (A happy typo inspired me to name the approach after the leading practitioner at UD, one certain PaV.) I only add to that, that given the declared fair method allegedly in use, a long run of D's should have alerted a fair-minded pol to something going wrong: D's should be observed pretty close to 1/2 of the time, given the dominance of near 50-50 outcomes in the relevant population. So we have at least design by negligence, as I noted several times.

2] PO, 363: C was only used as an example

But, of what, given the way in which you built up to then handled it, then segued into “No more Mr Nice Guy”? ;-)

3] PO, 363: Question

The Q was by Trib, not me. I already have an answer in Appendix 1 of my always linked, point 6, on clumping and functional clustering by chance processes – essentially diffusion. The scattered states absolutely predominate, I am afraid, so once diffused in, the molecules will normally spread out at random like an ink drop. Absent drying out, the wait will be long, long indeed – compare the relative statistical weight of scattered vs clumped vs functionally configured microstates for nanometre-scale cells in a beaker or even just a test tube. AKA, why do you think the cell has such a tight, complex, interlocking set of nanomachines to carry out its work? [No prizes for guessing that diffusion is as a rule not controllable enough . . .]

GEM of TKI

kairosfocus
August 14, 2007 at 03:36 PM PST

PaV, Oh dear, oh dear, oh dear, I'm so sorry! Before we kill the thread, let's kill Caputo! I was talking about your flagellum calculations and only meant that your chance hypothesis is similar to Caputo p=1/2, and asked how you'd rule out other chance hypotheses (similar to Caputo p=37/38). I was very unclear, sorry! Shouldn't have mentioned him at all! From the outset, C was only used as an example. There is nothing more to say about it. If I ever write about this again, I'll choose another example indeed! Caputo is dead. Not sure what my particular brand is; it's all basic stuff and any mathematical statistician would tell you the same thing. I'm trying to convince people here that there is nothing extraordinary about the EF, which is clear from Dr D's constant references to concepts from mathematical statistics. There is also nothing repulsive about the EF, and there are other ID critics who are annoyed with me for being far too nice to y'all. Ah, the maverick Prof PO has to fight left and right! As for Kf's question How long do you think we'd have to watch before we get a flagellum? I'd say about 500 years. When do we start? :D :D :D

olofsson
August 14, 2007 at 02:32 PM PST

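[For onlookers, the likelihood comparison PO is gesturing at is short to compute. A minimal sketch in Python, standard library only; the 41-drawing, 40-D Caputo figures and the 37/38 roulette hypothesis are taken from the thread.]

```python
from math import comb

def likelihood(p, n=41, d=40):
    """P(exactly d D's in n drawings) under a Bernoulli(p) chance hypothesis."""
    return comb(n, d) * p**d * (1 - p)**(n - d)

fair = likelihood(1 / 2)        # Caputo's claimed fair procedure
roulette = likelihood(37 / 38)  # PO's hypothetical biased roulette wheel

print(f"P(data | p = 1/2)   = {fair:.3e}")      # ~1.9e-11
print(f"P(data | p = 37/38) = {roulette:.3e}")  # ~0.37
print(f"likelihood ratio    = {roulette / fair:.1e}")  # ~2e10 in favor of 37/38
```
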
P.O.: As for your calculations, let's say (for the sake of argument!) that you manage to rule out the uniform chance hypothesis (Caputo p=1/2). But how do you rule out other chance hypotheses (Caputo p=37/38)? Recall that these chance hypotheses would be formed by considering billions of years of evolution, not an easy task.

I'm finally beginning to see how you, using your particular brand of statistics, are looking at this scenario. When you write . . .

When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred (but only in the sense that chance is ruled out; there is no alternative design hypothesis of how he cheated).

. . . I see in your example of the Roulette wheel what you mean by a different "chance hypothesis". So, at least, I see the way in which you're approaching all of this, though, of course, I disagree with it!

Here's how I disagree: in the first quote above, you ask: "But how do you rule out other chance hypotheses (Caputo p=37/38)?" My answer would be that you would rule it out because there is nothing to suspect that Caputo used a Roulette wheel. IOW, what is suspicious about what Caputo did is that, living in the United States, knowing that there are only two major political parties, and that those parties are oftentimes abbreviated by using D or R, AND, under the procedure specified by Caputo himself, wherein the procedure was supposedly set up so that EACH Democrat and Republican had an equal chance of getting to the top of the ballot (i.e., p=1/2), then 40 D's and 1 R is simply suspicious. The ONLY thing a statistician has to do in such a situation is "eliminate" the p=1/2 scenario. Once this is done, then "chance" has been eliminated.

So, what do we do next? We examine the machine/software (whatever it was he used) to see if something is defective. If it is not defective, we have now ruled out any "natural causes" for the skewed outcome. That leaves us with "design". Now, if it turns out that the machine/software used was "defective", then one would conduct a "forensic" investigation, trying to determine whether or not the machine/software was tampered with. If it had been tampered with, then we're back to "design". That's all we would have to do.

The only reason I can think of for even entertaining the prospect that a Roulette wheel was used would be if Caputo himself said that this is what he did. But, now, the issue would no longer be the results---which fit perfectly with the method employed in such a case---but "why" he chose to use such a method. If we're "forced" to rule out the Roulette wheel, then should we also have to rule out the "chance" hypothesis that the way the 40 D's and 1 R came about was through the use of the stock market, so that, over a forty-one day period, every day that the stock market closed up, Caputo selected a D, and every day it closed down, he chose an R? Why is this "possible chance hypothesis" important in any way? I just don't see how, or why, any of this would be important. We can get into what limited number of "chance" models Nature affords us in the construction of proteins (something I've already alluded to in prior posts), but I'd like to get your reaction to this straightforward objection before we get around to more difficult entanglements.

P.O.: If it doesn't die on its own, we'll beat it to death. This thread is like a Timex watch: "It takes a licking, and keeps on ticking!" (BTW, I'm away for the rest of the day.)

Hey, kairosfocus, what is PaVian Simmerisation? Is it some kitchen technique they use down in the Caribbean?

PaV
August 14, 2007 at 10:53 AM PST

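[The elimination step PaV describes is a one-line tail computation. A minimal sketch in Python, standard library only; 41 drawings with 40 D's, per the Caputo discussion above.]

```python
from math import comb

# Tail probability of the Caputo outcome under the claimed fair (p = 1/2)
# procedure: P(40 or more D's in 41 drawings), i.e., the rejection-region
# calculation discussed in this thread.
n = 41
tail = sum(comb(n, k) for k in (40, 41)) / 2**n
print(f"P(>= 40 D's | p = 1/2) = {tail:.3e}")  # ~1.9e-11, about 1 in 50 billion
```
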
PS: AN -- after nap -- we often wish to find evidence to support a theory, where it is usually easier to show that the theory makes the observed evidence “likely” to be so [or at least, accepting it makes believing the theory more plausible on whatever scale of weighting subjective probabilities we may wish, etc. . . .]. So we have to move: p[E|T] --> p[T|E]; at least, on a comparative basis. PaVian Simmerisation, and if my substitutions are worked out right:

a] Now, first, look at p[A|B] as the ratio, (fraction of the time we would expect/observe A AND B to jointly occur)/(fraction of the time B occurs in the POPULATION), on the easiest interpretation of p's to follow.

b] Thus p[A|B] = p[A AND B]/p[B], or, p[A AND B] = p[A|B] * p[B]

c] By “symmetry” -- the easiest way to do this, I think -- we see that also p[B AND A] = p[B|A] * p[A], where the two joint probabilities are plainly the same, so: p[A|B] * p[B] = p[B|A] * p[A], which rearranges to . . .

d] Bayes' Theorem, classic form: p[A|B] = (p[B|A] * p[A]) / p[B]

e] Substituting, p[E|T] = (p[T|E] * p[E])/ p[T], p[E|T] being here, by initial simple def'n, L[T|E], the likelihood of theory T given evidence E, at least up to some constant. But, where do we get p[E] and p[T] from – a hard problem with no objective consensus answers, in too many cases. (In short, we are looking at a political dust-up in the relevant institutions.)

f] This leads to the relevance of the point [which is where a lot of things come in] that a certain ratio, LAMBDA = L[h2|A]/L[h1|A], is a measure of the degree to which the evidence supports one or the other of competing hyps.

g] So, p[T1|E] = p[E|T1] * p[T1]/p[E], and p[T2|E] = p[E|T2] * p[T2]/p[E], so also: p[E|T2]/p[E|T1] = L[T2|E]/L[T1|E] = {p[T2|E] * p[E]/p[T2]}/{p[T1|E] * p[E]/p[T1]} = {p[T2|E]/p[T2]}/{p[T1|E]/p[T1]}

h] All of this is fine as a matter of algebra applied to probability, but it confronts us with the issue that we have to find the outright credible real-world probabilities of T1, T2; at least, we have eliminated p[E]. In some cases we can get that, in others, we cannot. [And thus the sort of objections we have seen in this and previous threads.]

i] Now, by contrast, the “elimination” approach looks at a credible chance hyp and the distribution across possible outcomes it would give, with a flat distribution as the default [e.g., why a 6 on a “fair” die is 1 in 6]; something we are often comfortable in doing. Then we look, in the hyp testing case, at the credible observability of the actual observed evidence in hand, and in many cases we see it is simply too extreme relative to such a chance hyp, as in the case of Caputo.

j] So, by the magic of seeing the sort of distribution in Caputo [cf. 68 above!] as a space containing the possible configurations, we then see that this is a particular case of searching a config space in which the individual outcomes are equiprobable -- but because they cluster in groups that are what we are interested in, the probabilities of the groups are not the same. [So, we are more likely to hit near the centre of the distribution on the dart-board chart than to hit the extreme to the right, which is 1/52 billionths or so of the curve. Indeed, in many real-world cases, an upper extreme that is 5% of the curve is acceptable, or 1% if you are not comfortable with that; rarely in statistics do we see picking an extreme of 1 in a 1,000. That should give us some real-world context.]

k] So the consequence follows: when we can “simply” specify a cluster of outcomes of interest in a config space, and such a space is sufficiently large that a reasonable search will be unlikely, within available probabilistic/search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [E.g., the bacterial flagellum, or a flyable microjet in Appendix 1 in the always attached. Thus the telling force of Hoyle's celebrated tornado-in-a-junkyard-assembles-a-747-by-chance illustration.]

--> Thus, we see a little more on why the Fisherian approach makes good sense even though it does not so neatly line up with the algebra of probability as would a likelihood or full Bayesian approach. Thence, we see why the explanatory filter can be so effective, too.

GEM of TKI

kairosfocus
August 14, 2007 at 05:34 AM PST

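[For readers who prefer standard notation, steps a] through g] above compress to two displayed equations -- a restatement of the comment's algebra, nothing new.]

```latex
% Bayes' theorem (steps a-d) and the likelihood-ratio identity (steps e-g)
\[
  P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)},
  \qquad
  \Lambda = \frac{L(T_2 \mid E)}{L(T_1 \mid E)}
          = \frac{P(E \mid T_2)}{P(E \mid T_1)}
          = \frac{P(T_2 \mid E)\,/\,P(T_2)}{P(T_1 \mid E)\,/\,P(T_1)}.
\]
```
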
Something else to consider: Is it possible for proteins to be formed into a flagellum? Sure, or there would not be flagella. But how? By chance? One way to test would be taking the appropriate proteins, putting them in proximity and just watching them. I guess gathering them together would be increasing the odds quite a bit and could be thought of as cheating, but it would be a first step, right PO :-) How long do you think we'd have to watch before we get a flagellum?

tribune7
August 14, 2007 at 05:09 AM PST

Excellent summary, KF.

tribune7
August 14, 2007 at 04:37 AM PST

2] Caputo, one more time:

Had Caputo done as advertised, he would by overwhelming probability have been near the 50-50 split on R's and D's. That he claimed to do so but ended up with such an unlikely-to-be-observed outcome is the surprising thing that triggered the investigation. On examining the credible distribution on the claimed “fair coin” model, the suspicion of an extremely unlikely-to-be-observed result was substantiated. On grounds that the highly complex and meaningful/functional result should not normally have been seen, it was inferred that the likelier explanation was -- given the simple specification that the outcome served his obvious agenda -- that he was acting deliberately. Then, corroboration came from the reported fact that there was no observation of the actual selections. [We can presumably rest assured that on being forced to fix the process, the decades-long run to D's vanished like magic.] A little common sense saves us a lot of wading through technicalities that may do more to cloud than to clarify the issue.

3] Flagellum:

Similarly, we know that the proposed mechanism is RM + NS, which is blind to future states and imposes the requirements of bio-functionality and differential reproductive success at each stage. The flagellum is a complex, fine-tuned whole comprising many parts in a self-assembling, self-tuning actuator as part of the food-finding mechanism of the relevant bacteria.

--> A lot of bacteria etc. get along without it, so it is not necessary to survival.

--> It is complex [40 – 50 or so proteins, averaging maybe up to 45,000 base pairs in DNA] and requires co-adapted components, so a partial assembly would not work and so would not be selected for in the population. That leaves co-option with some final novelties, but that faces the gap of the many unique proteins [whatever the debate on the numbers of such].

--> The only seriously proposed co-option is of a mechanism that turns out to be [a] reportedly based on a subset of the flagellar genes (part of the self-assembly mechanism, it seems), and [b] functionally dependent on the existence of “later” populations and types of cells, i.e., eukaryotes. Namely, the TTSS, which is also [c] a long way from a flagellum.

--> On the random mutation assumption/model, and relative to Behe's observed edge of evolution, the chance hyp is so unlikely that we can immediately discard it, absent empirical demonstration, which after 150 years is not forthcoming at this, body-plan, level. That leaves agency on the table as the best explanation.

4] PO, 355: Now, Caputo could have “cheated by chance” for example by spinning a roulette wheel and only chose R when unlucky 13 came up.

We have Caputo's testimony that he used a selection process that, if actually used, would have been fair. An intentionally biased selection process that may have in it a chance element leading to deception in the Courtroom is, of course: DESIGN. (So would be a sloppy method that at first unintentionally created runs [e.g., the capsules were layered and not shuffled enough], which was then stuck with instead of being debugged and fixed.)

5] PO, 356: When Dr D analyzes the Caputo case, he starts by ruling out all chance hypotheses except p=1/2. When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred

Cf. just above for why; it is not a mystery or a mistake, and the relevant WD document has been available since 1996, cf link above and again here.

GEM of TKI

kairosfocus
August 14, 2007 at 01:02 AM PST

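[To put rough numbers on the configuration-space point in 3] above: a minimal order-of-magnitude sketch in Python. The 45,000 base-pair figure comes from the comment; the uniform-draw model and the 10^150 resource bound (Dembski's universal probability bound) are illustrative assumptions.]

```python
import math

# Size of the DNA configuration space for the ~45,000 base pairs cited
# above, assuming (purely for illustration) uniform random draws over
# the 4-letter DNA alphabet.
bases = 45_000
log10_space = bases * math.log10(4)
print(f"configurations: ~10^{log10_space:.0f}")  # ~10^27093

# Compare with ~10^150, Dembski's universal probability bound on the
# probabilistic resources of the observable universe.
log10_resources = 150
print(f"shortfall:      ~10^{log10_space - log10_resources:.0f} to 1 "
      "against hitting a single specified target")
```
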