Uncommon Descent Serving The Intelligent Design Community

Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley

Categories: Intelligent Design

Kevin Padian’s review in NATURE of several recent books on the Dover trial says more about Padian and NATURE than it does about the books under review. Indeed, the review and its inclusion in NATURE are emblematic of the new low to which the scientific community has sunk in discussing ID. Bigotry, cluelessness, and misrepresentation don’t matter so long as the case against ID is made with sufficient vigor and vitriol.

Judge Jones, who headed the Pennsylvania Liquor Control Board before assuming a federal judgeship, is now a towering intellectual worthy of multiple honorary doctorates on account of his Dover decision, which he largely cribbed from the ACLU’s and NCSE’s playbook. Kevin Padian, for his yeoman’s service in the cause of defeating ID, is no doubt looking at an endowed chair at Berkeley and membership in the National Academy of Sciences. And that for a man who betrays no more sophistication in critiquing ID than Archie Bunker.

For Padian’s review, see NATURE 448, 253-254 (19 July 2007) | doi:10.1038/448253a; Published online 18 July 2007, available online here. For a response by David Tyler to Padian’s historical revisionism, go here.

One of the targets of Padian’s review is me. Here is Padian’s take on my work: “His [Dembski’s] notion of ‘specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”

Well, actually, my work on the explanatory filter first appeared in my book THE DESIGN INFERENCE, which was a peer-reviewed monograph with Cambridge University Press (Cambridge Studies in Probability, Induction, and Decision Theory). This work was also the subject of my doctoral dissertation from the University of Illinois. So the pretense that this work was not properly vetted is nonsense.

As for “the withering criticism” of my work “from actual mathematicians,” which mathematicians does Padian have in mind? Does he mean Jeff Shallit, whose expertise is in computational number theory, not probability theory, and who, after writing up a ham-fisted critique of my book NO FREE LUNCH, has explicitly notified me that he henceforth refuses to engage my subsequent technical work (see my technical papers on the mathematical foundations of ID at www.designinference.com as well as the papers at www.evolutionaryinformatics.org)? Does Padian mean Wesley Elsberry, Shallit’s sidekick, whose PhD is from the wildlife fisheries department at Texas A&M? Does Padian mean Richard Wein, whose 50,000-word response to my book NO FREE LUNCH is widely cited — Wein holds no more than a bachelor’s degree in statistics? Does Padian mean Elliott Sober, who is a philosopher and whose critique of my work along Bayesian lines is itself deeply problematic (for my response to Sober go here)? Does he mean Thomas Schneider, who is a biologist who dabbles in information theory, and not very well at that (see my “withering critique” with Bob Marks of his work on the evolution of nucleotide binding sites here)? Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them. But as I indicated in that book, it was about sketching an intellectual program rather than filling in the details, which would await further work (as is being done at Robert Marks’s Evolutionary Informatics Lab — www.evolutionaryinformatics.org).

The record of mathematical criticism of my work remains diffuse and unconvincing. On the flip side, there are plenty of mathematicians and mathematically competent scientists who have found my work compelling and whose stature exceeds that of my critics:

John Lennox, who is a mathematician on the faculty of the University of Oxford and is debating Richard Dawkins in October on the topic of whether science has rendered God obsolete (see here for the debate), has this to say about my book NO FREE LUNCH: “In this important work Dembski applies to evolutionary theory the conceptual apparatus of the theory of intelligent design developed in his acclaimed book The Design Inference. He gives a penetrating critical analysis of the current attempt to underpin the neo-Darwinian synthesis by means of mathematics. Using recent information-theoretic “no free lunch” theorems, he shows in particular that evolutionary algorithms are by their very nature incapable of generating the complex specified information which lies at the heart of living systems. His results have such profound implications, not only for origin of life research and macroevolutionary theory, but also for the materialistic or naturalistic assumptions that often underlie them, that this book is essential reading for all interested in the leading edge of current thinking on the origin of information.”

Moshe Koppel, an Israeli mathematician at Bar-Ilan University, has this to say about the same book: “Dembski lays the foundations for a research project aimed at answering one of the most fundamental scientific questions of our time: what is the maximal specified complexity that can be reasonably expected to emerge (in a given time frame) with and without various design assumptions.”

Frank Tipler, who holds joint appointments in mathematics and physics at Tulane, has this to say about the book: “In No Free Lunch, William Dembski gives the most profound challenge to the Modern Synthetic Theory of Evolution since this theory was first formulated in the 1930s. I differ from Dembski on some points, mainly in ways which strengthen his conclusion.”

Paul Davies, a physicist with solid math skills, says this about my general project of detecting design: “Dembski’s attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I’m concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” Apparently Padian disagrees.

Finally, Texas A&M awarded me the Trotter Prize jointly with Stuart Kauffman in 2005 for my work on design detection. The committee that recommended the award included individuals with mathematical competence. By the way, other recipients of this award include Charlie Townes, Francis Crick, Alan Guth, John Polkinghorne, Paul Davies, Robert Shapiro, Freeman Dyson, Bill Phillips, and Simon Conway Morris.

Do I expect a retraction from NATURE or an apology from Padian? I’m not holding my breath. It seems that the modus operandi of ID critics is this: Imagine what you would most like to be wrong with ID and its proponents and then simply, bald-facedly accuse ID and its proponents of being wrong in that way. It’s called wish-fulfillment. Would it help to derail ID to characterize Dembski as a mathematical klutz? Then characterize him as a mathematical klutz. As for providing evidence for that claim, don’t bother. If NATURE requires no evidence, then certainly the rest of the scientific community bears no such burden.

Comments
Hi Dave, Patrick, PO, Michaels, DK, PaV, Sal et al,

I see the thread still goes strong, at about 20 posts per day or so. Greetings from the insomnia patrol, with it seems another tropical wave passing through – great for the farmers, let's hope our friend to the south does not make too much fresh steam out of it. Okay, on points that strike me as key:

1] DS, 185: The bone of contention isn't the size of the search space per se. It's the probabilistic resources available to reduce it. RM+NS in theory can find a flagellum pattern . . .

I agree, in broadest terms – providing stepping stones exist to be always functional, RM + NS can find a direct or indirect path to a flagellum. [That BTW, is why even Darwin's challenge that “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” improperly shifts the burden of proof and is selectively hyper-skeptical.] My remarks above – and for that matter, WD's analysis of Caputo (and Fisher's theory of statistical inference) – were in the context of available probabilistic resources, hence for instance my comment that we are dealing with needing to get to the flagellum on an earth of mass 6 * 10^24 kg in at most a few billion years.

2] DS, 185: If a credible series of reasonably small steps can't be reconstructed it's chalked up as a failure of imagination in forensic reconstruction of random changes rather than a failure of imagination in mechanisms that cause the changes.

Correct, and reflecting that same improper shift in epistemology. (On empirical matters, we can only achieve warrant to moral certainty at best, and often only to the preponderance of the evidence. 
So, to demand demonstration beyond possible alternative in a context that does not admit of such outright mathematical or logical proof, is to presume truth where it should be warranted relative to comparative difficulties across live option alternatives on factual adequacy, coherence and explanatory elegance vs being simplistic and/or merely after the fact ad hoc.)

3] DS, 186: Rather than assuming it can do anything physically possible Behe examines what it has actually done under observation in billions of trillions of rapid reproducers (malaria parasites) under intense selection pressure. Just as importantly he examines what rm+ns failed to accomplish under intense selection pressure . . . . Unless the empirical observations can somehow be impeached as wrong, incomplete, or atypical it strongly suggests random mutation plays only a small role in phylogenesis

That's why the critics are so angry. Behe is showing here that RM + NS is factually inadequate. Whilst, of course, we know that in all cases of actual observation, functionally specified, contingent, fine-tuned and often irreducibly complex entities are produced by intelligent agents. We have a known source of FSCI, going up against a suggested or assumed source, and the current direct empirical test on the latter is coming up short, real short.

4] Patrick, 189: if we were to make the design inference strong enough to be warranted in your opinion how many Darwinian pathways would need to be tested? Unfortunately, testing ALL of them isn't likely to be a reasonable/reachable goal.

Again, the Darwinian burden of proof shift mis-move surfaces.

5] PO, 190: we need to consider motility devices in general, not just the flagellum . . .

As noted, Behe looked also at the cilium. 
But, that is not the root issue yet: the point is that a functionally specific, fine-tuned system comprising interacting and interlocking parts based on folding of linear proteins into 3-D shapes under various electrostatically derived and bonding forces, is just that, specific. It is not just that the flagellum is a means of moving, but that it is a particular means of moving based on a particular set of codes in DNA inclusive of some 30 unique proteins. Just asking what is the config space for 30 such proteins, on a crude estimate puts us into the ball park of a space of some 5*10^27 cells, and even if we were to generously overestimate every life form that ever lived, by positing 10^500 samples of DNA, the flagellum state would be incredibly isolated in the relevant config space. That is, the probabilistic resources simply are not there even on the scale of a cosmos that is a lot larger than what we do observe, ~ 10^80 atoms, and the time from big bang to heat death . . .

kairosfocus
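For readers who want to see how this style of back-of-the-envelope estimate is run, here is a minimal sketch. The inputs (30 proteins, ~300 residues each, 3 DNA bases per codon, and the 10^150 resource bound discussed elsewhere in the thread) are the thread's own illustrative figures, not measured values, and the exact exponent depends on what one chooses to count. Since the numbers overflow ordinary floating point, the sketch works in base-10 logarithms throughout.

```python
import math

# Crude log-scale comparison of a raw DNA configuration space against a
# probabilistic-resource bound, in the spirit of the comment above.
# All figures are the thread's illustrative numbers, not measured values.
proteins = 30          # unique flagellar proteins cited in the comment
residues = 300         # assumed typical protein length
bases_per_codon = 3    # DNA bases coding one amino acid

positions = proteins * residues * bases_per_codon   # 27,000 DNA bases
log10_space = positions * math.log10(4)             # 4 bases per position

log10_resources = 150  # Dembski's universal bound, as a log10 exponent

print(f"raw config space ~ 10^{log10_space:.0f}")
print(f"resource bound   ~ 10^{log10_resources}")
```

Note that this counts every raw sequence; it says nothing by itself about how many of those sequences are functional, which is exactly the point under dispute in this thread.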
August 7, 2007, 12:48 AM PDT
Scordova [200], How many can calculate Radon-Nikodym derivatives? I can, I can! :) Goodnight everybody!

olofsson
August 6, 2007, 9:52 PM PDT
PS. I have not studied Dr D's latest updates on the EF, though I am aware that they exist.

olofsson
August 6, 2007, 9:46 PM PDT
Patrick [199]: In your opinion how damaging would this critique be to ID; minor or major? Can you yourself think of any additions or workarounds to tighten up the EF? As a principle do you reject UPBs such as 10^–50 (Emile Borel) as being useful? If so, why?

I'd say it would be fairly damaging to the EF, not necessarily to ID as such. I've pointed out in a few posts that I do not reject UPBs. Statisticians deal with similar problems all the time, as Dr Dembski explains in The Design Inference. I don't really have any good ideas how to tighten up the filter; I think it is very hard. The EF is, in my mind, simply too ambitious, which rhymes well with your last paragraph. That doesn't mean it's useless, however; we have to remember that science is always a work in progress...ask Galileo!

olofsson
August 6, 2007, 9:45 PM PDT
PaV [202], Uh-oh, here we go again...but OK, PaV is a nice guy so I'll take it:

has made some kind of grudging concession that it “might” be right

Actually...never mind, just read my posts on the UPB (162, 166, 176) and note that I have no objections, would even be satisfied with a larger probability, and even attempt to explain the logic behind the UPB. Ungrudgingly yours, PO

olofsson
August 6, 2007, 9:37 PM PDT
Patrick: "Not being a mathematician myself, let's say for the sake of discussion your critique is 100% dead-on and Bill agrees with it. In your opinion how damaging would this critique be to ID; minor or major?"

Patrick, I'm going to jump in here for a second. In the discussion that P.O. and I have had about the UPB, it seems to me that he has made some kind of grudging concession that it "might" be right. I consider the UPB portion of WD's argument to be unassailable---there just aren't enough probability resources to explore the kinds of configuration spaces that biological systems entail. Having said that, though, there is a way in which, I believe, the argument can break down. It is along the lines that P.O. is arguing. However, as I pointed out in #169, if you rule out chance agencies such as RM+NS searching these kinds of configuration spaces (which WD's argument no doubt does), then you're left with the possibility that the "actual" configuration space, i.e., the one de-limited by natural forces, is much smaller, and the argument becomes that this "actual" configuration space is so small that RM+NS, indeed, is able to search it out. But, if you make this your argument, then you are presented with the problem of explaining how it is that "Nature" has, so to speak, shrunk the configuration space. This, then, gets into fine-tuning, as I point out in #169. I suspect someone like Michael Denton ("Nature's Destiny") would say, yes, indeed, nature is actually fine-tuned to this degree. And, so, Denton might see evolution as "front-loaded" in the Big Bang. Well, this is almost what P.O. is arguing, suggesting that unless we really know the composition of the flagellum's configuration space, the appropriate statistical calculations cannot be properly employed since, after all, this kind of "fine-tuning" may be at play; therefore, an "elimination" technique, such as WD would employ, cannot get at such a contingency, and thus it is necessary to use the comparison method. 
But, as Hoyle says, this kind of de-limiting of what "might" be is entirely suggestive of a "super intellect". His words: "Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule." IOW, it's impossible to 'begin' with the blind forces of nature, and to conclude from them the properties of the carbon atom: the one doesn't flow from the other. He concludes: "A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature."

PaV
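For context on the bound being debated here: the universal probability bound is just a count of probabilistic resources, the product of the estimated number of elementary particles in the observable universe, the maximum number of Planck-time state changes per second, and a generous upper bound on elapsed seconds. A quick sketch of that arithmetic, using the figures familiar from The Design Inference:

```python
# Dembski's universal probability bound as a resource count, on a log10
# scale: particles * Planck-time state changes per second * seconds.
log10_particles = 80   # ~10^80 elementary particles in the universe
log10_rate = 45        # ~10^45 Planck times per second
log10_seconds = 25     # ~10^25 seconds, a generous age bound

log10_events = log10_particles + log10_rate + log10_seconds
print(f"maximum specifiable events ~ 10^{log10_events}")
print(f"universal probability bound ~ 10^-{log10_events}")
```

Borel's older 10^–50 figure, mentioned elsewhere in the thread, came from much smaller resource estimates; the logic of the count is the same.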
August 6, 2007, 5:09 PM PDT
SORRY, I meant to say, "One can refute Darwinian evolution on grounds outside of the EF such as Nachman's U-Paradox, Haldane's Dilemma, Invisibility of Redundancy, etc."

scordova
August 6, 2007, 4:28 PM PDT
PaV, Salvador, Dave, other moderators: Nick Matzke preens in his interview with Jason Rennie about how he has solved the flagellum problem for Darwinian evolution. As he is off to Berkeley to get his PhD, he talks about how evolutionary biology researchers have come to him as the expert on the flagellum. Why not start a thread for comments on Nick's claims? The audio is available for all to listen to. And by the way, Nick is a master of calling ID people creationists. He does it several times in the interview. If anyone knows the difference, Nick does.
One can refute Darwinian evolution on grounds outside of the EF, such as Nachman's U-Paradox, Haldane's Dilemma, Invisibility of Redundancy, etc. Edge of Evolution (EoE) is an excellent example of this, and in fact is more fundamental in that EoE supplies numbers for the EF. The EoE argument stands on its own against Darwinian evolution, but it can be used to help EF arguments. For the 4 years I've been a part of ID, I can't recall that I've strongly endorsed the flagellum as an example. It does not mean I could not one day, but I've had a bit of a hard time with it. Behe's work on protein-protein binding is shoring up the argument, and maybe it's only a matter of time before the ID community tightens the noose around Matzke's boasts. I have little doubt Matzke is totally out to lunch. His logic was bogus, but he had a few good points.... If there are protein-protein binding interactions unique to the flagellum, Matzke's analysis is totally worthless, and actually a hindrance to scientific understanding. My view is let's not be too hasty to assert every pro-ID conclusion. Both Mike Gene and I think the flagellum and IC are real problems for mindless evolution, but if there are areas where the ID community can strengthen its arguments, by all means let these areas be put on the table. I can say Behe's EoE tied up a lot of loose ends for me personally compared to DBB. But the issue of coming up with good specifications is an important area of research, and Professor Olofsson perhaps has criticism worth seriously considering. I have also pointed Professor Olofsson to some of Bill's newer papers which I think will address at least some of his concerns. And let's be honest, how many participating in this discussion have read and understood Bill's latest? How many can calculate Radon-Nikodym derivatives? Or how many here understood Bill's exploration into Minimum Description Length (MDL) representation and its relevance to preclusion of post-dictive specification? 
These are relevant to Professor Olofsson's reasonable reservations. I've invited him to peruse the much improved and much more robust ID literature, which is so specialized that I would wager most at UD are not even familiar with it.

scordova
August 6, 2007, 4:24 PM PDT
PO,
Sorry for not fully answering. I really don't know enough biology to be able to answer intelligently. As I have mentioned a few times in this thread, I try to keep the focus on what my expertise is in. ...... However, if you have any interesting comments or critique of my filter criticism, I would be delighted to hear them and try to respond as well as I can.
Not being a mathematician myself, let's say for the sake of discussion your critique is 100% dead-on and Bill agrees with it. In your opinion how damaging would this critique be to ID; minor or major? Can you yourself think of any additions or workarounds to tighten up the EF? As a principle do you reject UPBs such as 10^–50 (Emile Borel) as being useful? If so, why? By the way, I don't expect design detection to ever be foolproof. After all, it should be possible to fool the EF. Let's say there is an arch designer that attempts to make garden architecture look "natural". Each arch is unique and customized. A series of stones are designed by shape to tightly interlock with each other to form an arch, yet in appearance they appear "natural". If one of these stones is lost, we wouldn't be able to tell it was designed using the EF.

Patrick
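Since the question above is about tightening the EF, it may help to see the filter's decision structure laid out explicitly. The sketch below is a toy rendering, not Dembski's formal apparatus: the function names and the local resource bound are invented for illustration, and supplying honest `prob_under_chance` and `specified` inputs is precisely where PO's criticism bites.

```python
def explanatory_filter(event, law_explains, prob_under_chance, specified,
                       bound=1e-150):
    """Toy three-node Explanatory Filter: regularity -> chance -> design.

    `bound` is the probability cutoff given the available probabilistic
    resources (the universal bound by default; a smaller context, like a
    single ballot-line drawing, warrants a larger local bound).
    """
    if law_explains(event):
        return "regularity"                 # necessity accounts for it
    if prob_under_chance(event) > bound:
        return "chance"                     # within probabilistic resources
    if specified(event):
        return "design"                     # small probability AND specified
    return "chance"                         # unspecified improbability


# Caputo-style toy run: 40 Democrat ballot-line draws in a row, judged
# against a hypothetical local resource bound rather than the UPB.
verdict = explanatory_filter(
    "D" * 40,
    law_explains=lambda e: False,               # no law forces all Ds
    prob_under_chance=lambda e: 0.5 ** len(e),  # fair-draw chance model
    specified=lambda e: len(set(e)) == 1,       # independently given pattern
    bound=1e-9,                                 # illustrative local bound
)
print(verdict)  # "design" under these toy inputs
```

With a mixed sequence, or with the default universal bound, the same filter returns "chance", which illustrates how much the verdict depends on the chosen chance model and resource bound.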
August 6, 2007, 2:37 PM PDT
All, Bill originally brought on more people to handle moderation because it was consuming too much of his time. He does read comments sometimes, but he might not have noticed the interesting turn this thread took. Has anyone been emailing Bill about the content contained herein?

Patrick
August 6, 2007, 2:08 PM PDT
Hello Michaels7, I'm glad to hear that you enjoy the thread. I only used "pointless" in reference to my attempts to explain various things to Mr Kairos (publicly and privately), by no means to the entire thread. He, on the other hand, dismissed all of it as pointless (though, only on "one level"). No More Mr. Materialist Nice Guy? There's no Alice Cooper song with that name that I know of. Besides, if you read my article, you will see the context. Hit piece? I just had a bit of fun, just like Ms Coulter did in her book. I seriously doubt she would be insulted if she read it, but if she does and is, I will apologize. Are you familiar with the Sokal Hoax? I thought it provided a nice context. Also, there was so much criticism of Ms Coulter at the time, that she was mean and evil, I thought I would show that people do not need to hate her. Hate is neither healthy nor productive. As for "stubborn" and "do not listen well," I think you would have to back that up a bit. If you read the entire thread, I have repeatedly explained my very narrow criticism of the Explanatory Filter, from the particular position of expertise that Dr Dembski in the original post claims is absent: probability. Mr Kairosfocus kept going on and on about my "expanding" the rejection region (I used 22 Ds as an example, might as well have used 29 or 33 or..., anyway, most other people seem to have gotten it and even tried to help me explain to him). He kept insisting that my criticism was "Bayesian" which it, very obviously to those with any knowledge, is not. I have tried to explain to him publicly here at UD, and privately in emails. It can also be learned from Dr Dembski's essay on Elimination vs Comparison (which Mr Kairosfocus, somewhat ironically, kept referring to). Actually, already the title explains the difference. 
I don't know how closely Dr D himself follows this blog, but he could easily step in and point out that my criticism is not Bayesian, regardless of how meritless he might deem it otherwise. Anyway, it is with regard to these issues that I found the exchange pointless. However, if you have any interesting comments or critique of my filter criticism, I would be delighted to hear them and try to respond as well as I can.

olofsson
August 6, 2007, 1:57 PM PDT
On a side note, this bit of history puts homologies in perspective:
There is yet another reason that the universality of the genetic code is not strong evidence for evolution. Simply put, the theory of evolution does not predict the genetic code to be universal (it does not, for that matter, predict the genetic code at all). In fact, leading evolutionists such as Francis Crick and Leslie Orgel are surprised that there aren't multiple codes in nature. Consider how evolutionists would react if there were in fact multiple codes in nature. What if plants, animals, and bacteria all had different codes? Such a finding would not falsify evolution; rather, it would be incorporated into the theory. For if the code is arbitrary, why should there be just one? The blind process of evolution would explain why there are multiple codes. In fact, in 1979 certain minor variations in the code were found, and evolutionists believe, not surprisingly, that the variations were caused by the continuing evolution of the universal genetic code. Of course, it would not be a problem for such an explanation to be extended if it were the case that there were multiple codes. There is nothing wrong with a theory that is comfortable with different outcomes, but there is something wrong when one of those outcomes is then claimed as supporting evidence. If a theory can predict both A and not-A, then neither A nor not-A can be used as evidence for the theory. When it comes to the genetic code, evolution can accommodate a range of findings, but it cannot then use one of those findings as supporting evidence. (Hunter, 38.)
Personally I'm in favor of searching for homologies where they should be unexpected. Not just in "convergent evolution", of which there are many examples. Let's say we have a lower organism and a higher organism that share a homolog. But the creatures that are supposed to be in-between do not share this homolog. Now you could explain it away by saying this code "re-evolved", but I'd consider this scenario to be more compatible with front-loading/designer reuse. BTW, I'm not aware of such an example, but it'd be an interesting data point if there was one.

Patrick
August 6, 2007, 1:30 PM PDT
Patrick, Sorry for not fully answering. I really don't know enough biology to be able to answer intelligently. As I have mentioned a few times in this thread, I try to keep the focus on what my expertise is in. Anyway, part of my point is that the flagellum as we know it is one example of a motility device in bacteria, namely, the one we have observed (in E. coli). Now, it is of course possible that some other such device would have evolved instead, something that would look different. [Note: As we are testing the chance hypothesis, we need to assume that it is true and proceed from there.] If that were the case, we would instead test that device, its protein configurations and so on. These possible but unknown scenarios are, in my mind, difficult to incorporate. But I would certainly be interested to see an attempt along the lines you suggest. Everything doesn't have to be done at once; any partial progress (or lack thereof) is also of interest. One would have to get both IDers and Darwinists to agree to the program though, otherwise it will be the usual "did not - did too" exchange.

olofsson
August 6, 2007, 1:04 PM PDT
PO,
That is pretty much one of my two objections, with the addition of “potential” to “direct” and “indirect” because, in my view, we need to consider motility devices in general, not just the flagellum.
Lest we retread ground once again...I'm assuming you're saying that as a starting basis for making an argument about homologies of flagellar proteins, which usually turns into drawing a chart of said homologies and devolves into a game of wishful connect the dots without demonstrating how the various proteins can be formed, assembled and function as a motile device via unguided, purpose-less processes. Worst case scenario is the argument devolves into the average PT personal attacks where people are smeared for stating in the past data about homologs that are now known not to be true due to current research (oh, and it's just as bad when Darwinists are smeared for doing the same). Then of course the worst offense is presenting the homologs--often ignoring the weakness of sequence similarity--by themselves as if they're somehow irrefutable proof of Darwinism even though ID-compatible hypotheses would expect designer(s) reuse and/or homology due to front-loading. Why do I say the design inference is currently the strongest explanation? We know what intelligent agencies are capable of coupled with the knowledge of what nature, operating freely and unguided, is capable of. The whole point of Behe's new book was to try and find experimental evidence for exactly what Darwinian mechanisms are capable of. On the other hand we have speculative pathway scenarios but so far the "edge of evolution" doesn't allow these models to be feasible. But this "edge" is an estimate based upon a limited set of data which in turn "might" mean the estimated "edge" is far less than the maximum capable by Darwinian mechanisms. If Darwinists would bother to do further experiments they may see if this "edge" could in reality be extended. Then if this new derived "edge" is compatible with these models then so be it (though I'll add the caveat that the "edge" might be better for Darwinism only in limited scenarios). In the meantime they're just assuming the "edge" allows for it. 
Even worse, unless I failed to notice the news, the very first detailed, testable (and potentially falsifiable) model is yet to be fully completed (I realize there are people working on producing one). But, yes, Darwinists should stop pretending they have the current strongest explanation. I'll fully acknowledge they're currently formulating a response in the form of continued research, new models, and such, but the mere fact is that they're missing all the major parts of their explanation. This might change in the future, but it may not. BTW, you didn't answer my question. Would you find it acceptable if the current symbiotic/endosymbiotic/exogenous models (Margulis) and some of the more current endogenous models (Matzke, whoever) were tested? Are more needed? If so, justify this. EDIT: Edited for grammar but not potential stupidity on my behalf.

Patrick
August 6, 2007, 12:46 PM PDT
Sorry, kf, I misunderstood "monomer." Of course, you meant "amino acid."

Daniel King
August 6, 2007, 11:54 AM PDT
kf:
We have about 50 proteins at ~ 300 monomers each, in turn at 3 base pairs per DNA codon. So, we are looking at a DNA configurational space that we may crudely estimate at: 4^[50 x 300 x 3] = 4^45,000 ~ 5.01*10^27,092.
Your other numbers may be fine, but those 300 monomers would not each be individually coded in the DNA. One gene per protein would ordinarily suffice - if I understand what you're saying.

Daniel King
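Whatever one makes of the biological counting question, the arithmetic in the quoted estimate can be checked directly. Since 4^45,000 is far too large for floating point, the natural way to verify it is in base-10 logarithms; the figures below are the ones quoted above.

```python
import math

# Verify the quoted estimate 4^(50 * 300 * 3) = 4^45000 ~ 5.01 * 10^27,092.
# The number itself overflows floats, so work with its base-10 logarithm.
exponent = 50 * 300 * 3 * math.log10(4)   # log10 of 4^45000
power = math.floor(exponent)               # integer power of ten
mantissa = 10 ** (exponent - power)        # leading digits

print(f"4^45000 ~ {mantissa:.2f} * 10^{power}")  # 5.01 * 10^27092
```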
August 6, 2007, 11:40 AM PDT
Kairosfocus, thanks for your patience and continued efforts in illuminating the issues. I wanted to chime in and say this has been a great thread to follow. Olofsson, re: pointless. Maybe for you, but for others here, including myself, this has been a good thread. Please do not make assumptions for others. And as to insults, motivations and No More Mr. Nice Guy: please do realize you lowered yourself to insults in the hit piece in Skeptical Inquirer. And since you don't want to be considered "nice" anymore, get over it. I'd recommend a change in your sloganeering - try No More Mr. Materialist Nice Guy. And you yourself are stubborn, do not listen well, and frankly have formed your worldview.

Michaels7
August 6, 2007, 11:32 AM PDT
Patrick, That is pretty much one of my two objections, with the addition of "potential" to "direct" and "indirect" because, in my view, we need to consider motility devices in general, not just the flagellum. When you say that it's been "made quite clear" that the design inference is the "strongest explanation," I am sure that you are aware that there are many who disagree. In terms of the filter, which is what we concern ourselves with here, you cannot draw that conclusion as it has not been successfully applied to the flagellum. But there's more: even if it were, you could not really draw your conclusion because the filter infers design by elimination, not by comparison (which is the real issue of the chapter by Dr Dembski that has been cited in this thread many times by people who unfortunately don't understand it).

olofsson
August 6, 2007 11:24 AM PDT
PO, Assuming I'm comprehending you correctly, your article essentially boils down to the objection that the design inference for biological machines using the EF is not 100% certified unless all Darwinian pathways--indirect and direct--are tested first. This objection has been made before on UD but yours is the most coherent version yet (I mean that as a compliment) since you've explained the reasoning behind it and it's not just a "gut feeling" objection like most Darwinists make. First off, I don't think it's ever going to be possible to have 100% certainty. It's been made quite clear that at this point the design inference is the strongest explanation BUT new evidence could overturn it; that's the nature of science. PO, now if we were to make the design inference strong enough to be warranted in your opinion how many Darwinian pathways would need to be tested? Unfortunately, testing ALL of them isn't likely to be a reasonable/reachable goal. Fortunately the task could be made easier if we reject pathway scenarios that cannot work from an engineering perspective (like numerous non-functioning/non-useful parts evolving toward complexity for no apparent reason) and only consider scenarios that are feasible. Now the scenarios being proposed for the flagellum so far do suffer from much wishful thinking but--I could be wrong--at least they appear to be "reasonable" (as in, while extremely difficult still in the realm of being a possibility). Also, if necessity is a factor for the flagellum (which is currently unknown) I would presume this would take the form of a Direct Darwinian pathway. But from what I've seen all focus has rightly been put on Indirect pathways since there currently aren't any reasonable Direct Pathways.

Patrick
August 6, 2007 09:39 AM PDT
By the way, I hope nobody got the impression that I claim to have invented the "outboard motor" analogy for the flagellum. I don't know its origins, but I got it from Dr Dembski's book No Free Lunch.

olofsson
August 6, 2007 09:14 AM PDT
Joseph [168], I was explaining the rationale behind the UPB. Put the quote in its proper context. PO

olofsson
August 6, 2007 08:47 AM PDT
kf (con't) I think Behe's "Edge of Evolution" or at least the first half (I'm up to chapter eight) is aimed straight at the probabilistic resources of RM+NS. Rather than assuming it can do anything physically possible, Behe examines what it has actually done under observation in billions of trillions of rapid reproducers (malaria parasites) under intense selection pressure. Just as importantly, he examines what RM+NS failed to accomplish under intense selection pressure. The information Behe brings to bear on bounding RM+NS performance, nucleotide accurate, in an astronomically large population, wasn't available until recently. Unless the empirical observations can somehow be impeached as wrong, incomplete, or atypical, it strongly suggests random mutation plays only a small role in phylogenesis.

DaveScot
August 6, 2007 07:35 AM PDT
kf The bone of contention isn't the size of the search space per se. It's the probabilistic resources available to reduce it. RM+NS in theory can find a flagellum pattern in an otherwise impossibly large space of non-flagellum patterns. RM+NS is restricted in operation in that to have a reasonable chance of finding something there must be an incremental series of tiny steps that bring the pattern closer to a flagellum wherein each tiny step must (at the least) not be crippling to the intermediaries. That this series of steps exists and was traversed by RM+NS is taken as a matter of faith by Darwinists. Random mutation as the sole or only significant source of variation is taken as a given. It then follows that any change observed must be the result of random mutation. If a credible series of reasonably small steps can't be reconstructed it's chalked up as a failure of imagination in forensic reconstruction of random changes rather than a failure of imagination in mechanisms that cause the changes.

DaveScot
August 6, 2007 06:46 AM PDT
PaV, Salvador, Dave, other moderators, Nick Matzke preens in his interview with Jason Rennie about how he has solved the flagellum problem for Darwinian evolution. As he is off to Berkeley to get his PhD, he talks about how evolutionary biology researchers have come to him as the expert on the flagellum. Why not start a thread for comments on Nick's claims? The audio is available for all to listen to. And by the way, Nick is a master of calling ID people creationists. He does it several times in the interview. If anyone knows the difference, Nick does.

jerry
August 6, 2007 06:41 AM PDT
PPS: BTW, re PAV and PO on RR re flagellum, 177 - 178. The proper formulation of the issue here is in terms of CONFIGURATION SPACE, not rejection regions relative to statistical probability distributions. The finely tuned bio-functional state of the flagellum is so isolated therein that the probability of random search accessing it is minimal on the gamut of the observed cosmos. Let us take a cruder look than Mr Dembski does, which will bring out the underlying issues sufficiently. We have about 50 proteins at ~ 300 monomers each, in turn at 3 base pairs per DNA codon. So, we are looking at a DNA configurational space that we may crudely estimate at: 4^[50 x 300 x 3] = 4^45,000 ~ 5.01*10^27,092. This is comfortably beyond the reasonable range of a search in the gamut of the observed universe, even with imagined islands of functionality of say 10^500 possible states. [That is outlandishly far more than the number of cell-based organisms that will ever exist in our observed cosmos from birth at the big bang to eventual heat death.] Within that vast config space, we have known-to-be fine-tuned [cf. Minnich's empirical work] islands of functionality for the flagellum. We cannot credibly get to them from an arbitrary start-point in a space of 5*10^27,092 or anything near that exponent. Similarly, due to the interlocked, functionally fine-tuned complexity, we cannot get half a flagellum and have it work enough to be encouraged, or even 95% etc.; we have to have a flagellum, and worse, we have left off the food concentration gradient based control system [i.e. it is often used to move towards a source of nutrients]. A gradualistic incremental RM + NS approach is not credible. Since some 30 of the proteins are more or less unique to the flagellum, and the proposed TTSS is in fact evidently derivative, not the source, of the flagellum [which itself complicates the requirements on the code to embed a second system . . .], co-optation becomes a problem, too. 
So, relative to the chance + necessity based live options on the table, agency is the best warranted explanation. We may refine the calculation for various factors and issues, but the basic point will remain: we are well beyond the UPB, and we are looking at a far more constrained scope: evolution on Earth, ~ 6*10^24 kg, and a window of maybe a few thousand million years. In short, the constraints on the relative probabilistic resources are a lot tighter than we have used, and the complexity is in fact far more than we have used. So, let us focus on the issue, not on a strawmannish side-issue.

kairosfocus
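As a quick arithmetic check, the crude configuration-space estimate above can be reproduced in a few lines of Python. The 50 proteins x ~300 monomers x 3 bases figures are the comment's own rough assumptions, not measured values; the code only verifies the exponent arithmetic:

```python
from math import log10

# kairosfocus's assumed figures: 50 proteins x ~300 monomers x 3 DNA bases
bases = 50 * 300 * 3                  # 45,000 base pairs
log_states = bases * log10(4)         # log10 of the 4^45,000 config space
mantissa = 10 ** (log_states % 1)     # leading digits of the estimate
print(f"4^{bases} ~ {mantissa:.2f} * 10^{int(log_states)}")
# -> 4^45000 ~ 5.01 * 10^27092, matching the figure in the comment
```

Working in log10 sidesteps the astronomically large integer; Python's big integers could also compute 4**45000 exactly, which has 27,093 digits.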
August 6, 2007 03:34 AM PDT
PS: Lest we forget the focus of the thread: The above interaction with prof PO, IMHCO, aptly underscores the force of WD's comment at the top on Padian, NCSE and too many other critics of the EF and ID inference. If Prof WD is lurking, could he comment? Also, Atom, do I make more sense to you now? Why/why not?

kairosfocus
August 6, 2007 02:25 AM PDT
5] PO, 162: If there are 10^80 atoms, there are “10^80 choose 2″ pairs of atoms, which is 5 x 10^159. Actually, 10^80 is the number of particles, but we have been content to simply call it atoms. And, the estimate 10^150 is the number of quantum states of these atoms across the lifetime of the observed cosmos, which includes bonded states. Besides, due to the nature of the cosmos, we are not dealing with a simple random choice of any two atoms that can be addressed by a combinations calculation -- many atoms will not bond with others, and others are simply too separated -- inter-stellar or inter-galactic distances -- to chemically interact. BTW, PaV, recall: He [the no. 2 most abundant atom, a principal product of fusion of H in stars] is monoatomic, as it is a noble gas. Only under exceptional circumstances is it conceivable for it to become engaged in chemical bonding. 6] PO, 163, 165: There is a potential fallacy here because you could say about the flagellum, “hey, it looks like an outboard motor” and infer design. So, how would you run the pillars through the filter? . . . . [RE Trib's: Dembski points out that the formation of the flagellum could not have formed by chance.] That might be [WD's] opinion but he has not been able to conclude it by his filter. Besides, “necessity” was supposed to be ruled out first. First, the outboard motor issue -- which I raised (so PO has read me but chooses not to respond save at his convenience) -- is one of specification, not “it looks like.” The flagellum comprises 50 parts constructed based on DNA code, including some 30 unique proteins, and has a stator and rotor reversibly driving an external paddle to move the bacterium back or forth in a liquid medium. It IS an outboard motor, of a technology that emerging nanotech engineers are openly salivating over -- as once we copied the bat's sonar (another astonishingly fine-tuned, integrated and complex body-plan level system beyond Behe's empirically observed edge of evolution). 
Second, the system is -- by virtue of being contingent and caused based on application of a code -- not the product of necessity. [Indeed, its derivative, the TTSS, has a further contingency that is temperature sensitive.] So, by observing contingency, necessity has in fact long since already been addressed first. Indeed, to be considered as CSI, a system must first pass the test of contingency, going all the way back to the notes by Thaxton et al on the state of OOL research thought at the turn of the 1980's: order [a simple crystal or repeated digital string] vs complexity [aperiodic polymer, random text strings that are long enough] vs specified complexity [informational macromolecules such as DNA, meaningful text in English of long enough length to be complex]. Then, the distinction between chance and agency is assessed on inference to best explanation -- where Trib stumbles. Had he said “Dembski points out that the formation of the flagellum could not [CREDIBLY] have formed by chance [on the gamut of the observed cosmos]” he would have been spot on. PO has chosen to pounce on a conveniently poor formulation instead of addressing the real issue. On empirically constrained inference to best explanation, agency is the most credible current explanation of the functionally specified, fine-tuned, empirically demonstrated irreducible complexity [cf. Minnich!] in the flagellum. The inference to design of the flagellum, though of course provisional as are all scientific inferences of consequence, is of high credibility and IMHCO is unlikely to be reversed on the merits -- though the question is often selectively hyper-skeptically begged, as we see in the following . . . . 7] PO, 166: The problem with probabilistic inference is that even if something has a very small probability, it could still occur. H'mm: are you holding your breath waiting for all the oxygen molecules in your room to rush to one end, leaving you choking? 
By Stat Mech, that can happen, and the odds are similar to those of the formation of the flagellum by chance, or the formation of the text of the various messages in this thread by lucky noise, etc. [Cf. my always linked.] In short, we here come to the precise point Fisher was making way back, and which WD has now updated: when the probabilistic resources available in a situation are inadequate, it is not credible to infer that chance was responsible, relative to agency. We routinely make just that choice in many situations, but the problem is when that same principle cuts against our favourite ideas. On this, cf. my discussion of the attempts to expand the available cosmos to escape the force of the UPB, in my always linked; e.g. through postulating a quasi-infinite cosmos as a whole. This is a resort to pure empirically uncontrolled metaphysical speculation, and leaves on the table the challenge to compare other live options, e.g. the theistic one. To prejudicially exclude such an option at the worldviews table is to crudely beg the question. GEM of TKI.

kairosfocus
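The two bounds being contrasted in this exchange (PO's "10^80 choose 2" pair count and the 10^150 quantum-state figure) are easy to check with Python's arbitrary-precision integers. The 10^80 atom count is, as the comment notes, only the usual order-of-magnitude estimate:

```python
from math import comb

atoms = 10 ** 80          # rough particle count for the observed cosmos
pairs = comb(atoms, 2)    # PO's "10^80 choose 2", computed exactly
print(f"{pairs:.1e}")     # -> 5.0e+159, PO's "5 x 10^159"

upb = 10 ** 150           # the 10^150 quantum-states figure cited above
print(pairs > upb)        # -> True: the raw pair count alone exceeds it
```

Note that comb(10^80, 2) = 10^80 * (10^80 - 1) / 2, which is just under 10^160 / 2; hence the 5 x 10^159.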
August 6, 2007 01:59 AM PDT
3] Back to Caputo Observe, I have already cited [154] on why the Court ruled out the inference to a markedly and inadvertently biased selection process:
The first option-that Caputo chose poorly his procedure for selecting ballot lines, so that instead of genuinely randomizing the ballot order, it just kept putting the Democrats on top-was dismissed by the court because Caputo himself had claimed to use a randomization procedure in selecting ballot lines. And since there was no reason for the court to think that Caputo's randomization procedure [capsules from an urn or the like it seems, but with relatively few capsules] was at fault, the key question therefore became whether Caputo actually put this procedure into practice when he made the ballot line selections, or whether he purposely circumvented this procedure in order for the Democrats consistently to come out on top. And since Caputo's actual drawing of the capsules was obscured to witnesses, it was this question that the court had to answer. [Dembski, 1996. Of course, since the situation was highly contingent, it cannot be dominated by NR, which is why the first issue is a biased chance option . . .]
Now, contrast the discussion of this issue in PO, 2007 [cf. Link in 19 above]:
. . . In contrast [to the EF approach], a statistical hypothesis test of the data would typically start by making a few assumptions, thus establishing a model. If presented with Caputo's sequence and asked whether it is likely to have been produced by a fair drawing procedure, a statistician would first assume that the sequence was obtained by each time independently choosing D or R, such that D has an unknown probability p and R has probability 1 - p. The statistician would then form the null hypothesis that p = 1/2, which is the hypothesis of fairness. In this case, Caputo would be suspected of cheating in favor of Democrats, so the alternative hypothesis would be that p > 1/2, indicating that Ds were more likely to be chosen. [p. 7. BTW, this also underscores the point that PO is here critiquing the use of the EF in this case, contrary to what he has said above, cf. my comments in 20 - 21 on, and in 154 etc. NB: In the context of the excerpt WD is only spoken of as a design thinker and his relevant qualifications to address statistics are ignored.]
--> Let us note: from 1996 on, WD's FIRST OPTION was precisely: a biased chance process that diverted in favour of Ds away from what we would expect for a “fair coin” model. --> So, WD in fact did look at the “fair coin” and “biased coin” models first, and he followed the Court in accepting the credibility of the claimed procedure as being approximate to a fair coin. THAT is the context in which he then went on to look at the comparison of an alleged fair coin being at work vs deliberate action, i.e. design. --> So, on inference to best explanation, what is the likeliest and best explanation for the result: a 1 in 50 billion freak outcome, or Mr C yielding to the obvious temptations of a selection process that in the crucial stages was without witnesses? 4] And on broadening the rejection region [RR] . . . Here, prof PO said:
It is important to note that it is the probability of the rejection region, not of the individual outcome, that warrants rejection of a hypothesis. A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance. [p. 7]
--> The very first part, of course, partly aligns with WD, who pointed out that the issue is being in the extreme tail from 1 R/40 D to 0 R/41 D, i.e. he is looking at a Fisherian investigation of being so far out in the tail of a claimed probabilistic process that the likelihood of observing such a result is too low to accept chance as the best explanation. The odds of being in the tail from 1 R/40 D on is 1 in 50 billion, cf. my comments in 68 above. --> PO glides smoothly from that to an assertion that neither WD nor Fisher nor I would agree with: the notion that something as close to the peak as 19 R/22 D would be "evidence" of cheating. There is no warrant for that glide, and it in effect supplants the real issue with a convenient strawman, where it appears that the border of RRs is a matter of arbitrary selection to suit oneself. Not so at all! --> And of course, this is in its core the sort of “expansion of the RR” argument that WD deplored in his own 2005 paper, p. 4, responding to Bayesian objections:
what's to prevent . . . [so expanding the RR] that any sample will always fall in some one of these rejection regions and therefore count as evidence against any chance hypothesis whatsoever? The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice). Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity. [Note how WD plainly views specification as being a broader concept than RRs; i.e. on this too, PO was tilting at a strawman]
. . .

kairosfocus
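The Caputo tail probabilities traded back and forth above (the roughly 1-in-50-billion tail from 1 R/40 D on, versus the ~38% region of "at least 22 Ds") can both be checked directly. This is a plain binomial calculation under the fair-drawing null hypothesis, sketched here for illustration, not code from WD's or PO's papers:

```python
from math import comb

n = 41                  # ballot drawings in the Caputo case
total = 2 ** n          # equally likely D/R sequences under p = 1/2

# Extreme tail: 40 or 41 Ds (Caputo's 40 D / 1 R outcome or worse)
extreme = comb(n, 40) + comb(n, 41)     # 41 + 1 = 42 sequences
print(f"1 in {total / extreme:.2e}")    # ~1 in 5.2e10: the "1 in 50 billion"

# PO's broader rejection region: at least 22 Ds
broad = sum(comb(n, k) for k in range(22, n + 1))
print(f"{broad / total:.0%}")           # -> 38%, the figure in PO's paper

# Any single specific sequence (e.g. 22 Ds and 19 Rs in a given order)
print(f"1 in {total:.2e}")              # ~1 in 2.2e12: the "1-in-2-trillion"
```

The contrast the thread is arguing over is visible in the numbers: the probability of a rejection region depends entirely on where its border is drawn, which is why the low-complexity-pattern restriction WD describes matters.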
August 6, 2007 01:37 AM PDT
PaV, Joseph, Trib & Prof PO: The onward back-forth since 158 above is aptly illustrative of what happens when material evidence in a situation is suppressed or ignored or subjected to selective hyper-skepticism, which becomes very interesting in light of Prof PO's “lighthearted” statement in 164: What happened to Kairosfocus? I haven't heard from him after he alleged that I was a KKK supporter. Now, of course I nowhere made any such allegation, but that's beside the point. What is on-point is that we have in hand -- cf. my provided links in 157 -- a 1996 presentation of WD's case, and an associated discussion of the statistical theory-based objections to it, as made in 2005, both available to an Internet search, though the first did take a bit of effort on my part. We also have at least the lead of a NYT article, which highlights that the “run” in the Caputo case had played out over the span of “decades.” Similarly, by his own confession, Prof PO knew that the origin of the tornado-in-a-junkyard analogy for the odds against forming life-systems by chance-dominated forces originated with Sir Fred Hoyle, but only highlighted that “Creationists” use it, as the lead to his critique of WD's ID work [and rhetorically one makes his hardest-hitting point first of all] -- where neither the man nor the movement are “Creationist” -- without engaging the underlying statistical thermodynamics that underlies Hoyle's remarks. [NB: In my always linked through my handle, onlookers can see in Appendix 1 an introductory level survey of that thermodynamics and its implications, including under point 6, a scaling down of the 747 to a vat with a disassembled micro-jet in it to be assembled by the random forces responsible for Brownian motion. BTW, the stat mech based explanation of the roots of Brownian motion was a material factor in Einstein's Nobel Prize; oddly, his work on Relativity was deemed “too controversial” at that time, to be a significant contributor. 
That in itself is telling on the limitations of peer review.] The mere fact that Prof PO's “preprint” of a peer-reviewed article due for publication shortly does not fairly cite and engage the material facts brought forth above is sufficient to severely undermine our confidence in anything he proposes in his analysis -- as you, others and I have now detailed above several times. However, certain points from the onward discussion and the Caputo case are worth further mention: 1] Joseph, in 167: 1) Did it have to happen? 2) Did it happen by chance? 3) Was it designed (to happen)? By asking the questions in that order [the explanatory filter] prevents any bias towards a design inference . . . . And if someone doesn't use the EF when they attempt to detect design I would love to know what process they use. Any ideas? Joseph, you have struck the nail on the head, hard and sure, exposing the underlying selective hyper-skepticism at work. For, we finite, fallible, materially ignorant and too often outright deceived creatures are simply incapable of formulating all possible hyps in a situation -- indeed, that is one reason why Occam's Razor about preferring simple explanations is so important; as, that simplifies the set of live options to look at wonderfully! Indeed, ever since Plato in his The Laws, 2,400 years ago [cf. my Appendix 2 in my always linked], cause has been understood to originate in one or more of the above: law-like natural regularity, chance, agency. What is happening here is a case of selective hyper-skepticism, where because the inference to agency opens a door to a philosophical option where many would not wish to go, a question-begging assertion or assumption is used to lock the door, where in other cases where that philosophical question is not at stake, they would not dream of being so skeptical. Indeed, this is underscored by your . . . 2] J, 167: And we don't have to rule out all possibilities to arrive at a design inference. 
To demand that all possibilities be ruled out prior to arriving at a design inference is akin to asking for proof of design. Science isn't in the proving business. Like all scientific inferences the design inference is tentative and can either be confirmed or refuted by future research. In short, Science is -- properly speaking -- an empirically constrained, open-ended, provisional search for the truth about our world, based on the epistemology of inference to best current explanation, i.e. Peirce's logic of Abduction. As high-quality dictionaries put it (and I deliberately use two that pre-date the current ID controversies):
science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965]
So, it won't do to beg major questions and block otherwise credible options across the set NR, Chance, Agency, would it? Or, do we KNOW -- how so? -- that an agent could not have been involved in: the origin of the cosmos as we know it, the origin of cell-based life, the body-plan level diversification of that life [e.g. in the Cambrian revolution, the bacterial flagellum], the origin of mind? Why is the possibility of agency in these situations then so stoutly resisted, even to the point of career busting and the distortion of the positions of those who argue that there is warrant for considering and indeed for inferring to agency in light of the glorified common sense embedded in the explanatory filter? . . .

kairosfocus
August 6, 2007 01:32 AM PDT
PaV, Short answers, "no" and "yes." I don't think it can ever be done in a way such that everybody would agree that it's final. But, as Joseph pointed out earlier, "science isn't in the proving business." [Note to potential newcomers: the issue whether the flagellum is actually designed or due to chance is not at issue, only whether this can be decided by the EF.]

olofsson
August 5, 2007 03:25 PM PDT