Uncommon Descent Serving The Intelligent Design Community

Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley


Kevin Padian’s review in NATURE of several recent books on the Dover trial says more about Padian and NATURE than it does about the books under review. Indeed, the review and its inclusion in NATURE are emblematic of the new low to which the scientific community has sunk in discussing ID. Bigotry, cluelessness, and misrepresentation don’t matter so long as the case against ID is made with sufficient vigor and vitriol.

Judge Jones, who headed the Pennsylvania Liquor Control Board before assuming a federal judgeship, is now a towering intellectual worthy of multiple honorary doctorates on account of his Dover decision, which he largely cribbed from the ACLU’s and NCSE’s playbook. Kevin Padian, for his yeoman’s service in the cause of defeating ID, is no doubt looking at an endowed chair at Berkeley and membership in the National Academy of Sciences. And that for a man who betrays no more sophistication in critiquing ID than Archie Bunker.

Kevin Padian and Archie Bunker

For Padian’s review, see NATURE 448, 253-254 (19 July 2007) | doi:10.1038/448253a; Published online 18 July 2007, available online here. For a response by David Tyler to Padian’s historical revisionism, go here.

One of the targets of Padian’s review is me. Here is Padian’s take on my work: “His [Dembski’s] notion of ‘specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”

Well, actually, my work on the explanatory filter first appeared in my book THE DESIGN INFERENCE, which was a peer-reviewed monograph with Cambridge University Press (Cambridge Studies in Probability, Induction, and Decision Theory). This work was also the subject of my doctoral dissertation from the University of Illinois. So the pretense that this work was not properly vetted is nonsense.

As for “the withering criticism” of my work “from actual mathematicians,” which mathematicians does Padian have in mind? Does he mean Jeff Shallit, whose expertise is in computational number theory, not probability theory, and who, after writing up a ham-fisted critique of my book NO FREE LUNCH, has explicitly notified me that he henceforth refuses to engage my subsequent technical work (see my technical papers on the mathematical foundations of ID at www.designinference.com as well as the papers at www.evolutionaryinformatics.org)? Does Padian mean Wesley Elsberry, Shallit’s sidekick, whose PhD is from the wildlife fisheries department at Texas A&M? Does Padian mean Richard Wein, whose 50,000-word response to my book NO FREE LUNCH is widely cited — Wein holds no more than a bachelor’s degree in statistics? Does Padian mean Elliott Sober, who is a philosopher and whose critique of my work along Bayesian lines is itself deeply problematic (for my response to Sober go here)? Does he mean Thomas Schneider, who is a biologist who dabbles in information theory, and not very well at that (see my “withering critique” with Bob Marks of his work on the evolution of nucleotide binding sites here)? Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them. But as I indicated in that book, it was about sketching an intellectual program rather than filling in the details, which would await further work (as is being done at Robert Marks’s Evolutionary Informatics Lab — www.evolutionaryinformatics.org).

The record of mathematical criticism of my work remains diffuse and unconvincing. On the flip side, there are plenty of mathematicians and mathematically competent scientists who have found my work compelling and whose stature exceeds that of my critics:

John Lennox, who is a mathematician on the faculty of the University of Oxford and is debating Richard Dawkins in October on the topic of whether science has rendered God obsolete (see here for the debate), has this to say about my book NO FREE LUNCH: “In this important work Dembski applies to evolutionary theory the conceptual apparatus of the theory of intelligent design developed in his acclaimed book The Design Inference. He gives a penetrating critical analysis of the current attempt to underpin the neo-Darwinian synthesis by means of mathematics. Using recent information-theoretic “no free lunch” theorems, he shows in particular that evolutionary algorithms are by their very nature incapable of generating the complex specified information which lies at the heart of living systems. His results have such profound implications, not only for origin of life research and macroevolutionary theory, but also for the materialistic or naturalistic assumptions that often underlie them, that this book is essential reading for all interested in the leading edge of current thinking on the origin of information.”

Moshe Koppel, an Israeli mathematician at Bar-Ilan University, has this to say about the same book: “Dembski lays the foundations for a research project aimed at answering one of the most fundamental scientific questions of our time: what is the maximal specified complexity that can be reasonably expected to emerge (in a given time frame) with and without various design assumptions.”

Frank Tipler, who holds joint appointments in mathematics and physics at Tulane, has this to say about the book: “In No Free Lunch, William Dembski gives the most profound challenge to the Modern Synthetic Theory of Evolution since this theory was first formulated in the 1930s. I differ from Dembski on some points, mainly in ways which strengthen his conclusion.”

Paul Davies, a physicist with solid math skills, says this about my general project of detecting design: “Dembski’s attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I’m concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” Apparently Padian disagrees.

Finally, Texas A&M awarded me the Trotter Prize jointly with Stuart Kauffman in 2005 for my work on design detection. The committee that recommended the award included individuals with mathematical competence. By the way, other recipients of this award include Charlie Townes, Francis Crick, Alan Guth, John Polkinghorne, Paul Davies, Robert Shapiro, Freeman Dyson, Bill Phillips, and Simon Conway Morris.

Do I expect a retraction from NATURE or an apology from Padian? I’m not holding my breath. It seems that the modus operandi of ID critics is this: Imagine what you would most like to be wrong with ID and its proponents and then simply, bald-facedly accuse ID and its proponents of being wrong in that way. It’s called wish-fulfillment. Would it help to derail ID to characterize Dembski as a mathematical klutz? Then characterize him as a mathematical klutz. As for providing evidence for that claim, don’t bother. If NATURE requires no evidence, then certainly the rest of the scientific community bears no such burden.

Comments
Hi Atom: I just love that “we.” Greet our lovely “Light” for us all! PO will be missed, indeed. The weather is finally clearing up [still windy, power came back late afternoon – a lot of poles were knocked down], and indeed let us pray that we have a milder season than is feared. GEM of TKI kairosfocus
And by Spet I mean Sept. :) Atom
GEM, G-d be with you dealing with Dean. I too am hoping for a storm that either misses land, or softens before it does. (We're visiting the region in Spet, and are praying for a LACK of hurricanes and hurricane devastation.) PO, you'll be missed. Just know there are IDers who are willing to discuss the difficult questions at length without ending dialogue. Hopefully some of us demonstrated that. Atom
Hi KF. Hope things are well. restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. Here is what I was thinking. The example I gave of the proteins being placed in proximity and waiting for chance to form them into a flagellum is silly (which PO seems to recognize, i.e. 500 years etc). BUT that seems to be what PO is asking that we consider in his chance calculations (tornado in a junkyard and all that). UNTIL PO says things such as "we need to consider what objections an evolutionary biologist would have to the particular chance hypothesis that Dr D chooses" (cf. 170). Which suggests that PO is the one who is then putting limitations on what can be considered, including the rejection region. Now, evolutionary biologists don't posit proteins randomly winging themselves together into a flagellum. They start with a flagellum-less bacterium and say the DNA code is added to it somehow, which then directs RNA to make the proper proteins in the proper sequence to give it its flagellum. Now, granted, Dembski's famous calculation involves proteins lining themselves up by chance (and shows that it is pretty silly to think that they did) but -- if memory serves -- he also had a probability calculation of the undirected formation of the DNA code specifying such a line-up that ended up being the same as the winging-the-proteins-together scenario. Anyway, I think he conceded that if the code was in the DNA the flagellum was a certainty. As far as I can tell, the rebuttal against Dembski is that he hasn't calculated for some unknown natural force that might cause DNA code to expand or change to program for things like a flagellum. That, of course, is a faith statement. I said it before, it puts them in the same category as YEC. tribune7
All, First, weather situation M'rat: Seems Dean is cutting into the region just N of Barbados, so, so far, just wind here, maybe up to 40 – 50 in gusts. (Now my concern is that it may have done a number on the farmers in Dominica and St Lucia. But more so, projections put it very near Jamaica at Cat 4 Sunday. Let's hope and pray it does an Ivan if so – ducks away from Jamaica by a mysterious swerve. And onward let's hope it does not do a Katrina etc.) On a few notes: 1] Prof PO: It seems he has been excluded, and from the situation, maybe what was intended as a light-hearted remark was taken a little strongly. He expressed appreciation to me and to the blog. As perhaps his most strongly objecting interlocutor, I think his time here was a contribution of value, on balance, and hope that we will be able to hear from him again. 2] PaV, 379: “PaVian Simmerisation” Chuckle, chuckle, chuckle . . . . There, you got three whole chuckles! 3] I really would like you to respond to the question of why any other chance hypothesis ‘needs’ to be considered in the Caputo case. PO is not here, so I will note that in 361, I showed where the Bayesian algebra leads. Taking a ratio across alt hyps, if one knows the probabilities of the hyps, one can then infer whether or not evidence supports one over the other. Fine if you have that data and computational power. But, we need not look there to see that, as Stove points out, by far and away most subsets of a population are similar to the pop, i.e. reflect its average, cf my darts and charts example in 68 above. Thus, very rare patterns such as show up in Caputo are utterly unlikely to be in a sample, raising suspicion of “cooking” the data. Thence we can see that this is an instance of a search on a configuration space where the islands of functionality are sparse/isolated; sufficiently so that random walks are unlikely to hit them. Thence, tornadoes in junkyards assembling 747s and/or microjets in vats [app 1, always linked, point 6] assembled by diffusion are utterly improbable. I will keep away from the headaches of multinomial distributions, thank you – I ain't a “statistrician” ;-) . 4] Trib: restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. Could you kindly explain/expand a bit what you mean here? GEM of TKI kairosfocus
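For onlookers, the sampling point in 3] can be checked directly. A minimal simulation sketch in Python, assuming only the fair-coin chance model and the Caputo numbers already used in this thread (the trial count is an arbitrary illustration):

import random

# Simulate many 41-draw ballots with p = 1/2 for D, and count how often a run
# as lopsided as Caputo's (40 or more D's) turns up. Trial count is illustrative.
TRIALS = 200_000
lopsided = sum(
    1 for _ in range(TRIALS)
    if sum(random.random() < 0.5 for _ in range(41)) >= 40
)
print(f"{lopsided} of {TRIALS:,} random samples had 40 or more D's")
# The exact chance is (41 + 1) / 2^41, about 1 in 52 billion, so a run of a few
# hundred thousand samples will essentially always report zero: rare patterns
# of this kind simply do not show up in samples of realistic size.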
OK, Sal. I could have continued this discussion for a little while longer, I think. PO, if you peek back, restricting oneself to guidelines set forth by evolutionary biologists is akin to what W.D. does and hence limits the rejection region one could consider. The example I provided was a pure chance one. Well, almost pure chance. I did cheat a little. tribune7
Atom, PaV, tribune7, jerry, Patrick, and Kairosfocus, etc. P.O. and I communicated on other matters outside of UD since he is a mathematician and I have an interest in math. He expresses his thanks to everyone here for a stimulating exchange. He will not be returning, and he is sorry he won't be posting here anymore. He asked me to convey his appreciation for you all. scordova
P.O. Alert!!! P.O. Alert!!! There's an article in this week's Science magazine you'll want to look up at the University. Here's an excerpt. It's entitled "Crystal Structure of an Ancient Protein:..." They write: Here we report the empirical structures of an ancient protein, which we “resurrected” (12) by phylogenetically determining its maximum likelihood sequence from a large database of extant sequences, biochemically synthesizing a gene coding for the inferred ancestral protein, expressing it in cultured cells, and determining the protein’s structure by x-ray crystallography. Specifically, we investigated the mechanistic basis for the functional evolution of the glucocorticoid receptor (GR), a hormone-regulated transcription factor present in all jawed vertebrates (13). GR and its sister gene, the mineralocorticoid receptor (MR), descend from the duplication of a single ancient gene, the ancestral corticoid receptor (AncCR), deep in the vertebrate lineage ~450 million years ago (Ma) (Fig. 1A) (13). How timely, no?! PaV
P.O. [363]: "Before we kill the thread, let's kill Caputo! I was talking about your flagellum calculations and only meant that your chance hypothesis is similar to Caputo p=1/2, and asked how you'd rule out other chance hypotheses (similar to Caputo p=37/38). I was very unclear, sorry! Shouldn't have mentioned him at all!" Frankly, I was working the other way around. That is, I thought it would be better to get some clarification as to how the 'likelihood' approach actually works in a simple case before we try to tackle a more complicated one. So, in using the example of the stock market closing price as the determiner of D's and R's, surely a random happening, I was trying to get at what other chance hypotheses might need to be eliminated to satisfy a 'likelihood' statistician. IOW, I really would like you to respond to the question of why any other chance hypothesis 'needs' to be considered in the Caputo case. Changing the subject a little bit, and in a way that perhaps anticipates what might come next, I would ask you if you think a multinomial distribution could be used for statistical analysis in the case of proteins. What are your thoughts about that? I have to rush off. See you all later. P.S. BTW, kairosfocus, I knew what you were getting at with the "PaVian Simmerisation"; I just thought I'd get a chuckle from you! ;) PaV
Prof PO (and all . . .) Seems there is life yet in this thread that keeps going and going. Okay, a few observations: 1] David Stove's point on induction, empirical support to a claim and sampling On another thread, I ran across the name and followed it up fruitfully – UD is good at stimulating that sort of thing. In taking on Hume, Kuhn, Popper et al, he makes some points that jump off the page at me on the stuff discussed above, and at least for the onlookers I think the ideas he raised are well worth at least an excerpt or two. So, PaVian style, the “simmerised” core stuff [quite relevant to the underlying context of the above], courtesy that ever so humble and, on ID, too often biased Wiki (which has onward links that get to whole books online, drop me a line through contacts via my always linked . . .):
[Negative task] Consider a claim such as “All ravens are black”. Hume argued that we don’t know this a priori and that it cannot be entailed from necessary truths. Nor can it be deduced from our observations of ravens . . . . Stove argued that Hume was presuming “deductivism” . . . the view, explicitly or implicitly accepted by many modern philosophers, that the only valid and sound arguments are ones that entail their conclusions. But if we accept that premises can support a conclusion to a greater (or lesser) degree without entailing it, then we have no need to add a premise to the effect that the observed will be like the unobserved - the observational premises themselves can provide strong support for the conclusion, and make it likely to be true. Stove argued that nothing in Hume’s argument shows that this cannot be the case and so Hume’s argument does not go through, unless one can defend deductivism. This argument wasn’t entirely original with Stove but it had never been articulated so well before. Since Stove put it forward some philosophers have come to accept that it defeats Hume’s argument . . . .
So, it comes down to defeatable but credible warrant, where we may know reliably enough for real-world purposes, but only provisionally. [Thus, “all men live by faith; the issue is which one, why?”]
[positive task] it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the subsets which contain 3000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequen[tl]y, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified [NB:following Plantinga, I would use “warranted”] in concluding that it is likely that this subset ‘matches’ the population reasonably closely . . .
Thus we see that sampling [esp if random or nearly so] tends to reflect the population's “average”, i.e. the same basic point that statistical thermodynamics is premised on, and which is very relevant to – here it comes again – Caputo. Thus too, unless we have a large enough sample, we are unlikely to see the extreme strongly represented, on a chance basis. Further to this, generalising to a configurational space and the task of searching more or less at random, we are far more likely to see states reflecting clusters of configs that are common, than clusters that are rare; absent feeding in enough probabilistic resources to make it likely to climb Mt Improbable. And, random walks in a space where the criterion of inclusion in the specified set is functionality, are vastly unlikely to ever arrive at a functional state to begin with. This is for reasons as identified in my always linked, App 1, point 6, on the micro-version of Hoyle's “tornado in a junkyard builds a 747 by chance.” 2] PO, 377: I assumed Kf was just joking about the bacteria in the dish; no evolutionary biologist has to my knowledge claimed that all the proteins were sitting in the bacterium and suddenly assembled by chance H'mm, if you look back at 360, you will see that the example was set up by Tribune 7, not me. But, too, you will see by comparing the just referenced point in my always linked, that this is essentially the same thing as my microjets example addresses; cf the exchange with Trib at 364 (pt 3) and 366, on the issue of the thermodynamics of diffusion, which is what would dominate. In a nutshell: the number of scattered microstates – the cells in the locational space would be of order 10^-9 to 10^-8 or so metres [1 cc would have ~ 10^18 locational cells of 10^-8 m scale in it, and you are here dealing with dozens to hundreds of parts] – so overwhelms the clumped, much less the functionally configured, that we probably will have to wait longer than the observed cosmos exists for even EXISTING, known to be functional, proteins to assemble into a flagellum by chance-dominated processes. [Tornado in a junkyard statistics again.] And, that is before you get to the issue of forming the functional proteins by chance and co-option, requiring a lot of DNA coding as discussed above, up to 27,000 to 45,000 base pairs worth! (Not to mention the underlying issue of forming the life forms in the first place out of prebiotic chemistry in realistic environments.) 3] I promise not to use the C-word unnecessarily That is where a lot of the trouble on rhetoric started; cf just above to see the force of the Hoylean point that HM et al picked up, probably because of their thermodynamics training in Chemistry and/or Engineering Sciences. 4] let's go look for some E coli and get started! E coli are rather easy to find – being a major life form in sewage. But I think Biological supply houses have all sorts of “pet” strains used in studies. [I think there was a flap about producing and using lab strains that would not thrive in the “wild”; for obvious reasons.] On the challenges of the expt's design, cf. just above. Funding will be a bear, in that light. GEM of TKI kairosfocus
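A rough back-of-envelope check on the locational-cell arithmetic above; the 50-part count and the 1,000-cell "clumped" region in this sketch are illustrative assumptions, not figures taken from the discussion:

from math import comb, log10

# Count 10^-8 m "locational cells" in 1 cc, then compare how many ways 50 parts
# can be scattered anywhere in the volume versus confined to a small clump.
cell_edge = 1e-8                 # metres
cells_per_cc = (1e-2) ** 3 / cell_edge ** 3
print(f"locational cells per cc: ~10^{log10(cells_per_cc):.0f}")   # ~10^18

parts = 50                       # illustrative number of parts
scattered = comb(10**18, parts)  # parts placed anywhere in the vat
clumped = comb(1_000, parts)     # parts confined to ~1,000 adjacent cells
print(f"scattered outnumber clumped configs by ~10^{log10(scattered) - log10(clumped):.0f} to 1")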
...and I'm back! tribune7, Yes, I just pulled a number for fun. I assumed Kf was just joking about the bacteria in the dish; no evolutionary biologist has to my knowledge claimed that all the proteins were sitting in the bacterium and suddenly assembled by chance (not sure what Stuart Kauffman thinks though). I thought we'd end the thread on a light note. As for wording, I promise not to use the C-word unnecessarily (but some actually call themselves C-ists). So, let's go look for some E coli and get started! Cheers, Prof PO olofsson
Maybe 500 was optimistic. Let's say 600 and you prove me wrong! PO, you have a strong background in statistics. You throw out a number. I make the reasonable assumption you might have some mathematics behind it. OK, maybe you don't. Maybe you pulled that number out of the blue to make a debate point (i.e. we can never know the truth in our lifetimes.) Of course, dontcha think that line of reasoning kinda makes your paper pointless? Actually, that line of reasoning makes the entire debate on evolution pointless, i.e. we can never be sure of the answer so it's all a matter of faith so let's not worry about the science. Actually that comes close to what evolutionists sometimes appear to believe :-) Anyway, work on your word choices even in blog discussions. Or maybe you do have math to back up your point. I'd be interested in seeing it. tribune7
PO The malaria parasite evolved precisely as IDists expected it to do given the average mutation rate of eukaryotes. It isn't "doing just fine" either. Its range is severely restricted by climate and there's a man-made drug killing it by the billions of trillions. Except for point mutations, which statistical probability excludes from building into chained interdependent structures, it hasn't evolved anything at all. If it's something that requires more than one or two changes in the genetic code, the odds of it getting done are virtually nil. 10^20 replications is orders of magnitude more than all the mammals that ever lived. At this point in time you appear to be in denial. If that persists I'll be asking you to leave. People in denial of plain evidence aren't welcome here. DaveScot
All: It seems plain we have reached a reasonable consensus at length. GEM of TKI PS: Re, PO, 373: Let’s say 600 and you prove me wrong! (BTW, thanks on the Algebra.) --> Cf. Trib at 366, esp his comment on: Appendix 1, always linked, point 6 kairosfocus
tribune7 [368], Maybe 500 was optimistic. Let's say 600 and you prove me wrong! Seriously guys, I'm out of here. Die thread, die! :D :D :D olofsson
DaveScot, The malaria parasite seems to be doing just fine without "novel new structures." Getting those mutations that gave chloroquine resistance was quite an achievement, kind of like when Team USA beat the Soviet Union in hockey in 1980. olofsson
Kf, Your algebraic manipulations look just fine. I already told Michaels7, "No More Mr Nice Guy" is an Alice Cooper song, and I think each scholarly paper should include the title of a rock song. olofsson
Kf [365], I'd start right now if I only knew how to find some E. coli... olofsson
tribune7 [349], Do I need to go into rehab...? PO olofsson
Dave, For the life of me I can't figure where PO is coming up with his 500-year estimate. He should find someone to bet with. I think it's more likely he lives 500 years than the proteins form a flagellum. tribune7
tribune7 We watched the malaria parasite for 10^20 replications and it didn't evolve any new structures at all. All it did was find a single point mutation (quite often) that confers atovaquone resistance and a two-point mutation (very rarely) that confers chloroquine resistance. It was unable to defeat each of two different human hemoglobin mutations, it was unable to find a different host, and it was unable to find a way to live in temperatures under 64F. Either falciparum is exceedingly atypical of eukaryote evolution or random mutation is an utter failure at building novel new structures. DaveScot
KF!!! You do have an answer in your Appendix 1, always linked, point 6!!! Excellent! tribune7
How long do you think we'd have to watch before we get a flagellum? . . . I'd say about 500 years. When do we start? That was my question :-) Somebody should be encouraged to start as soon as possible, if you really believe it would take just 500 years. It would make more sense than SETI. (And it would go in the history books if it should succeed) And just curious -- I'm really not trying to get to 400 -- why do you say 500 years? tribune7
All: Looks like the thread is really coming to an end, i.e. a point of definite consensus – and stated by no less than . . .
Prof PO, 363: I’m trying to convince people here that there is nothing extraordinary about the EF which is clear from Dr D’s constant references to concepts from mathematical statistics. There is also nothing repulsive about the EF . . .
I agree, the problem is not with the concept of the EF or its utility [both pretty straightforward and successful in loads of cases all across science, statistics and management, even court rooms], but with what happens when it runs head on into the assumptions and assertions of core ideas, icons and cases in the reigning evolutionary materialist paradigm. Then, all the objections and debate tactics we know so well come into play. BTW, prof PO, no points to pick on my always suspect algebraic substitutions above on conditional probabilities and likelihoods, etc? [I normally have someone else review any Math I do/outline/summarise for serious reasons, before putting it out in public!] Now on a few closing [?] notes: 1] PaV, 362: Simmerisation . . . You have presented yet another fine example of boiling the matter down to essentials in 362. (A happy typo inspired me to name the approach after the leading practitioner at UD, a certain PaV.) I only add to that, that given the declaredly fair method allegedly in use, a long run should have alerted a fair-minded pol that something was going wrong: the split should be observed pretty close to 1/2, given the dominance of near 50-50 outcomes in the relevant population. So we have at least design by negligence, as I noted several times. 2] PO, 363: C was only used as an example But, of what, given the way in which you built up to it, then handled it, then segued into “No more Mr Nice Guy”? ;-) 3] PO, 363: Question The Q was by Trib, not me. I already have an answer in Appendix 1 of my always linked, point 6, on clumping and functional clustering by chance processes – essentially diffusion. The scattered states absolutely predominate, I am afraid, so once diffused in, the molecules will normally spread out at random like an ink drop. Absent drying out, the wait will be long, long indeed – compare the relative statistical weight of scattered vs clumped vs functionally configured microstates for nanometre-scale cells in a beaker or even just a test tube. AKA, why do you think the cell has such a tight, complex, interlocking set of nanomachines to carry out its work? [No prizes for guessing that diffusion is as a rule not controllable enough . . .] GEM of TKI kairosfocus
PaV, Oh dear, oh dear, oh dear, I'm so sorry! Before we kill the thread, let's kill Caputo! I was talking about your flagellum calculations and only meant that your chance hypothesis is similar to Caputo p=1/2, and asked how you'd rule out other chance hypotheses (similar to Caputo p=37/38). I was very unclear, sorry! Shouldn't have mentioned him at all! From the outset, C was only used as an example. There is nothing more to say about it. If I ever write about this again, I'll choose another example indeed! Caputo is dead. Not sure what my particular brand is; it's all basic stuff and any mathematical statistician would tell you the same thing. I'm trying to convince people here that there is nothing extraordinary about the EF which is clear from Dr D's constant references to concepts from mathematical statistics. There is also nothing repulsive about the EF and there are other ID critics who are annoyed with me for being far too nice to yall. Ah, the maverick Prof PO has to fight left and right! As for Kf's question How long do you think we'd have to watch before we get a flagellum? I'd say about 500 years. When do we start? :D :D :D olofsson
P.O. As for your calculations, let's say (for the sake of argument!) that you manage to rule out the uniform chance hypothesis (Caputo p=1/2). But how do you rule out other chance hypotheses (Caputo p=37/38)? Recall that these chance hypotheses would be formed by considering billions of years of evolution, not an easy task. I'm finally beginning to see how you, using your particular brand of statistics, are looking at this scenario. When you write . . . . When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred (but only in the sense that chance is ruled out, there is no alternative design hypothesis of how he cheated). . . . . I see in your example of the Roulette wheel what you mean by a different "chance hypothesis". So, at least, I see the way in which you're approaching all of this, though, of course, I disagree with it! Here's how I disagree: in the first quote above, you ask: "But how do you rule out other chance hypotheses (Caputo p=37/38)?" My answer would be that you would rule it out because there is nothing to suspect that Caputo used a Roulette wheel. IOW, what is suspicious about what Caputo did is that, living in the United States, knowing that there are only two major political parties, and that those parties are oftentimes abbreviated by using D or R, AND, under the procedure specified by Caputo himself, wherein the procedure was supposedly set up so that EACH Democrat and Republican had an equal chance of getting to the top of the ballot (i.e., p=1/2), then 40 D's and 1 R is simply suspicious. The ONLY thing a statistician has to do in such a situation is "eliminate" the p=1/2 scenario. Once this is done, then "chance" has been eliminated. So, what do we do next? We examine the machine/software (whatever it was he used) to see if something is defective. If it is not defective, we have now ruled out any "natural causes" for the skewed outcome. That leaves us with "design". Now, if it turns out that the machine/software used turns out to be "defective", then one would conduct a "forensic" investigation, trying to determine whether or not the machine/software was tampered with. If it had been tampered with, then we're back to "design". That's all we would have to do. The only reason I can think of for even entertaining the prospect that a Roulette wheel was used would be if Caputo himself said that this is what he did. But, now, the issue would no longer be the results---which fit perfectly with the method employed in such a case---but "why" he chose to use such a method. If we're "forced" to rule out the Roulette wheel, then should we also have to rule out the "chance" hypothesis that the way the 40 D's and 1 R came about was through the use of the stock market, so that, over a forty-one day period, every day that the stock market closed up, Caputo selected a D, and every day it closed down, he chose an R? Why is this "possible chance hypothesis" important in any way? I just don't see how, or why, any of this would be important. We can get into what limited number of "chance" models Nature affords us in the construction of proteins (something I've already alluded to in prior posts), but I'd like to get your reaction to this straightforward objection before we get around to more difficult entanglements. P.O. If it doesn't die on its own, we'll beat it to death. This thread is like a Timex watch: "It takes a licking, and keeps on ticking!" (BTW, I'm away for the rest of the day.) Hey, kairosfocus, what is PaVian Simmerisation?
Is it some kitchen technique they use down in the Caribbean? PaV
PS: AN -- after nap -- we often wish to find evidence to support a theory, where it is usually easier to show that the theory makes the observed evidence “likely” to be so [or at least, accepting it makes believing the theory more plausible on whatever scale of weighting subjective probabilities we may wish etc . . .]. So we have to move: p[E|T] --> p[T|E]; at least, on a comparative basis. PaVian Simmerisation, and if my substitutions are worked out right: a] Now, first, look at p[A|B] as the ratio, (fraction of the time we would expect/observe A AND B to jointly occur)/(fraction of the time B occurs in the POPULATION), on the easiest interpretation of p's to follow. b] Thus p[A|B] = p[A AND B]/p[B], or, p[A AND B] = p[A|B] * p[B] c] By “symmetry” -- the easiest way to do this, I think -- we see that also p[B AND A] = p[B|A] * p[A], where the two joint probabilities are plainly the same, so: p[A|B] * p[B] = p[B|A] * p[A], which rearranges to . . . d] Bayes' Theorem, classic form: p[A|B] = (p[B|A] * p[A]) / p[B] e] Substituting, p[T|E] = (p[E|T] * p[T])/ p[E], with p[E|T] being here, by initial simple def'n, L[T|E], the likelihood of theory T given evidence E, at least up to some constant. But, where do we get p[E] and p[T] from – a hard problem with no objective consensus answers, in too many cases. (In short, we are looking at a political dust-up in the relevant institutions.) f] This leads to the relevance of the point [which is where a lot of things come in] that a certain ratio, LAMBDA, is: L[h2|A]/L[h1|A], and is a measure of the degree to which the evidence supports one or the other of competing hyps. g] So, p[T1|E] = p[E|T1]* p[T1]/p[E], and p[T2|E] = p[E|T2]* p[T2]/p[E], so also: p[E|T2]/ p[E|T1] = L[T2|E]/ L[T1|E] = {p[T2|E] * p[E]/p[T2]}/ {p[T1|E] * p[E]/p[T1]} = {p[T2|E] /p[T2]}/ {p[T1|E] /p[T1]} h] All of this is fine as a matter of algebra applied to probability, but it confronts us with the issue that we have to find the outright credible real world probabilities of T1, T2; at least, we have eliminated p[E]. In some cases we can get that, in others, we cannot. [And thus the sort of objections we have seen in this and previous threads.] i] Now, by contrast the “elimination” approach looks at a credible chance hyp and the distribution across possible outcomes it would give, with a flat distribution as the default [e.g. why a 6 on a “fair” die is 1 in 6]; something we are often comfortable in doing. Then we look at, in the hyp testing case, the credible observability of the actual observed evidence in hand, and in many cases we see it is simply too extreme relative to such a chance hyp, as in the case of Caputo. j] So by the magic of seeing the sort of distribution in Caputo [cf. 68 above!] as a space containing the possible configurations, we then see that this is a particular case of searching a config space in which the individual outcomes are equiprobable -- but because they cluster in groups that are what we are interested in, the probabilities of the groups are not the same. [So, we are more likely to hit near the centre of the distribution on the dart board chart, than to hit the extreme to the right, which is about 1 in 52 billion of the curve. Indeed, in many real world cases, an upper extreme that is 5% of the curve is acceptable, or 1% if you are not comfortable with that; rarely in statistics do we see picking an extreme of 1 in 1,000. That should give us some real world context.]
k] So the consequence follows: when we can “simply” specify a cluster of outcomes of interest in a config space, and such a space is sufficiently large that a reasonable search will be unlikely, within available probabilistic/search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [E.g. the bacterial flagellum, or a flyable microjet in Appendix 1 in the always attached. Thus the telling force of Hoyle's celebrated tornado in a junkyard assembling a 747 by chance illustration.] --> Thus, we see a little more on why the Fisherian approach makes good sense even though it does not so neatly line up with the algebra of probability as would a likelihood or full Bayesian approach. Thence, we see why the explanatory filter can be so effective, too. GEM of TKI kairosfocus
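To put numbers on the algebra in a] through g], here is a minimal sketch; the two conditional probabilities and the priors below are made-up placeholders chosen only to exercise the formulas, not estimates defended anywhere in this thread:

from fractions import Fraction as F

# p_E_T1: a Caputo-like run under a "fair process" hypothesis T1
# p_E_T2: the same run under some biased-process hypothesis T2
# The priors are the contested quantity flagged at step e].
p_E_T1, p_E_T2 = F(42, 2**41), F(9, 1000)
prior_T1, prior_T2 = F(999, 1000), F(1, 1000)

p_E = p_E_T1 * prior_T1 + p_E_T2 * prior_T2          # total probability of E
post_T1 = p_E_T1 * prior_T1 / p_E                    # step d], Bayes' theorem
post_T2 = p_E_T2 * prior_T2 / p_E
LAMBDA = p_E_T2 / p_E_T1                             # step f], likelihood ratio

print(f"posterior of T1 = {float(post_T1):.2e}, posterior of T2 = {float(post_T2):.6f}")
print(f"likelihood ratio LAMBDA = {float(LAMBDA):.2e}")
# LAMBDA depends only on the two likelihoods; the posteriors also depend on the
# assumed priors, which is exactly the "where do we get p[T] from?" problem.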
Something else to consider: Is it possible for proteins to be formed into a flagellum? Sure, or there would not be flagella. But how? By chance? One way to test would be taking the appropriate proteins, putting them in proximity and just watching them. I guess gathering them together would be increasing the odds quite a bit and could be thought of as cheating but it would be a first step, right PO :-) How long do you think we'd have to watch before we get a flagellum? tribune7
Excellent summary, KF. tribune7
2] Caputo, one more time: Had Caputo done as advertised, he would by overwhelming probability have been near the 50-50 split on R's and D's. That he claimed to do so but ended up in such an unlikely-to-be-observed outcome is the surprising thing that triggered the investigation. On examining the credible distribution on the claimed “fair coin” model, the suspicion of an extremely unlikely-to-be-observed result was substantiated. On grounds that the highly complex and meaningful/ functional result should not normally have been seen, it was inferred that the likelier explanation was -- given the simple specification that the outcome served his obvious agenda -- that he was acting deliberately. Then, corroboration came from the reported fact that there was no observation of the actual selections. [We can presumably rest assured that on being forced to fix the process, the decades-long run to D's vanished like magic.] A little common sense saves us a lot of wading through technicalities that may do more to cloud than to clarify the issue. 3] Flagellum: Similarly, we know that the proposed mechanism is RM + NS, which is blind to future states and imposes the requirements of bio-functionality and differential reproductive success at each stage. The flagellum is a complex, fine-tuned whole comprising many parts in a self assembling, self-tuning actuator as part of the food-finding mechanism of the relevant bacteria. --> A lot of bacteria etc get along without it, so it is not necessary to survival. --> It is complex [40 to 50 or so proteins, maybe up to 45,000 base pairs in DNA] and requires co-adapted components, so a partial assembly would not work and so would not be selected for in the population. That leaves co-option with some final novelties, but that faces the gap of the many unique proteins [whatever the debate on the numbers of such]. --> The only seriously proposed co-option is of a mechanism that turns out to be [a] reportedly based on a subset of the flagellar genes (part of the self-assembly mechanism it seems), and [b] is functionally dependent on the existence of “later” populations and types of cells, i.e. eukaryotes. Namely, the TTS, which is also [c] a long way from a flagellum. --> On the random mutation assumption/model, and relative to Behe's observed edge of evolution, the chance hyp is so unlikely that we can immediately discard it, absent empirical demonstration, which after 150 years is not forthcoming at this, body plan, level. That leaves agency on the table as the best explanation. 4] PO, 355: Now, Caputo could have “cheated by chance” for example by spinning a roulette wheel and only chose R when unlucky 13 came up. We have Caputo's testimony that he used a selection process that if actually used would have been fair. An intentionally biased selection process that may have in it a chance element leading to deception in the Courtroom is, of course: DESIGN. (So would be a sloppy method that at first unintentionally created runs [e.g. the capsules were layered and not shuffled enough], which was then stuck with instead of being debugged and fixed.) 5] PO, 356: When Dr D analyzes the Caputo case, he starts by ruling out all chance hypotheses except p=1/2. When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred Cf just above for why; it is not a mystery or a mistake, and the relevant WD document has been available since 1996, cf link above and again here. GEM of TKI kairosfocus
All: The thread that will not die indeed. A few points, pardon selectiveness, being summary and thematic [save for no 1] rather than detailed on points – insomnia can carry one only so far: 1] PaV, on Likelihood etc: While there are distinctions between the two relevant schools, for our purposes [Caputo etc] it seems they speak with more or less one voice, nuances and technicalities aside. Wiki, that ever handy 101 reference, gives a first rough-cut slice on the idea of “Likelihood”: . . . consider a model which gives the probability density function of observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function L(θ | x) = P(X=x | θ) is a likelihood function of θ: it gives a measure of how "likely" any particular value of θ is, if we know that X has the value x. Two likelihood functions are equivalent if one is a scalar multiple of the other . . . Boiling down, via PaVian simmering [perhaps for “onlookers” wanting a simplified summary of the “simple” presentation above!], the idea here is that we have some alleged random variable that takes an observed value x from a set of possible values X. We then wish to get at how likely a given value of θ is, given the observation of the value x. The likelihood of θ on observing x is the conditional probability P(X=x | θ), which then gets us into the terror-fitted depths of Bayes' theorem and its practical applications. BT is of course: P[A|B] = (P[B|A]*P[A])/P[B], with the reversed conditional probability P[B|A] being in this case “the likelihood of A given B.” (A conditional probability P[A|B] is in effect the ratio of the prob of A and B happening jointly, to the prob of B, or, the prob of A given the condition that B, another event of more or less probability, has occurred. The rest falls out algebraically, once we make that basic substitution.) We then read the BT eqn just now as: posterior probability is likelihood times prior probability, all divided by a normalising constant. Problems start to come in with the “need” to know p[A] and p[B] directly – hence part of why prof PO was talking about such priors. The contrast is that on elimination approaches, we are in effect saying that [a] here is a credible distribution on a variable, on the relevant chance hyp. Then, we sample/observe real-world cases, and in such a case, where we have a sharp peak holding most of the distribution, extreme cases are unlikely to be met with IF the chance hyp holds, cf. my illustration in 68 above, and the follow up on it. What is happening is that had Caputo done as he testified, he would not have been likely to see the 1 in 52 billion chance outcome. So it is most reasonable that he did not do as he declared. And since in such a contingent situation natural regularity is not material, we see that agency and intent better explain the outcome than chance. This is of course the typical way in which most statistical inference is done in science and management decision making, and even as brought out in the courtroom. It is harder to justify theoretically, but has such a track record of empirical success that it is generally regarded as sufficiently reliable to be used. Going back to the underlying issue of why the EF works, we can see that we know that things are caused through chance, necessity and/or agency. In situations where multiple valued outcomes are reasonable [e.g. on tossing a die], then we see that the effective choice is chance or agency.
Thence, we see that on taking chance as null, using a credible model of what chance can do, e.g. the Caputo coin-tossing model, or the stat default Laplacian equiprobable assumption, etc, we compare what we see to what we would expect from chance. If there is a sufficiently unlikely outcome, we, for excellent and reliable reasons, revert to agency as the explanation. We do it in day to day life all the time, and in science all the time. It also sends a loud and clear message on the most likely cause of the flagellum, etc. Therein lieth the rub. The debate here IMHCO comes up in the main because of selective hyper-skepticism triggered by the possible worldview implications, not because of some serious and substantial defect in what is being done. . . . kairosfocus
Darn html! ...values p less than 1/2. I see awful typos above, sorry about those. Anyway, the above was all about different chance hypotheses, each one corresponding to a value of p. When Dr D analyzes the Caputo case, he starts by ruling out all chance hypotheses except p=1/2. When p=1/2 is ruled out, there is no chance hypothesis left and design is inferred (but only in the sense that chance is ruled out; there is no alternative design hypothesis of how he cheated). If this can be done, by inspecting his equipment or taking his word for it (Capone-Bluto's word???), we can argue that our initial chance model was not correct and conclude that chance was not involved at all. So, the EF neither addeth nor taketh away from what a statistician would do. We're all in agreement here. I haven't seen the arguments the prosecution used in the original case but I'm sure they had statisticians on their payroll. But when it comes to applications to evolutionary biology, it is in my opinion impossible to form a design hypothesis. Elliott Sober claims that we must but I claim that we can't. Regardless of whether the data is the flagellum or Atom's fiancee, how would we compute its probability under a design hypothesis? I'm not "distancing myself" from Sober now as I have never held his position. I agree with Dr D insofar as it is not a logical error to reject a hypothesis without superseding it (moon-made-of-cheese which Wallace and Gromit worked hard at!). As for your calculations, let's say (for the sake of argument!) that you manage to rule out the uniform chance hypothesis (Caputo p=1/2). But how do you rule out other chance hypotheses (Caputo p=37/38)? Recall that these chance hypotheses would be formed by considering billions of years of evolution, not an easy task. I know we may perhaps talk past each other to some extent, but hopefully there is a little more understanding each time! Going from song quotes to movie quotes: "Die you b...."! Prof PO olofsson
PaV, If it doesn't die on its own, we'll beat it to death. Anyway, by likelihood we mean the probability of the observed data as a function of the parameter p, here L(p)=p^40*(1-p). Thus, with p=1/2, the likelihood is 1 in 50 billion. Now, Caputo could have "cheated by chance," for example by spinning a roulette wheel and only choosing R when unlucky 13 came up. Then the likelihood is L(37/38)=0.009, not that low anymore. The likelihood approach would now choose the value of p that maximizes the likelihood, which turns out to be p=40/41 (the so-called maximum likelihood estimator). So, now that I've tried to explain better what I mean by likelihood, you see that it is easy to find the hypothesis that confers the highest likelihood on the data. The hypothesis test only rules out H0:p=1/2 in favor of HA:p>1/2, thus a combination of elimination and likelihood. Note that it is reasonable to have HA; the data speaks against H0 but not in favor of values p less than 1/2. olofsson
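A few lines are enough to check these numbers under the same binomial chance model; as in PO's L(p), the constant binomial coefficient is dropped, since it cancels when hypotheses are compared:

# Likelihood of 40 D's and 1 R as a function of p, up to a constant factor.
def L(p):
    return p ** 40 * (1 - p)

for p in (1 / 2, 37 / 38, 40 / 41):
    print(f"p = {p:.4f}:  L(p) = {L(p):.3e}")
# p = 1/2 gives about 4.5e-13 (roughly 1 in 50 billion once the 41 possible
# positions of the single R are counted in), p = 37/38 gives about 0.009, and
# the maximum likelihood estimate p = 40/41 gives the largest value of all.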
P.O. Thank you for your response. It is elucidating. Let me put a few of your thoughts together and so work toward a question/statement. First, you say, "The likelihood approach picks the hypothesis that confers the highest probability (likelihood) on the data." And then you say, "If you read my article, you will see no mention of competing hypotheses, likelihood, or prior/posterior probabilities. I am entirely within the eliminative paradigm." Now, the method you used in computing the Caputo case isn't something, it seems to me, that is consistent with the "likelihood" approach. In fact, it seems to me that it is impossible to "confer the highest probability (likelihood) on the data" when you find yourself in the 'rejection regions'. What I mean is this: it would be easy to, for example, verify the probabilities associated with the method Caputo used when you are at, or near, the peak of the distribution: e.g., you could run 300 experiments, i.e., come up with 300 samples, and calculate the odds "over/[on] the data". I'm sure they would be close to the peak. Thus, this particular "likelihood" could be calculated "on the data" (I'm hoping I'm understanding this last phrase correctly here). But what about the example of the Caputo case itself, where, if we are to believe Caputo, his method turned up 40 D's and 1 R? The odds are 1 in 50 billion of that happening. How many "samples" would have to be run to come up with even "one" such instance of 40 D's and 1 R? Theoretically, 50 billion. It seems to me, then, that this "likelihood" would be very difficult (really, it would be impossible) to calculate "on the data". Perhaps you've already sensed this inadequacy, and, for that reason, now distance yourself from Sober. Having said that, though, it also seems to me that the kind of calculation I proposed in my last post should satisfy, if you do indeed find yourself "entirely within the eliminative paradigm", your misgivings about the "elimination" method that WD employs. I'm wondering about your reaction to what I propose. Could you comment? (And, indeed, this really IS the "thread that won't die"!) PaV
PaV, Yes, there are three major approaches: elimination, likelihood, and Bayesian. For "intelligent design inference," which we are discussing here, only elimination is at all possible. The other approaches require us to compute the probability of data under each hypothesis considered. The likelihood approach picks the hypothesis that confers the highest probability (likelihood) on the data. Bayesian analysis, in addition, assigns prior probabilities to the various hypotheses and then computes the posterior probabilities once data are observed (thus, only the Bayesian approach lets us talk about how probable the hypotheses themselves are). Clearly, there is no way of doing either, so for the point of "elimination vs comparison" the distinction likelihood/Bayes is not material (as Dr D also says in his "chapter 33"). As I said, I don't criticize from the same vantage point as Sober (I have read his criticism, not just Dr D's account of it(!), and discussed it with him). If you read my article, you will see no mention of competing hypotheses, likelihood, or prior/posterior probabilities. I am entirely within the eliminative paradigm. By the way, in theoretical and applied statistics, there is no pure eliminative ("Fisherian") approach, but a combination with the likelihood approach due to Neyman and Pearson. The Bayesian approach is gaining ground; as it tends to be computationally heavy, it was not feasible a few decades ago. These days it is used in email spam filters, Google's search engine, in clinical trials, and, on occasion, in court cases. Thus spake Prof PO. olofsson
Since it's time, apparently, for final notes, here's this one. In looking through Dembski's NFL, in section 2.9 he discusses Sober's criticisms of WD's Design Inference. Sober, who P.O. mentions right from the start, uses what he, Sober, terms a "likelihood approach" to statistics. What is done is that any hypothesis that can be formed is considered a "chance hypothesis" (even one that says something is designed) and then the probabilities that these "chance hypotheses" develop are compared and an inference is made as to the best explanation. So that is why, it appears, the good professor refused to be described as a Bayesian, although Dembski's reasons for rejecting Sober's approach are much the same as those for rejecting the strictly Bayesian approach. This also explains the good professor's insistence on wanting to know the probabilities associated with the bacterial flagellum. They can be computed in the Caputo case; but not with the flagellum. Nonetheless, I think the analysis I presented certainly begins to get to the heart of any such probabilities. It strikes me that if one were to calculate the total number of proteins that exist at any one moment in time---those present in every cell of every creature that exists---then one could take this number of total proteins in existence and divide it by the 50 or so proteins that make up the flagellum, and then 'assert' that the number so calculated represents the total number of, in WD terminology, replicational opportunities for the flagellar proteins to exist. Then one would divide this rather large number by the probability space generated by each of the proteins in the flagellum multiplied together---which would end up beyond imagination. This would be the realistic approach. But the ultimately conservative approach is to simply divide the above calculated number of proteins that exist by the probability space of just ONE 300-residue protein, and, I'm confident, we would be well above the UPB. PaV
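As a rough sketch of that arithmetic: the count of protein molecules in existence below is a placeholder assumption, since no figure is given above, and the 1-in-10^150 universal probability bound is the figure from Dembski's published work:

from math import log10

residues = 300
sequence_space = 20 ** residues          # 20 amino acids per position
protein_molecules = 10 ** 40             # placeholder "replicational resources"
upb_exponent = 150                       # universal probability bound, 1 in 10^150

remaining = log10(sequence_space) - log10(protein_molecules)
print(f"sequence space of one {residues}-residue protein: ~10^{log10(sequence_space):.0f}")
print(f"odds after crediting 10^40 opportunities: ~1 in 10^{remaining:.0f}")
print(f"still beyond the universal probability bound? {remaining > upb_exponent}")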
PO -- One last thought for a near-dead thread. Increasing the rejection region to a certain degree seems to require us to step out of the realm of what can be revealed by mathematics, into the realm of philosophy -- i.e. "why should a bacterium develop a flagellum to move?" which should lead us to "why should a bacterium develop at all?" Why not something else to fill the bacterial niche? And why should man be the only creature with the ability to create? Why didn't super-intelligent asexually reproducing beings evolve, or egg-layers with strong exo-skeletons? And most significantly, why does anything -- mathematically speaking -- have to be? It seems the natural state of the universe is heat death -- toward which the 2nd Law of Thermodynamics says we are inevitably headed. tribune7
Atom: Amen! God bless you both! GEM of TKI kairosfocus
An observation about the mod policy -- near the beginning of the thread PO stated that ID was not creationism, and maybe that provided a clue he was here to debate in good faith. When I first glanced at his paper I thought for sure he was just going to be another name-calling troll who refuses to argue on the merits. Anyway, he turned out to be a great addition and was the prime force behind a great and history making thread. PO, I hope you take KF's points about style to heart. Your paper would have stood (or not ;-) ) on its merits without a mention of creationist or creationism. tribune7
GEM, Thanks for your recap...it helped me to understand what some of the "small" issues really were in your discussion. I think overall, everyone is still friends and the thread has been very educational. I know it has forced me to think on some issues and clarified my thinking as well. Thank you for the compliments on Luz, she is a light in this dark world. The countdown stands at roughly 26 days, which in my opinion is 26 days too many. :) "Beauty is fleeting and charm is deceptive, but a woman who fears the L-RD is to be praised." - Though she has the first two, it is the latter that made me want to keep her forever. And BTW, I shared with her all the nice comments you guys made about her, and her reaction? Quickly, with a laugh: "Post more pictures!!! LOL." I love my baby. She appreciates your gentlemanly compliments. Atom
8] PO, 344: Dr D's paper on elimination vs comparison presents the Bayesian arguments (from page 6 onward, nothing on the referenced page 4!) . . . In the above-excerpted 2005 paper, WD begins on p. 1 by identifying the issue of Fisherian vs Bayesian inference, and addresses all critiques in that context, pointing out that the EF is a way to formalise and undergird what was already implicit in the Fisherian approach, as the excerpt above from p. 4 already notes. Thus, the underlying context of the discussion is Fisher vs Bayes on the issue of inference by elimination, with Elliott Sober as the leading critic, arguing on a Bayesian premise. The discussion on p. 4 sits in that underlying context, and given that, as I have noted, PO begins his paper's discussion by introducing Sober by name [without citing the other side for balance] and then proceeds to introduce Caputo in that general context of the Bayesian side of the debate, the inference that he is using a Bayesian critique is quite natural. Indeed, the "biased vs fair coin" model as a Bayesian view on a case similar to Caputo is explicitly addressed on p. 2 and bridges into the issue of probabilistic resources, and it is in that context that the expansion of rejection regions is raised on p. 4, as the connecting words and sequence will show at once. Then, on p. 5 he broadens the issue, arguing that specification leads to the inference that a specified and extremely improbable outcome is most likely intentional, not accidental, and as he leads into p. 6 he notes how Bayesians wish to block this "slippery slope" to a design inference by insisting that one must produce a comparative hypothesis that specifically has better evidence for it before one may infer to design. But of course this leads to the problem of evaluating prior hypotheses and undermines the whole process of inference. Indeed, WD says that, in the end, one adverts to Bayesian inference in contexts where the very improbability of the occurrence is what alerts you to the need to account for what has happened, i.e. an implicit, often intuitive, Fisherian-style inference, and more; cf. WD for details. (In effect we can thus see, on the "natural interpretation" model, that PO, 2007 was discussing p[Caputo|Fair coin] vs p[Caputo|biased coin], and his dismissal of WD's use of the Court's note that the claimed selection process was fair is, on a first look, an implicit insistence on the comparative rather than the eliminative approach. But of course too, as I have noted, in the Caputo case such a strong run to D sustained over decades -- i.e. even with an initially inadvertently biased coin -- soon becomes design by self-serving negligence.) Now, too, on his explicitly announcing that he was not a Bayesian, I accepted that claim, and specified to prof PO that my point in the main was, and BTW is, that the question is on the substance of the critique [which is in a Bayesian context . . .] and that the arbitrary expansion of the RR without reference to the issue of probabilistic resources -- as I have again cited -- is the issue that has to be answered. Relative to that, given that reference to the academic debate starts on p. 1 [specific discussion of Bayesian claims on p. 6 onward notwithstanding], this latest claim above is, sadly, simply yet another distraction with unfortunate and unnecessary ad hominem overtones. 9] PO, 344: my claim has been that we cannot just consider the flagellum (Dr D's E) but must consider it as an outcome in a set of many possible outcomes (Dr D's E*). 
I don't know how to do this, and do not believe that it can be done satisfactorily. Again, specification is, as WD pointed out in both his 2005 papers referenced in this thread, far broader than RRs relative to statistical inferences on probability distributions. In particular, functionally specified, complex information is a valid type of specification, and one that we routinely infer to as a sign of agency -- we do not believe the posts in this thread are simply lucky noise, absent demonstrative proof otherwise. For in context we know that agents are possible and that they routinely create FSCI. So, on encountering FSCI, we infer to agents. [In effect this surfaces the underlying worldview-level question-begging that too often lies under objections on the flagellum etc., i.e. a ruling out -- on no evidence! -- that agents could have been active at the relevant time. But, if we accept the possibility of agents, and then observe the significance of the observed FSCI, we can easily see that this now provides actual empirical evidence of agent action at the time and place in question.] 10] PO, 344: . . . I point out you had a problem with my mentioning it, whereas Dr D has encouraged his followers to perform their own sokalesque hoaxes and even get paid for it. At least you know the "right" beer to choose! [Though I am not a beer drinker.] Checking my email . . . Nope, not in the inbox, nor in the bulk box. Try sending again. On the main issue, I think that WD is probably not advocating that people misrepresent the relevant technical issues to an experimental, non-peer-reviewed journal in which one is being trusted to play above board. This last is what Sokal did. [I have no objection in principle to playing devil's advocate or spoofing to make the point that a peer or editorial review process is manifestly improper, especially when, on a track record of unfairness, straight submissions have a probability of being published negligibly different from zero.] 11] Banning policy: Having seen and been a victim of the sort of abuse that often takes over blog threads on this general topic, I sympathise with a strong policy on abuse and willful obtuseness or mere empty regurgitation of a party line. In some cases I think WD has gone overboard, and judging by a recent reversal of a ban, he agrees with me too. [NB: I note here that, even through our strong disagreements, I miss Pixie. Don't know why he was pulled.] GEM of TKI kairosfocus
6] PO, 344: I am sorry we had to spend so much time on Caputo. If I had known, I'd have chosen another example, believe me! I think most of you understand that I am not using it to criticize the filter, quite the opposite . . . Mr Kf got stuck on "expanding the rejection region" and repeats it to this day despite many attempts by me and others to explain how I used the Caputo example. H'mm, let's recap again: the article begins with an unfortunately loaded term -- Creationists -- and an inappropriate example, Hoyle's 747 in a junkyard; in effect simply dismissing the implied issues of the statistics of getting to extremely improbable and functional configurations by chance and necessity without agency. It then proceeds to a one-sided summary of the literature and issues, and the discussion of the Caputo case runs like this, in key part:
. . . In contrast [to the EF approach], a statistical hypothesis test of the data would typically start by making a few assumptions, thus establishing a model. If presented with Caputo's sequence and asked whether it is likely to have been produced by a fair drawing procedure, a statistician [in context, as opposed to a design thinker, and omitting reference to WD's relevant qualifications] would first assume that the sequence was obtained by each time independently choosing D or R, such that D has an unknown probability p and R has probability 1 - p. The statistician would then form the null hypothesis that p = 1/2, which is the hypothesis of fairness. In this case, Caputo would be suspected of cheating in favor of Democrats, so the alternative hypothesis would be that p > 1/2 [in context dismissing the on-the-record-since-1996 WD point that the Court, on Caputo's own testimony, accepted that the claimed selection process, if actually used, would have been fair], indicating that Ds were more likely to be chosen. [2007, p. 7.]
NB: he then infers to the rejection of the p = 1/2 hyp, and holds [dismissing, and indeed in context criticising design thinkers for adverting to, the actual context of a claimed fair selection process at work, as documented by WD since 1996] that only the inference to p > 1/2 is warranted. BTW, this also underscores the point that PO is here plainly critiquing the use of the EF in this case, contrary to what he has said above; cf. my comments in 20 - 21 on, and in 154 etc. That sets a very different context than we would pick up from PO, 344, for evaluating:
It is important to note that it is the probability of the rejection region, not of the individual outcome, that warrants rejection of a hypothesis. A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance. [2007, p. 7 again.]
Now, of course, the first sentence here excerpted is in effect what WD said in defining E* as the upper extremum from 1 R/40 D on, a region comprising roughly a 1-in-50-billion slice of the distribution at its extreme; precisely the basic approach of Fisher in rejecting the null hyp that a given sample came from a chance population. IMHCO -- and pardon my turnaround of the rhetorical devices above to make the next point [I am illustrating how the rhetoric works, not making a personal attack] -- no "statistician" who properly understands the point that a relatively small sample of a population is unlikely to lie, in whole or in part, at its extreme would then glide straight into the second sentence. For to suggest, in effect, that anyone with even basic exposure to inferential statistics could view a sample falling in a proposed "rejection region" encompassing 38% of the curve -- i.e. odds of nearly 2 in 5 -- as credible evidence that the sample is not from the relevant claimed distribution, is to set up a strawman. Far better would have been to directly address the point that WD makes on p. 4 of his 2005 paper on Fisher vs Bayes: that while some critics [coming from Bayesian approaches] raise the issue of arbitrary expansion of the RR,
what’s to prevent . . . [so expanding the RR] that any sample will always fall in some one of these rejection regions and therefore count as evidence against any chance hypothesis whatsoever? The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice). Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity [in context, they are of sufficiently low probability to be beyond the reasonable reach of the available probabilistic resources]. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity . . . [NB too how specification is broader than RRs]
7] PO, 344: Mr Kf repeated over and over that my criticism was Bayesian . . . My consistent observation has been that prof PO first cited Mr Sober as if he were the final word on the subject, and then in addressing Caputo used the above-cited criticism, which is, on the evidence of WD's 2005 paper [cf. below!], a criticism of WD's reasoning made by Bayesians. Of course, the actual material issue is that whoever has put the criticism, and from whatever background, it is invalid for the reasons I have in brief part excerpted above from WD, 2005. This too I have underscored, and it seems that prof PO in the end must agree on the merits, or he would have addressed me on the merits instead of as noted above. . . . kairosfocus
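For onlookers who want to check the two Caputo figures traded back and forth above (the roughly 38% tail for 22 or more Ds, and the roughly 1-in-50-billion tail for 40 or more Ds out of 41), here is a minimal Python sketch computed directly from the fair-draw (p = 1/2) model described in the excerpt:

from math import comb

n = 41                       # 41 ballot-position selections
denom = 2 ** n               # 2^41 equally likely D/R sequences under the fair-draw model

def upper_tail(k_min):
    # P(at least k_min Ds out of n) when each pick is D with probability 1/2
    return sum(comb(n, k) for k in range(k_min, n + 1)) / denom

print(upper_tail(22))        # ~0.38, the "expanded" 38% region
print(upper_tail(40))        # ~1.9e-11, i.e. about 1 in 52 billion (the E* region)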
Hi PaV, Prof PO, Jerry, Atom et al: The thread that refuses to die . . . 1] PaV, 340: As noted, your material point stands, minor errors and slips of memory notwithstanding. BTW, recall that, given the role of enzymes, there are dozens and dozens of those DNA-coded proteins involved in the cell's processes! [Indeed, as TBO's TMLO summarises, Hoyle and Wickramasinghe's calculation of odds of 1 in 10^40,000 against forming life based on cells by chance had to do with the odds against getting to the required cluster of enzymes for life.] On my response to Prof PO, that is not so much driven by pique as by analysis of the rhetorical pattern he has used, starting with my responses in 20 - 21 to his link in 19. Sadly, he has not acknowledged the problems that stem from that pattern of loaded language, biased summary/survey of the literature, failure to cite and engage WD on what he said on the record long since, and more. It comes out in his analysis of Caputo, and in his onward handling of the flagellum. Then, even more unfortunately, it comes out in his handling of issues in this thread. I hope -- and daresay, pray -- that the experience will help him in the longer term as he reflects on how he has reasoned and argued. 2] Jerry, 341: Why should the calculations be restricted to 20 amino acids when there are 39 alternatives given the right hand and left hand versions. One, glycine, has no handedness. Fair point. On origin of life [OOL], the question of the other handedness and the many non-protein-forming amino acids comes in, but within life systems the situation is in effect confined to the 20, with a few oddball exceptions. 3] J, 341: the formation of proteins as a chain of amino acids must have come about only after the information was assembled to construct them. In other words there isn't any chicken or egg issue here. It couldn't have been just polymers lying around, ready to be selected. They must have been prescribed or specified because the chances of the self assembly of a single strand of 100 of the same chirality are also way past the UPB, let alone a whole suite of them. This directly follows on OOL, and you may find TBO's discussion, as onward linked through my always linked, interesting. 4] J, 341: I find it rather interesting how one could just say he believes this could happen by chance without addressing these issues. It does rival the belief in resurrection as faith based. If you look at the parallel current [6th August] thread on Image of Pots and Kettles, you will see that in my exchange with Prof Carl Sachs there, I argue that reason and faith are inextricably intertwined in the core of all worldviews and thus of scientific research programmes, which are deeply embedded with worldview commitments. 5] Atom, 343: GEM, I agree with PaV in that you seem to have become cross with PO and vice versa, but I thank you both nonetheless for your contributions. . . . As noted above to PaV, not so much cross as saddened and concerned. I do confess that the series of assertions in 305 and 310 came across as unnecessary, ad-hominem-ish, atmosphere-clouding and fight-picking; thus, quite irritating. I responded, accordingly, with a few balancing remarks. The primary intent of those remarks was to highlight how the sort of comments in 305 and 310 can be persuasive without being actually cogent and sound on the issues. I hope I was not too annoyed in how I responded; if that was so, I am sorry for that. BTW, all the very best as your days of bachelorhood count down! 
[On the evidence, you have chosen well indeed – there is a radiance there that more than lives up to that apt name, “light.” And though from moment to moment there will doubtless be times of challenge ahead, on balance the exchange of those ever so sobering vows is more than worth it! (Speaking from nearly 17 years on the other side of such a vow.)] . . . kairosfocus
Hello yall, I guess we're closing the thread, which is probably about time. I am responsible for lengthening it by referring to Behe's Edge in passing when I really only came here to discuss the EF. In Dr D's original post, he notes that Jeff Shallit, arguably the most qualified mathematician on the list, has no expertise in probability. Thus, Dr D thinks that such expertise is necessary to understand his writings. Two points: (1) nobody on Dr D's "pro list" has such expertise either and (2) I do, and I have criticised the filter but was not mentioned on the "con list." So I introduced myself and posted a link to my article, and then it took off to an apparently record-breaking session. Thanks to all for being interested in a factual and respectful debate (I never heard back from Michaels7, [191, 197]). I am sorry we had to spend so much time on Caputo. If I had known, I'd have chosen another example, believe me! I think most of you understand that I am not using it to criticize the filter, quite the opposite. If I had used it as criticism, I would have said: "Here is how the Caputo example can be used to argue against the EF..." and presented my arguments. Unfortunately, Mr Kf got stuck on "expanding the rejection region" and repeats it to this day despite many attempts by me and others to explain how I used the Caputo example. We also spent much time discussing Bayesianism, which is more interesting than Caputo although we perhaps discussed it for the wrong reasons. Again, if I actually wanted to criticize the EF from the Bayesian point of view, I would have said so. Mr Kf repeated over and over that my criticism was Bayesian (and PaV got on that wagon for a while although I hope I set him straight!), which I don't find very constructive. The basis for his criticism seems to be connected to Caputo and the supposed "expansion." Dr D's paper on elimination vs comparison presents the Bayesian arguments (from page 6 onward, nothing on the referenced page 4!) so everybody can read for themselves and see if they find any such arguments in my article. As I have said many times, feel free to contact me directly if you have questions. As for my promised reply to PaV and Atom, I have not forgotten but I don't want to go into details. My two main problems with the EF are (a) how do we determine the rejection region ("specification") and (b) how do we rule out chance hypotheses other than the uniform? We have mostly spent time on (a), and my claim has been that we cannot just consider the flagellum (Dr D's E) but must consider it as an outcome in a set of many possible outcomes (Dr D's E*). I don't know how to do this, and do not believe that it can be done satisfactorily. Anybody can try though, and why not write a real scholarly article on it and submit it for publication? I know there is a presumption that ID-friendly articles will not be published by the regular academic press, but there are other outlets you can use. Anything must be judged on its own merits. Finally, Mr Kairosfocus, I harbor no hard feelings. As for my post 305, the smiley didn't register due to a double parenthesis. It's there in 310 though. I say these things as if I were slapping you on the back whilst we're clinking bottles of Red Stripe. [On a side note, I sent you an email regarding Sokal's Hoax in which I point out that you had a problem with my mentioning it, whereas Dr D has encouraged his followers to perform their own sokalesque hoaxes and even get paid for it. Just wonder what you think, that's all.] 
With regard to Atom's last note, I am sorry to hear that people are bounced so easily. I decided to come here for a more interesting, although less sympathetic, exchange than I would get at an anti-ID blog. It has been a good experience. And I have finally seen one piece of evidence of intelligent design: Atom's fiancee! :D Cheers yall, Prof PO olofsson
A few thoughts on this thread: First, this has been my favorite thread at UD. Some have come close, but this one is the most fun and informative. Second, I think the mod policy should be relaxed a tad, to allow threads like this to happen more often. It is no surprise that this thread contains comments by someone who openly questions some aspects of ID. That is where the interesting turns come from. Usually people will get bounced for (what seems in my eyes) simply disagreeing and remaining obstinate in their disagreement. As long as they are not insulting IDers or saying "ID=Religion" (which at this point in the debate can only be due to negligence) I think we should allow them to stay. As PO has shown, the payoff is more interesting threads. Lastly, thanks Kairosfocus, PO, PaV, Tribune7 and the others. You've made it fun. GEM, I agree with PaV in that you seem to have become cross with PO and vice versa, but I thank you both nonetheless for your contributions. You are always a source of information and a definite asset to UD. Atom
I meant to say "the self assembly of a single strand of 300 of the same chirality is also way past the UPB, let alone a whole suite of them." The improbability of a strand of 100 polymers of the same chirality is also extremely high, but not past the UPB. jerry
I have a couple of side issues. Why should the calculations be restricted to 20 amino acids when there are 39 alternatives, given the right-handed and left-handed versions? (One, glycine, has no handedness.) There are also several non-proteinogenic amino acids. Thus, any discussion of amino acids would have to factor these into the calculations. By limiting the calculations to left-handed amino acids we should recognize that the calculations are extremely conservative. There is no known necessity to limit polymers to one handedness or the other, or to eliminate all of the non-proteinogenic amino acids. Thus, the formation of proteins as a chain of amino acids must have come about only after the information was assembled to construct them. In other words there isn't any chicken or egg issue here. It couldn't have been just polymers lying around, ready to be selected. They must have been prescribed or specified, because the chances of the self assembly of a single strand of 100 of the same chirality are also way past the UPB, let alone a whole suite of them. So what the issue is about here is the origin of the instructions and machinery to construct proteins of a unique capability. And by the way, this machinery to construct proteins (made of RNA) requires proteins for its construction. Which is curious: where did these proteins originate? So to believe in the whole process of life as it works today, one has to hypothesize some unknown, incredibly complicated other form of life that preceded it. If this form of life existed, it had to have a different methodology for constructing proteins. And then why should this incredibly complicated set of life systems evolve into what we have today? This non-protein system would have had to randomly construct all the proteins necessary to replace the machinery it already had, in order to make the new system of RNA needed to make the proteins that we see today. (Remember, mRNA and tRNA are both constructed by proteins, and these could not have existed in the original system.) It sounds convoluted even to say what I mean, let alone for it to actually self-assemble by chance. I find it rather interesting how one could just say he believes this could happen by chance without addressing these issues. It does rival the belief in resurrection as faith-based. I would just wish they would admit it. It defies reason when one chooses chance, because there is no reason for it other than blind faith. jerry
kairosfocus: Thanks for having cleaned up some of my mistakes along the way. My memory, alas, is not very good, and I can make incidental errors here and there. Along these lines: (1) I've said a number of times in this thread "Upper Probability Bound" for UPB; of course, UPB is "Universal Probability Bound". (2) I've--again, from memory--used 10^180 for the UPB, when, in fact, Dembski has the figure of 10^150. (Interestingly, I sort of stumbled upon the UPB using Planck time, having forgotten---it's been two years since I've read about this stuff---that that is in fact how WD calculated it. Just saw this this morning.) kairosfocus, you're usually quite gracious (more so than I), but it seems the good professor has rubbed you the wrong way. What I have appreciated about P.O. has been his tone: neither dismissive nor overly dogmatic (you might disagree on this last point). Anyway, thanks for correcting as we went along the way. I honestly believe that post [313] adequately addresses the main issue that P.O. was raising, and succeeds in reasserting that the UPB is exceeded (even using 20 a.a. instead of 22) by a simple 300-residue protein. I suspect the good professor is chewing on that right now. PaV
All: I see the thread continues. And, while I would have loved to be able to simply chime in with Trib and PaV just above [leaving the thread to stand on its merits], Prof PO -- while his time of interaction here is appreciated -- has, by making some unfortunate, unwarranted and atmosphere-clouding remarks overnight [cf. 305 and 310], left me little alternative but to make a few balancing remarks on his consistent rhetorical tactics. This will also underscore the original point. Also, a few remarks on the points on the merits will be useful: 1] On rhetoric, the art of persuasion, not analysis Onlookers will see that from his original linked article at 19 above, Prof PO begins with the term "creationist," prejudicing the mind of his likely audience: A classic creationist argument against Darwinian evolution is that it is as likely as a tornado in a junkyard creating a Boeing 747. In fact, the argument originates with the distinguished astronomer Sir Fred Hoyle [hardly a Creationist!], is rooted in the underlying statistical thermodynamics of the generation of bio-functional molecules by the known random forces at work, and stands unanswered on the merits to this day, nearly thirty years later -- including from prof PO. Then, sadly, the rest of his introductory remarks run downhill, as I noted in 20 - 21 above, and since: biased summary of the literature, citation of experts on one side of a disagreement as though that were the be-all and end-all [especially of Mr Sober], and so on. Of particular note among these was his handling of the Caputo case, using the approach last analysed with appropriate excerpts in 179 - 182 above. In particular, on reading PO [cf. 180], one would not realise that WD has been on the record since 1996 on the issue of an inadvertently biased selection process: the court held that, from the outset, on Mr C's testimony, the process he claimed to be using was fair; the serious question being whether he used the claimed process. Nor would one see that WD used the issue of exhaustion of available probabilistic resources in his reasoning as to why a 1-in-50-bn chance that fits a simply describable pattern [cf. p. 4 in WD's 2005 paper on Fisher and Bayes, also excerpted in 180] is well warranted as a basis for inferring to design, and why it is qualitatively different from PO's arbitrary choice of an expanded rejection region that would enclose 38% of the curve. Now, overnight, we see Prof PO unfortunately again mischaracterising my argument [which he has never cogently responded to on the merits] and my person, dismissing what he has not answered, and then exhorting others not to follow that "bad example." (Sadly, such strawman and ad hominem rhetoric is precisely reminiscent of the approach used from the outset in the critique paper linked at 19 with WD, Behe and even the Creationists. Dembski's complaint in his original post is well warranted, and Prof PO -- sadly -- gives a further instance of why.) Let us turn to happier matters of substance . . . 2] PaV, 308: 20 or 22 amino acids While there are some "oddball" cases, the vast majority of proteins are made up from a set of 20 acids. So also, as that humble source Wiki notes: The sequence of amino acids in a protein is defined by a gene and encoded in the genetic code. Although this genetic code specifies 20 "standard" amino acids, the residues in a protein are often chemically altered in post-translational modification: either before the protein can function in the cell, or as part of control mechanisms. 
Proteins can also work together to achieve a particular function, and they often associate to form stable complexes. And, in discussing protein structure, Wiki notes: the current estimate for the average protein length is around 300 residues . . . Thus, while a calculation relative to 22 acids and 150 monomers is okay, a calculation relative to 300-length chains with 20 acids is conservative [cf. supra]; and it also underestimates the known complexity, given the modifications that such monomers may undergo subsequent to chaining. The material point remains as PaV has it. 3] DS, 311: look around you at all the manmade objects. Whether you know what the function of each is for or not doesn't make much difference in knowing that it's manmade because there are no known undirected physical processes that could have assembled it. Of course, this underscores that specifications by pattern-matching are wider than functional specifications. We can recognise CSI and assess its likely origin even without knowing the function. Right back to the tornado in a junkyard, or my small-scale version as linked [which is at a scale where molecular agitation is the driving force for spontaneous change]. In my always linked, I have focussed on the subset of functionally specified complex information [FSCI] that works in information-based systems, because that is IMHCO the clearest case in point. The silence that has usually greeted that focus tells me that there is no serious answer on the merits to it. Dave is also right to note that "[t]heoretically possible and practically possible are two quite different things." That is, we cannot reasonably revert to "lucky noise" when that would exhaust the available probabilistic resources and would be beyond the edge of chance. Theoretically and empirically, both materialistic abiogenesis and NDT-based macroevolution at body-plan level [the flagellum being a case in point] are far, far beyond the edge of evolution. [BTW, though the flagellum is an example of IC, it is also an example of CSI, and WD made a calculation that the odds on its formation by chance are something like . . .] 4] DS, 311: Is there any possible means other than chance or agency? It seems to me and many others that this is a true dichotomy - if it isn't chance it is design. From Plato on there has been a trichotomy: necessity, chance, agency. But in situations where the system comprises variable-state components and assembly options -- e.g. a discrete-state chain of "characters" that stores information, like DNA -- necessity is plainly ruled out as the predominant cause, leaving DS's dichotomy of material causal forces. And, we know directly that agency is capable of creating FSCI. 5] Back-and-forth on triple mutations, with genetic entropy and Haldane's dilemma lurking. Interesting . . . Okay, trust the above is helpful as a balancing contribution. GEM of TKI kairosfocus
Prof. Olofsson, you've been most gracious. Thank you for your time here. Ditto that. tribune7
Prof. Olofsson, you've been most gracious. Thank you for your time here. PaV
Atom [327], Probably. I meant that if each base pair has a probability of 10^-9, then any set of three specific base pairs (that one, that one, and that one) has probability 10^-27 to all mutate. I'm vague on the correct terminology because, tribune7, I am no biologist. ;) olofsson
Patrick [324], I have no problems with the parallel malaria parasite/human. Thanks still for making a clarifying statement. I know a lot of biologists study yeast as a model system for humans. olofsson
tribune7 [323], You seem to be saying because we can’t refute him, we should assume he’s wrong No, I'm saying that his argument is hard to refute due to its inner logic. olofsson
I mentioned earlier that Haldane's dilemma and Nachman's U-Paradox are arguments against Darwinian evolution independent of using the EF. Even on the generous assumption that Natural Selection could in principle create specified complexity, we can assess the population and mutational resources required to make this amazing feat possible. There simply aren't enough population resources, and there are too many opportunities for bad things to happen. In sum, there is too much Genetic Entropy. scordova
which means that each person harbors about three new deleterious mutations.
Actually, humans can't afford to harbor much more than about 3, because 3 would already imply that human females need to give birth to about 40 offspring just to sustain the clean-up. The number could be much higher, but 3 is roughly the maximum tolerable rate of bad mutations the human race can sustain and still expect to live over geological timescales. I'm so glad Patrick mentioned this. We talked about it at UD. See: Nachman's U-Paradox. scordova
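For what it's worth, here is a minimal sketch of the arithmetic that appears to lie behind the "40 offspring" figure, on the standard assumption (not spelled out in the comment) that with U new deleterious mutations per genome per generation, and independent (multiplicative) selection against them, each female must produce on the order of 2*e^U offspring just for the population to hold its ground:

import math

U = 3.0                                # assumed deleterious mutation rate per genome per generation
offspring_needed = 2 * math.exp(U)     # factor of 2 because only females bear the offspring
print(round(offspring_needed))         # ~40, matching the figure cited above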
I just reread [320] and [322]. I think I see what you're saying, Patrick; P.O. might have misunderstood your U rate information. PaV
Atom: Whoever has the first chance to explain can do it. I'll be plenty busy over the next 24 hours. PaV
...the genomic deleterious mutation rate (U) is at least 3... I don’t see how that information could ever be used against Behe’s premise in EoE.
I guess "One man's garbage..." :) Atom
Sorry, the confusion is my fault since I was referencing Nachman/Crowell's estimate that the genomic deleterious mutation rate (U) is at least 3. See post 320. I don't see how that information could ever be used against Behe's premise in EoE. Patrick
PaV, I think PO is conflating "three mutated bases per birth" with "triple mutation." (Prof, you can correct me if I'm wrong.) Perhaps you'd like to explain to him why the two are definitely not synonymous. Atom
The rest of my post was lost. Here's what I wrote: "Where did you get this information about 'triples'? I did a Google search and found that there are bacteria in humans that exhibit 'triple' mutations, but not in the human genome itself. Would you have a reference to something that indicates this?" PaV
PO [322]: Your second paragraph shows that, of course we all carry very rare mutations (triples, for example), far beyond Behe's CCC. PaV
Olofsson [317]: hitherto unknown, mutation to be 10^-20 and then it actually is observed in somebody, giving us an estimated rate of 10^-12. I believe the basis of this statement of yours is your belief that the malarial genome has nothing to do with a human genome, or a rat genome, or a plant genome, or whatever. I addressed this apparent underlying assumption of yours in post [296]. For most scientists, the genetic mechanisms (point mutations, deletions, insertions, recombination, duplication) found in any eukaryotic cell can be generalized to all eukaryotic cells. Both the malarial parasites and humans are eukaryotes, and so biologists would feel comfortable carrying over the mutation rate found in a malarial parasite to something that would happen in a human. Hope that clears this up. PaV
And you're "going down" :-) PO -- Can we refute him by pointing to an observed mutation. No . . . You seem to be saying because we can't refute him, we should assume he's wrong. I don't think anybody can wrap their mind around that one and still come up with a way of making sense out of it. tribune7
Patrick [320], I question how it can be extended, but not because it's atypical. We all agree that it can be ascribed to chance. Your second paragraph shows that, of course, we all carry very rare mutations (triples, for example), far beyond Behe's CCC. Already this shows that his claim of "not one single CCC in humans" is incorrect as stated in his book, but he does of course refer to beneficial mutations. But how many possible triple point mutations are there? The human genome is, what, 10^8 base pairs? And "10^8 choose 3" is a large number, about 10^23. Then one has to consider that not all mutations affect the amino acids, and a lot of other things, but we might arrive at different conclusions once we start arguing along these lines. olofsson
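A quick check of the combinatorial figure quoted above, taking PO's round 10^8 base-pair figure at face value (the haploid human genome is actually closer to 3 x 10^9 bp, which would only make the count larger):

from math import comb

genome_sites = 10 ** 8                  # PO's round figure for the number of base pairs
triples = comb(genome_sites, 3)         # number of distinct sets of three sites
print(f"{triples:.2e}")                 # ~1.67e+23, i.e. about 10^23 as stated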
tribune7 [318], Yeah, feels like I'm on a "Highway to Hell"... Sorry, you already did read it. OK. We're talking about mutations in humans, of rate 10^-20. Are there any? According to Behe, no. Can we refute him by pointing to an observed mutation? No, because if it has been observed, its estimated rate is at least 10^-12. olofsson
PO, I'll keep this short so you don't have to spend too much time: So in essence you question whether Behe's example can be extended to cover all cases, since it "may" be an atypical case? On a side note, in humans the mutation rate is higher in males, but it's estimated at ~2.5 x 10^-8 mutations per nucleotide site (Nachman/Crowell), which means that each person harbors about three new deleterious mutations. The real (not estimated) genomic deleterious mutation rate (U) is considered to be possibly higher, given estimates that around 4/5 of amino acid mutations are deleterious, and given that the estimate does not include deleterious mutations in non-coding regions, which may be quite common (and of course in the couple of years since this estimate was made we've been finding a lot of use for non-coding regions). And of course that leads into Haldane's dilemma and such possible "solutions" as "truncation selection" and "synergistic epistasis", but that's been covered before in depth on UD... Patrick
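A rough, illustrative sketch of how a per-site rate of ~2.5 x 10^-8 connects to "about three new deleterious mutations per person"; the genome size is the familiar ~3.2 Gb haploid figure, and the deleterious fraction below is an assumption chosen only to show how a U of about 3 can fall out of these numbers:

mu = 2.5e-8                        # per-site mutation rate per generation (Nachman/Crowell figure)
diploid_sites = 2 * 3.2e9          # two copies of a ~3.2 Gb genome
new_mutations = mu * diploid_sites            # ~160 new mutations per person per generation
deleterious_fraction = 0.02                   # assumed for illustration only
print(round(new_mutations), round(new_mutations * deleterious_fraction, 1))   # ~160, ~3.2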
tribune7, None. I don't think you're getting my point. See my previous post to DaveScot, it might help. PaV, Nicely reasoned by DaveScot perhaps, but not relevant to the issue I raised. Regardless, there is a problem here because we cannot infer anything probabilistically meaningful from a design hypothesis, only from chance hypotheses. If you don't agree, give me a design hypothesis and show me what probability distribution it confers on the data. That's why I argue that only elimination is possible, but that I think it fails. And even if it did succeed, the EF only lets us infer "not chance," nothing else. Atom and PaV, I'll be back as promised, one more time. olofsson
PO -- you can check out anytime you like but you can never leave :-) Let’s say we calculate the probability of a particular, hitherto unknown, mutation to be 10^-20 and then it actually is observed in somebody, giving us an estimated rate of 10^-12. You would conclude that the rate of mutation as per bacteria does not apply to man. Now, what CCC double-mutation event are we talking about that has been observed in humans? tribune7
DaveScot [311], In my post "it" referred to a 10^-20 probability mutation, which, by default, cannot be estimated in the human population. Let's say we calculate the probability of a particular, hitherto unknown, mutation to be 10^-20 and then it actually is observed in somebody, giving us an estimated rate of 10^-12. Setting aside issues of estimate accuracy, what do we conclude? I really need to get out of UD. Please send me an email! olofsson
DaveScot [311], Nicely reasoned! PaV
PO An observed mutation couldn’t have happened? You’re not helping the ID case here my friend! What observed double mutation (CCC event) are we talking about? tribune7
Atom: [306]
But they were always criticized, for various reasons. Now Behe shows actual real world replication data, and guess what, it vindicates the ID position and matches the theoretical results beautifully.
I couldn't agree with you more. That is 'exactly' what I think Behe has succeeded in doing. And it is quite significant, I believe. PaV
I've thought about my numbers in post [308]. I believe I incorrectly calculated the probability of finding a WORKING, or "functional", protein of this type and of this length. The correct calculation, I believe, is this: (1^90)x(11^60)/(22^150) = 11^60/22^150 = (3.05 x 10^62)/(2.3 x 10^201) = approx. 1.3 x 10^-139. This assumes that 90 of the 150 locations remain the same; hence, there is only ONE a.a. that can be chosen (or, rather, IS chosen by 'nature'). This gives rise to the (1^90) term in the numerator. For a protein that is 300 a.a. long, the calculation would be: (1^180)x(11^120)/(22^300) = approx. 1.7 x 10^-278. We have just sizzled the UPB!! PaV
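A quick numerical check of PaV's two figures, done in log space to avoid overflow; the 90/60 and 180/120 splits, the "11 tolerated amino acids", and the 22-letter alphabet are PaV's own illustrative assumptions, carried over as-is:

import math

def log10_prob(conserved, variable, tolerated=11, alphabet=22):
    # log10 of (1^conserved * tolerated^variable) / alphabet^(conserved + variable)
    length = conserved + variable
    return variable * math.log10(tolerated) - length * math.log10(alphabet)

print(log10_prob(90, 60))      # ~ -138.9, i.e. about 1.3 x 10^-139
print(log10_prob(180, 120))    # ~ -277.8, i.e. about 1.7 x 10^-278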
PO, Yes, I know the difference between the two, I was just being imprecise in my wording. I know that "failing to reject" is not the same as "accepting a hypothesis", I just didn't make that clear in the way I phrased it. Thank you for the charitable reading, as it was the correct one. PaV and I will wait patiently. Atom
But unless we have observed it, we don’t know what it does. And if we have observed it, our independence assumption comes into question. Not necessarily true. Some arrangements of matter simply defy any undirected means of assembly whether we know what it's for or not. Just look around you at all the manmade objects. Whether you know what the function of each is for or not doesn't make much difference in knowing that it's manmade because there are no known undirected physical processes that could have assembled it. The problem in organic machinery of life comes from the theoretical ability of random mutation to form any patterns at all that are not physically impossible. Theoretically possible and practically possible are two quite different things. What we need are some sane bounds on what's practically possible for random mutation in forming patterns in organic molecules. The best way to do this is through direct observation. By that method rm+ns appears to be bounded such that it's not anywhere near sufficient to create the patterns we observe. Some other mechanism appears to be necessary to explain it. We know intelligent agency can explain it - human genetic engineers acting as intelligent agents are direct proof that agency can theoretically and practically assemble any physically possible pattern of organic matter. Is there any possible means other than chance or agency? It seems to me and many others that this is a true dichotomy - if it isn't chance it is design. DaveScot
Atom [306], ... but you understood the gist of what I was saying. Well, I thought of starting a Kairosfocussian assault, "I cannot let such a material misrepresentation stand unopposed...Plato...Dembski...Hoyle...Plato again...more Plato..." but as I did understand what you meant, I decided against it. ;) It's not just a play on words though; there is a big difference between failing to reject a hypothesis and proving it. Anyway, as for your long post, and PaV's, I'll have to reply later. I'm trying to leave but "parting is such sweet sorrow." olofsson
tribune7 [307], An observed mutation couldn't have happened? You're not helping the ID case here my friend! olofsson
P.O. It's too bad you just left. I think I have a way of answering all your objections. The example you've used here, by way of comparison to the Caputo case, is that of the bacterial flagellum. And you ask (and for you this is a severe problem): what are the specifications we're talking about, this class of objects with which I can calculate probabilities? In the Caputo case, we had D's and R's, and a certain length. The presumption is that each of the D's and R's had a 50% chance of showing up. You then calculated that the entire probability space for 41 instances of a 1-in-2 selection is 2^41, so any particular sequence has odds of roughly 1 in 2 trillion. Then, for the "pattern" = 1 R and 40 D's, there were, what, 42 such "specifications" or "patterns", thus reducing the odds of the Caputo pattern to about 1 in 50 billion. But the bacterial flagellum is used to argue "irreducible complexity", not ID theory per se. So, instead of the flagellum, let's use an example that falls in the category of ID theory. Rather than look at the entire apparatus of 50 proteins that make up the flagellar system, let's just look at ONE of the 50 proteins associated with it, a hypothetical one. Let's assume it is, to throw out a number, 150 amino acids long. Well, there are twenty-two a.a.s that nature uses for biological life (more than 22 amino acids exist). This hypothetical protein has one specific amino acid at each of its 150 locations along its length. Well, then, we know how to calculate the probability space for this protein: we calculate 22^150 = approx. 2.3 x 10^201. This is the probability space for this length of protein. Now all we have to do is to calculate how many different "forms", or "specifications/patterns", of this protein are encountered in nature. Let's say that, looking over all taxonomic categories, there are instances where 40% of the amino acid locations differ in the specific amino acid found there. For convenience's sake, let's say that the differences, on average, amount to 11 different a.a.s being found at these varying locations. Then, what is the probability of finding a WORKING, or "functional", protein of this type and of this length? My calculation would be as follows. First, we'll identify the individual components that will make up the calculation: (22^90) = the 60% of locations where only one, specific a.a. is found over all taxonomic categories; (2^60) = the 40% of locations where any one of 11 amino acids replaces the one found in the flagellar protein over all taxonomic categories; 11/22 = 1/2, thus 2 raised to the 60th power, i.e., 60 locations where, on average, any one of two a.a.s will do. The probability of the particular, hypothetical protein arising is: (22^90)x(2^60)/(22^150) = (2^60)/(22^60) = 3.3 x 10^-63. I don't see how this calculation deviates in any substantial way from that of the Caputo case. Now, this does not yet reach the UPB. But what if 3 such proteins are absolutely indispensable for flagellar activity (remember, 50 proteins are involved in the flagellum)? This would push the combined probability of the three functioning together as a whole past the UPB. And what if instead of 150 a.a.s we had chosen a protein of 300 a.a.s in length? Then the individual probability would have been 2^120/22^120 = 1 x 10^-125. A 1 in 10^125 probability of coming about by chance. Think about that. Again, not past the UPB, but an improbability greater than the probabilistic resources represented by every atom in the universe changing every second since the beginning of the universe. I'm curious as to your reaction, P.O. PaV
He says, in essence, that we must have had CCCs in humans, otherwise Darwinian evolution has been proven impossible. I think what he's saying is that CCCs are the only observed mechanism of what can fairly be called Darwinian evolution (and even that might be subject to argument, malarial parasites remain malarial parasites after all.) So, in a population that is several orders of magnitude smaller than 10^20, how could you ever argue that an observed mutation has rate 10^-20? You'd say it couldn't have happened. tribune7
But unless we have observed it, we don’t know what it does. And if we have observed it, our independence assumption comes into question.
PO, First off, yes, I meant "fail to reject the chance hypothesis", which is (to me) the same as rejecting the design hypothesis. (Sloppy wording, but you understood the gist of what I was saying.) Hopefully you agreed that my approach is a scientific one which is simply a matter of number crunching that may turn out to implicate design or not; in other words, neither you nor I know which outcome will hold. Which is how experiments should be. As for your argument against Behe's point, there have been studies done to show what happens when only one of the two required mutations occurs (which are cited in his book): not much in the way of resistance. In other words, you need those two specific mutations to confer resistance. This would make the probability of getting each one roughly independent, since you could not rely on a step-wise gradualist approach. Behe really hammers this point in his book, as he pre-empts this line of counter-argument. As for how we could argue from mutation rates in one population with X members to another, smaller population, it is really straightforward. We are dealing abstractly with replication events and mutation events. Each replication event takes some time to occur, replicates a given number of bases, has a given mutation rate, and occurs within a given number of replicators at a time. These can all be plugged into a set of equations (again, mere number crunching) and theoretical results calculated. This is what IDers have been doing for years. But they were always criticized, for various reasons. Now Behe shows actual real world replication data, and guess what, it vindicates the ID position and matches the theoretical results beautifully. Now the point is that it doesn't matter what kind of replicator you are using (you can use a computer if you like); what matters are the numbers you plug into the equation. Since the empirical results match our equation, we have confidence that our equation is correct and thus can be used to calculate for other sets of parameters. And if an event (like a double mutation) takes on average 10^20 replication events to occur, we would not expect to see it, on average, in less than that number of events. Sorry for the long post. Atom
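As a minimal illustration of the kind of number crunching Atom describes, here is a sketch of the expected waiting time for one specific double mutation; the per-site rate and the replication throughput below are illustrative assumptions, not Behe's actual data:

# Expected waiting time for one specific double mutation, assuming the two
# point mutations are independent and each has the same per-replication rate.
per_site_rate = 1e-10                     # assumed rate of one specific point mutation per replication
double_rate = per_site_rate ** 2          # ~1e-20 per replication for the specific pair
replications_per_year = 1e12              # assumed number of replication events per year
expected_years = 1 / (double_rate * replications_per_year)
print(f"expected wait: ~{expected_years:.0e} years")   # ~1e+08 years under these assumptions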
tribune7, Sure, there's always time for a PS. I'm not criticizing data or methodology. By the way, as Behe points out, it is not his data but that of Nicholas White in the paper PaV linked above. I don't reject his conclusion that evolution through random mutations (and natural selection) cannot be responsible for the Earth's biodiversity (are you trying to pull a trick from Kf's book here? ;)), merely questioning how he arrives at his conclusion in the particular case of malaria and humans. He says, in essence, that we must have had CCCs in humans, otherwise Darwinian evolution has been proven impossible. I just wouldn't know how to make and defend that argument. The only way we can know mutation rates is by estimating them from data. We know fairly well what such rates are on average, but rates are variable, and the only way to assess a specific mutation is by observing it in the population and computing its relative frequency. So, in a population that is several orders of magnitude smaller than 10^20, how could you ever argue that an observed mutation has rate 10^-20? olofsson
PO (if you happen to still be lurking), Behe's calculation of events regarding the evolution of the malaria parasite is based on data acquired by observation. Now, if you want to say his methodology is flawed or his observations are inaccurate, well, they are certainly available for scrutiny for all the world to see. You seem, however, to be accepting his observations (even if it's just for the sake of argument) but rejecting his conclusion that evolution through random mutations (and natural selection) cannot be responsible for the Earth's biodiversity. Your rejection of his conclusion appears to be tied to the rejection of the premise that the rate of mutation equates to the rate of evolution. Now, random mutation (along with natural selection) are the keystones of neo-Darwinism. Since natural selection cannot explain macroevolution, what other metric would there be besides the rate of mutation? tribune7
All We have gone past 300. And though that is just a number, it is illustrative of the evolutionary spiral path this thread has taken, going back over the "same" ground over and over and making progress in the teeth of sustained opposition and challenge. In the end, it is plain that WD's work on the explanatory filter emerges essentially unscathed, and that the counter-points that began with my responses at 20 - 21 above, to prof PO's link to his critique at 19, have proved to be well founded. Indeed, many of them were simply uncontested, and others could only be addressed through side-points. So, while prof PO has had points where he was right [in the specific context of the remarks he just bolded, he correctly agreed with WD that specifications often relate to patterns], on balance his critique, as it will shortly appear in a biology journal, fails. One hopes -- but on the evidence of the current state of the institutional politics surrounding design thought among the guild of professional scholarship, sadly, does not expect -- that the journal will listen to PO's request and will grant prof Dembski a right of reply of reasonable length on the points made against his arguments. One or two points will require a further remark, as the thread winds down: 1] PaV's link woes: I think that we see here a lesson on the implications of bleeding-edge software! However, maybe the use of simple angle-bracket html address tags in the text might work. [I am doing this on Open Office 2.0, which is free, reliable and indeed apparently more compatible across the various Word versions than Word itself is. All I do is to get rid of smart quotes in the "href =" part, by cutting and pasting -- ctrl-X, ctrl-V; keyboard shortcuts going all the way back to CP/M days -- a non-smart quote from the text window at UD, to start with. And of course WordPress does not like to see more than a couple of such links in a comment post.] The review article PaV links, html version, is here. My Acrobat 5 rejects the pdf he links as corrupted. A key excerpt that makes the point that the numbers Behe is citing are empirical and non-controversial is:
Chloroquine resistance in P. falciparum may be multigenic and is initially conferred by mutations in a gene encoding a transporter (PfCRT) (13). In the presence of PfCRT mutations, mutations in a second transporter (PfMDR1) modulate the level of resistance in vitro . . . Resistance to chloroquine in P. falciparum has arisen spontaneously less than ten times in the past fifty years (14). This suggests that the per-parasite probability of developing resistance de novo is on the order of 1 in 10^20 parasite multiplications. The single point mutations in the gene encoding cytochrome b (cytB), which confer atovaquone resistance, or in the gene encoding dihydrofolate reductase (dhfr), which confer pyrimethamine resistance, have a per-parasite probability of arising de novo of approximately 1 in 10^12 parasite multiplications (5). To put this in context, an adult with approximately 2% parasitemia has 10^12 parasites in his or her body. But in the laboratory, much higher mutation rates than 1 in every 10^12 are recorded (12). Mutations may be associated with fitness disadvantages (i.e., in the absence of the drug they are less fit and multiply less well than their drug-sensitive counterparts). Another factor that may explain the discrepancy between in vitro and much lower apparent in vivo rates of spontaneous mutation is host immunity . . .
2] Comparison and elimination again. I refer onlookers to 269, point 2 above. The point is that statistics never directly infers cause, but rather compares predictions with observations in the context of a causal model, here the three possible sources of cause -- chance, necessity, agency -- tracing back to Plato. Fisherian-type rejection of null hypotheses is based on observing that, if the null predicts a particular population pattern, then, within the limits of the available probabilistic resources, it is unlikely that we would see observations that either fit the model very poorly [i.e. lie in the far extremes] or fit it only too well. The former is the more relevant case for our discussions, and the exhausting of probabilistic resources is what I remarked on as the edge of chance or probability above. My "draw the chart then drop a dart on it" example from 68 above shows this vividly. In effect, the Caputo case aptly illustrates what is going on. On a claimed selection process that should give even chances for D and R, a pattern was seen in which, in 41 ballot-position selections, D came up 40 times. The likelihood of this outcome is something like 1 in 52 billion, whereas the odds are more like 1 in 4 that we would see a near-even split of 21:20 one way or the other. Intuitively, we see that this is a highly contingent situation, so natural regularities do not govern the outcome. On chance, in light of probabilistic resources, we should not be observing the actual pattern; leaving the reasonable alternative as best explanation: agent action. This is reinforced by noting that the pattern played out over decades and that the actual selection itself was not publicly witnessed. Thus, we here see the design filter in action, plainly successfully. When all is said and done, the explanatory filter works in all directly observed cases, and credibly works in the cases that are of particular interest to design thought on origins. That cuts across the expectations, assertions and assumptions of the established evolutionary materialist school, and so they have exerted selective hyper-skepticism to try to discredit it. Rhetorically, they have succeeded in too many cases, but their case plainly fails on the merits. GEM of TKI kairosfocus
Dear all, Prof PO is done and will now retreat from UD. I will check in once or twice to see if there are any unfinished threads, and you can of course also easily reach me by email. I thank those of you who have been willing to engage in meaningful discussion. And to those who have not, remember the wise Mr Kairosfocus's words: Prof PO is right! Goodnight yall, Prof PO olofsson
Kf [269], It does not matter what you call it, comparison is comparison. Of course science does not use pure elimination, neither does applied statistics. olofsson
tribune7, Well, a double mutation at least, not necessarily simultaneous. But he also needs to know that it actually has happened, and then he needs data. Sure, we can look at two different mutations, each with rate 10^-10 which means that that particular double mutation has rate 10^-20 (assuming independence etc). But unless we have observed it, we don't know what it does. And if we have observed it, our independence assumption comes into question. A problem is that estimates of very small probabilities are often very uncertain. The Concorde needed only one accident to go from the safest to the least safe airplane in the world! olofsson
PaV, Re "the proxy" see my post 239 again, second paragraph. olofsson
"without the benefit of even a single mutation event of the same degree of complexity as a CCC (first of all, the “not even a single” is incorrect, as there are many possible CCCs, not just one" PO, I don't understand this. Behe's point concerns the likelihood of a simultaneous mutation, not the development of the resistance to chloroquine. And records are shattered, we will be reaching 300. A landmark not to be passed unless someone resorts to steroids :-) tribune7
PaV, I'll try but it must wait. The link worked, by the way. olofsson
P.O. I have a degree in biology. But, I was a pre-med, didn't get in, and went on to become a petroleum engineer. But, all that was in my younger years. While I'm posting, relative to [288], generally there is a divide between prokaryote and eukaryote organisms. When considering the genetical workings of these major cell types, it is generally agreed that what happens in one kind of a eukaryotic organism would hold equally well for all other eukaryotes (humans are eukaryotes). That is the thinking that hangs in the background. When you speak of mutation rates as proxies, and such, I'm not following that too well. Can you give me specific posts where you lay out your thinking? PaV
Hopefully it will work this time. PaV
PaV, Still doesn't work. Never mind, I'll find it in Medline. What's the title etc? Are you a biologist? olofsson
P.O. Forget the last post. It's for WD's "Specification:....." Here's the link. PaV
P.O. I'm having problems with Adobe 8.1. The links are bad. But kairosfocus, in #272, provides the pdf links. PaV
Olofsson [283]: "No, he probably meant that there is on average one mutation per generation which is true if the genome length is 10^8 and the mutation probability is 10^-8, but not in general." The critical word I used in the prior post was the word "particular", as in a "particular location". IOW, yes, on average, EACH malarial parasite will have "a" (as in 'one' mutation somewhere along the length of the genome) mutation; but since the mutation MUST occur at ONE, precise location, then, to defeat the improbability of the mutation occurring at just the required location (there are 10^8 locations along the genome; the 'chance' of the mutation occurring at the exact, required location is, then, 1 in 10^8), 10^8 organisms are needed. Hence, theoretically, one would calculate the probability of 1 in 10^8 for the chloroquine resistance developing. This is borne out in nature. PaV
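A quick back-of-the-envelope sketch of the point PaV is making; the per-site figure of 1 in 10^8 is assumed purely for illustration:

```python
# If each parasite has about a 1e-8 chance of carrying a mutation at one
# particular required base, how large a population is needed before that
# specific change is likely to be present at least once?
per_site_chance = 1e-8   # assumed per-parasite, per-site mutation probability

for n_parasites in (1e6, 1e8, 1e10):
    p_seen = 1 - (1 - per_site_chance) ** n_parasites
    print(f"{n_parasites:.0e} parasites: P(specific mutation present) ~ {p_seen:.3f}")
```

At around 10^8 parasites the specific change becomes more likely present than not, which is the sense in which roughly 10^8 organisms are needed.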
PaV [267], Interesting. There is nothing at your links though, can you try again? olofsson
PaV, if they told you that based on your family history there was a 1 in 10^20 probability that you’ll develop cancer, would you say this is also “vacuously true”? No, but I would ask them how they came up with the numbers... olofsson
PaV [284], I suppose you have not read Behe's new book? In it, he uses the estimated probability that the malaria parasite develops chloroquine resistance, which is 10^-20 (a crude estimate, based on data and other estimates, but the point is that it has happened and is very small). He then labels a mutation event of this rate "CCC." Next, he estimates the number of humans that have lived during the last million years to be 10^12. He then claims that belief in Darwinian evolution requires us to believe that modern humans have evolved without the benefit of even a single mutation event of the same degree of complexity as a CCC (first of all, the "not even a single" is incorrect, as there are many possible CCCs, not just one [not likely that PaV wins the Powerball but likely that somebody does but never mind that now]). But the only way to establish a CCC is to observe it (Behe makes a point that his numbers are estimated from real data, not calculated) and in the population of humans, any mutation that has happened has estimated probability 10^-12 (and those that have not happened we cannot estimate other than maybe saying that they are less likely than 10^-12). The whole argument just doesn't persuade me. See also my previous comments on whether mutation rates should be taken as proxies for evolutionary challenge or mutation effectiveness. I don't know how we would define these concepts in a precise way but it has to be done such that it is independent of probability. It's in this context a bit sketchy to use "complexity" and define it merely as a probability. olofsson
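A two-line sketch of the estimation point PO is making here, using the thread's own round numbers rather than any new data:

```python
humans = 1e12      # Behe's rough count of humans over the last million years
ccc_rate = 1e-20   # per-individual rate that defines a "CCC" event

# expected number of CCC events across all of those humans
print(humans * ccc_rate)   # 1e-08: effectively zero, so such an event is never observed

# the smallest nonzero rate that could ever be *estimated* from 1e12 individuals
print(1 / humans)          # 1e-12
```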
I hope not, that would make Prof Behe mad at me. I understand he doesn't hold grudges :-) tribune7
PaV [282], I've heard of it but don't know it. There are plenty of prob/stat books that all teach pretty much the same thing. olofsson
tribune7 [281], I hope not, that would make Prof Behe mad at me. olofsson
Olofsson [277]: "Thus, with Behe's numbers of 10^12 humans and a “CCC event” defined as having estimated rate 10^-20, it is vacuously true that there has not been any such event." I'm not very knowledgeable in how these rates were/are calculated, but, as I'm wading slowly through Fisher's "The Genetical Theory of Natural Selection", a comment of his there suggests that these rates are derived from actual human populations when it comes to the human genome. There are ways to know with confidence when a mutation has randomly occurred, and to associate it with the occurrence of some disease in particular populations. From that data, mutation rates can be calculated. So, I'm wondering what you mean by "vacuously true", other than that since the probability is so low we have no way of evaluating such a number, since there might never be that many humans born. OTOH, if they told you that based on your family history there was a 1 in 10^20 probability that you'll develop cancer, would you say this is also "vacuously true"? I suspect not. PaV
PaV, No, he probably meant that there is on average one mutation per generation which is true if the genome length is 10^8 and the mutation probability is 10^-8, but not in general. What you are describing is a conditional probability, that is, given that there is one mutation, what is the probability that it happens precisely at this location? olofsson
Olofsson [275]: I have several books on statistics. I have an older one. It's by H.D. Brunk, entitled "Mathematical Statistics". It seems to be the most mathematically rigorous one. Have you heard of it? But, of course, my goal is to learn the least amount of statistics that I absolutely have to. I don't like headaches. ;) PaV
Or you just managed to inadvertently prove creationism :-) tribune7
P.O. "I was thinking of Behe’s comment that it is reasonable that the ratio is 10^8 because that is the length of the genome." I'm rather sure what he meant was that, given that the length of the malarial genome is 10^8 bases, the probability of a single mutation---at a particular point along the length of the genome---occurring would then be 1 in 10^8. (Remember that the mutation had to occur in one of two locations on the genome. So, given that 'one' mutation has already occurred, the 'second' mutation would have to occur in one particular location, and none other.) So, he takes the fact that an additional mutation at a particular location has the very same probability of occurring in nature as the one that is theoretically calculated as confirmation of his logical approach. [I think this "fit" is also very important for the ID argument in general.] PaV
...that the argument is meaningless. olofsson
Thus, with Behe’s numbers of 10^12 humans and a “CCC event” defined as having estimated rate 10^-20, it is vacuously true that there has not been any such event. In humans. Which means . . . ? tribune7
DaveScot [266], Mutation rates do not depend on population size but I wrote that estimated rates do, in the sense that if the population size is N, there is a lower bound of 1/N to the estimate. Thus, with Behe's numbers of 10^12 humans and a "CCC event" defined as having estimated rate 10^-20, it is vacuously true that there has not been any such event. olofsson
PaV [267], I was thinking of Behe's comment that it is reasonable that the ratio is 10^8 because that is the length of the genome. olofsson
PaV [268], I have read it and more of Dr D's writings; thus I am able to offer an insightful criticism of his work. As for learning statistics, Specification... is not where you should go. You need to learn it from a regular text, preferably take a class or two. Now, that is really painful... olofsson
H'mm: Came back, see blog is back up; also, I see we seem to have made history for UD here . . . kairosfocus
Correction: In post #267, it should be 1 in 10^12, not 1 in 10^14. I somehow got 14 into my head and couldn't keep it out. PaV
PS: PaV, here is the Yahoo converter's link on WD's Specification: The Pattern That Signifies Intelligence, html version. Here is the PDF, for those not doing free beta testing on Adobe 8.1 and/or Vista etc. kairosfocus
4] PO, 259: I have to say, considering how incredibly complicated the flagellum is with all those little parts, it is most likely due to chance. Whether it is intelligent chance or not is another matter. “intelligent chance” reflects the basic problem discussed in my point 2 just above. Agency can use chance, but it normally does so in a context that, as WD discussed in his latest, cuts down the config space to such manageable proportions that trial and error heuristics are fruitful. That is, within the edge of probability. Next, in inferring to chance to explain the flagellum, you have in effect lined up with CD's classic burden of proof shift: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” Our contention, with reasons, is that: [a] we cannot consistently demand that level of proof on a scientific matter or other matters of fact without surrendering the difference between knowledge and mere arbitrary belief, and [b] in fact, the flagellum is beyond reasonable doubt and/or on the preponderance of the evidence – a more feasible level of testing – well beyond the edge of probability. 5] DS, 266: Eukaryote mutation rate is generally given at one base pair error per 10^9 base pair replications. IIRC it’s an order of magnitude or two higher for prokaryotes . . . Population numbers don’t affect that rate. That’s not to say it’s a constant. It’s variable for reasons known and unknown . . . the grain pretty much goes away in plasmodium as the predicted rate given above roughly matches the observed rate in acquiring the single point mutation conferring atovaquone resistance and the two point mutations needed for chloroquine resistance. Muy interesante. Put that in the context of the loss of elements of functionality that are associated with the emergence of resistance to drugs and associated 1 or 2 point etc mutations, and where does that get you relative to the scale of proposed “creative” mutations required to originate elements of new body plans? [Here, horizontal gene transfers are simply displacing the origination problem sideways. The root issue is where did the functional DNA come from to begin with, to be horizontally transferred and co-opted to new functions.] 6] PaV's links: They are not coming up in my Firefox. Let's wait on his sorting out Adobe 8.1. [Thanks for the warning: if it ain't broke, don't fix it.] GEM of TKI kairosfocus
3] PaV, 256 [Caputo yet again]: what does it matter whether one moves from the center to the edge (as you do, P.O., in the Caputo case, comparing the 38% pattern versus the 1 in 50 billion pattern) or one begins at the extreme end of the P.D. and moves toward the center, with the proviso that once a specification/pattern exceeds the UPB in probability, then “chance” can no longer be ‘ruled out’? Why is it necessary to “know” what each possible specification looks like? Of course, “the edge of chance” is the threshold where the available probabilistic resources run out, so that when we see [a] a pattern that fits the known preference for D, in a context that [b] was allegedly produced by a fair selection process, but is also [c] extremely improbable relative to available probability resources, we put up our “suspicion” flag. As I summarised last evening, this is in effect a special case of a configuration space for a conveniently digital system (i.e. the outcomes are neatly discrete so we do not have to fuss over making up somewhat judgemental phase space cells – and that is in fact the underlying context). Then, the issue is to find the functionally specified island that is relevant, relative to an arbitrary start-point. In this case, on one throw of the dart so to speak. In the macroevolution case, relative to say DNA in existing organisms, we know we are already in islands of functionality. The issue is to extend the config space and to thus access novel body-plan level islands of functionality, e.g. here the flagellum. But, on reasonable estimates of the scale of proteins and the fine-tuned, interlocking and integrated nature of bio-systems based on proteins, as well as the required underlying DNA to code for it, we know as above that 500 – 1000 new base pairs are not likely to emerge on even a vastly larger search space than our cosmos as we observe it – much less on this one small planet within the window of time life plausibly has existed here. We are beyond the available probability resources, the probabilistic edge of evolution. [Note how much of the observed mutation-based drug and/or insecticide resistance, etc., is due to loss-of-function-oriented single point mutations.] Similarly, on origin of life, we see that we need a cluster of many dozens to hundreds of proteins, corresponding DNA, and a plausible cell wall plus the relevant architecture and information systems to integrate them functionally. Taking only the tests on DNA in biologically viable cells, we see that 300,000 – 500,000 base pairs is a reasonable floor for life, and note too that 100 proteins at 3 codes per protein and with 300 k base pairs, gives us a very crude estimate of 1000 pairs per protein. Again, OOL is well beyond the edge of probability. . . . kairosfocus
It continues . . . Insomnia patrol reporting in after patrol and a 404 error game, “Sarge.” Several highlights overnight: 1] A Quickie on the Flagellum and the TTSS: Last time I checked, the TTSS was about Y. pestis et al using a subset of the gene code for the flagellum [present but not fully expressed, a subset being used] to make a nano-syringe for injecting toxins into especially eukaryote cells, which supposedly evolved from prokaryotes, i.e. there seems to be a time-line problem relative to the usual evolutionary trees. [This is similar to last evening's BBC news announcement that H. erectus and habilis are now regarded as contemporaries over in E Africa some 1.5 MYA, without adequately noting the implications for the usual trees and time-lines. My old correspondent, HM, is doubtless sitting on heaven's balcony and having a good laugh!] A WD blast from the past will help underscore the problem:
The bacterial flagellum is a motility structure for propelling a bacterium through its watery environment. Water has been around since the origin of life. But the TTSS, as Mike Gene . . . notes, is restricted "to animal and plant pathogens." Accordingly, the TTSS could only have been around since the rise of metazoans. Gene continues: "In fact, the function of the [TTSS] system depends on intimate contact with these multicellular organisms. This all indicates this system arose after plants and animals appeared. In fact, the type III genes of plant pathogens are more similar to their own flagellar genes than the type III genes of animal pathogens. This has led some to propose that the type III system arose in plant pathogens and then spread to animal pathogens by horizontal transfer.... When we look at the type III system its genes are commonly clustered and found on large virulence plasmids. When they are in the chromosome, their GC content is typically lower than the GC content of the surrounding genome. In other words, there is good reason to invoke horizontal transfer to explain type III distribution. In contrast, flagellar genes are usually split into three or more operons, they are not found on plasmids, and their GC content is the same as the surrounding genome. There is no evidence that the flagellum has been spread about by horizontal transfer." It follows that the TTSS does not explain the evolution of the flagellum (despite the handwaving of Aizawa 2001). Nor, for that matter, does the bacterial flagellum explain in any meaningful sense the evolution of the TTSS. The TTSS is after all much simpler than the flagellum. The TTSS contains ten or so proteins that are homologous to proteins in the flagellum. The flagellum requires an additional thirty or forty proteins, which are unique. Evolution needs to explain the emergence of complexity from simplicity. But if the TTSS evolved from the flagellum, then all we've done is explain the simpler in terms of the more complex.
I guess we'd better wait to hear the other side of the story in response to what Mr Matzke et al have to say; maybe from Minnich et al. (In short, we must beware that “debate is that wicked art that makes the worse appear to be the better case [therein being aided by rhetoric, the art of persuasion, not proof].” And that is why I would not have made a good debate judge: I would have severely downgraded persuasive and “artful” [in both senses] but unsound / fallacious pleadings. BTW, in the end, Montserrat actually won vs several other neighbouring islands, arguing a case that is sound -- i.e. that Mr Castro has been an oppressive and destructive dictator – but extremely “against the tide” of public opinion in the region, which tends to be reflexively “anti-imperialist.” Of course, on a mere coin-toss, the same team was also prepared to argue the opposite!) 2] PO, 249: To talk about “best explanation” requires a comparative approach which puts you in Elliott Sober’s camp. I and Dr D maintain that only elimination is possible for design inference. Again, the underlying inferential and modelling context of the design inference is that: [1] we observe and model cause in terms of [a] natural regularity, and/or [b] chance, and/or [c] agency; [2] we are provisionally inferring based on empirical evidence, relative to the three; [3] where there is contingency linked to a configuration space, NR is not dominant. Similarly, [4] when there is a sufficiently large config space to exhaust probabilistic resources relative to reasonable search, and an independent specifying pattern, on experience we see agency is superior to chance. Thus, [5] agency – alt. c -- is the best explanation for CSI, relative to the alternatives a and b. In short, we see “elimination” and “comparison,” but not across a Bayesian-style comparison on hyps and conditional probabilities. In particular, NR not being dominant is as a rule a trivial issue, so the crucial question is to eliminate the null hyp that the observed functionally specified phenomenon was produced by in effect chance search across the relevant config space. (BTW, in this context, at stage 1, we simply refuse to foreclose the possibility of any one of the major causal forces. Similarly, the provisionality in 2 is standard for empirical investigations. The inferences in 3 – 5 then serve as empirical data pointing to agent action, and may serve as evidence for the existence of the agents if that was not directly observed.) And, “plausible” is used to point out that empirically based reasoning is provisional, a commonplace of science. . . . kairosfocus
P.O. As for your earlier post, for one who loathes statistics, you’re not doing too bad! I alluded to it before: statistics makes me have to think too hard. It's a completely different world one must enter in order to do statistics. Probability spaces and such are hard for the imagination to conjure up. But, if one is interested in population genetics and ID, the brain is just going to have to suffer. If I do halfway well with statistics, it's because I've read WD's article: Specification: The Pattern That Signifies Intelligence. I highly recommend you read it since it lays out Dembski's argument most fully. (Sorry I'm giving you HTML links; but I just downloaded Adobe 8.1 and my computer is acting up. I can't get pdf files now. Alas.) PaV
P.O. "Yes, that is it. Looking forward to your explanation" The explanation is pretty much: that's just the way it is. When you look at White's paper, there is a discussion in it about the various factors that are involved in development of drug resistance. Quite a number of factors are involved. He gives a brief discussion of some of these factors. But the bottom line is this: the in vivo rate (basically what happens in the organism itself---what is actually seen in the live organisms) is 1 in 10^14, while the in vitro (basically what happens in the lab, in Petri dishes, etc.) rate is between 1 in 10^8 and 1 in 10^10. I still find it a little bit puzzling; but those are the rates. We had a very long discussion about it all here at UD. Here's the thread. I don't think that will be very useful. The best thing is White's paper itself here. PaV
PO "I think estimated mutation rates are not only less than perfect but probably not very good at all as they depend on population size." I'm not sure how to interpret that. Eukaryote mutation rate is generally given at one base pair error per 10^9 base pair replications. IIRC it's an order of magnitude or two higher for prokaryotes. Population numbers don't affect that rate. That's not to say it's a constant. It's variable for reasons known and unknown. Many chemicals are carcinogens for example. They increase the rate of mistakes in DNA replication. Background radiation is another environmental factor that varies the rate. It also varies by loci within the same genomes. So one should indeed take the observed average rate given as 1 in 10^9 with a grain of salt. However, the grain pretty much goes away in plasmodium as the predicted rate given above roughly matches the observed rate in acquiring the single point mutation conferring atovaquone resistance and the two point mutations needed for chloroquine resistance. DaveScot
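A rough order-of-magnitude check of DaveScot's claim, taking the quoted per-base error rate at face value and assuming the two required changes arise independently:

```python
per_base_error = 1e-9               # quoted replication error rate per base pair
single_point = per_base_error       # one required point mutation
double_point = per_base_error ** 2  # two required point mutations, assumed independent

print(single_point)   # 1e-09, within a few orders of magnitude of the ~1e-12 atovaquone figure
print(double_point)   # 1e-18, within a couple of orders of magnitude of the ~1e-20 chloroquine figure
```

The agreement is only to within a couple of orders of magnitude, which seems to be about what "roughly matches" can mean with numbers this crude.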
And what exactly is intelligent chance? tribune7
PO -- No mutation that occurs in humans can ever come even remotely close to Behe’s CCC regardless of how useful it is, simply because of population sizes. And rate of reproduction and generational time span. Which is the point. If Darwinian evolution is occurring at such a rate as to do what its proponents claim it can, it would be seen more readily in more prolific creatures like bacteria than in man. That’s why I fail to see why CCC - 1 in 10^20 - is a valid benchmark for the plausibility of Darwinian evolution. OK, so you are saying that man with a smaller population (and lower reproductive rates and longer generations) is more likely to have the simultaneous beneficial mutations than the malarial parasite? tribune7
PaV, Sure, there is no difference in moving inward or outward in the probability distribution. Each significance level corresponds to a particular cut-off point, the start of the rejection region. In statistics, significance levels are typically 5% or 1% or something similar. The smaller the better, but if they're too small we cannot reject anything (which is what we want to do). So, we compromise, and the choice is really quite arbitrary. UPB is simply a drastic significance level (see my post 166). As for your proteins, later! olofsson
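To illustrate the point that a significance level just fixes where the cut-off falls, here is a small sketch in the Caputo-style binomial setting; the particular alpha values are chosen for illustration only:

```python
from math import comb

n, total = 41, 2 ** 41   # 41 fair 50/50 draws, as in the Caputo example

def p_tail(k):
    """P(at least k D's out of 41) under the chance hypothesis."""
    return sum(comb(n, i) for i in range(k, n + 1)) / total

# smallest D count at which we would reject, for progressively stricter levels
for alpha in (0.05, 0.01, 1e-6, 1e-11):
    cutoff = next(k for k in range(n + 1) if p_tail(k) <= alpha)
    print(f"significance level {alpha:g}: reject once the D count reaches {cutoff}")
```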
PaV, Yes, that is it. Looking forward to your explanation (keeping in mind that I don't think it's a big deal). As for your earlier post, for one who loathes statistics, you're not doing too bad! :) More later! olofsson
P.O. As to the suspected error on p.59 of EoE, are you pointing out that Behe speaks of two mutations and that for the first mutation he has the odds of 1 in 10^12, and for the second, he uses 1 in 10^8? I believe I can explain this, if this is the error you suspect. PaV
tribune7 [258], No particular mutation, any mutation. No mutation that occurs in humans can ever come even remotely close to Behe's CCC regardless of how useful it is, simply because of population sizes. That's why I fail to see why CCC - 1 in 10^20 - is a valid benchmark for the plausibility of Darwinian evolution. olofsson
tribune7 [257], I'm here to discuss design inference, not personal beliefs. But I have to say, considering how incredibly complicated the flagellum is with all those little parts, it is most likely due to chance. Whether it is intelligent chance or not is another matter. olofsson
"OK, you actually asked a question. None that I am aware of, but again, I'm no biologist." As you keep pointing out. But assumptions about biology seem to be coloring your mathematical analysis. For instance you ask "why should a mutation that has only appeared once in humans be deemed a trillion times less useful than one that has appeared only once in bacteria?" What mutation is that? tribune7
The EF does not allow us to ask these questions. So you don't have an opinion? tribune7
P.O. "As I have pointed out many times, if you put Caputo and the flagellum side by side, you notice that the specification is absent in the latter (see my post 106, near the end). We can identify E in both examples, but in the flagellum, there is no E*. I have asked you before to no avail." Maybe the answer to your dilemna lies in asking the question: How do we form the "rejection region"? What I mean is this. Implicit in the Caputo case, and in our discusion (both yours and ours) about it, is that as one moves along the probability distribution, i.e., as one moves further and further away from the center (peak) of the distribution and towards the edges, the probabilities get exceedingly smaller and smaller. Let's face it, in the end, there is no rule or law that establishes the "rejection region", rather it is established inductively, or, intuitively. In the Caputo case, e.g., 38% is close to the peak, and, obviously, 1 in 50 billion is out toward the edges. But, what is the rule for establishing the "rejection region". As far as I can see, there isn't any. We simply agree that a "chance" event that is highly improbable (1 in 50 billion) didn't come about by 'chance'. We do this, it appears to me, intuitively, inductively----it's something we infer. And, as your paper points out, it is an inference you came to quickly yourself. Well, what if instead, our journey along the probability distribution doesn't begin from the center. What if, instead, we begin from the extreme (almost infinite) end of the probability distribution and work our ways to the center. Wouldn't it be likewise true that at some point along this distribution, we could no longer 'rule out chance' as a cause for some event? Wouldn't we then indicate this as the end of the "rejection region", wherein, on passing beyond this point on our journey to the center of the probability distribution, "chance" is now a plausible explanation for an event? I don't see any difference between these two scenarios: both involve the same probability distribution, both establish a "rejection region", and in both case what 'establishes' the "rejection region" is a somewhat inductive, intuitive sense of improbability---i.e., a probabilistic inference. Intelligent beings are comfortable doing this. We have some native sense of improbabilites. What WD has done is simply established the UPB as that point along the probability distribution which defines the "rejection region". Thus defined, what does it matter whether one moves from the center to the edge (as you do, P.O., in the Caputo case, comparing the 38% pattern versus the 1 in 50 billion pattern) or one begins at the extreme end of the P.D. and moves toward the center, with the proviso that once a specification/pattern exceeds the UPB in probability, then "chance" can no longer be 'ruled out'? Why is it necessary to "know" what each possible specification looks like? In the Caputo case, we know that, given the rules, the pattern is going to contain nothing but D's and R's, and that they will total up to 41 (or was it 42?) such determinations, and that, each such determination has a 50% chance of occurring naturally. Well, as I pointed out in an earlier post, we know the "rules" for proteins: (1) they're made up of amino acids; (2) each amino acid has a chance of occurring of 1 in 22; (3) specified proteins are of lengths as many as 300 a.a. long.***[see below] Here the a.a. are equivalent to the D's and R's. 
Here, the total number of determinations is 300, whereas in the Caputo case it was 41. We run the numbers and we get 22^300, which is way beyond the UPB in improbability, assuming a chance hypothesis. In terms of statistical theory, I just don't see how this approach violates in any way the canons of statistics. ***[from above] (I might add that we also know that outside of the cell--human intervention notwithstanding--proteins don't exist. That is, outside of biological life, the probability of running into a protein is zero. So, in a way, according to "chances", proteins have zero chance of existing. This, it would seem, should rule out the possibility of chance as a cause for proteins.) PaV
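Taking PaV's figures at face value (22 alternatives per position, 300 positions, with independence and equal probability assumed throughout), the magnitude works out as follows:

```python
from math import log10

positions = 300      # protein length assumed in the comment
alternatives = 22    # residue alternatives per position assumed in the comment

log10_states = positions * log10(alternatives)
print(round(log10_states, 1))   # about 402.7, so 22^300 is roughly 10^403
print(log10_states > 150)       # True: far past a 1-in-10^150 universal bound
```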
tribune7 [250], OK, you actually asked a question. None that I am aware of, but again, I'm no biologist. olofsson
Anybody who is still interested in the EF and my objections: see post [241]. There is a question at the end. olofsson
Kairosfocus as a debate referee is like Roger Federer being a referee at Wimbledon! :) olofsson
tribune7 [250], I will think it through, to the best of my capability. Off the top of my head though, I think estimated mutation rates are not only less than perfect but probably not very good at all as they depend on population size. Why should a mutation that has only appeared once in humans be deemed a trillion times less useful than one that has appeared only once in bacteria? Just because there is nothing better, it might not be a good idea to use something that is flawed. Note that this problem is not restricted to the ID/darwinism debate but ought to be of interest to evolutionary biologists as well. olofsson
tribune7 [249], The EF does not allow us to ask these questions. See my post 249. olofsson
PO--I question why ranking mutation rates is equivalent to ranking “evolutionary challenge” Now, think this through. Mutation rates may not be perfect but what other objective criterion is there to rank "evolutionary challenge"? tribune7
Kairosfocus, Interesting that you submit, on excellent grounds, agency is the best explanation as this is precisely the kind of conclusion you cannot reach in the Fisherian eliminative paradigm. To talk about "best explanation" requires a comparative approach which puts you in Elliott Sober's camp. I and Dr D maintain that only elimination is possible for design inference. Or do we? Ah, but Dr D himself succumbs to the temptation of comparison in the "Elimination vs Comparison" paper, on page 5: "If we can spot an independently given pattern [...] then it's more plausible that some end-directed agent or process produced the outcome [...] than that it simply by chance ended up conforming to the pattern." The boldfacing of "more plausible" is mine, to point out how extremely difficult it is to stay on the straight and narrow Fisherian path. Prof PO, The Last Fisherian olofsson
PO -- We can never reject the design hypothesis PO, what is more reasonable -- the design hypothesis or any other? tribune7
Atom [243], I may be wrong on a number of points and I have written Prof Behe to clear up the situation. It is true that more mutations are required for chloroquine resistance than atovaquone resistance (2 vs 1, according to Behe) but that is not really my point, as I question why ranking mutation rates is equivalent to ranking "evolutionary challenge" or "usefulness of mutations." Anyway, I'll keep thinking about it. olofsson
Atom [242], We can never reject the design hypothesis because that would require us to state it and compute the likelihood of the data under the design hypothesis. Remember that the EF is eliminative, not comparative (which is something that Elliott Sober has a problem with but I don't). I suppose that you might mean "fail to reject the chance hypothesis." olofsson
PS: Sorry, 10^310 Q-states overall. kairosfocus
Still going strong . . . A few quick comments, not in any particular order: 1] WD and “right” of rebuttal: I thought earlier to keep private what was not public, especially as the request Prof PO makes, apart from unusual influence and openness, will most likely make but little difference to the well-known situation [cf fate of Sternberg etc.]; i.e. IMHCO the operative issue is that WD would be petitioning for access at sufferance, and on long track record that is unlikely given the general state of journal politics and polarisation relative to design issues. (Of course, I did not foresee that that could be “turned” into an insinuation of bad faith on my part. I would love to be happily surprised, but I ain't holdin' my breath waiting for it.) 2] Atom: A bright light indeed. Bon Voyage . . . 3] Caputo: Actually, the Caputo case is an illustration of configuration space at work, and the issue of likely outcomes on reasonable probabilistic resources. We in effect have a space of 2^41 ~ 2.2*10^12 cells, with 42 clusters from 41 R to 20 R/21 D to 41 D. At the peak, the near 50-50 splits, we have two clusters of 2.7*10^11 cells, or about 12.2% each. One step away from that, and on to the end of the distribution, we have about 38% of the cells. In short, this is a classic inverse T distribution typical of statistical thermodynamics and related situations, i.e. sharply peaked. Relative to these numbers, WD's E* of the last two clusters on the right as an extreme, is about 1 in 52 billion, indeed. On these numbers and with a reasonable sample of cases, we would not expect to see the sort of distribution Caputo saw, as the most likely outcomes would cluster near the middle of the distribution. So, when we see something on the right leg of the inverted T and it also fits an independent pattern: advantage for Mr C's party, we are right to infer that design is the likeliest explanation. Worse, the report that the actual drawings were unobserved adds opportunity to motive and means. Equally, an expansion of the reasonable rejection region to 38% of the distribution is not defensible, as this is well within the reasonable expectation of observation. I guess this is for the record, but that will allow onlookers to judge for themselves whether I am simply trying to win a debate by any tactics that come to mind, or actually care about the balance of the case on the merits. [Let's just say my views on that wicked art we call “debate” are not exactly a secret; they are why I recently refused to sit as a judge on an inter-island debate competition.] 4] Flagella and proteins I take on board the new information –- note that word -- on specific proteins, though of course they make no material difference to the underlying issue raised in my openly acknowledged as crude calculations. BTW, as a practical matter, with the 4-state elements in DNA, if we have a string of DNA of over 500 – 1,000 base pairs, i.e. double to quadruple the length that gives 10^150 states, we very reasonably are in the region of sufficiently complex that islands of functionality in the config space are comfortably sparse beyond island-hopping strategies such as RM + NS, starting at arbitrary states. For, 4^500 ~1.07* 10^301 and 4^1,000 ~ 1.15* 10^602. To give an idea of what even the first of these numbers means, take our observed cosmos of ~ 10^80 atoms, and expand it so that every atom becomes a new sub-universe of the same order; this gives now 10^80 sub-universes of 10^80 atoms, or 10^160 atoms. 
(At 10^150 quantum states per universe across its lifetime, that would give us 10^230 quantum states, total.) Thus, the config space approach is both a generalisation of the statistical distribution approach and also quite capable of taking on the flagellum issue, even just the first corner of it we outlined above being enough to make the material point: 4 proteins at ~ 300 base pairs each gets us to 1200 base pairs. (Also, as I and others have pointed out above, a motor without steering wheel and driver is a recipe for trouble. That is, the system is complex beyond reach of any reasonable probabilistic resources, is functionally specified and information-based.) I see too that someone aptly noted that the issue is not whether other locks exist with combinations, but this particular lock and its combination. So, the formation of an acid-ion powered, controlled, forward/reverse drive outboard motor in bacteria in the face of Behe's edge, is to be explained. I submit, on excellent grounds, agency is the best explanation. Moretime GEM of TKI kairosfocus
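A quick numerical check of the magnitudes quoted in the comment above, using its own figures:

```python
from math import comb, log10

total = 2 ** 41   # all D/R sequences over 41 selections

# one central cluster (21 D / 20 R) as a fraction of the space
print(round(comb(41, 21) / total * 100, 1))   # about 12.2 (percent)

# everything from 22 D's out to 41 D's, i.e. one step past the peak to one end
print(round(sum(comb(41, k) for k in range(22, 42)) / total * 100, 1))   # about 37.8 (percent)

# the DNA configuration-space sizes quoted for 500 and 1,000 four-state bases
print(round(500 * log10(4), 1))    # 301.0, i.e. 4^500  is about 1.07 * 10^301
print(round(1000 * log10(4), 1))   # 602.1, i.e. 4^1000 is about 1.15 * 10^602
```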
There is no a priori reason that resistance to chloroquine is that much harder to develop than resistance to atovaquone (without any expertise in Chemistry, I think the molecules are very similar in size, composition, etc).
It has been a month or so since I finished EOE, so this is from memory, but I think you may be mistaken on this. I think one type of resistance takes more mutations than does the other (lurkers with the book handy can correct or confirm my point.) So theoretically we expect the "multi-step" one to take longer to find through blind search. When we look at the empirical data, this is what we see and our theoretical estimate is confirmed. If the two types of resistance were caused by an identical number of mutated bases, it would be odd indeed for one to take much longer than the other to come about by chance. So this also leads me to believe that you may have made a wrong assumption in your reading of Behe's ideas. Atom
PO, Lol at the distraction. To everyone who sent congratulations and compliments, thank you. I did post a comment with links to my mug, but they seem lost. So sorry Aq. PO (again), Ok, so you agree that in principle we could use my approach to apply the EF, even if our numbers are so small that we reject the design hypothesis. (Which is fine.) We can reject design 30 times, keep sharpening our methods, then on the 31st try we see that a design inference can be made. This isn't a problem for the filter, since it is designed to have a tendency to reject design (it is conservative), rather than have false positives.
You just can’t get anywhere near the UPB with a relative-frequency estimate so you could never reject a chance hypothesis.
Again, I haven't run the numbers so I'm not sure of this. We would need to calculate how many bases long our N would have to be to get a one in a 10^150 isolation ratio. Then we'd have to see how many sequences of that length we have examined (to check for BF type device), and run the numbers. We could do the same with the second method I outlined (dealing with proteins and sub-components, not DNA bases). But either way, it is just a matter of calculation at that point, not one of theoretical issues. Would you agree? Atom
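One way to start the calculation Atom describes, under the most generous possible assumption (a single functional sequence of length N), is simply to ask how long N must be before the space of 4^N sequences can even reach 10^150:

```python
from math import ceil, log10

# Even if only ONE N-base sequence of the relevant kind existed, an isolation
# ratio of 1 in 10^150 is impossible unless the whole space 4^N is at least 10^150.
target_exponent = 150
min_bases = ceil(target_exponent / log10(4))
print(min_bases)   # 250: sections shorter than about 250 bases can never reach that ratio
```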
Kairosfocus [219], 2. the flagellum is a specific type Yes, and in applying the EF, we need to form specifications, that is, sets of specific types. Whether or not the "evo mat paradigm" is deeply challenged is not the issue; the issue is how to apply the EF. So, I repeat my hitherto unanswered question: What is E* in the flagellum example? olofsson
Atom, If you need an update on the Bayes issue, see my post 214. olofsson
Atom [230], I meant that if an event is defined to have some certain property based on its observed relative frequency f alone, obviously you will not see it in a population that is much smaller than 1/f. So even if we were to discover a great mutation that helped the caveman suddenly be able to solve differential equations, it would not qualify as Behe's "CCC" event because there have not been enough humans. In other words, I think it is suspect to quantify and rank mutation events by their observed relative frequency. Even if we had a way to independently quantify "usefulness" of a mutation or "evolutionary challenge" there is no reason it would coincide with observed mutation rates. For example, the only reason we can say that chloroquine resistance is 10^8 times more complex than atovaquone resistance is based on estimated mutation rates. There is no a priori reason that resistance to chloroquine is that much harder to develop than resistance to atovaquone (without any expertise in Chemistry, I think the molecules are very similar in size, composition, etc). The argument that "because the malaria parasite needed a 1-in-10^20 mutation event to become resistant to chloroquine, humans must have experienced mutations of the same probability" just does not seem very persuasive to me. By the way, there is an error in Behe's book, on p.59 regarding the numbers 10^20, 10^12, and 10^8. It is not important but see if you can spot it (and ask Kairosfocus if you talk to him). Finally, I do not know anything about biochemistry so I have no way to comment on protein-protein binding sites. I suppose that any probability calculations there are based on assumptions rather than data though. olofsson
Congrats Atom! scordova
Atom [229], Being just as distracted as everybody else by your lovely fiancee, I'll try to focus now... No logical problem, but still an overwhelming practical problem. You just can't get anywhere near the UPB with a relative-frequency estimate so you could never reject a chance hypothesis. If your data could be used to somehow choose plausible probability distributions and these in turn could be used to calculate probabilities, it might work. We should remember that the EF is primarily an attempt at formalizing design inference, not a practical tool. In any practical application, I believe it will boil down to a statistical hypothesis test one way or the other. olofsson
Mods, I think my last comment got stuck in moderation due to having two pics linked on it... Thanks. Atom
Atom, yeah she's smokin hot. Congrats! Of course, we all know that marrying her is just an attempt to pass on your selfish genes to other fortuitously organized clumps of matter, so in turn they can do the same. Ain’t life grand in a blind watchmaker universe? Spread the word... shaner74
Joseph, I was trying to make it easy :-) Jerry, great post and excellent points. Patrick, thank you for the new data. Atom, your fiancee is absolutely beautiful. tribune7
Congratulations atom – do we get a picture of you? Acquiesce
aww...they cut out my photo! Pic of my gorgeous Luz here Atom Atom
GEM, Thank you for the advice. My beautiful fiancee is Ms. Luz Maria: We're marrying Sept. 7th! I read your most current comments on this thread and I think you are (as usual) making valid points, but they are not the points that speak to what PO is arguing (from my understanding of it). I would drop the Caputo issue (I think enough has been said about it on both sides) and now focus on the more interesting question of how to estimate the islands of functionality (BacFlag specific) in configuration space, which you have done before. I'd break that down in simple terms for the rest of us, as it is relevant to what is being discussed. Thanks for all the insight you all have shared. Atom Atom
PS
Incidentally, a similar problem appears in Prof Behe’s new book The edge of Evolution where the probability 1 in 10^20 plays a prominent role, and we can never get such an estimate in any population smaller than 10^20 individuals
I don't see this as a problem. If we can only expect one new protein-protein binding site per 10^20 replication events, then logically we'd expect 1 or less per any group of events less than 10^20. Behe's point is that when you need 100 new binding sites, but only have 10^6 organisms to work with, your theory is untenable. Atom
Hey PO, Thanks. :)
It seems that one problem with your approach is that in applying the EF we have to deal with such extremely small probabilities that we cannot have any hope of estimating them empirically. You mention 2 in 10,000 but the UPB is 10^-180 (by PaV’s account).
Here is the basic idea. We take all our databases of sequenced genomes and break them up into distinct N-length sections. (With the billions of basepairs in the databases, we have a large sample set.) Then we see how many different types of bi-directional, motility devices are represented by those sequences. (We can estimate this by saying, for example, "Humans have no BF type of structures. Therefore all M bases of human DNA, all those sequences, are in our negative set.") In this way we get one estimate of the sparseness of configuration space coding for the type of device we want using N bases of DNA. (It may not reach 1 in 10^150, but we will still be making progress in our estimates.) We can also see how many configurations N proteins long create a functional sub-part of the BacFlag. For example, every BF needs a "motor" component. In the BF, this is coded for by a given number of proteins. Using that number, see how many different configurations also lead to a functional motor. (My guess, none to very few.) Repeat this for the cell membrane rings, etc, until you have estimates on the sub-components. Then you can estimate for the entire structure. Again, these are real research problems and we don't have the estimates yet, but I don't see any logical problem to doing so. Atom
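A rough sketch of the sampling procedure Atom outlines. Everything named here is hypothetical scaffolding: codes_for_motility_device stands in for a functional test we do not actually have, and the sequences would have to come from real genome databases:

```python
def estimate_sparseness(sequences, N, codes_for_motility_device):
    """Estimate the fraction of N-base sections coding for the target kind of device."""
    hits = 0
    sections = 0
    for seq in sequences:
        # break each genome into distinct, non-overlapping N-length sections
        for start in range(0, len(seq) - N + 1, N):
            sections += 1
            if codes_for_motility_device(seq[start:start + N]):
                hits += 1
    # PO's caveat applies: with M sections examined, the smallest nonzero
    # estimate this can ever return is 1/M, nowhere near 10^-150
    return hits / sections if sections else None
```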
Digression Kairosfocus, You write very well. I'd love to read something by you that does not pertain to our discussion here at UD. Have you written any essays, short stories, opinion pieces, anything? olofsson
Joseph [218], A sharp observation and not much to argue with, really. Let me say a few things though. As the filter is probabilistic, we need to have the probability space decided before applying it. The set of all probability measures on this space is then the set of all chance hypotheses and testing is straightforward (theoretically, not practically). In this sense, there should be no issue how to "sweep the field." However, new knowledge may lead to a change of probability space and conclusions may be revised, nothing strange about that as both you and Dr D point out. But I think it is somewhat inconsistent to claim that a chance hypothesis that may have been missed must be formulated by the "design skeptic" when you have already said that it is not a requirement of logic that a rejected hypothesis be superseded. olofsson
Atom [217], Welcome back and congratulations! As I don't have any expertise in biology, I don't want to speculate too much. It seems that one problem with your approach is that in applying the EF we have to deal with such extremely small probabilities that we cannot have any hope of estimating them empirically. You mention 2 in 10,000 but the UPB is 10^-180 (by PaV's account). [Incidentally, a similar problem appears in Prof Behe's new book The edge of Evolution where the probability 1 in 10^20 plays a prominent role, and we can never get such an estimate in any population smaller than 10^20 individuals (note that Behe talks about probabilities estimated from data, not calculated from some probability model).] I see that some others have replied as well and they have more interesting things to say. olofsson
RE: Kairosfocus [220], 8. ...if he chooses to ask them...? Interesting that Mr K does not want to mention that I told him that I have asked the editors to let Dr Dembski publish a rebuttal...but of course, that little fact doesn't fit well with Mr K's overall characterization of me so why would he mention it? As for all the rest, I understand that Mr Kairosfocus' main interest is to "win the debate." With all those references to Plato, he does. Congratulations! As for the facts regarding my criticism of the filter, I think most followers of this thread can make up their own minds by now. I have decided not to go another round trying to explain Bayesianism, hypothesis testing, or the Caputo example. Let me finally go back to the head of the thread. The reason that I posted my article in the first place was that Dr D mentions that Shallit has no expertise in probability. Thus, I assume such expertise is valued by Dr D (although none of the named supporters (Lennox, Koppel, Tipler, Davies) has such expertise either). Whether formal expertise is relevant or not can always be debated but regardless, as I have expertise in probability and have formulated a criticism of the explanatory filter based on that expertise, I thought it might be of interest to post it here at UD. olofsson
kairosfocus,
Not at all, I am pointing out that the flagellum is a specific type of motility nano-technology and that it is on the experimental evidence and common-sense, locally isolated in the configuration space. Nor –- given, e.g. the 30 unique proteins involved — is, say, a cilium [which is itself complex and fine-tuned] easily convertible into a flagellum, if you have that sort of stepping stones co-optation in mind.
Actually, those "30 unique proteins" are an old data point. http://www.nature.com/nrmicro/journal/v4/n10/fig_tab/nrmicro1493_T1.html The flagellum consists of 42 proteins. 23 proteins are "thought to be" indispensable in modern flagella. Out of those 23, 2 are unique. Otherwise 15 other proteins are unique. So that's 17 unique proteins with no known homologs. So in the last couple years 13 additional homologs have been found. But before accepting those numbers note the sequence similarities. 14 of these homologs were found by BLASTing on non-default settings according to Matzke. Whether that should be considered acceptable I can't say. So perhaps it's debatable exactly how homologous/unique some of these proteins truly are (never mind Behe's work on protein binding sites). But despite any bias in determining homology citing "30 unique proteins" is likely no longer correct. And unless I'm misunderstanding something this chart doesn't include all of the controlling mechanisms separate from the flagellum itself. Patrick
Tribune7 I have suggested another way to approach the problem statistically that does not involve motility. Namely, protein on protein binding. Only a few proteins can ever bind with a specific protein. We have some estimate of this from research. In the flagellum we have 20+ different proteins that bind in a specific sequence and at least one that binds with itself to form a chain. Given the estimate of proteins in general binding with each other and the number of proteins in life, we can probably form some probability estimates and then use statistics to test a hypothesis. This may be naive but it seems that the flagellum is rare because it involves several instances of protein binding. What other structures exist that involve protein binding? There must be thousands of them, but in each, how many proteins are interchangeable with each other? Is it common for a protein to bind with many other different proteins, or with only a few? If it is a few, then this might indicate an approach to take. If protein bindings are generic like tinker toys then it would not be a valid approach. Then there is always the problem that Joseph brought up of how the instructions in the DNA arose to match the protein bindings. The DNA would have to come first but how did it come about with the specific nucleotides? Oh yes, by random mutation of the genome just happening to create proteins that bind with each other with the necessary switches to create an engine that is more efficient than anything man can build. I would be interested in anyone who has some thoughts on the problem of protein on protein binding and how rare or common it is. jerry
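A toy version of the calculation jerry is suggesting, with the per-interface binding probability p left as the unknown he identifies; the values below are placeholders, not data:

```python
interfaces = 20   # roughly the number of distinct specific bindings cited above

# placeholder values for the unknown chance that one specific protein-protein
# interface forms by chance, treated as independent across interfaces
for p in (0.1, 0.01, 0.001):
    print(f"p = {p}: chance of all {interfaces} specific bindings ~ {p ** interfaces:.1e}")
```

If bindings were generic, like tinker toys, p would be near 1 and the product would stay large; if each specific binding is rare, the product collapses quickly, which is the force of jerry's question.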
A few thoughts: 1) All CSI reaches the UPB 2) Not everything that reaches the UPB is CSI and
That set of proteins organized in a specific way causes a specific result — the existence of a device to move a bacterium. Tribune7
You have to also account for the command and control center, along with the communication channel(s) from that center to the structure. IOW a bac flag without any control is useless. Joseph
KF, PO, Jerry That set of proteins organized in a specific way causes a specific result -- the existence of a device to move a bacterium. Those proteins -- organized in any other fashion -- do not cause that result. Why would other means of motility even need to be considered as part of a probability calculation? If you have a code that requires a key of 50 characters (or 30 characters) to unlock, why would you bother considering the possibility that a different arrangement might unlock another lock in calculating the probability of the key fitting the code by chance and inferring design? And, especially, why bother considering ways of providing security other than a code and key? KF -- fair point on Ann's fashion sense. tribune7
6] PO, 214: Bayesianism again: Onlookers [and Prof PO], kindly note my latest remarks under points 12 and 13 in post 209 above. A comparison with 214 will underscore just how apt the points in 209 still are – and how Prof PO is still changing the subject and tilting at a strawman. I am noting that Prof PO cited Sober without balance as if he has the last word on the matter [all the way back to posts 20 – 21] and that in his 2007 paper, he raised a criticism on the Caputo case that comes out of the Bayesian playbook; which WD answered -- better, anticipated -- long since, in 2005. Cf. WD's remarks on expanding the RR to the point of meaninglessness, on p. 4 as I cited in post 180, point 4 and have again had to comment on in points 4 and 5 just above. 7] Jesus Tomb: I observe that in his 2005 paper, WD notes in effect that Bayesian reasoning has its place in contexts where the relevant probabilities and alternatives can be properly estimated. The Talpiot tomb case is one such. For that matter, I have had occasion to use the Bayesian type approach in estimating the conditional probability of HIV given general pop vs given declared homosexual, relative to CDC reported statistics. Again, sadly, this is a changing of the subject from the issue of a specific flawed critique used by contemporary Bayesians, to the question of the general utility of Bayesian reasoning. Both Fisher and Bayes have their uses and limitations. Caputo is a case where Fisher succeeds. Talpiot, one where Bayes works well enough. 8] PO, re 216: I don’t think people here know the background to this comment; it was communicated in an email to you. Actually, I had in mind the now notorious case of Behe, who time and again has been refused the reasonable right of reply to blatantly unfair and inaccurate critiques, on the most flimsy of excuses. I will be very positively surprised if the Journal that publishes your critique will ever give WD even 20% of the space to respond if he chooses to ask them -- indeed, on the evidence of what I have seen, I would be positively surprised if they will accept and publish a response at all [even a simple short letter]. And if they do, it's odds on that there will be a snide editorial comment or something else of like ilk to undermine the point without any access to corrective response. This is Plato's Cave . . . 9] Atom, 217: due to marriage preparations . . . Congratulations, old boy! [Who is the sacrificial “lamb”? Best pre-wedding advice I ever had: once she says yes to the ring, remember the W-day is her big day, so say yes until the parson/priest says he pronounces you man and wife.] When you have a moment, kindly let me know what you think on the balance on the merits, now that I have expanded my points and now you can see onward interactions. 10] Atom, 217: take M random samples of N bases of DNA, each different, and see how many of them result in a working motility device capable of bi-directional movement strong enough to overcome Brownian motion . . . This is of course equivalent to the point that we are looking at islands of function in a config space. Add to that, that the space in question has in it something like 50 * 300 * 3 (i.e., ~45,000) 4-state DNA elements, which yields a situation that makes bio-functional states rather sparse. Similarly, we empirically observe only a very few cell-level motility devices, and that they are locally fine-tuned, i.e. knockouts of components remove functionality. 
[Over the past few centuries we have had occasion to observe a very large number of generations indeed, so if body-plan level evolution were happening, we should see it in the microbe world. We do not, and indeed, Behe's edge is telling on the limitations of RM + NS even under the serious pressure of drugs.] 11] Joseph, 218: we don’t have to rule out all possibilities to arrive at a design inference . . . . To eliminate everything except design is like asking for proof positive. And that is not how science is conducted. {sarcasm}Tut tut, you haven't learned your Darwin, then: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” {/sarcasm} In short, since when was Darwinism ever properly to be termed “science”? GEM of TKI kairosfocus
H'mm, still going strong: I will take up a few points . . . 1] Trib, 211: I think the reason why Miss Coulter is hated isn’t because she is blunt or rude but because she is effective. You have a point, but I am still concerned about her tone (and even on some aspects of her dress). 2] PO, 212: this is not the correct thing to do within the EF paradigm. You now argue that we should compute the probability of a single outcome, not the rejection region/specification that this outcome belongs to. Not at all, I am pointing out that the flagellum is a specific type of motility nano-technology and that it is, on the experimental evidence and common sense, locally isolated in the configuration space. Nor -- given, e.g. the 30 unique proteins involved -- is, say, a cilium [which is itself complex and fine-tuned] easily convertible into a flagellum, if you have that sort of stepping stones co-optation in mind. So, the evo mat paradigm is deeply challenged to show, specifically, how it could get to such a device of such astonishing complexity and functionally defined specification. (In short, you are here trying to broaden the RR to the point of meaninglessness again; but, again, it fails. Just as it failed in Caputo.) 3] PO, 212: any particular sequence in the Caputo case has the same 1-in-a-trillion probability so by just looking at the “particular empirically observed” outcome, we could draw whatever conclusion that matches the outcome and claim a 1-in-1-trillion probability. Yup, and so we compute that a result at least as extreme as the 40:1 split has probability about 1 in 50 billion, and this is qualitatively different from a case where 22:19 or more extreme gives 38%. The point where your attempted analogy breaks down is that we are not dealing with configurations along a simple set of coin outcomes for 41 tosses or the like, but the functionality of an acid ion-powered nanotech outboard motor comprising multiple integrated and interacting but distinct components, as we can see at the head of this web page. That well-observed phenomenon invokes a multidimensional configuration space with the question, how do we get to such an entity from a generic bacterium -- i.e. how do we get to body plan level evolution by RM + NS or the like? -- with special reference to the required DNA information. For that the EF sees: [1] plainly contingent so not NR. [2] Extremely complex [the DNA sequences alone are enough to see that], so beyond UPB. [3] Recognisably functionally -- as opposed to merely statistically -- specific, so specified. CSI in short, and [4] the only empirically confirmed source of CSI is agents, so we have direct knowledge that is relevant too. [BTW, as noted previously, I happen to be dyslexic so am dependent on spell checks to spot typos, which they sometimes miss. Sorry on that.] 4] PO, 213: let us take 30 Ds instead. Or 28. Or whatever you want. Or let’s put it this way: How would you explain the need for rejection regions, using the Caputo case as an illustration? First, it is you, sir, with all due respect, who has chosen as your example an RR that starts at the very next config to those at the centre of the distribution, as if there is no objective basis for selecting RRs in the context of the sort of case Caputo is. That unjustified RR expansionism is what I have repeatedly objected to, as it in effect substitutes an unreasonable, expanded RR for a reasonable one, relative to available probabilistic resources. 
And, as to rationalising RR's, onlookers, I have in fact done so repeatedly, as has WD in his article, based on the now so often mentioned issue of adequacy of probabilistic resources. Cf. my chart, stepladder and darts example from 68 above. The point is that it is feasible to hit a target by chance that takes up 38% of the target area within a reasonable span of throws, but not a “target” that takes up only one 50-billionth of the target area. (As a statistician who has doubtless solved his fair share of “chance or intent” cases using 5% or 1% rejection regions, Prof PO knows this and has routinely used it. His challenge is thus plainly rhetorical, not substantial.) 5] PO, 213: the 22D sequence . . . As I have had repeated occasion to link and cite above, e.g. cf. point 4 in 180 above, in your 2007 paper, you glided straight from the 1 in 50 bn case to the 2 in 5 case, without referencing the issue of probabilistic resources that marks the proper distinction between the two -- and the issue of probabilistic resources is stressed by WD in his 2005 paper, p. 4, which also appears in point 4 of comment-post 180. My objection to the rhetoric you used in your 2007 paper is precisely that, as 68 shows, I understand full well the difference between a 1 in 50 bn chance and a 2 in 5 one. So, sadly, we see here a second level of [I still believe inadvertent] strawman, misrepresenting me and then knocking over that strawman with the sort of language one reserves for a stubborn student who has ability but will not use it. That's not cricket, but it does seem par for the course for critiques of ID. WD is doubtless all too familiar with this, on the strength of the post at the head of the thread. . . . kairosfocus
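For onlookers who want to check the two figures being contrasted here, a minimal Python sketch (illustrative only, assuming 41 independent, fair D/R picks as in the Caputo discussion):

```python
from math import comb

total = 2 ** 41  # all equally likely D/R sequences of length 41

# Rejection region at the extreme: 40 or more Ds out of 41
p_40_up = sum(comb(41, k) for k in range(40, 42)) / total
# The "expanded" region: 22 or more Ds out of 41
p_22_up = sum(comb(41, k) for k in range(22, 42)) / total

print(1 / p_40_up)  # about 5.2e10, i.e. roughly 1 in 50 billion
print(p_22_up)      # about 0.38, i.e. the 38% figure
```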
And we don’t have to rule out all possibilities to arrive at a design inference.
The purpose of the filter is to rule out all chance hypotheses (“sweeping the field” in Dr D’s words).-PO
To the best of our abilities. That is why it is called a design inference. To eliminate everything except design is like asking for proof positive. And that is not how science is conducted.
He then backpedals and says that this can’t be done with absolute certainty and shifts the burden to the “design skeptic,” although he also claims that logic does not require that an eliminated hypothesis be superseded.-PO
Why we call it the design inference: pg. 91 of "The Design Revolution":
”The prospect that further knowledge will upset a design inference poses a risk for the Explanatory Filter. But it is a risk endemic to all of scientific inquiry. Indeed, it merely restates the problem of induction, namely, that we may be wrong about the regularities (be they probabilistic or necessitarian) which operated in the past and apply in the present.”
Joseph
To all, Sorry I have not been following this thread in detail due to marriage preparations and work deadlines, but I wanted to throw a thought into the mix. Forgive me if this has been brought up already; as I mentioned, I haven't had time to read the entire thread. Prof. O, You mention we have no way of figuring out the rejection region of the flagellum, thus the EF cannot be applied. You then asked for ideas as to how to approximate it. The BacFlag (BF henceforth) is composed of a number of proteins, coded for by specific genes, and thus, by N bases of DNA. Take that stretch of DNA that results in a working BacFlag and call it "Sequence 1". Now, take M random samples of N bases of DNA, each different, and see how many of them result in a working motility device capable of bi-directional movement strong enough to overcome Brownian motion. For example, we can find stretches of DNA N bases long in GENBANK, and see how many are capable of producing a motility device that satisfies our constraints. If, out of 10,000 such sequences, only 2 produce an equivalent device, then we will begin to have a rough estimate on our rejection region, which would show that sequences that code for such devices are relatively rare. Anyone see a problem with this approach? (I'm especially interested in hearing what GEM and PO think.) Atom
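A minimal sketch of the bookkeeping Atom is proposing. The predicate that decides whether a sequence yields a working motility device is the hard part and is left as a hypothetical placeholder here; in practice it would have to come from GenBank annotation or wet-lab work, not from a few lines of code:

```python
import random

BASES = "ACGT"

def yields_working_motility_device(seq):
    # Hypothetical placeholder: deciding this requires biological data
    # (e.g. GenBank annotations or experiments), not Python.
    raise NotImplementedError

def estimate_hit_rate(n_bases, m_samples):
    """Fraction of M random N-base sequences that pass the functional test."""
    hits = 0
    for _ in range(m_samples):
        seq = "".join(random.choice(BASES) for _ in range(n_bases))
        if yields_working_motility_device(seq):
            hits += 1
    return hits / m_samples  # e.g. 2 hits out of 10,000 samples gives 0.0002
```

Atom's GENBANK variant would replace the random sequences with real N-base stretches sampled from the database; either way the estimate is the same kind of frequency count.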
Mr Kairosfocus [210], Let’s just say I would be positively surprised if the editors of the journal in which Prof PO’s hit piece will shortly appear, will then allow WD reasonable room for a rebuttal on the merits. I don't think people here know the background to this comment; it was communicated in an email to you. olofsson
D. Have you read Dr D's essay "The Jesus Tomb Math"? In it, Dr D uses Bayesian reasoning in section 7. Must be very disturbing to you. Or, it could help you understand that statistical methods are not political or religious. olofsson
Kairosfocus [209], C. About Bayesianism. The title of Dr D's essay is "Design by Elimination vs Design by Comparison" and that is precisely the difference between the Fisherian and the Bayesian approach (as Dr D points out, there is also a third approach, the likelihood approach, but it doesn't matter for the elimination/comparison discussion). On page 6, he introduces the Bayesian approach: When the Bayesian approach tries to adjudicate between chance and design hypotheses, it treats both chance and design hypotheses as having prior probabilities and as conferring probabilities on outcomes and events. There are those who have criticized the EF from this angle, for example Elliott Sober, but I don't. Have you even read his criticism? I have. And I have read Dr D's essay, and none of it is relevant to my criticism, regardless of how often you repeat that I use tricks from the "Bayesian playbook," keep referring to papers and subjects you apparently are not able to understand, and accuse me of rhetoric. Where in my article do I advocate the use of prior probabilities? There are cases where I would argue for a Bayesian approach, for example when it comes to evidence in court trials, but for design inference I don't. If I did, I would put forth arguments for it, but I believe that, for design inference, Bayesian inference is useless and only an eliminative approach is possible. You say, about the 22D sequence, that it would as per PO's argument "count" against the null hyp? No, it would not, because the rejection region has too high a probability; how can you fail to understand this over and over, regardless of how many times I explain it??? As for your continuing references to page 4 in Dr D's essay, there is nothing about Bayesian objections on that page!!! Dr D is there attempting to "firm up" the Fisherian approach and discusses Bayesianism (prior probabilities etc, remember?) on page 6 onward. Please, Mr Kairosfocus, why don't you send an email to Dr D and ask him to explain to you privately the distinction between Bayes and Fisher and some of the other issues you have problems grasping? Please do, it will be very helpful. olofsson
Kairosfocus [209], B. About the 22 Ds and 38%. To free your mind of the apparent obsession with these numbers, let us take 30 Ds instead. Or 28. Or whatever you want. Or let's put it this way: How would you explain the need for rejection regions, using the Caputo case as an illustration? You say, about the 22D sequence, that it would as per PO's argument "count" against the null hyp? No, it would not, because the rejection region has too high a probability. How can you fail to understand this over and over, regardless of how many times I explain it??? olofsson
Kairosfocus [209], A. Nope, we are looking at a specific motility device and asking how could this particular empirically observed... Yes, and I argue that this is not the correct thing to do within the EF paradigm. You now argue that we should compute the probability of a single outcome, not the rejection region/specification that this outcome belongs to. In my article, I followed Dr D in using the Caputo example to illustrate the need for rejection regions/specifications. In the Caputo example, we do not compute the probability of the particular empirically observed outcome (which is 1 in a trillion) but of the rejection region/specification (which is 1 in 50 billion). Why? Because any particular sequence in the Caputo case has the same 1-in-a-trillion probability, so by just looking at the "particular empirically observed" outcome, we could draw whatever conclusion that matches the outcome and claim a 1-in-1-trillion probability. However, when the rejection region/specification is formed, the probability may not be low anymore (in Dr D's notation, it is the probability of E*, not of E, that is relevant). As I have pointed out many times, if you put Caputo and the flagellum side by side, you notice that the specification is absent in the latter (see my post 106, near the end). We can identify E in both examples, but in the flagellum, there is no E*. I have asked you before to no avail. What is E* in the flagellum example? olofsson
I have to stick up for Ann Coulter. With debate raging about how she treats evolution, her references to the mind-numbingly stupid U.S. court decisions that occurred in the 1960s and have become entrenched in law are overlooked. Now, it might have been best if wise, reasoned and detached persons in dignified halls came to the conclusion that civil rights also meant protection from citizen-predators as well as government ones, and with detachment and reason explained why Mapp, Miranda etc. were real bad, and judges then became enlightened and overturned them, but it's been 40-plus years and that hasn't happened, so I guess it's about time someone started addressing the issue sans politeness. Don't forget that evolution became popularized not via scientists but via polemicists like Huxley and Mencken, and via cultural authorities like G B Shaw and Wells. I think the reason why Miss Coulter is hated isn't because she is blunt or rude but because she is effective. If she were caught lying (a la Michael Moore) that would hurt her, but she hasn't been despite the fact that her opponents certainly scrutinize her work. tribune7
PS: Why is it that an afterthought always strikes just after you hit "submit"? On Coulter's Godless. At the time, there was a hot discussion on the Evangelical Outpost on whether this was all a shadow show by a fake conservative whose tone and behaviour were decidedly not up to the standard of the NT. My own position was and is that we are all finite, fallible, fallen, and struggling at our best to avoid being ill-willed and ill-tempered. Ms Coulter's tone and ill-considered sound bites gave opportunity for clever spin doctors in the secularist evolutionary materialist progressivist camp -- they never say what they are really making progress towards! -- to divert the conversation away from substance to trivia and personalities. On the ID issue, there is a noticeable jump up in the quality of substance and tone of the book, and she admits to having had help on that chapter. More broadly, all of this thread above shows just how well founded are WD's complaints in the original post on how he has been mishandled in the general press and the professional journals. Let's just say I would be positively surprised if the editors of the journal in which Prof PO's hit piece will shortly appear, will then allow WD reasonable room for a rebuttal on the merits. So, if you want to see how critics fare on an even playing field [poorly indeed, on the merits], you will have to keep an eye on UD, folks. Prof PO, thanks for being willing to at least engage on such a playing field. As Michaels 7 and DK show, onlookers at UD are perfectly willing and able to see for themselves what is going on on the merits as opposed to the rhetoric. (That is a big part of why I am still taking time to respond on points.) kairosfocus
10] PO, 195: part of my point is that the flagellum as we know it is one example of a motility device in bacteria, namely, the one we have observed (in E. coli). Now, it is of course possible that some other such device would have evolved instead, something that would look different. Nope, we are looking at a specific motility device and asking how could this particular empirically observed acid ion-powered outboard motor have originated by unintelligent [i.e. chance and/or necessity only] means? In short, is this device credibly beyond Behe's edge? ANS 1: on the empirical evidence and inference to best, provisional scientific explanation in light of available probabilistic resources relative to the config space and Behe's observed edge etc etc, yup. ANS 2, on CD's shift: it does not amount to a mathematical/logical demonstration, so no. (We can make up just-so stories and look at derivative TTSS structures and imagine how the flagellum can originate through gradual RM + NS and also through co-optation, etc.) COMMENT: Whose report do you believe, why? Also, the underlying physics of phase spaces and random search -- statistical thermodynamics and related information theory [cf. my always linked, esp. Appendix 1] -- is foundational to a lot of modern physics, so my approach -- though a very crude and simple first approximation -- is not exactly a novelty. 11] PO, 195: These possible but unknown scenarios are, in my mind, difficult to incorporate. CD's burden-shift again. 12] PO, 197: Mr Kairosfocus kept going on and on about my “expanding” the rejection region (I used 22 Ds as an example, might as well have used 29 or 33 or…, anyway, most other people seem to have gotten it and even tried to help me explain to him). H'mm: why is it that an RR of 1 in 50 bn at a far edge of the curve for the Caputo case [only one further result is possible: 41 Ds on 41 tries instead of "only" 40] is put on the same level as one where 38% of the curve [just one step away from the middle of the curve: the 21/20 splits to D or to R come in at about 12% probability each, and the 38% follows by subtraction -- I am too lazy to work out 41!/(21!20!) through Stirling's approximation just now] would as per PO's argument "count" against the null hyp? Is that not plainly an expansion of the curve's RR from a region at an extreme that is out of reasonable reach of probabilistic resources to one where the RR is now so easy to reach that it would be meaningless? And, is that not precisely the Bayesian objection strategy that WD answered on p. 4 as excerpted above in 179 - 181? So, PO's rhetorical strategy is properly to be called dismissing an objection without addressing the evidence that has been repeatedly put up: refusing to address the issues and instead making a dismissive reference to an individual and putting up strawmen. Currently of course, apart from such references, PO refuses to address anything I have put up, including of course the telling evidence of the 1996 paper. Cf. my critiques from 20 – 21 on, and on the Sokal affair, note how he has had to agree with me that Sokal betrayed a trust by putting up nonsense in the face of an experimental publication whose editors trusted him to be playing straight. There is utterly no comparison between Coulter's chapter [the redeeming virtue in a book very bad in tone, as I communicated to her and her online editor] and Sokal's dirty trick. 
BTW, on many points, tart tongue aside, Ms Coulter actually makes some sense, but the sense has been very conveniently lost/dismissed in the shouting match. But then, my historical model for the current trends in the US and the wider West is Athens, 430 – 400 BC: how democracies die -- by decadence, manipulative rhetoric and manufactured dissensions and distractions, leading to folly in the face of grave threats. Suicide via the manipulative and distracting shadow-shows of Plato's Cave, in short. 13] PO, 197: my criticism is not Bayesian . . . And of course what I have persistently pointed out (including just now) is that Prof PO is using some of the same critiques that the Bayesians, starting with Sober, use -- critiques that WD answered on the merits in his 2005 paper as linked and excerpted. A rebuttal of which one finds not the faintest trace in Prof PO's 2007 paper that has been duly accepted for publication in one of the vaunted peer-reviewed journals of our age. (Onlookers: See why I keep having to point out that such a misrepresentation of WD, much less myself now, is strawmannish? Nor is pointing out such a pattern "pointless.") 14] Sal and Pat: Thanks for the intervention. Good points, but this is long now . . . GEM of TKI kairosfocus
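Since the 41!/(21!20!) term is left unevaluated above, here is a quick check of the ~12% and ~38% figures, exactly and via Stirling's approximation (a sketch added for illustration, not anything from the thread):

```python
from math import comb, log, pi, exp

# Exact: P(exactly 21 Ds in 41 fair picks)
exact = comb(41, 21) / 2 ** 41          # about 0.122, i.e. the ~12% cited

# Same term via Stirling's approximation: ln n! ~ n ln n - n + 0.5 ln(2 pi n)
def ln_fact(n):
    return n * log(n) - n + 0.5 * log(2 * pi * n)

approx = exp(ln_fact(41) - ln_fact(21) - ln_fact(20) - 41 * log(2))

# By symmetry (41 is odd) P(21 or more Ds) = 0.5, so P(22 or more Ds) = 0.5 - P(exactly 21)
p_22_up = 0.5 - exact                   # about 0.378, i.e. the ~38% figure

print(exact, approx, p_22_up)
```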
6] PO, 191: In terms of the filter, which is what we concern ourselves with here, you cannot draw that conclusion as it has not been successfully applied to the flagellum. Here comes that same burden of proof shift again. On a comparative difficulties basis, applying inference to best explanation reasoning, design is based on a KNOWN, directly observed source of FSCI such as is plainly and calculably observed in the flagellum. RM + NS, after 60 or so years of the NDT synthesis, is running into empirical difficulties beyond the threshold of a few mutations [cf Behe on the Malaria parasite]. So, we are not just eliminating based on improbability and particular models here [and on this of course one can dispute WD's particular models at any given time, but as TBO showed back in the early 1980s, the underlying issue -- the existence, significance of and need to credibly account for complex specified information in the case of esp. life systems relative to known causal forces: chance, necessity, agency -- is not critically dependent on that], but instead on what we do know about FSCI and its directly observed origins. 7] DK, 192 – 3: DNA, genes, amino acid residues and monomers vs polymers My calculation is relative to two different informational polymers: proteins, which average out at ~ 300 amino acid residues per protein, at of course 20 possible acids from the ~ 80 or so that are usually observed [as a rule . . .], and DNA. “Monomer” refers to the individual element in the chain, e.g. GCAT for DNA. [I come at this from a physicist's view and taking in the pioneering work of TBO at the turn of the 80's. Bradley, the B, is a polymer specialist.] Then, in the DNA molecule, each amino acid is coded for by a three- “letter” codon [from the four-state system: GCAT]. So, for 30 unique proteins, we go 30 * 300 * 3 = 27,000 positions in the DNA chain. For the 50 or so overall proteins, we similarly get 45,000. A chain of 27k or 45 k 4-state elements, has 4^27k or 4^45k cells , respectively, in the accessible configuration space to be searched. As WD discusses in his latest paper, here, this search-space is best cut down by feeding in information from outside, which is of course a known artifact of intelligent agents. 8] Patrick, 194: . . . an argument about homologies of flagellar proteins, which usually turns into drawing a chart of . . . homologies and devolves into a game of wishful connect the dots without demonstrating how the various proteins can be formed, assembled and function as a motile device via unguided, purpose-less processes . . . . the worst offense is presenting the homologs–often ignoring the weakness of sequence similarity–by themselves as if they’re somehow irrefutable proof of Darwinism even though ID-compatible hypotheses would expect designer(s) reuse and/or homology due to front-loading. That burden of proof shift again . . . 9] Patrick, 194: The whole point of Behe’s new book was to try and find experimental evidence for exactly what Darwinian mechanisms are capable of . . . this “edge” is an estimate based upon a limited set of data which in turn “might” mean the estimated “edge” is far less than the maximum capable by Darwinian mechanisms . . . So, “obviously,” Behe “fails”: Cf. CD: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” . . . kairosfocus
Hi Dave, Patrick, PO, Michaels, DK, PaV, Sal et al, I see the thread still goes strong, at about 20 posts per day or so. Greetings from the insomnia patrol, with, it seems, another tropical wave passing through -- great for the farmers, let's hope our friend to the south does not make too much fresh steam out of it. Okay, on points that strike me as key: 1] DS, 185: The bone of contention isn’t the size of the search space per se. It’s the probabilistic resources available to reduce it. RM+NS in theory can find a flagellum pattern . . . I agree, in broadest terms -- provided stepping stones exist that are always functional, RM + NS can find a direct or indirect path to a flagellum. [That, BTW, is why even Darwin's challenge that “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” improperly shifts the burden of proof and is selectively hyper-skeptical.] My remarks above -- and for that matter, WD's analysis of Caputo (and Fisher's theory of statistical inference) -- were in the context of available probabilistic resources, hence for instance my comment that we are dealing with needing to get to the flagellum on an earth of mass 6 * 10^24 kg in at most a few billion years. 2] DS, 185: If a credible series of reasonably small steps can’t be reconstructed it’s chalked up as a failure of imagination in forensic reconstruction of random changes rather than a failure of imagination in mechanisms that cause the changes. Correct, and reflecting that same improper shift in epistemology. (On empirical matters, we can only achieve warrant to moral certainty at best, and often only to the preponderance of the evidence. So, to demand demonstration beyond possible alternatives in a context that does not admit of such outright mathematical or logical proof, is to presume truth where it should be warranted relative to comparative difficulties across live option alternatives on factual adequacy, coherence and explanatory elegance vs being simplistic and/or merely after the fact ad hoc.) 3] DS, 186: Rather than assuming it can do anything physically possible Behe examines what it has actually done under observation in billions of trillions of rapid reproducers (malaria parasites) under intense selection pressure. Just as importantly he examines what rm+ns failed to accomplish under intense selection pressure . . . . Unless the empirical observations can somehow be impeached as wrong, incomplete, or atypical it strongly suggests random mutation plays only a small role in phylogenesis. That's why the critics are so angry. Behe is showing here that RM + NS is factually inadequate. Whilst, of course, we know that in all cases of actual observation, functionally specified, contingent, fine-tuned and often irreducibly complex entities are produced by intelligent agents. We have a known source of FSCI, going up against a suggested or assumed source, and the current direct empirical test on the latter is coming up short, real short. 4] Patrick, 189: if we were to make the design inference strong enough to be warranted in your opinion how many Darwinian pathways would need to be tested? Unfortunately, testing ALL of them isn’t likely to be a reasonable/reachable goal. Again, the Darwinian burden of proof shift mis-move surfaces. 5] PO, 190: we need to consider motility devices in general, not just the flagellum . . . As noted, Behe looked also at the cilium. 
But, that is not the root issue yet: the point is that a functionally specific, fine-tuned system comprising interacting and interlocking parts based on folding of linear proteins into 3-D shapes under various electrostatically derived and bonding forces, is just that, specific. It is not just that the flagellum is a means of moving, but that it is a particular means of moving based on a particular set of codes in DNA inclusive of some 30 unique proteins. Just asking what the config space for 30 such proteins is, on a crude estimate [30 * 300 * 3 = 27,000 four-state DNA positions, i.e. 4^27,000], puts us into the ball park of a space of some 10^16,256 cells, and even if we were to generously overestimate every life form that ever lived, by positing 10^500 samples of DNA, the flagellum state would be incredibly isolated in the relevant config space. That is, the probabilistic resources simply are not there even on the scale of a cosmos that is a lot larger than what we do observe, ~ 10^80 atoms, and the time from big bang to heat death. . . . kairosfocus
Scordova [200], How many can calculate Radon-Nikodym derivatives? I can, I can! :) Goodnight everybody! olofsson
PS. I have not studied Dr D's latest updates on the EF yet, but I am aware that they exist. olofsson
Patrick [199]: In your opinion how damaging would this critique be to ID; minor or major? Can you yourself think of any additions or workarounds to tighten up the EF? As a principle do you reject UPBs such as 10^–50 (Emile Borel) as being useful? If so, why? I'd say it would be fairly damaging to the EF, not necessarily to ID as such. I've pointed out in a few posts that I do not reject UPB's. Statisticians deal with similar problems all the time, as Dr Dembski explains in The Design Inference. I don't really have any good ideas on how to tighten up the filter; I think it is very hard. The EF is, in my mind, simply too ambitious, which rhymes well with your last paragraph. That doesn't mean it's useless, however; we have to remember that science is always a work in progress...ask Galileo! olofsson
PaV [202], Uh-oh, here we go again...but OK, PaV is a nice guy so I'll take it: "has made some kind of grudging concession that it 'might' be right." Actually...never mind; just read my posts on the UPB (162, 166, 176) and note that I have no objections, would even be satisfied with a larger probability, and even attempt to explain the logic behind the UPB. Ungrudgingly yours, PO olofsson
Patrick: "Not being a mathematician myself, let’s say for the sake of discussion your critique is 100% dead-on and Bill agrees with it. In your opinion how damaging would this critique be to ID; minor or major?" Patrick, I'm going to jump in here for a second. In the discussion that P.O. and I have had about the UPB, it seems to me that he has made some kind of grudging concession that it "might" be right. I consider the UPB portion of WD's argument to be unassailable---there just aren't enough probabilistic resources to explore the kinds of configuration spaces that biological systems entail. Having said that, though, there is a way in which, I believe, the argument can break down. It is along the lines that P.O. is arguing. However, as I pointed out in #169, if you rule out chance agencies such as RM+NS searching these kinds of configuration spaces (which WD's argument no doubt does), then you're left with the possibility that the "actual" configuration space, i.e., the one de-limited by natural forces, is much smaller, and then the argument becomes that this "actual" configuration space is so small that RM+NS is indeed able to search it out. But, if you make this your argument, then you are presented with the problem of explaining how it is that "Nature" has, so to speak, shrunk the configuration space. This, then, gets into fine-tuning, as I point out in #169. I suspect someone like Michael Denton ("Nature's Destiny") would say, yes, indeed, nature is actually fine-tuned to this degree. And, so, Denton might see evolution as "front-loaded" in the Big Bang. Well, this is almost what P.O. is arguing, suggesting that unless we really know the composition of the flagellum's configuration space, the appropriate statistical calculations cannot be properly employed since, after all, this kind of "fine-tuning" may be at play; therefore, an "elimination" technique, such as WD would employ, cannot get at such a contingency, and thus it is necessary to use the comparison method. But, as Hoyle says, this kind of de-limiting of what "might" be is entirely suggestive of a "super intellect". His words: ‘Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule.’ IOW, it's impossible to 'begin' with the blind forces of nature, and to conclude from them the properties of the carbon atom: the one doesn't flow from the other. He concludes: "A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature." PaV
SORRY, I meant to say, "One can refute Darwinian evolution on grounds outside of the EF such as Nachman's U-Paradox, Haldane's Dilemma, Invisibility of Redundancy, etc." scordova
One can refute Darwinian evolution on grounds outside of the EF, such as Nachman's U-Paradox, Haldane's Dilemma, Invisibility of Redundancy, etc. Edge of Evolution (EoE) is an excellent example of this, and in fact is more fundamental in that EoE supplies numbers for the EF. The EoE argument stands on its own against Darwinian evolution, but it can be used to help EF arguments. For the 4 years I've been a part of ID, I can't recall that I've strongly endorsed the Flagellum as an example. It does not mean I could not one day, but I've had a bit of a hard time with it. Behe's work on protein-protein binding is shoring up the argument, and maybe it's only a matter of time before the ID community tightens the noose around Matzke's boasts. I have little doubt Matzke is totally out to lunch. His logic was bogus, but he had a few good points.... If there are protein-protein binding interactions unique to the flagellum, Matzke's analysis is totally worthless, and actually a hindrance to scientific understanding. My view is let's not be too hasty to assert every pro-ID conclusion. Both Mike Gene and I think the flagellum and IC are real problems for mindless evolution, but if there are areas where the ID community can strengthen its arguments, by all means let these areas be put on the table. I can say Behe's EoE tied up a lot of loose ends for me personally compared to DBB. But the issue of coming up with good specifications is an important area of research, and Professor Olofsson perhaps has criticism worth seriously considering. I have also pointed Professor Olofsson to some of Bill's newer papers which I think will address at least some of his concerns. And let's be honest, how many participating in this discussion have read and understood Bill's latest? How many can calculate Radon-Nikodym derivatives? Or how many here understood Bill's exploration into Minimum Description Length (MDL) representation and its relevance to preclusion of post-dictive specification? These are relevant to Professor Olofsson's reasonable reservations. I've invited him to peruse the much improved and much more robust ID literature which is so specialized that I would wager most at UD are not even familiar with it. scordova
PO,
Sorry for not fully answering. I really don’t know enough biology to be able to answer intelligently. As I have mentioned a few times in this thread, I try to keep the focus on what my expertise is in. ...... However, if you have any interesting comments or critique of my filter criticism, I would be delighted to hear them and try to respond as well as I can.
Not being a mathematician myself, let's say for the sake of discussion your critique is 100% dead-on and Bill agrees with it. In your opinion how damaging would this critique be to ID; minor or major? Can you yourself think of any additions or workarounds to tighten up the EF? As a principle do you reject UPBs such as 10^–50 (Emile Borel) as being useful? If so, why? By the way, I don't expect design detection to ever be foolproof. After all, it should be possible to fool the EF. Let's say there is an arch designer that attempts to make garden architecture look "natural". Each arch is unique and customized. A series of stones is designed by shape to tightly interlock to form an arch, yet in appearance they appear "natural". If one of these stones is lost, we wouldn't be able to tell it was designed using the EF. Patrick
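For reference, a rough sketch of how the two bounds Patrick mentions are usually assembled: Borel's 10^-50 is simply quoted, while Dembski's 10^-150 universal probability bound is commonly derived from the particle count, Planck-time rate, and cosmic-lifetime figures cited elsewhere in this thread. The numbers below are the commonly quoted ones, used here only as illustration:

```python
# Back-of-envelope for the universal probability bound (illustrative figures)
particles = 10 ** 80   # elementary particles in the observable universe
rate      = 10 ** 45   # upper bound on state changes per second (Planck-time scale)
seconds   = 10 ** 25   # generous upper bound on the lifetime of the cosmos in seconds

upb_dembski = 1 / (particles * rate * seconds)  # 1e-150
upb_borel   = 1e-50                             # Borel's older, much looser bound

print(upb_dembski, upb_borel)
```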
All, Bill originally brought on more people to handle moderation because it was consuming too much of his time. He does read comments sometimes but he might not have noticed the interesting turn this thread took. Has anyone been emailing Bill about the content contained herein? Patrick
Hello Michaels7, I'm glad to hear that you enjoy the thread. I only used "pointless" in reference to my attempts to explain various things to Mr Kairos (publicly and privately), by no means to the entire thread. He, on the other hand, dismissed all of it as pointless (though, only on "one level"). No More Mr. Materialist Nice Guy? There's no Alice Cooper song with that name that I know of. Besides, if you read my article, you will see the context. Hit piece? I just had a bit of fun, just like Ms Coulter did in her book. I seriously doubt she would be insulted if she read it but if she does and is, I will apologize. Are you familiar with the Sokal Hoax? I thought it provided a nice context. Also, there was so much criticism of Ms Coulter at the time, to the effect that she was mean and evil, that I thought I would show that people do not need to hate her. Hate is neither healthy, nor productive. As for "stubborn" and "do not listen well," I think you would have to back that up a bit. If you read the entire thread, I have repeatedly explained my very narrow criticism of the Explanatory Filter, from the particular position of expertise that Dr Dembski in the original post claims is absent: probability. Mr Kairosfocus kept going on and on about my "expanding" the rejection region (I used 22 Ds as an example, might as well have used 29 or 33 or..., anyway, most other people seem to have gotten it and even tried to help me explain to him). He kept insisting that my criticism was "Bayesian," which it, very obviously to those with any knowledge, is not. I have tried to explain to him publicly here at UD, and privately in emails. It can also be learned from Dr Dembski's essay on Elimination vs Comparison (which Mr Kairosfocus, somewhat ironically, kept referring to). Actually, the title alone explains the difference. I don't know how closely Dr D himself follows this blog but he could easily step in and point out that my criticism is not Bayesian, regardless of how meritless he might deem it otherwise. Anyway, it is with regard to these issues that I found the exchange pointless. However, if you have any interesting comments or critique of my filter criticism, I would be delighted to hear them and try to respond as well as I can. olofsson
On a side note, this bit of history puts homologies in perspective:
There is yet another reason that the universality of the genetic code is not strong evidence for evolution. Simply put, the theory of evolution does not predict the genetic code to be universal (it does not, for that matter, predict the genetic code at all). In fact, leading evolutionists such as Francis Crick and Leslie Orgel are surprised that there aren’t multiple codes in nature. Consider how evolutionists would react if there were in fact multiple codes in nature. What if plants, animals, and bacteria all had different codes? Such a finding would not falsify evolution; rather, it would be incorporated into the theory. For if the code is arbitrary, why should there be just one? The blind process of evolution would explain why there are multiple codes. In fact, in 1979 certain minor variations in the code were found, and evolutionists believe, not surprisingly, that the variations were caused by the continuing evolution of the universal genetic code. Of course, it would not be a problem for such an explanation to be extended if it were the case that there were multiple codes. There is nothing wrong with a theory that is comfortable with different outcomes, but there is something wrong when one of those outcomes is then claimed as supporting evidence. If a theory can predict both A and not-A, then neither A nor not-A can be used as evidence for the theory. When it comes to the genetic code, evolution can accommodate a range of findings, but it cannot then use one of those findings as supporting evidence. (Hunter, 38.)
Personally I'm in favor of searching for homologies where they would be unexpected. Not just in "convergent evolution", of which there are many examples. Let's say we have a lower organism and a higher organism that share a homolog. But the creatures that are supposed to be in-between do not share this homolog. Now you could explain it away by saying this code "re-evolved", but I'd consider this scenario to be more compatible with front-loading/designer reuse. BTW, I'm not aware of such an example but it'd be an interesting data point if there were. Patrick
Patrick, Sorry for not fully answering. I really don't know enough biology to be able to answer intelligently. As I have mentioned a few times in this thread, I try to keep the focus on what my expertise is in. Anyway, part of my point is that the flagellum as we know it is one example of a motility device in bacteria, namely, the one we have observed (in E. coli). Now, it is of course possible that some other such device would have evolved instead, something that would look different. [Note: As we are testing the chance hypothesis, we need to assume that it is true and proceed from there.] If that were the case, we would instead test that device, its protein configurations and so on. These possible but unknown scenarios are, in my mind, difficult to incorporate. But I would certainly be interested to see an attempt along the lines you suggest. Everything doesn't have to be done at once; any partial progress (or lack thereof) is also of interest. One would have to get both IDers and Darwinists to agree to the program, though; otherwise it will be the usual "did not - did too" exchange. olofsson
PO,
That is pretty much one of my two objections, with the addition of “potential” to “direct” and “indirect” because, in my view, we need to consider motility devices in general, not just the flagellum.
Lest we retread ground once again...I'm assuming you're saying that as a starting basis for making an argument about homologies of flagellar proteins, which usually turns into drawing a chart of said homologies and devolves into a game of wishful connect the dots without demonstrating how the various proteins can be formed, assembled and function as a motile device via unguided, purpose-less processes. Worst case scenario is the argument devolves into the average PT personal attacks where people are smeared for stating in the past data about homologs that are now known not to be true due to current research (oh, and it's just as bad when Darwinists are smeared for doing the same). Then of course the worst offense is presenting the homologs--often ignoring the weakness of sequence similarity--by themselves as if they're somehow irrefutable proof of Darwinism even though ID-compatible hypotheses would expect designer(s) reuse and/or homology due to front-loading. Why do I say the design inference is currently the strongest explanation? We know what intelligent agencies are capable of coupled with the knowledge of what nature, operating freely and unguided, is capable of. The whole point of Behe's new book was to try and find experimental evidence for exactly what Darwinian mechanisms are capable of. On the other hand we have speculative pathway scenarios but so far the "edge of evolution" doesn't allow these models to be feasible. But this "edge" is an estimate based upon a limited set of data which in turn "might" mean the estimated "edge" is far less than the maximum capable by Darwinian mechanisms. If Darwinists would bother to do further experiments they may see if this "edge" could in reality be extended. Then if this new derived "edge" is compatible with these models then so be it (though I'll add the caveat that the "edge" might be better for Darwinism only in limited scenarios). In the meantime they're just assuming the "edge" allows for it. Even worse, unless I failed to notice the news, the very first detailed, testable (and potentially falsifiable) model is yet to be fully completed (I realize there are people working on producing one). But, yes, Darwinists should stop pretending they have the current strongest explanation. I'll fully acknowledge they're currently formulating a response in the form of continued research, new models, and such but the mere fact is that they're missing all the major parts to their explanation. This might change in the future, but it may not. BTW, you didn't answer my question. Would you find it acceptable if the current symbiotic/endosymbiotic/exogenous models (Margulis) and some of the more current endogenous models (Matzke, whoever) were tested? Are more needed? If so, justify this. EDIT: Edited for grammar but not potential stupidity on my behalf. Patrick
Sorry, kf, I misunderstood "monomer." Of course, you meant "amino acid." Daniel King
kf:
We have about 50 proteins at ~ 300 monomers each, in turn at 3 base pairs per DNA codon. So, we are looking at a DNA configurational space that we may crudely estimate at: 4^[50 x 300 x 3] = 4^45,000 ~ 5.01*10^27,092.
Your other numbers may be fine, but that 300 monomers each would not be individually coded in the DNA. One gene per protein would ordinarily suffice - if I understand what you're saying. Daniel King
Kairosfocus, thanks for your patience and continued efforts in illuminating the issues. I wanted to chime in and say this has been a great thread to follow. Olofsson, re: pointless. Maybe for you, but for others here, including myself, this has been a good thread. Please do not make assumptions for others. And as to insults, motivations and No More Mr. Nice Guy. Please do realize you lowered yourself to insults on the hit piece in Skeptical Inquirer. And since you don't want to be considered "nice" anymore, get over it. I'd recommend a change in your sloganeering - try No More Mr. Materialist Nice Guy. And you yourself are stubborn..., do not listen well, and frankly have formed your worldview. Michaels7
Patrick, That is pretty much one of my two objections, with the addition of "potential" to "direct" and "indirect" because, in my view, we need to consider motility devices in general, not just the flagellum. When you say that it's been "made quite clear" that the design inference is the "strongest explanation," I am sure that you are aware that there are many who disagree. In terms of the filter, which is what we concern ourselves with here, you cannot draw that conclusion as it has not been successfully applied to the flagellum. But there's more: even if it were, you could not really draw your conclusion because the filter infers design by elimination, not by comparison (which is the real issue of the chapter by Dr Dembski that has been cited in this thread many times by people who unfortunately don't understand it). olofsson
PO, Assuming I'm comprehending you correctly, your article essentially boils down to the objection that the design inference for biological machines using the EF is not 100% certified unless all Darwinian pathways--indirect and direct--are tested first. This objection has been made before on UD but yours is the most coherent version yet (I mean that as a compliment) since you've explained the reasoning behind it and it's not just a "gut feeling" objection like most Darwinists make. First off, I don't think it's ever going to be possible to have 100% certainty. It's been made quite clear that at this point the design inference is the strongest explanation BUT new evidence could overturn it; that's the nature of science. PO, now if we were to make the design inference strong enough to be warranted in your opinion how many Darwinian pathways would need to be tested? Unfortunately, testing ALL of them isn't likely to be a reasonable/reachable goal. Fortunately the task could be made easier if we reject pathway scenarios that cannot work from an engineering perspective (like numerous non-functioning/non-useful parts evolving toward complexity for no apparent reason) and only consider scenarios that are feasible. Now the scenarios being proposed for the flagellum so far do suffer from much wishful thinking but--I could be wrong--at least they appear to be "reasonable" (as in, while extremely difficult still in the realm of being a possibility). Also, if necessity is a factor for the flagellum (which is currently unknown) I would presume this would take the form of a Direct Darwinian pathway. But from what I've seen all focus has rightly been put on Indirect pathways since there currently aren't any reasonable Direct Pathways. Patrick
By the way, I hope nobody got the impression that I claim to have invented the "outboard motor" analogy for the flagellum. I don't know its origins but I got it from Dr Dembski's book No Free Lunch. olofsson
Joseph [168], I was explaining the rationale behind the UPB. Put the quote in its proper context. PO olofsson
kf (con't) I think Behe's "Edge of Evolution," or at least the first half (I'm up to chapter eight), is aimed straight at the probabilistic resources of RM+NS. Rather than assuming it can do anything physically possible, Behe examines what it has actually done under observation in billions of trillions of rapid reproducers (malaria parasites) under intense selection pressure. Just as importantly, he examines what rm+ns failed to accomplish under intense selection pressure. The information Behe brings to bear on bounding rm+ns performance, nucleotide accurate, in an astronomically large population, wasn't available until recently. Unless the empirical observations can somehow be impeached as wrong, incomplete, or atypical, it strongly suggests random mutation plays only a small role in phylogenesis. DaveScot
kf The bone of contention isn't the size of the search space per se. It's the probabilistic resources available to reduce it. RM+NS in theory can find a flagellum pattern in an otherwise impossibly large space of non-flagellum patterns. RM+NS is restricted in operation in that to have a reasonable chance of finding something there must be an incremental series of tiny steps that bring the pattern closer to a flagellum wherein each tiny step must (at the least) not be crippling to the intermediaries. That this series of steps exists and was traversed by RM+NS is taken as a matter of faith by Darwinists. Random mutation as the sole or only significant source of variation is taken as a given. It then follows that any change observed must be the result of random mutation. If a credible series of reasonably small steps can't be reconstructed it's chalked up as a failure of imagination in forensic reconstruction of random changes rather than a failure of imagination in mechanisms that cause the changes. DaveScot
PaV, Salvador, Dave, other moderators, Nick Matzke preens in his interview with Jason Rennie about how he has solved the Flagellum problem for Darwinian evolution. As he is off to Berkeley to get his PhD, he talks about how evolutionary biology researchers have come to him as the expert on the flagellum. Why not start a thread for comments on Nick's claims? The audio is available for all to listen to. And by the way Nick is a master of calling ID people creationists. He does it several times in the interview. If anyone knows the difference, Nick does. jerry
PPS: BTW, re PaV and PO on RR re flagellum, 177 - 178. The proper formulation of the issue here is in terms of CONFIGURATION SPACE, not rejection regions relative to statistical probability distributions. The finely tuned bio-functional state of the flagellum is so isolated therein that the probability of random search accessing it is minimal on the gamut of the observed cosmos. Let us take a cruder look than Mr Dembski does, which will bring out the underlying issues sufficiently. We have about 50 proteins at ~ 300 monomers each, in turn at 3 base pairs per DNA codon. So, we are looking at a DNA configurational space that we may crudely estimate at: 4^[50 x 300 x 3] = 4^45,000 ~ 5.01*10^27,092. This is comfortably beyond the reasonable range of a search in the gamut of the observed universe, even with imagined islands of functionality of say 10^500 possible states. [That is outlandishly far more than the number of cell-based organisms that will ever exist in our observed cosmos from birth at the big bang to eventual heat death.] Within that vast config space, we have known-to-be fine-tuned [cf. Minnich's empirical work] islands of functionality for the flagellum. We cannot credibly get to them from an arbitrary start-point in a space of 5*10^27,092 or anything near that exponent. Similarly, due to the interlocked, functionally fine-tuned complexity, we cannot get half a flagellum and have it work enough to be encouraged, or even 95%, etc.; we have to have a flagellum -- and worse, we have left off the food concentration gradient based control system [i.e. it is often used to move towards a source of nutrients]. A gradualistic incremental RM + NS approach is not credible. Since some 30 of the proteins are more or less unique to the flagellum, and the proposed TTSS is in fact evidently derivative, not the source, of the flagellum [which itself complicates the requirements on the code to embed a second system . . .], co-optation becomes a problem, too. So, relative to the chance + necessity based live options on the table, agency is the best warranted explanation. We may refine the calculation for various factors and issues, but the basic point will remain: we are well beyond UPB, and we are looking at a far more constrained scope: evolution on Earth, ~ 6*10^24 kg, and a window of maybe a few thousand million years. In short, the constraints on the relative probabilistic resources are a lot tighter than we have used, and the complexity is in fact far more than we have used. So, let us focus on the issue, not on a strawmannish side-issue. kairosfocus
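A quick way to check the 4^45,000 figure above without overflowing anything is to work in base-10 logarithms; the 50 x 300 x 3 count itself is KF's estimate and is simply taken as given here:

```python
from math import log10

positions = 50 * 300 * 3          # 45,000 four-state DNA positions, as estimated above
log_cells = positions * log10(4)  # log10 of 4^45,000
exponent = int(log_cells)
mantissa = 10 ** (log_cells - exponent)
print(f"{mantissa:.2f} x 10^{exponent}")  # about 5.01 x 10^27092
```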
PS: Lest we forget the focus of the thread: The above interaction with prof PO, IMHCO, aptly underscores the force of WD's comment at the top on Padian, NCSE and too many other critics of the EF and ID inference. If Prof WD is lurking, could he comment? Also, Atom, do I make more sense to you now? Why/why not? kairosfocus
5] PO, 162: If there are 10^80 atoms, there are “10^80 choose 2” pairs of atoms, which is 5 x 10^159. Actually, 10^80 is the number of particles, but we have been content to simply call it atoms. And, the estimate 10^150 is the number of quantum states of these atoms across the lifetime of the observed cosmos, which includes bonded states. Besides, due to the nature of the cosmos, we are not dealing with a simple random choice of any two atoms that can be addressed by a combinations calculation -- many atoms will not bond with others, and others are simply too separated -- inter-stellar or inter-galactic distances -- to chemically interact. BTW, PaV, recall: He [helium, the no. 2 most abundant atom, a principal product of fusion of H in stars] is monoatomic, as it is a noble gas. Only under exceptional circumstances is it conceivable for it to become engaged in chemical bonding. 6] PO, 163, 165: There is a potential fallacy here because you could say about the flagellum, “hey, it looks like an outboard motor” and infer design. So, how would you run the pillars through the filter? . . . . [RE Trib's: Dembski points out that the formation of the flagellum could not have formed by chance.] That might be [WD's] opinion but he has not been able to conclude it by his filter. Besides, “necessity” was supposed to be ruled out first. First, the outboard motor issue -- which I raised (so PO has read me but chooses not to respond save at his convenience) -- is one of specification; not “it looks like.” The Flagellum comprises 50 parts constructed based on DNA code, including some 30 unique proteins, and has a stator and rotor reversibly driving an external paddle to move the bacterium back or forth in a liquid medium. It IS an outboard motor, of a technology that emerging nanotech engineers are openly salivating over -- just as we once copied the bat's sonar (another astonishingly fine-tuned, integrated and complex body-plan level system beyond Behe's empirically observed edge of evolution). Second, the system is -- by virtue of being contingent and caused based on application of a code -- not the product of necessity. [Indeed, its derivative, the TTSS, has a further contingency that is temperature sensitive.] So, by observing contingency, necessity has in fact long since already been addressed first. Indeed, to be considered as CSI, a system must first pass the test of contingency, going all the way back to the notes by Thaxton et al on the state of OOL research thought at the turn of the 1980's: order [a simple crystal or repeated digital string] vs complexity [aperiodic polymer, random text strings that are long enough] vs specified complexity [informational macromolecules such as DNA, meaningful text in English of long enough length to be complex]. Then, the distinction between chance and agency is assessed on inference to best explanation -- where Trib stumbles. Had he said “Dembski points out that the formation of the flagellum could not [CREDIBLY] have formed by chance [on the gamut of the observed cosmos]” he would have been spot on. PO has chosen to pounce on a conveniently poor formulation instead of addressing the real issue. On empirically constrained inference to best explanation, agency is the most credible current explanation of the functionally specified, fine-tuned, empirically demonstrated irreducible complexity [cf. Minnich!] in the flagellum. 
The inference to design of the flagellum, though of course provisional as are all scientific inferences of consequence, is of high credibility and IMHCO is unlikely to be reversed on the merits – though the question is often selectively hyper-skeptically begged, as we see in the following . . . . 7] PO, 166: The problem with probabilistic inference is that even if something has a very small probability, it could still occur. H'mm: are you holding your breath waiting for all the oxygen molecules in your room to rush to one end, leaving you choking? By Stat Mech, that can happen, and the odds are similar to those of the formation of the flagellum by chance, or the formation of the text of the various messages in this thread by lucky noise, etc. [Cf. my always linked] In short, we here come to the precise point Fisher was making way back, and which WD has now updated: when the probabilistic resources available in a situation are inadequate, it is not credible to infer that chance was responsible, relative to agency. We routinely make just that choice in many situations, but the problem is when that same principle cuts against our favourite ideas. On this, cf. my discussion of the attempts to expand the available cosmos to escape the force of the UPB, in my always linked; e.g. through postulating a quasi-infinite cosmos as a whole. This is a resort to pure, empirically uncontrolled metaphysical speculation, and leaves on the table the challenge to compare other live options, e.g. the theistic one. To prejudicially exclude such an option at the worldviews table is to crudely beg the question. GEM of TKI. kairosfocus
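[As a rough numerical illustration of the fluctuation point just made -- a minimal Python sketch, assuming a room with on the order of 10^24 gas molecules and treating each molecule's left/right position as an independent fair coin; the figures are illustrative assumptions, not drawn from any of the cited papers:]

import math

N = 10**24                      # assumed order of magnitude for gas molecules in a room
# Probability that all N molecules happen to sit in one chosen half of the room
log10_prob = N * math.log10(0.5)
print(log10_prob)               # about -3.0e23, i.e. a probability near 10^(-3 x 10^23)
# For comparison, the 1-in-10^150 universal probability bound sits at log10 = -150.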
3] Back to Caputo Observe, I have already cited [154] on why the Court ruled out the inference to a markedly and inadvertently biased selection process:
The first option -- that Caputo chose poorly his procedure for selecting ballot lines, so that instead of genuinely randomizing the ballot order, it just kept putting the Democrats on top -- was dismissed by the court because Caputo himself had claimed to use a randomization procedure in selecting ballot lines. And since there was no reason for the court to think that Caputo’s randomization procedure [capsules from an urn or the like it seems, but with relatively few capsules] was at fault, the key question therefore became whether Caputo actually put this procedure into practice when he made the ballot line selections, or whether he purposely circumvented this procedure in order for the Democrats consistently to come out on top. And since Caputo’s actual drawing of the capsules was obscured to witnesses, it was this question that the court had to answer. [Dembski, 1996. Of course, since the situation was highly contingent, it cannot be dominated by NR, which is why the first issue is a biased chance option . . .]
Now, contrast the discussion of this issue in PO, 2007 [cf. Link in 19 above]:
. . . In contrast [to the EF approach], a statistical hypothesis test of the data would typically start by making a few assumptions, thus establishing a model. If presented with Caputo's sequence and asked whether it is likely to have been produced by a fair drawing procedure, a statistician would first assume that the sequence was obtained by each time independently choosing D or R, such that D has an unknown probability p and R has probability 1 - p. The statistician would then form the null hypothesis that p = 1/2, which is the hypothesis of fairness. In this case, Caputo would be suspected of cheating in favor of Democrats so the alternative hypothesis would be that p > 1/2, indicating that Ds were more likely to be chosen. [p. 7. BTW, this also underscores the point that PO is here critiquing the use of the EF in this case, contrary to what he has said above, cf. my comments in 20 – 21 on, and in 154 etc. NB: In the context of the excerpt WD is only spoken of as a design thinker and his relevant qualifications to address statistics are ignored.]
--> Let us note: from 1996 on, WD's FIRST OPTION was precisely: a biased chance process that diverted in favour of D's away from what we would expect for a “fair coin” model. --> So, WD in fact did look at the “fair coin” and “biased coin” models, first, and he followed the Court in accepting the credibility of the claimed procedure as being approximate to a fair coin. THAT is the context in which he then went on to look at the comparison of an alleged fair coin being at work vs deliberate action, i.e. design. --> So, on inference to best explanation, what is the likeliest and best explanation for the result: a 1 in 50 billion freak outcome, or Mr C yielding to the obvious temptations of a selection process that in the crucial stages was without witnesses? 4] And on broadening the rejection region [RR] . . . Here, Prof PO said:
It is important to note that it is the probability of the rejection region, not of the individual outcome, that warrants rejection of a hypothesis. A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance. [p. 7]
--> The very first part, of course, partly aligns with WD, who pointed out that the issue is being in the extreme tail from 1 R/40 D to 0 R/41 D, i.e. he is looking at a Fisherian investigation of being so far out in the tail of a claimed probabilistic process that the likelihood of observing such a result is too low to accept chance as the best explanation. The odds of being in the tail from 1 R/40 D on are 1 in 50 billion, cf. my comments in 68 above. --> PO glides smoothly from that to an assertion that neither WD nor Fisher nor I would agree with: the notion that something as close to the peak as 19 R/22 D would be "evidence" of cheating. There is no warrant for that glide, and it in effect supplants the real issue with a convenient strawman, where it appears that the border of RRs is a matter of arbitrary selection to suit oneself. Not so at all! --> And of course, this is, at its core, the sort of “expansion of the RR” argument that WD deplored in his own 2005 paper, p. 4, responding to Bayesian objections:
what’s to prevent . . . [so expanding the RR] that any sample will always fall in some one of these rejection regions and therefore count as evidence against any chance hypothesis whatsoever? The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice). Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity. [Note how WD plainly views specification as being a broader concept than RR's; i.e on this too, PO was tilting at a strawman]
. . . kairosfocus
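[For onlookers who want to check the two tail figures discussed in point 4 above -- a short Python sketch using the simple Binomial(41, 1/2) "fair coin" model that both sides accept for the Caputo example; the exact decimals are my own check of the arithmetic, not quotations from either paper:]

import math

n = 41                          # ballot-line drawings in the Caputo case
denom = 2**n

def tail_prob(k_min):
    # P(at least k_min Ds out of n drawings) under the fair-coin model p = 1/2
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / denom

print(tail_prob(40))            # about 1.9e-11, roughly 1 chance in 50 billion (40 or 41 Ds)
print(tail_prob(22))            # about 0.38, the ~38% region for "at least 22 Ds"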
PaV, Joseph, Trib & Prof PO: The onward back-forth since 158 above is aptly illustrative of what happens when material evidence in a situation is suppressed or ignored or subjected to selective hyper-skepticism, which becomes very interesting in light of Prof PO's “lighthearted” statement in 164: What happened to Kairosfocus? I haven’t heard from him after he alleged that I was a KKK supporter Now, of course I nowhere made any such allegation, but that's beside the point. What is on-point, is that we have in hand -- cf my provided links in 157 -- a 1996 presentation of WD's case, and an associated discussion of the statistical theory-based objections to it, as made in 2005, both available to an Internet search, though the first did take a bit of effort on my part. We also have at least the lead of a NYT article, which highlights that the “run” in the Caputo case had played our over the span of “decades.” Similarly, by his own confession, Prof PO knew that the origin of the tornado-in-a junkyard analogy for the odds against forming life-systems by chance-dominated forces originated with Sir Fred Hoyle, but only highlighted that “Creationists” use it, as the lead to his critique of WD's ID work [and rhetorically one makes his hardest-hitting point first of all] -- where neither the man nor the movement are “Creationist” -- without engaging the underlying statistical thermodynamics that underlies Hoyle's remaarks. [NB: In my always linked through my handle, onlookers can see in Appendix 1 an introductory level survey of that thermodynamics and its implications, including under point 6, a scaling down of the 747 to a vat with a disassembled micro-jet in it to be assembled by the random forces responsible for Brownian motion. BTW, the stat mech based explanation of the roots of Brownian motion was a material factor in Einstein's Nobel Prize; oddly, his work on Relativity was deemed “too controversial” at that time, to be a significant contributor. That in itself is telling on the limitations of peer review.] The mere fact that Prof PO's “preprint” of a peer-reviewed article due for publication shortly does not fairly cite and engage the material facts brought forth above, is sufficient to severely undermine our confidence in anything he proposes in his analysis – as you, others and I have now detailed above several times. However, certain points for m the onward discussion and the Caputo case are worth further mention: 1] Joseph, in 167: 1) Did it have to happen? 2) Did it happen by chance? 3) Was it designed (to happen)? By asking the questions in that order [the explanatory filter] prevents any bias towards a design inference . . . . And if someone doesn’t use the EF when they attempt to detect design I would love to know what process they use. Any ideas? Joseph, you have struck the nail on the head, hard and sure, exposing the underlying selective hyper-skepticism at work. For, we finite, fallible materially ignorant and too often outright deceived creatures are simply incapable of formulating all possible hyps in a situation – indeed, that is one reason why Occam's Razor about preferring simple explanations is so important; as, that simplifies the set of live options to look at wonderfully! Indeed, ever since Plato in his The Laws, 2,400 years ago [cf my Appendix 2 in my always linked], cause has been understood to originate in one or more of the above: law-like natural regularity, chance, agency. 
What is happening here is a case of selective hyper-skepticism, where because the inference to agency opens a door to a philosophical option where many would not wish to go, a question-begging assertion or assumption is used to lock the door, whereas in other cases where that philosophical question is not at stake, they would not dream of being so skeptical. Indeed, this is underscored by your . . . 2] J, 167: And we don’t have to rule out all possibilities to arrive at a design inference. To demand that all possibilities be ruled out prior to arriving at a design inference is akin to asking for proof of design. Science isn’t in the proving business. Like all scientific inferences the design inference is tentative and can either be confirmed or refuted by future research. In short, Science is -- properly speaking -- an empirically constrained, open-ended, provisional search for the truth about our world, based on the epistemology of inference to best current explanation, i.e. Peirce's logic of Abduction. As high-quality dictionaries put it (and I deliberately use two that pre-date the current ID controversies):
science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965]
So, it won't do to beg major questions and block otherwise credible options across the set NR, Chance, Agency, would it? Or, do we KNOW -- how so? -- that an agent could not have been involved in: the origin of the cosmos as we know it, the origin of cell-based life, the body-plan level diversification of that life [e.g. in the Cambrian revolution, the bacterial flagellum], the origin of mind? Why, then, is the possibility of agency in these situations so stoutly resisted, even to the point of career busting and the distortion of the positions of those who argue that there is warrant for considering and indeed for inferring to agency in light of the glorified common sense embedded in the explanatory filter? . . . kairosfocus
PaV, Short answers, "no" and "yes." I don't think it can ever be done in a way such that everybody would agree that it's final. But, as Joseph pointed out earlier, "science isn’t in the proving business." [Note to potential newcomers: whether the flagellum is actually designed or due to chance is not at issue here, only whether this can be decided by the EF.] olofsson
P.O., Do you think that there is any way to, for example, define a properly constructed rejection region for something like the flagellum? That is, do you hold out some hope for that? Lastly, I think the answer here is fairly obviously "yes", but, would a properly formed rejection region that rules out chance for the flagellum settle the issue for you? PaV
PaV [169], OK, I thought about possible pairs of atoms; I get your point now. I'm fine with the UPB, have no objections that I can think of (in contrast to many other critics, I don't criticise everything Dr D does or says). olofsson
tribune7 [173] Yes, it is done instinctively but the point of the filter is to be able to do it rigorously. After the scenario you described, the stones are probably pretty much randomly scattered. We don't then instinctively conclude that they were pillars based on their spatial pattern but based on what they look like. olofsson
Hi Joseph, Welcome to the discussion. As I have pointed out previously, the problem with the flagellum example is that Dr D does not form the rejection region but considers only the particular outcome (a comparison with the Caputo example is illustrative). And we don’t have to rule out all possibilities to arrive at a design inference. The purpose of the filter is to rule out all chance hypotheses ("sweeping the field" in Dr D's words). He then backpedals and says that this can't be done with absolute certainty and shifts the burden to the "design skeptic," although he also claims that logic does not require that an eliminated hypothesis be superseded. As I point out in my article, this is very inconsistent. Not sure if you have read it but it is what started this discussion (after Dr D's original post where he hinted that there has not been any criticism of the filter from probabilists). olofsson
How do I test the chance hypothesis? It's done instinctively all the time. Can it be done mathematically? I don't see why not. How about asking what the probability would be for certain stones in a certain pattern with certain shared markings falling in a certain common direction? tribune7
How do I test the chance hypothesis? PO olofsson
PO A pillar is designed and set up by some ancient race to hold the temple roof. Years pass. The race fades into antiquity and the temple and pillars collapse. You come across the site. You find a grouping of stones. You test to see if they are there by regularity. You conclude no. You test to see if they are there by chance. You conclude no. You fairly (and correctly) conclude they are designed. tribune7
tribune7, I can't contribute much on the battlefield of science. I try to stick with what I know. There is no way I could debate protein binding sites with Prof Behe but others can. Now, remember that the filter computes probabilities under the assumption that the chance hypothesis is true. Therefore, I think, we need to consider what objections an evolutionary biologist would have to the particular chance hypothesis that Dr D chooses. olofsson
P.O. "It there are 10^80 atoms, there are “10^80 choose 2″ pairs of atoms which is 5 x 10^159." The Universe is almost all hydrogen and helium. Both are gases, and, as such, are already "paired up". The "paired" status is already their configuration---which will not change much over time. Now, when you have hydrogen in a carbohydrate, then it is a single atom in some kind of single configuration. The calculation I made involves ALL possible configurations (100,000 of them) for EACH atom, at the Planck scale, over the entire history of the universe. There are no probabilistic resources left in the universe after you make this calculation. It's a true upper bound, not for what "could be", but for what "actually" was. Configuration spaces are a separate matter. Here's Hoyle's (the atheist) position: "Hoyle calculated that the chance of obtaining the required set of enzymes for even the simplest living cell was one in 1040,000. Since the number of atoms in the known universe is infinitesimally tiny by comparison (1080), he argued that even a whole universe full of primordial soup wouldn’t have a chance***. He claimed: 'The notion that not only the biopolymer but the operating program of a living cell could be arrived at by chance in a primordial organic soup here on the Earth is evidently nonsense of a high order.'" [*** This is basically what I have calculated.] * * * * * P.O.:
"But I have not been that much concerned with the UPB. I’d be fine with a higher number for any one particular application. For the objections I have raised it doesn’t matter what the bound is, because if we con’t know how to form the rejection region, we cannot compute its probability either."
Your argument, it seems to me, comes down to this: If we calculate the configuration space for a protein as it is normally done, this calculation might be in error since we don't know whether or not some, if not most, configurations are simply not permitted by nature. It seems to me, that is what you want to argue. You have simply couched it in a different form: How do we define the rejection region? Well, addressing your basic argument, there are four known forces: the strong and the weak nuclear forces, electro-magnetism and gravity. (I would add a fifth: entropy) If we assume that gravity hasn't much effect in what happens in a cell (which is true, for the most part), we're then basically left with forces that are analyzed best using quantum mechanics. Therefore, there's two ways to look at this: you can calculate the UPB in a way similar to what I have done, which includes the quantum nature of particles in the Universe, and see that the configuration space of most proteins exceeds this, therefore pointing to intelligent agency; or, you can say that somehow 'natural forces' actually restrict---in a way that we can never know in detail---the actual configurations of proteins, and thus the 'actual' configuration space of these proteins are substantially below the UPB. But this second way of looking at biological systems then automatically opens up this argument: How did 'Nature,' using these four fundamental forces, know to limit the configuration space of proteins to such a great extent? IOW, why is 'Nature' "fine-tuned"? Here's our friend Fred Hoyle's take on all this: "Based on this notion, he made a prediction of the energy levels in the carbon nucleus that was later borne out by experiment. However, those energy levels, while needed in order to produce carbon in large quantities, were statistically very unlikely. Hoyle later wrote: Would you not say to yourself, 'Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule.' Of course you would . . . A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question." I don't see the wiggle-room you need here. I take a "common-sense" approach to it. PaV
The problem with probabilistic inference is that even if something has a very small probability, it could still occur.
That's why specified complexity is more than just a probabilistic inference. The last node of the EF asks whether X not only has that (very) small probability of occurring, but also specifies something. Then we have experience -- for example, we do not have any experience of observations of nature, operating freely, constructing command and control centers. Yet living organisms are chock-full of them. Even the bacterial flagellum needs a command and control center. IOW not only are the correct proteins, in the proper amounts, with the proper configuration required, but a command and control center plus communication lines must also be constructed or else the part will not function. Joseph
That might be his opinion but he has not been able to conclude it by his filter.
In Chapter 5 of "No Free Lunch" he does just that. That is, he infers design -- i.e. rules out chance and necessity (and any combination of the two) -- for the bacterial flagellum, using the explanatory filter.
Besides, “necessity” was supposed to be ruled out first (although the filter does not look quite the same in TDI and NFL).
Do you have any data which demonstrates a bacterial flagellum can arise out of necessity? As for looking the same- all the filter does is to ask: 1) Did it have to happen? 2) Did it happen by chance? 3) Was it designed (to happen)? By asking the questions in that order it prevents any bias towards a design inference. And we don't have to rule out all possibilities to arrive at a design inference. To demand that all possibilities be ruled out prior to arriving at a design inference is akin to asking for proof of design. Science isn't in the proving business. Like all scientific inferences the design inference is tentative and can either be confirmed or refuted by future research. And if someone doesn't use the EF when they attempt to detect design I would love to know what process they use. Any ideas? Joseph
PaV, WD has adopted the UPB, I believe, to put into perspective the futility of searching huge configuration spaces through chance occurrences. Not quite. The problem with probabilistic inference is that even if something has a very small probability, it could still occur. As a matter of fact, plenty of such events happen all the time. People win the Powerball although the odds are extremely poor for each individual. However, because so many people play, it is likely that somebody wins, just not you. Similarly in design inference. If we reject a chance hypothesis because it confers a probability of only 1 in a billion on the data (just as an example), it is still possible that there have been so many repetitions over the course of time that, if we were to test all of them, it is likely that we reject some true null hypothesis, that is, infer design when we shouldn't have. So the idea is to choose a probability bound that is so small that even if we had tested the maximum possible number of hypotheses, the odds are still against any erroneous design inference. So by doing the math on the age of the universe etc, we can come up with some number for the UPB. Note however that we cannot apply it to our favorite, the Caputo example. The smallest possible probability we can get there is (1/2)^41 which is, in this context, not small at all. This is by the way also a problem in applied statistics; the more hypotheses you test, the less certain your conclusions. olofsson
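[A small Python sketch of the two numerical points above: the smallest probability attainable in the Caputo setting, and how repeated testing erodes certainty. The per-test threshold and the number of tests below are arbitrary illustrative values, not figures from PO's paper:]

p_min = 0.5**41                   # smallest possible p-value in the Caputo model
print(p_min)                      # about 4.5e-13

# Multiple-testing point: even a tiny per-test error rate adds up over many tests.
per_test = 1e-9                   # assumed per-test false-rejection probability (illustrative)
tests = 10**6                     # assumed number of chance hypotheses tested (illustrative)
print(1 - (1 - per_test)**tests)  # about 1e-3: the chance of at least one false design inference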
tribune7, Dembski points out that the formation of the flagellum could not have formed by chance. I think we all agree on that. No, we don't. That might be his opinion but he has not been able to conclude it by his filter. Besides, "necessity" was supposed to be ruled out first (although the filter does not look quite the same in TDI and NFL). olofsson
What happened to Kairosfocus? I haven't heard from him after he alleged that I was a KKK supporter. :D olofsson
tribune7, I agree, it is a worthy endeavor to pursue design inference (regardless of evolutionary biology) and try to formalize it in mathematical language. But my point regarding your pillars was that you infer design without applying the filter because you already know what pillars are and do. There is a potential fallacy here because you could say about the flagellum, "hey, it looks like an outboard motor" and infer design. So, how would you run the pillars through the filter? I'm already mentioned in a textbook: my own! Didn't want to wait for others... :) olofsson
PaV, If there are 10^80 atoms, there are "10^80 choose 2" pairs of atoms which is 5 x 10^159. But I have not been that much concerned with the UPB. I'd be fine with a higher number for any one particular application. For the objections I have raised it doesn't matter what the bound is, because if we don't know how to form the rejection region, we cannot compute its probability either. olofsson
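[A one-line check of the "pairs of atoms" figure above, using exact integer arithmetic and the round 10^80 atom count both sides are using; a sketch only:]

import math

pairs = math.comb(10**80, 2)    # unordered pairs drawn from 10^80 atoms
print(len(str(pairs)) - 1)      # 159: the count is about 5 x 10^159, as stated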
Oh yes, the pillars and the roof. Sure, we all infer design when we see them because we know what they are, Not always. Sometimes we just find piles of broken stone, but we still infer they were once pillars. not because we run them through the explanatory filter. How would you even do that? That's one of the neat things about Dembski's work. Design exists, it can be inferred, and Dembski is trying to present a way in which it can be quantified. Now there is room for improvement, but you have to admit it is an interesting and worthy endeavor. Since you have a background in statistics, get in now on the pioneering stage and you might be mentioned in a textbook 120 years down the road. :-) tribune7
Prof Olofsson: You say: "About the universal probability bound, why wouldn’t you consider pairs of atoms (there are almost 10^160 of those) or triples or…?" Didn't you go the wrong way? There would be 10^40 "pairs" of atoms. Again, we're talking about atoms. It doesn't matter where the atoms are, or to what they are attached. In every micro-instant (Planck's time = 10^-43 sec.) we're allowing 100,000 different configurations for EACH atom. According to theory, there are, for each electron, e.g., an "infinite" number of eigenvectors determining its state; but of that "infinite" number of eigenvectors, only a small number of the corresponding eigenvalues would be non-zero. I have a passing familiarity with quantum mechanics, but I think it rare for anyone to calculate more than 100 eigenvectors/eigenvalues. (The best way of thinking of these eigenvectors is to think of spectral lines for hydrogen gas.) Each of these would be a potential configuration. So, in the real world, 100,000 possible configurations makes the UPB a very conservative calculation, given that most atoms persist from second to second---let alone from Planck time to Planck time---in the same configuration. For example, the Universe is mostly hydrogen (which, yes, is paired) which is scattered throughout the universe, or locked up in stars. Yes, our sun is changing all the time, but it doesn't radically re-configure itself from instant to instant, making the calculation, again, conservative. Now, if atoms change 'bonding', yes, there is a large number of eigenvectors that would be affected. Should we now consider 100,000,000 configurations/Planck time/atom? That only shifts the calculated number from 10^146 to 10^149, hardly earth-shattering when it comes to its effects on the associated probabilities I calculated the last time. WD has adopted the UPB, I believe, to put into perspective the futility of searching huge configuration spaces through chance occurrences. It's the force of his argument. Combining the UPB with the NFL theorems allows one to conclude that chance could not possibly have brought about the biological systems we encounter. Even the atheist, Sir Fred Hoyle, concluded as much. PaV
PO Post 151 -- Right, because I understand what Dr D writes and I am qualified to criticise it. I am no biologist. My article is a critique of the explanatory filter, not of ID as such. But you do offer a criticism in your work of evolutionary biology, and it is relevant to your criticism of Dembski's filter, i.e. you say "according to some plausible evolutionary scenario" the probability of the formation of the flagellum changes. Well, sure. Given the right scenario the evolution of the flagellum stops becoming a probability and becomes a certainty. But scenarios aren't reality, and "plausible" is a word used to express an opinion, not to rebut an empirical argument. Dembski points out that the formation of the flagellum could not have formed by chance. I think we all agree on that. That leaves agency and necessity. Now, there is no known force that causes proteins to form into a flagellum. You put all those proteins together in a petri dish and you do not get a flagellum. So that leaves us with what? Now, you can argue that there is some unknown force that necessitates that those proteins become a flagellum, but that puts you in the same boat as Young Earth Creationists arguing that radiometric dating is going to turn out wrong. Not that there is anything wrong with that, but you have ceded the battle on the field of science. tribune7
PPS: Missed this one in 152: the pillars and the roof. Sure, we all infer design when we see them because we know what they are, not because we run them through the explanatory filter. Actually, no. We recognise things to be pillars because they seem to be artifacts that fulfill a roof-supporting [or sometimes free-standing artistic or ritual -- e.g. the Israelite Tabernacle/Temple, in documents] function, not natural occurrences shaped by the relevant blind, partly chance-driven geological forces. Smoothness, shape, regularity of placement, size, pattern of placement relative to what a roof probably might look like, etc. all play a role in that. We may do this intuitively and almost instantly, but we are eliminating chance and NR as the decisive cause. [Contrast the issue over sea arches and the question of the Columnar Jointed basalt of Giant's Causeway, Ireland. In the former case, we intuitively recognise a natural effect. In the latter, there was a debate until it was understood that natural crystallisation was involved; i.e. there was no credible warrant to reject the null hyp. BTW, I recall seeing some similar basalt on the N side of the former central corridor road, here in M'rat. Probably long since buried by pyroclastic deposits. Yet another tie to the other emerald isle . . .] Similarly, finding a ring of even natural stones in an array that suggests foundations for a hut is taken as evidence of design, e.g. in some rift valley investigations of fossils etc. kairosfocus
PS: It is worth the while linking WD's 1996 paper on Caputo and the explanatory filter as updated, so onlookers can see for themselves; also, his actual 2005 discussion of the Fisher-Bayes issues that keep on coming up in PO's critiques [denials, distractors and dismissals notwithstanding] here. [NB: the former WD paper traces to a 1996 conference, the latter to a book chapter that has been accessible online since 2005.] I can find no fair review of the substance of WD's remarks in PO's paper, or anything more than a dismissive reference to the vexed thermodynamics and information generation challenges in the 747 in a junkyard by a tornado example of Sir Fred Hoyle; hardly a "Creationist." That becomes of particular concern, for these issues go to the heart of the criticisms Prof PO attempted, and they show just why the superficially plausible criticisms fail. kairosfocus
6] PO, 136: on Behe etc . . . Going back to 20 above, kindly note my excerpt from DBB, 10th aniv edn in my very first comment:
Took a look — seems to be your own paper. On selective points on a quick run through: 1] Overly simplifies Irreducible Complexity, to the point of a strawman fallacy. Behe’s actual claim is that there is a core in some systems, that is so constituted, that ‘An irreducibly complex system cannot be produced directly… by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition non-functional.’ So, ‘although irreducible complexity does rule out direct routes, it does not automatically rule out indirect ones.’ Howbeit, as complexity of such a biosystem rises, ‘the more unlikely the indirect routes become.’ [DBB, 10th aniv. edn.]
In short, I am emphasising that there is a lot more to Behe's claim than in the simple citation you have excerpted in 136. Your cite indeed captures the first part, but misses -- as many (myself included) often tend to, BTW -- the implication of the second part. That difference is material if one is going to raise/address the issue of co-optation, for instance, in a peer-reviewed context. I think it is fair to note that a peer-reviewed context is just a tad more exacting than a less formal one like a blog comment thread. 7] 142, on Specification -- AGAIN: Again, there are many different, valid and specific approaches to specification, and functional specification -- as with the flagellum above -- is very relevant when considered in light of the statistical thermodynamics of spontaneous assembly of the original nano-machines in the cell, then the onward information generation issues relevant to getting to major cell type and body plan innovations. That is, we are moving beyond Behe's empirically observed edge of evolution. [And, these matters are discussed/summarised in my always linked.] GEM of TKI kairosfocus
3] PO, 129: how do you know that you are at the end? You too have only tested one chance hypothesis. You have unconsciously ignored all other chance hypotheses Of course, this is again right out of the Bayesian playbook on critiques of Fisherian reasoning, as WD addresses in his long since linked, cf link at 34 etc. The case has long since been answered in light of the well-known practices of statistical process control, a glorified extension of the common-sense “law of averages” approach. If we see such a strong run in a case where the “fair coin” model is supposed to be at work [and the testimony of NC that he used a fair method was taken as credible insofar as the fairness of his claimed method was concerned], it indicates something is wrong and should be fixed. Thus, the strength and duration of the runs indicate, again, design by willful negligence. That takes in the whole family of "strongly biased coins models," at one go. 4] PO, 130: THERE IS NOTHING BAYESIAN IN MY REASNONING. Simply cf. The above, from 20 on, to see what was noted: you have at the outset cited Sober's side of the debate as if that were the last word on the matter [a Bayesian critique – and one now usually used by secularist, evolutionary materialist progressivists, regardless of the fact that Bayes was a man of the cloth], then you have used several further critiques from that school, including just now. You yourself may not be a Bayesian, but you are working out of their playbook, without giving WD's easily accessible rebuttals a fair hearing. (The comments by Atom and Jerry show the rhetorical impact of this one-sided approach, and we are dealing with a case where ID thinkers have little hope of being allowed space to rebut such an article in a "peer-reviewed journal." Thus is the very fabric of what is called "knowledge" [[Science . . .] distorted; and the one-sided hit piece will go on the record as an established, duly "peer-reviewed 'fact' ".) 5] PO, 133: My main concern is, if the biological function is “motility,” this encompasses more than the flagellum. Some other different looking motility device could have evolved instead, then that would have been the one under consideration. How do we form an appropriate set of such devices and their probabilities? Again . . . On this, the basic issue is [as is discussed at an introductory level in my always linked] that the flagellum is not just an instrument of motility in general [BTW there is a similar irreducible complexity argument on the cilium . . .] but - as I have noted in part above, and as is widespread in the discussions -- a self-assembling, ion-driven outboard motor based on some 30 unique proteins that by Minnich's empirical work is irreducibly complex [thus, fine-tuned, i.e small perturbations cause dysfunction], and which also embeds in the underlying DNA code and assembly instructions, the design for an injection system the TTSS. One can evaluate the relevant configuration space for at least the genetic code, and set an upper bound on the probability relative to the Laplacian principle of indifference and/or the postulates and principles of statistical mechanics or other relevant criteria. For instance, at 300 monomers per protein coded for at 3 DNA elements each, we are looking at accounting for some 30 * 300 * 3 = 27,000 DNA base pairs, at ~ 2 bits each, just for the unique proteins. This is well beyond the 500 – 1,000 bits range where we can reasonably see that we are beyond the UPB. 
My discussion of the micro-jet in a vat gives an idea of the issues on assembly by chance or co-optation, which last gives no escape. What is most material is that the system is locally isolated, due to fine-tuning of functionality. In short, Behe is right. WD gave an estimate on the probabilities involved, and came to the intuitively obvious conclusion (however one may wish to dispute his details) – well beyond the UPB, 1 in 10^150 ( PaV uses a different number of events per second estimate, no of binary events reduces to 1 in 10^120 as WD often uses currently). [BTW, this last is a metric of the number of possible quantum states in the observed universe across its lifespan.] . . . kairosfocus
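[The back-of-envelope step in point 5 above, written out -- a minimal Python sketch of the arithmetic only, using the 30-protein, 300-residue figures assumed there:]

import math

proteins = 30                   # unique flagellar proteins assumed above
residues = 300                  # assumed typical protein length in amino acids
bases = proteins * residues * 3 # 3 DNA bases code for each residue
bits = bases * 2                # roughly 2 bits of information per base
print(bases, bits)              # 27000 base pairs, about 54000 bits

# Compare with the threshold corresponding to the 1-in-10^150 bound:
print(math.log2(10**150))       # about 498 bits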
Hi PaV (and Prof PO et al): Still going strong I see . . . so, on a few points of note, starting with 129, which underscores the rhetorical pattern we are dealing with: 1] PO, 129: I DO NOT USE THE CAPUTO CASE TO CRITICIZE THE FILTER! With all due respect, I cannot let such a material misrepresentation of a major and crucial issue pass without comment, at least for the benefit of onlookers who may be new and may think that the above is an accurate and balanced summary rather than a tendentious and factually challenged claim. For, as PaV and I properly note, with due regard for the rhetorical structure of the paper – e.g. impact of loaded language, contrasting titles [design thinkers vs statisticians], failing to recognise the dynamical model used by the explanatory filter vs contrasting the importance of statistical models,etc, and one-sided surveys of the experts – that criticisms start from the very first words of your paper [cf link at 19 above, comments from 20 on]. Second, PaV is very correct to in effect challenge you, not only on improper assignment of origin of the tornado in a junkyard issue, which is due to the distinguished Astronomer Sir Fred Hoyle, but also the dismissive tone used fails to address the very serious thermodynamics, statistical thermodynamics and associated information generation problems that lie under that colourful illustration. [Cf my discussion in Appendix 1 of my always linked. Until you can properly address the issues outlined there, you are in no proper position to dismiss, for you are making an implicit appeal to the authority of one side of a debate as if they automatically have the last word on the merits.] Third, as I long since noted, from 20- 21 on, and in light of say 68 on, etc, as soon as you glide from WD's noting on the extremeness of the Caputo result [over decades] as a key component of why the EF applies, to the notion that a 22/19 split – involving 38% of the curve is to be considered as evidence for cheating, that too is a criticism by expansion of the rejection region contrary not only to WD but also to Fisher himself. For, the precise point of the Fisherian RR, and of the related concept of fluctuations in statistical thermodynamics [as I illustrated in 68], is that when a purported chance-based result is far enough from where the available probabilistic resources would put observations, it is credible to rejectt he null hyp that it was a product of the relevant chance process. [In this case, too, the uneven unintentional chance hyp folds into design once we see that the strong runs towards D should have led to correction if there was an intent to be fair. Cf also WD's discussion and the NYT brief excerpt as I linked in 127.] It is also a technique used by the Bayesians, whether or no you are a Bayesian yourself. (Your remark in 139 BTW, fails to address this, which is what I have emphasised. Presentation of a conditional probability analysis fails to address the relevance of the use of criticisms out of the contemporary Bayesian playbook without also addressing in fairness WD's long since available and IMHCO cogent responses.) That the section is indeed a criticism of the Caputo case is underscored by your final, grudging half-acknowledgment that the filter is in fact correct in its ruling on the case. 2] PO, 129: a statistical test only rules out certain chance hypotheses in favor of others. 
So that’s what I did: ruled out p=1/2 in favor of p>1/2. First, WD's context implicates the wider context of hyp testing, i.e. the issue that correlation is not causation, and the underlying dynamical model of cause that is implicit in Fisherian reasoning: causal forces embrace chance and/or natural regularity and/or agency. He has made no material error in setting that context, and your proposed aux null hyp fails, as will again be pointed out just now. For, bearing in mind the NYT observation that the “runs” in question cover “decades,” we can see that a further issue is relevant, neglect of duties of care. That is, if Caputo were truly concerned to be fair, he would long since have seen that something had gone wrong, and fixed the machine. In such light [as I have repeatedly pointed out], PO's “biased coin” auxiliary null hyp in fact folds into design, design by willful negligence. WD's note [cf. 127 – presented at a conference in 1996, and on the web for a long time!] on why the court considered but then ruled out a biased coin hyp is also relevant:
The first option -- that Caputo chose poorly his procedure for selecting ballot lines, so that instead of genuinely randomizing the ballot order, it just kept putting the Democrats on top -- was dismissed by the court because Caputo himself had claimed to use a randomization procedure in selecting ballot lines. And since there was no reason for the court to think that Caputo's randomization procedure [capsules from an urn or the like it seems, but with relatively few capsules] was at fault, the key question therefore became whether Caputo actually put this procedure into practice when he made the ballot line selections, or whether he purposely circumvented this procedure in order for the Democrats consistently to come out on top. And since Caputo's actual drawing of the capsules was obscured to witnesses, it was this question that the court had to answer.
Again, we see a one-sided presentation of evidence, multiplying rhetorical persuasiveness at the expense of addressing the force of a true and fair view of the matter. . . . kairosfocus
PS. Sal, jerry, tribune7, and PaV, thanks for your posts. I certainly don't have all the answers but I enjoy our exchange here! olofsson
tribune7, Oh yes, the pillars and the roof. Sure, we all infer design when we see them because we know what they are, not because we run them through the explanatory filter. How would you even do that? You have probably just given another example where the filter does not work well. Good night everybody! PO olofsson
tribune7, Right, because I understand what Dr D writes and I am qualified to criticise it. I am no biologist. My article is a critique of the explanatory filter, not of ID as such. PO olofsson
PaV, jerry, In the Caputo case (sorry for bringing it up again but it is kind of useful), we can look at the actual sequence and say "that looks like cheating, let's see in how many other ways he could have cheated as much or more" and come up with another 41 sequences. As for the flagellum, we'd say "hey, that's a nice motility device, let's see in how many other ways it could have evolved to be able to move as well or better" or something like it, and we simply have no way of knowing. Maybe it's a dead end, maybe not, maybe something useful still can be done. About the universal probability bound, why wouldn't you consider pairs of atoms (there are almost 10^160 of those) or triples or...? Jerry, could you send me an email and I will reply privately to some of your points that are not of much public interest? PO olofsson
I should have added this: Given the UPB is 10^180, then the total number of all possible configurations...etc, etc., is 10^146. Dividing the 10^180 UPB by this number gives 10^34. Which means that anything which has a "configuration space" of 10^180, or greater, has at most a 1 in 10^34 probability of coming about by chance if you were given all the possible configurations of all the atoms in the Universe since the beginning of time to work with. It seems to me that the UPB pretty much is unassailable using chance alone. PaV
Prof Olofsson, You haven't made any mention of the Upper Probability Bound. It strikes me that your concern about "specifications" of the bacterial flagellum and/or "motility", given what the UPB represents, becomes moot. Let me try to explain. If I understand you correctly, you're trying to find the "set" of all possible configurations for each of the 50 proteins that go to make up the flagellum. Without these "specifications", you seem to be saying that the 'math' can't be done, hence, we're dealing with meaningless statistics. If I may, let's look at the UPB and what it means. The UPB is 10^180. Now, 180 is not a big number, and 10 certainly isn't; but 10^180 is basically beyond our comprehension. Here's what I mean. Let's assume that our Universe is a giant computer that has an internal 'clock' that ticks at Planck time. (BTW, this is my personal understanding of how the universe operates.) There are, per Wikipedia, 10^80 atoms in the Universe. Per Wikipedia, Planck time is something like 10^-43 seconds; and there have been 8 x 10^60 Planck times since the beginning of the universe. Let's say that each individual atom can configure itself 100,000 ways. And let's say that each of these configurations changes each time 1 Planck time ticks by (i.e., 10^43 times per second). Combining all of these, this means that since the beginning of the Universe, there have been (8 x 10^60 Planck times) x (10^80 atoms/Universe) x (10^5 configurations/atom) = roughly 10^146 configurations of all the particles in the entire Universe since the beginning of time. This is the maximum number of chances, then, that we're given if we want to explore any given "configuration space". Now, for one, single, 300 a.a. protein, the configuration space represented by that protein is 22^300, which is roughly 10^403 different possible configurations. Now, all the possible configurations of all the particles in the entire Universe, over the entire life of the Universe, give us, as I calculated above, 10^146. So, through blind, chance processes--that is, a blind search--of the "configuration space" of this ONE protein, the likelihood of arriving at this one protein configuration, via chance, is 10^146/10^403, i.e. roughly a 1 in 10^257 probability. With such numbers, why is it so important for you to know the individual "specifications" of the flagellum? It couldn't possibly number more than the total number of all possible configurations of all the atoms in the Universe, could it? Indeed, this is why a Design Inference is warranted. PaV
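[A quick log10 check of the arithmetic above -- a Python sketch using exactly the inputs stated: 8 x 10^60 Planck times, 10^80 atoms, 10^5 configurations per atom per tick, and a 300-residue chain over a 22-letter alphabet:]

import math

log10_trials = math.log10(8e60) + 80 + 5   # about 145.9, i.e. roughly 10^146 configurations tried in total
log10_space = 300 * math.log10(22)         # about 402.7, i.e. roughly 10^403 possible 300-residue chains
print(log10_trials - log10_space)          # about -257: roughly a 1 in 10^257 chance of hitting the target

# The comparison with a 10^180 configuration space in the comment above works the same way:
print(180 - log10_trials)                  # about 34, i.e. the 1 in 10^34 figure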
PO -- Right, but what is the specification with regards to the function of motility? What is the specification of pillars holding a roof? You can find them without the roof and still infer design. And of course pillars are not the only device with the function of roof support. Sal -- dittos. PO is a welcome addition. Jerry -- great post. PO, you cut evolutionary biologists a lot more slack than Dr. D. in your paper, i.e. An evolutionary biologist would certainly argue that, according to some plausible evolutionary scenario, the formation of the flagellum is an event of a probability that is far from negligible. Care to run the numbers on that scenario? tribune7
Professor Olofsson, Part of the debate between those who support ID and those who support Darwinian processes is how to use statistics to support their arguments and what is appropriate. There is an ongoing strategy by those who oppose ID to find fault with some detail of an analysis and then to cry "foul" in order to eliminate the whole argument when often it is a technicality that is pointed out. ID will continually point out the low probability of an event and the fact that there exists no evidence to support any gradualistic approach to evolution (By the way evolution does not necessarily mean Darwinian processes though this is how many understand the term. Darwinian process are just one proposed mechanism to account for the changes that have been observed in the fossil record and for the diversity within life on earth and there are several non ID friendly biologists who do not think Darwinian processes can explain the changes/diversity observed.) Supporters of a gradualistic approach will counter an argument based on probability with a flaw in the statistical reasoning and that there could have been some way it could happen most often by a claim that it was a cumulative process. In this process they propose what are called "just so" stories to indicate a possible way gradualistic change could have happened. There is never any evidence that the changes or diversity occurred this way but they do not feel the need to support their speculations if they can imagine a sequence for gradual change since they represent the conventional wisdom and do not have to justify their assertions. They never subject their assertions to an hypothesis test. So that is the stand-off. You point out what appears to be a serious flaw in the statistical reasoning of Dembski. But this is nothing new. It obviously has to be addressed because in this debate it seems that one side has to have all their i's dotted and t's crossed or they are thrown out while the other side just has to make seat of the pants assertions. So we do not know how serious the flaw is. Whether it is a technical flaw or seriously undermines the whole approach. Maybe this should be addressed between you and Dr. Dembski and we can witness the exchange. But even if the flaw is serious, there must be some way to legitimately apply probability and statistical reasoning to the problem of protein formation and DNA's role in the specifying of very useful proteins. Not all proteins are useful and the subset used in life (call it A) represents an almost infinitely small subset of the potential proteins that could be formed so the fact that an almost infinitely small set exists is of special interest. And within this incredibly small set (A) the number of proteins that can interact/bind with each other (A1, A2, A3....) are each even a much smaller subset probably at most a few in each subset. There probably exists a larger subset of all proteins (call it B which contains A) and B are all potentially useful proteins for life and there may be someway of approximating how large this set could be to show that even it is probably microscopically small compared to the total number of potential polymers made of amino acids. 
For example, if you take all the possible polymers of amino acids that are 40 in length you would exhaust all the matter in the universe, and a protein/polymer of length 40 would be a small protein, so the fact that any specific protein would ever get constructed at random is an incredibly low probability just because the resources are not large enough nor is there enough time. So a protein is not like a bridge hand that is randomly dealt, because only an incredibly small percentage of proteins can ever get randomly formed, if such a process is feasible, because there is not enough time and matter in the universe to form all the random proteins no matter how it is done. If you have two proteins that are randomly constructed, the probability that any two would somehow interact/bind would also be so small that it is essentially zero. This could possibly be estimated by listing the number of proteins that exist and then forming sets of those that can bind/interact with each other. For any given protein in the life set the number in any particular subset would probably be only a handful. But in the flagellum, there are 20 such proteins, and the odds of any twenty proteins showing up that would bind/interact are absurdly low. But this is not the real issue. Not only do the 20 proteins have to be available but they have to assemble in a precise order, and the set of instructions for this are in the DNA. In other words the DNA has to have a series of nucleotides that represent all the instructions to form these proteins that interact with each other, but the instructions have to be in a precise order. So the combinatorial problem is twofold: precise order of instructions, and the result of each instruction is a protein that binds and interacts with the one preceding it and after it, and then the total combination is more efficient than any machine built by man. Also, while we do not know the total set of motility combinations, there may be some ways of estimating it. We know that there are not many in nature, but there is no reason to suspect others would not have shown up, so we can suspect the total number is small and the chance that this one happened would then be incredibly small. So while I am not currently well versed in hypothesis testing, it seems like there should be some way of estimating the sizes of the various sets and their probabilities and then applying statistical methods to the estimates/measurements. It is probably not easy but it seems like it could be done. jerry
Professor Olofsson has given the best opposing viewpoint of Bill's work that I've seen in years. I think Bill's latest work addresses the legitimate concerns Professor Olofsson has raised. His concerns about the rigor of specification are understandable. I have been working on this problem independently and I believe Voie, Trevors, Abel, and others have found specification that will resist the difficulties Dr. Olofsson has raised. Furthermore, the Behe EoE has given good ideas with the application of protein binding sites. Dr. Olofsson's paper was one of the more respectable critiques I've seen unlike the trash I saw out of Elsberry and Shallit and Perakh. scordova
Right, but what is the specification with regards to the function of motility? Ay, there's the rub! Gotta run again, sorry. olofsson
But for the flagellum, we have the outcome of the actual protein configuration There are 50 or so proteins in the flagellum and all are needed. That's pretty specific. tribune7
tribune7, Not quite. In short, a single outcome such as Caputo's (sorry, he's back!) particular sequence with the R in the 23rd position is specified. A specification is a set of specified outcomes, what we prob/stat people call a rejection region. Thus, the specification in the Caputo case is the set of all sequences that have at least 40 Ds (and there are 42 of them). The sequences in the specification have in common that they indicate bias toward Ds as much as or more than the observed sequence. The definitions tend to be a bit floating, and in No Free Lunch there is no strict definition of specification; it is introduced by comparing it to rejection regions. So, as far as Caputo goes, all is well and Dr D and I are in agreement. But for the flagellum, we have the outcome of the actual protein configuration (specified) and there it ends. A specification should then be a set of outcomes that are specified, but how would you form this set? Compare with Caputo. What can we do? I would say it is more or less impossible; Dr D would say that he's not quite there yet, I suppose. Need to run but will be glad to discuss further! olofsson
OK, Caputo wasn't out of the way. tribune7
PaV, You write, about Dr Dembski: "As I say, he would consider you a Bayesian." Please don't insult the man! He knows what a Bayesian is. I know what a Bayesian is. You have no idea; you just use it as a convenient label like "fascist" or "communist" or "Norwegian" (although not as harsh as any of these)! Look in my article again. I will give you a quarter for each appearance of the term prior. If I presented a Bayesian view, you'd be rich. Cheers! olofsson
For those of you who are interested and capable of understanding, here is a Bayesian analysis of the Caputo case, using the same model as for the statistical hypothesis test. The probability of selecting D is denoted by p and considered a random variable on [0,1] with the uniform density f(p)=1. This is known as the prior distribution. Upon observing the data (40 Ds and 1 R) we compute the posterior distribution. The density f(p|data) in the posterior distribution is obtained by conditioning on the data and applying Bayes' rule: f(p|data)=c*P(data|p)*f(p) where c is the constant that makes the density integrate to 1 over [0,1]. Here, P(data|p)=p^40 * (1-p) which gives us c=1722 and the posterior density f(p|data)=1722 * p^40 * (1-p) on [0,1], which is heavily skewed toward 1. The mean in the posterior is 41/43. Now you can use it to compute probabilities involving p. No tests, no rejection regions, a completely different approach. So, my dear PaV, if you see anything even remotely resembling the above in my article, let me know because I must have missed it... ;) olofsson
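The constants in that posterior are easy to verify numerically; a minimal sketch using exact fractions (same uniform prior and likelihood as in the comment):

from fractions import Fraction

# integral of p^40 * (1-p) over [0,1] = 1/41 - 1/42 = 1/1722
norm = Fraction(1, 41) - Fraction(1, 42)
c = 1 / norm
print(c)                                    # 1722, the normalizing constant

# posterior mean: c * integral of p^41 * (1-p) over [0,1] = 1722 * (1/42 - 1/43)
mean = c * (Fraction(1, 42) - Fraction(1, 43))
print(mean, float(mean))                    # 41/43, about 0.953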
Good. Caputo is out of the way. It sounds like you agree with Dembski that events that are complex and specified can be fairly assigned to be designed, but that what makes up biological organisms can't be assumed to be specified. Is my understanding correct? tribune7
tribune7, Two reasons for using the Caputo example. 1. Dr D uses it to illustrate the explanatory filter in a simple and intuitively clear context. He has received much criticism for pretty much everything he has done, including this example. I agree that the filter works fine here and show that a statistical hypothesis test (which scientists and statisticians do on a daily basis) reaches the same conclusion. Dr D often points out that the filter is a more general version of hypothesis testing. 2. I illustrate the shortcomings of the filter by contrasting Caputo and the flagellum. I think it works fine in the first case but runs into serious trouble in the second. olofsson
Just want to clear something up. It was pointed out to me by the indomitable Mr Kairosfocus that I in my article had misrepresented Prof Behe and his concept "irreducible complexity." It was not my intention to do so, and upon checking, I don't think that I did. I wrote that a system is irreducibly complex if it "consists of several different parts that are such that if any of them is removed, the system loses its function altogether." Behe's definition from Darwin's Black Box and The Edge of Evolution is that a system is IC if it "is composed of several well-matched, interacting parts that contribute to the basic function" and "the removal of any of these parts causes the system to effectively cease functioning." Not exactly the same words but hardly the "strawman" Mr Kairosfocus wanted to make of it. olofsson
Prof Olafsson: I, too, want to express my appreciation for your presence on the blog. You seem genuinely interested in getting an answer to your questions, rather than imposing a view. You say you're not a biologist; well, I'm not a mathematical statistician. I almost loathe statistics; I think it's because it forces me to think too hard! :) Anyway, the link I was referring to had not much to do with biology, but is instead a statement of WD's statistical approach. As I say, he would consider you a Bayesian. I'll leave that to you and him. As to the issue of "motility", of course the intersection of proteins that make that "motility" possible is itself a "specification", as is each protein. So, I don't know how to answer your question exactly. That's where that link comes in once again. WD would say that it is sufficient to consider the chance association of biological complexity, including into that consideration every replicational and specificational resource allowable, which then brings about a Universal Probability Bound (UPB) that determines the cutoff of the rejection region for any possibility imaginable. One then calculates the complexity (which is more or less equivalent to the improbability) involved in the biological system being considered, determining if that exceeds the UPB or not. I'm not the best person on this board to explain WD's statistical method, but I think I got it roughly correct. You appear to favor a method wherein every conceivable model for any given biological entity must be individually analyzed and its improbability determined. Once chance is ruled out, then some other agency must be assumed, and the new model evaluated, etc, etc, etc. I think it fair to say that the model you presume would be infinitely difficult to use for the purpose of ruling chance out as an agency since it would require every conceivable model to be analyzed for every conceivable biological configuration found in nature. Now, while I would be the first to admit that your method, if applied in an infinite way, would provide definitive proof that chance is not an agency; nevertheless, being a finite creature, surrounded by other finite creatures, using computers which are themselves finite, I think you can understand why I wouldn't want to go down that road. It's likely that this is the reason WD uses the method he does, and, in using a UPB, feels confident that in establishing that there is an upper bound to what chance processes could possibly bring about in a finite universe, from the beginning of time---i.e., in considering every possible configuration of every known atom since the beginning of the universe---any calculation of improbability in excess of this UPB represents something that is an utter impossibility, thus ruling out chance as a possible mechanism---no matter the model assumed. But, again, Sal, and others, are more expert than me in explaining the ID position. And, we look forward to your involvement in the discussion that takes place here. PaV
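For reference, the probability bound PaV appeals to is usually presented by Dembski as the product of three cosmological estimates; a small sketch of that arithmetic (the 10^80, 10^45 and 10^25 figures are Dembski's published inputs, not something derived here):

# Dembski's universal probability bound (UPB), as usually stated
particles = 10 ** 80      # estimated elementary particles in the observable universe
planck_rate = 10 ** 45    # maximum state changes per particle per second (roughly 1/Planck time)
seconds = 10 ** 25        # generous upper bound on the age of the universe in seconds

upb_resources = particles * planck_rate * seconds
print(f"{upb_resources:.1e}")   # 1.0e+150: events with probability below 1/10^150
                                # are taken to exceed all available probabilistic resources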
PO, I'm enjoying your posts very much and I'm glad you are participating. What were you trying to illustrate with Caputo? I think that is one thing causing confusion here. Didn't a statistical test basically rule out a chance hypothesis in the matter? tribune7
PaV, Finally, thanks for your discussion about the flagellum and proteins. I have no expertise in biology so I cannot add anything. I will read it more carefully and try to learn a thing or two. My main concern is, if the biological function is "motility," this encompasses more than the flagellum. Some other different-looking motility device could have evolved instead, and then that would have been the one under consideration. How do we form an appropriate set of such devices and their probabilities? Olafsson olofsson
Kairos says: In short, strictly the “Nobel” in Economics is an equivalent prize not the Nobel proper Thanks. Being Swedish, I often feel the need to point this out, especially to economists. No mention of them in Alfred's will (nor of mathematicians). The fact is even footnoted in my blue book. olofsson
PaV, I have to have a sense of humor. What if I were to get upset every time my last name was misspelled "Olafsson"...? ;) Olofsson olofsson
Dear PaV, I'm breaking up my comments because they tend to be long. Regarding Bayesian vs Fisherian, yes, I have read Dr Dembski's paper, understand it very well, and have no substantial criticism. You should know that the issue is not by any means unique to design inference. There is a continuous debate among statisticians regarding Bayes vs Fisher (rather, Fisher-Neyman-Pearson, to throw some more names into the game) and it is probably fair to say that most now agree that these are different methods, each with strengths and weaknesses. One main problem with Bayesian statistics is that it requires heavy computations, so it is only recently that it has become feasible. OK, that's another discussion. Anyway, I don't think you quite understand the distinction. In a Bayesian analysis you need to assign probabilities to the hypotheses themselves and also be able to compute the probability of the observed data for each hypothesis. Never say never, but in design inference I agree with Dr D that Bayesian analysis is impossible to carry out. See his points (1) and (2) in the paper. There is nothing Bayesian in my reasoning. Unfortunately, Mr Kairosfocus kept insisting and you seem to have picked up the thread. I get the feeling that "Bayesian" is almost used as a slur by some here at UD. Poor Reverend Bayes! Be that as it may, let me repeat that THERE IS NOTHING BAYESIAN IN MY REASONING. olofsson
Hi PaV, No, I wasn't irritated, just thought that we'd keep it factual from the start lest the rhetoric escalate. Yes, I am concerned about irritating you and have no problems reformulating the statement to make it more accurate...done! Alas, it is too late to change in the version that will appear in print. I simply didn't think much about it and by "classic creationist" meant pretty much what you write. Regarding the Caputo case, rather than going into details about statistical hypothesis testing, let me tell you what my point is. First of all, I DO NOT USE THE CAPUTO CASE TO CRITICIZE THE FILTER! The filter has been criticized from many different angles, but I argue that it is actually very similar to statistical hypothesis testing and thus based on a certain logic that is accepted by many. So, there is nothing mysterious, controversial, or stupid about it per se (in my opinion). Dr Dembski's main source of inspiration for the filter is indeed statistical hypothesis testing. The filter is intended to be more general though, ruling out chance altogether, whereas a statistical test only rules out certain chance hypotheses in favor of others. So that's what I did: ruled out p=1/2 in favor of p>1/2. Any value of p greater than 1/2 also constitutes a chance hypothesis, but one that is not fair (we probabilists often talk about "flipping a biased coin" in cases like this). So that could be one way of cheating. However, there are many other ways of cheating that are not covered by my choice of model (independent choice of D and R each year with some probabilities p and 1-p). We have to remember that each year he had a new set of candidates, and one year he probably deemed the Republican so weak that he put him first. Perhaps. Anyway, the only way in which we can distinguish between the "biased coin" and some deterministic way of cheating is to inspect the equipment used, take Mr Caputo's word that the randomization device was OK, or whatever other way we can think of. And, when applying the filter, you are faced with this problem as well. When you write "since we're at the end of the explanatory filter at this point," how do you know that you are at the end? You too have only tested one chance hypothesis. You have unconsciously ignored all other chance hypotheses, for example that p=0.99, which would make the observed sequence quite likely. You and I have to use the same evidence to reach the same conclusion. I hope this clarifies the issue. If not, let me know. olofsson
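To put numbers on the point about other chance hypotheses, here is a short sketch comparing how likely the observed outcome (at least 40 Ds in 41 drawings) is under a fair process and under the heavily biased p = 0.99 mentioned above (same independence model as in the comment):

from math import comb

def prob_at_least_40_Ds(p):
    """P(40 or more Ds in 41 independent drawings with P(D) = p)."""
    return comb(41, 40) * p**40 * (1 - p) + p**41

print(f"{prob_at_least_40_Ds(0.5):.2e}")   # ~1.9e-11: essentially impossible if the process is fair
print(f"{prob_at_least_40_Ds(0.99):.2f}")  # ~0.94: quite likely under the 'biased coin' with p = 0.99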
kairos focus: I didn't know that your username was linked to your website. I just discovered it. Usually---or so I thought---when a username is linked to a website of some sort, the username is in a color other than the normal layout. Anyway, I looked over the site quickly, and see there's a lot there to peruse. Here at UD, I'm always looking to see what you've written as well. PaV
PPPS: It seems the Nicholas Caputo case is very, very hard to find in detail online [ESP: NO EASY-TO-FIND WIKI ARTICLE ON IT!!!!!!!] - IMHCO a sign of the balance of the issue on the merits, and of "design" at work itself. NYT has a lead and US $4.95 to view here, sufficient to show that the situation had been going on for DECADES. (A sneering review at ARN - duly peer reviewed and published in 2000 -- underscores the cogency of the explanatory filter as an instrument of design detection. For surely, this is no case of algebraic necessity, nor just a matter of simple common sense! Far better is WD's own discussion of the case [tracked down after some serious searching . . .], here. Latest addition to my vaults on origins and debates, of course.) kairosfocus
PPS: This stratification of capsules is reminiscent of the argument Henry Morris et al made on hydrodynamic sorting relative to various factors. [Who'd have thought a YEC argument might crop up relevantly in Caputo . . . ;-) ] kairosfocus
Hi PaV: Great to see you keeping up the good work after I went off to sleep. This sure was the hot thread yesterday, after it had seemingly died at no 18 – 19! Now a few notes: 1] Sir Fred: He won the Crafoord Prize in 1997, which was established, according to Wiki: “to reward and promote basic research in scientific disciplines that fall outside the categories of the Nobel Prize.” [Only one Crafoord is given per year, and the Nobel Will is actually fairly restrictive on what can be given a Prize. The Crafoord is administered by the Royal Academy of Science of Sweden, and is sometimes viewed as the Astronomers' equivalent to the Nobel. That's why I didn't take up the point earlier.] Of this “substantial equivalency,” Wiki notes on the Nobel, that: The Nobel Prizes (Swedish: Nobelpriset) are awarded for Physics, Chemistry, Literature, Peace, and Physiology or Medicine. More recently the Bank of Sweden has been awarding a non-official, but associated, Nobel memorial prize for "Economics". In short, strictly the “Nobel” in Economics is an equivalent prize, not the Nobel proper. So, we can see why the “equivalency” of the Crafoord has been claimed, too. NB: The late, great [I cannot resist this last . . . nor do I want to, I am an unabashed hero worshipper here . . .] Sir Fred was an Astronomer, strictly, but sometimes astronomers have received Nobel Prizes in Physics. [In 1912, I believe, the inventor of an item of marine safety equipment won the Physics prize, under the terms of the Nobel will . . .] 2] On a point of difference with WD: I should note, just for completeness, that I do not like the terminology CSI. As my always linked will show, I normally use the term FSCI, for functionally specified [often fine-tuned], complex information, with an eye to the point that first one detects functioning and complex information of an order of complexity beyond the probabilistic resources available in the relevant situation. Then, one asks whence cometh such. (BTW, I use “fine-tuned” to highlight that in such cases slight or moderate perturbations tend to cause dysfunction, and often lead to error detection and recovery or correction mechanisms in the system.) Also, as implied above, I start from the conceptual issue of the difference between complexity, order and specified complexity that emerged in OOL research and other areas from about the 1960's to 90's. This reflects my over-decade-old exchange with the late Dr Henry Morris, and the work of Thaxton et al as already cited. In that light, I view WD's work as an attempt to mathematically model an observed and important feature of the world. Insofar as his work succeeds, bully for him. But the underlying issue remains, even if WD's attempts can be nit-picked and faulted on technicalities. [This should be plain from my always linked, but that is a novella's length read . . . and is IMHCO an honest attempt at a relatively simple survey of the least that one should know to competently discuss the issue in light of a fair understanding of what design thinkers are saying.] 3] Yet more on Caputo: PaV, as usual, you are brilliantly insightful – I always pause and take in your comments at UD, knowing I'm in for a treat!
You rightly summarised – a typo suggests the neologism for boiling down the essence of a matter, PaV style: simmerised, as in: “PaVian simmerisation on ID is a highly recommended intellectual taste treat” -- that in reaching the conclusion of cheating, Prof PO has in fact intuitively used the underlying Fisherian argument – exactly as WD highlights as going on unnoticed in Bayesian thought on inference testing. The contrasting quotes from PO in 117 are devastating from that viewpoint. Also, I forgot above to note that classically in statistical inference, it is said: correlation is not causation. One needs an independent, dynamical model relative to which the patterns observed in statistical tests can be interpreted: what chance is likely to do, and what agency is likely to do, or in the case of ANOVA, what altered contextual forces are likely to influence or achieve – control groups, varying treatments and quantities of treatment and the like. Thus, we easily see the dynamical model: that chance and/or regular lawlike forces and/or agency may all be acting in a situation, and can in some cases of interest be discerned. Fisherian testing and its descendants are about that, in light of the pattern I highlighted back up in no 68, i.e. if something is beyond the relevant probabilistic resources, it is likely to be agency not chance. It is in that context that Prof PO's easy and subtle glide from summarising what WD said on Caputo to greatly expanding the candidate rejection region is seen for what it is: setting up a strawman [probably inadvertently] for a knockout that thinks it has addressed the actual case. Had the good prof from Tulane stuck to his proper remit, he would have been able to see that in fact his proposed auxiliary null hyp, that the probability was biased, should have been spotted due to the strong runs it produced, and then controlled for. After all, statistical process control is a major subset of practical statistics, and much of it hinges on observing and responding to such runs to bring a process under tighter control, thence improved quality. But then, quality is relative to intent, and on the evidence, the process was producing exactly what Mr Caputo wanted, so he left it as it was - design by "negligence," cf Tort in law, on neglect of duties of care. [Never mind the background that there was in the 60s a major debate on the evident unfairness of using lotteries with capsules to pick draftees for the Vietnam war. (Apparently, they tend to stratify instead of shuffling properly at random.)] GEM of TKI PS: Prof PO, use the angle brackets just as you would hand code a basic web page. kairosfocus
"A growing number of proteins can be synthesized. Solid phase peptide synthesis is a leading method. DNA synthesis is also well along the way. A complete working polio genome (produced an infectious virus) was constructed artficially."---DaveScot Dave, I wrote the following later on in the paragraph: "Proteins just don’t exist anywhere else but in cells (medical laboratories are a separate, and inconsequential, exception)" I was saying it was "inconsequential" in terms of the argument I was making regarding "specification"; IOW, I was trying to side-step the issue to stay on point. But, of course, I agree with you 100% that the kinds of things that they're doing in labs firmly buttresses the ID argument. In fact, you would think the production/manipulation of functional genomes, etc., would be proof-positive that the "intelligence" that is encapsulated in the genetic code is commensurate, at least at some elemental level, with our own---a strawman argument that keeps being made. Here's an interesting article from PhysOrg.com if you haven't already seen it. Listen to this quote from the article: "Although the research is far from practical applications, the team's discovery has yielded a new, fundamental “bottom-up” technique for building nanostructures. 'It's all about constructing materials and nanostructures in an easy way,' Pochan said. 'The goal is to design a molecule with all the rules--all the information it needs--to zip up into the desired shape and size that you want. Then you throw them in the water and see what you get--hopefully the desired, complex nanostructure.'” "Desired, complex nanostructure"----Hmmpf! Sounds like 'specified complexity' to me. And all this "information" is packaged into the molecule. What do you know! Intelligent Design operating in, and through, molecules. Sounds familiar. PaV
PaV: "And the only way that that specified structure can come into existence is if it is manufactured by a cell" Not necessarily. A growing number of proteins can be synthesized. Solid phase peptide synthesis is a leading method. DNA synthesis is also well along the way. A complete working polio genome (which produced an infectious virus) was constructed artificially. This is actually a prediction of ID and a very important one. Intelligent agency, and only intelligent agency, is capable of designing and producing the nano-molecular machinery found in living cells from inanimate chemicals without any need for living or dead precursors. The first part is verifiable and very nearly verified. Capabilities in genetic engineering and protein synthesis are accruing rapidly. It's only a matter of time and money before the fields are well enough along to generate a living cell from scratch. We aren't quite there yet but I consider this no more than an engineering challenge - well enough demonstrated to take as axiomatic. The second part is not verifiable but it is indeed falsifiable in principle. To falsify the assertion that only intelligent agency can create a living cell absent a living precursor is of course to come up with a demonstrable biochemical means of abiogenesis where only unguided chemistry in realistic natural environments produces a living cell. I can't see where this isn't all quite scientific and testable through either positive verification or falsification in principle. When you've only got one method of abiogenesis (through intelligent agency) verified as possible, it certainly deserves to be called the best explanation until another means is proven possible. DaveScot
tribune7: "Note that 1983 was just a little after Hoyle started looking critically at chemical evolution." Interesting! PaV
Well, you have to wonder why he never won one. His collaborator William Alfred Fowler did in 1983, and it was a puzzle as to why Hoyle didn't share it. Note that 1983 was just a little after Hoyle started looking critically at chemical evolution. tribune7
tribune7: "Hoyle never won the Nobel prize, btw." I was going to look that up before I posted it, but said, "Naw!" Sorry. It isn't the first time I've awarded him the Nobel Prize. I must be partial to him!;) PaV
The quote in 'blockquote' in the last post is, of course, a quote from P.Olafsson's previous post. P.O., you seem to have a sense of humor. That's a nice gift to have. Keep it up! Good weekend to all. PaV
A classic creationist argument against Darwinian evolution is that it is as likely as a tornado in a junkyard creating a Boeing 747 . . . . PAV good point about Fred Hoyle (who in no way can be considered a classic creationist) being the source of the analogy w/regard to a tornado in a junkyard creating a Boeing 747. Hoyle never won the Nobel prize, btw. tribune7
Prof Olafsson:
“…out of the blocks stumbling…”, let’s skip the insults, shall we?
It sounds like what I wrote irritated you. Well, what you wrote irritated me. Are you concerned about that? You wrote: "A classic creationist argument against Darwinian evolution....." A classic creationist argument???? If you knew this was a quote from Hoyle, then why didn't you write instead, "A classic argument against Darwinian evolution used by creationists is....."? Remember, I don't know you from Adam (pardon the pun), so when you start out this way, well, in my eyes you're stumbling. I hope you correct that first line. (BTW, I'm not a creationist, so I'm not offended that way. I bristle, instead, because I'm unduly being pigeonholed. On a happier note, have you read Hoyle's book, "The Mathematics of Evolution"? I suggest you read it, if you haven't already. I imagine you'll like it.) More substantively: you write: "Anyway, I don't mean that specification is dependent on design." But, at the beginning of the paragraph that ends with the above quote, you wrote: "Yes, specification involves a pattern. In the Caputo case, it just so happens that the pattern (more D's than R's) indicates that he didn't choose fairly." This, in my mind, is almost in direct opposition to the first quote taken from the end of the same paragraph. To get at what I mean when I say this, I'll analyze what I think I see happening in your mind, and then contrast that with the way I look at the same situation. It appears that you've looked at the sample, the 40 D's and 1 R, and in your mind done the math; that is, in your mind you've instantly taken a model of p versus 1-p, understood immediately that for that model to work there should be, on average, half D's and half R's, AND, since there's not even close to a 1/2 to 1/2 ratio, you've concluded that this sample did not occur by chance. IOW, you did instantly in your head what WD mathematically proposes as a test of whether a design inference is warranted or not, and, hence, you instinctively, intuitively, make the following judgment: "it just so happens that the pattern (more D's than R's) indicates that he didn't choose fairly." I, instead, look at the 40 D's and 1 R as simply a pattern, a specification. Then I ask myself whether, assuming there is a 50/50 chance of both D's and R's appearing in this type of pattern, this particular pattern could have occurred by chance. I then do the appropriate math and, since we're at the end of the explanatory filter at this point, I arrive at the conclusion that design (intelligent causation) is involved. That is, Caputo cheated. In the paper that kairos focus provides a link to, WD explicitly states that those who engage in Bayesian analysis go through the very process you appear to have gone through, a process that he terms Fisherian, and one which he says underlies what Bayesians actually do without attending to it. Now, as to whether you are a Bayesian or not, I would simply ask you to read WD's paper---it's only 12 pdf pages long [you'll find it in post #34]---I'd be interested in your reaction. (And, frankly, uninterested in carrying on more conversation without you having read it. If you want to engage WD, then please read what he's written. I say this as kindly as I can.)
There is a problem with first introducing specification as a rejection region, that is, indicate that it is a particular mathematical object, namely, a set, and then claiming that in biology, it "refers to function." . . . .What do we compute the probability of? Caputo: a set of sequences, containing the observed.....Flagellum: ?
I am actually somewhat sympathetic to your concern here. Perhaps the seeming equivocation is remedied by comparing biological systems with the already used example of a game of Hearts. This is what I mean. If I have a standard deck of cards and deal them out, then each hand that's dealt is a "specification". It doesn't matter what it looks like. And if I spend the next twenty years just shuffling the deck of cards and dealing out hands, all of those hands would also be "specifications". They are all patterns that, for a brief instant, have naturally occurred through the process of gathering cards, shuffling, and then dealing. It's all repeatable. When it comes to biological systems, though, it just doesn't work that way. We don't, in nature, just find any old kind of protein. Proteins have a very specific kind of structure. And the only way that that specified structure can come into existence is if it is manufactured by a cell---we not only need the DNA, we need the cellular machinery that comes with it. So, if we want to know what the patterns are, i.e., the "specifications" that 'naturally occur', well, the answer is that the only "specifications" that occur are the ones that have some "function" in biological systems. Proteins just don't exist anywhere else but in cells (medical laboratories are a separate, and inconsequential, exception). So, switching back to the card dealing side of things, it is not only possible that someone can be dealt 13 hearts in a game of Hearts, but, if we deal long enough, we'll see it sooner or later. Because of this, every possible arrangement of 13 cards dealt from a standard deck qualifies as a "specification". OTOH, if you want to look for a nucleotide sequence of AGCCCTTTAGACCTTGGAA..... etc, etc, while any such randomly defined sequence is hypothetically possible, it may, or very likely may not, be found in nature. So, if we're going to examine biological systems, the only arrangements of nucleotide sequences that qualify as "specifications" are those which are biologically functioning---Nature doesn't permit otherwise. I think that if we view the category of "specification" in this way, understanding that, in the case of biological systems, Nature itself has decided what is specified or not, then, hopefully, we've avoided equivocating, since the "set" of all possible specifications (a mathematical object) has been determined---not hypothetically, but in fact. PaV
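PaV's card analogy can be put in numbers; the chance of being dealt 13 hearts is tiny but perfectly computable (a sketch assuming a standard 52-card deck and a uniformly random 13-card hand):

from math import comb

hands = comb(52, 13)          # number of distinct 13-card hands
p_all_hearts = 1 / hands      # exactly one of them is all 13 hearts
print(hands)                  # 635013559600 distinct hands
print(f"{p_all_hearts:.2e}")  # ~1.6e-12: rare, yet nowhere near the 1-in-10^150 bound discussed in the thread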
Disappointing. It looked much better in the preview, big letters. Anyway, good Friday to everybody. olofsson
Thanks Kf! Didn't realize it was html code. I'll try it right away: &nbsp&nbsp Of course Prof PO is right I'll try another: Of course Prof PO is right Cool. Thanks! My postings will be much more aesthetic from now on. Prof PO olofsson
PPPS: My reputation is of being a tough but in the end fair grader. If you earn it, you will get 100% from me. But, as the Spartans loved to answer: IF. REALLY gotta go. kairosfocus
PS: Blockquotes? Use the HTML tags. [A code, a marker of agency and of mind . . .] Smart curly quotes: I have used a Word processor and this does it automatically . . . esp when I get tired of the delays in the showing up of print in the UD text box. Open Office 2.0 is very good. PPS: Just talked with one of my former students, now a Geophysicist at MVO. Our friend down south is quiet, thanks. Looking forward to the celebration of Emancipation Day Aug 6th. Hope the old boy doesn't spoil the party. kairosfocus
Kairosfocus says: "Of course Prof PO is right" Thank you! PO olofsson
Prof PO: Not all models are statistical or narrowly specific; a model being a simplified representation of features of reality that allows us to analyse or act into the world. Some are DYNAMICAL, e.g. the one that says that things in the observed world that are contingent, and so are caused, are caused by chance forces, and/or law-like natural forces, and/or by agents. (And here I allude to the issue of sufficient reason for entities, by cause or by necessity.) My favourite example being that heavy objects fall under the NR of gravity if they are not supported. If the object is a die, it tumbles to a reading that is based in effect on chance. If we are playing Monopoly with it, agents are using chance and NR to achieve a goal. [Down that road lurks the classical model of causes . . .] In the situation of CSI at work, we look at an object and ask C/NR/A? Contingent, so NR is not dominant. Sufficiently complex and specified, so the chance null hyp fails, and we infer to agents as the best explanation. E.g. lucky noise can account for the apparent data in this thread, but the CSI leads us all to intuitively infer that we are a collection of agents at work here, even though the lucky noise thesis is logically and physically feasible. But it is so improbable, and so functionally specified, that we see that agents are the best explanation of the CSI observed. The information and information systems in the cell are vastly more sophisticated, fine-tuned, functionally specified and complex than this thread. I really gotta go now . . . GEM of TKI kairosfocus
H'mm: Onlookers, it seems the exchange with Prof PO is at an end on his side. 1] I note his: We still have the statistical model that Ds and Rs are independently chosen with probabilities p and 1-p, respectively and there are of course many other ways in which he could have cheated. -> Of course, I have now repeatedly pointed out that given the extreme bias in the case, if Mr Caputo were concerned to be fair to begin with, he should have noticed the strong runs and should have corrected the fault. -> This is precisely another case of: design. [In short, the aux null hyp is invalid as it collapses into design by "negligence."] 2] Re: There is a problem with first introducing specification as a rejection region, that is, indicate that it is a particular mathematical object, namely, a set, and then claiming that in biology, it "refers to function." Of course Prof PO is right that a pattern is often implicated in a specification; such can come out in a simply describable rejection region in a statistical distribution, as the Caputo case illustrates and as Dembski spoke of in his already linked paper, p. 4 -- which I have already cited above. [Which Prof PO unfortunately still ignores.] But that is not the only way to see a specification, and biosystems show us why, relative to the DNA. Here we have an informational, fine-tuned code which functions to supply the recipe for the key molecules of life. Functional specification as an entity in a molecular-scale information system. That too is valid and independent of the pattern in the molecule itself, insofar as the molecules correspond to a definable information system architecture. And in fact, as also noted previously, Von Neumann predicted this architecture for life as a condition of a self-replicating machine, before DNA was discovered: a blueprint [here -- DNA] and the means for reading and implementing it [here -- RNA, enzymes, ribosomes etc] have to be part of the machine. Here, too, these molecules are isolated in the vast configuration spaces, and are so far away from the likeliest molecular configs, that we can indeed see that chance on the scale of the observed universe will not be likely to access these islands of function in accordance with the requisites and based on the code of life. (Such entities as DNA are both complex and specified, far beyond the Dembski UPB.) For why that is so and e.g. how the relevant probabilities may be estimated, we must consult statistical thermodynamics and information theory as well as a fair bit of computer science for some issues, and a good fairly easy start is in the already linked TMLO ch 8, also my own always linked Appendix 1. BTW, this cluster of required insights is one reason why ID supporters tend to have backgrounds in engineering, computer science and applied science generally. It is also the reason why the tide is so strongly against the evolutionary materialists on this issue, as we move ever more into an information society in which a lot of people understand what information systems require to work, and how they come to be: they are designed, in every case where we directly see the origin. So why should an "exception" suddenly emerge simply because we don't directly see the designer of the cell at work in the here and now? GEM of TKI kairosfocus
Kairos says: "... the chance and/or regularity and/or agency view of causal forces is a model..." That's not a model, it's a paradigm. A model is to say, for example, that Ds are chosen independently with probability p, 41 times, thus specifying the probability distribution on sequences. olofsson
Mr Kairos! You said: IMHCO, that reads like a grudging half-acknowledgement, but then I could of course be stubbornly misunderstanding you again . . . ;-) (By the way, how do I do those quote-boxes?) Yes, you are, but I have to grudgingly half-acknowledge that you are very good at it! A master misinterpreter you are indeed! I wish you a pleasant weekend on the beautiful island of Montserrat! :D PO olofsson
PPS: Here is how Dawkins mentions the 747 case with respect:
Hitting upon the lucky number that opens the bank's safe is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [The Blind Watchmaker (1987), p. 8.]
In short, Dawkins here acknowledges the credibility and force of the concept of complex, specified information. He happens to think that biosystems are only apparently designed, but he understands that CSI is real, and that it is a feature of observed designed systems. WD's difference with Dawkins, in the end, is that he infers, giving reasons, that the apparent design in bio-systems is there for the excellent reason that they are, on inference to best explanation, actually designed. In particular, chance and necessity are not able to supply a force capable of overcoming the want of probabilistic resources on the gamut of the observed universe. You may disagree, but that is in the end a reasonable position, and not one for which the double-PhD'd Dr Dembski should be subjected to contempt, slander, calculated or insistent misrepresentation, name-calling, derision and disrespect. Worth a pause, and a think or two . . . kairosfocus
Hello PaV, I am aware that the Boeing statement is due to Sir Fred Hoyle. I don't have a Bayesian approach, which is clear already from the first two sentences of my abstract. If you are seriously interested, I can explain it to you, but I don't want another pointless exchange such as the one I had with Kairosfocus. In statistical hypothesis testing, you never attempt to confirm the null hypothesis (it is called "null" because you want to "nullify" it). You attempt to reject it in favor of an alternative hypothesis. The logic is that you assume the null hypothesis to be true, and if this assumption confers a very low probability on the data, the null is rejected, the same logical structure as the explanatory filter. Yes, specification involves a pattern. In the Caputo case, it just so happens that the pattern (more Ds than Rs) indicates that he didn't choose fairly. We still have the statistical model that Ds and Rs are independently chosen with probabilities p and 1-p, respectively, and there are of course many other ways in which he could have cheated. I tried to point out how the filter corresponds to both model choice and hypothesis testing, but I may not have been clear enough. Anyway, I don't mean that specification is dependent on design. There is a problem with first introducing specification as a rejection region, that is, indicating that it is a particular mathematical object, namely, a set, and then claiming that in biology, it "refers to function." In the flagellum example, the function would supposedly be motility, but how do you formalize that into a rejection region? After all, it is this region you need to compute the probability of, not merely the particular outcome that has been observed. It is to illustrate this objection that I chose to re-use Dembski's own examples: Caputo and the flagellum. Set them side by side. What has been observed? Caputo: a sequence. Flagellum: a protein configuration. What do we compute the probability of? Caputo: a set of sequences, containing the observed one and characterized by containing as many Ds as, or more Ds than, the observed sequence. Flagellum: ? "...out of the blocks stumbling...", let's skip the insults, shall we? PO olofsson
PaV: You are right to mention the late, great Sir Fred Hoyle, who is indeed the originator of the 747 example. He is a personal scientific hero of mine, a man not afraid to swim strongly against the tide of the times -- and that includes when I disagree with him! I miss having him around . . . On the 747 example and its cogent force, kindly see my always attached, Appendix 1. (I move it to semi-molecular scale and show what is going on in light of the known random forces at that scale, using the stat mech approach's principles, but fairly light on the real math of that approach.) For some reason, I didn't think to take up Prof Olofsson on it, but going back to my copy of Cr Scis Answer their Critics, I have thought the Creationists, who took up the theme, were right and had the better of the exchanges on the merits, in the 80s and 90s. Indeed, I once had a months-long personal snail mail exchange with the late Henry Morris (whom I found to be a gracious Christian Gentleman) on the issues, and he brought up the question of CSI, in retrospect based on the point cited from Thaxton et al above! He was a distinguished professor of Hydraulics engineering, and by virtue of that had depth of understanding on thermodynamics issues. I came away impressed with him, and the key points he made on the issue in response to my citing of cases like the Red Spot on Jupiter, Hurricanes etc. [These largely reflect deterministic forces, BTW, and are therefore programmed by nature. A snowflake has hex symmetry because of that, and the varied shapes are shaped by the micro-climate where they form as they form. This is for practical purposes a product of chance plus necessity at work, and does not fit in with the kind of specification we are speaking of. Similarly for crystals in general.] In short, let us move away from pigeonholing and dismissing, to addressing the matters on the merits. Cheerio GEM of TKI PS: A read of Ch 8, TMLO will probably be helpful to us all. kairosfocus
Jerry: "the reply will be we do not understand the chance and laws that governed DNA formation" To which I reply: that's a bald-faced logical fallacy called an appeal to ignorance. DaveScot
Professor Olofsson: Pardon my citing a bit of context, but let's go to part of the opening of your professional paper (which cites a 2007 paper, so is current):
A classic creationist argument against Darwinian evolution is that it is as likely as a tornado in a junkyard creating a Boeing 747 . . . . Whether the arguments against Darwinian evolution are based on tornadoes in junkyards or bacteria, the key concept for evolution critics is improbability. Since mathematics, probability, and statistics are highly developed disciplines, and are well established as indispensable scientific tools, it is only natural that evolution criticism has turned mathematical, trying to establish objective criteria to rule out chance explanations. The chief advocate for this approach is William Dembski whose ideas are described in his books The Design Inference [2] and No Free Lunch [3], and also in various postings on his own website (gives URL of WD's site) . . .
And, your popular level article in its second paragraph reads:
Coulter has very cleverly written a fake criticism of evolution, much like the way NYU physicist Alan Sokal in 1996 . . . published a fake physics article in a literary journal, an affair that has become known as the “Sokal hoax.” . . . the very people he wished to expose. Coulter’s aim at antiscience is at the other end of the political spectrum. An equally unabashed rightist, she is apparently disturbed by how factions within the political right abandon their normally rational standards when it comes to the issue of evolution . . .
It is plain – sadly -- that in both cases, sir, you began by loading the language and slanting the issue from the outset. That strongly shaped my initial response, and for good reason, it still shapes my understanding of what you have said. I can only hope that in the end, you will reconsider the approach that I have just highlighted. Now, your further claim is: I suppose that he finally realized that his objections regarding my alleged “expanding” of the rejection region were in error. As for the Caputo case, I do not “grudgingly half-acknowledge” anything, I gladly acknowledge that the filter works here. I am frankly tired of his constant misrepresentation of my writing and refusal to acknowledge even the smallest possibility of even the slightest mistake in his own arguments, or in Dr Dembski’s filter . . . I will therefore note as follows: 1] Expansion issue: Kindly cf the end of 89 and the beginning of 90 above, where I point out the way you introduce the “expansion” and its rhetorical effect. (Having first noted under 1 in 89 that you introduce Sober's Bayesian critique as if that has the last word, i.e. you do not at all even allow WD's 2005 paper to speak in his own voice in response.) In that context, you on p. 7 introduce a point that WD and I – and Fisher for that matter -- would strongly and for excellent reason disagree with: A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats. Not so, as the 22/19 split is close to the peak and well within the bulk of the peak – which is precisely why the upper roll-off from that point on has in it 38% of the curve's area. By sharpest contrast, and as I detailed in 68, a 40/1 split onwards cuts off a fifty-billionth of the curve, and it is entirely reasonable that we would not see this in any fair selection process. And, if Mr Caputo were truly concerned to be fair, he would have noticed the run and corrected it in good time -- design by “negligence” in short. WD emphasised the EXTREMENESS of the 40/1 case as a key part of his case, and indeed the point of the Fisherian approach is that if the extremes of the skirt are beyond the reasonably available probabilistic resources, such should not appear in the sort of case we are looking at if the test was fair. My chart on the floor example shows why. [And, if the issue was to insert the auxiliary null hyp of a non fifty-fifty split on probabilities, the issue of Mr Caputo's unresponsiveness to the run immediately leads to the inference that he was happy with the result he was getting, or he would have fixed his machine. Design, in short.] 2] “I gladly acknowledge that the filter works here . . .” Regrettably, you contrasted “design thinkers” and “Statisticians,” and missed out on the fact that the chance and/or regularity and/or agency view of causal forces is a model, leading up to an inference that the “non-50-50” auxiliary null hyp was missed by WD and that this made a material difference. Then, you concluded: the explanatory filter is merely a description of the entire procedure of choice of model and hypothesis test. IMHCO, that reads like a grudging half-acknowledgement, but then I could of course be stubbornly misunderstanding you again . . . ;-) 3] “A real biological example [of CSI] would be very nice” DNA: 500,000 to ~ 3,000,000,000 base pairs, four states per pair, fine-tuned for life-function, thus bio-functionally specified and embracing a configuration space that at the 500 k end is ~ 9.9*10^301,029.
[Giving ample room for islands of functionality and for frequency of occurrence, 250 – 500 or 1,000 base pairs would be well over the Dembski UPB, 1 in 10^150.] Okay, let's all see if we can back off the voltage. Cheerio -- have a nice weekend . . . GEM of TKI kairosfocus
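The two tail areas contrasted in the previous comment, and the size of the DNA configuration space quoted there, are easy to reproduce (a sketch under the same fair-drawing model; the 500,000 base-pair figure is the low end cited above):

from math import comb, log10

def upper_tail(k, n=41):
    """P(at least k Ds in n fair drawings)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(f"{upper_tail(22):.2f}")    # ~0.38: a 22/19 split is unremarkable
print(f"{upper_tail(40):.1e}")    # ~1.9e-11: the 40/1 split sits far out in the tail

# configuration space of a 500,000 base-pair genome, four states per base pair
print(f"10^{500_000 * log10(4):.0f}")   # about 10^301030, i.e. ~9.9*10^301029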
Prof Olafsson: Hello. Please let me join the conversation here. Before I get to something more substantive, please let me point out to you the irony with which your linked paper begins. You attribute to "Creationists", and then to advocates of "Intelligent Design", the supposed CLASSIC ARGUMENT against Darwinian evolution encapsulated in the statement that such evolution "is as likely as a tornado in a junkyard creating a Boeing 747." The irony is (see here) that this remark was made by a Nobel prize winning astrophysicist by the name of Sir Fred Hoyle, who was neither a creationist, nor a theist, but rather an agnostic/atheist who believed in panspermia. It's a shame you come out of the blocks stumbling. But the more substantive observation is this: (1) Most of the objections you make derive from your Bayesian approach to statistics, which is rebutted in a paper by Prof. Dembski that KairosFocus provided as a link in one of his responses. (2) Your notion of "specification" is not the one that ID uses. First, on p. 4, in reference to a hand of cards you say, "The particular feature of the hand of thirteen hearts is, in Dembski's terminology, that it is specified and this makes you think that something other than chance was responsible." Later on, p. 5, in discussing the Caputo ballot line example, you write: "Caputo's sequence is specified in the sense of indicating a pattern of cheating." "Specification", as I understand it, does not work that way. Specification simply means that some complicated object has a particular pattern, one that makes it unique. Thus, if you had a hand in Hearts where all 13 cards were Hearts, that is no different than one in which only 4 were Hearts, and 3 Spades, etc. From both of the examples I quoted, you seem to believe that some sort of design must be integral to a pattern (it happens to be cheating in both the cases you supply) before that pattern can be considered to qualify as a "specification". I'm wondering if you're aware of this misconception on your part. Having pointed this out, I can't help but ask this: if, as a statistician, you were asked to examine and analyze the selection of names by Caputo, and, if as a statistician it was obvious that cheating was involved (which is fairly easy to arrive at when, in a choice between two variables supposedly chosen at random, one of the variables shows up 40 out of 41 times), then why wouldn't your "null hypothesis" be that 'cheating was involved'; i.e., p is greater than, or equal to 1/2? That seems to make sense to me. As a statistician, you would then only have to do the required calculations to reach the conclusion that your "null hypothesis" is confirmed. In what way, then, would there be any need at all for an "alternative hypothesis"? Finally, I want to point out that your criticism of Dembski regarding his stipulation of biological systems as being 'specified' if they have function is also off the mark. I'll just quote from p. 148 of NFL: "Nor is specification, or as it is also called biological specificity, at issue. For instance, historian of biology Horace Freeland Judson attributes the twentieth-century revolution in biology to 'the development of the concept of biological specificity.' "Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them.
Hence these systems are specified in the sense required by the complexity-specification criterion (see sections 1.3 and 2.5)" Again, "specification" involves a "pattern"; not a "design inference" such as "Caputo is cheating". PaV
Prof. Olofsson, I appreciated your critique and I think it shouldn't be ignored. I have an idea for getting your work more consideration. I'm sending you an e-mail from my gmu account. Salvador scordova
Dear all, I read through Mr Kairosfocus's new, very long, post. As he does not mention it anymore, I suppose that he finally realized that his objections regarding my alleged "expanding" of the rejection region were in error. As for the Caputo case, I do not "grudgingly half-acknowledge" anything, I gladly acknowledge that the filter works here. I am frankly tired of his constant misrepresentation of my writing and refusal to acknowledge even the smallest possibility of even the slightest mistake in his own arguments, or in Dr Dembski's filter. On the upside, there are open-minded people like Atom, who disagrees with me but at least understands my arguments and doesn't try to misrepresent me. Thanks to Atom for being clever and civil and even trying to help make Mr Kairos understand. I encourage everybody else to read my article and feel free to discuss it with me. Again, thanks for letting me partake in the debate even though I present a critical point of view. For newcomers, my criticism is only directed against the filter's usefulness in applications in evolutionary biology. My criticism comes from the point of view of probability theory and mathematical statistics. I decided to post it here as Dr Dembski, in his original post about Dr Padian, pointed out that even the most qualified mathematician among his detractors (Shallit) was no expert in probability theory (note though that none of the supporters mentioned by Dr D is a probabilist either). As my expertise is in probability and statistics (more the former than the latter), I thought that my criticism might be of interest. As for CSI, I don't have much to add to what jerry says. It is a very ambitious concept within a very ambitious program. A real biological example would be very nice. PO olofsson
Jerry, The definition was expressed in symbolic form in Dembski's paper Specification: The Pattern That Signifies Intelligence. It only took about 10 symbols! A 200-word definition would be a colloquial dilution of the otherwise rigorous description Bill gave. That said, I'll give a colloquial definition (which means it's an approximation, with all the pitfalls associated with lack of rigor).
Specificity is the degree of improbability of a given pattern
A 6 amino acid protein binding site has high specificity (1 in 10^20). An eight character password has a specificity of 1 in 26^8. The description of specificity is dependent on the supposed distribution being used to describe the specificity (say equiprobable versus normal or Poisson or whatever). Dembski's "Information as a Measure of Variation" factors out the influence of errors in our estimation of the "correct" distribution. For example, if dice are loaded, the distribution is no longer equiprobable. However, Bill shows that for certain patterns this is a moot point for high complexity issues. Most coins have a 50% chance of being heads. But what if it had a 99% chance of being heads, and your hypothesis was the reverse, namely that it has an a priori chance of being only 1% heads? How bad of an error will that be on average when evaluating a coin pattern of say 500 bits? The penalty is a mere 6.6 bits at worst on average. At some point however, if the distributions look too magical (say they result in 500 bit errors), then one can say the distributions are themselves designed or fine-tuned. For example, if a chance distribution results in an astonishing pattern, we can estimate the fine tuning involved. Let's say you happen to come across scrabble letters that have a habit of self-ordering into your name when randomly shuffled. Would we not think the distribution was fine-tuned, practically magical? The latest math describes how to account for the level of fine-tuning. The fine-tuning is described with reference to the simplest possible distribution. Dembski's latest ideas then account for: 1. chance hypothesis 2. fine tuning of chance hypothesis distributions His earlier works had only #1 (which he acknowledged), but he fully set out to solve #2. The critics only gave anemic responses to his latest. It would appear few even understood the papers. Given what Behe has observed in today's world, it would appear that, if evolution were true, mutations in the past were fine-tuned; they obeyed a different distribution than what we see today. The level of difference is the level of fine tuning involved. Dembski's math relates the level of fine-tuning needed if indeed the mutations were front-loaded and no longer in evidence today. Furthermore, there are patterns that resist explanation by reference to ANY hypothetical or imagined chance distribution function. One such pattern is a self-replicating computer. Amazingly, life is such a computer. scordova
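A minimal sketch of the "degree of improbability" idea expressed in bits, using the password example from the comment (the conversion is just the negative base-2 logarithm of the probability, under an assumed uniform distribution):

from math import log2

# eight-character password over a 26-letter alphabet, all passwords assumed equally likely
space = 26 ** 8
p = 1 / space
print(space)                    # 208827064576 possibilities (~2.1e11)
print(f"{-log2(p):.1f} bits")   # ~37.6 bits of specificity under the uniform assumption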
kairosfocus, Dave, I don't disagree with most of what you are saying. All I am saying is that in order to invoke CSI, one has to apply the definition with whatever supplementary information is available to delineate it from other events. What do particular coin tosses and English sentences have in common such that each meets the CSI criteria and other things don't? What is similar between Caputo's drawings and DNA? Why isn't a thunder storm CSI? If you argue that thunder storms are generally understood by chance and laws, then the reply will be that we do not understand the chance and laws that governed DNA formation. I am off for the day so I cannot add more to this at the moment, except that the thing in common is that the pattern is specified by or specifies another independent pattern. But how you make that play out in terms of statistical testing is beyond me at the moment. Maybe PO would want to chip in now that we are off criticizing his paper. jerry
Biological CSI in 200 words: Complexity is any pattern where the possible number of different permutations exceeds 10^150, Dembski's universal probability bound (derived from the estimated number of elementary particles in the universe and the number of events available to them over its history). Any DNA sequence of 100 or more codons easily qualifies as complex, as does any protein of roughly 120 or more amino acids. Specificity is the ability to independently describe a given complex pattern. An amino or nucleic acid string with no known function generally has no specificity. There's no way to independently describe it in a way that makes it somehow distinct from most or all of its 10^150 peers. Contrast this with a protein complex that is precisely shaped so that it exchanges oxygen molecules for carbon dioxide molecules (i.e. hemoglobin) while excluding most or all other molecules from the exchange. There may be many permutations amongst the 10^150 (or more) possibilities which conform to the molecular exchange specification, but the vast majority of those permutations do not conform. The conformance or lack thereof can be and has been experimentally tested enough to be confident that most of the possible patterns do not conform to the specification. Thus we have in the hemoglobin molecule complex specified information. This can be easily applied to any pattern. ------------------------------ There you have it in less than 200 words. Caveats follow: 1) If CSI is not identified because an independently given pattern cannot be stated, it doesn't follow that CSI is absent. The failure to identify CSI may be due to lack of knowledge about the pattern. 2) If CSI is identified, it doesn't automatically mean a design inference is warranted. Probabilistic resources other than intelligent agency that could potentially generate such a pattern must be carefully evaluated. A snowflake might have CSI specified by crystallization patterns, but it fails a design inference because the well-understood, unintelligent interplay of natural forces can generate such crystal patterns and we can observe them in the process of doing it. The rub between the design inference and the chance & necessity explanation is that RM+NS is asserted by many as a sufficient probabilistic resource to generate all the CSI in living things. Design proponents assert that RM+NS is insufficient. Behe's new book "The Edge of Evolution" does a wonderful job of using empirical observations (i.e. confirmed hard data) to delimit the capabilities (and lack thereof) of RM+NS. DaveScot
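As a rough companion to the definition above, here is a minimal Python sketch of the two-part screen it describes. The 10^150 threshold and the "independent functional description" test come straight from the comment; the function names, and the shortcut of passing the specification in as a simple flag, are my own simplification rather than Dembski's procedure.

import math

UPB_LOG10 = 150   # log10 of the 10^150 threshold used as the complexity cutoff above

def is_complex(alphabet_size, length):
    # True if the number of possible sequences exceeds 10^150 (compared in log space).
    return length * math.log10(alphabet_size) > UPB_LOG10

def is_csi(alphabet_size, length, independently_specified):
    # Crude screen: complex AND matching an independently given functional description.
    return is_complex(alphabet_size, length) and independently_specified

print(is_complex(20, 100))   # 100-aa protein: 20^100 ~ 10^130 -> False, just under the bound
print(is_complex(20, 120))   # 120-aa protein: 20^120 ~ 10^156 -> True
print(is_complex(64, 100))   # 100-codon DNA sequence: 64^100 ~ 10^181 -> True
print(is_csi(64, 100, independently_specified=True))   # e.g. a gene coding for a working protein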
Jerry: "Why don't the four of you develop a definition of specificity that is less than 200 words long, preferably shorter." By that do you mean dumb it down enough so that even ideologically blinded chance worshippers can grok it? Not possible. Chance worshippers are world-class masters of the ad hoc. Everything about evolution except for trivial, observable cases like antibiotic resistance requires an ad hoc story about how chance and necessity conspired to produce large changes in organisms through a long sequence of tiny changes, where none of those tiny changes were observed in the past and where similar changes resist observation in the present. Ad hoc is not a problem for chance & necessity dogmatists. There's a lot less ad hoc in the specificity part of CSI than there is in making up stories about how random mutation and natural selection drove phylogenesis. The problem IMO isn't that chance worshippers can't understand CSI; the problem is they either refuse to understand it or, if they do understand it, they refuse to admit to understanding. DaveScot
Note: Cf the CSI definition a la WD, here and the shortish discussion around that definition here. [I note too that refinement or adjustment to a mathematical model of a concept does not constitute equivocation and confusion regarding the concept.] WD's site as in the links for this blog, has much more. My own always linked has a discussion of the issue in light of the linked concepts and issues. kairosfocus
Jerry Thanks for your comment on definition. Basic problem: in general, for many cases of great interest, we simply have not been able to define core concepts by composing precising statements. Sometimes, e.g. "justice," after thousands of years of trying. In terms of biology, "life" is a well-recognised concept for which we can usually tell examples from counter-examples, but there is as yet no such general statement. Instead, we work based on family resemblance, case by case. And, successfully. In the case of CSI, DNA and associated molecules are excellent cases in point where we can make effective analyses. We know these are digital element strings [AGCT monomers, amino acid residues, etc], and we can measure the information content of the strings, in bits -- factoring in frequency of occurrence of individual states, for instance. We are dealing with complexity on the scale of 500,000 to 3-4 billion elements with DNA systems in life. Such strings are contingent, complex and functionally specified by the requisites of a self-assembling molecular information system at the core of life. They are also fine-tuned: relatively small random perturbations typically destroy functionality, partly or wholly -- which is how many point mutations, even those conferring selective advantages, often work. In short, they exhibit CSI. We also know that in every case where we directly observe the actual cause of CSI, without exception, it is agency. This is not hard to figure out, as intelligent agents are able to target the "hot" configurations in the space of all configs, changing something that is infeasible by chance into something that is very feasible by intent. The problem, IMHCO, is not with what we know or should reasonably know, but with where that may point relative to the dominant worldview paradigm of evolutionary materialism. GEM of TKI PS: I trust the above should be enough to show that I am not indulging in selective, misleading, out-of-context citations -- aka quote mining -- of PO, contrary to his recent claim to that effect. (I freely confess to highlighting the sometimes subtle [and I suspect inadvertent] rhetorical implications of how he built up his case, to bringing out how he perceives design thinkers and how that distorts the reading of what we have said, and to challenging the substance of the case, with special reference to the Caputo court case as an example of the EF in successful action.) kairosfocus
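Since the comment above speaks of measuring the information content of such digital strings in bits, "factoring in frequency of occurrence of individual states," here is a minimal Python sketch of that sort of calculation using the ordinary Shannon measure on observed symbol frequencies. The example string is made up for illustration; this is only the standard Shannon estimate, not any ID-specific measure.

import math
from collections import Counter

def shannon_bits(seq):
    # Shannon information of a string, estimated from its own symbol frequencies.
    counts = Counter(seq)
    n = len(seq)
    per_symbol = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_symbol, per_symbol * n

dna = "ATGGCACGTTTAGCCGATAGC"   # made-up 21-base string
per_symbol, total = shannon_bits(dna)
print(f"{per_symbol:.2f} bits/base, {total:.1f} bits total")
# With A, G, C, T roughly equifrequent this approaches 2 bits/base, so a
# 3-billion-base genome carries on the order of 6*10^9 bits by this crude measure.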
kairosfocus, Patrick, PO, Salvador, Why don't the four of you develop a definition of specificity that is less than 200 words long, preferably shorter. Then we can see if it applies to all the examples that are supposed to be complex specified information (DNA, certain coin tosses, bridge hand with 13 of same suit, Caputo's selections, English sentences, functional proteins etc) and not apply to those other examples that should be excluded (rock outcroppings, thunder storms, random polymers, random coin tosses, typical bridge hand, random selections of party affiliation etc). Then we can see if statistical analysis can be applied to each example to distinguish each. Until that can be done in an orderly but not ad hoc fashion, CSI will not carry any weight as an argument. jerry
Patrick: The core concept of CSI is simple enough and IMHCO plain enough. The problems come in with [1] mathematical modelling and [2] dealing with the sort of objections that are often made, some of which traffic in [3] serious and insistent misrepresentations by people who should know and do better. I note again that the core CSI concept came out of the origin-of-life (OOL) studies around the turn of the 1980s and is NOT an ID concept in its origin. As Thaxton et al. noted back in 1984:
Only recently has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system---enzymes and nucleic acids---are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of "letters" in the book is highly ordered, but the book contains little information since the information presented---the single word or sentence---is highly redundant. It should be noted that aperiodic polypeptides or polynucleotides do not necessarily represent meaningful information or biologically useful functions. A random arrangement of letters in a book is aperiodic but contains little if any useful information since it is devoid of meaning. [NOTE: H.P. Yockey, personal communication, 9/29/82. Meaning is extraneous to the sequence, arbitrary, and depends on some symbol convention. For example, the word "gift," which in English means a present and in German poison, in French is meaningless]. Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being [aperiodic] and in a specified sequence.5 Orgel notes: Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6
More recently, Dan Peterson gave a simple explanation based on a typewriter that [1] can only type one letter -- order without contingency; [2] produces a longish random chain of letters -- complex, as in having high contingency, but with no meaning; and [3] produces an equally long meaningful phrase -- complex and specified, being a message in English. Not very complicated, as big ideas go, and one that we intuitively apply all the time; e.g. to infer that real agents stand behind the text strings in this thread, not mere lucky noise. [Cf. my always linked.] WD et al. recognised the significance of it and are trying to work with the idea, model it mathematically, and apply it to real-world cases (which admittedly can get hairy mathematically and conceptually). Trust this helps you [and onlookers] this time around. GEM of TKI kairosfocus
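A crude way to see the typewriter contrast is to classify strings by (a) whether they show any contingency at all and (b) whether they match an independently given specification. In the minimal Python sketch below (my own illustration, not Dembski's or Peterson's procedure), a small word list stands in for the independent specification, and a hit-rate threshold stands in for a proper probability calculation.

WORDS = {"only", "certain", "sequences", "of", "letters", "correspond", "to",
         "useful", "biological", "functions"}

def classify(text):
    # Order: effectively one repeated symbol, no contingency.
    if len(set(text.replace(" ", ""))) <= 1:
        return "order (no contingency)"
    words = text.split()
    hit_rate = sum(w in WORDS for w in words) / max(len(words), 1)
    # Specified: the string matches the independently given description (here, English words).
    if hit_rate > 0.9:
        return "complex and specified"
    return "complex but unspecified"

print(classify("e" * 60))                              # the one-letter typewriter
print(classify("qzv owxk jrte bnmp ydul cafs gihw"))   # random chain of letters
print(classify("only certain sequences of letters correspond to useful biological functions"))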
4] "In contrast, . . . " PO proceeds: Now that Caputo's sequence has passed the second step, the explanatory filter rules out regularity and chance, and infers design (in this case, cheating). In contrast, a statistical hypothesis test of the data would typically start by making a few assumptions, thus establishing a model . . . --> But in fact, the EF starts with a valid model of causation: [a] chance and/or regularities and/or agent action. --> It then proceeds to find and apply characteristics, namely that [b] natural regularity (NR) does not dominate contingent outcomes, [c] the presence of CSI distinguishes, reliably, chance from agency, and [d] this is a case of CSI relative to available probabilistic resources. --> And of course, the point of the EF is that the null hypothesis between agency and chance is: chance, which is rejected on encountering the pattern the EF looks for -- CSI. --> The pattern is reliable as a sign of agency in every case where we do directly know the causal story, and is relied on routinely in inferences such as the NJ court made on Caputo. [Also, the agent-chance-regularity causal model is of course valid.] --> We simply note in passing the subtle attempted put-down of WD [who is qualified in the relevant disciplines] by contrast with model-making and hypothesis testing by real statisticians! 5] "a statistician would first . . . " PO continues: a statistician would first assume that the sequence was obtained by each time independently choosing D or R, such that D has an unknown probability p and R has probability 1 - p. The statistician would then form the null hypothesis that p = 1/2 which is the hypothesis of fairness . . . Next, it would be noted that the rejection region of at least 40 Ds has a very small probability, and the null hypothesis of fairness would be rejected in favor of the alternative hypothesis . . . The [explanatory] filter started by ruling out, based on Caputo's own account, the alternative hypothesis, and then tested the only remaining chance hypothesis, that p = 1/2. Once this final hypothesis is rejected, nothing remains but to infer design. In contrast, the hypothesis test started directly at the second step, only rejecting the particular hypothesis that p = 1/2. --> Boiling down: by skipping the steps that we have [i] a valid model of causation, and [ii] a case of contingency which removes regularities [air + fuel + heat --> fire, reliably] from the picture, PO is here trying to pull a third alternative out of the air [p was not half, by accident], to avoid the inference to agency. --> To do that, he fails to see that agents can use allegedly chance-based processes if they serve their agenda, i.e. if Caputo were really concerned to be fair, after a long enough "run" of Ds he would have tried to fix his machine -- long before the runs reached the proportions in the case. He did not, and that is also evidence of design. --> The dismissive tone and rhetoric are further underscored by . . . 6] "merely a description . . . " PO concludes: Upon hearing Caputo's account, however, the statistician will realize that his model assumptions were incorrect and in the end reach the same conclusion as the "design theorist." Indeed, it is a general observation that an unlikely outcome may not only cast doubt on the null hypothesis but on the entire statistical model. In this regard, the explanatory filter is merely a description of the entire procedure of choice of model and hypothesis test.
--> The principal "design theorist" in this case, WD, is of course very arguably well-qualified to call himself a "statistician." --> And, PO has been forced to grudgingly half-acknowledge that the ID approach has in fact WORKED as advertised here. --> PO then goes on to his "no more Mr Nice Guy" remarks on the flagellum, but [a] neglecting that WD speaks in the context of well-known effects of random mutation, i.e. loss of function, and [b] ignoring that the issue is the null and the alternative across the causal model, just as with the Caputo case above. (With the flagellum, of course, there is a very tight functional specification at work: an acid-ion driven electrical motor with stator, rotor, and paddle tail capable of forward and reverse drive. [Try to get that to self-assemble by random chance sometime . . .]) GEM of TKI kairosfocus
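For onlookers, the regularity-chance-design structure being argued over in the last two comments can be put down as a small decision function. This is only a schematic paraphrase of the filter as described in this thread, with thresholds and argument names of my own choosing rather than Dembski's formal definitions; the Caputo numbers at the end are the ones already quoted in this discussion.

def explanatory_filter(prob_under_regularity, prob_under_chance, independently_specified,
                       alpha=1e-150):
    # Schematic three-node filter:
    # 1. a known law/regularity that makes the event likely explains it as regularity;
    # 2. otherwise, if the best chance hypothesis keeps the event within the
    #    probabilistic resources (alpha), attribute it to chance;
    # 3. otherwise, if it also fits an independently given specification, infer design.
    if prob_under_regularity is not None and prob_under_regularity > 0.5:
        return "regularity"
    if prob_under_chance > alpha:
        return "chance"
    if independently_specified:
        return "design"
    return "chance (improbable but unspecified)"

# Caputo-style numbers: P(>= 40 Ds out of 41 | fair draw) ~ 1.9e-11, with an obvious
# specification; the court's effective bound is local probabilistic resources rather
# than the universal 10^-150 bound, so a looser alpha is passed for illustration.
print(explanatory_filter(None, 1.9e-11, True, alpha=1e-6))   # -> design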
3] Expanding the RR: Here we read: A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance. First problem? WD never advocated such an "evidence of cheating" reading of any result close to the middle of the distribution, and in fact such an expanded RR distracts from, in effect substitutes for, and thus rhetorically undermines the actual point WD emphasises in his own summary: WD, p. 11: one must also note that a preponderance as extreme as this is highly unlikely. In other words, it wasn't the event E (Caputo's actual ballot line selections) whose improbability the Bayesian needed to compute but the composite event E* consisting of all possible ballot line selections that exhibit at least as many Democrats as Caputo selected. This event -- E* -- consists of 42 possible ballot line selections and has improbability 1 in 50 billion. --> I hardly need to add that this is a strawman, and hardly a non-issue in the light of 1 above. Second problem? This is also the precise issue of expanding the RR that WD addresses in his paper, p. 4, in the context of Bayesian objections. WD, p. 4: . . . what's to prevent the entire range of possible coin tosses from being swallowed up by rejection regions so that regardless what sequence of coin tosses is observed, it always ends up falling in some rejection region and therefore counting as evidence against the coin being fair? . . . . The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice). Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity. Typically this form of complexity corresponds to a Kolmogorov compressibility measure or minimum description length . . . --> Now, simply speaking, Fisherian inference is about going to the tail of the distribution and ruling as unlikely to be by chance what is sufficiently far out relative to available probabilistic resources. --> This is what WD did, and he underscored the importance of the Caputo case being in the extreme, low-probability, D-advantage tail as a part of the issue in the filter. [Of course, while this case is well within the UPB of 1 in 10^150 -- i.e. nowhere near that extreme -- the issue of relative probabilistic resources makes the filter still relevant.] --> In short, the fair conclusion would have been that the Caputo case aptly illustrates a successful use of the Explanatory Filter in yet another real-world case. But of course, we do not see that . . . kairosfocus
Dave (and Atom and PO): The context for the renewed discussion is PO's intervention at 19 above, in which he linked his paper and claimed (referring to himself) that: Here is a piece that criticizes the filter from the point of view of mathematical statistics, written by somebody who seems to know what he is talking about. In 20 and 21, after just routinely looking back at the thread, I put up a dozen quick points on the paper, which IMHCO is a very poor critique, starting with using the loaded term, Creationism, then misrepresenting both Behe and WD, then citing a spate of one-sided opinions without a balanced look at what is going on, and in the course of this making a particular hash of the Caputo case. (On this last, I thought that by showing graphically and citing WD on what he actually said -- i.e. he was looking at how the extremeness of the deviation from the peak, in a case where there are some 2*10^12 members of the config space, and the independent specification combine to illustrate the point of CSI -- that would be enough to put the matter to rest. Plainly it is not.) A few further notes are in order. Pardon the details necessary to address the ad hominem rhetorical tactics [i.e. the idea that I am stubbornly not listening . . .]: 1] Sober and Bayesianism: In much of the thread above, PO distances himself from the Bayesians, but in fact this is how he introduces Sober's critique of WD [as a bridging step to his main analysis], without giving any balance to it: PO: The explanatory filter from The Design Inference has also been extensively criticized, perhaps most notably by philosopher Elliot Sober whose articles [6,14,15], amongst many other things, criticize its purely eliminative nature, advocating instead that sound scientific practice require that conclusions are based on comparative reasoning. Perhaps a chance hypothesis confers a small probability on the evidence, but how do we know that a design hypothesis does not confer an even smaller probability? [pp. 1 - 2] --> I hardly need to underscore further that, by giving a series of unfortunately one-sided summary statements and references on pp. 1-2, he also undertakes a biased literature survey in a nutshell. PO of course denies making such a lit survey. --> The cite of course is an allusion to the Bayesian claim, and does not engage the other side of that story, e.g. the points WD made in, say, his Fisher-Bayes paper [cf. as linked in the PS at 34 above]. 2] Caputo case PO: The crucial argument against Caputo was a probability calculation finding that a fair drawing procedure would produce such an extreme outcome with a probability of less than 1 in 50 billion. Based on the extreme odds against Caputo's ballot lines, the Court suggested that Caputo institute new guidelines for ballot line selections. [p.5] --> I of course highlighted what such a probability estimate does in a case where a bell curve approaches an inverted T, as the cluster of individual outcomes for configurations [X R/(41 - X) D] near the 20:21 peak grows explosively due to the impact of 41!/(X!(41 - X)!) as X nears 20 to 21. It is plain that in no wise should we expect to see an outcome in the config 1 R/40 D. PO notes: By Caputo's own account, he had used a fair procedure by drawing capsules from urns . . . He then says that on the strength of this, by taking his word, we rule out regularity. [p.6] He then correctly summarises the reason to infer cheating from the specification and improbability. My problem comes in with the subtle effect of a bridge comment on p. 7
. . kairosfocus
Patrick said, "Personally I thought it was well defined but the major problem was thoroughly explaining the concept without getting into essay-length definitions/examples" "As in, there isn't a summary version that is easily accessible to the general public. Which of course leads to people misunderstanding CSI unless they've spent the time to read the full length articles or even books…which few do." If this is true then I can see why CSI is the butt of jokes. I have a hard time believing that any definition requires a full-length essay to lay out. I don't believe you can ever have a fruitful discussion of any such concept. Are we talking about something more complicated than the "standard model" of physics? PO described how Dembski has changed his use of the term from book to book, so it is subject to equivocation. In the threads I mentioned it was pointed out how DNA is different from coin tosses, which are different from English sentences. Why are all of them specified? What definition encompasses each situation besides each being a low-probability event? Why are they different from a specific rock outcropping, which is also a low-probability event, or a thunder storm, which is an organized natural system and also a low-probability event? If such a definition exists it won't take an essay to explicate it. jerry
Continued: In a potential application in evolutionary biology, a typical simple null would be the uniform distribution. A composite null would specify a whole range of distributions and a worst case (hardest to infer design) would be identified and tested (not necessarily possible to do though). Not sure if Dr D has thought along these lines; I think he might have. Otherwise, perhaps a good idea? The only example I have seen is the flagellum from NFL, which is far from complete (which Dr D does not try to hide). I still don't see an easy way around the problem with the rejection region though. PO olofsson
Caputo: A composite null would be that p is less than or equal to 1/2, meaning that he might not just be fair to Republicans but "super-fair" (not realistic, but I'm merely illustrating simple vs composite). The worst case covered by the null is still p=1/2 (least favorable to Republicans in the null) so the test is the same as for the simple null. Now, a rejection rules out not only fairness but also cheating in favor of Republicans. olofsson
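A quick numerical check of the "worst case" point, under the usual binomial model (41 independent selections, each D with probability p): the chance of a result at least as extreme as Caputo's grows with p, so over the composite null p ≤ 1/2 the least favorable case is p = 1/2. This is my own sketch of the arithmetic, not anything from PO's paper.

from math import comb

def p_at_least(k, n, p):
    # P(at least k successes in n independent trials, each with success probability p).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.1, 0.3, 0.5):
    print(f"p = {p}: P(>= 40 Ds out of 41) = {p_at_least(40, 41, p):.3g}")
# The tail probability is largest at p = 0.5, so a rejection there rejects the
# whole composite null p <= 1/2 at once.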
Well...being cut off all the time...sorry. Another try to follow. olofsson
Oops, don't know what happened. Anyway, a composite null would instead be p ≤ 1/2 . . . olofsson
Hi jerry, Thanks for your comment. The relation to hypothesis testing would be to test a so-called composite null hypothesis rather than a so-called simple null hypothesis. A simple null hypothesis specifies only one distribution (or one parameter) whereas a composite null hypothesis specifies a set of distributions (or range of parameter values). The "worst case" is then tested and a rejection rejects the whole set at once. In the Caputo case, a simple null hypothesis would be to test if p=1/2 (fairness). The alternative would be p>1/2 (cheating in favor of Dems). A composite null would instead be p ≤ 1/2. olofsson
DaveScot, The hubbub is about Mr Kairosfocus repeatedly claiming that I in my article attempt to argue against the Court's decision in the Caputo case when I merely use it as an example, in precisely the same way Dr D does. Kairos is now stuck on this issue and, even though Atom tried to help me explain, he doesn't give up. I know he will not listen to me so I thought that some of you, whom he might trust, could explain it to him. He has even gone so far as to cut one of my sentences in half (in his post #68) to support his quixotic mission. (I believe some refer to such practice as "quote mining"; hasn't happened to me before). If you read the entire paragraph it is clear that I mean relevant in that example, not in the actual Caputo case. I was hoping we could get over this, and then perhaps his misunderstanding of Bayesian and Fisherian, because I would like to get comments and criticism on the objections I actually do have. I also feel sorry for Mr Kairosfocus spending so much time and energy on a nonissue. Cheers, PO olofsson
There have been two long threads here in the last 6 months that have tried to define the "specified" in CSI, and at the end of each there was no acceptance that it had been done. Lots of examples were given but no way to summarize all the various examples.
Personally I thought it was well defined but the major problem was thoroughly explaining the concept without getting into essay-length definitions/examples. As in, there isn't a summary version that is easily accessible to the general public. Which of course leads to people misunderstanding CSI unless they've spent the time to read the full length articles or even books...which few do. Patrick
continuing Specification in cellular automata is a little harder to see, but not very much so in well-chosen examples. My favorite is DNA supercoiling. Watch this video before reading further: https://uncommondescent.com/molecular-animations/how-the-cell-deals-with-supercoiled-dna-during-replication-and-transcription/ Presumably as life evolved DNA got longer and longer as organismal complexity increased. At some point supercoiling became a show stopper. A solution to the problem is a cutting/splicing tool that cuts a supercoil, lets the tension unwind, then splices it back together. That's a complex little tool. A specific tool. Topoisomerase, the tool that somehow appeared to solve the supercoiling problem, is an example of specified complexity. It's a long polymer, which makes it complex, but not just any old long polymer. It's a very specific polymer that conforms to an independently given specification: a DNA cutting, unwinding, and splicing tool. Saying this tool was found by the organic equivalent of flipping coins beggars belief. DaveScot
PO Not sure what the hubbub is about. IIRC the D and R are equivalent to binary digits randomly selected one at a time, like a fair coin toss. One expects an equal number of heads and tails to come up. The more flips, the closer to an even split you'd expect. If it were a million flips and the distribution was more than a single percentage point off from 50:50, I'd suspect the toss wasn't fair. That said, in 41 tosses 21:20 would be an expected result. 0:41 would be unexpected. However, the unexpected happens. In this case we should expect it to happen in some small number of cases out of 2^41 tries. Where misunderstanding usually happens is when specificity enters the picture. A 21:20 outcome is expected, but if it happened to be my social security number I'd be highly suspicious. My SSN would be as likely (or unlikely) as any other number to pop up. So why the suspicion? Because it happened to match an independently given specification. That's the difference between complexity and specified complexity. DaveScot
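DaveScot's million-flip intuition can be put in numbers: for n fair flips the proportion of heads typically fluctuates on the order of 1/(2*sqrt(n)), so a full percentage point of drift at a million flips is roughly a 20-sigma event. A minimal sketch (my own, using the standard normal approximation):

import math

def fluctuation_scale(n):
    # One-standard-deviation spread of the proportion of heads in n fair flips.
    return 1 / (2 * math.sqrt(n))

for n in (41, 10_000, 1_000_000):
    print(f"n = {n:>9,}: proportion of heads is typically 0.5 +/- {fluctuation_scale(n):.4f}")
# At n = 1,000,000 one standard deviation is about 0.0005, so a split more than a
# percentage point from 50:50 is ~20 standard deviations out -- effectively never by chance.
# At n = 41 the typical spread is about +/- 0.08, so 21:20 is unremarkable while 40:1 is not.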
Prof. Olofsson, There have been two long threads here in the last 6 months that have tried to define the "specified" in CSI, and at the end of each there was no acceptance that it had been done. Lots of examples were given but no way to summarize all the various examples. And this included many here who have read Dembski's books. However, given that, it seems like there must be some way to estimate the probabilities of the complexity that has arisen in biological systems, even if at every turn conservative estimates must be used. How this relates to hypothesis testing I am not sure. Biological systems are the only example in nature where a complex formation specifies the action of another part of nature. So while there are unbelievably complex combinations of molecules in nature, there is no example of any of them specifying another aspect of nature except in life. So maybe the problem has not been defined correctly or approached in the correct way. I have not seen anyone in biology provide probabilities for any event that has to do with evolution, though I certainly do not have a wide knowledge of the field (probabilities are used in genetics all the time but this is not really evolutionary biology, though it is related). Maybe biologists tend to steer away from it because they know it will lead into a taboo area of assessing Darwinian evolution. However, I find it hard to believe it cannot be approached in some way even if the exact probabilities are not likely to be known. Thank you for your paper and the interesting discussion that has ensued. jerry
And hi to DaveScot, I hear you are a legend over here! ;) Maybe you too can read my Section 3 and try to explain to Kairos what I am trying to explain there. Best, PO olofsson
Atom, Thanks again! You got it. Any sequence that has more Ds than Rs could potentially indicate cheating. However, any such sequence corresponds to a particular rejection region and it is the probability of that region that is decisive. So, if we observe 22 Ds, the relevant rejection region has probability 0.38. If we observe 40 Ds, the relevant region has probability 1 in 50 billion. I leave it up to you to try to explain to Kairos. He has decided not to listen to me. PO olofsson
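For readers who want to reproduce the two figures PO quotes here, a minimal Python check under the fair-draw model (41 independent selections, each D with probability 1/2); this is only a verification of the arithmetic, not a comment on either side of the argument.

from math import comb

def tail_prob(min_ds, n=41):
    # P(at least min_ds Democrats listed first in n draws), under the fair coin-flip model.
    return sum(comb(n, k) for k in range(min_ds, n + 1)) / 2**n

print(f"P(>= 22 Ds) = {tail_prob(22):.3f}")               # ~0.378, the 'about 38%' region
print(f"P(>= 40 Ds) = {tail_prob(40):.3g}")               # ~1.9e-11, about 1 in 50 billion
print(f"P(one particular sequence) = {1 / 2**41:.3g}")    # ~4.5e-13, less than 1 in 2 trillion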
GEM, I don't know who is missing what, but I think you and Prof O are talking past each other. PO's paper started off with the Caputo example. It agreed that in this case, the EF works. He did not expand the Rejection Region for that case. (It seems that you think he did.) He then says (to paraphrase) "Well, let's look at a completely different example to further show that the EF works, one in which we'd reject the design hypothesis because it has a completely different set of circumstances. In this case, let's say that instead, Caputo's ballot had only 22 D's and 19 R's. Could we conclude, in that case, that he cheated? No, since in that case the probability is 38%, well within the reach of chance." Does that make sense? Maybe I'm the one misreading it, but PO seems to think I grasped his point. Atom PS lol at the kid fish comment. :) Atom
I must apologise, while I left the PC in a hurry, an unsupervised child posted a comment. GEM of TKI kairosfocus
tookie, too bad you don't understand the kind of havoc this world would be in if the fish realized that the fish bait wasn't just lunch kairosfocus
Hi Dave I see you spotted my point! I agree that a pie chart would be a good way to see the probability, but the 42-strip bar chart shows the other side of the story, i.e. that for the configs from 41R/0D to 21R/20D to 0R/41D, we see a vast disproportion in how the individual outcomes are grouped. There are a lot more microstates for each config near the middle than at the two ends, so the strips at the ends are actually at microscopic scale, or thereabouts, for a reasonably sized chart. Caputo says in effect he threw a dart at random and hit the second-to-last strip on the right. Not likely, I'd say. Gotta go -- ferry duties call. GEM of TKI kairosfocus
A pie chart would make a better dart board. The 41R/D slices won't be but a few molecules wide unless it's a very large pie. If the pie chart was the size of the moon the 41R/D slices would be a bit less than one centimeter wide at the circumference if my quick calculation is correct. For a bit more perspective, if the pie chart was big enough to cover a football field you'd need a microscope to see the 41R/D slices and no normal dart would have a small enough tip to fall completely inside it. DaveScot
Double oops: 40 or more D . . . kairosfocus
OOPS: 4 or more D is the proper RR . . . kairosfocus
Hi Atom [& Prof Olofsson]: Thanks for the excerpt. It underscores my point, especially:
when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance.
In fact, the truly relevant RR is that which has in it 40 or more D [this being obviously a one-tail issue relative to the circumstances]. The probability of a set of 41 R/D selections falling into the properly relevant region by chance is truly tiny, 1 in 50 billion, not at all 38%. Hence the warrant for my "improper expansion of the RR" objection. A way to visualise what is going wrong in PO's paper is to: _______ 1] draw up a bar chart of all possible configurations of the 41 runs, from 41 R to 21 R/20 D to 40 D/1 R to 41 D, with the vertical stripes having height proportional to the number of possible specific outcomes that are X R/(41 - X) D. 2] Cut the chart out, and put the "jagged bell shaped" diagram on the floor with a backing of say bagasse board [promoting a Jamaican product here . . .], to protect the floor. 3] Go up on a step ladder and drop darts onto the chart from such a height that the scatter across the chart will be more or less even. [This is just for illustration, so we can ignore CEP issues.] 4] Observe the proportion of holes that are [a] 22 D or more, vs that for those that are [b] 40 D or more. (What this is, is actually the stat mech postulate, which can be phrased: since all microstates -- individual outcomes -- are equiprobable, the overall system will be in any given configuration [here X R/(41 - X) D] a fraction of the time that is proportional to the statistical weight of that particular configuration.) 5] The 22+ D tail will have 38 or so % of the holes indeed, but the 40+ D configs will have a much, much lower fraction, about 1 in 50 billion. 6] Indeed, in a realistically scaled chart and for a realistic amount of time/number of dart drops, it very probably will have none. (NB: This is in effect how I originally visualised what is going on in 1-tail and 2-tail statistical hypothesis testing a la Fisher, working back from my previous knowledge of stat mech and of the likely scale of fluctuations from the "equilibrium." Then, I could see why 5% and 1% limits make sense for typical situations and are not just arbitrary numbers grabbed out of the air. A lot of things we are not going to do 20 times in our lives, and even more we are not going to do 100 times, crudely speaking. But for other cases a 0.1% cut-off is sensible, and for others, bigger ones are sensible, depending on available probabilistic resources. On the gamut of our universe as observed, we are not going to see more than 10^150 quantum events.) 7] Point 6 is a way of saying that the probabilistic resources are insufficient to get an outcome in the relevant extreme tail, within a reasonable span of experience, if the actual outcomes observed are shaped by chance. 8] That is what highlights the result as not likely to be by chance [debates over Type I/II errors notwithstanding], especially since it fits an independently specified, electorally functional pattern that Mr Caputo presumably would have desired. _________ That, too, is why WD observed:
If Democrats and Republicans were equally likely to have come up (as Caputo claimed), this event has probability approximately 1 in 2 trillion. Improbable, yes, but by itself not enough to implicate Caputo in cheating. Highly improbable events after all happen by chance all the time— indeed, any sequence of forty-one Democrats and Republicans whatsoever would be just as unlikely. What, then, additionally do we need to confirm cheating (and thereby design)? To implicate Caputo in cheating it’s not enough merely to note a preponderance of Democrats over Republicans in some sequence of ballot line selections. Rather, one must also note that a preponderance as extreme as this is highly unlikely. In other words, it wasn’t the event E (Caputo’s actual ballot line selections) whose improbability the Bayesian needed to compute but the composite event E* consisting of all possible ballot line selections that exhibit at least as many Democrats as Caputo selected. This event—E*—consists of 42 possible ballot line selections and has improbability 1 in 50 billion. It’s this event and this improbability on which the New Jersey Supreme Court rightly focused when it deliberated whether Caputo had in fact cheated. [p. 11]
In short, I am pointing out that the issue hinges crucially on how one defines the suspicious-activity tail, and what has happened in PO's discussion is that he expanded a 1 in 50 billion shot to nearly 2 in 5. That is what I am objecting to, and it is pretty much the same issue as the point WD made in talking about multiplying rejection regions on p. 4 of his paper. In short, it is not a matter that I am plugging my ears and refusing to listen to PO, but that I have A FACT-BASED REASON for my objection, one that I can also see in the WD paper I have spoken of, linked and emailed to him. I guess I should add that the equivalent coin H/T discussion is an example in Nash, which I referenced above; indeed, the reasonable scale of likely fluctuations is how the statistical form of the 2nd law of thermodynamics is made to align with the classical form. (The same sort of point is why Granville Sewell often says that thermodynamically relevant systems don't spontaneously do utterly improbable and simply describable things within reach of our observations.) I trust the above helps clarify. GEM of TKI kairosfocus
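The dart-board picture above is essentially a Monte Carlo experiment, and it can be run directly: simulate many fair 41-flip sequences and count how often the D-count reaches 22 versus 40. This is a minimal sketch under the same fair-flip assumption; with any feasible number of trials the 40-plus region simply never gets hit, which is the probabilistic-resources point.

import random

random.seed(1)
trials = 1_000_000
at_least_22 = at_least_40 = 0

for _ in range(trials):
    ds = bin(random.getrandbits(41)).count("1")   # number of Ds in one fair 41-draw sequence
    at_least_22 += ds >= 22
    at_least_40 += ds >= 40

print(f"fraction with >= 22 Ds: {at_least_22 / trials:.3f}   (exact value ~0.378)")
print(f"fraction with >= 40 Ds: {at_least_40 / trials}        (exact value ~1.9e-11)")
# Even a million simulated 'dart drops' land in the >= 22 region about 38% of the time,
# but essentially never in the >= 40 region; one would need tens of billions of trials
# to expect even a single hit there by chance.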
Atom, Thanks. I know Kairos won't listen to me but maybe he'll listen to you. PO olofsson
So, we can see that the attempted expansion of the rejection region to try to make it out that 40 D 1 R is well within reasonable expectations utterly fails. The Court was right, WD is right.
GEM, I think you mis-read PO on that section. He doesn't say that Caputo's actual ballot selection (40 D's 1 R) can be expanded to 38%. He gave a counter-example of 22 D's 19 R's...the relevant section is on page 7:
It is important to note that it is the probability of the rejection region, not of the individual outcome, that warrants rejection of a hypothesis. A sequence consisting of 22 Ds and 19 Rs could also be said to exhibit evidence of cheating in favor of Democrats, and any particular such sequence also has less than a 1-in-2-trillion probability. However, when the relevant rejection region consisting of all sequences with at least 22 Ds is created, this region turns out to have a probability of about 38% and is thus easily attributed to chance.
Easy mistake to make, I almost missed it as well. But I think, on balance, he gave a charitable reading of Dembski in his paper and example. Atom
Prof Olofsson: I note your "final attempt." That attempt, IMHCO, unfortunately has yet to actually address the material points at issue, and it is clear that these points cannot be resolved between the two of us, but onlookers can read and follow up links for themselves. I will note on a few points: 1] 38% I have already noted enough to show how you and WD differ on the appropriate specification, and on why he objects to the arbitrary expansion of the rejection region that comes to the 38% figure. Now, consider 41 coins labelled D and R on the opposite sides. There is one way they can be arranged to show all D, and there are 41 ways they can be arranged to show 40 D, 1 R. The number of ways they can be arranged to show 20 D and 21 R or 21 D and 20 R is by contrast a vastly larger number, twice 41!/(20!*21!), and similarly for other close-to-even splits. This comports well with the intuition that 40 D, 1 R is far away from what we would reasonably expect by chance. In short, we see complex, specified information at work, and that is a strong sign indeed that Mr C did not use a fair coin toss or the equivalent to make his decision. So, we can see that the attempted expansion of the rejection region, to try to make it out that 40 D, 1 R is well within reasonable expectations, utterly fails. The Court was right, WD is right. 2] Fisherian? Again, I have pointed out just what I meant: you have made a particular objection that Bayesians have made, and WD answered cogently, and of course the Court holds to essentially the same point. So, sad to have to say, but important: whether you identify as a Fisherian is irrelevant to and distracting from the material point. 3] Lit survey: Again, note the material point: on evidence I summarised above in no. 20, you made [probably inadvertent -- thanks to Padian and co's work] misrepresentations of Behe and Dembski, and cited/summarised one-sided opinions on Mr Dembski's work as if they were the last word. To date, you have not corrected your misrepresentations, and you have not even acknowledged that there are two sides to the opinions of the mathematically sophisticated on the value of Dembski's filter. Nor have you seriously engaged what he has actually said (or what I have noted, for that matter, e.g. on Caputo). Now, I used "lit survey" as a shorthand for that summary you made. You have unfortunately not addressed the material issue, but have now focussed on my use of that term, which should have been plain enough from the context. 4] Flat Distributions: At least we agree it is relevant to Caputo. Just to clarify: nowhere do I state or imply that such a distribution applies to all or most cases, just to some significant ones. Indeed, I explicitly adverted to the Laplacian criterion as being relevant to cases where we have no reason to go to another distribution, even talking about expert elicitation and volcano hazard risk assessment here in Montserrat, where flat is a nominal starting point; then in effect the different scenarios are weighted relative to calibrated expert opinions. ________ I trust we can have profitable exchanges in the future. GEM of TKI kairosfocus
Vividblue, Thanks. If I believe I have anything interesting to say, I might return! PO olofsson
PO, Thanks for your input. Dont make yourself a stranger. I dont run things around here but I hope you will continue to exchange ideas on this site. My best to you Vivid vividblue
Salvador, The same to you! Peter olofsson
Prof Olofsson, Thank you for visiting. I hope I have the pleasure of exchanging ideas with you in the future. Many regards, Salvador scordova
Post scriptum, I would like to thank everybody here for allowing me to present and argue for a criticism of Dr Dembski's explanatory filter. I do not intend to intrude in other discussions you might have; this is a forum for the ID community which I respect. If this thread continues, and my opinions are of interest, I may post more comments. Otherwise, I bid you all farewell. Sincerely, PO olofsson
Kairosfocus, I will make one last attempt. 1. Caputo and "PO's objections" For the umpteenth time, I have no objections, I am not making any arguments against Dembski or anybody else. I am explaining how hypothesis testing/explanatory filter works. All strings of 41 Ds and Rs have probability (1/2)^41. Thus, if we were to observe 22 Ds and 19 Rs (as an example, hypothetically), we COULD argue that it indicates cheating in favor of Ds and has the very low probability (1/2)^41, thus fairness WOULD be rejected. However, we should not consider only this particular outcome but form the rejection region of all outcomes that contradict the null hypothesis of fairness. In doing so, we instead get a probability (a p-value) of 0.38 and do not reject fairness. I cannot understand why you claim that I "tried to expand the rejection region" etc. I make no argument. I explain. I and Dembski are in perfect agreement here. If anybody else is still following this thread, please read Section 3 in my paper and see if you can find any "arguments" or "objections" on my part. Kairos, do you really not understand or are you just joking??? 2. Yes, Fisherian. Bayesians don't do hypothesis testing, they don't have rejection regions, etc, etc. I am not expanding anything. The point of the flagellum example is that Dembski does not even form a rejection region. Have you even read his books? 3. There is no "lit survey" in my article so this is a moot point. 4. Yes, that is the obvious model and null hypothesis in this particular example. It does not mean that EVERY hypothesis test is about uniform distributions. Goodbye, PO olofsson
H'mm: Several developments while I slept, including an email from PO, and a response from Sal that further underscores the need for PO to engage with WD directly and relative to what he has to say, both from the 1990s and up to today. Having said that, there are a few things we need to lay to rest then get on to several specific points -- noting that there is a tendency for longish articles to get clipped by the mod filter at UD. [Thus, in part, why I have pointed to the linked and emailed.] a] N-word. "Creationist" - as I have long since pointed out and on abundant facts - tends to be misused in academic, legal, educational and media contexts as a term of dismissive prejudicial contempt. [I have not in this context raised Hitler as a comparison, but do note that there is a line of descent in the history of ideas from Darwin's now well-known social darwinism and the consequences in Hitler's behaviour, which the former would have been horrified to observe. Cf the recent O'Leary thread on this here at UD.] b] Ms Coulter: I read her book, found it distasteful, and emailed both her and her online editor on the misbehaviour she indulges while naming the name of Christ. In that book, there is a noticeable positive and substantial difference in tone and level in the chapter on ID, which is easily accounted for by the assistance she had with the topic. c] Sokal: if PO agrees with me that it was a betrayal of trust, then he should not have used it in the way he did in the popular article he wrote. Now, on points, duly noting PO's quarrel is with Dembski not me. I for good reason stand by my remarks above on his rhetorical approach, which drastically needs to be revised. 1] Caputo: First, on the evidence of Dembski's semi-popular paper as linked etc, PO's objections are in effect the same as those of the Bayesians. In effect he tried to so expand the rejection region that its probability rises to 38%. So, we are invited to lose sight of the basic fact that it is extremely unlikely that 40 of 41 cases of ballot papers leading with a D [one with R] - known to affect voting, C being of course a D - is most credibly explained by intent not chance. That is an obvious strawman fallacy, and Dr Dembski properly laid it to rest, cf. pp 11 - 12. Of course any one 40:1 outcome or the cluster of very similar o/comes are very improbable, but any particular o/come or string standing by itself is just as improbable. What makes the difference is specification, here, a 40:1 split in an obvious agenda-friendly direction, i.e. a clear case of CSI best explained -- note the empirically anchored abduction here -- beyond reasonable doubt by agency, not chance. In this case, a little common sense and knowledge of real-world statistical reasoning go a long way. 2] Fisherian? In the case in view it matters not what PO thinks he is, he is using a type of objection that Bayesians are using, and so WD's response to them is relevant to his case. (I speak here of WD's earlier remarks, p. 4, on expanding the aggregate rejection region until it swallows up the feasibility of rejecting the null credibly. [Cf. from "But if that is the case . . . " and his response that the answer is to limit RRs to low complexity patterns, i.e. simple specifications, e.g through Kolmogorov compressibility. 
He notes that this tightens up what was implicit/tacit in Fisherian testing from the beginning.]) 3] Balls and courts: I have repeatedly invited PO to first balance his lit survey, while making his characterisations of ID thinkers less simplistic and prejudicial/ strawmannish. -> He has on the evidence refused to do so, even dismissing the issue with the idea that he is in some cases being "lighthearted." [ So, my retort: "Fun fe yuh is death to me . . ."] I have asked him to interact specifically with what WD actually has to say, so we can see why he differs, step by step. -> This too, is missing in action. (To aid that process, I have linked and emailed a specific paper, now taking up a case in point from it, and giving page references.) 4] On specification and probability distributions: The Caputo case also brings out the issue of how relevant a flat probabilistic distribution can be: any given string of 41 Rs and Ds is equiprobable on a fair selection of the first name on the ballot. [That this is so far from the overwhelming group of distributions that will cluster fairly close to a "50-50" distribution is what is telling us that the "coin" being used may well not be fair. Notice, how the idea of a rejection region comes up very naturally.] Next, the particular outcome: 40:1 comes from a simply and independently describable specification. Namely, we do not need to in effect reproduce the string to describe it, but can summarise it so: "40:1 in favour of Mr C's party." Complex and functionally specified, leading to agency as the best explanation. And, the court's own conclusion. 5] Lucky noise: Now, whatever debates may be had over WD's particular models of the filter, in fact this pattern is very familiar and widely practised. Indeed, PO infers from the complex, functionally specified bit strings in this thread that he is interacting with agents, not lucky noise. But, the probabilities and the specifications are similarly arrived at, and he intuitively rejects lucky noise. But, in fact, nothing in physics or logic forbids all of the apparent posts above save his own being lucky noise that got through the internet and becoming "read" as messages that make sense in language and content. [Cf. my always linked.] So, PO is being selectively -- i.e inconsistently and question-beggingly -- hyper-skeptical. GEM of TKI PS: Prof PO, you may find it useful to read a fairly old but nice and very readable [distinctly rare in that field, as I painfully remember . . .] short intro to stat thermo-d, Elements of Statistical Thermodynamics, by Leonard K Nash, Addison Wesley. kairosfocus
Nice to meet you, I saw you on TV the other night on C-Span!
Yes. That was me. Funny thing is I didn't watch the clip. Nice to meet you as well.
" I think the problems I point out are quite fundamental and not easy to fix"
I thought Bill's solution was quite good, as good as can be done. In math, it is hard to say whether the statistics of a physical phenomenon are the result of intention or not. What we can say, however, is the degree of likelihood that a particular distribution could be successful at producing a given pattern. I don't think this should be controversial... If there are other distributions than some simple equiprobable distribution, they can be ranked in order of their front-loaded information content relative to the equiprobable one. For example, if my reference distribution predicts a 99% chance of an event happening (say it is based on current knowledge of physical experiments), a different distribution, say one that allows only a 1% chance of this same thing happening, will differ by about 6.6 bits of information based on the Radon-Nikodym derivative. [see: Information as a Measure of Variation] What this line of reasoning tells us is how different the past must have been from today for evolution to happen. Darwin was arguing that mechanisms observed today are adequate to explain the distant past. Dembski's line of argumentation shows how divergent Darwinian evolution must be from current reality to be sustainable as a theory. What does this mean? If there is a 99% chance of rain in a given location (say the rain forests), but maybe sometime in the past there was only a 1% chance, in terms of bits it's only about 6.6 bits going from one distribution to the other. It is a believable step. It is a believable change in distributions even in the absence of physical knowledge of the details. However, when we start having to invoke distributions that are on the order of megabits away from conditions we see today, how believable can that be in the absence of empirical knowledge? By the standards of science, we certainly would be reluctant to ascribe to it the status of a scientific theory. One does not have to accept the ID part of Dembski's claim to see that his math adequately casts doubt on the adequacy of Darwinian mechanisms. I have asserted that his math, the form of the explanatory filter, demonstrates that Darwinian evolution essentially makes statements of the form: E = not-E. Bill shows this in mathematical terms. The EF was not phrased in terms of intelligence, but rather in terms of how well a phenomenon can be claimed to be the result of a given distribution. Whether one thinks this implies intelligent causation is another matter. The motivation of the EF was to show that whatever distribution an evolutionary biologist suggests, it will lead to a fatal contradiction of the form E = not-E. The EF led to the No Free Lunch arguments which both Bill and Bob Marks are now working on. It might be worthwhile to look at their more recent papers. It is quite different from the writings they offer for popular consumption... regards, Salvador scordova
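The "about 6.6 bits" figure above is just the base-2 log of the ratio between the two probabilities; here is a minimal check, plus the same calculation for a far more extreme shift, to show how quickly this "front-loaded" bit count grows. The interpretation in terms of Dembski's variational-information measure is Salvador's; the code only does the arithmetic.

import math

def bits_gap(p_reference, p_alternative):
    # Gap in surprisal, in bits, between two probability assignments for the same event.
    return math.log2(p_alternative / p_reference)

print(f"{bits_gap(0.01, 0.99):.2f} bits")   # ~6.63: the '1% chance' vs '99% chance' step
print(f"{bits_gap(1e-20, 0.5):.1f} bits")   # ~65.4: a distribution that turns a 1-in-10^20 event into a coin flip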
Hi Salvador, Nice to meet you, I saw you on TV the other night on C-Span! I don't doubt that his ideas have evolved (or, should we say, have become better designed...?). I'd be glad to learn more. I think the problems I point out are quite fundamental and not easy to fix, but I keep an open mind. Cheers, pO olofsson
Vivid, I doubt evolutionary biologists use probabilistic models, but that's not really the point, nor an important point of my argument. PO olofsson
The fundamental problem with Dembski’s application to the flagellum is that he has previously stated that he must rule out all chance hypotheses, yet, he tests only one. PO
The one that he tests has the least information content relative to all other possible "chance" distributions, namely, some sort of rote equiprobable one. The other distributions, which have a stronger chance of creating a pattern, can be ranked in terms of their information content via the Radon-Nikodym derivative.... This ranking of distributions then says nothing of intention, merely information content, with equiprobable at the bottom and one equal to the pattern itself at the top. By your comments here, your assessment is out of sync with his latest work. He is not ruling out that there are other possible distributions, but rather points out the information content those "improved" distributions would have relative to the equiprobable one. It then remains to be seen how believable a distribution is based on: 1. its existence from physical first principles; 2. the fact that an information-rich distribution is suggestive of front-loading. I'm afraid Dembski has outrun his critics and they are shooting at the form his ideas took some 15 years ago. He has since evolved his ideas, and they are more virulent and resistant to what the critics can throw at him these days. Salvador scordova
PO, "I have no expertise in evolutionary biology, not arguing from that point of view" But you are assuming that evolutionary biologists have a more realistic probabilistic model... What is that model, and how do they know that the formation of the flagellum is not probabilistically negligible? This assumption is an important part of your critique. "The fundamental problem with Dembski's application to the flagellum is that he has previously stated that he must rule out all chance hypotheses, yet, he tests only one." What are the other ones? Vivid vividblue
Vivid, Good point! I think such probabilities are very difficult to compute; it is hard to know what assumptions to make, etc. A biologist would not agree with the "random assembly" model, though, and would, at least qualitatively, argue that evolution of some particular feature is not that unlikely. I have no expertise in evolutionary biology, not arguing from that point of view. The fundamental problem with Dembski's application to the flagellum is that he has previously stated that he must rule out all chance hypotheses, yet, he tests only one. PO olofsson
Kairos, Well, as the onlookers can see, I have posted a few questions for you. Let me repeat: 1. Regarding the Caputo case, you claim that I "indulged a series of arguments and claims that would be very properly tossed out of court." Can you please explain what you mean? What are my arguments? What are my claims? 2. What Bayesian issues do I have with specification? Already in the abstract I point out that my criticism is Fisherian, not Bayesian. Please answer the questions instead of just repeating that the "ball is in my court" and referring to writings by Dembski. I have patiently and repeatedly tried to answer your questions and points of criticism and pointed out that I identify two main problems: (a) specification and (b) chance hypotheses (see the article for details). If you have any meaningful criticism against my arguments regarding (a) and (b), let's hear it now, succinctly and preferably without referring to Plato or Dembski, and without calling me obfuscatory or dismissive. There are plenty of balls in your court; I see none over here! PO olofsson
"whereas biologists realistic models of millions of years of evolution , reproduction and natural selection." This should read "whereas biologists use realistic models.." Vivid vividblue
Onlookers: At one level, all of this is an apparently pointless distraction from Prof Dembski's point about Dr Padian and others of his critics. Dig deeper: at the next level, the exchange above, sadly, all too aptly illustrates EXACTLY how Dr Dembski is being mistreated by his critics of the ilk of Padian et al. (Critics who won't do him the basic courtesy of accurately representing and responding to what he has to say, and to the fact that there are many serious thinkers who have sophistication in Mathematics who respect a lot of what he has to say. That tells us more about such critics than it does about the merits of the case for the design filter, and/or the real challenges faced by design thought.) So, the ball is firmly in Prof Olofsson's court. Let's see if he plays or is willing to forfeit the case by default. GEM of TKI kairosfocus
PO, If I understand correctly, one of the problems you point out is that, in your opinion, Dembski's filter uses unrealistic uniform probability distributions whereas biologists realistic models of millions of years of evolution , reproduction and natural selection. You also say that the evolutionary biologist would say that the formation of the flagellum is an event of probability that is far from negligible. But then you also say that it is unrealistic to expect biologists to calculate these probabilities. Well, if they cannot calculate them, how do they know they are far from negligible? Furthermore, what scenarios are you referring to? Vivid vividblue
Oh Captain, my Captain! PO olofsson
The good Ship of Knowledge :-) tribune7
Kairos, While I wait for your answer to my previous questions, let me correct another of your misunderstandings. You say: "And, on my reading of your remarks, and his, your issues with the specification class etc are precisely those typically raised by Bayesians" Not sure how many times I've said it, but my entire criticism is from the point of view of hypothesis testing, that is, the Fisherian point of view. There is nothing Bayesian in any of my arguments. But because you claim it is, perhaps you can point out exactly where? PO olofsson
tribune7, Thanks! Ehm...aboard what? ;) PO olofsson
Kairos, Regarding the Caputo case, you claim that I "indulged a series of arguments and claims that would be very properly tossed out of court." Can you please explain what you mean? What are my arguments? What are my claims? PO olofsson
olofsson, welcome aboard :-) tribune7
tribune7, Yes. PO olofsson
Prof Olofsson: I see you have now put up some points, including repeating that I am not addressing your point in the main. You will please observe that I have long since pointed to where you can profitably engage a serious level presentation of the Fisherian side of the debate, in a paper from Mr Dembski -- first by reference to his site which is accessible in the sidebar, and now at length by emailing you a copy and by providing a direct link here. In short, the ball is in your court, not mine. In the above, I have pointed out by excerpting or summarising points that reflect the trend of your serious level paper, problems with the broader pattern of your argument, starting with a one-sided and distorted presentation of the position of Mr Dembski and Mr Behe too. The general audience needs to know that, and those who are interested can easily enough read the two papers and make up their own minds in the absence of your own serious interaction with Mr Dembski's points. Now, on further notes: 1] Filter issues As a reader can see by looking at the head of the thread and by examining the Fisher-Bayes paper just linked, there is a DEBATE on Fisher and Bayes in Statistics, which Mr Dembski presents and addresses making his own basic point, in 12 pp. In that context, he points out that, and how, the CSI filter helps to resolve the debate. (And, on my reading of your remarks, and his, your issues with the specification class etc are precisely those typically raised by Bayesians; which he aptly answers.) When you can show us that you have interacted seriously with what Dembski actually has to say, then we can profitably move the issue forward. 2] The Caputo case: Dembski answers precisely to several of your objections, in the linked. Let's see an accurate summary of what he said in the linked, and then your further response to that. (The case as he discusses it will show just why your argument would have been tossed by a savvy judge as a clear case of selective hyper-skepticism in the teeth of what is well-received, reliable praxis in practical statistics.) 3] Laplacian indifference Onlookers: when we have no reason to prefer certain of the outcomes from a set of possible outcomes, the default position is to take the outcomes as equiprobable. This is as common as assessing the odds of a six on tossing a presumably "fair" die: 1 of 6, or 1/6. It commonly appears, for instance, in managerial decision-making, as a precursor to "loading" the possible outcomes, e.g. on the scenarios for the collapse or otherwise of the current volcanic dome here in Montserrat through expert elicitation. 4] Stat Mech: The core principle of statistical mechanics I was pointing to is the principle that for a given macroscopically observable state of a thermodynamic system, there are in general many microstates that are possible, and as a first option we take it that each microstate is equiprobable. From this base, much of modern physics was built. [The normal distribution is more relevant to the assessment of experimental errors of observation, due to many interacting and independent sources of error, e.g. in observing the location of a star through a telescope -- precisely what Gauss was doing.] My discussion in the always linked, appendix 1, will bring out the issues at stake in abiogenesis. GEM of TKI kairosfocus
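A toy sketch of the equiprobable-microstate assumption mentioned under 4], using N two-state "particles" (coins) so the counting stays small; each of the 2^N microstates is assigned the same probability, and a macrostate's probability is just the number of compatible microstates divided by 2^N:

    from math import comb

    N = 20  # number of two-state "particles"
    for k in (0, 5, 10):
        # Probability of the macrostate "k heads", with all 2^N microstates equiprobable.
        p = comb(N, k) / 2 ** N
        print(k, p)
    # The evenly split macrostate (k = 10) dominates at about 0.176;
    # the extreme macrostate (k = 0) has probability 1/2^20, roughly one in a million.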
Olofsson -- I mean the literal interpretation of Genesis: Earth is young, all species were created separately, and so on. OK, thanks. Then you agree ID is not creationism? tribune7
Kairos, We may have overlapped. My response was posted 3 minutes before yours. So, look above your posting and you'll find more answers and, for a change, a question for you. PO olofsson
I agree with what you say about Sokal's hoax. Also, I don't view ID as creationist. In fact, I recently pointed out to Behe that he is scientifically much closer to Richard Dawkins than to Ken Ham. Anyway, I am sorry if you feel offended by my Coulter Hoax article. I can assure you that many, many more have felt offended by Ms Coulter's writings over the years. My piece was an attempt to show that one does not have to hate her or feel offended by her. olofsson
Kairos, Have you read Ms Coulter's book? PO olofsson
PS. So "creationist" is now like the N-word. Any chance we can get Hitler into this as well...? olofsson
Prof Olofsson: I await your substantial response. In the meanwhile, I find it necessary to say a few frank things: 1] "Circles" -- for that, read "movement." The problems with the rhetorical approach in the first article you linked and the one I subsequently found, reflect the patterns of a movement centring on Mr Padian and his ilk. Those patterns are damaging and misleading, and should be corrected forthwith. This holds for a one-sided presentation on Bayes vs Fisher, just as it holds for the sort of antics Judge Jones indulged when he more or less copied a post-trial submittal by the ACLU, misrepresentations, basic and easily corrected factual errors and all. 2] "Lighthearted"? Funny, but that is the same excuse currently being offered by the woman who thought it an excusable "lighthearted" action to deliberately mislocate books in a bookstore and boasted of it on the web. That sort of "fun and games" rather reminds me of the frog speaking back to the boy who had just thrown a couple of rocks at him: "Fun fe yuh is death to me!" . . . in my native dialect. In short, there are basic, well-known duties of care we owe to people on the other side of important issues. To try to make light of failing to live up to the duty to respect, to not distort and misrepresent [leading to setting up and knocking over a strawman caricature] and the like, are inexcusable. Period. BTW also, there is more than one side to the Sokal incident, as even the Wiki article on the affair will show. In sum -- and as one who objects to pomo thought for what I consider excellent reasons -- I must say this in fairness: there is serious reason to infer that Mr Sokal betrayed a trust put in him by the editors of the relevant experimental journal, then trumpeted it to the world as a triumph. _________ Sorry to have to be so direct, but the matter at stake is deeply important, and has already cost serious people on "my" side great and undue damage to their careers and reputations at the hands of some on "your" side. As the descendant of slaves and a relative of a man hanged by an oppressive state in 1865 for standing up for the rights of poor, oppressed people, I tend to take such things seriously indeed. I trust you will understand why, and will reconsider your "lighthearted" approach. GEM of TKI PS: Onlookers may wish to look up the "first level," relatively accessible Dembski paper I spoke of, on the debate between Fisher and Bayes, here. PPS: By your definition, the ID movement is NOT a creationist movement of thought [as I have already noted in outline], and many of its leading proponents are precisely not creationists [as I have also noted]. kairosfocus
Kairos and Vivid, I am afraid this does not promise to be a very interesting discussion. My article is a criticism of Dembski's filter from the point of view of mathematical statistics, no more and no less. The main points of criticism are the problems with (a) the rejection region ("specification") and (b) ruling out all chance hypotheses by only testing one (see the quotes in my article). I am truly interested in hearing how these would be addressed by a "filter supporter" (regardless of the ID/Darwinism debate). Kairos, you do not address my main points but keep picking on words and phrases and bringing in other arguments that have nothing to do with the filter per se. You don't seem to grasp the narrow focus of my criticism. I will again address a few points below, but I don't think this debate will lead much further. Vivid, I encourage you to read my article and ask me any questions you have. Email please; I agree that this thread ought to be saved for its initial purpose. I posted the link to my article in response to Dembski's list of mathematicians for and against the filter. There is no probabilist on either side, so I thought the view of one would be appreciated. On to Kairos's points: 4. As I am criticising the filter, I list others who have done the same to point out that (a) I am aware of this and (b) I am presenting a different type of criticism. It is not a literature survey and there is no "appeal to authority." I certainly don't view some of these people as authorities. 5. Bayes vs Fisher is interesting in its own right but let's do that via email instead. As you notice, I accept the Fisherian approach in my filter criticism. 8. Again, I am using the Caputo case to illustrate hypothesis testing, just like Dembski uses it to illustrate his filter, and as you can see, I point out that the statistician and the design theorist would reach the same conclusion. Now, please, tell me how I "indulged a series of arguments and claims that would be very properly tossed out of court." 11. The space does not change, only the probability distribution. There is no such thing as a "basic standard null position." I have no expertise in statistical thermodynamics but I seem to recall that the normal distribution plays a prominent role. olofsson
Tribune7, In the context of my article, I mean the literal interpretation of Genesis: Earth is young, all species were created separately, and so on. PO olofsson
olofsson, just curious but how do you define "creationism"? tribune7
Kairosfocus, I'm not part of any circle that I know of. My piece on Ms. Coulter was just a little lighthearted retort to her chapters on evolution in "Godless." Lighten up! PO olofsson
"Vivid, I bear in mind your concern for side-tracking the thread" kairos, I think you may have misinterpreted wha I was asking for. I was not concerned that the thread would be sidetracked I was expressing my desire that you and the professor continue to have the type of discussion you are having right here on this thread rather than through e mail. Hope that helps. BTW your site has been very helpfull to me Thanks Vivid vividblue
PPS: I meant the Sidebar's link to the Design Inference web site. kairosfocus
PS: I poked around Prof Olofsson's web site and came across this article [under his ID section] which was apparently published in Skeptical Inquirer magazine. On the strength of that article, it seems that Prof Olofsson is part of the circle of those who have misrepresented ID in public repeatedly, using the sort of questionable rhetoric I noted above. In short, the above interaction with him may be more germane to the core issues of this thread than at first appears. kairosfocus
Cont'd: 5] In noting on Sober, I am pointing you to just such a presentation from the other side of the case, a source that can be accessed through the sidebar's link to Uncommon Descent. There you can see also why I noted that there is another side to the Bayes-Fisher story. And as a Mathematician, surely you know that the accident of chronology does not necessarily show the logical pattern in a process of reasoning. (Cf the "rooting" of Calculus in C19 - 20 after its “practical” emergence in C17.) 6 - 7] The inference across chance, necessity and agency has a 2400 year documented context, and an associated wealth of underlying issues and instances and ties to epistemology, the theory of knowledge. Dembski wrote in that context, which should be explicitly engaged. The reference to bio-systems implicates all that has been discovered from 1953 on about the information systems at the root of biology at cellular level as well. 8] The Caputo case is aptly illustrative of how real world inference to design works, and works reliably. In your addressing of it -- in a fairly lengthy discussion -- you indulged a series of arguments and claims that would be very properly tossed out of court. (In the real world of fact, means, opportunity and motive, we look for moral not demonstrative certainty; and in fact that is all that science is capable of. As a post-Goedel mathematician you can appreciate the point.) 9 - "10"] I see I missed a number! On 9, I was pointing out the sort of configuration spaces we are dealing with and the sort of very generous upper limit on the population of organisms that can have lived in our observed universe across its lifetime. Even such a crude calculation immediately shows how isolated viable life systems are in the config space of just DNA or proteins as potentially informational polymers. In short the strategy of expanding the specification set fails in the relevant case, spectacularly. (And this, through Leslie's "fly on the wall gets hit by a bullet" [cf my always linked] argument, holds even if radically different architectures of life are feasible. All that is required is local isolation, which is what the mutation leads to damage effect shows. In short, and extending Behe, life as we know it is fine-tuned, and such is again a strong and reliable empirical pointer to a Fine-Tuner.) 11] You used the example of getting to a grammatically and semantically functional English text by chance, using techniques to make the config space smaller. I pointed out that absent a serious fine-tuning argument that writes the DNA code etc into the laws of nature, this is an unjustified move relative to the basic standard null position: equiprobable distributions. Such a postulate is foundational to the success of statistical thermodynamics, which is the precise science that addresses the behaviour of molecules in Darwin's still warm pond. In short, I am not so sure as you seem to be, that I have not addressed your points sufficiently for the purposes of this semi-popular level forum, directing you to more serious sources for the technical level discussion. Okay, I will email . . . so we can let the thread remain on focus on Padian, Nature et al. GEM of TKI kairosfocus
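A minimal sketch of the corresponding Fisherian tail-probability calculation, taking the figures as quoted above (one R-first ordering in 40 drawings; published accounts of the Caputo case give slightly different counts) and assuming fair, independent draws under the null hypothesis:

    from math import comb

    n = 40  # ballot drawings, assumed fair and independent under the null
    # Rejection region: "at most one R-first ordering out of n".
    tail = sum(comb(n, k) for k in range(2)) / 2 ** n
    print(tail)  # about 3.7e-11, far below conventional significance levels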
Prof Olofsson (And VB): First, thanks for the responses on this "quietening down" thread. Vivid, I bear in mind your concern for side-tracking the thread. However, some of the Padian-NCSE mis-perceptions are still operative, and so there is a need to address them here. (Beyond this, I suspect that perhaps indeed an email exchange will help, and I will also send this to Prof Olofsson that way.) I therefore note, too, that a core concern above with Prof Padian, head of the NCSE, is that the NCSE has trumpeted to the academic, legal, media and educational communities serious misrepresentations of what the Design movement holds, and thence has clouded and poisoned the atmosphere. So this exchange shows the impact of the resulting confused atmosphere, a sad development that Nature -- the leading general purpose peer-reviewed scientific journal in the world -- has now unfortunately accommodated within its pages. And, I will not hold my breath waiting for them to publish a correction . . . We need to reckon with that poisonous atmosphere before doing anything else and make a careful effort to clear it before anything serious can be done. PO, unfortunately, you have -- plainly inadvertently (given your onward concerns, protests and surprise) -- drawn from this misleading "consensus" and have consequently used rhetorical devices that fall under the several strictures I made above. In short, the "work" of Padian et al has been effective. So that is lesson 1 from this exchange within the thread: ID is commonly misrepresented and misperceived, so it would be wise to look up, and even better to interact with, original sources before tilting at a strawman. Also, since there are many disciplines which bear with profit on the issue, one would be well advised to bring to bear an inter-disciplinary perspective, partnering with those who are not playing rhetorical games. Now on particular points of interest: 1] IC: You need to address the rhetorical effect of a simplistic and in effect dismissive summary of another's case, at the outset of your own. (That is the notorious strawman fallacy, as just linked.) 2 & 12] ID is not "un-traditional Creationism." It is not Creationism, period -- a point that the Creationists underscore by critiquing ID from their own perspective. ID's classical progenitors are people like Plato, Socrates and Cicero, and the issue is that of credibly and empirically distinguishing sources of causation across chance, natural regularity and agency. And in light of the above, any resort to the term becomes name-calling. (If you are unaware that "Creationist" is legally and academically loaded and prejudicial language, you are unfamiliar indeed with the ongoing discussion. But then, a long time ago, a lot of people were not aware that a certain N-word was a term of contempt in most contexts.) 3] Similarly, many ID advocates [and Creationists!] accept NDT mechanisms to the level where they have been empirically demonstrated -- microevolution -- so we are not dealing with critiques of "evolution" as such. Further to this, Behe and, I believe, Dr Dembski too [?] accept common descent of life on earth and the usual projected timeline, but reject the notion that RM + NS predominantly shaped it, given the evidence of design in the process as they trace it. 4] In your citations, you have summarised only one side of a contentious issue among the Guild. Therefore your lit survey has, rhetorically speaking [though probably inadvertently], improperly appealed to authority and has thereby stacked the deck. . . . kairosfocus
Kairosfocus, I don't see any real arguments against my main points among your 12 comments. Nevertheless, let me address them. 1. My article does not deal with irreducible complexity; I only wanted to mention it very briefly. Yes, it is simplified, but as I don't argue against it, that seems to be a very moot point and hardly worthy of your "strawman" label. 2. Whereas a traditional creationist may argue that Darwinian evolution contradicts the Bible, ID arguments are based on improbability. If you don't agree, I think the burden of proof is on you. What features of Darwinian evolution are likely, yet contradicted by ID theorists? 3. True, but a bit nitpicky in this context, don't you think? 4. Complete misunderstanding on your part. In scholarly contexts, it is customary to refer to earlier work on the same subject. I don't build upon these references and do not "appeal to authority." My sole point is that others have criticized the filter but I have a different point of view. Whether I agree with their criticism or not is a different topic. 5. I am merely describing Sober's criticism, so if you have objections, you have to bring it up with him. Considering that Bayesian statistics predates hypothesis testing, I doubt that you can argue it is dependent upon the latter, but that's another discussion. 6. After necessity and chance are ruled out by the filter, design is inferred without further specifying what it means. I have no particular problem with this from a logical point of view. 7. This is no dismissal; it is what he claims. I don't have his books here so I can't give you precise references, but I will look it up later. 8. Another complete misunderstanding. I am only describing how statistical hypothesis testing works and use the Caputo example as illustration. What on earth do you mean by "strawman" and that I should "try my argument"? I'm not even making one! 9. Sorry, I don't understand what you are arguing for or against here. 10. Well, I can't argue against that! 11. Again, I am just trying to explain something, this time how words like "chance" and "random" have different meanings in daily language and in probability theory. 12. There is no "name-calling." Dembski compares the composition of the flagellum to randomly shopping for cake ingredients. If you mean that "creationist" is an insult, I don't think of it as one. It is being liberally used on creationism.org. Besides, I didn't say that Dembski is one; he obviously isn't. In conclusion, you have not addressed any of my main points. Rather, you have selected a few pretty irrelevant passages and criticized them, and completely misunderstood others. Based on this, it is hard to see how you can honestly claim that I engage in "obfuscation and dismissal rhetoric." Best, PO olofsson
"If you prefer, we can debate via email instead. You’ll find me at peterolofsson.com." Since this is such an interesting topic I would hope kairofocus and you could hash it out here. Thanks Vivid vividblue
Hi kairosfocus, Thanks for your comments! I have not yet had the time to read them through carefully, but I think you have missed some of my points. I will get back shortly so we can sort it out. I don't think it is fair to blame me of obfuscation and dismissal rhetoric though; I certainly did not intend any of these. If you re-read my piece, you can see that I am far more benevolent to the filter than most other critics and that my criticism comes from the point of view of statistical hypothesis testing which is Bill's (yes, we have dropped the titles!) main source of inspiration. If you prefer, we can debate via email instead. You'll find me at peterolofsson.com. Best, PO olofsson
Cont'd: 7] Neither is it proper to dismiss that Dembski simply claims that any biological system that has a recognizable function must be specified, and also adds that no biologist he knows would question this conclusion . . . --> Bio-function is in fact tightly restricted based on the DNA-ribosome-enzyme system and the requisites of life in general, and such functionality is known to be relatively isolated in the configuration space of DNA strings of biological length, i.e. 500k to 4 bn base pairs. [cf what mutations at random often do, and look at the config space implied by 4^500,000 up to 4^(4 bn)] 8] The Caputo case: here we see a strawman being set up, i.e. by manufacturing a very loose specification, the real issue is subverted, by getting to a case where the event that any one of the above could happen has probability 38%. [The case was in fact a real world one, and it hinged on the fact that there was 1 of 40 cases where R led the ballot, in which context there was good reason to infer that this was not just by happenstance. This is defeatable, but beyond reasonable doubt, the standard of "proof" in a real world case. Try your argument on a real world judge in a discrimination case sometime -- other than with "ACLU copycat" John E Jones III.] 9] A similar pattern extends to the case of E. coli. For, it is probable that far less than 10^500 life forms have ever existed in the known universe -- given that 10^150 exhausts the number of quantum states that can exist across the same gamut across its estimated lifespan. 4^500,000 vastly swamps the range, and that is the lower end of the complexity of the DNA in life -- that is, the odds of getting to the islands of bio-functionality from an arbitrary initial configuration are vanishingly small; and, it is reasonable to see that to move from one major body plan to another is likewise vastly improbable. There are commonly available estimates on the metabolism-first route that look at ~ 200 enzymes and the probability of getting to the working molecules for life. 1 in 10^40,000 is the well-known estimate that results, and it led Crick to Panspermia in his despair. 11] Similarly, the proper comparison to grammatically correct English words and phrases in a prebiotic context -- given the point that there is no reason to suppose that the DNA code etc are written into the fundamental laws of the universe -- is the Laplacian equiprobability of configurations assumption. Trying to make the outcome less improbable by saying one can pick a different distribution, e.g. English letters or words in the typical patterns of English, already smuggles in all sorts of syntax and semantics. (Cf my micro-jets in a vat example in App 1 of the always linked.) 12] Dismissing this by adverting to "Creationists" -- name-calling, in an academic context -- and dismissing microscopic tornadoes in junkyards [BTW, have you studied statistical thermodynamics as it relates to this issue, the relevant science on this one?] of course simply ducks the point that there is in the OOL field no robust model of abiogenesis. That, after decades of trying hard and hyping minuscule "encouraging" results like the now notorious spark-in-a-beaker type experiments. [Cf my always linked, on abiogenesis.] We could go on and on, but by now the pattern is sadly obvious: obfuscation and dismissal rhetoric. GEM of TKI kairosfocus
Hi Prof Olofsson: Took a look -- seems to be your own paper. On selective points on a quick run through: 1] Overly simplifies Irreducible Complexity, to the point of a strawman fallacy. Behe's actual claim is that there is a core in some systems that is so constituted that 'An irreducibly complex system cannot be produced directly... by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition non-functional.' So, 'although irreducible complexity does rule out direct routes, it does not automatically rule out indirect ones.' Howbeit, as the complexity of such a biosystem rises, 'the more unlikely the indirect routes become.' [DBB, 10th anniv. edn.] 2] The same happens with the claim: the key concept for evolution critics is improbability. Nope, we ID thinkers -- ever since Plato in his The Laws Bk 10, 2400 years ago -- look at a three-way split on causation, and infer from a pattern that is COMMONLY and reliably used in statistics, courtrooms, management and in common sense life, that CSI is a reliable indicator of design, not of chance and/or natural regularity only. [BTW, how often do practical investigators compose "all" alternative hyps and compute conditional probabilities relative to them, then deduce the most/least improbable, before drawing conclusions? Or is it that most often Fisher's approach is used, of finding a rejection region that meets specification and improbability, then rejecting the null . . .?] 3] Also, there is a world of difference between critiquing NDT and its wider context of evolutionary materialist models, and critiquing "evolution." 4] Citing a string of critics on one side of a debate among research-level scientists as though they have the last word is obviously an improper appeal to authority. 5] Sober's critique is about Bayesian vs Fisherian inference testing: sound scientific practice requires that conclusions are based on comparative reasoning. Perhaps a chance hypothesis confers a small probability on the evidence, but how do we know that a design hypothesis does not confer an even smaller probability? --> H'mm: first, is a comparison of chance and/or necessity vs design, relative to what we directly observe and know about CSI systems we see being made, a simple elimination based on mere improbability? --> Next, in fact Fisherian-type reasoning is more used in practical stats and science than is Bayesian, and the latter is at least in part dependent on the first. [This Dembski discusses in detail on his site. I see nowhere any sign of interaction with Dembski on this, who is knowledgeable in the Math and Stats and associated probability.] 6] It is by no means clear -- given the known pattern of agent causation of CSI systems, and the longstanding recognition that chance, necessity and/or agency are the known (and the only known) major causal forces -- that the concept "design" is simply taken to mean "neither regularity, nor chance." That is, the concept of design by an agent who leaves empirically detectable traces of his work is not at all vacuous or mysterious -- unless you have begged the metaphysical question and rejected the possibility of such an agent before looking at the facts. . . . kairosfocus
I agree that many of the critics step far away from their areas of expertise (not sure how wildlife and fisheries give you expertise in complexity theory...). Here is a piece that criticizes the filter from the point of view of mathematical statistics, written by somebody who seems to know what he is talking about: www.math.tulane.edu/~polofsson/IDandMathStat.pdf olofsson
NB: Minor arithmetic correction: at 31.6*10^6 s/year [86,400 s/day * 365.25 d/yr], we have 316*10^12 s/10Myr. That moves the calculation of The UPB-Cambrian to 1 in 9*10^110. Missed a factor of 10 . . . no material effect. kairosfocus
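A quick sketch of the conversion, using 86,400 s/day and 365.25 days/year:

    seconds_per_year = 86_400 * 365.25        # about 3.16 * 10^7 s
    seconds_in_10_myr = seconds_per_year * 1e7
    print(f"{seconds_in_10_myr:.3e}")         # 3.156e+14, i.e. ~316 * 10^12 s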
Arnold Horseshack. Mr. Kaaahta, Mr. Kaaahta, my paper's not ready because I'm waiting for the words to evoooolve. tribune7
Kevin Padian: The Archie Bunker Professor of Paleobiology at Cal Berkeley
Oh my goodness, I didn't realize an endowed chair was created in honor of Archie Bunker. scordova
PPS: I think Ms O'Leary's take here will also be helpful. Padian's review looks worse and worse all the time, and with it so do Nature -- let's not forget, the leading general purpose Science Journal -- and the vaunted peer review process. (Acquiesce, pardon my misreading of your handle . . . a right brain long story issue.) kairosfocus
PS: To clarify a point. Tyler (in the article Prof Dembski links) notes that The president of the NCSE is none other than the reviewer, Kevin Padian. kairosfocus
H'mm: 1] Let's go back, for a moment, to the core fallacy in Mr Padian's claim as cited in the original post:
Here is Padian’s take on my work: “His [Dembski’s] notion of ’specified complexity’, a probabilistic filter that allegedly allows one to tell whether an event is so impossible that it requires supernatural explanation, has never demonstrably received peer review, although its description in his popular books (such as No Free Lunch, Rowman & Littlefield, 2001) has come in for withering criticism from actual mathematicians.”
--> WHOA! Since when -- apart from in evolutionary materialists' strawman attacks -- has the design inference been inherently an inference to the SUPERNATURAL; as opposed to an inference that, the null hyp of chance and/or necessity having on good though revisable reason failed, AGENCY is then the most reasonable alternative explanation? --> Worse yet, this claim has been consistently and explicitly rejected by Design thinkers and theorists in the Dover trial and elsewhere, including in the book that the ACLU and their tame judge tried to make this a trial about without giving the publishers a chance to speak for themselves, Pandas and People. [So, if there is ignorance there, it is willfully negligent ignorance; but, given Mr Padian's links to the NCSE, I doubt that "ignorance" is the correct explanation. This alone is sufficient to discredit him and his associates in my mind, as on the evidence openly dishonest and exploitive of the trust that others have unwisely put in them.] --> Similarly, did the vaunted peer reviewers and editors of Nature bother to do a basic fact check? [What does that tell us about the practical import of such peer reviews in a climate where the likes of a Gonzalez can be shabbily treated as he was?] --> Also, so far as I can tell, as discussed in Crandaddy's July 6 "Events, Causes . . ." thread, this pattern of differential causal inference across chance, necessity and/or agency [though now developed in statistical inferential form] traces back to, say, Plato in his The Laws, Book X:
. . . we have . . . lighted on a strange doctrine . . . . The wisest of all doctrines, in the opinion of many . . . . The doctrine that all things do become, have become, and will become, some by nature, some by art, and some by chance . . .
--> Thus the quest to differentiate the three possible causes of events by looking for credible empirical traces of the one or the other at work is plainly legitimate, useful, and indeed commonly resorted to in Science. [BTW, 1 in 10^150 is a lot tighter than the usual 1 in 20 or 1 in 100 in most Fisher-style statistical inference testing . . .] --> Prof Dembski has more than adequately addressed the allegation that the UPB has not been peer reviewed, for whatever that is now worth given what we just saw. (And of course Nature failed to check with the leading Scientific publisher before committing to print . . .) 2] Now, back to the attempted distractor, on the validity of the UPB. [Observe, too, how Aquiescence fails to reckon with the implications of his resort and the issues it opens up, as per issues in no 8 above. No prizes for guessing why.]
[WD] You’re right, the size of the known universe is known, but beyond it could lie considerably more elementry particles meaning the size of your UPB would need to be revised.
--> Of course, first, since when was this need to be open to revision a novelty on any empirically anchored scientific inference, up to and including not only observations but also the laws and theories of Science in general? Thus also, is there empirical evidence that currently credibly warrants such a revision, on any material scale? --> As A has plainly conceded, there is no current data to warrant the revision he hopes for. As I pointed out, the hoped-for quasi-infinite scale is inherently unobservable, as the finite cannot observe the infinite, though we may infer to it. --> Let us imagine for a moment the universe is ten times the present scope, as A raises in no 4: 10^81 particles. That shifts the number of possible quantum states across the known scope of the universe across its lifetime to: 10^81 particles * 10^25 seconds * 10^45 quantum states/second = 10^151. --> Similarly, a scope of 100 times would raise the number of particles to 10^82, leading to the UPB going to 10^152, etc. (This is why Prof Dembski more or less made a simple footnote on the issue of the scope of the observed universe affecting the UPB, a scope that has been moving only a few orders of magnitude for decades. To affect his main point, a DRAMATIC expansion in the scope would be required . . .) --> E.g., with the known fact that once knockouts on small micro-organisms go below about 360k base pairs, life function disintegrates, we can easily enough see that the configuration space in question encompasses 4^360k ~ 3.95*10^216,741 possible states. --> This is so many orders of magnitude beyond ~ 10^150 that it is not at all reasonable that any random walk-based search [even with various augmentations/adjustments, as WD has long since seriously addressed], in even a far larger cosmos than the one we observe, would get to the first functional life form based on DNA by necessity and chance alone. Thus, the inference to agency is well warranted relative to what we know -- as opposed to what some may wish to speculate over. (And, if there lurks a natural law that cuts down that scope dramatically, that simply tightens the cosmic fine-tuning argument . . .) --> It gets worse. A modern arthropod (a fruit fly) has about 180 mn base pairs. Let us cut that down for argument to 60 mn, and compare the Cambrian explosion window, of say 10 mn yrs [~ 31.6*10^12 s], on an earth of mass 6*10^24 kg [~ 3*10^50 C atoms at 12 AMU/atom, reasonable as we here look at the surface as the biosphere]. The UPB-Cambrian now falls to 1 in 9*10^109, while the config space to get the dozens of new body plans, on a per-plan basis, is ~ 3*10^36,123,599. Config space explodes, the UPB collapses, underscoring the relevance of agency to the origin of body-plan level biodiversity -- as Meyer pointed out in that now famous peer-reviewed paper. 3] Of course, my math can be checked and is subject to correction in detail, but the underlying message in the main is plain: ever since Plato, the inference to design on materially important cases is a credible one, and the current situation owes more to the politics of worldview agendas in the institutions of science than to the balance of the case on the merits. GEM of TKI kairosfocus
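A minimal sketch reproducing the order-of-magnitude arithmetic above, working in base-10 logarithms and taking the particle, time and state-rate figures as quoted in the comment:

    import math

    log10_particles = 80          # ~10^80 elementary particles
    log10_seconds = 25            # ~10^25 seconds
    log10_states_per_second = 45  # ~10^45 quantum states per second

    log10_upb = log10_particles + log10_seconds + log10_states_per_second
    print(log10_upb)      # 150 -> the 1-in-10^150 bound

    # Ten times as many particles only adds one order of magnitude:
    # 10^81 * 10^25 * 10^45 = 10^151.
    print(log10_upb + 1)  # 151

    # Configuration space of a 360,000-base-pair genome (4 bases per position):
    # 4^360,000 is roughly 10^216,742.
    print(round(360_000 * math.log10(4)))  # 216742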
Dembski [5] The size of the known physical universe is — at the risk of sounding tautological — well known. What’s beyond it is a matter of speculation.
It's what lies beyond it – the matter of speculation – that my point concerned, not the size of the known universe. You're right, the size of the known universe is known, but beyond it could lie considerably more elementary particles, meaning the size of your UPB would need to be revised. Although to be fair, you do make this clear in The Design Inference, p. 217
”In making this admission, however, let’s be clear what it would mean to underestimate the probabilistic resources in our universe… (1) the number of elementary particles in the universe…[UPB] varies inversely with these numbers [number of elementary particles, age of the universe, events per second], so that as they increase, [UPB] decreases. Hence if these numbers are off, then so is [UPB]”
Acquiesce
To retain my sanity, I try to remain cognisant of the wider setting--in this case I'll call it: fallen NATURE. DG
[...] Big science mags as mouthpieces for the materialist lobby | Uncommon Descent
Aquiescence: I see Prof Dembski has immediately pointed out the obvious flaw in your:
the upper probability bound of 10^150 rests on pure speculation. Who can really say, with any empirical basis, there are only 10^80 elementary particles in the universe? For all we know the universe could be twice this size, or ten times or even infinite in size.
He aptly observed: The size of the known physical universe is — at the risk of sounding tautological — well known. What's beyond it is a matter of speculation. I add that it is not just "speculation," but specifically METAPHYSICAL -- i.e. philosophical, not scientific -- speculation. And that brings in all the apparatus of serious comparative difficulties analysis across the live metaphysical options, e.g. including Theism as well as evolutionary materialism and Pantheism. You have therefore stepped beyond the reasonable bounds of science, in order to facilitate a dismissal argument. On the wider field of comparative difficulties, I am not at all so sure that Evolutionary Materialism, often simply but inaccurately called "Science," is the best present explanation, relative to: 1] factual adequacy [note how you have had to speculate counter to the observations we have in hand, and that a quasi-infinite cosmos as a whole is inherently unobservable . . . noting meanwhile that the specified complexity in view in many cases is many orders of magnitude beyond the 500-bit limit, e.g. DNA 500k - 3 bn or so base pairs, each being 2 bits.] 2] coherence [not least as Ms O'Leary often points out, there are major difficulties accounting for a credible mind, not to mention morals] 3] Explanatory elegance vs either ad hocness or being simplistic. So, do you really want to go down that speculative road? [And, what does the resort to such speculation tell us about the problems with the attempted case against the design filter?] GEM of TKI kairosfocus
Why I love Kevin Padian -- When he is talking outside his expertise (i.e. about ID), he is actually helpful, as the only thing that shows through is his anger. On the other hand, when he is talking within his expertise, he gives powerful evidence of Biblical Creationism. johnnyb
Archie Bunker? That's Arnold Horseshack. Mr. Kaaahta, Mr. Kaaahta, if I design somethin does that make me intelligent? tribune7
Acquiesce: The size of the known physical universe is -- at the risk of sounding tautological -- well known. What's beyond it is a matter of speculation. William Dembski
Whilst I believe chance to be an inadequate explanation for life, I feel the upper probability bound of 10^150 rests on pure speculation. Who can really say, with any empirical basis, there are only 10^80 elementary particles in the universe? For all we know the universe could be twice this size, or ten times or even infinite in size. Acquiesce
Does he mean David Wolpert, a co-discoverer of the NFL theorems? Wolpert had some nasty things to say about my book NO FREE LUNCH, but the upshot was that my ideas there were not sufficiently developed mathematically for him to critique them.
Bill, you forgot to mention that David Wolpert is going to become one of the best supporters of your theses, although (and this will be even more significant) he will probably never say so. After having stated what follows in his paper in IEEE Trans. in December 2005, D. Wolpert states the same concept in many speeches. For example: http://www.mis.mpg.de/talks/abstracts/4444.html Special Seminar David Wolpert (NASA Ames Research Center, USA, + MPI MIS, Leipzig) No Free Lunch Theorem Abstract - "At least since Hume, people have wondered whether there are first principles limitations on what humans can deduce / infer about reality. ... In contrast to the traditional optimization case where the NFL results hold, in self-play there are free lunches: in coevolution some algorithms produce better champions than other algorithms, averaged across all possible problems. However in the typical coevolutionary scenarios encountered in biology, there is no champion. There the NFL theorems still hold." It is also quite interesting to see the comment by a well-known PT guy (M. Perakh): http://www.pandasthumb.org/archives/2006/07/there_is_a_free.html After someone had given him the reference to Wolpert's paper in IEEE Trans., he wrote: "Yes, Bob, this is the paper I meant. Thanks for pointing to it. Whether or not the NFL theorems are valid for biological evolution (they are), including co-evolution (where they may be invalid in certain situations - see the referenced Wolpert/Macready’s paper) is irrelevant because the uniform average does not tell anything about the actual performance of search algorithms, as only the performance on a specific landscape is what counts, and there always are algorithms that outperform blind search on given specific landscapes, and this is true regardless of whether the landscape is co-evolving or not. Moreover, evolution is not a search for a target (contrary to what Dembski asserts), therefore his calculations of probabilities of finding a “target” are likewise irrelevant. Maybe I’ll write one more brief essay about it and post it here." It seems that the defense line has been moved back. Now that (after Wolpert's work) the reference to NFL theorems in biology can no longer be labelled as nonsense, the new position is to claim that in evolution the landscapes are specific and favourable for search. But this is pretty wishful speculation! Bill, I think that your work will be more and more vindicated. kairos
Way to go! The general public--and that includes the media and academics and leaders and just about everybody outside the mathematical community--isn't going to follow the mathematics of ID nor appreciate it that much. But all should be glad it's there! It's a front line component in the struggle and I think it's in the hands of a strong and competent individual. And if it wasn't a fight--if there were no struggle against malignant forces--there would be no heroes. Just intelligence. And maybe not even that, for is real intelligence ever awakened apart from passion? And what inspires more than to face down the enemies of reason and endure a little persecution for the cause? Rude
You tell them, Dr. Dembski. Why anyone would listen to a bunch of guys critiquing groundbreaking work outside their area of expertise is beyond comprehension. But, there you have it. Credulity is a powerful tool when you have an axe to grind. rrf
It is beyond my ability to see why "supposedly reasonable scientists" will go to such extraordinary lengths to deny the truth of the apparent design found in nature. Especially when it is backed up with such solid work as yours, Dr. Dembski. Is the concept of a designer that scary for them? What are they so scared of? bornagain77
