Uncommon Descent Serving The Intelligent Design Community

Some Thanks for Professor Olofsson II


The original Professor Olofsson post now has 340 comments on it, and is loading very slowly.  Further comments to that post should be made here.

Comments
I must apologise to the administrator for posting this under a second account. I neglected to note down my password, and cannot access my work email from home to recover it. "Interesting post. But I have to disagree with you on some crucial points." First of all, I wanted to thank you for responding so thoroughly. Secondly, as I anticipated, my inexpert use of terminology has proved to be an obstacle to communication. I'll try to take on board your corrections and re-express my argument. That said, I think you are accustomed to dealing with arguments of the form "ID theory fails because you are wrong about X, Y and Z", whereas my contention is that ID theory fails if you are right. It might be helpful to bear that in mind.

So, to recap: in ID theory as applied to biological systems, conventional undirected evolution is represented by a random search, whose behaviour is approximated via a uniform distribution. The use of a uniform distribution, and the scale of the search space, is justified thus:

1. All known lawful biases are, on average, neutral with respect to fitness or function, and supposing otherwise is the preserve of theological evolutionism. I agree with this assessment completely.

2. Although we may speculate there are fixable intermediate configurations that could reduce the burden upon the random search, it is unscientific to presume their existence in the absence of evidence. I also agree with this assessment.

Indeed, my agreement with these points is crucial to the arguments that follow. I think now would be an appropriate time to bring in a quote from your response: "Design utilizes other tools, which are not random. You cite some of them, which bring to restrictions of the search space, but we could sum them up in a very fundamental sentence: design utilizes understanding and purpose." So if I tell you that the 400-digit binary number I thought of earlier is the encryption key for my wireless router, does your understanding of its purpose help you make a better next guess? 32, by the way, was wrong. Allegedly intelligence can create any amount of CSI, and yet here you are unable to produce a meagre 400 bits of it!

Here's why: the 'tools' employed by design to restrict the search space are properties of the problem, not of the intelligence trying to solve it. Consider, by way of example, the following (somewhat whimsical) scenario: One day, minding your own business, you happen to observe a cat, sat on a mat. Amazed, you decide to communicate this information to your two friends, Alan from next door and Zordar from the planet Clog. To Alan you say "the cat sat on the mat", one of 30^22 or so possible sentences of that length, apparently demonstrating your ability to create CSI at will. Next, you turn to Zordar. You know Zordar would be interested to hear about the cat, and you know it's possible to express the information in Cloggian. But there's a problem: the Cloggian language is not broken down into building blocks of meaning. There is no consistent word for 'the' or 'cat' or 'sat' or 'mat'. There is definitely a sequence of characters that means 'the cat sat on the mat' in Cloggian, but even the most encyclopaedic knowledge of other Cloggian sentences gives you no clue as to what it might be. Assuming that you know the sentence you need is 22 characters long, is your intelligence of any help in deducing what you should say? No. The Cloggian language, like the binary key for my wireless router, is - to borrow a supremely apt phrase - irreducibly complex.

So it seems you can't create CSI at will after all! This is relevant for two reasons. First of all, ID theory supposes that biological systems contain CSI, and that intelligence can create CSI. Therefore, intelligence can create biological systems. But we've just shown, without much effort at all, that intelligence cannot always produce the CSI necessary to solve a problem. So the bedrock assumption of ID, that CSI is an indicator of intelligence at work - the more CSI the better, even - is unsafe.

Secondly, you spent quite a bit of your response to my previous post assuring me that the genetic language of biological organisms is irreducibly complex: there are no higher-level building blocks of meaning, no 'words' as such: a protein or complex of proteins 'means' what it does as a whole. You've patiently explained to me that trying to find the right protein to accomplish a task is like trying to tell Zordar about the cat - in other words, intelligence and prior experience are of no help when you try to do it.

When we look at a complex biological system, then, the situation is not as ID theory would presently have you believe. Design is not automatically the eager child with its hand in the air, saying 'Me sir!' On the contrary: just as NDE advocates should substantiate their claim that the origination of a given biological structure is plausible via evolutionary mechanisms, so - in each and every individual case - the design hypothesis depends upon its advocates' ability to demonstrate how the structure could have been designed, step by step. Peeling2
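The router-key example above can be made concrete with a toy simulation. This is a minimal sketch in Python, not anything from the thread: the 16-bit key length, the random seed, and the two strategies compared are all illustrative assumptions, chosen so the blind search finishes quickly. It counts how many guesses are needed to hit a randomly chosen bit string when no feedback is given, and how many are needed when each guess is told which positions are already correct (the kind of "partial success" signal discussed later in the thread).

```python
import random

random.seed(1)   # illustrative seed, for reproducibility only
KEY_BITS = 16    # toy size; the thread's example uses a 400-bit key

key = [random.randint(0, 1) for _ in range(KEY_BITS)]

def blind_search(target, max_tries=10**6):
    """Guess fresh random strings with no feedback at all."""
    for attempt in range(1, max_tries + 1):
        guess = [random.randint(0, 1) for _ in range(len(target))]
        if guess == target:
            return attempt
    return None  # not found within the budget

def feedback_search(target):
    """Start from a random guess; feedback reports which positions are wrong."""
    guess = [random.randint(0, 1) for _ in range(len(target))]
    attempts = 1
    while guess != target:
        wrong = [i for i, (g, t) in enumerate(zip(guess, target)) if g != t]
        guess[wrong[0]] ^= 1   # fix one reported position per attempt
        attempts += 1
    return attempts

print("blind search attempts:   ", blind_search(key))
print("feedback search attempts:", feedback_search(key))
```

At this toy size the blind search needs on the order of 2^16 guesses on average (and would need on the order of 2^400 for the router key itself), while the feedback search finishes within 17 attempts. The difference comes entirely from whether the problem supplies feedback, which is the property of the problem that the comments above and below are arguing over.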
Peeling: Interesting post. But I have to disagree with you on some crucial points.

You say: "In measuring the CSI of a forum post, once again intelligence is pitted against the uniform distribution."

No, intelligence is compared to a "random search". I think that you, like many, make some confusion between the definition of a random event and the definition of a probability distribution. The fact is that the causal engine in darwinian theory is, by definition, a process of random variation. It is not so important to know exactly which probability distribution we apply. We assume the uniform distribution because it is the most reasonable approximation. But, whatever distribution we assume, the search remains a random search. As I have argued many times, differences in the probability distribution cannot influence significantly the results, unless the distribution is in some way connected to the function which has to be found. And believe me, there is really no reason in the universe why a probability distribution applicable to a random variation should in any way favor functional proteins. Unless, as I have already said, you are a die-hard theological evolutionist, and believe in a super-super intelligent fine tuning of biochemical laws. So, darwinian evolution postulates randomness as the only engine of variation. But that is not true of intelligent design. Design utilizes other tools, which are not random. You cite some of them, which bring to restrictions of the search space, but we could sum them up in a very fundamental sentence: design utilizes understanding and purpose.

You say: "if we cannot demonstrate any lawful bias toward the origination of that protein, any stepwise process by which it might be attained, or quantify the degree to which it is a non-unique solution, we are justified in pitting a uniform probability distribution against a supposedly known quantity, intelligent design, and seeing which wins"

Again, I am afraid there is some confusion in what you say. One thing is the problem of the appropriate random distribution to assume: as I have already said, there may be all possible deviations from a uniform distribution (some amino acids can occur more frequently, some mutations can be easier than others), but none of these things can help in selecting complex functional sequences, and by the way a lot of complex functional sequences of different structure and function. Another thing, instead, is the problem of a possible stepwise process: that is aimed to make NS possible. In other words, if you can show a stepwise process through which a protein can be built, where each step is probabilistically accessible (say one or two amino acid mutations), and each step is selectable (giving a definite functional advantage), then you could argue that NS can act on each step, fixing it, and the probabilistic impossibilities could be solved. But unfortunately (for the darwinists) that is not the case. A new function cannot be obtained from a different pre-existing function by stepwise functional mutations. In other words, complex functions are not "deconstructable" into a sum of simple stepwise increasing functions. Function derives from the general complex organization, and is not the accumulation of smaller functions. So, even NS cannot help. The probabilistic impossibilities remain.

You say: "A character set of maybe 30 letters, spaces and punctuation and a modest post length of maybe 1000 characters yields a search space for the chance hypothesis of 30^1000. A clear win for intelligence, surely?"

Yes, a clear win for intelligence against a purely random search.

"However, for that reasoning to be valid we must be able to demonstrate that intelligence is capable of overcoming the odds in question."

That's exactly the point: intelligence has nothing to do with the odds. Intelligence does not work by random search (even if sometimes it can use a partially random search in specific contexts: see intelligent protein engineering, for example).

You say: "If we cannot do that then it cannot be our best guess, and we are forced to presume some other explanation - be it an unknown law or a stepwise process we haven't spotted - fills the gap."

Wrong: intelligence does not have to beat any odds, as I have already said. Intelligence just has to be able to generate CSI. And it does.

You say: "But could intelligence beat those odds in a fair fight?"

As already said, intelligence has to beat no odds. And what do you mean by "a fair fight"? The fight is obviously not fair. Intelligence has tools which a random search can never have. Understanding of meaning, purpose, and so on.

You say: "To arrive at our uniform distribution we once again supposed a vast search space with no lawful bias, no iteration, no partial success and a single solution."

Here you make a lot of confusion. First of all, again, here the problem is not the distribution. A uniform distribution in the search space of a protein just means that all sequences have a similar probability to occur. Even if the distribution is not uniform, it is impossible that any probability distribution appropriate for biochemical events may favor functional sequences over random sequences: that would really be a "miracle". Lawful biases, in the sense of asymmetries in the probability distribution, may certainly exist: I have cited elsewhere the asymmetry of the genetic code, for instance. But that would not make functional proteins, in general, more likely. What do you mean by "no iteration"? I don't understand. "No partial success" is simply wrong. Partial successes are certainly admitted in a darwinian search, but, as I said before, they must bring an increase in function to be "recognizable" by NS. Otherwise, NS is blind. In design, instead, the designer can certainly recognize many partial successes which still imply no function: for instance, a designer can recognize that the result is nearer to the desired structure, even if the functional structure has not yet been achieved. That's one of the main differences between a blind selector and a designer: again, the designer has understanding, and a definite target (purpose). Finally, why do you say "a single solution"? We are well aware that the solution is not single. That's why I always speak of a "target set" of functional solutions (for a defined function). The ratio of the target space to the search space is exactly the probability of the target space. So, there is no reason that the solution should be "single".

You say: "No! We granted intelligence a lexicon and a set of grammatical laws, collapsing the search space to a tiny fraction of its size before a word was written"

No! Intelligence created the lexicon and the set. You are only saying a very trivial truth, that intelligence uses intelligent tools. It can do that, because it has understanding and purpose.

"We also granted intelligence the freedom to iterate, to devise partial solutions and refine them based on feedback, gradually zipping together its words with the problem it was trying to solve."

Again, "we" (whoever you mean with "we") granted intelligence the right to be intelligent.

"Nor did we insist the solution be unique: countless different posts would convey meaning similar enough to overlap in the minds of readers."

Nor do I insist the solution be unique in the case of proteins. Similar, and sometimes very different, proteins overlap in biological function. That is not a problem. The final probability is calculated for the whole functional target space.

"Also note the non-catastrophic typos in the quote above."

Also note the many non-catastrophic mutations we well know of.

"In short, in measuring CSI the EF has detected something intelligence never did."

Always the same error: intelligence can generate CSI. It does that with intelligent means. It is not supposed to do that by a random search. That would be impossible.

"In creating one of these forum posts, intelligence didn't find the needle in a 30^1000 haystack."

Yes, it did. But obviously not by a random search.

"It's a false positive"

Absolutely not. It's a true positive. The EF concluded that that result could not be obtained by a random search, and that intelligence was needed. That's exactly what happened.

"For it to have been a fair contest between the uniform distribution and intelligence"

Again, it does not have to be a fair contest. And the contest is not between a distribution and intelligence, but between a random search, with whatever probability distribution, and intelligence.

"For it to have been a fair contest between the uniform distribution and intelligence, intelligence would have had to construct that post with no lexicon, no knowledge of grammar, and without reading any of the preceding thread."

Why? Intelligence is not a blind watchmaker. It is a seeing watchmaker, and possibly an expert one. Do you claim that watchmakers improvise their art?

"That may at first blush seem absurd - after all, we know for a fact intelligent human beings really do possess the search-space-slashing tools we attributed to them: a lexicon and knowledge of grammar."

Indeed, it is absurd. And indeed, we know that for a fact.

"But here's the rub: do we know that about the intelligence supposedly behind our 200bp protein?"

You bet. After all the designer is our assumption, and we are not postulating a blind designer, or an idiot or incompetent one. We are postulating an "intelligent" designer, in case you have not noticed.

"The answer is no, we don't. In assuming an intelligence could be responsible for constructing that protein, we assume it possesses tools for collapsing the search space in which it is hiding - yet we have no idea what those tools might be."

What about an understanding of physical and biochemical laws, of the laws of protein folding (which we still don't understand well) and of the laws of protein function? Plus a very good understanding of the general plan of a living being? In other words, we are simply assuming that the designer understands the things he is designing.

"So the question is: why is it ok to justify the design hypothesis based on the idea that there might be tools enabling an intelligence to abstract the problem, simplify it and beat the odds,"

It is OK because a designer is intelligent, and if he needs tools he can create them, and he does not have to beat any odds, because he is not working by random search, but by understanding and purpose.

"but not ok to say that there might be an iterative process of partial solutions, or a lawful bias, that make a naturalistic origin equally plausible?"

It is not OK because the things you suggest make no sense. Please show how an "iterative process" of "partial solutions" (not functional), or any reasonable bias on the probability distribution of protein sequences, could explain the emergence of functional proteins by a random search engine. You simply cannot do that. That's why it's not OK.

"Why is design our best guess?"

Because of all the above reasons.

"As intelligent human beings, I invite you to solve a problem genuinely equivalent to the one ID theory claims is posed by a 200bp protein: I have picked a number between 1 and 2^400, and all you have to do is guess what it is."

The only intelligent guess I can make is that you still don't understand the difference between an intelligent process and a random guess. But, just for the game, I will try: was it 32? gpuccio
(continued from above) I'd like to begin by tackling the assumption that we know intelligence is capable of yielding the biological structures in question. For our purposes I equate intelligence with foresightedness: the ability to abstractly model the problem and select a solution. That is my understanding of 'design': the facility to know in advance whether a given option represents a solution to a problem (I apologise if this is kindergarten-level stuff; I just want to lay things out as clearly as I can so that any errors I make are explicit rather than concealed).

At this point I'd like to borrow a scenario similar to those proposed in this thread as being indicative of design: a common-or-garden 200bp protein. According to the arguments quoted in my preamble, if we cannot demonstrate any lawful bias toward the origination of that protein, any stepwise process by which it might be attained, or quantify the degree to which it is a non-unique solution, we are justified in pitting a uniform probability distribution against a supposedly known quantity, intelligent design, and seeing which wins. We may not be right, but it is our best guess given what we know, and best guesses are what science is about. Correct?

However, for that reasoning to be valid we must be able to demonstrate that intelligence is capable of overcoming the odds in question. If we cannot do that then it cannot be our best guess, and we are forced to presume some other explanation - be it an unknown law or a stepwise process we haven't spotted - fills the gap. Recall the assumptions we used to justify our application of the uniform probability distribution: no stepwise process via proteins of partial utility or alternative function, no lawful bias, and the solution found is utterly unique: a needle in a 1 in 2^400 haystack.

So how does intelligence - the intelligence for which we have evidence, at any rate - find solutions? Answer: by abstracting the problem, selecting a solution to the abstraction, and then realising it. Except... according to our assumptions, abstracting the problem in a simpler form isn't possible. There is only one, extraordinarily precise solution, and if our hypothetical intelligence, in its musings, misses it by the smallest margin it achieves exactly nothing, and gets zero feedback as to how close it got.

What then, of the alleged ability of human intelligence to create CSI? I will again borrow an example presented here:

"An apt illustration of tis is the fact that lucky noise could in principle account for all teh posts inthis thread. Nothing in the physics or lofgic forbids that. but, we all take it dfor granted that the posts are intelligent action. So, even the objectors to the EF are actually using it themselves, intuitively."

The claim here is twofold: first that we intuitively mimic the use of the EF to identify these posts as the product of an intelligent mind, and second that measurements of CSI, when applied to something like a forum post, positively identify intelligence at work. Neither claim bears up under scrutiny, but it is the second that causes ID theory real trouble. In measuring the CSI of a forum post, once again intelligence is pitted against the uniform distribution. A character set of maybe 30 letters, spaces and punctuation and a modest post length of maybe 1000 characters yields a search space for the chance hypothesis of 30^1000. A clear win for intelligence, surely? But could intelligence beat those odds in a fair fight?

To arrive at our uniform distribution we once again supposed a vast search space with no lawful bias, no iteration, no partial success and a single solution. Is that the same problem we posed intelligence when assessing whether or not it could have produced the post? No! We granted intelligence a lexicon and a set of grammatical laws, collapsing the search space to a tiny fraction of its size before a word was written. We also granted intelligence the freedom to iterate, to devise partial solutions and refine them based on feedback, gradually zipping together its words with the problem it was trying to solve. Nor did we insist the solution be unique: countless different posts would convey meaning similar enough to overlap in the minds of readers. Also note the non-catastrophic typos in the quote above.

In short, in measuring CSI the EF has detected something intelligence never did. In creating one of these forum posts, intelligence didn't find the needle in a 30^1000 haystack. It's a false positive. For it to have been a fair contest between the uniform distribution and intelligence, intelligence would have had to construct that post with no lexicon, no knowledge of grammar, and without reading any of the preceding thread. That may at first blush seem absurd - after all, we know for a fact intelligent human beings really do possess the search-space-slashing tools we attributed to them: a lexicon and knowledge of grammar. But here's the rub: do we know that about the intelligence supposedly behind our 200bp protein? The answer is no, we don't. In assuming an intelligence could be responsible for constructing that protein, we assume it possesses tools for collapsing the search space in which it is hiding - yet we have no idea what those tools might be. On the contrary, we resort to brute force (folding@home).

So the question is: why is it ok to justify the design hypothesis based on the idea that there might be tools enabling an intelligence to abstract the problem, simplify it and beat the odds, but not ok to say that there might be an iterative process of partial solutions, or a lawful bias, that make a naturalistic origin equally plausible? Why is design our best guess? As intelligent human beings, I invite you to solve a problem genuinely equivalent to the one ID theory claims is posed by a 200bp protein: I have picked a number between 1 and 2^400, and all you have to do is guess what it is. Peeling
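The two search-space figures traded back and forth in this exchange (30^1000 for a 1000-character post over a roughly 30-symbol alphabet, and 2^400 for the 200 bp example) can be put on a common scale by expressing them in bits. A minimal Python sketch, assuming only what is stated above, namely that the 2^400 figure comes from 4 possible bases at each of 200 positions:

```python
import math

# 1000-character post over a ~30-symbol alphabet
post_bits = 1000 * math.log2(30)        # log2(30**1000), about 4907 bits

# 200 bp example: 4 bases per position, 4**200 = 2**400 sequences
protein_bits = 200 * math.log2(4)       # exactly 400 bits

print(f"forum post search space : 30^1000 ≈ 2^{post_bits:.0f}")
print(f"200 bp search space     : 4^200   = 2^{protein_bits:.0f}")
```

Both numbers describe only the size of the space under a uniform chance hypothesis; whether that is the right hypothesis to test against is exactly what the two commenters dispute.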
Hi all, I'm new here, and a frightful layman to boot, so do be gentle. I find this topic a fascinating one, and this looks to be the sort of place to delve deeper. I've tried to read the entirety of this thread (and its precursor) and several recurring themes stood out, exemplified to my mind by these quotes: Intelligence, sir, is the only empirically observed source of FSCI: functionally specific information requiring storage capacity beyond 500 - 1000 bits, as a practical description. You think it wiser to presume an unknown law is actively at work [than to presume that no such laws exist]? That we should somehow incorporate into calculations something we know nothing about? the distribution [of mutations and availability of amino acids] is certainly not completely uniform. Has all that any relevance to our problem, which is the nature of biological information? Practically not [because it is unlikely any such bias would be congruent with fitness or new function]. In essence: the formation of complex biological mechanisms is a job we know intelligence can do. It is therefore a parsimonious explanation for the job having been done. Moreover, for each and every biological mechanism it and the uniform distribution are the only games in town unless and until a specific alternative is proposed and can be evaluated. Is that a reasonable summary? Assuming that it is, I'll continue. (comment to follow) Peeling
That should be "accept" not "except" Mark Frank
I except PO's point. Some kind of hypothesis testing is simpler. I also realised after I posted that in this context the Bayesian approach includes its own assumptions about the prior distribution which are hard to justify. Mark Frank
Mark, StephenB, onlookers[91,92,93], Mark and I must have posted at the same time and his comment appeared first. My reply to [91] is [93]. The case of Bayesian inference is entirely separate. I don't advocate it in this case but the conclusion would be the same: Caputo cheated. In a way, the focus on "bayesianism" is unfortunate because it is too difficult to follow. It was brought up because of Dembski's "E vs C" chapter. My criticism of the filter is, as has been pointed out many times, within his chosen paradigm of hypothesis testing. Prof_P.Olofsson
StephenB[91], Congeniality back at you! We're having fun, wasn't it so? :) About Caputo, yes (of course, come on, politician, Caputo=Capone+Bluto, what more evidence do we need!?) and yes. Let's go to specifics. I apologize in advance if this post becomes long.

We should do what I assume the experts in the case did, namely, compute the probability that a fair drawing produces 40 D's or more. There are 42 such sequences of D's and R's and let us call this set of sequences E* to follow Dembski. We get a probability that is so small we rule out fairness. So far so good and everybody is in agreement. Let's look a bit closer.

In statistical hypothesis testing (SHT from now on) we set up the null hypothesis H0, which is the hypothesis we are trying to reject ("nullify"). Here we would write it as H0:p=1/2 where p is the probability with which Caputo draws. Based on the probability of the outcome, we reject H0. That leaves a bunch of other possible values of p that we haven't tested. Now what? In SHT we also always have an alternative hypothesis, called HA. It could simply be the complement of H0 but most often it is more specific. Here we would take HA:p>1/2, that is, HA specifies an entire range of possible values of p. The property of HA is that any value of p in HA gives a higher probability of E*, thus indicating bias in favor of Democrats. For example, if p=0.9, we get P(E*)=0.08 and if p=0.95 we get P(E*)=0.39, etc. So the conclusion thus far is that we reject H0 in favor of HA.

However, all the calculations above are based on drawings being independent and with the same probability p each time (binomial trials). That's our statistical model and it might not be correct. Let's expand it. A more general model is to consider all probability distributions on the set of 2^41 sequences, including those where p changes between drawings, where drawings are not independent etc. One specific distribution is the one where drawings are independent with p=1/2; that distribution is our H0 and we can still reject it in favor of an alternative HA which is now more complicated. In brief, HA consists of all distributions that make P(E*) greater than P(E* given H0), that is, the distributions that favor Democrats. Note that all these distributions are "chance hypotheses," even those that put a 100% probability on the particular sequence we observed. Probability 1 is just another probability.

I think we're done. We have ruled out H0 and whether Caputo cheated by using a method that had independent drawings with p too high, or in some other way, doesn't matter. As for the court case, we need to, of course, rule out the possibility that he used a method that was flawed (such as p too high) unknown to him. He was asked what device he used and it was found to be OK, so only cheating remained.

OK, what about "design" and the "filter"? In Dembski's words, we need to "sweep the field" of all chance hypotheses. My question (1) was how to do that? All Dembski seems to offer is to rule them all out based on Caputo's word. My answer is that we can't. My question (2) was why we need to rule out all chance hypotheses. The "filter" requires us to rule out all chance hypotheses and infer "design." But what is "design"? Cheating? But there are plenty of chance hypotheses in HA that also amount to cheating. I'd say the distinction between "design" and "chance" is impossible to make, as you can "cheat by chance." It might even be argued that any design hypothesis is, in fact, a chance hypothesis.
The distinction is also pointless. All we need to do is rule out H0, make sure Caputo didn't cheat inadvertently, and we're done. Case closed, go to jail and forget about the 200 bucks. Prof_P.Olofsson
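For anyone who wants to reproduce the numbers in this exchange, here is a minimal Python sketch, assuming the binomial model described above (41 independent drawings with the same probability p each time). It computes P(E*), the probability of 40 or more D's, under H0 (p = 1/2) and under two of the alternative values of p mentioned:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

n, k = 41, 40   # 41 drawings; the event E* is 40 or more D's

for p in (0.5, 0.9, 0.95):
    print(f"p = {p:<4}  P(E*) = {prob_at_least(k, n, p):.3g}")
```

Under H0 this reproduces the 42/2^41 ≈ 1.9 × 10^-11 figure (42 qualifying sequences out of 2^41), which is the basis for rejecting fairness.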
# 91 I am not sure if PO is retiring from UD or not. I imagine it was very hard work responding to so many comments simultaneously. I am very much in agreement with him (although far less qualified) so I will attempt to act as a poor substitute. I am sure he will correct me if he returns.

Do you believe that Caputo cheated?

It is hard to say without knowing more about the process by which the ballot order is determined. Maybe he cheated, maybe the process is strongly inclined to put the parties in alphabetical order, maybe it is strongly inclined to keep the order the same as last time, etc. etc.

Can you confirm that fact using statistical methods?

Well no. As I explain above that would require understanding the process. But I can give strong reason for supposing that the probability of a Democrat topping the ballot paper was higher than 50%.

If so, show me the math and explain why your approach alone is adequate.

PO did the math. His Bayesian analysis results in a pdf (probability density function) which shows that (assuming a fixed independent probability on each occasion) "the probability of the probability of being a Democrat exceeding 50% was virtually 1" - see post #300 on the first PO thread. As far as I know there is no other method of calculating the probability of the method being biased towards a Democrat based on the evidence available. Mark Frank
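The Bayesian figure cited here can be checked with a short sketch. It assumes the same data as the Caputo discussion above (40 D's in 41 independent drawings) and a uniform Beta(1, 1) prior on p; the prior is an assumption made purely for illustration, and it is precisely the kind of choice about prior distributions flagged as hard to justify earlier in the thread. With that prior the posterior for p is Beta(41, 2). The sketch uses SciPy:

```python
from scipy.stats import beta

# 40 D's out of 41 drawings, uniform Beta(1, 1) prior on p
# => posterior for p is Beta(1 + 40, 1 + 1) = Beta(41, 2)
posterior = beta(41, 2)

print("P(p > 0.5 | data) =", posterior.sf(0.5))    # survival function, 1 - CDF
print("posterior mean of p =", posterior.mean())   # 41/43, about 0.95
```

The printed posterior probability that p exceeds 1/2 differs from 1 by roughly 10^-11, which is the "virtually 1" quoted above.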
Prof Olofsson: Let me end on a congenial note. I appreciate your many posts and hope that I have not been too contentious in my responses. The three questions I wish I had asked are these: Do you believe that Caputo cheated? Can you confirm that fact using statistical methods? If so, show me the math and explain why your approach alone is adequate. My best attempt to answer your question is this: I don't think one can, on the one hand, consider the math in the context of its application and, on the other hand, consider it independent of its application and get any consistency. StephenB
kf[89], Merry Christmas to you too and everybody else, or as we say in Swedish, God Jul! Prof_P.Olofsson
Prof PO,

1] Re 71: Bayesian inference is not merely applying the simple Bayes' rule to events. Nor does Dembski assert in his article that that is all there is to Bayesian reasoning. Indeed, he points out that a particular "Bayesian" he cites is actually more of a Likelihood theorist but delimits that this distinction is beyond his then-present scope. WD, as I cited above, was simply first highlighting that Bayesians use composite outcomes [I notice that you do not now deny this . . . ], of the general class that E* is. He then showed, through a simple example, how -- under relevant practical situational circumstances (and associated issues of warrant and inference to best explanation across competing models) -- the Bayesian approach to a design inference [note his 2005 allusion to Robin Collins] would implicitly bring up issues tied to target zones of interest in the overall config space of possible outcomes. Lurking in these zones are all the issues of "law of averages" intuitive expectations, and their rational reconstruction [cf WD here in a parallel thread] into eliminationist reasoning. And that brings us right back to the core point: in the real world, we all have to infer to chance or necessity or design, and the CSI or FSCI in objects is a pretty good pointer to which is dominant for a given aspect.

2] But also, you assert: I have stated my case in my article and we shall see if there is any opposition from the statistical community . . . I think -- having already gone through the way you structured your argument, in a sequence of comments last year starting here and with further critical reading as discussed starting here [also see my response on the paper's opening at 103 which in effect sets all in the context of "creationism" including Sir Fred Hoyle's 747 example] -- that it is fair comment to observe that you set up a strawman caricature of WD's case, and knocked that over. Given human nature, even among statisticians, and the sort of strawman misrepresentations of ID and polarisations that are all too common, mere agreement of the statistical community is far from good enough. What is needed instead is an assessment on the merits, starting from your construction of what Mr Dembski had to say, then the wider issue of the relevance and credibility of Eliminationist reasoning, then the Bayesian alternative, then the real-world contexts and balance on the merits. For, no individual/ collective authority is better than his/its facts, assumptions, explanatory models and logic. GIGO.

In short, as a final fair comment for now: it remains my considered opinion that on the merits, you have still not made the case that you need to make, so whether or not the case as presented is persuasive simply tells me about the quality of reasoning in the community of your peers. All said, a lighter moment: greetings at Christmas! GEM of TKI

PS: Sir Fred Hoyle, in his "tornado forms a 747 in a junkyard" example, is making a serious point tied to statistical thermodynamics considerations, on origin of life in light of evident deep isolation of islands of function [specifications or target zones ...] in a sea of non-function; that is the POINT of Hoyle's example -- which is not a matter of mere opinion but of serious issues; considerations that have long had the OOL research programme in a tizzy, as the recent public exchange between Shapiro and Orgel underscores. (Similar issues of course obtain for body plan innovation level macroevolution, e.g. the Cambrian revolution.) kairosfocus
StephenB[85], I will try to answer your new questions. They are many and I will keep my answers short. As for your first 4 question marks, I think it is more or less impossible to apply statistical methods in such an ambitious scheme as Dembski's. Next question mark: I cite Fred Hoyle because it's a famous quote and it's pretty colorful, whether you agree or not. Number 6, as I have pointed out many times, I am selective based on my expertise. Number 7, of course I am not a "neutral observer." Who is? I am, however, perfectly neutral when it comes to the math and statistics involved. Number 8, what do you mean? You talk about one theory and another. Dembski's filter vs. evolutionary biology? That's apples and elephants. The filter is not a theory, it is a method. As far as I know, there is no ID theory that explains the origin of species. Is there? With that caveat, my answer to number 9 is that I rely on the authority of scientists to establish and explain science. I don't believe in conspiracy theories. If only evolutionary biologists and no other scientists supported evolution, I'd be suspicious. I know I don't have all the answers to all the questions, but at least I've made an attempt to address each one. Prof_P.Olofsson
StephenB[85], My two questions (1) and (2) on Olofsson I were not directed at you personally, as you can check for yourself. They were posed because I think they're relevant and might generate some interesting discussion. As you jumped in with comments about me, prompted by those questions, I thought that you perhaps also might have an interest in answering them. If you don't, I have no intention of pressing you. Prof_P.Olofsson
StephenB[85], Regarding questions, yours and mine, see my post[84]. Prof_P.Olofsson
----PO: "Have a pleasant weekend!" Thanks, I hope you have one too. I haven't yet declined to answer your question. On the other hand, my standard is that we must all be held accountable, which means that you should answer my questions first. My earlier reprieve was an attempt at being gracious, meaning I decided not to press the issue. In the meantime, you have asked me to be held accountable with questions of your own. So, the game is back on. Besides, I didn't show the relevance of my questions the first time. I will do that now.

-----Ending your article, you wrote: "Arguments against the theory of evolution come in many forms, but most share the notion of improbability, perhaps most famously expressed in British astronomer Fred Hoyle's assertion that the random emergence of a cell is as likely as a Boeing 747 being created by a tornado sweeping through a junkyard. Probability and statistics are well developed disciplines with wide applicability to many branches of science, and it is not surprising that elaborate probabilistic arguments against evolution have been attempted. Careful evaluation of these arguments, however, reveals their inadequacies."

If you have enough confidence to say that Dembski's approach is wrong, why do you not also have enough confidence to say which approach would be right? If you are so sure about what does not work, how about showing us what would work? Or, are you saying that these arguments cannot be applied to formal statistical methods at all? If you cannot answer the latter question, how can you answer the first two? At the same time, you seem blissfully unconcerned about the incredible improbability of the alternative argument, namely that a cell could emerge randomly. Fred Hoyle dramatizes the point that such an event is most unlikely and you even cite his example. What do you make of that? Evidently, you disagree with him. That's odd, because Darwinism's mechanisms are neither described with enough precision nor defined with enough consistency to be measured at all. Why are you so selective with your rigor? Doesn't that selectivity suggest that you are not a neutral observer? Doesn't it seem reasonable that a theory that is precise enough to be subjected to mathematical scrutiny should be preferred over the one that is not? Or, as you suggest in your article, do you accept the less precise theory solely on the authority of a majority of evolutionary biologists? StephenB
StephenB[81],
I am having fun, aren’t you?
Yeah, but I want to have more fun!
When it comes to answering questions, you are a little behind the curve. On the other thread, I posed questions to you at 140, 145, and 188. First things first.
OK, fair enough. Your question in 140 was answered by yourself in the same post and also addressed by me two posts down. There are no questions in 145 and 188, but we went back and forth a while. Your last comment was "fair enough" and I thought we were done. At the very least, I didn't feel there were any unanswered questions and, looking back, I can't find any. You don't have to answer my questions. They are difficult. Have a pleasant weekend! Prof_P.Olofsson
rockyr[82], Well, you have mostly talked about the philosophical underpinnings of probability without suggesting what would or should possibly change. My view is that the axiomatic approach we use today is the correct setting as it is independent of any one particular interpretation, and results such as the Law of Large Numbers and Central Limit Theorem, that are observable in practice, can be proved within the theory. I see no benefit in defining probabilities as limits of relative frequencies which seems to be what von Mises advocated. I don't want to misrepresent his position though, and I have pulled out my old copy of "Probability, Statistics and Truth" for further reading. He seems to be skeptical of the Principle of Indifference that Dembski relies on in some of his writings which may be an interesting point. Anyway, I wish you a good weekend as well! Prof_P.Olofsson
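The remark that results like the Law of Large Numbers are observable in practice is easy to illustrate. A minimal sketch, assuming nothing beyond repeated fair coin flips (the flip counts and the seed are arbitrary choices for the illustration):

```python
import random

random.seed(0)          # arbitrary seed, for reproducibility only
heads = 0
flips = 0
for target in (10, 100, 1_000, 10_000, 100_000):
    while flips < target:
        heads += random.randint(0, 1)   # count 1 as heads
        flips += 1
    print(f"after {target:>7,} flips: relative frequency of heads = {heads / flips:.4f}")
```

The running relative frequency settles near 1/2, which is the empirical face of the theorem; on the axiomatic view described above this convergence is something the theory proves, rather than the definition of probability itself.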
Prof_P.Olofsson, 80, For several reasons, I don't want to get involved here in the filter debate; I think this would be best done by Mr. Dembski and his associates, if he is willing to do it on this blog. In this respect all I am asking for is that ID proponents get a fair chance in the public forum to present and discuss their ideas, because they are worthy of a serious in-depth discussion. In other words, I hope ID gets the respect worthy of real science, not the usual labels like "creationism" or "pseudo-science." As I said, I think that the best and the most beneficial approach for both sides would be to have a serious look at the philosophical underpinnings of both the evolutionary and design approach. Have a good weekend. rockyr
-----PO: "Shouldn’t you answer my questions (1) and (2) from Olofsson I instead of bickering? Wouldn’t that be more constructive, not to mention more fun?" I am having fun, aren’t you? When it comes to answering questions, you are a little behind the curve. On the other thread, I posed questions to you at 140, 145, and 188. First things first. StephenB
rockyr[78], OK, so where does it lead us? Can we use Kolmogorov's axioms? What are the implications for statistical inference and for the "explanatory filter"? I think it's time for you to be a bit specific. You obviously have objections to what I wrote but it's hard to tell what they are. Prof_P.Olofsson
Mark Frank, 77, Yes I do. I admit it is not easy, even at the higher philosophical level, but, as I said before, things don't get any easier once they are clouded by a veil of complex math. You still have to deal with the same problems, but now you need to cut through the complex math as well. rockyr
Prof_P.Olofsson, 76, From what I have read and what I know, I think von Mises was one of the most intelligent and rational modern philosophers of chance & probability. He criticized where criticism was necessary, and he praised where praise was due, such as when considering the ideas of Kolmogoroff. He even considered delicate psychological and physiological aspects, even para-psychology. I like von Mises' definition of probability and chance, i.e., the former defined with respect to a proper random collective vs. an improper non-random collective. As he says, this is the only truly scientific way to deal with the mess. But, if you disagree, I would be interested in hearing your opinion. I am always ready to learn something new. rockyr
rockyr[72],

That is why my approach is an attempt to transpose the key argument into a domain which I think is better suited for a quicker & easier resolution of the deep misunderstanding between the two sides.

You really think that arguing about the philosophy of probability will lead to a quicker and easier resolution???? Mark Frank
rockyr[72],
That is why I am not prepared to go to such depth of highly specific arguments.
A perfectly reasonable and acceptable position. If you don't mind it though, I am curious about one point. Did you mention von Mises and relative frequencies as the "correct" approach or was it just an example of one point of view? Prof_P.Olofsson
StephenB[73],
To pretend that Bayesian statistics is independent of Bayesian epistemology
No such pretense on my part. Bayesian statistics is, of course, dependent upon Bayesian epistemology (the parameter as a random variable). And explaining how Bayesian inference is carried out in a simple example is not "providing a point of view." Shouldn't you answer my questions (1) and (2) from Olofsson I instead of bickering? Wouldn't that be more constructive, not to mention more fun? Prof_P.Olofsson
Hello JT, I am enjoying our brief exchange, however I rarely have enough free time to get into full fledged discussions anymore. I do think that you are bringing some clear thoughts about the issue to the surface, and I do agree with you on some matters. I would like to specifically reply to one thing you said, since I think that it is quite key to the whole issue, and I've noticed a lot of people don't think about. You state: "However, It seems ID would say that intelligence is something different from law, but its not." The slight, yet extremely important, thing that is missing in your statement is that although intelligence may operate in a lawful manner (and I see no reason why it doesn't), it also makes use of information. Information, as I've previously alluded to, can be very simply seen as different types of organization. If the organization of the units necessary to realize the intelligent program are not themselves the result of mere regularity or physical properties of the units, then there is something other than law involved, either just plain "randomness" or something else which can coordinate law and chance processes. I'll leave it at that for now, since I've already delved into that a bit in my previous comments. Law is a part of intelligence, but to say it is just law is not giving the whole account. The organization (not defined by regularity or by physical properties of the units/bits/letters) necessary for intelligence needs to be accounted for. Previous intelligence, maybe? JT: "The above is all stated rather informally. I suppose its possible you might reply the above is what No Free Lunch is all about, only stated much more rigorously." Yes, no free lunch is from where Dembski began to develop his theorem re: Conservation of Information. You won't get more CSI out of a program than the CSI of the program itself. Again, a program operates on law and information (or organization) and can make use of random events. IMO evolutionary algorithms are an excellent example of the coordination of previous intelligence, laws, and chance. CJYman
----"Prof Olofsson: "You would know. I’m not criticizing, I’m educating. Besides, blogging at UD isn’t my entire life, believe it or not." No, you are providing a point of view. To apply mathematics at this level requires judgment and, at some level, a grounding in some philosophical perspective about how it is supposed to be used. To pretend that Bayesian statistics is independent of Bayesian epistemology and therefore solely a matter of factual application is to dismiss a serious point. StephenB
prof_P.Olofsson, Mark Frank, [ re 59, 60, 63, 66], (I am getting funny narrow formatting in preview for some reason, don't know why.)

I am sure that blogging isn't the entire life of most people who post here; many of us have to work for a living. That is why I am not prepared to go to such depth of highly specific arguments. Besides, nobody is paying me to engage in such specialist debates. I don't think that highly specialist arguments are transparent enough for most non-specialists. You are right, Prof_P.Olofsson, that you deserve a technical answer to your technical criticism. If others are willing to do it in a concise way limited by the capabilities of a blog, great, I'll be glad to read. But I don't think such nitpicking, even if it may turn out important, is of interest to the general public, and it certainly does not validate wholesale far-reaching denigration or demonization of ID and its concepts. Especially when you consider what the other side has to offer. (I am not impressed by the practical results of evolutionary biology and by the related math and statistics.) Besides, this is not just a strictly technical issue of being correct or incorrect using some statistical methods or approaches; there is philosophy of chance and probability involved at almost every step even in such highly technical issues. That is why my approach is an attempt to transpose the key argument into a domain which I think is better suited for a quicker & easier resolution of the deep misunderstanding between the two sides.

My approach is to challenge you, and all opposing evolutionary scientists & statisticians, to reveal the philosophy or theory of chance and probability to which you or they subscribe. Surely, being a probabilistic mathematician, you would admit that a proper understanding of basic definitions and principles is of utmost importance in the science you do. There are many examples from even not too distant philosophy of chance & probability that highlight a need for such an approach, since, due to the complexity of even the basic concepts, the history of chance is strewn with an unusual amount of nonsense and confusing semi-sense. I would be willing to give examples of how even very bright and respected mathematicians and scientists of the highest calibre made outright stupid mistakes in basic philosophical reasoning which they used to power the machinery of their mathematics and statistics.

Mark Frank, I am not saying that all statistics is affected by these errors in basic philosophy. A lot of it is correct and it works for many specific purposes. What I am criticizing is that when such often trivial or wrong philosophies of chance are used or applied to the most complex phenomena which deal with biology, intelligence, psychology, etc., sheer insulting nonsense is often produced, especially for those who know about the difficulties involved in probabilistic reasoning. And this nonsense is then presented to the general public as the great achievement of science, with far-reaching consequences. rockyr
kf[70], There is a difference between what Bayesians do and what Dembski claims that they do. Bayesian inference is not merely applying the simple Bayes' rule to events. I have stated my case in my article and we shall see if there is any opposition from the statistical community. I am happy to say that we are in perfect agreement that P(even number)=1/2. A true kumbaya moment! Prof_P.Olofsson
Prof PO: Not quite:

1] WD states that Bayesians (like the rest of us) sometimes, even often, calculate probabilities relative to not just individual specific outcomes but identifiable clusters. [Consider, what is the probability of tossing an even number with a fair, six-sided die: P(2 or 4 or 6) = 3/6.]

2] He constructs a particular, simple model for how a Bayesian would approach the situation of design/non-design in a case that, if carried out properly, is equivalent to tossing a fair coin. [Remember, if there is a significant deviation from what a fair coin would do, that is sustained for a long run -- here over a decade it seems, that is design by negligence of duties of care.]

3] He then discusses the context in which such would come up as an issue -- i.e. the "oddness" of the result. That is, though targets and rejection regions are not in the explicit language and theory of Bayesian investigations, in praxis, under the relevant circumstances, that is what would come up implicitly.

4] As we have discussed, I am well aware that likelihood-style investigations look for relative degree of support across a spectrum of alternative hyps; but in this case, anything that is well away from 50-50 and produced such a run as was seen was evidence of design. And that is of course based on an intuitive form of the elimination approach. Cf my note's appendix.

GEM of TKI kairosfocus
Mark Frank[66],
If we were to follow the practice of not proceeding with a field of study until the underlying philosophical debates were settled we would have to rule out almost every intellectual endeavour on the planet.
Excellent point! Should we wait for the resolution of the philosophical issues regarding the Axiom of Choice before we do calculus? Prof_P.Olofsson
kf[64], Thanks for displaying the quote by Dembski. Anybody can now see that he claims that a Bayesian analysis needs to consider E* and its probability, and as I have demonstrated, this is not the case. You repeat his claim when you say
Bayesians do estimate probabilities for clustered potential outcomes
so I ask you, again, for an example. There are thousands of papers published on Bayesian statistical inference so there ought to be plenty of examples of what they according to Dembski do "routinely." Prof_P.Olofsson
KF, things haven't changed much since 2007. tribune7
Rockyr 58

"The problem with these highly technical arguments is that most non-specialists quickly lose interest in them, both sides claim to be right, and nothing gets resolved"

Are you claiming that the philosophical arguments are easy to follow, compelling for the layman and get resolved? The fact is that statisticians such as PO and their maths do agree about a vast range of things and actually make a difference in the real world. Experiments designed by statisticians do establish the effectiveness of drugs etc. If we were to follow the practice of not proceeding with a field of study until the underlying philosophical debates were settled we would have to rule out almost every intellectual endeavour on the planet. Mark Frank
Professor, I stand corrected. I was wondering why I was enjoying pointing out that our "superstitious, theocratic anti-science" is more scientific than your "science" in our discussions :-) tribune7
All: A few contextual notes are in order, and I want to pick up a point by Mr Baxter further, but briefly.

1] Baxter, 28: "The environment selects and improbable structures can so be constructed over time and generations . . . . Your improbabilty arguments, in my opinion, serve only to provide a verbal fog you can hide behind."

PB has of course not come back to the responses by GP and the undersigned at 36 - 39, and the onward links. So, on correction: I (and other design thinkers) do discuss the facts and reasons for concluding that islands of function for first life and biodiversity are so deeply isolated in config space that the probabilistic resources of the observed cosmos would by overwhelming improbability be exhausted on relevant chance hypotheses. So, we can see easily enough where the balance on the merits lies.

2] Context and the PO paper

Prof PO's paper was commented on at length in the July 20, 2007 Kevin Padian thread, after he introduced the subject in 19. (Cf my initial remarks at 20 - 21 and again at 25 - 27, 33, 40 & 49 on first seeing the paper.) Sal Cordova's comment at 54 & 57 on the flagellum still bears reading. The exchange KF-PO that follows from 58 - 59, 65 is worth re-reading. In 68 (cf DS at 71 and my response at 72) I took time to show why the statistically dominant clusters tend to crowd out rare and significant ones [the foundation of the usefulness of Fisherian elimination]. DS at 79 - 80 is again worth a comparative look for this thread and the previous. Cf PO's claims at 82. This set up my own "review" of key claims and rhetoric in the paper in 89 - 91, footnoting on CSI in 92 and coming back on a further point in 103 and additional notes at 110 - 111. PaV adds significant observations at 102 and 117. Observe PO's Bayesian analysis of Caputo at 139, to which I add that per statistical process control, when we see a persistent run to one side in a situation which is supposed to look like typical fair coin flipping, a change in method is indicated if there is INTENT to be fair. In short, the "biased probability" model and apparatus for assessing which {p, (1 - p)} model best fits the data through likelihoods and the like is utterly beside the point.

In the end, Bayesians do estimate probabilities for clustered potential outcomes [though they do not discuss them under the explicit labels: rejection regions or targets, etc. ...], which in a reasonable Bayesian design inference scenario sets up implicit dependence on elimination-based approaches. Second, the latter rests on the statistically sound idea that most samples reflect the dominant clusters in a population of possible outcomes, so it is both effective practice and reasonable practice to reject claimed chance-based outcomes that are "too unusual" to pass the plausibility test, at some reasonable confidence level. Dembski thus has a serious point when he said as recently as 2005, pp 35 ff:
observation never hands us composite events like E* but only elementary outcomes like E . . . Within the Fisherian framework, the answer is clear: E* is the rejection region . . . Bayesians, however, offer no account of how they identify the composite events qua specifications to which they assign probabilities. If the only events they ever considered were elementary outcomes, there would be no problem. But that’s not the case. Bayesians routinely consider such composite events. In the case of Bayesian design inferences [WmAD constructs and discusses a Bayesian design inference scenario] . . . those composite events are given by specifications . . . . To infer design, Bayesian methods therefore need to compare the probability of E* conditional on the design hypothesis with the probability of E* conditional on the chance hypothesis. E* here is, of course, what previously we referred to as a target given by a pattern T. It follows that the Bayesian approach to statistical rationality is parasitic on the Fisherian approach and can properly adjudicate only among competing hypotheses that the Fisherian approach has thus far failed to eliminate. In particular, the Bayesian approach offers no account of how it arrives at the composite events (qua targets qua patterns qua specifications) on which it performs a Bayesian analysis. The selection of such events is highly intentional and, in the case of Bayesian design inferences, presupposes an account of specification. Specification’s role in detecting design, far from being refuted by the Bayesian approach, is therefore implicit throughout Bayesian design inferences.
GEM of TKI kairosfocus
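For concreteness, the two calculations being contrasted above can be put side by side in a few lines of Python. The 40-of-41 figures are the ones usually quoted for the Caputo ballot-line case, and the p = 0.9 "biased" value is an arbitrary assumption made purely for illustration; neither number is taken from Prof PO's paper.

    from math import comb

    n, k = 41, 40   # assumed Caputo figures: 40 Democrat top lines in 41 drawings

    # Fisherian-style elimination: tail probability of an outcome at least this
    # extreme under the fair-draw (p = 0.5) chance hypothesis.
    p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    print(f"P(at least {k} of {n} | fair draws) = {p_tail:.2e}")   # roughly 1.9e-11

    # Bayesian-style comparison: likelihood of the same outcome under an assumed
    # biased model (p = 0.9) versus the fair model.
    like_fair = comb(n, k) * 0.5**n
    like_biased = comb(n, k) * 0.9**k * 0.1**(n - k)
    print(f"likelihood ratio (biased : fair) = {like_biased / like_fair:.2e}")

The arithmetic itself is not in dispute in the thread; the disagreement is over which of the two questions is the right one to ask.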
rockyr[58], You say
The problem with these highly technical arguments is that most non-specialists quickly lose interest in them, both sides claim to be right, and nothing gets resolved.
Again, this mysterious idea that arguments are irrelevant because they are difficult to follow. A layman could not pinpoint Dembski's errors, but I have pointed them out and I challenge you to find a single statistician who disagrees with me. As for PaV, his "criticism" was based completely on misunderstandings that I tried to explain on the blog and will continue to do in private exchange. OK, let's leave it at that and go on to your next comment:
What I tried to point out is that these statistical & probabilistic concepts contain the same fallacies which have not been satisfactorily resolved at the higher (logically prior) philosophical level, and that it is much easier to argue at a higher level where everything is clearer and easier. So rather than hide behind a veil of complex math, it makes more sense to think things through at the conceptual level.
Time to be constructive. What are these fallacies and what do they imply for statistical inference and the explanatory filter? As you claim they have not been resolved, I suppose you would strongly disagree with Dembski's technicalities, not only with my criticism thereof? Prof_P.Olofsson
StephenB[61], You would know. I'm not criticizing, I'm educating. Besides, blogging at UD isn't my entire life, believe it or not. Prof_P.Olofsson
PO: Ah yes, the life of the critic is so much easier than the life of the builder, isn't it? StephenB
rockyr[57],
So rather than hide behind a veil of complex math, it makes more sense to think things through at the conceptual level
Seriously, it sounds like you are talking to Dembski here. I have not introduced any complex math, I merely respond to his writings. Prof_P.Olofsson
rockyr[57],
Olofsson provides a superb analysis of the fallacy of Dembski’s treatment … reveals Dembski’s distortion of Bayesian approach…” etc. Do you agree with such far reaching criticism?
Well, I don't know about "superb" but Dembski certainly distorts the Bayesian approach. Prof_P.Olofsson
Prof_P.Olofsson, [re 50&51] Panda's Thumb, and perhaps others, have used your paper and claim that it offers "a devastating critique of Dembski’s and Behe’s mishandling of probabilistic and statistical concepts ... Olofsson provides a superb analysis of the fallacy of Dembski’s treatment ... reveals Dembski’s distortion of Bayesian approach..." etc. Do you agree with such far reaching criticism? Is this about mishandling statistical concepts? Anyway, others, like PaV, argued with you in this respect. The problem with these highly technical arguments is that most non-specialists quickly lose interest in them, both sides claim to be right, and nothing gets resolved.

What I tried to point out is that these statistical & probabilistic concepts contain the same fallacies which have not been satisfactorily resolved at the higher (logically prior) philosophical level, and that it is much easier to argue at a higher level where everything is clearer and easier. So rather than hide behind a veil of complex math, it makes more sense to think things through at the conceptual level. And, do you believe that such a high-level philosophical clarification of probabilistic concepts has been done by the evolution & Darwinism side? Are these concepts clearly explained and understood? Do many or most "evolutionary" scientists understand them?

As far as the "nonsense" (popularly also known as "garbage in, garbage out"), I hope you would agree that if garbage is put in or represented by the formulas, then even if the formulas & math are "technically" correct, the practical usable result will be "garbage out," or nonsense. I am glad that you are interested in the philosophical concepts and the meaning behind them, but I don't agree with you that the issues and concepts "do not do much for practical application in sciences." As you likely know, the history of chance and probability, up to concepts like the frequentist, Bayesian and Fisherian (and many other concepts devised by various individuals), is one big "deja vu" (as one recent researcher of probability put it). The basic concept of probability also suffers from the dualism which affects other sciences (aleatory vs. epistemological), so, in popular terms, it is one big mess.

In practice most scientists using probabilistic concepts simply ignore all this, assume that all is well, and proceed to do their science and present their theories and proofs, ignoring the simple fact that what they propose at the higher biological or genetic level may be sheer philosophical nonsense. The general public then swallows all that hook, line, and sinker. I am saying that this detachment of concepts from the technical or calculating device level (math and statistics) is not without consequences; I think you cannot just work out a generic statistical scheme or a formula without applying the conceptual knowledge of the things you are dealing with. We may be in a trap, but then science and honest scientists should honestly admit what they don't know and how their theories and proposals may actually turn out to be nonsense. rockyr
tribune7[55], I was joking by quote-mining you a bit...count the negations! Prof_P.Olofsson
Tribune, Olofsson was using a double negative. CharRose
I have never dogmatically insisted that design must not be considered anti-science! I know, but there are those who do, and it's fun to point out that our "superstitious, theocratic anti-science" is more scientific than their "science." tribune7
tribune7[53], I have never dogmatically insisted that design must not be considered anti-science! :) Prof_P.Olofsson
Second, “the issue” was the particular issue of inferring intelligent design of us by recognition of design as such. As we have no other empirical evidence of design than our own, I don’t think it’s a valid inference. Professor, if we come to understand that things designed by us have specific, consistent, empirical characteristics, and then we find these characteristics in things we know we didn't design, why would it be invalid to infer design? I'm not saying that we should treat this inference as something dogmatic and unquestionable, but why not just say, hmmm, it looks designed, so maybe it was? Or at the very least, why would it be less valid to assume design than to dogmatically insist that design is impossible/must not be considered/anti-science etc.? tribune7
kairosfocus[36], I can't find your email address at the moment. Could you please drop me a line? Prof_P.Olofsson
rockyr[46],
Because without understanding what your mathematical symbols mean, your sophisticated statistics and math is, (I am trying to be polite), well, just plain nonsense,
First of all, Dembski is the one who raised the bar regarding mathematics and statistics; I am merely responding to him and explaining to others what his errors are. Second, are you saying that somebody who doesn't understand mathematical symbols should conclude that math is nonsense?
The whole debate about Bayesian vs. Fisherian ought to be debated not in some obscure statistical arena, [...], but it must be first resolved philosophically! The debate is, or ought to be, about the meaning of probability in terms other than “relative frequency,” which, as von Mises claimed, was, and is, the only valid scientific meaning of probability with a solid philosophical backing.
The "obscure statistical arena" is highly relevant to those who actually use these methods to solve problems each and every day. Yes, there are philosophical issues that are very interesting to discuss, will never be resolved, and does not do much for practical applications in the sciences. In modern probability, we use Kolmogorov's measure-theoretic axioms which do not rely upon any particular interpretation of probability. Prof_P.Olofsson
rockyr[46], Thanks for your kind words, and also for your calm, civilized, and erudite tone! Let me respond to a few of your points. I'll break up my reply so individual comments do not run too long.
You have revealed your Achilles heel in 199, where you said that the issue was too philosophical for you, and that you were more interested in the mathematical & scientific debate, to which StephenB aptly replied, in 231: “So, It seemed odd to me that you would question the mathematics behind the principle and yet have no interest in the principle itself,…”
First, my article was about the mathematical and statistical arguments put forth by Dembski and Behe, and it was published in the journal Chance, which is published by the American Statistical Association. It was never my intent to criticize every aspect of ID, only those that were in my field of expertise. Second, "the issue" was the particular issue of inferring intelligent design of us by recognition of design as such. As we have no other empirical evidence of design than our own, I don't think it's a valid inference. We're in Plato's cave, or, if you wish, in a Gödelian trap where we can only make inferences within the system and not about the system, from a philosophical point of view. Third, the statement "would question the mathematics behind the principle and yet have no interest in the principle itself" is vague and misleading. Vague, because it's unclear what "the mathematics behind the principle" and "the principle" refer to. Misleading, because I have said that I am not uninterested in "the principle itself," but I prefer to contribute to the part of it where I know precisely what I'm talking about. Sure, I can go on and debate biology, epistemology, theology, and whatever else enters the discussion, but time is limited and there are many others who could make better points. Prof_P.Olofsson
[48 cont.] As I've alluded to previously, it's the idea that if a preexisting process f acts on some thing x and the result is y, then f(x) equates to y, in terms of probability as well. In other words, the only way you can explain anything is by the preexistence of something that directly equated to it. You don't need nonmaterial minds or a design inference to infer that.

But there is one thing that I left out. In the scheme of Darwinian evolution I believe they would say that a big chunk of f(x) is purely random, i.e. came into existence for no reason at all. The mutations are a huge part of that (and presumably embodied in x, with f being the natural laws), and the mutations are purely random and causeless. But it's not as if Darwinians or anyone else for that matter would accept as an explanation for life that molecules just flew together for no reason at all. They adamantly deny that evolution equates to that in any way. But what percentage of life is accounted for by the mutation side of the equation? 80%? 90%?

Supposing the natural laws are eternal but simple: if just by spontaneous chance an eye did completely form one day, strictly through mutations, it's not as if the laws f (at least in a Darwinian scheme) could say, "Yep - that's an eye all right - let's keep it!". First of all, it would take a program of a certain size to recognize a fully formed eye, and in a Darwinian scheme there just wouldn't be anywhere near enough information in the natural laws f to give them credit for the eye. The eye in this scenario came into existence completely by chance, and f (natural selection, natural laws) had nothing to do with it.

So my point is, Darwinists need to be nailed down on exactly what percentage of the genome the completely random mutations are responsible for. If it's even 50%, then it's equivalent to saying you can take away 50% of the currently existing life forms on earth and complete randomness can account for the remainder. However, if say 90% of the information in the genome is explained by the laws f, then that seems to be a much more realistic scenario. If f is laws but eternal, it didn't come into existence by chance - it's just always existed, and that's a much more tenable scenario. So disembodied intelligence isn't the issue, but then again maybe it is - after all, even evolutionists would admit that natural laws are disembodied. The only relevant question is: how complex are the laws, and what percentage of the genome are the mutations solely responsible for? However, it seems ID would say that intelligence is something different from law, but it's not. The above is all stated rather informally. I suppose it's possible you might reply that the above is what No Free Lunch is all about, only stated much more rigorously. JT
CJYMan [44][45]: just wanted to acknowledge I read your latest post. There's nothing there I really want to debate at the moment. ID advocates have a pretty intricately developed rhetoric for talking about "mind" and "intelligence" and you've comported yourself pretty well in that department, I just don't feel like debating it. I don't mean to demean it by calling it rhetoric either, I just can't think of a better term at the moment. I also wanted to acknowledge that the more I think about it, Dembski's design inference does seem like an ingenious piece of work regardless of the extent to which it may be incorrect. Admittedly, it's an ingenious idea, and I don't think he should be maligned. It's just that I believe there's a much more basic and compelling idea that's being obscured - by all the talk of nonmaterial minds as well as all the subsequent research on what evolutionary algorithms are purportedly incapable of achieving. As I've alluded to previously, it's the idea that if a preexisting process f acts on some thing x and the result is y, then f(x) equates to y, in terms of probability as well. In other words, the only way you can explain anything is by the preexistence of something that directly equated to it. You don't need nonmaterial minds or a design inference to infer that. But there is one thing that I left out. In the scheme of Darwinian... JT
William Dembski: "(1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection." Why is a bombshell of this magnitude announced in the form of a reply, buried in an on-going topic? Next: "Chance" and "Design" are mutually exclusive. The former term has a long pre-history that belongs to a group of terms describing the action of material-natural causation-agency. The latter term has a long pre-history that belongs to a group of terms describing the action of Divine-supernatural causation-agency. The origin of material agency is thus described as "horizontal." The origin of Divine agency is thus described as "vertical." Horizontal agency, because it is inanimate and unconnected to Vertical agency, must be unguided, unsupervised, mindless, random and chance. This agency was proposed by Charles Darwin because Vertical agency, Intelligence and Design (attributes of invisible Designer), along with guidedness, supervision and purpose, were judged to be absent from nature. This is the proposal that Science eventually accepted, hence: *natural* selection (= material agency). Darwinism says God or Mind and His power is not involved with nature-reality (Atheism ideology). Creationism-ID says God or Mind and His power is involved with nature-reality (Book of Genesis ideology). The choice, according to Philosophy and History of Science, Darwinism and British Natural Theology, is not a combination of agencies operating in nature, but one or the other, not both. Thus Dembski's announcement that "chance" and "design" to not be mutually exclusive is evidence of "contrary-fusion"----the fusion of contrary or contradictory concepts or ideas" (= confusion). "Chance" belongs to horizontal agency. It says God is not involved with biological production. "Design" belongs to vertical agency. It says God is involved with biological production. Since Intelligence and Design, organized complexity and adaptation are recognized universally to exist in nature, mutation or variation cannot be the result of a fundamentally random or chance Darwinian process; but the same is designed, guided or supervised. Ray Martinez, student of British Natural Theology. R. Martinez
Prof_P.Olofsson, I am sure many UD people reading your thread appreciate that you are ready not only to cast criticism, but are willing to stand up and defend it in the dragon's den, so to speak. May all critics of ID be so honest and courageous. There were good arguments thrown at you by StephenB, gpuccio, PaV, etc. Still, such a valiant attempt can leave one exposed, and reveal one's weaknesses. You have revealed your Achilles heel in 199, where you said that the issue was too philosophical for you, and that you were more interested in the mathematical & scientific debate, to which StephenB aptly replied, in 231: "So, It seemed odd to me that you would question the mathematics behind the principle and yet have no interest in the principle itself,..."

What you have been saying is so typical of the whole evolution & Darwinism debate, and that is the main reason why many rational and well educated people just cannot swallow it, even after 150 years. It's not so much about science or statistics, which we can sort out sooner or later, but more about the meaning of key terms and principles. Because without understanding what your mathematical symbols mean, your sophisticated statistics and math is, (I am trying to be polite), well, just plain nonsense, or worse, as Disraeli (or Mark Twain) famously quipped: there are lies, damn lies, and there are statistics.

Since Fisher cast biology in statistical terms, the term probability, or chance, has acquired key importance in biology, yet, from the very beginning, there were serious charges against its philosophical meaning. None other than the "father" of modern probability, Richard von Mises, actually criticized Fisher on the ground that his likelihood was nonsense. The whole debate about Bayesian vs. Fisherian ought to be conducted not in some obscure statistical arena, which most people who know what statistics is all about don't want to venture into, but it must first be resolved philosophically! The debate is, or ought to be, about the meaning of probability in terms other than "relative frequency," which, as von Mises claimed, was, and is, the only valid scientific meaning of probability with a solid philosophical backing. rockyr
JT: "There wasn’t any sort of pattern that Dembski presented in that paper that he remotely suggested could not be generated by a program or laws." Of course. And this is where fine tuning and the "somewhat vacuous COI" (according to you) enters. You seem not to be understanding Conservation of Information. Of course CSI can be "generated" by programs. EAs unfold CSI all the time. But these programs are not "just law." They are a highly improbable and specified, contingent organization not based on the laws of the material used. So what else do we have to account for them? They are law, but are not completely law ... information is involved. Again, you are merely pushing the information back a level and not accounting for its origin. If CSI can't be generated by "just law and chance" then neither can the EA which produces the CSI. JT: "There are laws of nature and there is a general presumption that those laws are simple, and ID tends to exploit such a vague assumption to make common-sense type appeals about what we could reasonably expect a very limited set of laws to accomplish." The more laws and variations of those laws which are at disposal, the harder it will be (in a probabilistic sense) to fine tune only the right set of laws to provide a configuration which will produce CSI. IOW, write a program which receives statistically random background noise and filters that noise based on a randomly applied (again via noise) assortment of laws and variations of those laws. Will those filters generate CSI or an EA? What will happen as you apply more and more possible laws and variations? JT: "But surely you can’t be thinking that there exists some binary string that cannot be the output of an automated process (i.e. a set of laws)." The ID hypothesis involved here is that there are binary strings which, when produced are not well explained by "just law" or "just chance" or any combination of the two. Be careful not to forget the informational arrangement of law and chance. This is what some people don't think about when discussing programs and sets of laws. You can't just push the issue of information (in this case CSI) back a level and say that you've accounted for it. As to the rest of your post, I have no problem with mechanism and I don't agree with everything that Dembski says, especially when it comes to algorithmically compressible patterns, however they are a good starting point to get the idea of the type of measurements involved. And I have my own ideas as to what intelligence involves and why it is a better explanation than "an infinite regress of active information." Some of these reasons include arguments from consciousness, argument from COI (which would imply that to reach intelligence, you must start with the same improbability of organization -- which would then most likely itself be intelligent), and I think that an "intelligence-intelligence loop" should at least be on the same scientific footing as an "infinite regress of active information" until we can discover which is the better explanation. CJYman
JT: "It can’t be arbitrarily chosen, as the coded form could be too limited to map to the intended message. If the Eskimos have a hundred different words for snow, then something is lost when their message is translated to English." My apologies. I don't know why I used the word "arbitrarily" in this case. What I meant is that the meaning or function can be sent on a bit string which is chosen independent of any laws of the material used. This is how a code of any form is sent across a communication channel to produce function or meaning and the same code can be sent many different ways. IOW, the material used is not important, as any material can be employed to create function/meaning. The function/meaning is not in the material but in the organization. I then explained how this relates to the definition of a specified pattern. JT: "Also, you can transmit some computer program in a number of different high level languages, but the program is still tied to some physical pattern." Yes, a physical pattern which is contingent and not caused by any properties of the material used just as the properties of ink and paper do not account for the organizational arrangement necessary to produce an essay. The origin of this type of string which is sent across a communication channel and produces function/meaning is what needs to be accounted for. Taking Dembski's definition at face value, that would be one type of specified pattern. JT: "Doesn’t this seem like a vacuous sort of thing to be trying to prove. It would seem to be self-evident that IF a set of laws coupled with chance cannot produce CSI, then they cannot do so by any means. It seems that the “by any means” is implicit in the statement." I don't think that vacuous is the right word, since many people don't seem to understand this. Conservation of Information shows that you can't get a free lunch by just invoking evolution. Doing so merely pushes the information back one level and accounts only for how the information was unfolded but not how it originated. If instead of "vacuous," you mean "obvious" then I have actually thought the same thing, however it needs to be stated since many people think that an Evolutionary Algorithm can be invoked to explain how an improbable result was obtained without realizing that the EA is under the same probabilistic constraints as the pattern exhibiting CSI in question. The math involved in Conservation if Information (which apparently has just been published by Dembski and Marks) underscores Dembski's hypothesis that EAs do not generate CSI, but merely unfold previous CSI. JT: "If a set of laws plus chance COULD find the EA to produce CSI then that would imply that this set of laws plus chance could produce CSI, but you’re assuming they can’t, thus your proof." No, it starts with a description of law as regularity and chance as statistical randomness. Then a definition of a pattern which exhibits neither mere randomness nor regularity and yet is specified is given. The pattern is seen to be both specified and contingent and I would add algorithmically complex thus ruling out law (as a mathematical description of regularity). The probabilistic resources are factored in and the pattern is shown to also be complex (most likely beyond chance processes). It is observed that these types of patterns are routinely generated by intelligent agents. 
Furthermore, foresight (the awareness of future targets) and the ability to apply that foresight to organize law and harness chance may be a necessary causal factor in the origin of these patterns. Thus, the hypothesis is created that CSI is outside of the scope of chance and law and requires previous intelligent causation. This can be falsified by merely showing that law and chance on their own will cause CSI to self-generate. From here, the argument is mathematically carried on to EAs. JT: "Also keep in mind that an evolutionary algorithm is itself a set of laws.)" Yes, I'm pretty sure I implied that earlier when I stated that it would have to be a set of non-arbitrary, non-random (statistically speaking) laws. It is a set of laws exhibiting CSI, since it is a highly improbable organization which produces further CSI, unlike the vast majority of possible sets of laws. Because of this, I would say EAs fall into the category of a pseudo-random specified pattern (much like the Champernowne sequence). Here's the related question: "Will an EA spontaneously self-organize from a bunch of background noise, without any foresighted guidance previously supplied by the experimenter?" My comment is getting a little long, so I'll post another to respond to the rest. CJYman
CJYMan wrote [41]: I merely *showed* that ideas and meaning are independent of physical patterns. Another example would be that you can transmit the same idea or meaning in any arbitrarily chosen coded form.

It can't be arbitrarily chosen, as the coded form could be too limited to map to the intended message. If the Eskimos have a hundred different words for snow, then something is lost when their message is translated to English. Also, you can transmit some computer program in a number of different high level languages, but the program is still tied to some physical pattern.

the recent formulations of Conservation of Information show that ... if we can’t find CSI using only law and chance, then we can’t find the EA to produce that CSI through only law and chance.

Doesn't this seem like a vacuous sort of thing to be trying to prove? It would seem to be self-evident that IF a set of laws coupled with chance cannot produce CSI, then they cannot do so by any means. It seems that the "by any means" is implicit in the statement. If a set of laws plus chance COULD find the EA to produce CSI, then that would imply that this set of laws plus chance could produce CSI, but you're assuming they can't, thus your proof. (Also keep in mind that an evolutionary algorithm is itself a set of laws.)

And this is where I somewhat part ways with the paper in question. I’m not sure if specificity through algorithmic compression of a regular pattern can necessarily count as a “design” — that is, necessarily caused by intelligence — since laws are mathematical descriptions of regularities. So, we already account for regularities with law and arbitrary or statistically random events with “chance” but we need something to describe those pseudo-random complex and specified patterns. So far intelligence is the only contender for those types of patterns and to rule it out a-priori is, IMO, a completely ridiculous hindrance to science.

Pseudo-random numbers are called pseudo because they are generated by laws. The confusion is, I think, that people don't seem to grasp that a computer program is a collection of laws. There wasn't any sort of pattern that Dembski presented in that paper that he remotely suggested could not be generated by a program or laws. There are laws of nature and there is a general presumption that those laws are simple, and ID tends to exploit such a vague assumption to make common-sense type appeals about what we could reasonably expect a very limited set of laws to accomplish. But surely you can't be thinking that there exists some binary string that cannot be the output of an automated process (i.e. a set of laws). What Dembski does is prove that some string cannot be attributed to chance. (But even here he's probably wrong.) Then by his own admission all that is left is either mechanism or "design". So he has always granted that a mechanism could output anything. But what happens is that there is always an implicit argument from ignorance in his statements, often obscured in a very frustrating way, wherein he is demanding that we assume his nonmaterial design as the cause if no one yet has been able to describe the mechanism. Really what you need to do is just dispense with everything he has said and just consider the following: If some process f acting on x results in y, then f(x) equates to y. If we are the result of some process out there in nature, then that process, along with whatever it was acting on, equates to us.
Think of the fact that some computer image would be extant in memory or on disk before being on your computer screen. It's in a different form in the two locations, but it's actually the same thing (just as you were alluding to the same idea being expressed in different forms). One already assumes we're the result of one mechanism - epigenesis. And people intuitively understand how the DNA extant before the final product, along with the machinery for decoding it, together equate to the end result (us). Well, whatever process out there in the universe, no matter how diffuse and indirect it may be, if it is responsible for our existence, then it must equate to us (in terms of probability as well), so you're only pushing back what needs to be explained. This may have been what you were alluding to when you spoke of an infinite regress of "active information", and you said that we must consider Intelligence as the ultimate cause. But what does that explain, since to you Intelligence is some monolithic thing that cannot be deconstructed or explained in any way and is certainly not a mechanism itself? Why must science pay homage to a completely nebulous thing like that? Why not just assume that at some point of regression into the past you find an infinite pool of "active information" that has always existed (if you're using the term as I think you are)? In the end I'm demanding you remove your empty postulation of Intelligence from the equation, but maybe I shouldn't be. Keep it in, or take it out - it doesn't matter.

Briefly, the reason that Dembski is also probably wrong about what cannot occur by chance is that he considers a string y and just adds up the bits of it to compute its uniform probability (as one step in his formula). However, that string's probability would be determined by the likelihood of getting a process that would output it. So to compute y's probability he should add up the bits of the smallest program-input that can output y. So the probability of a string of all 1's, for example, would be really high. But as you said, we see a lot of processes in nature that produce that type of regularity. JT
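JT's closing point can be illustrated crudely by using an off-the-shelf compressor as a stand-in for "the smallest program-input that can output y." This is only a rough proxy (true Kolmogorov complexity is uncomputable), and the 1,000-bit strings below are assumptions chosen for the example:

    import random, zlib

    random.seed(1)
    n = 1000
    all_ones = "1" * n
    random_bits = "".join(random.choice("01") for _ in range(n))

    for label, s in [("all ones", all_ones), ("random", random_bits)]:
        uniform_bits = len(s)                              # -log2 of the uniform probability 2^-n
        program_bits = 8 * len(zlib.compress(s.encode()))  # crude proxy for shortest-program length
        print(f"{label:9s} uniform: 2^-{uniform_bits}   compressed proxy: 2^-{program_bits}")

On the uniform count both strings look equally improbable; on the compressed-length proxy the all-ones string is assigned a far larger probability, which is the contrast JT is pointing to.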
I have not followed this thread at all, either in this part or in the original part. But I believe the discussion over the fossils in the Cambrian and Precambrian in another thread exhibits a use of the EF by Darwinists. We use the term intelligence loosely in the EF, but what it means is non-law and non-chance. A lot of the so-called fossils in the Precambrian are "trace fossils." These are not body forms or even parts of bodies but evidence that a life form was there. In other words, paths in the sediment made as some worm-like creature passed through. So the paleontologists rightly concluded these were due to some life form and not to any chance event or law-like process. The intelligence was minimal but instinctively the classification was made. Is this not an example of the EF being used? I am not trying to make a big deal of this but just thought it was curious given the discussion. Namely, that the EF is a natural process we as human beings use. jerry
JT Thanks for the feedback. you state: "Dembski’s paper does not contend that ideas or meaning are independent of physical patterns." My apologies if I was not clear enough on this point. But I didn't state that Dembski said that ideas or meaning are independent of physical patterns (although I think he may have done so elsewhere). I merely *showed* that ideas and meaning are independent of physical patterns. Another example would be that you can transmit the same idea or meaning in any arbitrarily chosen coded form. Then from this fact that the contingent event can be formulated as a conditionally independent pattern, we know we are dealing with something that is specified. I was merely showing how searching for a specification (as explained in the aforementioned paper) can be applied to different examples, since this is what many ID critics ask for. First search for specificity and then calculate complexity. you also state: "The word “semantic” does not appear in that paper (not to mention the phrase “functional semantic specificity”)" This is true, and I am merely showing different types of specificity as per his definition and how he explains the mathematics in the paper. It is the natural and logical extension of the idea of the application of CSI. He merely lays the groundwork and then we can take that and apply it to many different examples -- one of which produces what I have titled functional or semantic specificity. I think Kairosfocus refers to one type as FSCI or "functional specified complex information." But again it all boils down to finding a specified pattern -- one that is a conditionally independent formulation of a contingent event. Then move on to calculating complexity by using either the UPB or a "case dependent" probability bound. JT: "However take out the word random:" Yes, this is where there is a lot of hang-up on terminology. I may not have been clear on this, but I use the word random in the sense of being arbitrarily chosen, which as far as I know will produce a statistically random sample. JT: "The next question: will a set of laws cause an information processing system and evolutionary algorithm to materialize? That’s the relevant question, and ID theorists are not in a position to answer NO! on this question." You are correct; however, the recent formulations of Conservation of Information show that it is just as difficult (measured probabilistically in an information theoretic sense) to discover the path (or evolutionary algorithm) to produce CSI as it is to find the original CSI. IOW, if we can't find CSI using only law and chance, then we can't find the EA to produce that CSI through only law and chance. So we either have an infinite regress of what is called "active information" (with no explanation for its existence -- which in my opinion is an intellectual cop-out and also intellectually stultifying) or else we need to find out what else, acting in conjunction with law and chance, will produce CSI by fine tuning lawful parameters to take advantage of chance occurrences in an EA. So, my use of the word "random" is to signify an arbitrarily chosen set of laws -- which we can simulate with random number generators whose output will "hover" around a statistically random distribution. Of course, there will be isolated examples of order when any laws are in play (as Langton's ant shows); however, this order is different from the organization seen in algorithmically complex specified patterns. And this is where I somewhat part ways with the paper in question.
I'm not sure if specificity through algorithmic compression of a regular pattern can necessarily count as a "design" -- that is, necessarily caused by intelligence -- since laws are mathematical descriptions of regularities. So, we already account for regularities with law and arbitrary or statistically random events with "chance" but we need something to describe those pseudo-random complex and specified patterns. So far intelligence is the only contender for those types of patterns and to rule it out a-priori is, IMO, a completely ridiculous hindrance to science. Furthermore, according to the mathematics involved, it may be that there is a "no go theorem" in place in the form of conservation of information which may mean that in order to arrive at intelligence, you need to begin with intelligence. This would form a causal loop and IMO would be more intellectually satisfying than a "chance of the gaps" explanation involving an infinite regress of active information which has no explanation other than "it just happens to be that way -- don't ask questions." CJYman
CJYMan [26]: Just a follow-up to let you know I did read both Parts I and II of your overview of Dembski's paper. As your intention is primarily to clarify what he wrote, I think it's perfectly acceptable as far as it goes. I did make a note of several things I could have addressed in your treatment, but upon reflection it would appear contentious for me to dwell on them to any significant extent. I'll just bring up a few items.

The fact that you can send the same idea using different patterns in the same language or even different patterns by using another language shows that the ideas themselves are independent from the pattern which is sent across the communication channel. That is how we know that the idea “contained” in the pattern is defined independent of the pattern itself. We could even state the same meaning in a different way – “Do you have the ability to comprehend what these symbols mean?” Either way, the idea contained in the above pattern (question) can be transferred across a communication channel as an independent pattern of letters. This is referred to as functional semantic specificity – where specific groups of patterns which produce semantic/meaningful function are “islands” of specified patterns within a set of all possible patterns.

Dembski's paper does not contend that ideas or meaning are independent of physical patterns. That's something you're reading into it. He talks about patterns that are independent of the actual signal. The word "semantic" does not appear in that paper (not to mention the phrase "functional semantic specificity"). Maybe it does in another paper of his, but I think his objective here is to pare everything down to what he thinks he has a reasonable chance of actually demonstrating. ------------------------ The next question: will a random set of laws cause an information processing system and evolutionary algorithm to randomly materialize? According to recent work on Conservation of Information Theorems ID theorists state that the answer is "NO!" If that's what ID theorists are seeking to demonstrate, it would seem a pointless exercise because no one would disagree with the premise. A random set of laws does not increase the odds of anything, just by virtue of them being laws. NO ONE would take issue with this, so why are they seeking to prove it? However, take out the word random: The next question: will a set of laws cause an information processing system and evolutionary algorithm to materialize? That's the relevant question, and ID theorists are not in a position to answer NO! on this question. ------------------------------- In part II I would agree that an opening and closing door does not refute the Design Inference. But it seems to me there are compressible patterns throughout the non-biological component of the physical universe. I was going to suggest that there would be a nonrandom pattern of molecules in a chunk of solid matter so that you could express that pattern in a compressible way (thus indicating design). But I don't know what quantum theory does to that. But completely excluding the biological world, it seems you could find patterns throughout the physical universe that would have to indicate design in the ID scheme of things. JT
GP: Thanks. Mr Baxter: Please note that all my comments at UD are in the context of the online note that is linked through my handle in the LH column. I believe that sections B and C will be relevant to several of your remarks above. In particular, I think you will see that the issue is not with whether or not Mt Improbable has a gently sloping, easy back-path, but with getting TO the shores of the islands and archipelagos of bio-function that are marked by various body plans. In the case of the Cambrian, we need to account for some 30 - 40 phyla and subphyla that turn up in the fossil record in a window of some 10 MY, on the usual timelines.

The significance of this is that, first, to get to first life you have to get to some 300 - 500 k bases, plus the executing machinery, codes and algorithms; then, to get to onward body plans, you credibly need at least dozens of millions of base pairs, dozens of times over. One base pair has four states [A/G/T/C] and stores two bits. Just 250 - 500 bases would imply a sea of 10^150 - 10^301 configurations, and that is the threshold that would exhaust not merely the search resources of our home planet, but those of the observable universe. So, the issue is not the blind, non-purposeful, unfit-destroying culling filter known as Natural Selection, but to reasonably have a probabilistically credible means of innovating the functional information systems of life. This, in a context where the known complexity of the sub-assemblies [DNA, proteins etc] is collectively, and sometimes even individually, well beyond the reach of random search on the scope of the observed universe.

Just the posts on this thread alone are sufficient to show that intelligent designers are capable of creating digital string-based FSCI, and the computers we are using are evidence that such agents are capable of creating the executing machinery and required algorithms and codes. Intelligence, sir, is the only empirically observed source of FSCI: functionally specific information requiring storage capacity beyond 500 - 1000 bits, as a practical description. So, it is very reasonable to abductively infer that the observed information systems in the cell credibly come from the same class of causal source as the much cruder ones we have invented over the past 70 or so years: intelligent design. GEM of TKI kairosfocus
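A two-line check of the figures quoted above (4 states per base, so 4^n configurations and 2n bits of storage capacity):

    from math import log10

    for n in (250, 500):
        print(f"{n} bases: 4^{n} = 10^{n * log10(4):.1f} configurations, {2 * n} bits of capacity")

This prints roughly 10^150.5 configurations for 250 bases and 10^301.0 for 500 bases, matching the range cited in the comment.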
PhilipBaxter: #30: You say: "I've read a few of your posts before and I suppose my thought is that you think it's either "all chance" or design. Yes, the probability of a 747 or a complex protein or gene sequence forming all at once is vastly improbable. So improbable that I think we'd all agree that it was impossible, UPB believers or no." OK, that's a good start. "Except I think you do your opponent a disservice, kairosfocus, by consistently misunderstanding that point." Maybe it's you who are misunderstanding? Please read the following point. "If you were to look at their point of view with an open mind, I believe you'd find that random chance has its place (which gene, how it will mutate is random) but the environment provides a very non-random filter." Believe me, we are really looking at their point of view with an open mind, and the result is always the same: it is completely wrong. Are you suggesting that we are not aware of the suggested role of NS in darwinian theory? Do you think we are completely stupid? See next point. "The environment selects and improbable structures can so be constructed over time and generations." That's the point, at last. And it's very simple. Let's see if you understand it.

The environment can select only what already exists. Are we OK with that? The environment is not an engine which generates variation or information. It's just a filter. And a blind filter, obviously. The environment has no idea of what it is selecting, or why. The only selection which can happen is based on the appearance of a new (or improved) function (which must be relevant enough to give a sufficient reproductive advantage, but let's not go into details here). If the function is not already there, it cannot be selected. So, we can calculate the minimum CSI increase necessary for the appearance of a new function in any specific model of transition, if and when darwinists provide at least one. If the increase in CSI is high enough (that is, improbable enough), that transition is simply unacceptable in a model based on random variation. You can obviously try to deconstruct that transition by showing that there are simpler intermediates which are selectable (that is, exhibit some selectable function and can be fixed). But you have to "do" that. Not just imagine it.

So, unless you can show that any existing protein function can be achieved through specific selectable intermediates, at the molecular level, darwinian theory is a complete failure. And it is. Because you cannot demonstrate that, because it's simply not true. Complex function is not deconstructable into a sum of simpler functions achievable with simple bits of information. That's really an urban myth of darwinism (one of the many). Please take notice that Behe has clearly shown in TEOE that, while single mutations are obviously within the range of possibility in all organisms, double coordinated mutations are exceedingly rare, and probably out of the range of what most organisms can achieve. But I want to be generous: I concede a "step" of 5 (five!) unguided coordinated mutations as "possible" (it is not, I know, but it's Christmas time, after all!). So, please show me any model which shows the possible achievement of a specific new function in a medium-length protein (let's say 200 amino acids) from a completely different protein through single functional selectable steps of 5 coordinated mutations. Then we can start talking about the role of NS. "Complex does not form instantly except with Intelligent design, remember?"
There is no reason to say that complex forms "instantly" with Intelligent Design. It may well take its time. But complex "does" form with Intelligent Design, and it "does not" form through random variation and NS. "Improbable comes from lots of somewhat less improbable." That's simply false. One improbable function is not the sum of lots of less improbable functions. Why should it be? "Yes, it's all improbable, but here we are." We certainly are here, and so? We are here because we are designed. "That's their point of view, right?" It certainly is. On that you are right. "Your improbability arguments, in my opinion, serve only to provide a verbal fog you can hide behind so you don't have to actually answer your critics on this point." And that kind of statement is a very good example of the arrogance, superficiality and inconsistency which reign in the darwinian field. gpuccio
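For what it is worth, the kind of back-of-envelope number gpuccio is gesturing at can be sketched as follows. Every figure below (per-site mutation rate, population size, number of generations) is an assumption chosen only for illustration, and the naive independent-sites model ignores everything a population geneticist would add (recombination, drift, partially beneficial intermediates), so this is a sketch of the shape of the argument rather than a result:

    mu = 1e-8      # assumed per-site, per-generation mutation rate
    k = 5          # coordinated specific mutations required in a single step
    pop = 1e9      # assumed population size
    gens = 1e6     # assumed number of generations

    p_individual = mu ** k                       # all k specific changes arising in one individual
    expected_events = p_individual * pop * gens  # expected number of such individuals overall
    print(f"P(per individual) = {p_individual:.1e}")
    print(f"expected occurrences = {expected_events:.1e}")

With these assumed numbers the expected count comes out around 1e-25, which is the sort of figure the "selectable intermediates" debate above turns on.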
PS: Oh well, neither the sigma on the LHS nor the phi on the RHS made it through to the thread. Sorry. The formula should read: σ = –log2[φ_S(T)·P(T|H)] PPS: GP, thanks for illuminating remarks, as always. kairosfocus
All, esp Philip and Prof PO: Interesting discussion overnight. Prof, kindly email me . . . and yes, we are grateful that there has been no hard hit here, though Cuba and Haiti were not so lucky. Sigh! Our old friend down south has been celebrating an early Christmas, too. No-warning pyroclastic flows that uncomfortably echo the 1902 St Pierre, Martinique case. (Bad news for the geothermal energy development effort.) [Thread owner, please pardon a bit of group bonding, which always helps on tone when diverse groups address contentious issues.]

Philip:

1] Examples of FSCI and our use of the filter

E.g. no 1: you took the post at 28 above as the product of an intelligent actor, not lucky noise, which strictly speaking could physically have generated it. It is a basic exercise to estimate the config space for 128-state ASCII text, to estimate the fraction that would be sense-making text more or less in English, and to compare the relative capacities of random searches with the empirically known capacities of intelligent agents. FSCI is a reliable, empirically anchored sign of intelligence. Your action also tells me that you, yourself, intuitively accept and routinely use the EF. So, you need to address the evident self-referential incoherence.

2] Quantification?

Cf Dembski's formula on p. 18 in the 2005 paper on Specification at his personal reference article site, as CSI is the superset of FSCI. Sigma on the LHS is CSI's metric in bits: σ = –log2[φ_S(T)·P(T|H)]. The paper gives details and simple examples. Overall, though, we have good reason -- tied to the foundations of the statistical form of the 2nd law of thermodynamics [cf my appendix 1, the always linked through my handle] -- to estimate config spaces and to see that we are dealing with deeply isolated islands of function, in a context that is near-free, esp for OOL, which is where it must begin. Cf GP's points on that and my simple discussion here. But kindly note, too, that the FSCI - CSI concept, as my always linked appendix 3 discusses, is NOT due to WmAD, but to Orgel et al from 1973 on, as they tried to understand the peculiarities of the molecular basis of life.

3] Chance/lucky noise, necessity, agency or other?

The trichotomy above was already immemorial in the days of Plato's The Laws, Book X [cf. my cite in my appendix 2]; so I think we can be fairly confident that it is a well established, long-tested analytical framework. "Law" relates to aspects of a phenomenon or object that show LOW OR LITTLE CONTINGENCY. Where there is significant freedom to vary outcomes from case to case, the contingency is, per massive observation (and, arguably, basic logic), either directed or undirected. The latter we call "chance," the former, "design." It would therefore be interesting indeed to see a credible case that there is a realistic fourth alternative -- not a dubiously vague promissory note. "Lucky noise" simply speaks to how hard it is for undirected contingency to get to deeply isolated islands of function, similar to how we simply do not fear that the O2 molecules in the rooms where we sit will rush to one end and so kill us. GEM of TKI kairosfocus
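As a toy evaluation of that metric (not Dembski's own worked example), take a 200-character stretch of 7-bit ASCII text under a uniform chance hypothesis and an assumed value of φ_S(T); the φ_S(T) figure below is invented for the illustration, and the 2005 paper explains how that quantity is actually estimated:

    from math import log2

    L = 200            # assumed length of the ASCII text, in characters
    log2_P = -7 * L    # log2 of P(T|H) under a uniform 128-symbol chance hypothesis
    phi_S = 1e20       # assumed descriptive-complexity count phi_S(T), for illustration only

    sigma = -(log2(phi_S) + log2_P)   # sigma = -log2[ phi_S(T) * P(T|H) ]
    print(f"sigma ~ {sigma:.0f} bits")   # about 1334 bits, versus the 500 - 1,000 bit threshold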
PhilipBaxter: I think your last posts betray some common misunderstandings. Maybe you are new to the discussion, so I will make some elementary comments: #29: You say: "Interesting stuff. Do you have a list, or could I give a few examples of objects and you could tell me the FSCI, or how you would go about putting a figure on it?" A list of FSCI in biological objects? Just start with all known functional proteins longer than, let's say, 120 amino acids (just to be safe). "Does genome size directly relate to FSCI? I presume humans, as the most advanced organism on the planet, also have the largest amount of FSCI? What units is FSCI measured in?"

FSCI can be measured. You must understand well what it is, anyway. It is a property defined for one "assembly" of information exhibiting a specifically defined function. So, the definition of the "assembly" and function is critical to the measurement of CSI. For instance, in the simple example of one protein, the protein itself is the pertinent assembly, and the protein function is the function (in many cases we can define a minimum level for the function). For complex machines, instead, like the flagellum, the "assembly" would be the flagellum itself, and the function its function. Once you have assigned those terms, FSCI can be measured as the complexity of the assembly (the ratio between the subset of functional targets and the whole set of possible sequences, expressed for instance as the negative logarithm). That will be the CSI of that assembly (in other words, its complexity). But the assembly must obviously display the function.

For many single proteins, with our current understanding of them, it is possible to make an approximate computation of CSI, expressed as a lower limit, making some reasonable assumptions. For instance, I have suggested that for the whole search space we can easily define a lower limit as the space of combinations of an amino acid sequence of the same length as the protein itself (so, for human myoglobin, that would be 20^154). It is more difficult to evaluate the size of the target set. There we have to make other kinds of considerations. One way of reasoning is to consider what we know from protein folding and protein engineering. I am confident that with the growing knowledge in those fields, we will soon be able to make reasonable approximations about that, and I believe that we can already assume easily that, however big the functional target space is, it can never be big enough that its ratio with the whole space would bring the probability of a protein such as myoglobin over the limit of the UPB. Another suggested approach is to consider the same protein in known species, like Durston, Chiu, Abel and Trevors have done, and to measure the complexity indirectly, considering the difference between the Shannon H of a particular protein (taking into account how much it varies in different species) and the same value in a truly random sequence of the same length. Thus, they measure functional information in a unit they have defined as the Fit (functional bit). That's a very interesting approach, and it shows that it "can" be measured. gpuccio
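A stripped-down version of the Durston-style calculation gpuccio describes would look something like the following. The four-sequence "alignment" is invented purely for illustration, and the published method adds refinements (null-state corrections, much larger alignments) that are omitted here:

    from math import log2
    from collections import Counter

    alignment = [
        "MKTAYIAK",
        "MKSAYIAR",
        "MKTAYVAK",
        "MKTAFIAK",
    ]

    def site_entropy(column):
        """Shannon entropy of the amino acids observed at one alignment site."""
        counts = Counter(column)
        total = len(column)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    H_ground = log2(20)   # entropy of a fully random site over the 20 amino acids
    fits = sum(H_ground - site_entropy(col) for col in zip(*alignment))
    print(f"functional bits (toy alignment): {fits:.1f}")

The functional information of a site is taken as the drop from the random-sequence entropy to the observed entropy, summed across sites; with a real alignment of many species the sum is what the paper reports in Fits.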
kairosfocus[28], There is our insular friend, finally! Welcome to the thread that bears my name. My comments are gone and that's just as well; they were probably mostly distracting to the current discussion, in which I will not participate. I hope all is well and that you escaped yet another hurricane season unscathed! Prof_P.Olofsson
PhilipBaxter, Hello, I was interested in your comment.
Is it a stark choice between “lucky noise” and “design” then?
Perhaps it is. After all, either there was intelligent input, or there was not. In any case, these are the two concepts that experience tells us must be in play. There could also be the unknown, but it’s fairly clear that each side already thinks they have a winner. Then again, the existence of neither of these excludes the existence of the other. So, the answer could also be that it’s a little of both, or a lot of one over the other. Perhaps it’s even that one works in one domain while the other works in another. How could anyone know any of these answers as long as only one answer is allowed? It’s a fair question. To your larger point, design proponents can easily draw from more fertile ground than just the improbability that inanimate particle matter may one day organize itself into living tissue full of molecular machinery driven by an encoded data stream, metabolizing energy and exhibiting a strong will to survive.
random chance has its place
Yes it does. But isn't it the case that observations also suggest chance had no role in the selection of nucleotides in the original replicating cell (perhaps 200-400 protein sequences at 300-1000 nucleotides per sequence, plus regulation, transcription, organization, replication, energy distribution, etc., all coming together within the lifespan of the first cell)? Chance is subject to the search space and availabilities, and to its inherent independence from any other nucleotides in the chain. These things don't just extend the probabilities argument; they describe a mechanism that's the polar opposite of what is needed to create a functional sequence: no order, no law, complete unity. Upright BiPed
“Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.” That's an interesting faith-based statement :-) Using the criteria of methodological naturalism, how can methodological naturalism avoid the infinite loop? tribune7
something about this thread made me think of this: "Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science." Khan
I believe you'd find that random chance has its place (which gene, how it will mutate is random) but the environment provides a very non-random filter. The environment selects and improbable structures can so be constructed over time and generations.

And some things are impossible for natural selection plus random genetic change to accomplish. What was the environment in which proteins self-organized into a flagellum? tribune7
CJYman [26]: I will read that when I get a chance. Thanks. JT
An apt illustration of this is the fact that lucky noise could in principle account for all the posts in this thread. Nothing in the physics or logic forbids that. But we all take it for granted that the posts are intelligent action.
Is it a stark choice between "lucky noise" and "design" then? I've read a few of your posts before, and I suppose my thought is that you think it's either "all chance" or design. Yes, the probability of a 747 or a complex protein or gene sequence forming all at once is vastly improbable – so improbable that I think we'd all agree it is impossible, UPB believers or not. But I think you do your opponent a disservice, kairosfocus, by consistently misunderstanding that point. If you were to look at their point of view with an open mind, I believe you'd find that random chance has its place (which gene, and how it will mutate, is random) but the environment provides a very non-random filter. The environment selects, and improbable structures can thus be constructed over time and generations. Complex does not form instantly except with Intelligent design, remember? Improbable comes from lots of somewhat less improbable. Yes, it's all improbable, but here we are. That's their point of view, right? Your improbability arguments, in my opinion, serve only to provide a verbal fog you can hide behind so you don't have to actually answer your critics on this point. PhilipBaxter
GEM
An empirically observable sign that points to intelligence.
Interesting stuff. Do you have a list, or could I give a few examples of objects and you could tell me the FSCI, or how you would go about putting a figure on it? Does genome size directly relate to FSCI? I presume humans, as the most advanced organism on the planet, also have the largest amount of FSCI? What units is FSCI measured in? PhilipBaxter
Patrick: Does this help? As you will recall, I have long noted that, say, a falling die tossed in a game illustrates how chance, necessity and intelligent action may all be at work in a situation, and how they are not simply reducible one to another. However, for purposes of analysis -- comparable to how we isolate signal from noise in comms work, or law from bias and error in a simple physics experiment -- we isolate aspects and address how they behave. Once we do so, we can see that:
1] If an aspect reflects low contingency, i.e. natural regularity, it is best explained as mechanical necessity that we describe in terms of a law. [E.g. unsupported heavy objects on earth accelerate downward at about 9.8 m/s^2.]

2] Where there is significant contingency in the key aspect in focus, we see from experience that it may be purposefully directed and controlled, or it may be more or less free up to some probability distribution, the most free case being a so-called flat distribution across the configuration space of outcomes.

3] The issue is to tell the difference to some reasonable degree of confidence.

4] When we see that something is complex [per the UPB, in practical terms: storage capacity for more than 500 - 1,000 bits of information], AND simply, purposefully or functionally specified, we have excellent reason to infer that the contingency is intelligently directed.
An apt illustration of this is the fact that lucky noise could in principle account for all the posts in this thread. Nothing in the physics or logic forbids that. But we all take it for granted that the posts are intelligent action. Why?

ANS: The textual information is functionally specified as contextually relevant text in English, and is complex well beyond 500 - 1,000 bits of information-carrying capacity. The odds of that happening by chance are so far beyond merely astronomical that it is more than reasonable to infer to intelligent action. So even the objectors to the EF are actually using it themselves, intuitively, even where they have not precisely calculated the probability distributions!

This, too, should bury the "false positive" argument. For the legitimate form of the layman's law of averages reflects that, for realistic samples, we are not at all likely ever to see by chance outcomes so deeply isolated in the config space that they are overwhelmed by far more common clusters of non-functional states. [This utterly dwarfs the proverbial challenge of finding a needle in a haystack at random.] BTW, this is also a view that in fact builds on the core idea in Fisherian elimination. Do objectors to such reasoning seriously expect that the oxygen molecules in the room in which they sit will spontaneously move to one end, leaving them asphyxiating? (The odds in view are comparable, and are rooted in the same basic considerations.)

So, I think it is fair and reasonable comment to say that FSCI (or its superset, CSI) is a reliable indicator of intelligent design. An empirically observable sign that points to intelligence. Thus, when we see such signs we have a right to infer that there was an intelligence that left them behind, even when we cannot specifically identify "whodunit"! GEM of TKI kairosfocus
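[For readers who prefer the filter spelled out, here is a minimal sketch of the decision logic kairosfocus describes, using the lower 500-bit end of the threshold he cites as the cutoff. It is an illustration of the flowchart, not anyone's official implementation.]

```python
def explanatory_filter(low_contingency: bool, info_bits: float, specified: bool) -> str:
    """Toy version of the explanatory filter described above.

    low_contingency: the aspect shows lawlike regularity (e.g. falling objects).
    info_bits:       information-carrying capacity of the contingent aspect.
    specified:       the outcome matches an independent functional specification.
    """
    if low_contingency:
        return "necessity (law)"
    if info_bits < 500 or not specified:   # 500 bits = lower end of the cited threshold
        return "chance"
    return "design"

# Example from the comment: a 1,000-character English post at ~7 bits per character
# carries ~7,000 bits of capacity and is functionally specified as meaningful text.
print(explanatory_filter(low_contingency=False, info_bits=7000, specified=True))  # -> "design"
```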
Joseph #22, I agree with what Rude said. All I'm trying to say is that the EF in its original form does not DIRECTLY reflect these realities. As I've already explained in #14 this does not make it wrong or useless--I even give examples of practical applications--but its description of reality is not accurate for SOME scenarios. Personally I think that Bill should not "dispense with it" but "update it" since these problems are fixable, although the resulting flowchart would probably be fairly complex compared to the original form. Patrick
Hello JT. I have also read through "Specifications: the Patterns which Signify Intelligence" and I have put my two cents into the discussion on my own blog. It is a little long, but I try to explain it from what I understand as best I can. If you wish to check out my perspective, go to http://cjyman.blogspot.com/2008/02/specifications-part-i-what-exactly-are.html CJYman
gpuccio [10], Mark Frank [12]: I responded to both of you yesterday, but those responses are now gone, along with some from Prof Olofsson and PaV (evidently all erased in whatever maintenance they were doing here). But very briefly:

gpuccio: The length of the Turing machine itself would be constant and very small, and so can be ignored. I have thought about the time issue myself (the time the process might take), but that is not considered in algorithmic information theory either.

Mark: I actually agree with you, and some of my previous posts were too vague. The length of the output string is irrelevant. I would say the probability of the output string under a uniform distribution is set by the length of the smallest program-plus-input that would generate it. JT
I think it would be great if Dembski worked all the kinks out and came up with something that blew everyone out of the water. (I use a lot of cliches too. I hate that.) JT
JT, great post. One minor point for now: you stated:
I quickly realized, however, why the design inference is never applied to macro biological objects.
I believe the reason is (a) to keep it simple and perhaps (b) unfamiliarity with macro-biology. I say (b) because, to me, the neuro-muscular system (wet electricity) is one of the best evidences for intelligent design. Gotta go... Joseph
Patrick: The problem is that the EF in its original binary flowchart form does not explain any of what you just said.
My bad. I was under the impression that first came the determination and then the investigation to get an explanation. Take, for example, Stonehenge: first we determined it was designed, and then, via years of research, an explanation was provided. But what do I know? I only have decades of investigation under my belt.

To Rude in comment 19: that is exactly what I am saying! Joseph
I'm still taking a look at the "Specification" paper. A couple of days ago I implied that the technical sections were just too much for me, but I must have looked kind of foolish to anyone who's actually looked at those sections, as they're really no big deal (at least not in this particular paper). It's just that when all the paragraphs start turning Greek I immediately start scanning ahead for a conclusive sentence, e.g. "So what can we conclude from all this? Namely the following..." (Maybe some of the following is commonly known already, and if so I apologize for covering old ground. It's possible I've perused critical reviews of Dembski's writings in the past without comprehending some of the objections made, but upon encountering the referenced passages myself now am suddenly able to understand what they were talking about. I don't know if that's the case with the following, though. I also apologize for writing in the first person so much.)

Whenever I'm reading Dembski, any momentary epiphany where I actually start to comprehend something he's saying is like a small victory, and in my optimism I start to think "Maybe this guy is actually on to something." But then a few paragraphs later all optimism is gone, as brand new misgivings emerge. I've had both types of reaction of late.

First some background: as I mentioned in a previous post, the way CSI works is that the less complex a detected pattern is, the more strongly it indicates design in Dembski's scheme. You can absolutely take my word on this (whether or not you've heard it before). This seemed ridiculous to me. However, I realized something recently that momentarily tempered my criticism: you can apply a CSI pattern to a macro object. The reason I didn't think about this previously is because of the example that's always used - the bacterial flagellum. But you could definitely look at a human being, validly apply the simple pattern "walks on two legs", and use that to infer design (in the Dembskian scheme). This seemed to dramatically increase the relevance of CSI.

I quickly realized, however, why the design inference is never applied to macro biological objects. (And here is where my optimism started to fade.) The reason is, you can immediately point to a known mechanism to account for such macro objects - epigenesis. The reason the bacterial flagellum is used so much as an example is presumably because the mechanism to account for its origination is not known. IOW, it is used repeatedly specifically for the purpose of bolstering an argument from ignorance (i.e. "we don't know exactly what mechanism accounts for the bacterial flagellum, so there must not be one"). You could not do this credibly with a pattern exhibited by a macro biological object.

However, there is an even more serious problem I discovered recently, related to the simplicity requirement for specifications (and this was my primary reason for writing this post): there is no objective basis for deciding which pattern to apply to an object. This is relevant because you can only rule out chance with a simple pattern, but any one of innumerable patterns, across a wide spectrum of complexity, could be validly applied to an object.

Some background: Dr. Dembski writes, "With specifications, the key to overturning chance is to keep the descriptive complexity of patterns low." For reference, consider that the definition of a specification (i.e. something not caused by chance) is any pattern where:

-log2[ 10^120 * fs(T) * P(T|H) ] > 1
where:
P(T|H) : the uniform probability of the entire bit string
fs(T) : the specificational resources of the target pattern T
(I don't know what the editor here will do with Greek characters, so I've taken them out.)

As fs(T) increases, the ability to rule out chance drastically and geometrically decreases. It's only with extremely simple patterns that you'll be able to rule out chance:

"For a[n]...example of specificational resources in action, imagine a dictionary of 100,000 (= 10^5) basic concepts. There are then 10^5 1-level concepts, 10^10 2-level concepts, 10^15 3-level concepts, and so on. If "bidirectional," "rotary," "motor-driven," and "propeller" are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form "bidirectional rotary motor-driven propeller." Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources characterizing the bacterial flagellum."

So with only a 4-level concept you have to plug 10^20 into the formula above. Do the math and figure out what the result will be if even a few additional terms are in the specification.

After discussing the specificational resources of the flagellum above, Dr. Dembski talks about poker hands and gives the example of "single pair" as a valid pattern. But why couldn't we use an augmented pattern for that hand, for example "single pair, both red", to describe any hand with a single pair where the two cards were diamonds and hearts? This is an independent pattern too. Only now the specificational resources are 10^20, whereas previously they were 10^10. So which is the correct value? The answer will have a drastic effect on our calculation to rule out chance.

Now consider the flagellum. We saw how with card hands there could be at least two valid patterns simultaneously, one more complex and descriptive than the other. So with flagella, why couldn't we drop "bidirectional" and use the following as a valid pattern: "rotary motor-driven propeller"? Now all of a sudden our specificational resources have dropped by a factor of 10^5. We could go in the other direction and form valid conditionally independent patterns of 10, 20, 50, 100, 1,000 words or more. Any such description would immediately throw us out of design (as design can only be shown in the Dembskian scheme with simple patterns). Is there only one type of propeller? Is anyone thinking you could not decompose "propeller" into an arbitrary number of more descriptive terms? What about "motor-driven"? Is that all that can conceivably be said to characterize the power source?

So once again, there is no objective basis for deciding which pattern to use, and the pattern we use dictates whether or not the entity could be the result of chance in Dembski's scheme.

As a final objection, consider another pattern: "blue water with small waves". Seems like a pattern to me. Does it indicate design? It seems all sorts of inanimate objects in nature could be interpreted as designed, either by being compressible as the result of a repeating structure (e.g. a repeating molecular structure), or merely because, with a general consensus, we could ascribe some observed pattern to them, e.g. "blue water with small waves". This seems so ridiculous that it seems I must be missing something, and if so I apologize. OTOH, maybe I'm late to the table and everyone here already understands all these things, and is just not talking about it. (In which case I apologize as well.)
Or maybe I'm full of it, and the Design Inference is of great value (and that would actually be fine with me). JT
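[To see the sensitivity JT is describing in numbers, here is a small sketch of the arithmetic behind the criterion, worked in log space so nothing overflows. The 10^5-word dictionary and the 10^(5n) count of n-level concepts come from the quoted passage; the probability values are placeholders.]

```python
from math import log2

LOG2_10 = log2(10)

def chi_bits(concept_level: int, log2_prob: float) -> float:
    """chi = -log2(10^120 * fs(T) * P(T|H)), with fs(T) taken as 10^(5 * level)."""
    log2_fs = concept_level * 5 * LOG2_10
    return -(120 * LOG2_10 + log2_fs + log2_prob)

# Example: a 4-level concept with a hypothetical P(T|H) = 2^-500 clears the bar.
print(f"chi for a 4-level concept at P(T|H)=2^-500: {chi_bits(4, -500):.1f} bits")

# How small must P(T|H) be before chi exceeds 1 (the design threshold)?
for level in (3, 4, 5, 10):
    required_bits = 1 + 120 * LOG2_10 + level * 5 * LOG2_10
    print(f"{level}-level concept: need P(T|H) below about 2^-{required_bits:.0f}")
```

[Each extra descriptive term raises the required improbability by about 16.6 bits (5 times log2 of 10), which is the point of the "which pattern do we pick?" objection above.]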
Uh, sorry, seems that Patrick pretty much said better in 14 what I tried to say in 19. Rude
Joseph (in 15), let me suggest, if I may, that design is never 100% design; that even if perfect design exists in the abstract, all of it that is instantiated in matter is subject to the vagaries of chance and the limits of necessity. A circle where pi goes to its infinite perfection may exist as a mathematical object, but never as a metal disk. Thus not only do chance and necessity take their toll in time, they're there from the outset. Maybe what was meant above (if anything) is that when the Explanatory Filter is applied to anything material it also has to allow for some chance and necessity. Thus the crystalline structure in the stone of a building is due to necessity, and the imperfections in your new automobile can be chalked up to chance (or negligence, if not malice). Rude
However, if a computer programme which amounts to "write 1 a hundred times" is only 10 bits long (one can imagine such a mechanism arising naturally), then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1's is actually 2^-10, and the assumption of a uniform pdf was very misleading.
True, but if the computer program is unknown how can we account for that? Patrick
#12 Mark Frank: I'll start with your last statement:
I think I must have missed something. This all seems so trivially obvious??
Indeed, I have understood your argument since your first message. Let's look at it.
Let's make it more concrete. Suppose the outcome is 100 1's. The probability of this outcome assuming a uniform pdf is 2^-100. However, if a computer programme which amounts to "write 1 a hundred times" is only 10 bits long (one can imagine such a mechanism arising naturally), then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1's is actually 2^-10, and the assumption of a uniform pdf was very misleading.
Your point is clear, but IMHO it's not pertinent to the DNA case. Here the whole DNA code is what is supposed to have arisen by natural processes. In other words, we do not have a restricted piece of DNA which deterministically produces the whole sequence. In my opinion, the only way to apply your argument to the DNA code would be if assembling the DNA double helix were strictly deterministic under some sort of chemical constraint. But this is *not* what happens, for the sequence of the nucleotides (A, C, G, T) is largely independent chemically. kairos
Joseph, I agree that in practice that's how everyone has been using the EF for many years. The problem is that the EF in its original binary flowchart form does not explain any of what you just said. I don't see the need to be defensive toward all criticism, especially if it's constructive. So the EF either needs to be updated (made more detailed) in order to reflect these realities or be discarded (at least in terms of usage in regards to scenarios where chance, necessity, and design are NOT mutually exclusive). Patrick
(1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive.
I strongly disagree.

1- Designers must take into consideration the laws that govern the physical world, i.e. necessity.
2- Designers also know that random effects will occur. Take a look at today's Stonehenge. I doubt anyone thinks it was designed and built in its current state. IOW, those random effects are taken into account. Chance is accounted for.

I always saw the EF as an accumulation:

1- To get by step one, necessity alone isn't enough to explain X, so we move on to step 2.
2- Necessity and chance together (as Bill writes in NFL) are not enough to explain X, so we move on to step 3.
3- Designing agencies working with the laws of nature can explain X. And X's current condition is the result of chance events. And chance could have also played a part in the design.

The "Ghost Hunters" on the SciFi channel use the EF. That is, first they try to explain X via regularity, and only after all "natural" processes have been exhausted do they say "ghost". And I doubt that Stonehenge was determined to be an artifact using CSI. To me, CSI would be a verifier of the EF. Joseph
I suppose I'll copy my last comment here, since the topic came up:
The new Dembski does not believe the filter works.
I wish Bill had taken the time to explain his comment. The only qualifier he added was "pretty much"...which does not explain his position adequately. But to say that "it does not work" or "it is a zombie" is a gross over-simplification. [Actually, after thinking about it, I'd call it a distortion.] Before he wrote it I had expressed via email my belief that the old formulation of the EF was too simplistic (which was also pointed out here). This is not to say that it does not work in practical applications, but that it's limited in its usefulness, since it implicitly rejects the possibility of some scenarios because "[i]t suggests that chance, necessity [law], and design are mutually exclusive." For example, the EF in its original binary flowchart form would conflict with the nature of GAs, which could be called a combination of chance, necessity, and design. [To clarify, I'm referring to scenarios which have a combination of these effects, not whether necessity equates to design or some nonsense like that.] In regard to biology, when the EF detects design why should it arbitrarily reject the potential for the limited involvement of chance and necessity? For example, in a front-loading scenario a trigger for object instantiation might be partially controlled by chance. Dog breeding might be called a combination of chance, necessity, and design as well. This does not mean the EF is "wrong", but that its description is not accurate for ALL scenarios. The current EF works quite well in regard to watermarks in biology, since I don't see how chance and necessity would be involved and thus they are in fact "mutually exclusive". [I'd add SETI as well, presuming they received something other than a simplistic signal.] Personally I believe that the EF as a flowchart could be reworked to take into account more complicated scenarios, and this is a project I've been pondering for quite a while. Whether Bill will bother to do this himself I don't know. Patrick
From 2 above: "There has been confusion over Dembski's point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event." As the nonspecialist here, I wonder why not just say that all design occurs against the backdrop of chance and necessity, just as a painting implies the backdrop of canvas and paint. If necessity equals the reality of mathematics and the laws it allows, maybe chance (even if the laws disallow it) equals context. Any act of design, i.e., the employment of free will, adds the ingredient of chance. Thus a coin toss, even if entirely predictable after the fact were every detail of context known, still hangs on the unpredictability of the agent's act. Rude
JT [8]: The issue is: "How reasonable is it to assume a uniform pdf when calculating the probability of an outcome arising by chance?"

"How can the complete cause for some thing be more probable than the thing itself? Once you have the complete cause for the thing, the thing itself occurs."

Of course the result can't be less probable than the cause if it always follows the cause. But that just goes to show that the result (the 100-bit string) was not the result of a uniform pdf. The presence of the cause means the supposed pdf was wrong.

Let's make it more concrete. Suppose the outcome is 100 1's. The probability of this outcome assuming a uniform pdf is 2^-100. However, if a computer programme which amounts to "write 1 a hundred times" is only 10 bits long (one can imagine such a mechanism arising naturally), then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1's is actually 2^-10, and the assumption of a uniform pdf was very misleading.

I think I must have missed something. This all seems so trivially obvious?? Mark Frank
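[A quick numerical illustration of the asymmetry Mark Frank describes. This is a toy sketch: the all-1s target and the specific 10-bit pattern standing in for the generating program are arbitrary choices.]

```python
import random

# Exact probabilities under a uniform, independent-bit model.
p_direct  = 2.0 ** -100   # hit the specific 100-bit output (all 1s) by chance
p_program = 2.0 ** -10    # hit the specific 10-bit "program" that expands to it

print(f"direct output:    {p_direct:.3e}")
print(f"10-bit generator: {p_program:.3e}")

# Monte Carlo check on the 10-bit side: about 1,000,000 / 1,024 ~ 977 expected hits.
# The 100-bit target is, for all practical purposes, never hit by sampling.
target10 = 0b1111111111
trials = 1_000_000
hits = sum(1 for _ in range(trials) if random.getrandbits(10) == target10)
print(f"hits on the 10-bit target in {trials:,} draws: {hits}")
```

[Whether anything like that short generator is actually available is, of course, the very point in dispute in the surrounding replies.]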
Barry, thank you for the thread; Sal Gal, thank you for the link. There has been confusion over Dembski’s point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event. I agree. tribune7
JT: I am always thinking in terms of the real thing: biological information. In the cell there is not, as far as we know, any computer which can calculate a compressible sequence and output it. First of all, the protein gene sequences are not compressible, and second, there is no computer there. Even in the abstract example, you don't need only the input program; you need some Turing machine too, into which the input program can be fed. So the real probability of the compressible string arising by chance in a purely random environment through a compressed input program has to take into account the probability of both the Turing machine and the program arising by chance and working to produce the required output. I am not sure that probability is higher than the probability of the whole string arising by random variation alone. Perhaps, for very long strings, that would be the case. But again, that has no relevance to the biological issue. gpuccio
gpuccio - It's strange, because we start out saying the same thing and end up saying the exact opposite. We both agree the probability is 2 to the minus some length; you say that length is the output length, I say it's the program-plus-input length. JT
Mark Frank wrote: "Let's assume that in the case of the bit string it means each bit is equally likely to be 1 or 0 and the probabilities are independent. Then the probability of a particular bit string of length 100 is 2^-100. If that bit string can be generated by a program of length 10 then the probability of that generating bit string is 2^-10 - which is much greater."

How can the complete cause for some thing be more probable than the thing itself? Once you have the complete cause for the thing, the thing itself occurs. (Not to inundate you with technical jargon.) I would suppose that the probability of the output bit string would be 2^-10, except you have to consider the length of the input to the program as well. So it's the length of the smallest program-plus-input that's relevant.

So assume you have some active process f that came into existence by chance or has always existed for no apparent reason. This process acts on something else that also came into existence by chance, call it x, and the output of f acting on x is y. How can the probability of y be less than the probability of f(x)? So no matter how long y is in bits, its probability can't be less than the probability of f(x).

I did just read the following in the Dembski paper: "define p = P(T|H) as the probability for the chance formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure)." So maybe what I'm saying is obvious to him and everyone else (though I'm not sure). Sal Gal is the one here who is apparently expert in algorithmic information theory. JT
Mark Frank: The "uniform probability distribution function" refers to the distribution over all the possible forms of the whole sequence. So, for instance, for a 100-bit sequence the probability distribution always refers to a search space of 2^100 different sequences: if we assume a uniform distribution, each sequence will have the same probability (1 in 2^100). If the distribution is not uniform, some sequences will be more likely, others less likely. Let's remember that the total probability is always 1, and that the number of sequences is always 2^100. Obviously, some sequences could have probability 0, depending on the constraints of the system.

Your observation about the compressible sequence is not correct: the fact that the sequence has a lower compressed informational content does not mean that it has a higher probability in a random system. It remains equally improbable. It just means that, if you have a computer and the correct algorithm, which is certainly shorter than the string itself, then you can generate the string without detailing each single bit. But in a random system the string will always have a probability of 1 in 2^100, if the distribution is uniform.

Let's bring all that to the biological field. In DNA, you have a four-letter alphabet (the four nucleotides). In proteins, you have a 20-letter alphabet, which is linked to the DNA alphabet by the genetic code. Is the distribution of, say, a 100-nucleotide sequence uniform? Is the probability of each sequence 1 in 4^100? I believe it practically is. If you build a DNA strand randomly, the distribution will be uniform or quasi-uniform. As we are dealing with a complex biological system of synthesis, the true empirical distribution can obviously vary according to the specific system. For instance, if there is different availability of the four nucleotides, some sequences will be more likely, others less. And there can be other factors which favor some nucleotides in a real biological system. So yes, in a real system the empirical distribution will not be perfectly uniform. For proteins, I have already noticed that, because the genetic code is asymmetric, some amino acids have a higher probability of being represented in a random protein than others. And amino acids are present in different concentrations in the cell. So, again, the distribution is certainly not completely uniform.

Has all that any relevance to our problem, which is the nature of biological information? Practically not. Why? Because what we are interested in here is the probability distribution of functional protein (or DNA) sequences vs all non-functional sequences of the search space. It is obvious that any asymmetries in the theoretical uniform distribution can have no correlation with the general space of functional proteins. That should be evident, because there is no relationship between the constraints which make a protein functional (folding, active site, etc.) and the constraints which may influence the distribution of random sequences (availability of the elementary components, biological characteristics of the environment, etc.). They are obviously totally unrelated, unless you are a theistic evolutionist of the most desperate kind... So a specific non-uniform probability distribution can certainly favor one specific functional protein, by mere chance, but it will have completely different effects on other functional proteins.
And we have a huge number of functional proteins in nature, very different one from the other, organized in very different families and superfamilies, and whose primary sequences and tertiary structures are completely different. Therefore, it is obvious that any deviation from the uniform distribution will have totally random effects on the space of functional proteins with respect to the general space of all possible proteins. Indeed, it is rather obvious that if the restraints imposed on the system, which may cause a non-uniform distribution, are too strong, the system will no longer be flexible enough to express all functional sequences, even in a design context. In other words, let's suppose that the designer needs, for functional reasons, a specific protein where 50% tryptophan is required to achieve the functional sequence, and that the physical constraints of the system make that kind of sequence not only improbable but impossible. Then even the designer cannot achieve that specific result. And if the designer is using, as a tool in engineering his proteins, some partially random variation (as modern protein engineers do, as well as the immune system), then if the probability of the target sequence is too low, even if not zero, that result will just the same be out of the designer's power. In other words, a random system must be flexible enough (that is, behave according to a sufficiently uniform probability distribution) to be used as an instrument to generate functional information, even in the hands of an intelligent designer. gpuccio
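[gpuccio's remark about the asymmetry of the genetic code can be checked directly: under a uniform draw over the 61 sense codons of the standard code, amino acids encoded by more codons are proportionally more likely. A minimal sketch follows; the codon counts are the standard table, and nothing else is assumed.]

```python
# Number of codons per amino acid in the standard genetic code (61 sense codons).
codon_counts = {
    "Ala": 4, "Arg": 6, "Asn": 2, "Asp": 2, "Cys": 2, "Gln": 2, "Glu": 2,
    "Gly": 4, "His": 2, "Ile": 3, "Leu": 6, "Lys": 2, "Met": 1, "Phe": 2,
    "Pro": 4, "Ser": 6, "Thr": 4, "Trp": 1, "Tyr": 2, "Val": 4,
}
total = sum(codon_counts.values())   # 61

# Probability of each amino acid at one position of a "random" protein,
# assuming every sense codon is equally likely.
probs = {aa: n / total for aa, n in codon_counts.items()}
print(f"Leu: {probs['Leu']:.3f}  Trp: {probs['Trp']:.3f}  ratio: {probs['Leu']/probs['Trp']:.0f}x")
```

[So the per-residue distribution is indeed not flat (leucine comes out six times as likely as tryptophan under this model). The comment's point is that any such bias is uncorrelated with which sequences happen to be functional.]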
Sal Gal: I appreciate your perfectly balanced comments on Dembski's post. He is obviously not refuting the EF, but rather "dispensing" with it in favor of a more advanced approach to the matter. Obviously, CSI remains for him the most important concept, as can be clearly seen in his points 2 to 5. I am in whole agreement with him on all those points, and am looking forward to any new development of his thought. gpuccio
JT wrote this (which I repeat to avoid switching back to the other thread):

"The objection is that Dembski's calculations establish the probability of a bacterial flagellum being thrown together at a point in time, such that molecules randomly floating around just by happenstance one day converged into the configuration of a bacterial flagellum, where no type of organism existed before. But what if preexisting conditions in physical reality favored the formation of at least certain key attributes of a bacterial flagellum at a rate that was much higher than blind chance? The argument goes that Dembski's arguments do not address this, and the probability he calculated could be too low as a result. Well, what I was saying was, suppose those preexisting conditions are such that they directly account for the formation of every key attribute of a bacterial flagellum. IOW, let's just take it for granted that some identifiable physical process alone, sans ID, can completely account for the production of a bacterial flagellum from nothing. So the probability of getting a bacterial flagellum is equal to 1, and we have this physical process that preceded it to account for it, but now we can't account for the origin of that physical process. Well, what I'm saying is the probability of getting that physical process by uniform chance cannot be greater than the probability of getting a bacterial flagellum by uniform chance. Even if this physical process itself was directly caused by something that preceded it, you will eventually have to hit a point of origin, where nothing preceded it but blind chance or something else, and the probability of that point of origin for bacterial flagella occurring by uniform chance cannot be greater than the probability of a bacterial flagellum itself occurring by uniform chance. It should be obvious why the cause of a bacterial flagellum cannot be more likely to occur than the flagellum itself. Just thinking about this statement for a few seconds should explain why. But to expand on this: in algorithmic information theory, the probability of a particular binary string C (e.g. 100011001...) is equal to the probability of the smallest program-input that will generate C as output. So that's why the probability of a cause for a bacterial flagellum is equal to the probability of a flagellum itself."

It is an interesting idea, but I don't think it works. A "uniform probability distribution function" is not fully defined until you specify what it is uniform across. Let's assume that in the case of the bit string it means each bit is equally likely to be 1 or 0 and the probabilities are independent. Then the probability of a particular bit string of length 100 is 0.5^100. If that bit string can be generated by a program of length 10, then the probability of that generating bit string is 0.5^10 - which is much greater. What am I missing? Mark Frank
There has been confusion over Dembski's point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event. Sal Gal
There was an extraordinary clarification by Bill Dembski in the other thread. I'd like to start by thanking him for setting matters straight. Here is his comment, for easy reference.
I wish I had time to respond adequately to this thread, but I’ve got a book to deliver to my publisher January 1 — so I don’t. Briefly: (1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection. (2) The challenge for determining whether a biological structure exhibits CSI is to find one that’s simple enough on which the probability calculation can be convincingly performed but complex enough so that it does indeed exhibit CSI. The example in NFL ch. 5 doesn’t fit the bill. The example from Doug Axe in ch. 7 of THE DESIGN OF LIFE (www.thedesignoflife.net) is much stronger. (3) As for the applicability of CSI to biology, see the chapter on “assertibility” in my book THE DESIGN REVOLUTION. (4) For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence” at http://www.designinference.com. (5) There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).
Sal Gal
