Uncommon Descent Serving The Intelligent Design Community

Some Thanks for Professor Olofsson II


The original Professor Olofsson post now has 340 comments on it, and is loading very slowly.  Further comments to that post should be made here.

Comments
I must apologise to the administrator for posting this under a second account. I neglected to note down my password, and cannot access my work email from home to recover it. "Interesting post. But I have to disagree with you on some crucial points." First of all, I wanted to thank you for responding so thoroughly. Secondly, as I anticipated, my inexpert use of terminology has proved to be an obstacle to communication. I'll try to take on board your corrections and re-express my argument. That said, I think you are accustomed to dealing with arguments of the form "ID theory fails because you are wrong about X, Y and Z", whereas my contention is that ID theory fails if you are right. It might be helpful to bear that in mind. So, to recap: in ID theory as applied to biological systems, conventional undirected evolution is represented by a random search, whose behaviour is approximated via a uniform distribution. The use of a uniform distribution, and the scale of the search space, is justified thus: 1. All known lawful biases are, on average, neutral with respect to fitness or function, and supposing otherwise is the preserve of theological evolutionism. I agree with this assessment completely. 2. Although we may speculate there are fixable intermediate configurations that could reduce the burden upon the random search, it is unscientific to presume their existence in the absence of evidence. I also agree with this assessment. Indeed, my agreement with these points is crucial to the arguments that follow. I think now would be an appropriate time to bring in a quote from your response: "Design utilizes other tools, which are not random. You cite some of them, which bring to restrictions of the search space, but we could sum them up in a very fundamental sentence: design utilizes understanding and purpose."
So if I tell you that the 400 digit binary number I thought of earlier is the encryption key for my wireless router, does your understanding of its purpose help you make a better next guess? 32, by the way, was wrong. Allegedly intelligence can create any amount of CSI, and yet here you are unable to produce a meagre 400 bits of it! Here's why: the 'tools' employed by design to restrict the search space are properties of the problem, not of the intelligence trying to solve it. Consider, by way of example, the following (somewhat whimsical) scenario: One day, minding your own business, you happen to observe a cat, sat on a mat. Amazed, you decide to communicate this information to your two friends, Alan from next door and Zordar from the planet Clog. To Alan you say "the cat sat on the mat", one of 30^22 or so possible sentences of that length, apparently demonstrating your ability to create CSI at will. Next, you turn to Zordar. You know Zordar would be interested to hear about the cat, and you know it's possible to express the information in Cloggian. But there's a problem: the Cloggian language is not broken down into building blocks of meaning. There is no consistent word for 'the' or 'cat' or 'sat' or 'mat'. There is definitely a sequence of characters that means 'the cat sat on the mat' in Cloggian, but even the most encyclopaedic knowledge of other Cloggian sentences gives you no clue as to what it might be. Assuming that you know the sentence you need is 22 characters long, is your intelligence of any help in deducing what you should say? No. Cloggian language, like the binary key for my wireless router, is - to borrow a supremely apt phrase - irreducibly complex. So it seems you can't create CSI at will after all! This is relevant for two reasons. First of all, ID theory supposes that biological systems contain CSI, and that intelligence can create CSI. Therefore, intelligence can create biological systems. 
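Peeling's wireless-key challenge can be checked numerically. The sketch below is my own illustration, not part of the original exchange, and it uses a toy 12-bit keyspace because 2^400 is far beyond simulation: when the key is uniformly random, any non-repeating guessing strategy needs about (2^n + 1)/2 attempts on average, and knowing the key's purpose does not change that.

```python
import random

def guesses_needed(bits, rng):
    """Attempts a non-repeating guesser needs to hit a uniformly random key."""
    n = 2 ** bits
    key = rng.randrange(n)     # the guesser may know the key's purpose...
    order = list(range(n))     # ...but any strategy is still just some
    rng.shuffle(order)         # ordering of the whole keyspace
    return order.index(key) + 1

# Toy 12-bit keyspace; average over many runs.
rng = random.Random(0)
trials = 2000
mean = sum(guesses_needed(12, rng) for _ in range(trials)) / trials
print(mean)  # close to (2**12 + 1) / 2 = 2048.5, whatever ordering is used
```

Because the key is uniform over the keyspace, its position in any fixed guessing order is also uniform, which is why no amount of cleverness in choosing the order helps.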
But we've just shown, without much effort at all, that intelligence cannot always produce the CSI necessary to solve a problem. So the bedrock assumption of ID, that CSI is an indicator of intelligence at work - the more CSI the better, even - is unsafe. Secondly, you spent quite a bit of your response to my previous post assuring me that the genetic language of biological organisms is irreducibly complex: there are no higher-level building blocks of meaning, no 'words' as such: a protein or complex of proteins 'means' what it does as a whole. You've patiently explained to me that trying to find the right protein to accomplish a task is like trying to tell Zordar about the cat - in other words, intelligence and prior experience are of no help when you try to do it. When we look at a complex biological system, then, the situation is not as ID theory would presently have you believe. Design is not automatically the eager child with its hand in the air, saying 'Me sir!' On the contrary: just as NDE advocates should substantiate their claim that the origination of a given biological structure is plausible via evolutionary mechanisms, so - in each and every individual case - the design hypothesis depends upon its advocates' ability to demonstrate how the structure could have been designed, step by step.
Peeling2
December 18, 2008 at 05:55 PM
Peeling: Interesting post. But I have to disagree with you on some crucial points. You say: "In measuring the CSI of a forum post, once again intelligence is pitted against the uniform distribution." No, intelligence is compared to a "random search". I think that you, like many, confuse the definition of a random event with the definition of a probability distribution. The fact is that the causal engine in darwinian theory is, by definition, a process of random variation. It is not so important to know exactly which probability distribution we apply. We assume the uniform distribution because it is the most reasonable approximation. But, whatever distribution we assume, the search remains a random search. As I have argued many times, differences in the probability distribution cannot significantly influence the results, unless the distribution is in some way connected to the function which has to be found. And believe me, there is really no reason in the universe why a probability distribution applicable to a random variation should in any way favor functional proteins. Unless, as I have already said, you are a die-hard theological evolutionist, and believe in a super-super intelligent fine tuning of biochemical laws. So, darwinian evolution postulates randomness as the only engine of variation. But that is not true of intelligent design. Design utilizes other tools, which are not random. You cite some of them, which bring to restrictions of the search space, but we could sum them up in a very fundamental sentence: design utilizes understanding and purpose.
You say: "if we cannot demonstrate any lawful bias toward the origination of that protein, any stepwise process by which it might be attained, or quantify the degree to which it is a non-unique solution, we are justified in pitting a uniform probability distribution against a supposedly known quantity, intelligent design, and seeing which wins" Again, I am afraid there is some sonfusion in what you say. One thing are problems about the appropriate random distribution to assume: as I have already said, there may be all possible deviations from uniform distribution (some aminoacids can occur more frequently, some mutations can be easier than others), but none of these things can help in selecting complex functional sequences, and by the way a lot of complex functional sequences of different structure and function. Another thing, instead, is the problem of a possible stepwise process: that is aimed to make NS possible. In other words, if you can show a stepwise process through which a protein can be built, where each step is probabilistically accessible (say one or two aminoacid mutations), and each step is selectable (giving a definite function advantage), then you could argue that NS can act on each step, fixing it, and the probabilistical impossibilities could be solved. But unfortunately (for the darwinists) that is not the case. A new function cannot be obtained by a different pre-existing function bt stepwise functional mutations. In other words, complex functions are not "deconstructable" into a sum of simple stepwise incresing functions. Function derives from the general complex organization, and is not the accumulation of smaller functions. So, even NS cannot help. The probabilisitcal impossibilities remain. You say: "A character set of maybe 30 letters, spaces and punctuation and a modest post length of maybe 1000 characters yields a search space for the chance hypothesis of 30^1000. A clear win for intelligence, surely?" 
Yes, a clear win for intelligence against a purely random search. "However, for that reasoning to be valid we must be able to demonstrate that intelligence is capable of overcoming the odds in question." That's exactly the point: intelligence has nothing to do with the odds. Intelligence does not work by random search (even if sometimes it can use a partially random search in specific contexts: see intelligent protein engineering, for example). You say: "If we cannot do that then it cannot be our best guess, and we are forced to presume some other explanation - be it an unknown law or a stepwise process we haven’t spotted - fills the gap." Wrong: intelligence does not have to beat any odds, as I have already said. Intelligence just has to be able to generate CSI. And it does. You say: "But could intelligence beat those odds in a fair fight?" As already said, intelligence has to beat no odds. And what do you mean by "a fair fight"? The fight is obviously not fair. Intelligence has tools which a random search can never have. Understanding of meaning, purpose, and so on. You say: "To arrive at our uniform distribution we once again supposed a vast search space with no lawful bias, no iteration, no partial success and a single solution." Here you make a lot of confusion. First of all, again, here the problem is not the distribution. A uniform distribution in the search space of a protein just means that all sequences have a similar probability to occur. Even if the distribution is not uniform, it is impossible that any probabilistic distribution appropriate for biochemical events may favor functional sequences over random sequences: that would really be a "miracle". Lawful biases, in the sense of asymmetries in the probabilistic distribution, may certainly exist: I have cited elsewhere the asymmetry of the genetic code, for instance. But that would not make functional proteins, in general, more likely. What do you mean by "no iteration"? I don't understand.
"No partial success" is simply wrong. Partial successes are certainly admitted in a darwinian search, but, as I said before, they must bring an increase in function to be "recognizable" by NS. Otherwise, NS is blind. In design, instead, the designer can certainly recognize many partial successes which still imply no function: for instance, a designer can recognize that the result is nearer to the desired structure, even id the functional structure has not yet been achieved. That's one of the main differences between a blind selector and a designer: again, the designer has understanding, and a definite target (purpose). Finally, why do you say "a single solution"? We are well aware that the solution is not single. That's why I always speak of a "target set" of functional solutions (for a defined function). the rate of the target space to the search space is exactly th probability of the target space. So, there is no reason that the solution should be "single". You say: "No! We granted intelligence a lexicon and a set of grammatical laws, collapsing the search space to a tiny fraction of its size before a word was written" No! Intelligence created the lexicon and the set. You are only saying a very trivial truth, that intelligence uses intelligent tools. It can do that, because it has understanding and purpose. "We also granted intelligence the freedom to iterate, to devise partial solutions and refine them based on feedback, gradually zipping together its words with the problem it was trying to solve." Again, "we" (whoever you mean with "we") granted intelligence the right to be intelligent. "Nor did we insist the solution be unique: countless different posts would convey meaning similar enough to overlap in the minds of readers." Nor do I insist the solution be unique in the case of proteins. Similar, and sometimes very different, proteins overlap in biological function. That is not a problem. 
The final probability is calculated for the whole functional target space. "Also note the non-catastrophic typos in the quote above." Also note the many non-catastrophic mutations we well know of. "In short, in measuring CSI the EF has detected something intelligence never did." Always the same error: intelligence can generate CSI. It does that with intelligent means. It is not supposed to do that by a random search. That would be impossible. "In creating one of these forum posts, intelligence didn’t find the needle in a 30^1000 haystack." Yes, it did. But obviously not by a random search. "It’s a false positive" Absolutely not. It's a true positive. The EF concluded that that result could not be obtained by a random search, and that intelligence was needed. That's exactly what happened. "For it to have been a fair contest between the uniform distribution and intelligence" Again, it does not have to be a fair contest. And the contest is not between a distribution and intelligence, but between a random search, with whatever probability distribution, and intelligence. "For it to have been a fair contest between the uniform distribution and intelligence, intelligence would have had to construct that post with no lexicon, no knowledge of grammar, and without reading any of the preceding thread." Why? Intelligence is not a blind watchmaker. It is a seeing watchmaker, and possibly an expert one. Do you expect watchmakers to improvise their art? "That may at first blush seem absurd - after all, we know for a fact intelligent human beings really do possess the search-space-slashing tools we attributed to them: a lexicon and knowledge of grammar." Indeed, it is absurd. And indeed, we know that for a fact. "But here’s the rub: do we know that about the intelligence supposedly behind our 200bp protein?" You bet. After all the designer is our assumption, and we are not postulating a blind designer, or an idiot or incompetent one.
We are postulating an "intelligent" designer, in case you have not noticed. "The answer is no, we don’t. In assuming an intelligence could be responsible for constructing that protein, we assume it possesses tools for collapsing the search space in which it is hiding - yet we have no idea what those tools might be." What about an understanding of physical and biochemical laws, of the laws of protein folding (which we still don't understand well) and of the laws of protein function? Plus a very good understanding of the general plan of a living being? In other words, we are simply assuming that the designer understands the things he is designing. "So the question is: why is it ok to justify the design hypothesis based on the idea that there might be tools enabling an intelligence to abstract the problem, simplify it and beat the odds," It is OK because a designer is intelligent, and if he needs tools he can create them, and does not have to beat any odds, because he is not working by random search, but by understanding and purpose. "but not ok to say that there might be an iterative process of partial solutions, or a lawful bias, that make a naturalistic origin equally plausible?" It is not OK because the things you suggest make no sense. Please show how an "iterative process" of "partial solutions" (not functional), or any reasonable bias on the probability distribution of protein sequences, could explain the emergence of functional proteins by a random search engine. You simply cannot do that. That's why it's not OK. "Why is design our best guess?" Because of all the above reasons. "As intelligent human beings, I invite you to solve a problem genuinely equivalent to the one ID theory claims is posed by a 200bp protein: I have picked a number between 1 and 2^400, and all you have to do is guess what it is." The only intelligent guess I can make is that you still don't understand the difference between an intelligent process and a random guess.
But, just for the game, I will try: was it 32?
gpuccio
December 17, 2008 at 08:59 AM
(continued from above) I'd like to begin by tackling the assumption that we know intelligence is capable of yielding the biological structures in question. For our purposes I equate intelligence with foresightedness: the ability to abstractly model the problem and select a solution. That is my understanding of 'design': the facility to know in advance whether a given option represents a solution to a problem (I apologise if this is kindergarten-level stuff; I just want to lay things out as clearly as I can so that any errors I make are explicit rather than concealed). At this point I'd like to borrow a scenario similar to those proposed in this thread as being indicative of design: a common-or-garden 200bp protein. According to the arguments quoted in my preamble, if we cannot demonstrate any lawful bias toward the origination of that protein, any stepwise process by which it might be attained, or quantify the degree to which it is a non-unique solution, we are justified in pitting a uniform probability distribution against a supposedly known quantity, intelligent design, and seeing which wins. We may not be right, but it is our best guess given what we know, and best guesses are what science is about. Correct? However, for that reasoning to be valid we must be able to demonstrate that intelligence is capable of overcoming the odds in question. If we cannot do that then it cannot be our best guess, and we are forced to presume some other explanation - be it an unknown law or a stepwise process we haven't spotted - fills the gap. Recall the assumptions we used to justify our application of the uniform probability distribution: no stepwise process via proteins of partial utility or alternative function, no lawful bias, and the solution found is utterly unique: a needle in a 1 in 2^400 haystack. So how does intelligence - the intelligence for which we have evidence, at any rate - find solutions?
Answer: by abstracting the problem, selecting a solution to the abstraction, and then realising it. Except... according to our assumptions, abstracting the problem in a simpler form isn't possible. There is only one, extraordinarily precise solution, and if our hypothetical intelligence, in its musings, misses it by the smallest margin it achieves exactly nothing, and gets zero feedback as to how close it got. What, then, of the alleged ability of human intelligence to create CSI? I will again borrow an example presented here: "An apt illustration of tis is the fact that lucky noise could in principle account for all teh posts inthis thread. Nothing in the physics or lofgic forbids that. but, we all take it dfor granted that the posts are intelligent action. So, even the objectors to the EF are actually using it themselves, intuitively." The claim here is twofold: first that we intuitively mimic the use of the EF to identify these posts as the product of an intelligent mind, and second that measurements of CSI, when applied to something like a forum post, positively identify intelligence at work. Neither claim bears up under scrutiny, but it is the second that causes ID theory real trouble. In measuring the CSI of a forum post, once again intelligence is pitted against the uniform distribution. A character set of maybe 30 letters, spaces and punctuation and a modest post length of maybe 1000 characters yields a search space for the chance hypothesis of 30^1000. A clear win for intelligence, surely? But could intelligence beat those odds in a fair fight? To arrive at our uniform distribution we once again supposed a vast search space with no lawful bias, no iteration, no partial success and a single solution. Is that the same problem we posed intelligence when assessing whether or not it could have produced the post? No! We granted intelligence a lexicon and a set of grammatical laws, collapsing the search space to a tiny fraction of its size before a word was written.
We also granted intelligence the freedom to iterate, to devise partial solutions and refine them based on feedback, gradually zipping together its words with the problem it was trying to solve. Nor did we insist the solution be unique: countless different posts would convey meaning similar enough to overlap in the minds of readers. Also note the non-catastrophic typos in the quote above. In short, in measuring CSI the EF has detected something intelligence never did. In creating one of these forum posts, intelligence didn't find the needle in a 30^1000 haystack. It's a false positive. For it to have been a fair contest between the uniform distribution and intelligence, intelligence would have had to construct that post with no lexicon, no knowledge of grammar, and without reading any of the preceding thread. That may at first blush seem absurd - after all, we know for a fact intelligent human beings really do possess the search-space-slashing tools we attributed to them: a lexicon and knowledge of grammar. But here's the rub: do we know that about the intelligence supposedly behind our 200bp protein? The answer is no, we don't. In assuming an intelligence could be responsible for constructing that protein, we assume it possesses tools for collapsing the search space in which it is hiding - yet we have no idea what those tools might be. On the contrary, we resort to brute force (folding@home). So the question is: why is it ok to justify the design hypothesis based on the idea that there might be tools enabling an intelligence to abstract the problem, simplify it and beat the odds, but not ok to say that there might be an iterative process of partial solutions, or a lawful bias, that make a naturalistic origin equally plausible?
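The gap Peeling describes between the problem posed to the uniform distribution and the problem intelligence actually solves can be made concrete with a toy search. The sketch below is my own Dawkins-style "cumulative selection" illustration (hypothetical fitness = characters matched; it models nothing biological, and whether it applies to proteins is exactly what gpuccio disputes): once partial solutions earn feedback, a 27^22-sized search space collapses to a short iterative process.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "the cat sat on the mat"

def score(s, target):
    """Partial-success feedback: number of characters already correct."""
    return sum(a == b for a, b in zip(s, target))

def cumulative_search(target, pop=100, mut=0.05, seed=0):
    """Keep the best of `pop` mutants each generation (with elitism)."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while current != target:
        mutants = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut else c for c in s)
            for s in [current] * pop
        ]
        current = max(mutants + [current], key=lambda s: score(s, target))
        generations += 1
    return generations

# Typically converges in a few hundred generations, versus on the order of
# 27**22 (about 3e31) expected tries for blind single-shot guessing.
print(cumulative_search(TARGET))
```

The point of the contrast is only that the feedback assumption does nearly all the work: remove `score` and the search reverts to the astronomical blind case.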
As intelligent human beings, I invite you to solve a problem genuinely equivalent to the one ID theory claims is posed by a 200bp protein: I have picked a number between 1 and 2^400, and all you have to do is guess what it is.
Peeling
December 17, 2008 at 06:27 AM
Hi all, I'm new here, and a frightful layman to boot, so do be gentle. I find this topic a fascinating one, and this looks to be the sort of place to delve deeper. I've tried to read the entirety of this thread (and its precursor) and several recurring themes stood out, exemplified to my mind by these quotes: "Intelligence, sir, is the only empirically observed source of FSCI: functionally specific information requiring storage capacity beyond 500 - 1000 bits, as a practical description." "You think it wiser to presume an unknown law is actively at work [than to presume that no such laws exist]? That we should somehow incorporate into calculations something we know nothing about?" "the distribution [of mutations and availability of amino acids] is certainly not completely uniform. Has all that any relevance to our problem, which is the nature of biological information? Practically not [because it is unlikely any such bias would be congruent with fitness or new function]." In essence: the formation of complex biological mechanisms is a job we know intelligence can do. It is therefore a parsimonious explanation for the job having been done. Moreover, for each and every biological mechanism it and the uniform distribution are the only games in town unless and until a specific alternative is proposed and can be evaluated. Is that a reasonable summary? Assuming that it is, I'll continue. (comment to follow)
Peeling
December 16, 2008 at 09:36 AM
That should be "accept" not "except"
Mark Frank
December 13, 2008 at 11:55 AM
I except PO's point. Some kind of hypothesis testing is simpler. I also realised after I posted that in this context the Bayesian approach includes its own assumptions about the prior distribution which are hard to justify.
Mark Frank
December 13, 2008 at 11:55 AM
Mark, StephenB, onlookers[91,92,93], Mark and I must have posted at the same time and his comment appeared first. My reply to [91] is [93]. The case of Bayesian inference is entirely separate. I don't advocate it in this case but the conclusion would be the same: Caputo cheated. In a way, the focus on "bayesianism" is unfortunate because it is too difficult to follow. It was brought up because of Dembski's "E vs C" chapter. My criticism of the filter is, as has been pointed out many times, within his chosen paradigm of hypothesis testing.
Prof_P.Olofsson
December 13, 2008 at 10:58 AM
StephenB[91], Congeniality back at you! We're having fun, aren't we? :) About Caputo, yes (of course, come on, politician, Caputo=Capone+Bluto, what more evidence do we need!?) and yes. Let's go to specifics. I apologize in advance if this post becomes long. We should do what I assume the experts in the case did, namely, compute the probability that a fair drawing produces 40 D's or more. There are 42 such sequences of D's and R's and let us call this set of sequences E* to follow Dembski. We get a probability that is so small we rule out fairness. So far so good and everybody is in agreement. Let's look a bit closer. In statistical hypothesis testing (SHT from now on) we set up the null hypothesis H0, which is the hypothesis we are trying to reject ("nullify"). Here we would write it as H0:p=1/2 where p is the probability with which Caputo draws. Based on the probability of the outcome, we reject H0. That leaves a bunch of other possible values of p that we haven't tested. Now what? In SHT we also always have an alternative hypothesis, called HA. It could simply be the complement of H0 but most often it is more specific. Here we would take HA:p>1/2, that is, HA specifies an entire range of possible values of p. The property of HA is that any value of p in HA gives a higher probability of E*, thus indicating bias in favor of Democrats. For example, if p=0.9, we get P(E*)=0.08 and if p=0.95 we get P(E*)=0.39, etc. So the conclusion thus far is that we reject H0 in favor of HA. However, all the calculations above are based on drawings being independent and with the same probability p each time (binomial trials). That's our statistical model and it might not be correct. Let's expand it. A more general model is to consider all probability distributions on the set of 2^41 sequences, including those where p changes between drawings, where drawings are not independent etc.
One specific distribution is the one where drawings are independent with p=1/2, that distribution is our H0 and we can still reject it in favor of an alternative HA which is now more complicated. In brief, HA consists of all distributions that make P(E*) greater than P(E* given H0), that is, the distributions that favor Democrats. Note that all these distributions are "chance hypotheses," even those that put a 100% probability on the particular sequence we observed. Probability 1 is just another probability. I think we're done. We have ruled out H0 and whether Caputo cheated by using a method that had independent drawings with p too high, or in some other way, doesn't matter. As for the court case, we need to, of course, rule out the possibility that he used a method that was flawed (such as p too high) unknown to him. He was asked what device he used and it was found to be OK, so only cheating remained. OK, what about "design" and the "filter"? In Dembski's words, we need to "sweep the field" of all chance hypotheses. My question (1) was how to do that? All Dembski seems to offer is to rule them all out based on Caputo's word. My answer is that we can't. My question (2) was why we need to rule out all chance hypotheses. The "filter" requires us to rule out all chance hypotheses and infer "design." But what is "design"? Cheating? But there are plenty of chance hypotheses in HA that also amount to cheating. I'd say the distinction between "design" and "chance" is impossible to make, as you can "cheat by chance." It might even be argued that any design hypothesis is, in fact, a chance hypothesis. The distinction is also pointless. All we need to do is rule out H0, make sure Caputo didn't cheat inadvertently, and we're done. Case closed, go to jail and forget about the 200 bucks.
Prof_P.Olofsson
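The probabilities in these two comments are easy to reproduce. A minimal sketch of my own (binomial model as described; E* = 40 or more D's in 41 drawings; the quoted 0.08 for p=0.9 appears to round from about 0.07):

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k successes in n independent trials with success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 41, 40  # 41 drawings; the event E* is 40 or more D's (42 sequences)
print(p_at_least(k, n, 0.5))   # about 1.9e-11: reject H0: p = 1/2
print(p_at_least(k, n, 0.9))   # about 0.07
print(p_at_least(k, n, 0.95))  # about 0.39
```

Note that P(E* | p=1/2) is exactly 42/2^41, matching the count of 42 qualifying sequences.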
December 13, 2008 at 10:45 AM
# 91 I am not sure if PO is retiring from UD or not. I imagine it was very hard work responding to so many comments simultaneously. I am very much in agreement with him (although far less qualified) so I will attempt to act as a poor substitute. I am sure he will correct me if he returns. Do you believe that Caputo cheated? It is hard to say without knowing more about the process by which the ballot order is determined. Maybe he cheated, maybe the process is strongly inclined to put the parties in alphabetical order, maybe it is strongly inclined to keep the order the same as last time, etc. Can you confirm that fact using statistical methods? Well no. As I explain above that would require understanding the process. But I can give strong reason for supposing that the probability of a democrat topping the ballot paper was higher than 50%. If so, show me the math and explain why your approach alone is adequate. PO did the math. His Bayesian analysis results in a pdf which shows that (assuming a fixed independent probability on each occasion) "the probability of the probability of being a Democrat exceeding 50% was virtually 1" - see post #300 on the first PO thread. As far as I know there is no other method of calculating the probability of the method being biased towards a Democrat based on the evidence available.
Mark Frank
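The Bayesian result Mark Frank cites can be sketched in closed form. Assuming a uniform Beta(1,1) prior on p and 40 D's in 41 independent drawings (the prior actually used in post #300 may differ), the posterior is Beta(41, 2), whose density is a polynomial, so P(p > 1/2) can be computed exactly:

```python
# Posterior for p given 40 D's in 41 binomial trials, uniform Beta(1,1) prior:
# posterior is Beta(41, 2), density proportional to p**40 * (1 - p).

def beta_41_2_cdf(x):
    # Integrate p**40 * (1 - p) from 0 to x and normalise;
    # 1/B(41, 2) = 41 * 42 is the normalising constant.
    return 41 * 42 * (x**41 / 41 - x**42 / 42)

p_biased = 1 - beta_41_2_cdf(0.5)
print(p_biased)  # so close to 1 that bias toward D is effectively certain
```

This reproduces the quoted conclusion that the posterior probability of p exceeding 50% is "virtually 1" (it differs from 1 by roughly 1e-11).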
December 13, 2008 at 10:17 AM
Prof Olofsson: Let me end on a congenial note. I appreciate your many posts and hope that I have not been too contentious in my responses. The three questions I wish I had asked are these: Do you believe that Caputo cheated? Can you confirm that fact using statistical methods? If so, show me the math and explain why your approach alone is adequate. My best attempt to answer your question is this: I don't think one can, on the one hand, consider the math in the context of its application and, on the other hand, consider it independent of its application and get any consistency.
StephenB
December 13, 2008 at 08:12 AM
kf[89], Merry Christmas to you too and everybody else, or as we say in Swedish, God Jul!
Prof_P.Olofsson
December 12, 2008 at 10:32 PM
Prof PO, 1] Re 71: Bayesian inference is not merely applying the simple Bayes’ rule to events. Nor does Dembski assert in his article that that is all there is to Bayesian reasoning. Indeed he points out that a particular "Bayesian" he cites is actually more of a Likelihood theorist but delimits that this distinction is beyond his then present scope. WD as I cited above, was simply first highlighting that Bayesians use composite outcomes [I notice that you do not now deny this . . . ], of the general class that E* is. He then showed, through a simple example, how -- under relevant practical situational circumstances (and associated issues of warrant and inference to best explanation across competing models) -- the Bayesian approach to a design inference [note his 2005 allusion to Robin Collins] would implicitly bring up issues tied to target zones of interest in the overall config space of possible outcomes. Lurking in these zones are all the issues of "law of averages" intuitive expectations, and their rational reconstruction [cf WD here in a parallel thread] into eliminationist reasoning. And that brings us right back to the core point: in the real world, we all have to infer to chance or necessity or design, and the CSI or FSCI in objects is a pretty good pointer to which is dominant for a given aspect. 2] But also, you assert: I have stated my case in my article and we shall see if there is any opposition from the statistical community . . . I think -- having already gone through the way you structured your argument, in a sequence of comments last year starting here and with further critical reading as discussed starting here [also see my response on the paper's opening at 103 which in effect sets all in the context of "creationism" including Sir Fred Hoyle's 747 example] -- that it is fair comment to observe that you set up a strawman caricature of W D's case, and knocked that over.
Given human nature, even among statisticians, and the sort of strawman misrepresentations of ID and polarisations that are all too common, mere agreement of the statistical community is far from good enough. What is needed instead is an assessment on the merits, starting from your construction of what Mr Dembski had to say, then the wider issue of the relevance and credibility of eliminationist reasoning, then the Bayesian alternative, then the real-world contexts and balance on the merits. For no individual or collective authority is better than his or its facts, assumptions, explanatory models and logic. GIGO. In short, as a final fair comment for now: it remains my considered opinion that, on the merits, you have still not made the case that you need to make, so whether or not the case as presented is persuasive simply tells me about the quality of reasoning in the community of your peers. All said, a lighter moment: greetings at Christmas! GEM of TKI PS: Sir Fred Hoyle, in his "tornado forms a 747 in a junkyard" example, is making a serious point tied to statistical thermodynamics considerations, on the origin of life in light of the evident deep isolation of islands of function [specifications or target zones...] in a sea of non-function; the POINT of Hoyle's example is not a matter of mere opinion but of serious issues -- considerations that have long had the OOL research programme in a tizzy, as the recent public exchange between Shapiro and Orgel underscores. (Similar issues of course obtain for body-plan-innovation-level macroevolution, e.g. the Cambrian revolution.)kairosfocus
December 12, 2008, 10:26 PM PDT
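The "target zones of interest in the overall config space" framing above can be made concrete with a toy calculation under a uniform chance hypothesis. The space size, zone size, and helper names (`hit_probability`, `at_least_one_hit`) below are illustrative inventions, not figures or notation from the exchange:

```python
from fractions import Fraction

def hit_probability(target_size: int, space_size: int) -> Fraction:
    """Chance that one uniform draw from the config space lands in the target zone."""
    return Fraction(target_size, space_size)

def at_least_one_hit(p: Fraction, n: int) -> Fraction:
    """Chance of at least one hit in n independent uniform draws."""
    return 1 - (1 - p) ** n

# Toy example: config space of all 100-bit strings, with a "functional"
# target zone of 2^20 strings. A single draw hits with probability 2^-80.
p = hit_probability(2 ** 20, 2 ** 100)
```

Exact rational arithmetic via `fractions.Fraction` avoids the underflow a floating-point calculation would suffer at these magnitudes.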
StephenB[85], I will try to answer your new questions. They are many and I will keep my answers short. As for your first 4 question marks, I think it is more or less impossible to apply statistical methods in such an ambitious scheme as Dembski's. Next question mark; I cite Fred Hoyle because it's a famous quote and it's pretty colorful, whether you agree or not. Number 6, as I have pointed out many times, I am selective based on my expertise. Number 7, of course I am not a "neutral observer." Who is? I am, however, perfectly neutral when it comes to the math and statistics involved. Number 8, what do you mean? You talk about one theory and another. Dembski's filter vs. evolutionary biology? That's apples and elephants. The filter is not a theory, it is a method. As far as I know, there is no ID theory that explains the origin of species. Is there? With that caveat, my answer to number 9 is that I rely on the authority of scientists to establish and explain science. I don't believe in conspiracy theories. If only evolutionary biologists and no other scientists supported evolution, I'd be suspicious. I know I don't have all the answers to all the questions, but at least I've made an attempt to address each one.Prof_P.Olofsson
December 12, 2008, 08:50 PM PDT
StephenB[85], My two questions (1) and (2) on Olofsson I were not directed at you personally, as you can check for yourself. They were posed because I think they're relevant and might generate some interesting discussion. As you jumped in with comments about me, prompted by those questions, I thought that you perhaps also might have an interest in answering them. If you don't, I have no intention of pressing you.Prof_P.Olofsson
December 12, 2008, 08:13 PM PDT
StephenB[85], Regarding questions, yours and mine, see my post[84].Prof_P.Olofsson
December 12, 2008, 07:55 PM PDT
----PO: "Have a pleasant weekend!" Thanks, I hope you have one too. I haven't yet declined to answer your question. On the other hand, my standard is that we must all be held accountable, which means that you should answer my questions first. My earlier reprieve was an attempt at being gracious, meaning I decided not to press the issue. In the meantime, you have asked me to be held accountable with questions of your own. So, the game is back on. Besides, I didn't show the relevance of my questions the first time. I will do that now. -----Ending your article, you wrote: “Arguments against the theory of evolution come in many forms, but most share the notion of improbability, perhaps most famously expressed in British astronomer Fred Hoyle's assertion that the random emergence of a cell is as likely as a Boeing 747 being created by a tornado sweeping through a junkyard. Probability and statistics are well developed disciplines with wide applicability to many branches of science, and it is not surprising that elaborate probabilistic arguments against evolution have been attempted. Careful evaluation of these arguments, however, reveals their inadequacies.” If you have enough confidence to say that Dembski’s approach is wrong, why do you not also have enough confidence to say which approach would be right? If you are so sure about what does not work, how about showing us what would work? Or, are you saying that these arguments cannot be applied to formal statistical methods at all? If you cannot answer the latter question, how can you answer the first two? At the same time, you seem blissfully unconcerned about the incredible improbability of the alternative argument, namely that a cell could emerge randomly. Fred Hoyle dramatizes the point that such an event is most unlikely and you even cite his example. What do you make of that? Evidently, you disagree with him.
That's odd, because Darwinism's mechanisms are neither described with enough precision nor defined with enough consistency to be measured at all. Why are you so selective with your rigor? Doesn’t that selectivity suggest that you are not a neutral observer? Doesn't it seem reasonable that a theory that is precise enough to be subjected to mathematical scrutiny should be preferred over one that is not? Or, as you suggest in your article, do you accept the less precise theory solely on the authority of a majority of evolutionary biologists?StephenB
December 12, 2008, 07:14 PM PDT
StephenB[81],
I am having fun, aren’t you?
Yeah, but I want to have more fun!
When it comes to answering questions, you are a little behind the curve. On the other thread, I posed questions to you at 140, 145, and 188. First things first.
OK, fair enough. Your question in 140 was answered by yourself in the same post and also addressed by me two posts down. There are no questions in 145 and 188, but we went back and forth a while. Your last comment was "fair enough" and I thought we were done. At the very least, I didn't feel there were any unanswered questions and, looking back, I can't find any. You don't have to answer my questions. They are difficult. Have a pleasant weekend!Prof_P.Olofsson
December 12, 2008, 04:06 PM PDT
rockyr[82], Well, you have mostly talked about the philosophical underpinnings of probability without suggesting what would or should possibly change. My view is that the axiomatic approach we use today is the correct setting, as it is independent of any one particular interpretation, and results such as the Law of Large Numbers and the Central Limit Theorem, which are observable in practice, can be proved within the theory. I see no benefit in defining probabilities as limits of relative frequencies, which seems to be what von Mises advocated. I don't want to misrepresent his position, though, and I have pulled out my old copy of "Probability, Statistics and Truth" for further reading. He seems to be skeptical of the Principle of Indifference that Dembski relies on in some of his writings, which may be an interesting point. Anyway, I wish you a good weekend as well!Prof_P.Olofsson
December 12, 2008, 03:51 PM PDT
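The claim above that the Law of Large Numbers is "observable in practice" is easy to demonstrate with a short simulation. The fair die, the sample sizes, and the `relative_frequency` helper are illustrative choices, not anything from von Mises or the article:

```python
import random

def relative_frequency(trials: int, seed: int = 42) -> float:
    """Fraction of rolls of a fair die that come up even, over `trials` rolls."""
    rng = random.Random(seed)
    evens = sum(1 for _ in range(trials) if rng.randint(1, 6) % 2 == 0)
    return evens / trials

# The relative frequency settles toward P(even) = 1/2 as trials grow --
# exactly the behaviour the Law of Large Numbers guarantees.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

Note that the simulation illustrates the theorem but does not define the probability, which is the axiomatic point being made: P(even) = 1/2 is derived within the theory, and the observed frequencies then agree with it.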
Prof_P.Olofsson, 80, For several reasons, I don't want to get involved here in the filter debate; I think this would be best done by Mr. Dembski and his associates, if he is willing to do it on this blog. In this respect, all I am asking is that ID get a fair chance in the public forum to present and discuss its ideas, because they are worthy of a serious in-depth discussion. In other words, I hope ID gets the respect due real science, not the usual labels like "creationism" or "pseudo-science." As I said, I think that the best and most beneficial approach for both sides would be to have a serious look at the philosophical underpinnings of both the evolutionary and design approaches. Have a good weekend.rockyr
December 12, 2008, 02:53 PM PDT
-----PO: "Shouldn’t you answer my questions (1) and (2) from Olofsson I instead of bickering? Wouldn’t that be more constructive, not to mention more fun?" I am having fun, aren’t you? When it comes to answering questions, you are a little behind the curve. On the other thread, I posed questions to you at 140, 145, and 188. First things first.StephenB
December 12, 2008, 02:34 PM PDT
rockyr[78], OK, so where does it lead us? Can we use Kolmogorov's axioms? What are the implications for statistical inference and for the "explanatory filter"? I think it's time for you to be a bit specific. You obviously have objections to what I wrote but it's hard to tell what they are.Prof_P.Olofsson
December 12, 2008, 01:44 PM PDT
Mark Frank, 77, Yes I do. I admit it is not easy, even at the higher philosophical level, but, as I said before, things don't get any easier once they are clouded by a veil of complex math. You still have to deal with the same problems, but now you need to cut through the complex math as well.rockyr
December 12, 2008, 01:26 PM PDT
Prof_P.Olofsson, 76, From what I have read and what I know, I think von Mises was one of the most intelligent and rational modern philosophers of chance & probability. He criticized where criticism was necessary, and he praised where praise was due, such as when considering the ideas of Kolmogoroff. He even considered delicate psychological and physiological aspects, even para-psychology. I like von Mises' definition of probability and chance, i.e. the former with respect to a proper random collective vs. an improper non-random collective. As he says, this is the only truly scientific way to deal with the mess. But, if you disagree, I would be interested in hearing your opinion. I am always ready to learn something new.rockyr
December 12, 2008, 01:20 PM PDT
rockyr[72] That is why my approach is an attempt to transpose the key argument into a domain which I think is better suited for a quicker & easier resolution of the deep misunderstanding between the two sides. You really think that arguing about the philosophy of probability will lead to a quicker and easier resolution????Mark Frank
December 12, 2008, 12:39 PM PDT
rockyr[72],
That is why I am not prepared to go to such depth of highly specific arguments.
A perfectly reasonable and acceptable position. If you don't mind, though, I am curious about one point. Did you mention von Mises and relative frequencies as the "correct" approach, or was it just an example of one point of view?Prof_P.Olofsson
December 12, 2008, 12:28 PM PDT
StephenB[73],
To pretend that Bayesian statistics is independent of Bayesian epistemology
No such pretense on my part. Bayesian statistics is, of course, dependent upon Bayesian epistemology (the parameter as a random variable). And explaining how Bayesian inference is carried out in a simple example is not "providing a point of view." Shouldn't you answer my questions (1) and (2) from Olofsson I instead of bickering? Wouldn't that be more constructive, not to mention more fun?Prof_P.Olofsson
December 12, 2008, 12:23 PM PDT
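The parenthetical "the parameter as a random variable" is the crux of Bayesian statistics, and a conjugate beta-binomial update shows it in miniature. This is a generic textbook sketch, not a reconstruction of anything in Olofsson's article; the prior and the data counts below are invented for illustration:

```python
from fractions import Fraction

def beta_update(alpha: int, beta: int, heads: int, tails: int) -> tuple[int, int]:
    """A Beta(alpha, beta) prior on a coin's heads-probability p, combined
    with observed head/tail counts, yields a Beta posterior (conjugacy)."""
    return alpha + heads, beta + tails

def posterior_mean(alpha: int, beta: int) -> Fraction:
    """Mean of a Beta(alpha, beta) distribution: alpha / (alpha + beta)."""
    return Fraction(alpha, alpha + beta)

# Uniform prior Beta(1, 1) -- the unknown parameter p is itself treated
# as a random variable -- then observe 7 heads and 3 tails in 10 flips:
a, b = beta_update(1, 1, 7, 3)
print(posterior_mean(a, b))  # 8/12 = 2/3
```

The frequentist, by contrast, treats p as a fixed unknown constant and never assigns it a distribution, which is exactly the epistemological divide the two commenters are circling.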
Hello JT, I am enjoying our brief exchange; however, I rarely have enough free time to get into full-fledged discussions anymore. I do think that you are bringing some clear thoughts about the issue to the surface, and I do agree with you on some matters. I would like to specifically reply to one thing you said, since I think it is quite key to the whole issue, and one that I've noticed a lot of people don't think about. You state: "However, It seems ID would say that intelligence is something different from law, but its not." The slight, yet extremely important, thing that is missing in your statement is that although intelligence may operate in a lawful manner (and I see no reason why it doesn't), it also makes use of information. Information, as I've previously alluded to, can be very simply seen as different types of organization. If the organization of the units necessary to realize the intelligent program is not itself the result of mere regularity or physical properties of the units, then there is something other than law involved, either just plain "randomness" or something else which can coordinate law and chance processes. I'll leave it at that for now, since I've already delved into that a bit in my previous comments. Law is a part of intelligence, but to say it is just law is not giving the whole account. The organization (not defined by regularity or by physical properties of the units/bits/letters) necessary for intelligence needs to be accounted for. Previous intelligence, maybe? JT: "The above is all stated rather informally. I suppose its possible you might reply the above is what No Free Lunch is all about, only stated much more rigorously." Yes, No Free Lunch is where Dembski began to develop his theorem re: Conservation of Information. You won't get more CSI out of a program than the CSI of the program itself. Again, a program operates on law and information (or organization) and can make use of random events.
IMO evolutionary algorithms are an excellent example of the coordination of previous intelligence, laws, and chance.CJYman
December 12, 2008, 11:27 AM PDT
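The point that an evolutionary algorithm coordinates prior intelligence (an externally supplied fitness target), law (deterministic selection), and chance (random mutation) can be seen in a minimal elitist-search sketch. Everything here -- the target string, alphabet, mutation rate, population size -- is an invented illustration, not Dembski's or CJYman's construction:

```python
import random

TARGET = "METHINKS"                      # information supplied in advance
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # Law: deterministic scoring against the pre-specified target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rng: random.Random, rate: float = 0.1) -> str:
    # Chance: each position is randomly replaced with probability `rate`.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in s)

def evolve(generations: int = 2000, pop: int = 50, seed: int = 0) -> str:
    # Elitism: keep the best string seen so far and breed mutants from it,
    # so fitness never decreases across generations.
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(pop)] + [best]
        best = max(candidates, key=fitness)
        if best == TARGET:
            break
    return best
```

With these arbitrary settings the search typically reaches the target well inside the generation budget; remove the pre-specified `TARGET` from `fitness` and the "evolution" has nothing to climb toward, which is the point being made about where the information resides.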
----Prof Olofsson: "You would know. I’m not criticizing, I’m educating. Besides, blogging at UD isn’t my entire life, believe it or not." No, you are providing a point of view. To apply mathematics at this level requires judgment and, at some level, a grounding in some philosophical perspective about how it is supposed to be used. To pretend that Bayesian statistics is independent of Bayesian epistemology and therefore solely a matter of factual application is to dismiss a serious point.StephenB
December 12, 2008, 11:19 AM PDT
prof_P.Olofsson, Mark Frank, [re 59, 60, 63, 66], (I am getting funny narrow formatting in preview for some reason, don't know why.) I am sure that blogging isn't the entire life of most people who post here; many of us have to work for a living. That is why I am not prepared to go to such depth of highly specific arguments. Besides, nobody is paying me to engage in such specialist debates. I don't think that highly specialist arguments are transparent enough for most non-specialists. You are right, Prof_P.Olofsson, that you deserve a technical answer to your technical criticism. If others are willing to do it in a concise way limited by the capabilities of a blog, great, I'll be glad to read. But I don't think such nitpicking, even if it may turn out important, is of interest to the general public, and it certainly does not validate wholesale far-reaching denigration or demonization of ID and its concepts. Especially when you consider what the other side has to offer. (I am not impressed by the practical results of evolutionary biology and by the related math and statistics.) Besides, this is not just a strictly technical issue of being correct or incorrect using some statistical methods or approaches; there is philosophy of chance and probability involved at almost every step, even in such highly technical issues. That is why my approach is an attempt to transpose the key argument into a domain which I think is better suited for a quicker & easier resolution of the deep misunderstanding between the two sides. My approach is to challenge you, and all opposing evolutionary scientists & statisticians, to reveal the philosophy or theory of chance and probability to which you or they subscribe. Surely, for a probabilistic mathematician who admits the importance of properly understanding his basic definitions and principles, this is of utmost importance in the science you do.
There are many examples from even not-too-distant philosophy of chance & probability that highlight a need for such an approach, since, due to the complexity of even the basic concepts, the history of chance is strewn with an unusual amount of nonsense and confusing semi-sense. I would be willing to give examples of how even very bright and respected mathematicians and scientists of the highest calibre made outright stupid mistakes in basic philosophical reasoning which they used to power the machinery of their mathematics and statistics. Mark Frank, I am not saying that all statistics is affected by these errors in basic philosophy. A lot of it is correct and it works for many specific purposes. What I am criticizing is that when such often trivial or wrong philosophies of chance are used or applied to the most complex phenomena which deal with biology, intelligence, psychology, etc., sheer insulting nonsense is often produced, especially for those who know about the difficulties involved in probabilistic reasoning. And this nonsense is then presented to the general public as the great achievement of science, with far-reaching consequences.rockyr
December 12, 2008, 10:47 AM PDT
kf[70], There is a difference between what Bayesians do and what Dembski claims that they do. Bayesian inference is not merely applying the simple Bayes' rule to events. I have stated my case in my article and we shall see if there is any opposition from the statistical community. I am happy to say that we are in perfect agreement that P(even number)=1/2. A true kumbaya moment!Prof_P.Olofsson
December 12, 2008, 08:49 AM PDT
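For readers following the exchange: the "simple Bayes' rule [applied] to events" that both parties distinguish from full Bayesian inference looks like this on the agreed die example. The particular events and numbers below are an illustration, not a quotation of either party's calculation:

```python
from fractions import Fraction

def bayes_rule(p_b_given_a: Fraction, p_a: Fraction, p_b: Fraction) -> Fraction:
    """Bayes' rule for plain events: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Fair die. A = "roll is even", B = "roll is at most 4".
p_a = Fraction(3, 6)          # P(even) = 1/2, the agreed-upon value
p_b = Fraction(4, 6)          # P(roll <= 4)
p_b_given_a = Fraction(2, 3)  # of the evens {2, 4, 6}, those <= 4 are {2, 4}
print(bayes_rule(p_b_given_a, p_a, p_b))  # P(even | roll <= 4) = 1/2
```

Bayesian *inference*, by contrast, places a prior distribution over an unknown parameter rather than over single die events -- the distinction Olofsson is drawing in the comment above.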