Uncommon Descent Serving The Intelligent Design Community

Some Thanks for Professor Olofsson II

Categories: Intelligent Design

The original Professor Olofsson post now has 340 comments on it, and is loading very slowly.  Further comments to that post should be made here.

Comments
CJYMan [26]: Just a follow-up to let you know I did read both Parts I and II of your overview of Dembski's paper. As your intention is primarily to clarify what he wrote, I think it's perfectly acceptable as far as it goes. I did make a note of several things I could have addressed in your treatment, but upon reflection it would appear contentious for me to dwell on them to any significant extent. I'll just bring up a few items.

The fact that you can send the same idea using different patterns in the same language or even different patterns by using another language shows that the ideas themselves are independent from the pattern which is sent across the communication channel. That is how we know that the idea "contained" in the pattern is defined independent of the pattern itself. We could even state the same meaning in a different way – "Do you have the ability to comprehend what these symbols mean?" Either way, the idea contained in the above pattern (question) can be transferred across a communication channel as an independent pattern of letters. This is referred to as functional semantic specificity – where specific groups of patterns which produce semantic/meaningful function are "islands" of specified patterns within a set of all possible patterns.

Dembski's paper does not contend that ideas or meaning are independent of physical patterns. That's something you're reading into it. He talks about patterns that are independent of the actual signal. The word "semantic" does not appear in that paper (not to mention the phrase "functional semantic specificity"). Maybe it does in another paper of his, but I think his objective here is to pare everything down to what he thinks he has a reasonable chance of actually demonstrating.
------------------------

The next question: will a random set of laws cause an information processing system and evolutionary algorithm to randomly materialize? According to recent work on Conservation of Information Theorems, ID theorists state that the answer is "NO!"

If that's what ID theorists are seeking to demonstrate, it would seem a pointless exercise, because no one would disagree with the premise. A random set of laws does not increase the odds of anything, just by virtue of being laws. No one would take issue with this, so why seek to prove it? However, take out the word "random":

The next question: will a set of laws cause an information processing system and evolutionary algorithm to materialize?

That's the relevant question, and ID theorists are not in a position to answer "NO!" to it.
-------------------------------

In Part II, I would agree that an opening and closing door does not refute the Design Inference. But it seems to me there are compressible patterns throughout the non-biological component of the physical universe. I was going to suggest that there would be a nonrandom pattern of molecules in a chunk of solid matter, so that you could express that pattern in a compressible way (thus indicating design). But I don't know what quantum theory does to that. Even completely excluding the biological world, it seems you could find patterns throughout the physical universe that would have to indicate design in the ID scheme of things.
JT
December 10, 2008 at 05:47 AM PDT
GP: Thanks. Mr Baxter: Please note that all my comments at UD are in the context of the online note that is linked through my handle in the LH column. I believe that sections B and C will be relevant to several of your remarks above. In particular, I think you will see that the issue is not whether Mt Improbable has a gently sloping, easy back-path, but how to get TO the shores of the islands and archipelagos of bio-function that are marked by the various body plans. In the case of the Cambrian, we need to account for some 30 - 40 phyla and subphyla that turn up in the fossil record in a window of some 10 MY, on the usual timelines. The significance of this is that, first, to get to first life you have to get to some 300 - 500 k bases, plus the executing machinery, codes and algorithms; then, to get to onward body plans, you credibly need at least dozens of millions of base pairs, dozens of times over. One base pair has four states [A/G/T/C] and stores two bits. Just 250 - 500 bases would imply a sea of 10^150 - 10^301 configurations, and that is the threshold that would exhaust not merely the search resources of our home planet, but those of the observable universe. So, the issue is not the blind, non-purposeful, unfit-destroying culling filter known as Natural Selection, but having a probabilistically credible means of innovating the functional information systems of life. This, in a context where the known complexity of the sub-assemblies [DNA, proteins etc] is collectively, and sometimes even individually, well beyond the reach of random search on the scope of the observed universe. Just the posts on this thread alone are sufficient to show that intelligent designers are capable of creating digital string-based FSCI, and the computers we are using are evidence that such agents are capable of creating the executing machinery and required algorithms and codes. Intelligence, sir, is the only empirically observed source of FSCI: functionally specific information requiring storage capacity beyond 500 - 1000 bits, as a practical description. So, it is very reasonable to abductively infer that the observed information systems in the cell credibly come from the same class of causal source as the much cruder ones we have invented over the past 70 or so years: intelligent design. GEM of TKI
kairosfocus
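A minimal sketch of the configuration-space arithmetic above, using only the figures given in the comment (4 states, hence 2 bits, per base):

```python
# Illustrative only: size of the configuration space for N DNA bases,
# each with 4 possible states (A/G/T/C), i.e. 2 bits per base.
import math

def config_space(num_bases: int) -> int:
    """Total number of possible sequences of length num_bases."""
    return 4 ** num_bases

for n in (250, 500):
    size = config_space(n)
    print(f"{n} bases: 4^{n} = 2^{2 * n} ~ 10^{math.floor(math.log10(size))}")
# Prints ~10^150 for 250 bases and ~10^301 for 500 bases.
```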
December 10, 2008 at 05:15 AM PDT
PhilipBaxter: #30: You say: "I've read a few of your posts before and I suppose my thought is that you think it's either 'all chance' or design. Yes, the probability of a 747 or a complex protein or gene sequence forming all at once is vastly improbable. So improbable that I think we'd all agree that it was impossible, UPB believers or no." OK, that's a good start. "Except I think you do your opponent a disservice, kairosfocus, by consistently misunderstanding that point." Maybe it's you who are misunderstanding? Please read the following point. "If you were to look at their point of view with an open mind, I believe you'd find that random chance has its place (which gene, how it will mutate is random) but the environment provides a very non-random filter." Believe me, we really are looking at their point of view with an open mind, and the result is always the same: it is completely wrong. Are you suggesting that we are not aware of the suggested role of NS in darwinian theory? Do you think we are completely stupid? See the next point. "The environment selects and improbable structures can so be constructed over time and generations." That's the point, at last. And it's very simple. Let's see if you understand it. The environment can select only what already exists. Are we OK with that? The environment is not an engine which generates variation or information. It's just a filter. And a blind filter, obviously. The environment has no idea of what it is selecting, or why. The only selection which can happen is based on the appearance of a new (or improved) function (which must be relevant enough to give a sufficient reproductive advantage, but let's not go into details here). If the function is not already there, it cannot be selected. So, we can calculate the minimum CSI increase necessary for the appearance of a new function in any specific model of transition, if and when darwinists provide at least one. If the increase in CSI is high enough (that is, improbable enough), that transition is simply unacceptable in a model based on random variation. You can obviously try to deconstruct that transition by showing that there are simpler intermediates which are selectable (that is, exhibit some selectable function and can be fixed). But you have to "do" that, not just imagine it. So, unless you can show that any existing protein function can be achieved through specific selectable intermediates, at the molecular level, darwinian theory is a complete failure. And it is. Because you cannot demonstrate that, because it's simply not true. Complex function is not deconstructable into a sum of simpler functions achievable with simple bits of information. That's really an urban myth of darwinism (one of the many). Please take notice that Behe has clearly shown in TEOE that, while single mutations are obviously in the range of possibility for all organisms, double coordinated mutations are exceedingly rare, and probably out of the range of what most organisms can achieve. But I want to be generous: I concede a "step" of 5 (five!) unguided coordinated mutations as "possible" (it is not, I know, but it's Christmas time, after all!). So, please show me any model which shows the possible achievement of a specific new function in a medium-length protein (let's say 200 amino acids) from a completely different protein through single functional selectable steps of 5 coordinated mutations. Then we can start talking about the role of NS. "Complex does not form instantly except with Intelligent design, remember?"
There is no reason to say that complex forms "instantly" with Intelligent Design. It may well take its time. But complex "does" form with Intelligent Design, and it "does not" form through random variation and NS. "Improbable comes from lots of somewhat less improbable." That's simply false. One improbable function is not the sum of lots of less improbable functions. Why should it be? "Yes, it's all improbable, but here we are." We certainly are here, and so? We are here because we are designed. "That's their point of view, right?" It certainly is. On that you are right. "Your improbability arguments, in my opinion, serve only to provide a verbal fog you can hide behind so you don't have to actually answer your critics on this point." And that kind of statement is a very good example of the arrogance, superficiality and inconsistency which reign in the darwinian field.
gpuccio
December 9, 2008 at 11:52 PM PDT
PS: Oh well, neither the sigma on the LHS nor the phi on the RHS made it to the thread. Sorry. The formula is σ = –log2[φ_S(T)·P(T|H)]. PPS: GP, thanks for illuminating remarks, as always.
kairosfocus
December 9, 2008 at 11:33 PM PDT
All, esp. Philip and Prof PO: Interesting discussion overnight. Prof, kindly email me . . . and yes, we are grateful that there has been no hard hit here, though Cuba and Haiti were not so lucky. Sigh! Our old friend down south has been celebrating an early Christmas, too: pyroclastic flows with no warning, uncomfortably echoing the 1902 St Pierre, Martinique case. (Bad news for the geothermal energy development effort. [Thread owner, please pardon a bit of group bonding, which always helps on tone when diverse groups address contentious issues.]) Philip: 1] Examples of FSCI and our use of the filter. E.g. no. 1: you took the post at 28 above as the product of an intelligent actor, not lucky noise -- which, strictly speaking, could physically have generated it. It is a basic exercise to estimate the config space for 128-state ASCII text, to estimate the fraction that would be sense-making text more or less in English, and to compare the relative capacities of random searches with the empirically known capacities of intelligent agents. FSCI is a reliable, empirically anchored sign of intelligence. Your action also tells me that you, yourself, intuitively accept and routinely use the EF. So, you need to address an evident self-referential incoherence. 2] Quantification? Cf. Dembski's formula on p. 18 of the 2005 paper on Specification at his personal reference article site, as CSI is the superset for FSCI. Sigma on the LHS is CSI's metric in bits: σ = –log2[φ_S(T)·P(T|H)]. The paper gives details and simple examples. Overall, though, we have good reason -- tied to the foundations of the statistical form of the 2nd law of thermodynamics [cf. appendix 1 of the note always linked through my handle] -- to estimate config spaces and to see that we are dealing with deeply isolated islands of function, in a context that is near-free, esp. for OOL, which is where it must begin. Cf. GP's points on that and my simple discussion here. But kindly note, too, that the FSCI-CSI concept, as my always-linked appendix 3 discusses, is NOT due to WmAD, but to Orgel et al. from 1973 on, as they tried to understand the peculiarities of the molecular basis of life. 3] Chance/lucky noise, necessity, agency or other? The trichotomy above was already immemorial in the days of Plato's The Laws, Book X [cf. my cite in appendix 2]; so I think we can be fairly confident that it is a well-established, long-tested analytical framework. "Law" relates to aspects of a phenomenon or object that show LOW OR LITTLE CONTINGENCY. Where there is significant freedom to vary outcomes from case to case, the contingency is, per massive observation (and, arguably, basic logic), either directed or undirected. The latter we call "chance," the former, "design." It would therefore be interesting indeed to see a credible case that there is a realistic fourth alternative -- not a dubiously vague promissory note. "Lucky noise" simply speaks to how hard it is for undirected contingency to get to deeply isolated islands of function, similar to how we simply do not fear that the O2 molecules in the rooms where we sit will rush to one end and so kill us. GEM of TKI
kairosfocus
December 9, 2008 at 11:29 PM PDT
PhilipBaxter: I think your last posts betray some common misunderstandings. Maybe you are new to the discussion, so I will make some elementary comments: #29: You say: "Interesting stuff. Do you have a list, or could I give a few examples of objects and you could tell me the FSCI, or how you would go about putting a figure on it?" A list of FSCI in biological objects? Just start with all known functional proteins longer than, let's say, 120 amino acids (just to be safe). "Does genome size directly relate to FSCI? I presume humans, as the most advanced organism on the planet, also have the largest amount of FSCI? What units is FSCI measured in?" FSCI can be measured. You must understand well what it is, anyway. It is a property defined for one "assembly" of information exhibiting a specifically defined function. So, the definition of the assembly and of the function is critical to the measurement of CSI. For instance, in the simple example of one protein, the protein itself is the pertinent assembly, and the protein function is the function (in many cases we can define a minimum level for the function). For complex machines, instead, like the flagellum, the "assembly" would be the flagellum itself, and the function its function. Once you have assigned those terms, FSCI can be measured as the complexity of the assembly (the ratio between the subset of functional targets and the whole set of possible sequences, expressed for instance as the negative logarithm). That will be the CSI of that assembly (in other words, its complexity). But the assembly must obviously display the function. For many single proteins, with our current understanding of them, it is possible to make an approximate computation of CSI, expressed as a lower limit, making some reasonable assumptions. For instance, I have suggested that for the whole search space we can easily define a lower limit as the space of combinations of an amino acid sequence of the same length as the protein itself (so, for human myoglobin, that would be 20^154). It is more difficult to evaluate the size of the target set. There we have to make other kinds of considerations. One way of reasoning is to consider what we know from protein folding and protein engineering. I am confident that with the growing knowledge in those fields we will soon be able to make reasonable approximations, and I believe we can already assume that, however big the functional target space is, it can never be big enough for its ratio with the whole space to bring the probability of a protein such as myoglobin over the limit of the UPB. Another suggested approach is to consider the same protein in known species, as Durston, Chiu, Abel and Trevors have done, and to measure the complexity indirectly, considering the difference between the Shannon H of a particular protein (taking into account how much it varies in different species) and the same value for a truly random sequence of the same length. Thus, they measure functional information in a unit they have defined as the Fit (functional bit). That's a very interesting approach, and it shows that FSCI "can" be measured.
gpuccio
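To make that lower-bound calculation concrete, here is a rough sketch under loudly flagged assumptions: the search space for a length-L protein is taken as 20^L (20^154 for 154-residue human myoglobin, as above), while the size of the functional target set is a purely hypothetical placeholder, since real estimates would have to come from protein folding and engineering data:

```python
# Sketch only: protein complexity in bits as -log2(target set / search space),
# with the search space for a length-L protein taken as 20**L.
# The functional target-set size below is a hypothetical placeholder.
import math

def complexity_bits(length_aa: int, functional_targets: float) -> float:
    """-log2 of (assumed functional targets) / (20**length_aa)."""
    total_bits = length_aa * math.log2(20)        # log2 of the whole search space
    target_bits = math.log2(functional_targets)   # log2 of the assumed target set
    return total_bits - target_bits

# Even granting a (purely hypothetical) 10^40 functional sequences of length 154,
# the ratio is about 2^-533, i.e. a probability well below 10^-150.
print(round(complexity_bits(154, 1e40)))   # -> 533
```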
December 9, 2008 at 11:09 PM PDT
kairosfocus [28], There is our insular friend, finally! Welcome to the thread that bears my name. My comments are gone and that's just as well; they were probably mostly distracting to the current discussion, in which I will not participate. I hope all is well and that you escaped yet another hurricane season unscathed!
Prof_P.Olofsson
December 9, 2008 at 10:00 PM PDT
PhilipBaxter, Hello, I was interested in your comment.
Is it a stark choice between “lucky noise” and “design” then?
Perhaps it is. After all, either there was intelligent input, or there was not. In any case, these are the two concepts that experience tells us must be in play. There could also be the unknown, but it’s fairly clear that each side already thinks they have a winner. Then again, the existence of neither of these excludes the existence of the other. So, the answer could also be that it’s a little of both, or a lot of one over the other. Perhaps it’s even that one works in one domain while the other works in another. How could anyone know any of these answers as long as only one answer is allowed? It’s a fair question. To your larger point, design proponents can easily draw from more fertile ground than just the improbability that inanimate particle matter may one day organize itself into living tissue full of molecular machinery driven by an encoded data stream, metabolizing energy and exhibiting a strong will to survive.
random chance has its place
Yes, it does. But isn't it also the case that observations suggest chance has no role in the selection of nucleotides in the original replicating cell (perhaps 200-400 protein sequences at 300-1000 nucleotides per sequence, plus regulation, transcription, organization, replication, energy distribution, etc., all coming together within the lifespan of the first cell)? Chance is subject to the search space and to what happens to be available, and it is inherently independent of any other nucleotide in the chain. These things don't just extend the probabilities argument; they describe a mechanism that is the polar opposite of what is needed to create a functional sequence – no order, no law, complete unity.
Upright BiPed
December 9, 2008 at 09:38 PM PDT
“Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.” That's an interesting faith-based statement :-) Using the criteria of methodological naturalism, how can methodological naturalism avoid the infinite loop?
tribune7
December 9, 2008 at 07:42 PM PDT
Something about this thread made me think of this: "Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science."
Khan
December 9, 2008 at 07:13 PM PDT
"I believe you'd find that random chance has its place (which gene, how it will mutate is random) but the environment provides a very non-random filter. The environment selects and improbable structures can so be constructed over time and generations." And some things are impossible for natural selection plus random genetic change to accomplish. What was the environment in which proteins self-organized into a flagellum?
tribune7
December 9, 2008 at 07:05 PM PDT
CJYman [26]: I will read that when I get a chance. Thanks.
JT
December 9, 2008 at 05:57 PM PDT
An apt illustration of this is the fact that lucky noise could in principle account for all the posts in this thread. Nothing in the physics or logic forbids that. But we all take it for granted that the posts are intelligent action.
Is it a stark choice between "lucky noise" and "design" then? I've read a few of your posts before and I suppose my thought is that you think it's either "all chance" or design. Yes, the probability of a 747 or a complex protein or gene sequence forming all at once is vastly improbable. So improbable that I think we'd all agree that it was impossible, UPB believers or no. Except I think you do your opponent a disservice, kairosfocus, by consistently misunderstanding that point. If you were to look at their point of view with an open mind, I believe you'd find that random chance has its place (which gene, how it will mutate is random) but the environment provides a very non-random filter. The environment selects, and improbable structures can so be constructed over time and generations. Complex does not form instantly except with Intelligent Design, remember? Improbable comes from lots of somewhat less improbable. Yes, it's all improbable, but here we are. That's their point of view, right? Your improbability arguments, in my opinion, serve only to provide a verbal fog you can hide behind so you don't have to actually answer your critics on this point.
PhilipBaxter
December 9, 2008 at 03:12 PM PDT
GEM
An empirically observable sign that points to intelligence.
Interesting stuff. Do you have a list, or could I give a few examples of objects and you could tell me the FSCI, or how you would go about putting a figure on it? Does genome size directly relate to FSCI? I presume humans, as the most advanced organism on the planet, also have the largest amount of FSCI? What units is FSCI measured in?
PhilipBaxter
December 9, 2008 at 03:04 PM PDT
Patrick: Does this help? As you will recall, I have long noted that, say, a die tossed in a game illustrates how chance, necessity and intelligent action may all be at work in a situation, and how they are not simply reducible one to the other. However, for purposes of analysis -- comparable to how we isolate signal from noise in comms work, or law from bias and error in a simple physics experiment -- we isolate aspects and address how they behave. Once we do so, we can see that:
1] If an aspect reflects low contingency, i.e. natural regularity, it is best explained as mechanical necessity that we describe in terms of a law. [E.g. unsupported heavy objects on earth fall at about 9.8 m/s^2.]
2] Where there is significant contingency as part of the key aspect in focus, we see per experience that it may be purposefully directed and controlled, or it may be more or less free up to some probability distribution, the most free case being a so-called flat distribution across the configuration space for outcomes.
3] The issue is to tell the difference to some degree of reasonable confidence.
4] When we see that something is complex [per the UPB, in practical terms: storage capacity for more than 500 - 1,000 bits of information], AND simply or purposefully or functionally specified, we have excellent reason to infer that the contingency is intelligently directed.
An apt illustration of this is the fact that lucky noise could in principle account for all the posts in this thread. Nothing in the physics or logic forbids that. But we all take it for granted that the posts are intelligent action. Why? ANS: The textual information is functionally specified as contextually relevant text in English, and is complex well beyond 500 - 1,000 bits of information-carrying capacity. The odds of that happening by chance are so far beyond merely astronomical that it is more than reasonable to infer to intelligent action. So, even the objectors to the EF are actually using it themselves, intuitively -- even where they have not precisely calculated the probability distributions! This, too, should bury the "false positive" argument. For, the legitimate form of the layman's law of averages reflects that for realistic samples, we are not at all likely ever to see by chance outcomes so deeply isolated in the config space that they are overwhelmed by far more common clusters of non-functional states. [This utterly dwarfs the proverbial challenge of finding a needle in a haystack at random.] BTW, this is also a view that in fact builds on the core idea in Fisherian elimination. Do objectors to such reasoning seriously expect that the oxygen molecules in the room in which they sit will spontaneously move to one end, leaving them asphyxiating? (The odds in view are comparable, and are rooted in the same basic considerations.) So, I think it is fair and reasonable comment to say that FSCI (or its superset, CSI) is a reliable indicator of intelligent design: an empirically observable sign that points to intelligence. Thus, when we see such signs, we have a right to infer that there was an intelligence that left them behind, even when we cannot specifically identify "whodunit"! GEM of TKI
kairosfocus
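A minimal sketch of that binary flowchart logic, written as code purely for illustration (the inputs are judgments the investigator supplies, and the 500-bit threshold is the practical figure quoted in the comment; this is not Dembski's own formulation):

```python
# Sketch of the explanatory-filter decision logic described above.
# The three inputs are judgments the investigator supplies; the 500-bit
# threshold is the practical figure quoted in the comment.

def explanatory_filter(low_contingency: bool,
                       information_bits: float,
                       specified: bool,
                       threshold_bits: float = 500.0) -> str:
    if low_contingency:
        return "law/necessity"          # natural regularity
    if specified and information_bits > threshold_bits:
        return "design"                 # complex AND specified contingency
    return "chance"                     # contingency not beyond the threshold

# Example from the thread: a few hundred characters of contextually relevant
# English text (7 bits per ASCII character) is contingent, specified, complex.
print(explanatory_filter(False, 300 * 7, True))   # -> design
```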
December 9, 2008 at 02:28 PM PDT
Joseph #22, I agree with what Rude said. All I'm trying to say is that the EF in its original form does not DIRECTLY reflect these realities. As I've already explained in #14 this does not make it wrong or useless--I even give examples of practical applications--but its description of reality is not accurate for SOME scenarios. Personally I think that Bill should not "dispense with it" but "update it" since these problems are fixable, although the resulting flowchart would probably be fairly complex compared to the original form.
Patrick
December 9, 2008 at 01:35 PM PDT
Hello JT. I have also read through "Specifications: the Patterns which Signify Intelligence" and I have put my two cents into the discussion on my own blog. It is a little long, but I try to explain it from what I understand as best I can. If you wish to check out my perspective, go to http://cjyman.blogspot.com/2008/02/specifications-part-i-what-exactly-are.html
CJYman
December 9, 2008 at 10:52 AM PDT
gpuccio [10], mark frank [12]: I responded to both of you yesterday, but those responses are now gone, along with some of Prof Olofsson's and Pav's (evidently all erased in whatever maintenance they were doing here). But very briefly: gpuccio: The length of the Turing machine computer would be constant and very small, and so can be ignored. I have thought about the time issue myself (the time the process might take), but that is not considered in algorithmic information theory either. mark: I actually agree with you, and some of my previous posts were too vague. The length of the output string is irrelevant. I would say the probability to assign to the output string is that of the smallest program-plus-input that would generate it (2 raised to minus that length).
JT
December 9, 2008 at 08:06 AM PDT
I think it would be great if Dembski worked all the kinks out and came up with something that blew everyone out of the water. (I use a lot of cliches too. I hate that.)
JT
December 9, 2008 at 06:17 AM PDT
JT, great post. One minor point for now: you stated:
I quickly realized, however, why the design inference is never applied to macro biological objects.
I believe the reason is (a) to keep it simple and perhaps (b) unfamiliarity with macro-biology. I say (b) because to me the neuro-muscular system (wet electricity) is one of the best evidences for intelligent design. Gotta go...
Joseph
December 9, 2008 at 06:09 AM PDT
Patrick: The problem is that the EF in its original binary flowchart form does not explain any of what you just said.
My bad- I was under the impression that first came the determination and then the investigation to get an explanation. Take Stonehenge, for example: first we determined it was designed, and then, via years of research, an explanation was provided. But what do I know? I only have decades of investigation under my belt. To Rude in comment 19: that is exactly what I am saying!
Joseph
December 9, 2008 at 06:06 AM PDT
I'm still taking a look at the "Specification" paper. A couple of days ago I implied that the technical sections were just too much for me, but I must have looked kind of foolish to anyone who's actually looked at these sections, as they're really no big deal (at least not in this particular paper). It's just that when all the paragraphs start turning Greek I immediately start scanning ahead for a conclusive sentence, e.g. "So what can we conclude from all this? Namely the following..." (Maybe some of the following is commonly known already and if so I apologize for covering old ground. It's possible I've perused critical reviews of Dembski's writings in the past without comprehending some of the objections made, but upon encountering the referenced passages myself now, am suddenly able to understand what they were talking about. I don't know if that's the case with the following, though. I also apologize for writing in the first person so much.) Whenever I'm reading Dembski, any momentary epiphany where I actually start to comprehend something he's saying is like a small victory, and in my optimism I start to think "Maybe this guy is actually on to something." But then a few paragraphs later all optimism is gone, as brand new misgivings emerge. I've had both types of reaction of late. First some background: As I mentioned in a previous post, the way CSI works is that the less complex a detected pattern is, the more strongly it indicates design in Dembski's scheme. You can absolutely take my word on this (whether or not you've heard it before). This seemed ridiculous to me. However, I realized something recently that momentarily tempered my criticism: you can apply a CSI pattern to a macro object. The reason I didn't think about this previously is because of the example that's always used - the bacterial flagellum. But you could definitely look at a human being and validly apply the simple pattern "walks on two legs" and use that to infer design (in the Dembskian scheme). This seemed to dramatically increase the relevance of CSI. I quickly realized, however, why the design inference is never applied to macro biological objects. (And here is where my optimism started to fade.) The reason is, you can immediately point to a known mechanism to account for such macro objects - epigenesis. The reason the bacterial flagellum is used so much as an example is presumably because the mechanism to account for its origination is not known. IOW, it is used repeatedly specifically for the purpose of bolstering an argument from ignorance (i.e. "we don't know exactly what mechanism accounts for the bacterial flagellum, so there must not be one"). You could not do this credibly with a pattern exhibited by a macro biological object. However, there is an even more serious problem I discovered recently, related to the simplicity requirement for specifications (and this was my primary reason for writing this post): there is no objective basis for deciding which pattern to apply to an object. This is relevant because you can only rule out chance with a simple pattern, but any one of innumerable patterns, across a wide spectrum of complexity, could be validly applied to an object. Some background: Dr. Dembski writes, "With specifications, the key to overturning chance is to keep the descriptive complexity of patterns low." For reference, consider that the definition of a specification (i.e. something not caused by chance) is any pattern where:

-log2[10^120 * fs(T) * P(T|H)] > 1

where P(T|H) is the uniform probability of the entire bit string, and fs(T) is the specificational resources of the target pattern T. (I don't know what the editor here will do with Greek characters, so I've taken them out.) As fs(T) increases, the ability to rule out chance drastically and geometrically decreases. It's only with extremely simple patterns that you'll be able to rule out chance: "For a[n]...example of specificational resources in action, imagine a dictionary of 100,000 (= 10^5) basic concepts. There are then 10^5 1-level concepts, 10^10 2-level concepts, 10^15 3-level concepts, and so on. If 'bidirectional,' 'rotary,' 'motor-driven,' and 'propeller' are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form 'bidirectional rotary motor-driven propeller.' Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources characterizing the bacterial flagellum."
So with only a 4-level concept you have to plug 10^20 into the formula above. Do the math and figure out what the result will be if even a few additional terms are in the specification. After discussing the specificational resources of the flagellum above, Dr. Dembski talks about poker hands and gives the example of "single pair" as a valid pattern. But why couldn't we use an augmented pattern for that hand, for example "single pair, both red", to describe any hand with a single pair where the two cards were diamonds and hearts? This is an independent pattern, too. Only now the specificational resources are 10^20, whereas previously they were 10^10. So, which is the correct value? The answer will have a drastic effect on our calculation to rule out chance. Now consider the flagellum. We saw how with card hands there could be at least two valid patterns simultaneously, one more complex and descriptive than the other. So with flagella, why couldn't we drop "bidirectional" and use the following as a valid pattern: "rotary motor-driven propeller"? Now all of a sudden our specificational resources have dropped by a factor of 10^5. We could go in the other direction and form valid conditionally independent patterns of 10, 20, 50, 100, 1000 words or more. Any such description would immediately throw us out of design (as design can only be shown in the Dembskian scheme with simple patterns). Is there only one type of propeller? Is anyone thinking you could not decompose "propeller" into an arbitrary number of more descriptive terms? What about "motor-driven"? Is that all that can conceivably be said to characterize the power source? So once again, there is no objective basis for deciding which pattern to use, and the pattern we use dictates whether or not the entity could be the result of chance in Dembski's scheme. As a final objection, consider another pattern: "blue water with small waves". Seems like a pattern to me. Does it indicate design? It seems all sorts of inanimate objects in nature could be interpreted as designed, either by being compressible as the result of a repeating structure (e.g. a repeating molecular structure), or merely because with a general consensus we could ascribe some observed pattern to them, e.g. "blue water with small waves". This seems so ridiculous that I must be missing something, and if so I apologize. OTOH, maybe I'm late to the table and everyone here already understands all these things and is just not talking about it. (In which case I apologize as well.)
Or maybe I'm full of it, and the Design Inference is of great value (and that would actually be fine with me).
JT
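For reference, a sketch of the specification measure JT quotes, with hedged, illustrative numbers: specificational resources are modeled as 10^(5k) for a k-term description drawn from a 10^5-concept dictionary (so 10^20 for the four-term flagellum description), while P(T|H) is a placeholder, since its actual value for the flagellum is precisely what is in dispute:

```python
# Sketch of -log2[10^120 * fs(T) * P(T|H)], with fs(T) modeled as 10^(5k) for a
# k-term description from a 10^5-concept dictionary.  P(T|H) is a placeholder.
import math

def specified_complexity(p_t_given_h: float, description_terms: int) -> float:
    """Value of -log2[10^120 * fs(T) * P(T|H)]; chance is rejected when > 1."""
    log2_phi = 5 * description_terms * math.log2(10)   # fs(T) = 10^(5k)
    log2_replications = 120 * math.log2(10)            # the 10^120 factor
    return -(log2_replications + log2_phi + math.log2(p_t_given_h))

# With a hypothetical P(T|H) = 10^-200 and the 4-term description
# "bidirectional rotary motor-driven propeller":
print(round(specified_complexity(1e-200, 4)))    # -> 199 (> 1, so chance rejected)
# Each extra descriptive term costs about 16.6 bits, which is JT's point:
print(round(specified_complexity(1e-200, 10)))   # -> 100
```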
December 9, 2008 at 05:37 AM PDT
Uh, sorry, seems that Patrick pretty much said better in 14 what I tried to say in 19.
Rude
December 8, 2008 at 01:31 PM PDT
Joseph (in 15), let me suggest, if I may, that design is never 100% design, that even if perfect design exists in the abstract, all of it that is instantiated in matter is subject to the vagaries of chance and the limits of necessity. A circle where pi goes to its infinite perfection may exist as a mathematical object, but never as a metal disk. Thus not only do chance and necessity take their toll in time, they're there from the outset. Maybe what was meant above—if anything—is that when the Explanatory Filter is applied to anything material it also has to allow for some chance and necessity. Thus the crystalline structure in the stone of a building is due to necessity, and the imperfections in your new automobile can be chalked up to chance (or negligence, if not malice).
Rude
December 8, 2008 at 01:18 PM PDT
However, if a computer programme which amounts to write 1 a 100 times is only 10 bits long (one can imagine such a mechanism arising naturally) then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1’s is actually 2^-10 and the assumption of uniform pdf was very misleading.
True, but if the computer program is unknown how can we account for that?
Patrick
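For reference, a minimal sketch of the arithmetic in the quoted passage, with the 10-bit program length taken as a stipulated assumption rather than a measured quantity:

```python
# Sketch of the arithmetic in the quoted passage: probability of a string of
# one hundred 1's under a uniform distribution over 100-bit outputs, versus
# under a uniform distribution over (assumed) 10-bit generating programs.
output_bits = 100     # length of the outcome "111...1"
program_bits = 10     # stipulated length of a program that writes 1 a hundred times

p_uniform_output = 2.0 ** -output_bits     # 2^-100, about 8e-31
p_uniform_program = 2.0 ** -program_bits   # 2^-10, about 1e-3

print(p_uniform_output, p_uniform_program)
```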
December 8, 2008 at 11:22 AM PDT
#12 Mark Frank: I'll start with your last statement:
I think I must have missed something. This all seems so trivially obvious??
Indeed, I have understood your argument since your first message. Let's look at it:
Let’s make it more concrete. Suppose the outcome is 100 1’s. The probability of this outcome assuming a uniform pdf is 2^-100. However, if a computer programme which amounts to write 1 a 100 times is only 10 bits long (one can imagine such a mechanism arising naturally) then the probability of this arising under a uniform pdf is 2^-10. Therefore, the probability of the 100 1’s is actually 2^-10 and the assumption of uniform pdf was very misleading.
Your point is clear, but IMHO it's not pertinent to the DNA case. Here the whole DNA code is what is supposed to have arisen by natural processes. In other words, we do not have a restricted piece of DNA which deterministically produces the whole sequence. In my opinion, the only way to apply your argument to the DNA code would be if assembling the DNA double helix were strictly deterministic, driven by some sort of chemical properties. But this is *not* what happens, for the sequence of the nucleotides (A C G T) is largely chemically independent.
kairos
December 8, 2008 at 11:18 AM PDT
Joseph, I agree that in practice that's how everyone has been using the EF for many years. The problem is that the EF in its original binary flowchart form does not explain any of what you just said. I don't see the need to be defensive toward all criticism, especially if it's constructive. So the EF either needs to be updated (made more detailed) in order to reflect these realities or be discarded (at least in terms of usage in regards to scenarios where chance, necessity, and design are NOT mutually exclusive).
Patrick
December 8, 2008 at 10:44 AM PDT
(1) I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive.
I strongly disagree.
1- Designers must take into consideration the laws that govern the physical world, i.e. necessity.
2- Designers also know that random effects will occur. Take a look at today's Stonehenge. I doubt anyone thinks it was designed and built in its current state. IOW those random effects are taken into account. Chance is accounted for.
I always saw the EF as an accumulation:
1- To get by step one, necessity alone isn't enough to explain X, so we move on to step 2.
2- Necessity and chance together (as Bill writes in NFL) are not enough to explain X, so we move on to step 3.
3- Designing agencies working with the laws of nature can explain X. And X's current condition is the result of chance events. And chance could have also played a part in the design.
The "Ghost Hunters" on the SciFi channel use the EF. That is, first they try to explain X via regularity, and only after all "natural" processes have been exhausted do they say "ghost". And I doubt that Stonehenge was determined to be an artifact using CSI. To me CSI would be a verifier of the EF.
Joseph
December 8, 2008 at 10:37 AM PDT
I suppose I'll copy my last comment here, since the topic came up:
The new Dembski does not believe the filter works.
I wish Bill had taken the time to explain his comment. The only qualifier he added was "pretty much"...which does not explain his position adequately. But to say that "it does not work" or "it is a zombie" is a gross over-simplification. [Actually, after thinking about it I'd call it a distortion.] Before he wrote it I had expressed via email my belief that the old formulation of the EF was too simplistic (which was also pointed out here). This is not to say that it does not work in practical applications, but that it's limited in its usefulness since it implicitly rejects the possibility of some scenarios, since "[i]t suggests that chance, necessity [law], and design are mutually exclusive." For example, the EF in its original binary flowchart form would conflict with the nature of GAs, which could be called a combination of chance, necessity, and design. [To clarify, I'm referring to scenarios which have a combination of these effects, not whether necessity equates to design or some nonsense like that.] In regards to biology, when the EF detects design why should it arbitrarily reject the potential for the limited involvement of chance and necessity? For example, in a front-loading scenario a trigger for object instantiation might be partially controlled by chance. Dog breeding might be called a combination of chance, necessity, and design as well. This does not mean the EF is "wrong" but that it's not accurate in its description for ALL scenarios. The current EF works quite well in regards to watermarks in biology since I don't see how chance and necessity would be involved and thus they are in fact "mutually exclusive". [I'd add SETI as well, presuming they received something other than a simplistic signal.] Personally I believe that the EF as a flowchart could be reworked to take into account more complicated scenarios, and this is a project I've been pondering for quite a while. Whether Bill will bother to do this himself I don't know.
Patrick
December 8, 2008 at 10:05 AM PDT
From 2 above: "There has been confusion over Dembski's point (1) in the other thread. What I believe he is saying is that chance, necessity, and design may all contribute to an event." As the nonspecialist here, I wonder why not just say that all design occurs against the backdrop of chance and necessity, just as a painting implies the backdrop of canvas and paint. If necessity equals the reality of mathematics and the laws it allows, maybe chance—even if the laws disallow it—equals context. Any act of design, i.e., the employment of free will, adds the ingredient of chance. Thus a coin toss, even if entirely predictable after the fact if every detail of context were known, still hangs on the unpredictability of the agent's act.
Rude
December 8, 2008 at 06:40 AM PDT
