
The Circularity of the Design Inference


Keith S is right. Sort of.

As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument. As Keith describes the argument:

In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

In its most basic form, a specified complexity argument runs something like this:

  • Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
  • Premise 2) The bacterial flagellum is highly specified.
  • Conclusion) The bacterial flagellum did not evolve.

Keith’s point is that in order to show that the bacterial flagellum did not evolve, we have to first show that the evolution of the bacterial flagellum is astronomically improbable, which is almost the same thing. Specified complexity moves the argument from arguing that evolution is improbable to arguing that evolution didn’t happen. The difficult part is showing that evolution is improbable. Once we’ve established that evolution is vastly improbable, concluding that it therefore did not occur is a minor and obvious final step.

In some cases, people have understood Dembski’s argument incorrectly, propounding or attacking some variation of:

  1. The evolution of the bacterial flagellum is highly improbable.
  2. Therefore the bacterial flagellum exhibits high CSI.
  3. Therefore the evolution of the bacterial flagellum is highly improbable.
  4. Therefore the bacterial flagellum did not evolve.

This is indeed a very silly argument and people need to stop arguing about it. CSI and specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is to first show that its evolution was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.
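To make that dependence concrete: in Dembski’s 2005 paper “Specification: The Pattern That Signifies Intelligence,” the specified-complexity measure takes roughly the form Chi = -log2[ 10^120 * phi_S(T) * P(T|H) ], with design inferred when Chi exceeds 1. A minimal sketch of that calculation (in Python, with placeholder numbers for the flagellum rather than estimates from the literature) makes the point plain: the chance probability P(T|H) is an input to the formula, never an output of it.

    import math

    def specified_complexity(p_t_given_h, phi_s):
        # Chi = -log2(10^120 * phi_S(T) * P(T|H)), roughly per Dembski (2005).
        # p_t_given_h: probability of the pattern T under the chance hypothesis H;
        #              it must be established by a separate argument before this runs.
        # phi_s: specificational resources, i.e. the number of patterns at least
        #        as simple to describe as T.
        return -math.log2(1e120 * phi_s * p_t_given_h)

    # Placeholder values only; the hard scientific work is justifying p_t_given_h.
    chi = specified_complexity(p_t_given_h=1e-250, phi_s=1e20)
    print(f"Chi = {chi:.1f} bits; design is inferred only if Chi > 1")

Nothing in the computation tells you whether a number like 1e-250 is right; that has to come from elsewhere, which is exactly the circularity worry when the formula is misused.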

If specified complexity doesn’t help establish the improbability of evolution, what good is it? What’s the point of the specified complexity argument? Consider the following argument:

  1. Each snowflake pattern is astronomically improbable.
  2. Therefore it doesn’t snow.

Obviously, it does snow, so the argument must be fallacious. The fact that an event or object is improbable is insufficient to establish that it did not form by natural means. That’s why Dembski developed the notion of specified complexity, arguing that in order to reject chance as an explanation, an event must be both complex and specified. Hence, it’s not the same thing to say that the evolution of the bacterial flagellum is improbable and to say that it didn’t happen. If the bacterial flagellum were not specified, it would be perfectly possible for it to have evolved even though that evolution was vastly improbable.
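As a toy illustration of the difference, assuming nothing about biology: flip a fair coin 200 times. Whatever comes up, that exact sequence had probability 2^-200, so “this outcome was astronomically improbable” is true of every run and rules nothing out, just as with snowflakes. The improbability only carries force when the outcome also matches an independently given pattern, such as all 200 heads. A short Python sketch:

    from random import getrandbits

    n = 200
    outcome = [getrandbits(1) for _ in range(n)]  # some sequence or other always occurs

    p_exact = 0.5 ** n                            # about 6e-61, and true of every possible outcome
    specified = all(bit == 1 for bit in outcome)  # an independently specified target: all heads

    print(f"P(this exact sequence) = {p_exact:.2e}")
    print("matches the specified pattern (all heads):", specified)

Improbability alone is ubiquitous; improbability of a specified outcome is what the design inference treats as significant.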

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and the design inference shows why that improbability gives us reason to reject Darwinian evolution and accept design.

So Keith is right: arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn’t designed for the purpose of demonstrating the improbability of evolution. When used for its proper role, specified complexity is a valid, though limited, argument.

 

Comments
Winston, I reiterate that if Dembski had thought that CSI excluded "chance" and "law" by definition, then it would have been pointless to make a lengthy argument to show this by other means. Something that's true by definition does not need to be demonstrated by argument -- you can simply restate the definition again! Further, the Law of Conservation of Information makes no sense if CSI excludes "chance" and "law" by definition:
Natural causes are incapable of generating CSI. I call this result the law of conservation of information, or LCI for short... Intelligent Design, p. 170
The fact that Dembski calls this a law indicates that he thinks it is an empirical truth, not a definitional tautology.
keith s, November 15, 2014 at 06:48 PM PDT
Box, You made the mistake of trusting StephenB. Stephen misquoted Winston as saying:
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
What Winston actually wrote was this:
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
Winston doesn't think that Dembski's argument, as Dembski intended it, is circular, but he does think that other people have misconstrued Dembski's argument, and that the altered argument is circular.
keith s, November 15, 2014 at 06:46 PM PDT
Winston:
But to reject modern evolution theory we need P(H|T).
Absolutely. My intended but poorly stated point was that everybody already rejects any hypothetical explanations for biology that require astronomical luck. We don't need to learn about specified complexity to be convinced of this position, although Dembski might argue that specified complexity explains why we take this position.
R0bb, November 15, 2014 at 06:27 PM PDT
Winston Ewert: Dembski’s design argument is a circular argument.
Is this meant to be a general statement? IOW does the 'circularity' also apply to e.g. Dembski's 'conservation of information'? Or does circularity only apply to Dembski's (second version ?) specified complexity argument? Please clarify.
Box, November 15, 2014 at 06:21 PM PDT
Winston, As a protégé of Dembski, I'm surprised that you're not more familiar with his writings. In your linked article, you said:
The argument of specified complexity was never intended to show that natural selection has a low probability of success. It was intended to show that if natural selection has a low probability of success, then it cannot be the explanation for life as we know it.
That's not correct, and the quote I gave above shows this:
We can summarize our findings to this point: (1) Chance generates contingency, but not complex specified information. (2) Laws (i.e., Eigen’s algorithms and natural laws, or what in section 6.2 we called functions) generate neither contingency nor information, much less complex specified information. (3) Laws at best transmit already present information or else lose it. Given these findings, it seems intuitively obvious that no chance-law combination is going to generate information either. After all, laws can only transmit the CSI they are given, and whatever chance gives to a law is not CSI. Ergo, chance and laws working in tandem cannot generate information. This intuition is exactly right, and I will provide a theoretical justification for it shortly. Nevertheless the sense that laws can sift chance and thereby generate CSI is deep-seated in the scientific community. Intelligent Design, p. 167
keith s, November 15, 2014 at 05:48 PM PDT
Winston Ewert
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
And later
Dembski’s original argument isn’t circular.
Is KeithS right to say that Dembski’s argument is circular, or is KeithS wrong to say that Dembski’s argument is circular? You appear to have quietly changed your mind. Does the word “original” mean something in this context? Winston to KeithS
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
Have you forgotten that KeithS was not referring to others' interpretations of specified complexity? He laid the charge of circularity on Dembski and Dembski's argument directly--and you agreed? If you are now saying that KeithS is wrong and Dembski’s argument is not circular, you are saying it too quietly. If you want to be heard now, you will have to roar. From a reader: “Can you show the original (Dembski’s) version as well? I cannot find.”
I’m not sure what you are looking for here. This is the argument as originally developed by Dembski.
I think what the poor reader is asking is why you introduced the word “original.” Come to think of it, so am I.
StephenB, November 15, 2014 at 05:06 PM PDT
Winston Ewert:
The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.
As I see it: the reason the Darwinian account of "evolution" is vastly improbable is due to a logical construct made of generalizations that hide all pertaining to intelligence (and other things) in what can be described as a black-box of natural selection. Instead of answers to the question of how intelligence works throughout biology words like "altruism" are invented that really only look smart while not answering anything. The way around the very serious and misleading weaknesses of Darwinian theory is through theory premised for "intelligent cause", which does not even need a crutch word like "evolved" to explain the origin/development of intelligent living things. It is then possible to make reliable predictions in regards to all that is or is not "intelligent". Going on theory for something else is only a good way to get wrong answers that look right to those who none the less find the conclusions useful, for reasons that do not fully pertain to science.
Gary S. Gaulin, November 15, 2014 at 02:46 PM PDT
Good work, Winston Ewert :) You demonstrated that ID proponents at UD (significantly BA and KF, not to mention the lesser lights) do not understand ID. This is most welcome to settle some record straight, even though this profoundly undermines the ID cause. Brave move :)
E.Seigner, November 15, 2014 at 02:03 PM PDT
No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
No. In Dembski's work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI.
Winston Ewert, November 15, 2014 at 01:26 PM PDT
Can you show the original (Dembski’s) version as well? I cannot find.
I'm not sure what you are looking for here. This is the argument as originally developed by Dembski.
For example, in Chapter 6 of his book Intelligent Design, Dembski devoted pages to arguing that “law” and “chance” cannot produce CSI.
See my previous post: http://www.evolutionnews.org/2013/04/information_pas071201.html. Dembski has always required the calculation of probabilities according to relevant chance hypotheses. What Dembski is arguing in that chapter is that the impossibility of chance or necessity producing CSI derives directly from the definition of CSI. That is, natural processes cannot produce CSI by definition. That's true now and it was true then.
Dembski’s specified improbability concept, as laid out in The Design Inference, was an attempt to justifiably infer low P(H|T) from low P(T|H) without going through Bayes. I would argue that this can’t be done (which is another discussion), but I would also argue that Dembski’s attempt is not circular.
That's very true and insightful. I agree, except for the part about Bayes.
I think that this renders specified complexity somewhat superfluous in terms providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. There would be no need to invoke specified complexity in order to get everyone on board.
If we could show that there are no gradual pathways to biological structures, we have shown that P(T|H) is very small. But to reject modern evolution theory we need P(H|T). As you noted, CSI seeks to get P(H|T) from P(T|H). So we do need to invoke specified complexity, or something playing a similar role, to complete the ID argument.
And speaking of the CSI mess, HeKS is under the impression that you agree with his interpretation of CSI. One entailment of this interpretation is that designers cannot create CSI as Dembski defines it, as the very idea of designed CSI is incoherent. You may want to clear that up.
I don't see anything in the linked post that suggests a belief that designed CSI is incoherent.
Winston Ewert, November 15, 2014 at 01:05 PM PDT
WJM: Precisely, the explanatory filter process is an observationally anchored evaluatory process. Where, specified complexity in cases of relevance is functionality based and that is observable. Steps to create metric models build on that observability but they do not erase it such that CSI becomes utterly unrelated to the FSCO/I that we observe. Instead the relationship is that FSCO/I has dFSCI as a subset (one that is often not irreducibly complex). Likewise irreducibly complex things do manifest FSCO/I. And by abstracting out the type of specification you create a superset, CSI. CSI has been useful for some forms of analysis but precisely because of its abstractions is open to all sorts of debate points in a situation where selective hyperskepticism is common. By anchoring down to empirical observables imposed by requisites of functional specificity based on interactive parts that collectively achieve functionality, many of those debate points are readily seen as hollow, flawed, question-begging or even strawman tactic. KF
kairosfocus, November 15, 2014 at 12:51 PM PDT
5th: Biosystems do not need to be irreducibly complex for FSCO/I to be a relevant criterion. There are many cases of multipart systems with a degree of redundancy that have no core set of parts that the removal of any one of these destroys relevant function; irreducible complexity is a rather strict claim and criterion. At crude, simplistic level that's why we have two lungs and two kidneys as well as two arms. Similar things happen in cells. In the old Apollo rocket systems, there were typically five ways for vital functions to be carried out by design. As I pointed out to WE and as he accepted, in tech systems such as the dFSCI in error correcting codes there are no kill-points where single point failures are catastrophic. But at the same time the systems in question -- bio or technological -- exhibit such a degree of complex, functionally specific interactive organisation to achieve function that such is not plausibly a result of blind chance and mechanical necessity, but of intelligently directed configuration. KF
kairosfocus, November 15, 2014 at 12:43 PM PDT
keith said:
Those statements make no sense in light of today’s version of CSI, which rules out natural and algorithmic explanations by definition.
Natural and algorithmic explanations are not ruled out by definition. They are ruled out by evaluation.
William J Murray, November 15, 2014 at 12:36 PM PDT
WE: The analytical metric model WmAD proposed is not itself observable, being an analytical construct. Similarly, in generalising from functionally linked informational organisation to a more abstract general specification, that moves away from observables to an analytical term. However so soon as one deals with a description detachable from but designating a zone T in W for a real world case, one is back at criteria of choice that are at minimum in principle observable so that one may decide in/out with reasonable reliability. And, in much the same NFL context of pp 144 and 148, WmAD pointed out that the form of specification relevant to life is functional; which is a highly observable phenomenon -- does the AA string fold, agglomerate and function in that enzyme, or not and does the enzyme have x-level of activity? Functional specificity pivoting on information rich organisation and linked interacting parts in living forms becomes highly relevant. And that, is highly observable and amenable to metric models that are linked to what WmAD did. Orgel and Wicken spoke to the context of the world of life and to recognisable phenomena, though they did not at that time essay on quantification. Quantification, analysis and abstractions have their use but that use does not imply a barrier that erects a middle wall of partition such that the models and quantities are essentially unrelated to the context of the world of life. Where, you should be aware that some objectors have tried to deny or dismiss that there is such a thing as real specified complexity, or that relevant specification can be functional, or that requisites of multi-part interactive function based on correctly arranged and coupled parts has the effect of requiring that configs and parts be specific to zones Z of a much wider configuration space of clumped or scattered parts that will not function. Which in turn puts us in the situation of sparse blind chance and necessity driven needle in haystack search that is predictably fruitless on available atomic and temporal resources. Where by contrast, intelligently directed configuration aka design, routinely creates such functionally specific, complex organisation and associated information, through knowledge, purpose and creative skill as is quite easily seen in a technological world, or even just reflecting on what is going on on text strings in posts in this thread. It is in that context, that on observing high contingency on similar initial conditions, we may infer that a particular aspect of a phenomenon or object is not reasonably explained by mechanical, lawlike necessity rooted in the forces and materials of nature. Though other aspects obviously must reflect such physical or chemical etc necessities. The objects we deal with are composed of atomic matter. High contingency under similar initial conditions has two main known causes: chance factors, and intelligently directed configuration. Where, chance in some form is default; absent the manifestation of functionally specific complex organisation and associated information that bring to bear the sort of sparse needle in haystack search challenge to blind chance and necessity search or sampling that makes such an explanation maximally implausible. Where, such FSCO/I is routinely created by design so that per vera causa it is an empirically reliable sign of it. 
That is, we see how FSCO/I as an empirically evident manifestation of specified complexity, is a sign of design and plays a role in the design inference explanatory process. Going beyond, the search for search challenge and the concept of active injected information allow us to see as well how such information can reduce the odds against finding relevant zones exhibiting FSCO/I or the like forms of specified complexity as Marks and Dembski have explored in some fairly recent work. (That is, active information in effect steers results to zones of interest in various ways overcoming the sparse search challenge. Such active information is a manifestation of intelligently directed configuration.) KF
kairosfocus, November 15, 2014 at 12:30 PM PDT
Winston, And speaking of the CSI mess, HeKS is under the impression that you agree with his interpretation of CSI. One entailment of this interpretation is that designers cannot create CSI as Dembski defines it, as the very idea of designed CSI is incoherent. You may want to clear that up.
R0bb, November 15, 2014 at 11:21 AM PDT
Winston:
You can certainly have a notion of specified complexity that is observable, like Orgel and Wicken did. But care must be taken not to conflate it with Dembski’s conception.
Thank you. Although you should be warned about how the moderator of this board has responded to someone else who pointed out that Orgel and Dembski are talking about two different concepts. Barry:
Mathgrrl, I will tell you what is ridiculous: Your attempt to convince people that Orgel and Dembski are talking about two different concepts, when that is plainly false. Like the Wizard of Oz you can tell people “don’t look behind that curtain” until you are blue in the face. But I’ve looked behind your curtain, and there is nothing there but a blustering old man. I will not retract an obviously true statement no matter how much you huff. You’ve been found out. Deal with it.
But it's also worth noting that in a follow-up thread, Barry scoffed when I told him that Dembski's examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and narrowband signals. And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis. So the board moderator, who has been "studying the origins issue for 22 years", doesn't understand what Dembski means by CSI. Which means that if you want to clean up the CSI mess, you have an uphill battle ahead of you.
R0bb, November 15, 2014 at 11:01 AM PDT
Robb said, I think that this renders specified complexity somewhat superfluous in terms providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. I say. I agree. However I think that the concept of CSI comes in handy as a general marker of a set of noncomputable functions including IC but also including things like cosmological fine tuning. Hope that makes sense. Peace
fifthmonarchyman, November 15, 2014 at 10:50 AM PDT
Winston:
But its a combination of irreducible complexity and specified complexity to produce a whole argument for intelligent design.
Indeed that seems to be Dembski's argument, although I don't know of anyplace that he has said so as straightforwardly as you have. I think that this renders specified complexity somewhat superfluous in terms providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. There would be no need to invoke specified complexity in order to get everyone on board.
R0bb, November 15, 2014 at 10:36 AM PDT
Thank you, Dr. Ewert. I think the confusion here arises from the conflation of P(T|H) with P(H|T). keith_s summarizes thusly:
To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed -- that is, that it could not have been produced by unguided evolution or any other unintelligent process.
But the phrase "could not have been produced by unguided evolution or any other unintelligent process" is ambiguous. It may mean "Given unintelligent processes, the event is very unlikely to occur", i.e. P(T|H) << 1. Or it may be interpreted as "Given that the event occurred, it's very unlikely that an unintelligent process was the cause", i.e. P(H|T) << 1. This happens all the time when talking about probabilities -- it's one of the hazards of using informal language. Dembski's specified improbability concept, as laid out in The Design Inference, was an attempt to justifiably infer low P(H|T) from low P(T|H) without going through Bayes. I would argue that this can't be done (which is another discussion), but I would also argue that Dembski's attempt is not circular.
R0bb, November 15, 2014 at 10:12 AM PDT
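For reference on the two quantities distinguished above (using D purely as a label for the design hypothesis, and assuming H and D exhaust the alternatives), the Bayesian route would be:

    P(H|T) = P(T|H)P(H) / [ P(T|H)P(H) + P(T|D)P(D) ]

A tiny P(T|H) pushes P(H|T) down only if some rival hypothesis assigns T a substantially larger probability; Dembski's specification can be read as an attempt to certify such a rival without having to assign the priors P(H) and P(D).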
Even though this discussion is dialed in on CSI, improbabilities, and bacterial flags. etc. I can't help but take pause to think about things in a broader context. After all, life with its "development" has a long history of step by step, and apparently coordinated multiple step progressions originally emerging from the menu of chemicals available. And within the context of the environmental conditions that were and have been present. Scientific experiments that try and duplicate what might have led to certain phases of life's development at its earliest stages seem to underscore the need for intelligent manipulation to overcome "probability" barriers (frustration because of normal chemical responses). Chemical laws at that stage, do not seem to be sufficient in producing reactions required for sympathetic results. In fact, chemical laws there, left to themselves, have demonstrated the propensity to destroy any meaningful progression. Science does have the ability to observe in real time how chemicals respond under certain conditions. Seems to me this might be an area in the study of progression towards "living" chemistry that probability calculations might have particular objective significance. "Living" chemical systems had to go through all phases of progression and development to arrive at where they are today. Sorry, no short-cuts. I don't want to divert the discussion. But just to express a thought. Maybe for another discussion.
bpragmatic, November 15, 2014 at 09:52 AM PDT
Against my better judgement KeithS said Those statements make no sense in light of today's version of CSI, which rules out natural and algorithmic explanations by definition. I say. Of course they do if we see them as part of an extended effort to define CSI For example I might have to spend a lot of ink explaining that "The temperature in Cleveland is 21 degrees" is not a self evident truth as part of my explanation of what self evident truths are. That is how explanations work. Peace
fifthmonarchyman, November 15, 2014 at 09:47 AM PDT
WOW! Someone from UD openly admitting that an opponent is right (sort of)? Never thought I'd see the day. It's a small step for Winston, a huge leap for UD. Congrats ladies!
AVS, November 15, 2014 at 09:45 AM PDT
keiths:
With the circularity issue out of the way, I’d like to draw attention to the other flaws of Dembski’s CSI.
Winston Ewert:
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
Winston, The problem is that Dembski does (or at least did) take the presence of CSI as a non-tautological indication that something could not have evolved or been produced by "chance" or "necessity". For example, in Chapter 6 of his book Intelligent Design, Dembski devoted pages to arguing that "law" and "chance" cannot produce CSI. Here is an excerpt:
We can summarize our findings to this point: (1) Chance generates contingency, but not complex specified information. (2) Laws (i.e., Eigen's algorithms and natural laws, or what in section 6.2 we called functions) generate neither contingency nor information, much less complex specified information. (3) Laws at best transmit already present information or else lose it. Given these findings, it seems intuitively obvious that no chance-law combination is going to generate information either. After all, laws can only transmit the CSI they are given, and whatever chance gives to a law is not CSI. Ergo, chance and laws working in tandem cannot generate information. This intuition is exactly right, and I will provide a theoretical justification for it shortly. Nevertheless the sense that laws can sift chance and thereby generate CSI is deep-seated in the scientific community.
And also:
It is CSI that for Manfred Eigen constitutes the great mystery of life's origin, and one he hopes eventually to unravel in terms of algorithms and natural laws.
Those statements make no sense in light of today's version of CSI, which rules out natural and algorithmic explanations by definition. Dembski clearly thought, back then at least, that CSI could be an indicator that something had not evolved.
keith s, November 15, 2014 at 09:34 AM PDT
Winston said What do you mean by "highly complex."? I say, I would very tentatively say infinite Kolmogorov complexity or zero entropy. Which I believe are equivalent values.
fifthmonarchyman, November 15, 2014 at 08:51 AM PDT
No, its both. The measurement is based on how specified and improbable an object is.
No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
I mean that the argument of specified complexity doesn’t establish that evolution is improbable, instead it assumes that we have established that in some other way.
No, it doesn't. The reason you measure the CSI is because you suspect that design is necessary. We do not know that the flagellum was designed; we suspect design was necessary, so we measure the CSI and we look for known, natural explanations that otherwise acquire the target.
You even call it an argument at the end of your post.
Pardon my mistake. It's not an argument.
William J Murray, November 15, 2014 at 08:50 AM PDT
My $.02 I think it would be a great advance to ID, if ID proponents would get it clear in their heads what the I in CSI is.
Upright BiPed, November 15, 2014 at 08:35 AM PDT
Winston, You have provided your version on the CSI argument.
Premise 1) The evolution of the bacterial flagellum is astronomically improbable. Premise 2) The bacterial flagellum is highly specified. Conclusion) The bacterial flagellum did not evolve.
Can you show the original (Dembski's) version as well? I cannot find it.
Box, November 15, 2014 at 08:27 AM PDT
No. CSI isn’t an argument; it’s a measurement.
No, its both. The measurement is based on how specified and improbable an object is. You even call it an argument at the end of your post. The argument shows that high CSI events don't happen naturally.
CSI doesn’t assume evolution is highly improbable; it makes no assumption about evolution whatsoever.
When I say assumption, I don't mean that we assume that evolution is highly improbable without proof. I mean that the argument of specified complexity doesn't establish that evolution is improbable, instead it assumes that we have established that in some other way.
The CSI argument determines that life as a result of natural forces is highly improbable.
How does it do that? The CSI argument asks you to calculate probabilities; it doesn't tell you how to calculate those probabilities.
Winston Ewert, November 15, 2014 at 08:02 AM PDT
Winston said What do you mean by "highly complex."? I say, Check out the discussion in gpuccio's thread. Measuring complexity and establishing an objective standard is where the action is IMHO Peace
fifthmonarchyman, November 15, 2014 at 08:01 AM PDT
Premise 1) the bacterial flagellum is specified and highly complex.
What do you mean by "highly complex."?
WE: Pardon, a note. The point on multipart interaction to achieve a specific function is not necessarily an appeal to irreducible complexity.
Fair enough.
PS: Specified complexity is first an observable phenomenon (as noticed by Orgel and Wicken etc) that becomes puzzling as it seems intuitively unlikely to result from blind watchmaker type mechanisms.
The version of specified complexity developed by Dembski isn't an observable phenomenon. See http://www.metanexus.net/essay/explaining-specified-complexity. You can certainly have a notion of specified complexity that is observable, like Orgel and Wicken did. But care must be taken not to conflate it with Dembski's conception.
Winston Ewert, November 15, 2014 at 07:52 AM PDT
