Uncommon Descent Serving The Intelligent Design Community

The Tragedy of Two CSIs


CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation.

CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts.

CSI, as used by others, is something more along the lines of the appearance of design. It’s typically along the same lines as the notion of complicated developed by Richard Dawkins in The Blind Watchmaker:

complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.

This is similar to Dembski’s formulation, but where Dawkins merely requires that the quality be unlikely to have been acquired by random chance, Dembski’s formula requires that the quality be unlikely to have been acquired by random chance and any other process, such as natural selection. The requirements of Dembski’s CSI are much more stringent than those of Dawkins’s complicated or the non-Dembski CSI.

Under Dembski’s formulation, we do not know whether or not biology contains specified complexity. As he said:

Does nature exhibit actual specified complexity? The jury is still out. – http://www.leaderu.com/offices/dembski/docs/bd-specified.html

The debate for Dembski is over whether or not nature exhibits specified complexity. But for the notion of complicated or non-Dembski CSI, biology is clearly complicated, and the debate is over whether or not Darwinian evolution can explain that complexity.

For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact. For non-Dembski formulations of specified complexity, the law of the conservation of information is a controversial claim.

These are two related but distinct concepts. We must not conflate them. I think that non-Dembski CSI is a useful concept. However, it is not the same thing as Dembski’s CSI. They differ on critical points. As such, I think it is incorrect to refer to all of these ideas as CSI or specified complexity. I think that only Dembski’s formulation, or variations thereof, should be termed CSI.

Perhaps the toothpaste is already out of the tube, and this confusion of the notion of specified complexity cannot be undone. But as it stands, we’ve got a situation where CSI is used to refer to two distinct concepts which should not be conflated. And that’s the tragedy.
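The dependence of the score on the assumed chance hypothesis is the crux of the difference, and it can be made concrete with a toy calculation. This is purely illustrative: the function name and the numbers below are invented for this sketch, not drawn from Dembski's papers.

```python
import math

def csi_bits(p_event: float) -> float:
    """Specified-complexity-style score: -log2 of the event's probability
    under whatever chance hypothesis H is assumed."""
    return -math.log2(p_event)

# A 100-character specified sequence over a 2-symbol alphabet.
n = 100

# Non-Dembski usage: H is "random chance alone" (uniform draw).
print(csi_bits(0.5 ** n))          # 100 bits

# Dembski's usage: H must include every mechanism in operation. If some
# selection-like process reaches the target with probability 1e-6
# (an invented figure), the score collapses accordingly.
print(csi_bits(1e-6))              # ~19.9 bits
```

The same artefact scores 100 bits against pure chance but under 20 bits once a more capable mechanism is admitted into H, which is why the two usages of "CSI" come apart.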

Comments
why these concepts are not the same.
Durston at least calculates his "functional sequence complexity". Complex specified information remains undefined. BTW, how's the paper on semiotics coming?
Alan Fox
November 27, 2013 at 12:54 PM PDT
Jerry:
We are not advocating replacing it with a design hypothesis but with a statement that the best current science knows no known method to account for life’s changes.
Well, I'm not sure this statement of yours is entirely accurate, Jerry. Michael Behe doesn't dispute common descent. Since life first got going on Earth, there have been huge changes. The environment, the continents, the climate have all changed dramatically over the last three billion years. There have been several mass extinctions, such as the K-T event, and who is to say there are not more catastrophic changes in the pipeline? Evolutionary theory may offer only a partial explanation for the way lifeforms were moulded by these events, but there is no other current theory that approaches it in explanatory power. ID explains nothing. There is no ID theory.
Alan Fox
November 27, 2013 at 12:50 PM PDT
GP: It’s absolutely the same thing. Alan Fox: Sorry, gpuccio, I don’t agree. Kirk Durston has indeed made some calculations that stand up to scrutiny, even though they are far from immune from criticism. My own view is that Durston’s calculations tell us nothing we don’t already know.
With the absolute certainty of any physical law we have ever known, we can rest assured that Alan Fox will not engage/debate/explain/argue with GP as to why these concepts are not the same. He will simply make his assertions and hide behind them. Nothing, and no-thing, will ever change that.
Upright BiPed
November 27, 2013 at 12:39 PM PDT
except where they are in conflict with reality, in which case I reserve the right to point out the disparity.
Reality says that your definition of the TOE has never produced anything but trivial changes. Now this does not say that changes in life forms do not have a naturalistic origin, but until a process comes along that can explain the changes in information in organisms, intelligence will have to remain a viable option, and in the meantime Darwinian processes have been eliminated as a viable theory. And you should welcome the effort to get this theory eliminated from the curriculum. We are not advocating replacing it with a design hypothesis but with a statement that the best current science knows no known method to account for life's changes. I assume you will support that effort, based on your comment about pointing out disparity when things are in conflict with reality.
jerry
November 27, 2013 at 12:37 PM PDT
It’s absolutely the same thing.
Sorry, gpuccio, I don't agree. Kirk Durston has indeed made some calculations that stand up to scrutiny, even though they are far from immune from criticism. My own view is that Durston's calculations tell us nothing we don't already know.
One can use any term, but the concept is the same:
I'm afraid I think that is sloppy. It wouldn't do in medical diagnosis, it doesn't do in science in general.
...the complexity linked to the function, the improbability of the target space given the random hypothesis. That is very clear in Durston’s paper. If you understand the concepts, it is the same thing.
But those who are persuaded by evolutionary theory reject the premise that it is a purely random process. Design by the environment is assuredly not random.
It is really strange that you, who are so ready to believe that CSI cannot be really calculated, don’t even try to realize that Durston has abundantly calculated it, and that your only argument is that he uses another term (very similar, however) for it.
Let's leave belief out of it. I am simply unconvinced that CSI is a quantifiable quantity. Should I turn out to be wrong on that, the next hurdle is whether a calculation can demonstrate that a system, process, or entity arose by "Design" while that too remains a vague, undefined concept. It just isn't Science. I don't want to interfere in people's beliefs or religion - except where they are in conflict with reality, in which case I reserve the right to point out the disparity.
Alan Fox
November 27, 2013 at 11:49 AM PDT
Alan Fox:
The term he uses, which originated with Leslie Orgel and developed by Robert Hazen is “Functional Sequence Complexity”.
It's absolutely the same thing. One can use any term, but the concept is the same: the complexity linked to the function, the improbability of the target space given the random hypothesis. That is very clear in Durston's paper. If you understand the concepts, it is the same thing. It is really strange that you, who are so ready to believe that CSI cannot be really calculated, don't even try to realize that Durston has abundantly calculated it, and that your only argument is that he uses another term (very similar, however) for it.
gpuccio
November 27, 2013 at 11:23 AM PDT
don’t raise a mention of CSI
You are confusing a concept with a measure of some aspect of that concept.
Functional Sequence Complexity
This is a measure of the information in something (a protein sequence or the analogous gene sequence) that is complex and specifies the function of something else. Sounds like a measure of FCSI to me.
jerry
November 27, 2013 at 09:59 AM PDT
A thousand coins get another distribution.
With respect to the proportion of heads vs. tails, it is the binomial distribution. I provided details here, along with expectations and standard deviations: https://uncommondescent.com/mathematics/ssdd-a-22-sigma-event-is-consistent-with-the-physics-of-fair-coins/ Strictly speaking, we could even use the binomial for 1 coin.
scordova
November 27, 2013 at 09:57 AM PDT
Coin flips are a uniform distribution where there are two equi-probable events. A die is a uniform distribution where there are six equi-probable events. Two dice are the combination of two equi-probable events that gets another distribution. Two coins get a different distribution. A thousand coins get another distribution. So I suggest we try to use uniform distribution instead, or the proper name for the particular distribution, and say why this distribution is relevant. And if it comes down to one of just two outcomes, explain why it is relevant, especially as it relates to natural selection or other natural processes. Coin flips just obscure what is going on. After 8 years of them, no one seems to understand their relevance. Nor can they define CSI in any way that people understand. I think there is a correlation.
jerry
November 27, 2013 at 09:53 AM PDT
Why does homochirality need a solution? Half of a racemic mixture is available as a substrate.
Same reason as if you found a collection of coins all heads in some locality. There is a pool of millions of "racemic" coins out there. It is a problem; otherwise we wouldn't have teams of OOL researchers trying to find mindless solutions to the problem. Not to mention, even if they did find initial conditions to create homochirality, thermal and quantum noise will dissipate homochirality over time, just like shaking a table of coins that initially start out all heads. It is a serious problem for the Blind Watchmaker hypothesis. And now another quotation from Design Inference, page 50:
If however multiple chance hypotheses could be responsible for E...
We can consider multiple chance hypotheses. For the sake of completeness, 1 of the 20 common amino acids in life is not chiral (neither left nor right). I seem to recall there might be one amino acid that may not naturally have a 50/50 chance of L and D forms. The point is, like coins, we can empirically and theoretically determine an approximate probability. We can even be generous and say the ratios are 60% favorable on average to the L state. Even then, the binomial distribution will reject homochirality as the result of chance from a pre-biotic soup. The only explanation for homochirality in the present day is the robots we call cells, but then that raises the question: who made the robot?
scordova
November 27, 2013 at 09:51 AM PDT
Why does homochirality need a solution? Half of a racemic mixture is available as a substrate.
Alan Fox
November 27, 2013 at 09:15 AM PDT
Related comment – And please no one use coin flips or dice rolls to illustrate CSI. I have never seen the relevance
It relates well to the homochirality argument and any time we see duplicates of things in biology (a bacterial colony evidences high duplication of the ancestor bacterium). Coin analogies have similar if not identical distributions to these questions. Coins are textbook examples of how to illustrate the relevant distributions. The robot is analogous to the copy machines in living cells. I chose the coins and robots to clarify the issues at hand. If we can't solve the paradoxes for coins and robots, we aren't going to solve them for homochirality and self-replicating cells, since the same statistics are in play.
scordova
November 27, 2013 at 09:10 AM PDT
Recent exchanges with Kirk Durston don't raise a mention of CSI. One hit in a comment of mine and the full phrase by someone else. Durston doesn't use it at all.
Alan Fox
November 27, 2013 at 08:51 AM PDT
Oops, missed blockquotes. First paragraph is gpuccio.
Alan Fox
November 27, 2013 at 08:36 AM PDT
Yes, I do want to go through it again. If you have arguments, please detail them, and show why Durston has not measured CSI in 35 protein families. Or just admit you were wrong. If you like. Well, my main issue with your claim is that nowhere does Durston claim to calculate the CSI of anything. The term he uses, which originated with Leslie Orgel and was developed by Robert Hazen, is "Functional Sequence Complexity".
Alan Fox
November 27, 2013 at 08:35 AM PDT
The word "definition" has finally appeared on this thread. Does anyone not see the irony of this thread as we try to understand CSI? There have probably been more than 10,000 comments on probably over a hundred previous threads trying to understand this concept. There is no layman's definition, mainly because there is no definition of the word "specified." It seems there are long discussions on this blog about concepts where the people discussing them do not agree on a common definition for the concept being discussed. I believe "complex" can be adequately defined for the lay person, and so can "information." But "specified," for such a common word, seems to be left out of the common understanding. I am aware that the word "information" can have nearly hundreds of definitions, but the average lay person will not have a hard time understanding how it is being used in a biological framework. Those who disagree should contact those who do bioinformatics. http://en.wikipedia.org/wiki/Bioinformatics Related comment: and please, no one use coin flips or dice rolls to illustrate CSI. I have never seen the relevance. Probability distributions are fair game. And as far as probability is concerned, can there be a probability distribution for something that has never happened, at least according to our current knowledge? Natural selection has never produced any useful biological information in terms of the evolution debate. What would such a probability distribution look like that considers it as a cause of something of consequence, when there are zero instances of such an event given there were a gazillion potential events where natural selection could operate?
jerry
November 27, 2013 at 08:33 AM PDT
Alan Fox @10: Ah, so Dawkins was using Weasel to demonstrate something un-Darwinian? Of course not; let's not pretend that he wasn't attempting to show how Darwinian processes operate. That was the whole point of his program. Yet his program has that little thing that is wholly un-Darwinian: that target phrase, that careful forcing of climbing up Mount Improbable. Subsequent evolutionary algorithms that generate anything of consequence, whether Avida or NASA's antenna, utilize the same approach: a guiding target phrase, a goal, a purpose-driven, ends-oriented process. Thoroughly un-Darwinian. I'm glad you acknowledge, though, that his program doesn't demonstrate how Darwinian evolution works. :)
Laughably untrue. Dawkins is on record as saying he didn’t even bother to keep his code because it wasn’t important.
And yet, here you are, defending Weasel. :)
It showed the power of selection against random draw.
Almost. But you are painting it in a generic light in order to avoid the context of his claim. His purpose was to show how a Darwinian process could produce something that random draw couldn't. So, yes, it showed the power of selection. But it is the power of selection when: (i) it is carefully coaxed toward a target phrase through intelligent design (thoroughly un-Darwinian), and (ii) there is a sequence of slight, successive intermediate steps leading from A to Z (thoroughly unproven in the case of biological structures).
The environment designs.
Yeah, sure. Let's confuse the conversation by saying that nature "designs." Great evolutionist talking point. What plan or purpose or thought or intention does nature have in mind when it designs? Look, just FYI, every time I use "design" in the context of the evolution/design debate it means "design" in the ordinary, dictionary definition of the word, not a twisted, forced definition (again, attempting to bring design in through the backdoor of natural processes) that can support materialism.
Eric Anderson
November 27, 2013 at 08:07 AM PDT
Sal @30:
We don’t ask, “what is the probability a Designer will make a protein from a pre-biotic soup” we ask, “what is the probability a protein will emerge from a random prebiotic soup”. For physical artifacts, the CSI score is based on the rejected mechanism (chance hypothesis, Shannon degrees of freedom) not the actual mechanism that created the object.
Exactly. ----- Winston @36:
That’s where specification comes in. Improbable events which are specified are rare. Improbable events themselves are not rare. In order to deem an event to[o] rare to plausibly happen [by chance] we need to show that it is specified and complex. That’s the use of specified complexity.
Exactly.
Eric Anderson
November 27, 2013 at 07:48 AM PDT
wd400:
So your major claim seems to be that there are "selectable intermediates", and CSI is of little relevance as no one in their right mind thinks proteins arose at random?
Is that addressed to me??? For anyone who can read, my "major claim" is that there are no "selectable intermediates", and that CSI is of fundamental relevance, both to evaluate the improbability of the whole sequence for RV, and (if and when selectable intermediates are shown) to evaluate the improbability of the role of RV before and after the expansion of the selectable intermediate. For your convenience, I paste the relevant phrases from my previous posts: "The NS part must be supported by facts, like all necessity mechanisms. It is not." "unfortunately, no selectable intermediates are known for basic protein domains, for the simple fact that there is no reason they should exist and because none was ever found." "In an old post I showed how selectable intermediates, if they existed, could help the process and how CSI allows us to quantify the RV part even in the presence of NS events." My English may not be so good, but I thought that was clear enough.
gpuccio
November 27, 2013 at 12:50 AM PDT
wd400: "no one in their right mind thinks proteins arose at random?" You are right.
coldcoffee
November 27, 2013 at 12:42 AM PDT
So your major claim seems to be that there are "selectable intermediates", and CSI is of little relevance as no one in their right mind thinks proteins arose at random?
wd400
November 27, 2013 at 12:27 AM PDT
wd400:
So, how much CSI can selection create?
The role of NS in the neo-darwinian model is not to "create CSI", but only to expand positive selections and fix them. That does not create CSI (the functional information, however complex, must already be in the sequence generated by RV). But it can certainly modify the probabilistic resources of the system, because CSI must be computed separately for each transition made by RV, and selectable intermediates, if they exist, can "fragment" the process into smaller sub-processes, as I have shown, with computations, in an old post. Again, the problem is that complex information is not deconstructable into simpler selectable intermediates. That's why the neo-darwinian model is based on a myth, and from a scientific point of view the whole transition to a new basic protein domain can only be explained by RV (which is ruled out by CSI) or by a design inference. Is that clear?
gpuccio
November 27, 2013 at 12:19 AM PDT
Alan Fox: I can go through anything you like. You challenged us to "Calculate the CSI of something" (#12). I answered that "Durston has calculated the CSI of 35 protein families" (#20). You objected "Not true. It's a different metric" (#23). I answered: "No, it isn't" (#24). wd400 then asked: "How did Durston calculate p(T|H)?" (#25) I answered: "Durston calculated the functional complexity of each of the 35 protein families by comparing the reduction in uncertainty given by the functional specification (being a member of the family), versus the random state. That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state. The target state is defined, in the functional subset of CSI, by the function. In this case, the function is very simply the function of the protein family. CSI is simply the complexity linked to the function. It's just as simple as that. The confusion is only created by the dogmatism of neo-darwinists who cannot accept the truth." (#34) wd400 has not answered that. Neither have you. Yes, I do want to go through it again. If you have arguments, please detail them, and show why Durston has not measured CSI in 35 protein families. Or just admit you were wrong. If you like.
gpuccio
November 27, 2013 at 12:11 AM PDT
(cos, at the moment it sounds like you think neo-darwinism is wrong, so you aren't including it in your calculations. In which case, you wouldn't need CSI!)
wd400
November 27, 2013 at 12:09 AM PDT
So, how much CSI can selection create?
wd400
November 27, 2013 at 12:07 AM PDT
wd400: Must we really go back to the basics? The "explanation" we talk of is, obviously, neo-darwinism. It is based on two sequential processes: RV and NS. The RV part is essential to the model. It has to be quantified. CSI allows us to quantify it. The NS part must be supported by facts, like all necessity mechanisms. It is not. In an old post I showed how selectable intermediates, if they existed, could help the process, and how CSI allows us to quantify the RV part even in the presence of NS events. But, unfortunately, no selectable intermediates are known for basic protein domains, for the simple fact that there is no reason they should exist and because none was ever found. Therefore, the neo-darwinian model is neither reasonable nor supported by any facts. On the contrary, the design inference is strongly and positively supported by the easily observed connection between CSI and conscious design. These are really the basics; I supposed you knew them.
gpuccio
November 27, 2013 at 12:01 AM PDT
Hmm, probably shouldn't comment from my phone. Comment 37 should say: Your two comments here seem contradictory. First you say CSI compares observed data with a random expectation, then you say CSI tests explanations for observed data. How does that work when the mechanism isn't random? How much CSI (as you defined it) can natural selection create? How do we know that?
wd400
November 26, 2013 at 11:50 PM PDT
That is exactly the improbability of getting a functional sequence by a random search.
Which is exactly irrelevant to evolutionary processes.
Alan Fox
November 26, 2013 at 11:42 PM PDT
@ gpuccio Yes, the pantomime season is upon us. Oh, no it isn't! Do you really want to go through this again?
Alan Fox
November 26, 2013 at 11:37 PM PDT
Sal, by "mechanisms in operation", I was referring to the natural laws that operate in a system. I'm not referring to the actual operations that produced the object. I'm in total agreement with what you've written there.
Winston Ewert
November 26, 2013 at 10:04 PM PDT