Uncommon Descent Serving The Intelligent Design Community

The Tragedy of Two CSIs


CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation.

CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts.

CSI, as used by others, is something more along the lines of the appearance of design. It’s typically along the same lines as the notion of complicated developed by Richard Dawkins in The Blind Watchmaker:

complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.

This is similar to Dembski’s formulation, but where Dawkins merely requires that the quality be unlikely to have been acquired by random chance, Dembski’s formula requires that the quality be unlikely to have been acquired by random chance and any other process, such as natural selection. The requirements of Dembski’s CSI are much more stringent than Dawkins’s complicated or the non-Dembski CSI.
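For readers who have not seen it, Dembski’s measure from his 2005 paper “Specification: The Pattern That Signifies Intelligence” makes this stringency explicit, since the probability P(T|H) is taken relative to the relevant chance hypothesis H, which is meant to include Darwinian and other material mechanisms rather than pure random draw:

```latex
\chi = -\log_2\left[ 10^{120} \cdot \varphi_S(T) \cdot P(T|H) \right]
```

Here φ_S(T) counts the patterns at least as simple as the target T, and the 10^120 factor bounds the probabilistic resources of the observable universe; χ > 1 is Dembski’s threshold for inferring design.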

Under Dembski’s formulation, we do not know whether or not biology contains specified complexity. As he said:

Does nature exhibit actual specified complexity? The jury is still out. – http://www.leaderu.com/offices/dembski/docs/bd-specified.html

The debate for Dembski is over whether or not nature exhibits specified complexity. But for the notion of complicated or non-Dembski CSI, biology is clearly complicated, and the debate is over whether or not Darwinian evolution can explain that complexity.

For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact. For non-Dembski formulations of specified complexity, the law of the conservation of information is a controversial claim.

These are two related but distinct concepts. We must not conflate them. I think that non-Dembski CSI is a useful concept. However, it is not the same thing as Dembski’s CSI. They differ on critical points. As such, I think it is incorrect to refer to any of these ideas as CSI or specified complexity. I think that only Dembski’s formulation, or variations thereof, should be termed CSI.

Perhaps the toothpaste is already out of the tube, and this confusion of the notion of specified complexity cannot be undone. But as it stands, we’ve got a situation where CSI is used to refer to two distinct concepts which should not be conflated. And that’s the tragedy.

Comments
Your two comments here seem inconsistent. First you say CSI compares observed data with a random expectation, then you say CSI tests explanations for observed data. How does that work? How much CSI (as you defined it) can natural selection create? How do we know that?wd400
November 26, 2013 at 10:02 PM PST
I’m sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that’s the case, what use is it?
But improbable outcomes happen all the time. Each snowflake is highly improbable. But it still snows. Every possible poker hand is highly improbable, yet we can still deal cards. In certain situations, a highly improbable outcome is highly probable. This can happen because there can be a large number of possible outcomes, each individually improbable, but when combined highly probable. That's where specification comes in. Improbable events which are specified are rare. Improbable events themselves are not rare. In order to deem an event too rare to plausibly happen, we need to show that it is specified and complex. That's the use of specified complexity.Winston Ewert
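The poker point above can be checked in a few lines; this is just an illustration of the arithmetic, not anything from the thread:

```python
from math import comb

total_hands = comb(52, 5)            # 2,598,960 possible 5-card hands
p_each = 1 / total_hands             # any *particular* hand is highly improbable
p_some_hand = total_hands * p_each   # yet dealing *some* hand is certain (probability 1)
```

Each hand has probability of roughly 3.8e-7, yet the improbable outcomes exhaust the space, so one of them always occurs; specification is what singles out a small subset (say, royal flushes) whose combined probability stays small.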
November 26, 2013 at 09:57 PM PST
wd400:
I’m sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that’s the case, what use is it?
The use, obviously, is in giving a metric to evaluate how improbable an explanation is. That's what science does.gpuccio
November 26, 2013 at 08:29 PM PST
wd400: Durston calculated the functional complexity of each of the 35 protein families by comparing the reduction in uncertainty given by the functional specification (being a member of the family) versus the random state. That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state. The target state is defined, in the functional subset of CSI, by the function. In this case, the function is very simply the function of the protein family. CSI is simply the complexity linked to the function. It's just as simple as that. The confusion is only created by the dogmatism of neo-Darwinists who cannot accept the truth.gpuccio
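The calculation gpuccio describes (per-site reduction in uncertainty from the random state to the aligned functional family) can be sketched roughly as follows; the uniform 20-amino-acid ground state and the toy alignment are my assumptions for illustration, not Durston's data:

```python
import math
from collections import Counter

# Uncertainty of one site under a uniform choice among 20 amino acids
GROUND_STATE_BITS = math.log2(20)

def site_entropy(column):
    """Shannon entropy (bits) of one column of a multiple sequence alignment."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment):
    """Sum over sites of the reduction in uncertainty from the random
    ground state to the state observed in the functional family."""
    return sum(GROUND_STATE_BITS - site_entropy(col) for col in zip(*alignment))
```

A fully conserved 3-residue family scores 3·log2(20), about 13 bits, while a column where all 20 residues appear equally often contributes nothing.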
November 26, 2013 at 08:08 PM PST
If the mechanism that created your distribution of chirality is not random, then a CSI calculation based on that assumption is useless.
Agreed. The assumed chance hypothesis can be falsified.scordova
November 26, 2013 at 06:30 PM PST
When we calculate the CSI in the homochirality of a protein, we presume the CSI score from the natural binomial distribution in evidence from chemistry for L and D amino-acids (just like fair coins obey a binomial distribution)
Which is precisely the problem, surely. If the mechanism that created your distribution of chirality is not random, then a CSI calculation based on that assumption is useless.wd400
November 26, 2013 at 06:22 PM PST
It argues that if evolution is an improbable account of life, we are justified in dismissing it.
I'm sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that's the case, what use is it?wd400
November 26, 2013 at 06:19 PM PST
CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation.
In the case of artifacts, we may not have access to the mechanism in operation; the mechanism is unknown. The EF was meant to adjudicate whether an artifact was designed independent of the mechanism that facilitated its creation. In the case of human designers, it doesn't make sense to ask what the probability is that a human designer will make Mt. Rushmore. The a priori probability that the true mechanism will actually do a task (considering its abilities, "willingness," or programming) should not figure into the CSI score of a physical artifact. We don't ask, "what is the probability a Designer will make a protein from a pre-biotic soup"; we ask, "what is the probability a protein will emerge from a random prebiotic soup". For physical artifacts, the CSI score is based on the rejected mechanism (chance hypothesis, Shannon degrees of freedom), not the actual mechanism that created the object. When we calculate the CSI in the homochirality of a protein, we presume the CSI score from the natural binomial distribution in evidence from chemistry for L and D amino-acids (just like fair coins obey a binomial distribution). We don't base the CSI score on the probability that the intelligent designer (the true mechanism) created life.scordova
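The fair-coin analogy for homochirality can be made concrete; this sketch simply scores the improbability, in bits, of an all-L chain under an assumed 50/50 L/D chance hypothesis (the chain length in the example is illustrative):

```python
import math

def homochirality_bits(n_residues, p_L=0.5):
    """Improbability, in bits, of an all-L peptide of length n_residues
    under the assumed 50/50 L/D chance hypothesis -- the same arithmetic
    as n_residues fair coins all landing heads."""
    p_all_L = p_L ** n_residues
    return -math.log2(p_all_L)
```

A 100-residue all-L chain scores 100 bits under this chance hypothesis; if the real chemistry is not a 50/50 draw, the score changes accordingly, which is exactly the point under dispute in this thread.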
November 26, 2013 at 06:17 PM PST
Alan Fox, You've asked for a CSI calculation. Dembski's CSI is not about determining the probabilities, but about the consequence of those probabilities. It argues that if evolution is an improbable account of life, we are justified in dismissing it. It provides absolutely nothing to attempt to establish that life is, in fact, improbable under Darwinian mechanisms. However, almost every argument put forward by intelligent design, whether irreducible complexity, protein folding, no free lunch, etc., seeks to establish that the probability is very low. We will point to those arguments in order to establish that the probability of life is low. We will argue that those arguments show that the probability of life is far too low to accept Darwinism as an account for it.Winston Ewert
November 26, 2013 at 06:05 PM PST
Mung wrote: whether the coins are “fair” or not is irrelevant.
Example of why Mung is a waste of time, and why I toss him from my discussions.scordova
November 26, 2013 at 05:43 PM PST
My post in that thread before Sal modified its content:
Salvador:
The coins were fair, they just happen to all show heads. The coins weren’t flipped, they are found that way in some box or on the floor.
If the coins were not flipped, whether the coins are "fair" or not is irrelevant. So what does this have to do with CSI, if anything? For those who manage to read this before Salvador turns it into something I did not write, Shannon information is based upon probabilities. Is CSI any different? If a coin is not flipped, how do you calculate the probabilities? What's "fair" got to do with it?
Winston Ewert:
CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation.
This is the same objection I made to Sal's nonsense about CSI. But according to Salvador my posts are "off topic" and I am a "troll."Mung
November 26, 2013 at 05:40 PM PST
CSI has come to refer to two distinct and incompatible concepts. This has lead to no end of confusion and flawed argumentation.
Exhibit AMung
November 26, 2013 at 05:31 PM PST
gpuccio, How did Durston calculate p(T|H)?wd400
November 26, 2013 at 03:32 PM PST
Alan Fox: No, it isn't.gpuccio
November 26, 2013 at 01:55 PM PST
@ gpuccio Not true. It's a different metric.Alan Fox
November 26, 2013 at 01:34 PM PST
I think Pastor Joe Boot, although he is talking about the universe as a whole in the following quote, illustrates very well the insurmountable problem that 'context dependency' places on reductive materialism:
"If you have no God, then you have no design plan for the universe. You have no prexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is flux. It's just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There's no design plan. It's like my kids do 'join the dots' puzzles. It's just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheists is without that (final picture). There is no preestablished pattern (to connect the facts given atheism)." Pastor Joe Boot - 13:20 minute mark of the following video Defending the Christian Faith – Pastor Joe Boot – video http://www.youtube.com/watch?v=wqE5_ZOAnKo
Supplemental quote:
‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ - Stephen Meyer - (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate - 2009) Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681
bornagain77
November 26, 2013 at 01:19 PM PST
Mr. Fox, despite your, and other Darwinists', stubborn reluctance to admit to the abject failure inherent in the "Weasel" project for providing any support whatsoever for Darwinian claims, I am grateful for what Dawkins' "Weasel" project has personally taught novices like me. Because of the simplicity of the program and the rather modest result, "Methinks it is like a weasel", that the program was trying to achieve, it taught me in fairly short order, in an easy to understand way, that,,
"Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information." - William Dembski
In fact so effective was Dawkins' "Weasel" project at teaching me this basic, 'brick wall', limitation on material processes to create even trivial levels of functional information, that I highly recommend Wiker & Witt's book "A Meaningful World", in which they show, using the "Methinks it is like a weasel" phrase that Dawkins used from Shakespeare's Hamlet, that the problem is much worse for Darwinists than just finding the "Methinks it is like a weasel" phrase by a blind search, since the "Methinks it is like a weasel" phrase makes no sense at all unless the entire play of Hamlet is taken into consideration so as to give the "Weasel" phrase context. Moreover, the context in which the phrase derives its meaning comes from several different levels of the play. i.e. The ENTIRE play provides meaning for the individual "Weasel" phrase.
A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature - Book Review Excerpt: They focus instead on what "Methinks it is like a weasel" really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the "it" refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. http://www.thinkingchristian.net/C228303755/E20060821202417/
In fact it is interesting to note what the overall context is for "Methinks it is like a weasel" that is used in the Hamlet play. The context in which the phrase is used is to illustrate the spineless nature of one of the characters of the play. To illustrate how easily the spineless character can be led to say anything that Hamlet wants him to say:
Ham. Do you see yonder cloud that ’s almost in shape of a camel? Pol. By the mass, and ’t is like a camel, indeed. Ham. Methinks it is like a weasel. Pol. It is backed like a weasel. Ham. Or like a whale? Pol. Very like a whale. http://www.bartleby.com/100/138.32.147.html
After realizing what the context of 'Methinks it is like a weasel' actually was, I remember thinking to myself that it was perhaps the worst possible phrase Dawkins could have chosen to try to illustrate his point, since the phrase, when taken in context, actually illustrates that the person saying it was easily deceived and manipulated into saying the phrase by another person. Which I am sure is hardly the idea, i.e. deception and manipulation by a person to get a desired phrase, that Dawkins was trying to convey with his 'Weasel' example. But is this context dependency that is found in literature also found in life? Yes! Starting at the amino acids of proteins we find context dependency:
Fred Sanger, Protein Sequences and Evolution Versus Science - Are Proteins Random? Cornelius Hunter - November 2013 Excerpt: Standard tests of randomness show that English text, and protein sequences, are not random.,,, http://darwins-god.blogspot.com/2013/11/fred-sanger-protein-sequences-and.html (A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics – 2012 Excerpt (Page 4): The Probabilities Get Worse This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf
Moreover, context dependency is found on at least three different levels of the protein structure:
"Why Proteins Aren't Easily Recombined, Part 2" - Ann Gauger - May 2012 Excerpt: "So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required." http://www.biologicinstitute.org/post/23170843182/why-proteins-arent-easily-recombined-part-2
Moreover, it is interesting to note that many (most?) proteins are now found to be multifunctional depending on the overall context (i.e. position in cell, cell type, tissue type, etc..) that the protein happens to be involved in. Thus, the sheer brick wall that Darwinian processes face in finding ANY novel functional protein to perform any specific single task in a cell in the first place (Axe; Sauer) is only exponentially exasperated by the fact that many proteins are multifunctional and, serendipitously, perform several different 'context dependent' functions within the cell:
Human Genes: Alternative Splicing (For Proteins) Far More Common Than Thought: Excerpt: two different forms of the same protein, known as isoforms, can have different, even completely opposite functions. For example, one protein may activate cell death pathways while its close relative promotes cell survival. http://www.sciencedaily.com/releases/2008/11/081102134623.htm Genes Code For Many Layers of Information - They May Have Just Discovered Another - Cornelius Hunter - January 21, 2013 Excerpt: “protein multifunctionality is more the rule than the exception.” In fact, “Perhaps all proteins perform many different functions by employing as many different mechanisms." http://www.fasebj.org/content/23/7/2022.full
Context dependency, and the problem it presents for 'bottom up' Darwinian evolution is perhaps most dramatically illustrated by the following examples in which 'form' dictates how the parts are used:
An Electric Face: A Rendering Worth a Thousand Falsifications - Cornelius Hunter - September 2011 Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peak at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.” https://www.youtube.com/watch?v=wi1Qn306IUU What Do Organisms Mean? Stephen L. Talbott – Winter 2011 Excerpt: Harvard biologist Richard Lewontin once described how you can excise the developing limb bud from an amphibian embryo, shake the cells loose from each other, allow them to reaggregate into a random lump, and then replace the lump in the embryo. A normal leg develops. Somehow the form of the limb as a whole is the ruling factor, redefining the parts according to the larger pattern. Lewontin went on to remark: “Unlike a machine whose totality is created by the juxtaposition of bits and pieces with different functions and properties, the bits and pieces of a developing organism seem to come into existence as a consequence of their spatial position at critical moments in the embryo’s development. Such an object is less like a machine than it is like a language whose elements … take unique meaning from their context.[3]“,,, http://www.thenewatlantis.com/publications/what-do-organisms-mean
bornagain77
November 26, 2013 at 01:18 PM PST
Alan Fox: Durston has calculated the CSI of 35 protein families. Is that cogent enough?gpuccio
November 26, 2013 at 12:54 PM PST
Oops: status of minor god.Alan Fox
November 26, 2013 at 12:41 PM PST
By the way it is possible to calculate the complexity of a rock and a strand of DNA and a protein.
A demonstration would elevate you to the status on minor god. Go for it. Show me how to calculate the complexity of a rock. Do I get to pick which one?Alan Fox
November 26, 2013 at 12:40 PM PST
Yes, on the fact that information can be created and it has always been trivial. If I am not correct on both please provide a counter example.
You are assuredly correct, Jerry. Man, you are on a roll. (PM Ras)Alan Fox
November 26, 2013 at 12:38 PM PST
Jerry is again correct.
Yes, on the fact that information can be created and it has always been trivial. If I am not correct on both please provide a counter example. By the way it is possible to calculate the complexity of a rock and a strand of DNA and a protein.jerry
November 26, 2013 at 12:35 PM PST
There are various aspects of the design issues:
1. the probability an algorithm or information processing mechanism can generate new concepts
2. the probability designed physical artifacts can be synthesized by random processes
3. situations where both probabilities are in play

I don't think there is much disagreement in the ID community about #1. An evolutionary computation, a biological "computation", is fundamentally limited in the class of new concepts (platonic forms, ideas, etc.) that it can generate that match what we humans subjectively perceive as designed. This is where No-Free-Lunch is blatantly obvious. When we speak of an algorithm being unable to spontaneously increase its algorithmic information, I don't think there is much dispute about that. At best, all the algorithm can do is make a variety of representations of what is already inside it.

For example, suppose we have an algorithm to make a variety of rectangles by stating Euclidean X,Y coordinates of the corners. Here are some example outputs:

0,0 and 4,4
-1,-1 and 3.14,3.14

This can be done with a random number generator: we simply take two random numbers and duplicate them. The random input is constrained by the program; it will never generate more algorithmic information than the concept of rectangles (or pairs of identical numbers). It will not describe space shuttles. We might try to mutate the computer code randomly, and all we'll get is a mess. There is no real increase in conceptual information; the only variety proceeds from the random inputs, but this does not add new insights, it does not add new platonic concepts. There is no free lunch.

I don't think there is much disagreement about the limits of evolutionary computation to create fundamentally new conceptions (specifications) beyond what was front loaded into it, either explicitly or implicitly (implicit is usually the case).
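The rectangle generator described above can be written out; this is a minimal sketch of that description, with arbitrary coordinate ranges:

```python
import random

def random_rectangle():
    """Take two random numbers and duplicate them, as described above.
    The outputs vary endlessly, but every output is an instance of the
    one front-loaded concept: a pair of duplicated coordinates."""
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    return (a, a), (b, b)   # e.g. corners (0, 0) and (4, 4)
```

No run of this program, however long, emits anything outside that concept; all the variety comes from the random inputs.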
It's when we get out of the realm of conceptual information increase to the transfer of conceptual information into a physical representation that we run into issues. In the case of the robot, let us say all it knows how to do is make coins show heads. It can never self-evolve new concepts. Randomly mutating the robot will likely result in robot malfunction or failure, not an increase in new conceptual abilities. The NFL theorems clearly work well in the case of the robot's algorithmic information. If we can agree at least about the robot's inability to create new specifications beyond those it was front loaded with, then we have at least one thing we can agree on.scordova
November 26, 2013 at 12:33 PM PST
I believe the "C" only has reference to the "I." So it is the "I" that is being assessed as far as complexity. The "S" is what differentiates one event from another event. Without the "S" the concept would have no meaning. A rock sitting in the middle of a river bed contains information, namely the arrangement of the molecules that compose it, but no one would say it specifies or is specified, though it surely can be very complex. "Specifies" is necessary because there must be two independent entities, and one specifies the other or is specified by the other. The "F" is added to limit the events to those where the specified entity has a recognizable function. FCSI is a subset which is easily understood because of the specified function.jerry
November 26, 2013 at 12:31 PM PST
PS @ Eric Anderson You may infer from my previous comment n° 11 that CSI is unquantifiable, meaningless, useless. Is that cogent enough?Alan Fox
November 26, 2013 at 12:24 PM PST
CSI is an incredibly simple concept. I have yet to hear any cogent criticism of the concept.
OK then. Calculate the CSI of something. Please show your working.Alan Fox
November 26, 2013 at 12:21 PM PST
To say that natural forces do not create information is a dead end argument. Of course natural forces create information once original information is available.
Jerry is again correct. The environment is the designing element in evolutionary processes. Of course, Creationists should direct their fire to Origin-of-Life theories where the science is far from settled. But nobody listens to me. ;)Alan Fox
November 26, 2013 at 12:19 PM PST
“Cumulative selection” as you call it, is after all, precisely what Darwinian evolution is supposed to provide. It is quite clear that Dawkins was trying to demonstrate the “power of cumulative selection [read Darwinian evolution].”
Eric, have a look at Wikipedia as you appear to have only absorbed Creationist propaganda on the subject.
Look, it shouldn't be that hard for people to say, “Sorry, bad example.” Instead, Dawkins lovers continue to defend Weasel tooth and nail.
Laughably untrue. Dawkins is on record as saying he didn't even bother to keep his code because it wasn't important. Creationists got their teeth into Weasel 30 years or more after it appeared in "Blind Watchmaker". I note they have been much less critical of later more sophisticated programs such as bio-morphs and those that generated sea shells and spider webs.
It was wrong. It didn’t demonstrate what he thought it did.* He was called on it, and rightly so. Let’s stop trying to defend the indefensible or rewrite history.
It did all it was ever meant to do. It showed the power of selection against random draw.
Ironically, instead it showed how you can sneak design in through the back door, as evolutionists are so often wont to do and as virtually every subsequent “evolutionary algorithm” that performs anything interesting does.
Who is disputing that design happens? The environment designs. Breeders design.Alan Fox
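For reference, the disputed "Weasel" scheme is easy to reconstruct from the description in The Blind Watchmaker; since Dawkins's original code was not kept, the mutation rate and population size here are guesses, and retaining the parent each generation is my own choice to guarantee termination:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    """Copy a string, with a small per-character chance of a random error."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

def matches(candidate):
    """Count positions agreeing with the target phrase."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(offspring=100, seed=1):
    """Cumulative selection: each generation keeps the closest copy."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        children = [mutate(parent) for _ in range(offspring)]
        parent = max(children + [parent], key=matches)
        generations += 1
    return generations
```

Cumulative selection typically locks in the phrase within a few hundred generations, whereas a single-step random draw of all 28 characters has probability 27^-28. That contrast between cumulative selection and single-step random draw is all the program demonstrates, which is roughly what both sides above are saying.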
November 26, 2013 at 12:16 PM PST
EA @ 6,8 Bravo :)Optimus
November 26, 2013 at 12:12 PM PST
I fear this discussion may be generating confusion, rather than light. In order to help remedy the situation, I want to lay out, if I may, the crux of the matter.

Known Mechanism?

The whole point of CSI, as Dembski proposed it, was to identify the likely provenance of an artifact in those cases in which the actual origin, meaning the actual mechanism that produced the artifact, is unknown. Further, if the particular mechanism that produced an artifact is known, then we never invoke the concept of CSI, because we already know the provenance. The entire concept is useful precisely in those instances in which the actual, historical source or mechanism is unknown. So it is certainly not the case that we calculate CSI only in those cases in which the mechanism that produced the artifact is known. Quite the opposite is true. The only time we use the concept to try and infer the best explanation for the origin of the artifact is when the actual origin is unknown.

Pro-forma Mechanism

Now, we could say that we calculate CSI with respect to various competing "pro-forma mechanisms" or "potential mechanisms" or "proposed mechanisms," etc. That is perfectly fine. And in those cases the mechanisms are broad in nature: chance, necessity, design. And of those three, the only one with respect to which it makes sense to do any calculation is chance, because necessity already carries a probability of 1 and design is not amenable to a probability calculation (or, per another viewpoint, could also be viewed as 1). So as a result, we always calculate 'C' with respect to chance, and typically that is adequately accomplished through the simplest known parameters: nucleotides interacting naturally to form a chain of DNA, amino acids interacting to form a protein chain, etc. Thus, in virtually all cases, we are calculating the 'C' of CSI with respect to a hypothetical or a pro-forma chance scenario. And we do so irrespective of whether we know that chance is the actual mechanism or not.
Problems with CSI?

CSI is an incredibly simple concept. I have yet to hear any cogent criticism of the concept. Are there interesting corner cases, like Sal's self-reproducing cells? Sure. But in essentially all those cases we are dealing with semantics and can easily resolve the imagined problems with CSI by stating the particular case with more clarity.

Problems do arise when we start to think that we can calculate CSI with some kind of mathematical precision that will be the final definitive demonstration of CSI's existence or non-existence in a particular case. We don't and can't calculate CSI, per se. We calculate 'C'. The 'S' and the 'I' are not amenable to simple mathematical reduction. They are concepts that depend on experience, logic, meaning, context, understanding, etc. Are those concepts challenging in their own right at times? Certainly. But our inability to precisely calculate them in no way invalidates or diminishes the importance of CSI as a tool for helping us arrive at an inference to the best explanation precisely in those cases in which the origin of a particular artifact is unknown. That is the whole point of CSI, and it is remarkably effective at carrying the weight of that burden, if we keep our eye on the ball.Eric Anderson
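The point above, that only the 'C' is calculated, and always against a pro-forma chance hypothesis, can be put in one line; the uniform-chance model here is the "simplest known parameters" case mentioned, and the function itself is purely illustrative:

```python
import math

def complexity_bits(sequence, alphabet_size):
    """The 'C' of CSI under a pro-forma uniform-chance hypothesis:
    -log2 of the probability of this exact sequence arising by chance."""
    p_chance = (1.0 / alphabet_size) ** len(sequence)
    return -math.log2(p_chance)
```

A 4-nucleotide DNA string scores 8 bits (2 bits per position); the 'S' and 'I' judgments, as the comment stresses, are not captured by any such formula.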
November 26, 2013 at 11:41 AM PST