
Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information


It would be very nice if there was a magic scanner that automatically gave you a readout of the total amount of complex specified information (CSI) in a system when you pointed it at that system, wouldn’t it? Of course, you’d want one that could calculate the CSI of any complex system – be it a bacterial flagellum, an ATP synthase enzyme, a Bach fugue, or the faces on Mt. Rushmore – by following some general algorithm. It would make CSI so much more scientifically rigorous, wouldn’t it? Or would it?

This essay is intended as a follow-up to the recent thread, On the calculation of CSI by Mathgrrl. It is meant to address some concerns about whether CSI is sufficiently objective to qualify as a bona fide scientific concept.

But first, some definitions. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define complex specified information (or CSI) as follows (p. 311):

Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

Dembski and Wells then define specified complexity on page 320 as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY).

In this post, I’m going to examine seven demands which Intelligent Design critics have made with regard to complex specified information (CSI):

(i) that it should be calculable not only in theory but also in practice, for real-life systems;
(ii) that for an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system;
(iii) that it should be calculable by independent agents, in a consistent manner;
(iv) that it should be knowable with absolute certainty;
(v) that it should be precisely calculable (within reason) by independent agents;
(vi) that it should be readily computable, given a physical description of the system;
(vii) that it should be computable by some general algorithm that can be applied to an arbitrary system.

I shall argue that the first three demands are reasonable and have been met in at least some real-life biological cases, while the last four are not.

Now let’s look at each of the seven demands in turn.

(i) CSI should be calculable not only in theory but also in practice, for real-life systems

This is surely a reasonable request. After all, Professor William Dembski describes CSI as a number in his writings, and even provides a mathematical formula for calculating it.

On page 34 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski writes:

In my present treatment, specified complexity … is now … an actual number calculated by a precise formula (i.e., Chi=-log2[10^120.Phi_s(T).P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification. (Emphases mine – VJT.)

The reader will recall that according to the definition given in The Design of Life (The Foundation for Thought and Ethics, Dallas, 2008), on page 311, specified complexity is synonymous with complex specified information (CSI).

On page 24 of his essay, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

On page 17, Dembski defines Phi_s(T) as the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

P(T|H) is defined throughout the essay as a probability: the probability of a pattern T with respect to the chance hypothesis H.

During the past couple of days, I've been struggling to formulate a good definition of "chance hypothesis", because for some people, "chance" means "totally random", while for others it means "not directed by an intelligent agent possessing foresight of long-term results" and hence "blind" (even if law-governed), as far as long-term results are concerned. Professor Dembski is quite clear in his essay that he means to include Darwinian processes (which are not totally random, because natural selection implies non-random death) under the umbrella of "chance hypotheses". So here's how I envisage it. A chance hypothesis describes a process which does not require the input of information, either at the beginning of the process or during the process itself, in order to generate its result (in this case, a complex system). On this definition, Darwinian processes would qualify as chance hypotheses, because they claim to be able to grow information without the need for input from outside – whether from a front-loading or a tinkering Designer of life.
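For readers who like to see a formula as code, here is a minimal sketch in Python of the calculation just described. The helper for Phi_s(T) simply raises an assumed lexicon of 10^5 basic concepts to the number of words in the semiotic description, as in the worked examples that follow; the rest is a direct transcription of Chi=-log2[10^120.Phi_s(T).P(T|H)], and the output is only as good as the chance hypothesis behind P(T|H).

```python
from math import log2

def phi_s(description_words, lexicon_size=10**5):
    """Rough stand-in for Phi_s(T): an assumed lexicon of basic concepts,
    raised to the number of words in the semiotic description of T."""
    return lexicon_size ** description_words

def chi(phi, p_t_given_h):
    """Dembski's specified complexity: Chi = -log2(10^120 * Phi_s(T) * P(T|H)).
    A value greater than 1 counts as a specification."""
    return -log2(10**120 * phi * p_t_given_h)

# Illustration: a four-word pattern whose chance probability is 10^-200
print(round(chi(phi_s(4), 1e-200)))   # 199 bits, i.e. a specification
```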

CSI has already been calculated for some quite large real-life biological systems. In a post on the recent thread, On the calculation of CSI, I calculated the CSI in a bacterial flagellum, using a naive provisional estimate of the probability P(T|H). The numeric value of the CSI was calculated as being somewhere between 2126 and 3422. Since this is far in excess of 1, the cutoff point for a specification, I argued that the bacterial flagellum was very likely designed. Of course, a critic could fault the naive provisional estimate I used for the probability P(T|H). But my point was that the calculated CSI was so much greater than the minimum value needed to warrant a design inference that it was incumbent on the critic to provide an argument as to why the calculated CSI should be less than or equal to 1.

In a later post on the same thread, I provided Mathgrrl with the numbers she needed to calculate the CSI of another irreducibly complex biological system: ATP synthase. As far as I am aware, Mathgrrl has not taken up my (trivially easy) challenge to complete the calculation, so I shall now do it for the benefit of my readers. The CSI of ATP synthase can be calculated as follows. The shortest semiotic description of the specific function of this molecule is "stator joining two electric motors", which is five words. If we imagine (following Dembski) that we have a dictionary of basic concepts, and assume (generously) that there are no more than 10^5 (=100,000) entries in this dictionary, then the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T is (10^5)^5, or 10^25. This is Phi_s(T). I then quoted a scientifically respectable source (see page 236) which estimated the probability of ATP synthase forming by chance, under the most favorable circumstances (i.e. with a genetic code available), at 1 in 1.28×10^266. This is P(T|H). Thus:

Chi=-log2[10^120.Phi_s(T).P(T|H)]
=-log2[(10^145)/(1.28×10^266)]
=-log2[1/(1.28×10^121)]
=log2[1.28×10^121]
=log2[1.28×(2^3.321928)^121]
=log2[1.28×2^402],

or about 402, to the nearest whole number. Thus for ATP synthase, the CSI Chi is 402. Since 402 is far greater than 1, the cutoff point for a specification, we can safely conclude that ATP synthase was designed by an intelligent agent.

[Note: Someone might be inclined to argue that conceivably, other biological structures might perform the same function as ATP synthase, and we’d have to calculate their probabilities of arising by chance too, in order to get a proper figure for P(T|H) if T is the pattern “stator joining two electric motors.” In reply: any other structures with the same function would have a lot more components – and hence be much more improbable on a chance hypothesis – than ATP synthase, which is a marvel of engineering efficiency. See here and here. As ATP synthase is the smallest biological molecule – and hence most probable, chemically speaking – that can do the job that it does, we can safely ignore the probability of any other more complex biological structures arising with the same functionality, as negligible in comparison.]
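The ATP synthase figure can be checked in a few lines; the only inputs are the five-word description and the 1 in 1.28×10^266 probability estimate quoted above.

```python
from math import log2

phi_s = (10**5) ** 5            # "stator joining two electric motors": five words
p_t_given_h = 1 / 1.28e266      # quoted chance of ATP synthase forming, given a genetic code
chi_atp = -log2(10**120 * phi_s * p_t_given_h)
print(round(chi_atp))           # ~402, far above the cutoff of 1
```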

Finally, in another post on the same thread, I attempted to calculate the CSI in a 128×128 Smiley face found on a piece of rock on a strange planet. I made certain simplifying assumptions about the eyes on the Smiley face, and the shape of the smile. I also assumed that every piece of rock on the planet was composed of mineral grains in only two colors (black and white). The point was that these CSI calculations, although tedious, could be performed on a variety of real-life examples, both organic and inorganic.

Does this mean that we should be able to calculate the CSI of any complex system? In theory, yes; however, in practice it may be very hard to calculate P(T|H) for some systems. Nevertheless, it should be possible to calculate a provisional upper bound for P(T|H), based on what scientists currently know about chemical and biological processes.

(ii) For an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system.

This is an essential requirement for any meaningful discussion of CSI. What it means in practice is that if a team of aliens were to visit our planet after a calamity had wiped out human beings, they should be able to conclude, upon seeing Mt. Rushmore, that intelligent beings had once lived here. Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. I’m going to show in some detail how this could be done in these two cases, in order to convince the CSI skeptics.

Aliens visiting Earth after a calamity had wiped out human beings would not need to have a detailed knowledge of Earth history to arrive at the conclusion that Mt. Rushmore was designed by intelligent agents. A ballpark estimate of the Earth's age and a basic general knowledge of Earth's geological processes would suffice. Given this general knowledge, the aliens should be able to roughly calculate the probability of natural processes (such as wind and water erosion) being able to carve features such as a flat forehead, two eyebrows, two eyes with lids as well as an iris and a pupil, a nose with two nostrils, two cheeks, a mouth with two lips, and a lower jaw, at a single location on Earth, over 4.54 billion years of Earth history. In order to formulate a probability estimate for a human face arising by natural processes, the alien scientists would have to resort to decomposition.

Assuming for argument's sake that something looking vaguely like a flat forehead would almost certainly arise naturally at any given location on Earth at some point during its history, the alien scientists would then have to calculate the probability that over a period of 4.54 billion years, each of the remaining facial features was carved naturally at the same location on Earth, in the correct order and position for a human face. That is, assuming the existence of a forehead-shaped natural feature, scientists would have to calculate the probability (over a 4.54 billion year period) that two eyebrows would be carved by natural processes, just below the forehead, as well as two eyes below the eyebrows, a nose below the eyes, two cheeks on either side of the nose, a mouth with two lips below the nose, and a jawline at the bottom, making what we would recognize as a face. The proportions would also have to be correct, of course. Since this probability is order-specific (as the facial features all have to appear in the right place), we can calculate it as a simple product – no combinatorics here.

To illustrate the point, I'll plug in some estimates that sound intuitively right to me, given my limited background knowledge of geological processes occurring over the past 4.54 billion years: 1*(10^-1)*(10^-1)*(10^-10)*(10^-10)*(10^-6)*(10^-1)*(10^-1)*(10^-4)*(10^-2), for the forehead, two eyebrows, two eyes, nose, cheeks, mouth and jawline respectively, giving a product of 10^(-36) – a very low number indeed. Raising that probability to the fourth power – giving a figure of 10^(-144) – would enable the alien scientists to calculate the probability of four faces being carved at a single location by chance, or P(T|H).

The alien scientists would then have to multiply this number (10^(-144)) by their estimate for Phi_s(T), or the number of patterns for which a speaker S's semiotic description of them is at least as simple as S's semiotic description of T. But how would the alien scientists describe the patterns they had found? If the aliens happened to find some dead people or dig up some human skeletons, they would be able to identify the creatures shown in the carvings on Mt. Rushmore as humans. However, unless they happened to find a book about American Presidents, they would not know who the faces were. Hence the aliens would probably formulate a modest semiotic description of the pattern they observed on Mt. Rushmore: four human faces.
A very generous estimate for Phi_s(T) is 10^15, as the description “four human faces” has three words (I’m assuming here that the aliens’ lexicon has no more than 10^5 basic words), and (10^5)^3=10^15. Thus the product Phi_s(T).P(T|H) is (10^15)*(10^(-144)) or 10^(-129). Finally, after multiplying the product Phi_s(T).P(T|H) by 10^120 (the maximum number of bit operations that could have taken place within the entire observable universe during its history, as calculated by Seth Lloyd), taking the log to base 2 of this figure and multiplying by -1, the alien scientists would then be able to derive a very conservative minimum value for the specified complexity Chi of the four human faces on Mt. Rushmore, without knowing anything specific about the Earth’s history. (I say “conservative” because the multiplier 10^120 is absurdly large, given that we are only talking about events occurring on Earth, rather than the entire universe.) In our worked example, the conservative minimum value for the specified complexity Chi would be -log2(10^(-9)), or approximately -log2(2^(-30))=30. Since the calculated specified complexity value of 30 is much greater than the cutoff level of 1 for a specification, the aliens could be certain beyond reasonable doubt that Mt. Rushmore was designed by an intelligent agent. They might surmise that this intelligent agent was a human agent, as the faces depicted are all human, but they could not be sure of this fact, without knowing the history of Mt. Rushmore.
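The Mt. Rushmore arithmetic above can also be checked with a few lines; the per-feature probabilities below are the illustrative guesses from the previous paragraph, not measured values.

```python
from math import log2

# Guessed probabilities for forehead, two eyebrows, two eyes, nose,
# two cheeks, mouth and jawline forming naturally at one location
feature_probs = [1, 1e-1, 1e-1, 1e-10, 1e-10, 1e-6, 1e-1, 1e-1, 1e-4, 1e-2]
p_one_face = 1.0
for p in feature_probs:
    p_one_face *= p                      # about 10^-36 for one face
p_four_faces = p_one_face ** 4           # about 10^-144 for four faces at a single site
phi_s = (10**5) ** 3                     # "four human faces": three words
chi_rushmore = -log2(10**120 * phi_s * p_four_faces)
print(round(chi_rushmore))               # ~30, comfortably above the cutoff of 1
```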

Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. Even if they were unable to figure out the purpose of the monolith, the astronauts would still realize that the likelihood of natural processes on the moon being able to generate a black cuboid figure with perfectly flat faces, whose sides' lengths were in the ratio of 1:4:9, is very low indeed. To begin with, the astronauts might suppose that at some stage in the past, volcanic processes on the moon, similar to the volcanic processes that formed the Giants' Causeway in Ireland, were able to produce a cuboid with fairly flat faces – let's say to an accuracy of one millimeter, or 10^(-3) meters. However, the probability that the sides' lengths would be in the exact ratio of 1:4:9 (to the level of precision of human scientists' instruments) would be astronomically low, and the probability that the faces of the monolith would be perfectly flat would be infinitesimally low.

For instance, let's suppose for simplicity's sake that the length of each side of a naturally formed cuboid has a uniform probability distribution over a finite range of 0 to 10 meters, and that the level of precision of scientific measuring instruments is to the nearest nanometer (1 nanometer=10^(-9) meters). Then the length of one side of a cuboid can assume any of 10×10^9=10^10 possible values, all of which are equally probable. Let's also suppose that the length of the shortest side just happens to be 1 meter, for simplicity's sake. Then the probability that the other two sides would have lengths of 4 and 9 meters would be 6*(10^(-10))*(10^(-10)) (as there are six ways in which the sides of a cuboid can have lengths in the ratio of 1:4:9), or 6*10^(-20).

Now let's go back to the faces, which are not fairly flat but perfectly flat, to within an accuracy of one nanometer, as opposed to one millimeter (the level of accuracy achieved by natural processes). At any particular point on the monolith's surface, the probability that it will be accurate to that degree is (10^(-9))/(10^(-3)) or 10^(-6). The number of distinct points on the surface of the monolith which scientists can measure at nanometer accuracy is (10^9)*(10^9)*(surface area in square meters), or 98*(10^18), or about 10^20. Thus the probability that each and every point on the monolith's surface will be perfectly flat, to within an accuracy of one nanometer, is (10^(-6))^(10^20), or about 10^(-6×10^20), which is so much smaller than the factor of 6*10^(-20) computed above that we can take 10^(-6×10^20) alone as our P(T|H), as a ballpark approximation.

This probability would then need to be multiplied by Phi_s(T). The simplest semiotic description of the pattern observed by the astronauts would be: flat-faced cuboid, sides' lengths 1, 4, 9. Treating "flat-faced" as one word, this description has seven terms, so Phi_s(T) is (10^5)^7=10^35. Next, the astronauts would multiply the product Phi_s(T).P(T|H) by 10^120, but because the exponent 6×10^20 is so much greater in magnitude than the other exponents (120 and 35), the overall result will still be about 10^(-6×10^20). Thus the specified complexity Chi=-log2[10^120.Phi_s(T).P(T|H)] is approximately 3.321928×(6×10^20), or about 2×10^21. This is an astronomically large number, much greater than the cutoff point of 1, so the astronauts could be certain that the monolith was made by an intelligent agent, even if they knew nothing about its history and had only a basic knowledge of lunar geological processes.
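Because a probability like 10^(-6×10^20) underflows ordinary floating-point arithmetic, the monolith calculation is best carried out directly on the base-10 exponents. A sketch, using the ballpark figures above:

```python
from math import log10

# Work with log10 of each factor, since P(T|H) ~ 10^(-6*10^20) is far too small for a float
log10_p       = -6e20      # ~10^20 measurable surface points, each flat with probability 10^-6
log10_phi     = 35         # "flat-faced cuboid, sides' lengths 1, 4, 9": seven terms, (10^5)^7
log10_context = 120        # Seth Lloyd's bound on bit operations in the observable universe
chi_monolith = -(log10_context + log10_phi + log10_p) / log10(2)
print(f"{chi_monolith:.1e}")   # about 2e+21 bits, astronomically above the cutoff of 1
```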

Having said that, it has to be admitted that sometimes, a lack of knowledge about the history of a complex system can skew CSI calculations. For example, if a team of aliens visiting Earth after a nuclear holocaust found the body of a human being buried in the Siberian permafrost, and managed to sequence the human genome using cells taken from that individual's body, they might come across a duplicated gene. If they did not know anything about gene duplication – which might not occur amongst organisms on their planet – they might at first regard the discovery of two neighboring genes having virtually the same DNA sequence as proof positive that the human genome was designed – like lightning striking in the same place twice – causing them to arrive at an inflated estimate for the CSI in the genome. Does this mean that gene duplication can increase CSI? No. All it means is that someone (e.g. a visiting alien scientist) who doesn't know anything about gene duplication will overestimate the CSI of a genome in which a gene is duplicated. But since modern scientists know that gene duplication does occur as a natural process, and since they also know the rare circumstances that make it occur, they know that the probability of duplication for the gene in question, given these circumstances, is exactly 1. Hence, the duplication of a gene adds nothing to the probability of the original gene occurring by chance. P(T|H) is therefore the same, and since the verbal descriptions of the two genomes are almost exactly the same – the only difference, in the case of a gene duplication, being "x2" plus brackets that go around the duplicated gene – the CSI will be virtually the same. Gene duplication, then, does not increase CSI.

Even in this case, where the aliens, not knowing anything about gene duplication, are liable to be misled when estimating the CSI of a genome, they could still adopt a safe, conservative strategy of ignoring duplications (as they generate nothing new per se) and focusing on genes that have a known, discrete function, which is capable of being described concisely, thereby allowing them to calculate Phi_s(T) for any functional gene. And if they also knew the exact sequence of bases along the gene in question, the number of alternative base sequences capable of performing the same function, and finally the total number of base sequences which are physically possible for a gene of that length, the aliens could then attempt to calculate P(T|H), and hence calculate the approximate CSI of the gene, without a knowledge of the gene's history. (I am of course assuming here that at least some genes found in the human genome are "basic" in their function, as it were.)
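To make that last suggestion concrete, here is how such a calculation might look. The gene length, the count of functional alternatives, and the three-word description below are purely hypothetical placeholders; the real figures would have to come from experiment.

```python
from math import log2

gene_length = 300                       # length of the gene in base pairs (hypothetical)
functional  = 10**40                    # sequences of that length with the same function (hypothetical)
possible    = 4**gene_length            # all physically possible base sequences of that length
p_t_given_h = functional / possible     # naive chance probability of hitting a functional sequence
phi_s       = (10**5) ** 3              # assuming a three-word description of the gene's function
chi_gene    = -log2(10**120 * phi_s * p_t_given_h)
print(round(chi_gene))                  # ~19 bits with these made-up numbers
```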

(iii) CSI should be calculable by independent agents, in a consistent manner.

This, too, is an essential requirement for any meaningful discussion of CSI. Beauty may be entirely in the eye of the beholder, but CSI is definitely not. The following illustration will serve to show my point.

Suppose that three teams of scientists – one from the U.S.A., one from Russia and one from China – visited the moon and discovered four objects there that looked like alien artifacts: a round mirror with a picture of what looks like Pinocchio playing with a soccer ball on the back; a calculator; a battery; and a large black cube made of rock whose sides are equal in length, but whose faces are not perfectly smooth. What I am claiming here is that the various teams of scientists should all be able to rank the CSI of the four objects in a consistent fashion – e.g. "Based on our current scientific knowledge, object 2 has the highest level of CSI, followed by object 3, followed by object 1, followed by object 4" – and that they should be able to decide which objects are very likely to have been designed and which are not – e.g. "Objects 1, 2 and 3 are very likely to have been designed; we're not so sure about object 4." If this level of agreement is not achievable, then CSI is no longer a scientific concept, and its assessment becomes more akin to art than science.

We can appreciate this point better if we consider the fact that three art teachers from the same cultural, ethnic and socioeconomic backgrounds (e.g. three American Hispanic middle class art teachers living in Miami and teaching at the same school) might reasonably disagree over the relative merits of four paintings by different students at their school. One teacher might discern a high degree of artistic maturity in a certain painting, while the other teachers might see it as a mediocre work. Because it is hard to judge the artistic merit of a single painting by an artist, in isolation from that artist’s body of work, some degree of subjectivity when assessing the merits of an isolated work of art is unavoidable. CSI is not like this.

First, Phi_s(T) depends on the basic concepts in your language, which are public and not private, as you share them with other speakers of your language. These concepts will closely approximate the basic concepts of other languages; again, the concepts of other languages are shareable with speakers of your language, or translation would be impossible. Intelligent aliens, if they exist, would certainly have basic concepts corresponding to geometrical and other mathematical concepts and to biological functions; these are the concepts that are needed to formulate a semiotic description of a pattern T, and there is no reason in principle why aliens could not share their concepts with us, and vice versa. (For the benefit of philosophers who might be inclined to raise Quine’s “gavagai” parable: Quine’s mistake, in my view, was that he began his translation project with nouns rather than verbs, and that he failed to establish words for “whole” and “part” at the outset. This is what one should do when talking to aliens.)

Second, your estimate for P(T|H) will depend on your scientific choice of chance hypothesis and the mathematics you use to calculate the probability of T given H. A scientific hypothesis is capable of being critiqued in a public forum, and/or tested in a laboratory; while mathematical calculations can be checked by anyone who is competent to do the math. Thus P(T|H) is not a private assessment; it is publicly testable or checkable.

Let us now return to our illustration regarding the three teams of scientists examining four lunar artifacts. It is not necessary that the teams of scientists are in total agreement about the CSI of the artifacts, in order for it to be a meaningful scientific concept. For instance, it is possible that the three teams of scientists might arrive at somewhat different estimates of P(T|H), the probability of a pattern T with respect to the chance hypothesis H, for the patterns found on the four artifacts. This may be because the chance hypotheses considered by the various teams of scientists may be subtly different in their details. However, after consulting with each other, I would expect that the teams of scientists should be able to resolve their differences and (eventually) arrive at an agreement concerning the most plausible chance hypothesis for the formation of the artifacts in question, as well as a ballpark estimate of its magnitude. (In difficult cases, “eventually” might mean: over a period of some years.)

Another source of potential disagreement lies in the fact that the three teams of scientists speak different languages, whose basic concepts are very similar but not 100% identical. Hence their estimates of Phi_s(T), or the number of patterns for which a speaker S’s semiotic description is at least as simple as S’s semiotic description of a pattern T identified in a complex system, may be slightly different. To resolve these differences, I would suggest that as far as possible, the scientists should avoid descriptions which are tied to various cultures or to particular individuals, unless the resemblance is so highly specific as to be unmistakable. Also, the verbs employed should be as clear and definite as possible. Thus a picture on an alien artifact depicting what looks like Pinocchio playing with a soccer ball would be better described as a long-nosed boy kicking a black and white truncated icosahedron.

(iv) CSI should be knowable with absolute certainty.

Science is provisional. Based on what scientists know, it appears overwhelmingly likely that the Earth is 4.54 billion years old, give or take 50 million years. A variety of lines of evidence point to this conclusion. But if scientists discovered some new astronomical phenomena that could only be accounted for by positing a much younger Universe, then they’d have to reconsider the age of the Earth. In principle, any scientific statement is open to revision or modification of some sort. Even a statement like “Gold has an atomic number of 79”, which expresses a definition, could one day fall into disuse if scientists found a better concept than “atomic number” for explaining the fundamental differences between the properties of various elements.

Hence the demand by some CSI skeptics for absolute ironclad certainty that a specified complex system is the product of intelligent agency is an unscientific one.

Likewise, the demand by CSI skeptics for an absolutely certain, failproof way to measure the CSI of a system is also misplaced. Just as each of the various methods used by geologists to date rocks has its own limitations and situations where it is liable to fail, so too the various methods that Intelligent Design scientists come up with for assessing P(T|H) for a given pattern T and chance hypothesis H, will have their own limitations, and there will be circumstances when they yield the wrong results. That does not invalidate them; it simply means that they must be used with caution.

(v) CSI should be precisely calculable (within reason) by independent agents.

In a post (#259) on the recent thread, On the calculation of CSI, Jemima Racktouey throws down the gauntlet to Intelligent Design proponents:

If “CSI” objectively exists then you should be able to explain the methodology to calculate it and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact.

On the surface this seems like a reasonable request. For instance, the same rock dating methods are used by laboratories all around the world, and they yield consistent results when applied to the same rock sample, to a very high degree. How sure can we be that a lab doing Intelligent Design research in, say, Moscow or Beijing, would yield the same result when assessing the CSI of a biological sample as the Biologic Institute in Seattle, Washington?

The difference between the procedures used in the isochron dating of a rock sample and those used when assessing the CSI of a biological sample is that in the former case, the background hypotheses that are employed by the dating method have already been spelt out, and the assumptions that are required for the method to work can be checked in the course of the actual dating process; whereas in the latter case, the background chance hypothesis H regarding the most likely process whereby the biological sample might have formed naturally has not been stipulated in advance, and different labs may therefore yield different results because they are employing different chance hypotheses. This may appear to generate confusion; in practice, however, I would expect that two labs that yielded wildly discordant CSI estimates for the same biological sample would resolve the issue by critiquing each other’s methods in a public forum (e.g. a peer-reviewed journal).

Thus although in the short term, labs may disagree in their estimates of the CSI in a biological sample, I would expect that in the long term, these disagreements can be resolved in a scientific fashion.

(vi) CSI should be readily computable, given a physical description of the system.

In a post (#316) on the recent thread, On the calculation of CSI, a contributor named Tulse asks:

[I]f this were a physics blog and an Aristotelian asked how to calculate the position of an object from its motion, … I’d expect someone to simply post:

y = x + vt + 1/2at**2

If an alchemist asked on a chemistry blog how one might calculate the pressure of a gas, … one would simply post:

p=(NkT)/V

And if a young-earth creationist asked on a biology blog how one can determine the relative frequencies of the alleles of a gene in a population, … one would simply post:

p² + 2pq + q² = 1

These are examples of clear, detailed ways to calculate values, the kind of equations that practicing scientists uses all the time in quotidian research. Providing these equations allows one to make explicit quantitative calculations of the values, to test these values against the real world, and even to examine the variables and assumptions that underlie the equations.

Is there any reason the same sort of clarity cannot be provided for CSI?

The answer is that while the CSI of a complex system is calculable, it is not computable, even given a complete physical knowledge of the system. The reason for this fact lies in the formula for CSI.

On page 24 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

where Phi_s(T) is the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T, and P(T|H) is the probability of a pattern T with respect to the chance hypothesis H.

The problem here lies in Phi_s(T). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define Kolmogorov complexity and descriptive complexity as follows (p. 311):

Kolmogorov complexity is a form of computational complexity that measures the length of the minimum program needed to solve a computational problem. Descriptive complexity is likewise a form of computational complexity, but generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern. (Emphasis mine – VJT.)

In a comment (#43) on the recent thread, On the calculation of CSI, I addressed a problem raised by Mathgrrl:

While I understand your motivation for using Kolmogorov Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable.

To which I replied:

Quite so. That’s the point. Intelligence is non-computational. That’s one big difference between minds and computers. But although CSI is not computable, it is certainly measurable mathematically.

The reason, then, why CSI is not physically computable is that it is not only a physical property but also a semiotic one: its definition invokes both a semiotic description of a pattern T and the physical probability of a non-foresighted (i.e. unintelligent) process generating that pattern according to chance hypothesis H.
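The point can be illustrated with a simple toy. Compressed length gives a computable upper bound on how concisely a string can be described, but the true minimal description (its Kolmogorov or descriptive complexity) is provably uncomputable, so no program can certify that it has found the shortest one. The snippet below uses zlib compression purely as a stand-in for "length of description":

```python
import os
import zlib

ordered    = b"ABCD" * 256        # a highly patterned 1024-byte string
random_ish = os.urandom(1024)     # 1024 bytes with no exploitable pattern

# Compressed size is a computable *upper bound* on description length;
# the true minimal description cannot be computed by any algorithm.
print(len(zlib.compress(ordered)))       # small: the pattern has a short description
print(len(zlib.compress(random_ish)))    # roughly 1024 or more: no shorter description found
```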

(vii) CSI should be computable by some general algorithm that can be applied to an arbitrary system.

In a post (#263) on the recent thread, On the calculation of CSI, Jemima Racktouey issues the following challenge to Intelligent Design proponents:

If CSI cannot be calculated then the claims that it can are bogus and should not be made. If it can be calculated then it can be calculated in general and there should not be a very long thread where people are giving all sorts of reasons why in this particular case it cannot be calculated. (Emphasis mine – VJT.)

And again in post #323, she writes:

Can you provide such a definition of CSI so that it can be applied to a generic situation?

I would like to note in passing how the original demand of ID critics that CSI should be calculable has grown into a demand that it should be physically computable, which has now been transformed into a demand that it should be computable by a general algorithm. This demand is tantamount to putting CSI in a straitjacket of the materialists’ making. What the CSI critics are really demanding here is a “CSI scanner” which automatically calculates the CSI of any system, when pointed in the direction of that system. There are two reasons why this demand is unreasonable.

First, as I explained earlier in part (vi), CSI is not a purely physical property. It is a mixed property – partly semiotic and partly physical.

Second, not all kinds of problems admit of a single, generic solution that can be applied to all cases. An example of this in mathematics is the Halting problem. I shall quote here from the Wikipedia entry:

In computability theory, the halting problem is a decision problem which can be stated as follows: Given a description of a program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines. (Emphasis mine – VJT.)
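For readers who want to see why, here is a minimal sketch of Turing's diagonal argument. The halts oracle below is hypothetical; the point of the construction is precisely that no real implementation of it can exist.

```python
def diagonalize(halts):
    """Given any candidate oracle halts(program, arg) -> bool,
    build a program on which that oracle must give the wrong answer."""
    def troublemaker(arg):
        if halts(troublemaker, arg):
            while True:        # loop forever exactly when the oracle predicts halting
                pass
        return None            # halt exactly when the oracle predicts looping
    return troublemaker

# Running the returned program on itself refutes any concrete halts() supplied,
# which is why no general halting algorithm can exist.
```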

So here’s my counter-challenge to the CSI skeptics: if you’re happy to acknowledge that there’s no generic solution to the halting problem, why do you demand a generic solution to the CSI problem – that is, the problem of calculating, after being given a complete physical description of a complex system, how much CSI the system embodies?

Comments
vjtorley,
Could you please clarify what you mean by this statement? Is it or is it not possible to measure, with some degree of precision, the CSI present in a particular artifact?
In answer to your second question: Yes, it is. In my post I gave the examples of Mt. Rushmore and the discovery of a monolith on the moon. In answer to your first question: as I use the terms, “calculable” means “capable of being assigned a specific numeric value on the basis of a mathematical formula whose terms have a definite meaning that everyone can agree on,” whereas “computable” means “calculable on the basis of a suitable physical description alone.”
If you agree with Dembski that CSI can be calculated with some degree of precision for a particular artifact, why do you raise the issue of calculable vs. computable, using your definitions?
MathGrrl
March 29, 2011 at 4:06 PM PDT
vjtorley,
I would amend (viii) to read:
viii) It must be demonstrated that a CSI of greater than 1 is a reliable indicator of the involvement of intelligent agency.
That obviously depends on having a rigorous mathematical definition of CSI, but I don't think it changes my proposed criterion materially. Do you agree that it is essential?
Second, the demonstration already exists: it is an empirical one. The CSI Chi of a system is a number which is 400 or so bits less than what Professor Dembski defines as the specificity sigma, which is -log2[Phi_s(T).P(T|H)].
As demonstrated in the CSI thread, there is currently no mathematically rigorous definition of CSI. Dembski's terms are more than problematic to apply to real world systems.
Nowhere in nature has there ever been a case of an unintelligent cause generating anything with a specificity in excess of 400 bits.
Nowhere has CSI been calculated objectively and rigorously for any natural system. This claim is baseless.
This is a falsifiable statement; but it has never been falsified experimentally.
You've got the burden of proof backward. Scientists making claims of this nature are not only responsible for demonstrating that their hypothesis explains certain data, they also must attempt to falsify it themselves. Thus far, there are no objective calculations of CSI for any biological systems. In order to support your claim, you would need to show how to calculate CSI for some systems that are known to be the result of intelligent agency and some that are not. Interestingly, this has been noted before, but no ID proponents have addressed the problem. Wesley Elsberry and Jeffrey Shallit reviewed Dembski's CSI concept back in 2003 and noted a number of challenges for ID proponents:
12.1 Publish a mathematically rigorous definition of CSI
12.2 Provide real evidence for CSI claims
12.3 Apply CSI to identify human agency where it is currently not known
12.4 Distinguish between chance and design in archaeoastronomy
12.5 Apply CSI to archaeology
12.6 Provide a more detailed account of CSI in biology
12.7 Use CSI to classify the complexity of animal communication
12.8 Animal cognition
(That first one sounds really familiar for some reason.) Each of these is explained in more detail in the paper. If an ID proponent were interested in demonstrating the scientific usefulness of CSI, he or she could do worse than to address Elsberry's and Shallit's challenges.
MathGrrl
March 29, 2011 at 4:05 PM PDT
bornagain77,
MathGrrl, I admire your tenacity for trying to get any leeway you can for showing that material processes may possibly be able to create functional information, even if you have to use Evolutionary Algorithms that are jerry-rigged to converge on that solution you so desperately want!
You misunderstand my intention. I simply want to learn the mathematically rigorous definition of CSI and get some detailed examples of how to calculate it for the scenarios I describe in the CSI thread. Can you assist me?
MathGrrl
March 29, 2011 at 4:04 PM PDT
QuiteID,
MathGrrl, you may have already done this, so forgive me if this is a dumb question. But where, precisely, does Dr. Dembski’s “Specification” paper go wrong? I think people here might understand your challenge more if you pointed out the places where it’s particularly confusing or at odds with what you think.
The two broad areas where I find Dembski's description wanting are the creation of a specification and the determination of the chance hypothesis. The semiotic description underlying a specification is subjective, highly dependent on the background knowledge of the agent. This makes it very difficult, if not impossible, to calculate CSI objectively. It also increases the probability of false positives, since new knowledge can dramatically alter the calculation. Dembski sometimes seems to use a uniform probability distribution for the chance hypothesis, but he also defines "chance" so broadly that he includes evolutionary mechanisms which are not based on chance in the usual sense. Bringing in these historical contingencies seems to contradict the premises of his original question: "Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?"

In addition to those two issues, the lack of detailed calculations for biological systems makes it very difficult to understand how to apply Dembski's concepts to that realm. I would also note that the confusion is not mine alone. On the CSI thread we are over 400 comments without anyone directly addressing the questions I raised in the original post. I find that level of disagreement and lack of evidence very surprising for such a core ID concept.
MathGrrl
March 29, 2011 at 4:04 PM PDT
JemimaRacktouey, if you want a 'teleological' signature for life, a signature that signifies 'higher dimensional' origination for life that is over and above the finely tuned 3-Dimensional material constraints of this universe, I suggest this: notes: 4-Dimensional Quarter Power Scaling In Biology - video http://www.metacafe.com/w/5964041/ The predominance of quarter-power (4-D) scaling in biology Excerpt: Many fundamental characteristics of organisms scale with body size as power laws of the form: Y = Yo M^b, where Y is some characteristic such as metabolic rate, stride length or life span, Yo is a normalization constant, M is body mass and b is the allometric scaling exponent. A longstanding puzzle in biology is why the exponent b is usually some simple multiple of 1/4 (4-Dimensional scaling) rather than a multiple of 1/3, as would be expected from Euclidean (3-Dimensional) scaling. http://www.nceas.ucsb.edu/~drewa/pubs/savage_v_2004_f18_257.pdf “Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. Quarter-power scaling laws are perhaps as universal and as uniquely biological as the biochemical pathways of metabolism, the structure and function of the genetic code and the process of natural selection.,,, The conclusion here is inescapable, that the driving force for these invariant scaling laws cannot have been natural selection." Jerry Fodor and Massimo Piatelli-Palmarini, What Darwin Got Wrong (London: Profile Books, 2010), p. 78-79 https://uncommondescent.com/evolution/16037/#comment-369806 Though Jerry Fodor and Massimo Piatelli-Palmarini rightly find it inexplicable for 'random' Natural Selection to be the rational explanation for the scaling of the physiology, and anatomy, of living things to four-dimensional parameters, they do not seem to fully realize the implications this 'four dimensional scaling' of living things presents. This 4-D scaling is something we should rightly expect from a Intelligent Design perspective. This is because Intelligent Design holds that ‘higher dimensional transcendent information’ is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional 'expectation' for life, from a Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an 'emergent' property of the 3-D material realm. Earth’s crammed with heaven, And every common bush afire with God; But only he who sees, takes off his shoes, The rest sit round it and pluck blackberries. - Elizabeth Barrett Browning Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/journals.asp?iid=47 Quantum entanglement holds together life’s blueprint - 2010 Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. 
http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ Further evidence that quantum entanglement/information is found throughout entire protein structures: https://uncommondescent.com/intelligent-design/we-welcome-honest-exchanges-here/#comment-374898 It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology, for how can the quantum entanglement effect in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Yet it is also very interesting to note, in Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space. i.e. Theism has always postulated a 'eternal soul' for man that lives past the death of the body. Quantum no-hiding theorem experimentally confirmed for first time - March 2011 Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html JemimaRacktouey so what do you think? Pretty neat huh? Or will you just scoff at this as well even though it is such a powerful 'signature'?bornagain77
March 29, 2011 at 3:58 PM PDT
Joseph, Did Flew ever convert to Christianity? I thought he died a sort of non-religious theist. Or maybe I'm thinking of someone else.
Collin
March 29, 2011 at 3:28 PM PDT
Jemima, the presence of a biological operating system is evidence against ID?
Collin
March 29, 2011 at 3:20 PM PDT
JR:
If the “process” was teleological I think we’d see a bit more evidence of it.
How much do you need? Do you know what evidence is? JR:
Perhaps they don’t like it because it’s not supported by any evidence?
Yet there was enough evidence to convince long-time atheist Anthony Flew (talk about bias), and the people who don't like ID need to suck it up, because it is their failure to produce positive evidence for their position that has allowed ID to persist. Thank you – you are a fine representative of the anti-ID position.
Joseph
March 29, 2011 at 3:14 PM PDT
VJ, The Old Man is long gone, but there are hundreds of other less publicized natural rock-formation-like patterns to choose from in New Hampshire.
Joseph
March 29, 2011 at 3:09 PM PDT
vjtorley
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
Heads you win, heads you win eh? Well I would reply that your conclusion is not supported by the available evidence. I.E the universe. Every single object observed in the universe so far does not show any signs of life. If the "process" was teleological I think we'd see a bit more evidence of it. After all, the entire universe empty of life despite teleological guidance? Not much teleological guidance going on there if you ask me. Perhaps it's local to our solar system? Or how do you explain that apparent contradiction - is the universe designed for life, but just 1 planet's worth? Seems like a bit of a waste of a universe to me. More likely the universe is designed for gas clouds and black holes then us, if designed at all...
Darwinists don’t like this conclusion, as they want their theory to be non-teleological.
Perhaps they don't like it because it's not supported by any evidence? After all, when I said:
But, as I say, such biases were built in from the start.
Then you said:
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
But earlier you said
Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix.
So which is it? Nature either has a hidden bias or it does not. I'd call "teleological guidance" the ultimate "hidden bias".
JemimaRacktouey
March 29, 2011 at 3:03 PM PDT
uoflcard (#51) By the way, I'd agree with the point of your parable, which is the opposite of aliens coming upon Mt. Rushmore, in that we're the alien explorers. The error arose because our dictionary of concepts was incomplete, leading us to err on the side of chance, not design.
vjtorley
March 29, 2011 at 2:55 PM PDT
uoflcard (#51) Here's another one: The Old Man of the Mountain (the natural equivalent of Mt. Rushmore) http://www.epodunk.com/cgi-bin/genInfo.php?locIndex=30
vjtorley
March 29, 2011 at 2:41 PM PDT
Jemima Racktouey (#54) Thank you for your post and links. Concerning the evolution of life from non-living matter, you write:
But, as I say, such biases were built in from the start. The fact that you don’t appear to notice them there now is a testament to the power of evolution.
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological. See Professor William Dembski and Robert Marks II's paper, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information. Darwinists don't like this conclusion, as they want their theory to be non-teleological. If it's teleological, then it still requires an Intelligent Designer.
vjtorley
March 29, 2011 at 2:36 PM PDT
uoflcard (#50) Some examples that might help you:
Eoliths (not genuine tools) http://en.wikipedia.org/wiki/Eolith
Oldowan (the earliest recognizable tools) http://en.wikipedia.org/wiki/Oldowan
Yonaguni, the Japanese Atlantis – or is it natural? http://news.nationalgeographic.com/news/2007/09/070919-sunken-city.html http://news.nationalgeographic.com/news/bigphotos/5467377.html
See also: Alleged human tracks in Carboniferous rocks in Kentucky http://www.paleo.cc/paluxy/berea-ky.htm (Man-made carvings made recently by Native Americans, in all likelihood.)
Food for thought.
vjtorley
March 29, 2011 at 2:27 PM PDT
Markf (#9) I now have a little time to address the five questions you raised. Let's look at (1) and (2). You write:
1) The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of single event having outcome x is p, then the probability of at least one event having outcome x is not np . It is (1 – (1-p)^n). So the calculation 10^120.Phi_s(T).P(T|H) is wrong . The answer is still very small if p is small relative to n. But it does illustrate the lack of attention to detail and general sloppiness in some of the work on CSI. 2) There is some confusion as to whether Phi_s(T) includes all patterns that are "at least as simple as the observed pattern" or whether it is only those patterns that are "at least as simple as the observed pattern AND are at least as improbable". If we use the "at least as simple" criterion then some of the other patterns may be vastly more probable than the observed pattern. So we really have to use the "at least as simple AND are at least as improbable" criterion. However, there is no adequate justification for only using the patterns that are less probable.
In reply: (1) Dembski is not trying to calculate the probability of at least one event having outcome x. As I see it, the n serves as a multiplier, to give the expected number of events having outcome x (E=np), given the long history of the universe. That's why the 10^120 multiplier is used. (2) I'll try to make my point with a story. Imagine for argument's sake that you have a reputation for being something of a card sharp. (I have no idea whether you play - I only know strip jack, gin rummy and UNO off the top of my head, although I have played blackjack in Las Vegas. I only had $24 to gamble with, mind you - I was backpacking, and I had to budget. Anyway, I managed to visit 34 of states of the U.S.A. in just three months, courtesy of Greyhound buses and "Let's Go USA.") Anyway, you're playing poker, and you happen to bring up a royal flush right away. Your partner accuses you of cheating, citing the high specificity of the result: royal flush (describable in just two words). You reply by saying that "single pair" is just as verbally specific (two words) and that there are lots of two word-descriptions of card hands - some probable, some not - and that if you add them all up together, the chances of satisfying some two-word description is not all that low. Undaunted, your opponent points out that that's not relevant. All of these other hands are much more probable than a royal flush. But then your opponent relents a little. He allows you to multiply the probability of your getting a royal flush by the number of two-word descriptions of card hands (e.g. "Full house", "single pair") that are commonly in use. If you can demonstrate that this product is not a very low number, then he will continue to play cards with you. Why does your opponent do this? Because he is trying to take into account the fact that whereas the probability of a royal flush is low, it's not the only hand with that level of verbal specificity (two words). On the other hand, adding the probabilities of the various hands that can be specified in two words would be too generous to you. To get a good sense of whether you are cheating or not, it seems more reasonable to multiply the number of card hands that can be specified in two words by the probability of getting a royal flush, in order to determine whether the royal flush which you got was outrageously improbable. Putting it more formally: we're not trying to just calculate the probability of getting a royal flush, and we're not trying to calculate the probability of getting some card-hand that can be described in two words (e.g. royal flush, full house, single pair). Rather, we're trying to calculate a notional figure: the probability of getting A card-hand which is just as improbable as a royal flush AND just as verbally specific. Since the only other card-hands with the same verbal specificity are much more probable than the royal flush, we have to pretend (for a moment) that all these card-hands have the same probability as a royal flush, count them up and multiply the number of these hands by the probability of getting a royal flush. That, I think, is a fairer measure of whether you're cheating. So in answer to your question: Phi_s(T) does include all patterns that are "at least as simple as the observed pattern", even though "some of the other patterns may be vastly more probable than the observed pattern." But in order not to be overly generous, we don't just sum the probabilities of all the patterns with the same level of verbal simplicity. 
Rather, we multiply the number of patterns that are at least as simple as the observed pattern by the very low probability of the observed pattern.
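To make both points concrete, here is a quick numerical sketch in Python. The figures below - a thousand deals, and twenty two-word hand descriptions in common use - are made up purely for illustration:

```python
from math import comb

# Point (1): when n*p is small, the "expected number" n*p and the exact
# probability of at least one success, 1 - (1 - p)^n, are nearly identical.
p = 4 / comb(52, 5)        # probability of a royal flush on a single five-card deal
n = 1000                   # a thousand independent deals (an illustrative figure)
print(n * p)               # ~0.001539
print(1 - (1 - p) ** n)    # ~0.001538 -- effectively the same when n*p is small

# Point (2): the "notional" product described above. Suppose, purely for
# illustration, that 20 two-word hand descriptions are in common use; we then
# multiply that count by the probability of the observed (royal flush) hand.
phi = 20                   # illustrative count of two-word descriptions
print(phi * p)             # ~3.1e-5: still tiny, so the suspicion of cheating stands
```

In other words, the product treats every pattern that is as verbally simple as the observed one as if it were also as improbable as the observed one, which is precisely what keeps the estimate from being overly generous.

OK, let's go on to your objection (3).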
(3) When Dembski (and you) estimate Phi_s(T) you use a conceptual definition of T: "bidirectional rotary motor-driven propeller". This is not necessarily (in fact is almost certainly not) the same as the exact configuration of proteins. You do attempt to address this for the ATP case with a note in brackets. You say that any other configuration of proteins would be a lot more complex and therefore vastly more improbable. I am not a biochemist (are you?). I think you have to admit this is unproven.
I studied chemistry and physics for two years at university but not biology, so I can't give a definite answer to your question. Here's what I'd ask a biologist: assuming that there are other bidirectional rotary motor-driven propellers in Nature, (a) how many of them are there, and (b) how much bigger than a bacterial flagellum is the second smallest one? If the answer to (a) is "a half-dozen at the most", and the answer to (b) is "more than twice as big", I'd be inclined to neglect the other cases, as it would be much more difficult for them to arise by a non-foresighted "chance" process. All your objection shows is that P(T|H) is revisable, if we find a large number of other structures in Nature with the same function, and having a comparable probability of arising by "chance" as I've defined it. But I'd be the first one to admit that P(T|H) is revisable, and ditto for Chi. There's no such thing as absolute certitude in science. Next, you write:
4) The attempt to identify simple or simpler patterns through number of concepts is an absolute minefield. A concept is not the same as a word. For example, a "motor" can be broken down into many other concepts, e.g. a machine that converts other forms of energy into mechanical energy and so imparts motion.
You are quite right to say that a concept is not the same as a word, but wrong to infer that a word which is capable of being defined using several words is not basic. Any word can be defined in this way. The question is: which words are best learned holistically, rather than by breaking them down into conceptual parts? These words I'd consider to be epistemically basic. For human beings, the word "human" is surely epistemically basic, but of course a zoologist would take a paragraph to define it properly. Is "motor" basic? I'd say yes. The great physicist James Clerk Maxwell was a very curious toddler. By the age of three, everything that moved, shone, or made a noise drew the same question: "What's the go o' that?" Although he didn't know the word "motor", he had a strong, deep-seated urge to find out what made things move.
If I were trying to find the basic concepts of a language, I might try to find the smallest set of words that can be (practicably) used to define all the other words of the language. I believe some dictionaries published by Longman now use a list of 2,000 words for defining every other word. Actually, the number 2,000 sounds about right to me, because it's the same as the number of Japanese characters (kanji) that students are expected to be able to read after 12 years of schooling. Of course, a few individuals can read as many as 10,000 of the more obscure kanji, but the standard kanji number 2,000 altogether.
Personally I think that most young children would have no trouble understanding the four terms that Professor Dembski used to define a bacterial flagellum. Of course, "bidirectional" would be a new word to them, but they could pick it up immediately if you showed them something that could rotate clockwise and anti-clockwise. I'm sure of that. Finally, you write:
5) The definition of H is very flaky. You admit that you are not certain what Dembski means. So you adopt your own definition – "a process which does not require the input of information". But as we are currently using this formula to clarify what we mean by information, this is circular. In one case you want to include the possibility of gene duplication in the chance hypothesis so you don't end up with the awkward result that gene duplication doubles the CSI. But once you admit that knowing about gene duplication radically affects the level of CSI, you are open to the possibility that other unspecified or unknown events such as gene duplication can have enormous effects on the supposed CSI. In other words, we cannot even make a rough estimate of CSI without having a good account of all possible natural processes.
In response to the charge of circularity: when I wrote the words "a process which does not require the input of information", I had in mind not CSI, but Professor Dembski's concept of active information, which he explains in his paper, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information (pages 13-14):
In such discussions, it helps to transform probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information I_omega as –log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space omega to locate the target T. We then define the exogenous information I_s as –log(q), which measures the difficulty of the alternative search S in locating the target T. And finally we define the active information I+ as the difference between the endogenous and exogenous information: I+ = I_omega – I_s = log(q/p). Active information therefore measures the information that must be added (hence the plus sign in I+) on top of a null search to raise an alternative search's probability of success by a factor of q/p.
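To see how these three quantities fit together numerically, here is a small sketch; the probabilities p and q below are made-up values, chosen only to show the arithmetic:

```python
from math import log2

p = 2 ** -20        # probability that a blind (null) search hits the target T
q = 2 ** -5         # probability that the alternative search S hits the target T

I_omega = -log2(p)         # endogenous information: difficulty of the blind search (20 bits)
I_s = -log2(q)             # exogenous information: difficulty of the alternative search (5 bits)
I_plus = I_omega - I_s     # active information added on top of the null search (15 bits)

print(I_omega, I_s, I_plus)
print(log2(q / p))         # the same 15 bits, via the equivalent log2(q/p) form
```

With these illustrative numbers, the alternative search is 2^15 = 32,768 times as likely to succeed as the blind search, and the 15 bits of active information measure exactly that boost.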
Dembski shows that contrary to what Darwinians often maintain about their own theory, NDE (neo-Darwinian evolution) is deeply teleological, in that it requires active information to make it work. His parable of the sailor on page 31 is worth a read.
You also argue that the correction I make for gene duplication (a process that appears at first glance to raise CSI) leaves me "open to the possibility that other unspecified or unknown events such as gene duplication can have enormous effects on the supposed CSI." Yes, that's always possible. But there are different senses of "possible." Theoretically, someone could demonstrate that life is a lot less specific than we all imagined - but on a practical level, the demonstration would have to simplify the specificity of life by so many orders of magnitude that I don't lose any sleep over the prospect. And from the theoretical possibility that my estimates of the CSI in a bacterial flagellum may be out by several orders of magnitude (e.g. 0.2126 instead of 2126), it simply does not follow that "we cannot even make a rough estimate of CSI without having a good account of all possible natural processes" (italics mine), as you claim.
vjtorley, March 29, 2011 at 02:09 PM PDT
vjtorley:
In other words, Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix.
There are obviously some biases, hidden or not, currently apparent or not. For example, some reactions needed for "DNA-like" substances work better at the temperatures found on Earth. Many factors would change the available sequence path for how and of what specific makeup a "DNA-like" substance could come about. See the recent "NASA says alien life on earth" story for instance.
If there were, these biases would serve to reduce the Shannon information content in DNA and proteins, leading to a simple redundant, repetitive order, as opposed to complexity, which is required for living things.
But, as I say, such biases were built in from the start. The fact that you don't appear to notice them there now is a testament to the power of evolution. DNA appears to operate in a space, a "biological operating system", particularly suited to it. Replication is largely error-free, and there are no "biases", as you say, that reduce the information content in unpredictable (from the DNA's POV) ways. DNA can do its thing largely uninterrupted.
as opposed to complexity, which is required for living things.
It is apparently so. And yet I'm unconvinced. If nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix, then perhaps "hidden biases" is the wrong place to be looking. NASA did a great workshop on the origin of life (information). http://astrobiology.nasa.gov/nai/ool-www/program/ Have you checked it out? Some fantastic materials there. And they've been looking specifically at prebiotic chemistry, including the now somewhat notorious "Alternative Biochemistry and Arsenic, or Life as We Might Not Expect It", but it's all good stuff. So for me, just because there are no such biases, as you say, does not mean automatically that the "information" must have been designed in. It's too easy, too shallow and simple, and I don't see how one followed from the other. You don't say it in that post but you might as well have.
JemimaRacktouey, March 29, 2011 at 01:59 PM PDT
But to save the royal flush analogy, I would just say that the royal flush is the key while the person (having knowledge of poker) is the lock.
Collin, March 29, 2011 at 01:49 PM PDT
A royal flush obviously has no meaning without humans. But it is a good analogy. What might be a yet better analogy is a key and lock system. If you discover a key and want to know if it is designed and then you discover a lock that it fits perfectly into, then you can infer design. DNA seems to be the key and proteins (and other things) the lock.
Collin, March 29, 2011 at 01:47 PM PDT
The other thing, besides the 10^120 events (I'm still assuming that's what that number is), that makes this conservative is that if you don't know of a function that an object actually has, you automatically assume it to be less complex than it really is.
Let me use the opposite of the Mt. Rushmore example. Let's say we're the alien explorers. We stumble upon a planet which we know nothing of, and we find a mountain face with some curious features. Perhaps we recognize two objects that look something like eyes, and some kind of potential orifice that might be a mouth of some type, but that's all we recognize. It is pretty eroded, so our Chi calculation ends up suggesting that it could have developed by chance. But after some time studying the planet and artifacts found on it, we learn about some intelligent creatures that lived there. We find something like a medical record that describes some small feature on the "face" of these creatures that is nothing like anything we've seen on Earth, like a symmetrical lobe that senses temperature accurately. We go back to Mt. Alien and sure enough, there is a weird little rock jutting out of the face of the mountain in the same location as described in the medical texts. We recalculate Chi and it now triumphantly declares Design.
The point is that originally we were misinformed, so our error was on the side of Chance, not Design. This should not be an arguing point for ID critics regarding this calculation. The only way I see that this calculation could err on the side of Design is P(T|H), which is simply a difficult probability to estimate in biological systems. Increased knowledge could raise this probability, shifting Chi towards the Chance side of this spectrum. FYI - this was mainly me thinking aloud, so I welcome corrections.
uoflcard, March 29, 2011 at 01:22 PM PDT
Are there any known examples whose calculated specified complexity Chi
Chi=-log2[10^120.Phi_s(T).P(T|H)]
is around 1? I tend to believe that this formula is severely conservative in favor of Darwinism, given the staggering 10^120 events assumption (that is what that number is, right? The max number of "events" in the history of the Universe?). I'm just curious as to what events have a Chi that comes in around 1, and then what Chi is for events that we could decide are somewhat on the "border" of our intuitions about their origins, regarding the Explanatory Filter. So, some obvious selections:
Law: Motion of the Earth around the Sun
Chance: Order of sand on a beach
Design: iPad 2.0
But what about something on the border between chance and design, intuitively? I'm having trouble thinking of something, so feel free to suggest something... Maybe a severely eroded arrowhead? I'm just curious as to where something like that would come out from the Chi equation. I'm guessing less than one, which would, to me, bode well for the conservativeness of this calculation.
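One rough way to see where the borderline sits is to plug made-up numbers into the formula as quoted; the Phi_s(T) and P(T|H) values below are purely illustrative:

```python
from math import log2

def chi(phi_s, p_t_given_h, max_events=10**120):
    """Chi = -log2(10^120 * Phi_s(T) * P(T|H)), as in the formula quoted above."""
    return -log2(max_events * phi_s * p_t_given_h)

# Purely illustrative, made-up inputs:
print(chi(phi_s=10**20, p_t_given_h=1e-150))   # ~33: comfortably past the threshold of 1
print(chi(phi_s=10**20, p_t_given_h=5e-141))   # ~1: right on the borderline
print(chi(phi_s=10**20, p_t_given_h=1e-130))   # ~-33: chance is not ruled out
```

On those made-up figures, with Phi_s(T) around 10^20 the borderline falls at a P(T|H) of about 5 x 10^-141; anything much more probable than that never comes close to registering as a specification.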
uoflcard, March 29, 2011 at 01:09 PM PDT
vjtorley: Excellent throughout.
PaV, March 29, 2011 at 01:03 PM PDT
Jemima Racktouey (#7) Thank you for your post. With regard to the calculation of P(T|H) you ask:
Does your reference to "what scientists currently know" refer to an ID scientist who presumably does not believe that Evolution can create the life we see about us or to a non-ID scientist who does understand that Evolution can create such life?
It refers to scientific knowledge acquired on the basis of observations either of nature or of experiments in laboratories. Personal beliefs don't come into it. You also write:
It seems to me if you are calculating probabilities based on the spontaneous formation of (for example) a given protein (tornado in a junkyard) you’ll get a different answer to assuming it evolved.
Yes. But as my new post shows, I'm willing to count as a chance hypothesis any process that lacks foresight of long-term results and that does not require the input of information from outside - either at the beginning (front-loading) or during the process itself (manipulation). And Professor Dembski in his article on specification expressly includes Darwinian evolution as a chance hypothesis - even though natural selection is, as we all know, non-random. So "chance" as Dembski uses the term does not mean "totally random." You continue:
And my question is what are the options available to that process? Random iteration through the total available possibility space or a gradual step by step process?
Both. Please see my remarks above. You add:
Which one makes a big difference. And don't forget that no actual biologist claims that the components of cells came together randomly, and so the "tornado in a junkyard" calculations so beloved of Kairosfocus and others are simply irrelevant. I'd like to think they were not deliberately misleading, but they've been corrected so many times by now it's a reasonable assumption.
There are some situations where "tornado in a junkyard" calculations are relevant, and that's where no unintelligent non-random process has been shown to achieve better results. A good example of this is protein formation. After showing in chapter 9 of Signature in the Cell that the chance formation of a single protein is mathematically out of the question, Stephen Meyer goes on to consider other alternatives - e.g. biochemical predestination - before rejecting them on empirical grounds. Biochemical predestination can be rejected for DNA as well, on the same grounds:
In sum, two features of DNA ensure that "self-organizing" bonding affinities cannot explain the specific arrangement of nucleotides in the molecule: (1) there are no bonds between bases along the information-bearing axis of the molecule and (2) there are no differential affinities between the backbone and the specific bases that could account for variations in the sequence. (p. 244)
In other words, Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix. If there were, these biases would serve to reduce the Shannon information content in DNA and proteins, leading to a simple redundant, repetitive order, as opposed to complexity, which is required for living things. I hope that helps.
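For concreteness, here is the kind of back-of-the-envelope figure a "tornado in a junkyard" calculation yields; the protein length and the uniform-chance model below are illustrative assumptions only, not an estimate for any particular protein:

```python
from math import log2

alphabet = 20                        # standard amino acids
length = 150                         # a modest protein length, chosen for illustration
p_exact = (1 / alphabet) ** length   # chance of hitting one exact sequence at random
print(p_exact)                       # ~7e-196
print(-log2(p_exact))                # ~648 bits of improbability under this naive model
```

Whether such a uniform-chance model is the right null hypothesis is, of course, exactly what the discussion of P(T|H) above turns on.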
vjtorley, March 29, 2011 at 11:44 AM PDT
markf: [41] "Can anyone on this discussion explain to me what Joseph is trying to say!!!"
Look at Dembski's Specification paper on "prespecifications". Section 5, I believe.
PaV, March 29, 2011 at 11:40 AM PDT
Wm. J. Murray [24]: "How about challenging Darwinists to provide the math that demonstrates non-intelligent processes to be up to the task that they are claimed as fact to be capable of?"
That's exactly right. In fact, Motoo Kimura (one of, if not the, brightest and best of population geneticists) came up with his Neutral Theory because the level of protein variation newly discovered through gel electrophoresis during the sixties was vastly too high to be accounted for by strictly Darwinian processes. And it's gotten worse ever since. And with whole genome analysis, the level of variation within species themselves (intra-species variation) is staggering, and completely unexplainable using supposed Darwinian mechanisms.
PaV, March 29, 2011 at 11:18 AM PDT
The book is "Probability's Nature and Nature's Probability" - it exposes the fallacy of Mark Frank's position.
Joseph, March 29, 2011 at 11:08 AM PDT
markf (#36) As the foregoing article shows, I have changed my mind about whether gene duplication can increase CSI. I don't think it can. Please see part (ii) of my article. I originally thought that P(T|H) was lower for a genome with a duplicated gene, but that's because I was mentally picturing a longer string and reasoning that the probability of all the base characters arising randomly in the longer string is less than the probability of the characters arising randomly in a shorter string. But that's not how gene duplication works. See my remarks above. Here are three helpful articles on gene duplication that might be of use to you and Mathgrrl:
http://www.evolutionnews.org/2009/10/jonathan_wells_hits_an_evoluti026791.html
http://www.discovery.org/a/4278
http://www.discovery.org/a/14251
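Going back to the "longer string" picture mentioned above, a rough sketch in bits shows why it overstates things; the gene length and the duplication probability below are made-up figures used only for illustration:

```python
from math import log2

gene_length = 1000                     # illustrative gene length, in bases
bits_per_base = log2(4)                # 2 bits per base under a uniform model
bits_one_copy = gene_length * bits_per_base                # 2000 bits

# The naive picture: treat the duplicated genome as a string twice as long,
# every base of which must arise independently at random.
bits_naive_two_copies = 2 * gene_length * bits_per_base    # 4000 bits

# A duplication event instead copies the existing gene wholesale; only the
# copying event itself has to happen (p_dup is a made-up figure).
p_dup = 1e-6
bits_with_duplication = bits_one_copy - log2(p_dup)        # ~2020 bits

print(bits_one_copy, bits_naive_two_copies, bits_with_duplication)
```

On this sketch, a duplication adds only the cost of the copying event itself, not a second helping of the original gene's improbability.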
vjtorley, March 29, 2011 at 11:03 AM PDT
I can splain it ferya mark- You sed:
This doesn’t conflict with anything I wrote. I say of a Royal Flush “is in some sense special”. This would be even more true of 5 Royal Flushes. I assume that you understand that 5 Royal Flushes is no more improbable than any other sequence of 65 cards? This is after all the whole reason for Dembski’s work on specification. The whole discussion is over why, in that case, do we find a Royal Flush (or 5 Royal Flushes) special.
I responded with: And if I were to receive the same 5 cards for 5 hands in a row - whatever those cards are - I would have a problem with that. Dembski has no problem with getting one of something. The problem would come in if A) the dealer called the hands before they were dealt and the dealer was right, and B) you keep getting the same thing over and over. So again, getting one royal flush dealt on the first hand isn't so surprising. Calling it and then dealing it to yourself would be questionable, and getting more than one royal flush in a row would also be questionable. Does that flow any better for you, or is there something specific you don't get?
Any hand is highly improbable - I agree - but when playing cards the probability that you will get a hand dealt to you is ONE. It is unavoidable. But if I were to get the same cards dealt to me for five hands in a row I would suspect foul play. If one person gets a royal flush on the first deal of the night, that is not an issue. But getting 5 in a row would be. The odds of getting a hand are ONE. The odds of getting a specific hand are much lower. The odds of getting that same hand again are even lower.
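For reference, here are the raw numbers being argued over, assuming a full, fair shuffle before every deal (a back-of-the-envelope sketch only):

```python
from math import comb

hands = comb(52, 5)                  # 2,598,960 distinct five-card hands
p_named_hand = 1 / hands             # chance of being dealt one pre-named hand
p_royal_flush = 4 / hands            # four suits give four royal flushes

# Chance that hands 2 through 5 repeat whatever hand 1 was (hand 1 itself is "free"):
p_same_five_times = p_named_hand ** 4

print(p_named_hand)                  # ~3.8e-7
print(p_royal_flush)                 # ~1.5e-6
print(p_same_five_times)             # ~2.2e-26
```

The first hand is "free"; it is the repetition of that same hand on the next four deals that drives the probability down.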
Joseph, March 29, 2011 at 11:03 AM PDT
markf:
I assume that you understand that 5 Royal Flushes is no more improbable than any other sequence of 65 cards?
Why 65? Getting the same cards dealt to you for five hands in a row is more improbable than getting any other combination of 5 cards dealt to you 5 hands in a row. The same goes for one person hitting a 5-number lottery 5 times in a row. If that happened, people would question the system.
Joseph, March 29, 2011 at 10:56 AM PDT
#39 Joseph
Can anyone on this discussion explain to me what Joseph is trying to say!!!
markf, March 29, 2011 at 10:51 AM PDT
markf:
This must be true because as soon as you discover that a non-directed process can easily generate a pattern you drastically reduce the level of CSI. Gene duplication is a perfect example.
Sorry markf, but invoking gene duplications in the context of the origin of life is just plain misleading. Also, there isn't any evidence that says gene duplications are non-directed. You would have to demonstrate that the OoL was undirected - which is one reason CSI pertains to origins, just as Dembski said.
Joseph, March 29, 2011 at 10:50 AM PDT