
Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information


It would be very nice if there was a magic scanner that automatically gave you a readout of the total amount of complex specified information (CSI) in a system when you pointed it at that system, wouldn’t it? Of course, you’d want one that could calculate the CSI of any complex system – be it a bacterial flagellum, an ATP synthase enzyme, a Bach fugue, or the faces on Mt. Rushmore – by following some general algorithm. It would make CSI so much more scientifically rigorous, wouldn’t it? Or would it?

This essay is intended as a follow-up to the recent thread, On the calculation of CSI by Mathgrrl. It is meant to address some concerns about whether CSI is sufficiently objective to qualify as a bona fide scientific concept.

But first, some definitions. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define complex specified information (or CSI) as follows (p. 311):

Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

Dembski and Wells then define specified complexity on page 320 as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY).

In this post, I’m going to examine seven demands which Intelligent Design critics have made with regard to complex specified information (CSI):

(i) that it should be calculable not only in theory but also in practice, for real-life systems;
(ii) that for an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system;
(iii) that it should be calculable by independent agents, in a consistent manner;
(iv) that it should be knowable with absolute certainty;
(v) that it should be precisely calculable (within reason) by independent agents;
(vi) that it should be readily computable, given a physical description of the system;
(vii) that it should be computable by some general algorithm that can be applied to an arbitrary system.

I shall argue that the first three demands are reasonable and have been met in at least some real-life biological cases, while the last four are not.

Now let’s look at each of the seven demands in turn.

(i) CSI should be calculable not only in theory but also in practice, for real-life systems

This is surely a reasonable request. After all, Professor William Dembski describes CSI as a number in his writings, and even provides a mathematical formula for calculating it.

On page 34 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski writes:

In my present treatment, specified complexity … is now … an actual number calculated by a precise formula (i.e., Chi=-log2[10^120.Phi_s(T).P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification. (Emphases mine – VJT.)

The reader will recall that according to the definition given in The Design of Life (The Foundation for Thought and Ethics, Dallas, 2008), on page 311, specified complexity is synonymous with complex specified information (CSI).

On page 24 of his essay, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

On page 17, Dembski defines Phi_s(T) as the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

P(T|H) is defined throughout the essay as a probability: the probability of a pattern T with respect to the chance hypothesis H.
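Since this formula will be applied repeatedly below, here is a minimal Python sketch of it as I read it, with Phi_s(T) and P(T|H) supplied as base-10 exponents so that astronomically small probabilities do not underflow; the function and argument names are mine, not Dembski's.

```python
import math

def specified_complexity(log10_phi_s, log10_p):
    """Chi = -log2[10^120 . Phi_s(T) . P(T|H)], with Phi_s(T) and P(T|H)
    passed in as base-10 exponents so that tiny probabilities don't underflow."""
    return -(120 + log10_phi_s + log10_p) * math.log2(10)

# For example, a pattern describable in five basic concepts (Phi_s(T) = 10^25)
# whose chance probability is 10^-266 scores roughly 402 bits:
print(round(specified_complexity(25, -266)))
```

On this formula, Chi exceeds 1 (the cutoff for a specification) exactly when the product inside the brackets falls below 1/2.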

During the past couple of days, I’ve been struggling to formulate a good definition of “chance hypothesis”, because for some people, “chance” means “totally random”, while for others it means “not directed by an intelligent agent possessing foresight of long-term results” and hence “blind” (even if law-governed), as far as long-term results are concerned. In his essay, Professor Dembski is quite clear that he means to include Darwinian processes (which are not totally random, because natural selection implies non-random death) under the umbrella of “chance hypotheses”. So here’s how I envisage it. A chance hypothesis describes a process which does not require the input of information, either at the beginning of the process or during the process itself, in order to generate its result (in this case, a complex system). On this definition, Darwinian processes would qualify as chance hypotheses, because they are claimed to be able to grow information, without the need for input from outside – whether by a front-loading or a tinkering Designer of life.

CSI has already been calculated for some quite large real-life biological systems. In a post on the recent thread, On the calculation of CSI, I calculated the CSI in a bacterial flagellum, using a naive provisional estimate of the probability P(T|H). The numeric value of the CSI was calculated as being somewhere between 2126 and 3422. Since this is far in excess of 1, the cutoff point for a specification, I argued that the bacterial flagellum was very likely designed. Of course, a critic could fault the naive provisional estimate I used for the probability P(T|H). But my point was that the calculated CSI was so much greater than the minimum value needed to warrant a design inference that it was incumbent on the critic to provide an argument as to why the calculated CSI should be less than or equal to 1.

In a later post on the same thread, I provided Mathgrrl with the numbers she needed to calculate the CSI of another irreducibly complex biological system: ATP synthase. As far as I am aware, Mathgrrl has not taken up my (trivially easy) challenge to complete the calculation, so I shall now do it for the benefit of my readers. The CSI of ATP synthase can be calculated as follows. The shortest semiotic description of the specific function of this molecule is “stator joining two electric motors”, which is five words. If we imagine (following Dembski) that we have a dictionary of basic concepts, and assume (generously) that there are no more than 10^5 (=100,000) entries in this dictionary, then the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T is (10^5)^5 or 10^25. This is Phi_s(T). I then quoted a scientifically respectable source (see page 236) which estimated the probability of ATP synthase forming by chance, under the most favorable circumstances (i.e. with a genetic code available), at 1 in 1.28×10^266. This is P(T|H). Thus:

Chi=-log2[10^120.Phi_s(T).P(T|H)]
=-log2[(10^145)/(1.28×10^266)]
=-log2[1/(1.28×10^121)]
=log2[1.28×10^121]
=log2[1.28×(2^3.321928)^121]
=log2[1.28×2^402],

or about 402, to the nearest whole number. Thus for ATP synthase, the CSI Chi is 402. Since 402 is far greater than 1, the cutoff point for a specification, we can safely conclude that ATP synthase was designed by an intelligent agent.
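For readers who want to check the arithmetic, here is a short Python sketch that reproduces the figure of 402, on the same assumptions as above (a lexicon of at most 10^5 basic concepts, a five-word description, and a probability of 1 in 1.28×10^266):

```python
import math

words_in_description = 5                 # "stator joining two electric motors"
log10_phi_s = 5 * words_in_description   # Phi_s(T) = (10^5)^5 = 10^25

log10_p = -(266 + math.log10(1.28))      # log10 of P(T|H) = 1/(1.28 x 10^266)

log10_product = 120 + log10_phi_s + log10_p   # 10^120 . Phi_s(T) . P(T|H)
chi = -log10_product * math.log2(10)          # convert -log10 into -log2
print(round(chi))                             # 402, matching the figure above
```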

[Note: Someone might be inclined to argue that conceivably, other biological structures might perform the same function as ATP synthase, and we’d have to calculate their probabilities of arising by chance too, in order to get a proper figure for P(T|H) if T is the pattern “stator joining two electric motors.” In reply: any other structures with the same function would have a lot more components – and hence be much more improbable on a chance hypothesis – than ATP synthase, which is a marvel of engineering efficiency. See here and here. As ATP synthase is the smallest biological molecule – and hence most probable, chemically speaking – that can do the job that it does, we can safely ignore the probability of any other more complex biological structures arising with the same functionality, as negligible in comparison.]

Finally, in another post on the same thread, I attempted to calculate the CSI in a 128×128 Smiley face found on a piece of rock on a strange planet. I made certain simplifying assumptions about the eyes on the Smiley face, and the shape of the smile. I also assumed that every piece of rock on the planet was composed of mineral grains in only two colors (black and white). The point was that these CSI calculations, although tedious, could be performed on a variety of real-life examples, both organic and inorganic.

Does this mean that we should be able to calculate the CSI of any complex system? In theory, yes; however in practice, it may be very hard to calculate P(T|H) for some systems. Nevertheless, it should be possible to calculate a provisional upper bound for P(T|H), based on what scientists currently know about chemical and biological processes.

(ii) For an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system.

This is an essential requirement for any meaningful discussion of CSI. What it means in practice is that if a team of aliens were to visit our planet after a calamity had wiped out human beings, they should be able to conclude, upon seeing Mt. Rushmore, that intelligent beings had once lived here. Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. I’m going to show in some detail how this could be done in these two cases, in order to convince the CSI skeptics.

Aliens visiting Earth after a calamity had wiped out human beings would not need to have a detailed knowledge of Earth history to arrive at the conclusion that Mt. Rushmore was designed by intelligent agents. A ballpark estimate of the Earth’s age and a basic general knowledge of Earth’s geological processes would suffice. Given this general knowledge, the aliens should be able to roughly calculate the probability of natural processes (such as wind and water erosion) being able to carve features such as a flat forehead, two eyebrows, two eyes with lids as well as an iris and a pupil, a nose with two nostrils, two cheeks, a mouth with two lips, and a lower jaw, at a single location on Earth, over 4.54 billion years of Earth history.

In order to formulate a probability estimate for a human face arising by natural processes, the alien scientists would have to resort to decomposition. Assuming for argument’s sake that something looking vaguely like a flat forehead would almost certainly arise naturally at any given location on Earth at some point during its history, the alien scientists would then have to calculate the probability that over a period of 4.54 billion years, each of the remaining facial features was carved naturally at the same location on Earth, in the correct order and position for a human face. That is, assuming the existence of a forehead-shaped natural feature, scientists would have to calculate the probability (over a 4.54 billion year period) that two eyebrows would be carved by natural processes, just below the forehead, as well as two eyes below the eyebrows, a nose below the eyes, two cheeks on either side of the nose, a mouth with two lips below the nose, and a jawline at the bottom, making what we would recognize as a face. The proportions would also have to be correct, of course. Since this probability is order-specific (as the facial features all have to appear in the right place), we can calculate it as a simple product – no combinatorics here.

To illustrate the point, I’ll plug in some estimates that sound intuitively right to me, given my limited background knowledge of geological processes occurring over the past 4.54 billion years: 1*(10^-1)*(10^-1)*(10^-10)*(10^-10)*(10^-6)*(10^-1)*(10^-1)*(10^-4)*(10^-2), for the forehead, two eyebrows, two eyes, nose, two cheeks, mouth and jawline respectively, giving a product of 10^(-36) – a very low number indeed. Raising that probability to the fourth power – giving a figure of 10^(-144) – would enable the alien scientists to calculate the probability of four faces being carved at a single location by chance, or P(T|H).

The alien scientists would then have to multiply this number (10^(-144)) by their estimate for Phi_s(T), or the number of patterns for which a speaker S’s semiotic description of them is at least as simple as S’s semiotic description of T. But how would the alien scientists describe the patterns they had found? If the aliens happened to find some dead people or dig up some human skeletons, they would be able to identify the creatures shown in the carvings on Mt. Rushmore as humans. However, unless they happened to find a book about American Presidents, they would not know who the faces were. Hence the aliens would probably formulate a modest semiotic description of the pattern they observed on Mt. Rushmore: four human faces.
A very generous estimate for Phi_s(T) is 10^15, as the description “four human faces” has three words (I’m assuming here that the aliens’ lexicon has no more than 10^5 basic words), and (10^5)^3=10^15. Thus the product Phi_s(T).P(T|H) is (10^15)*(10^(-144)) or 10^(-129). Finally, after multiplying the product Phi_s(T).P(T|H) by 10^120 (the maximum number of bit operations that could have taken place within the entire observable universe during its history, as calculated by Seth Lloyd), taking the log to base 2 of this figure and multiplying by -1, the alien scientists would then be able to derive a very conservative minimum value for the specified complexity Chi of the four human faces on Mt. Rushmore, without knowing anything specific about the Earth’s history. (I say “conservative” because the multiplier 10^120 is absurdly large, given that we are only talking about events occurring on Earth, rather than the entire universe.) In our worked example, the conservative minimum value for the specified complexity Chi would be -log2(10^(-9)), or approximately -log2(2^(-30))=30. Since the calculated specified complexity value of 30 is much greater than the cutoff level of 1 for a specification, the aliens could be certain beyond reasonable doubt that Mt. Rushmore was designed by an intelligent agent. They might surmise that this intelligent agent was a human agent, as the faces depicted are all human, but they could not be sure of this fact, without knowing the history of Mt. Rushmore.
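Once again the arithmetic can be checked in a few lines of Python. The per-feature probabilities below are simply my illustrative guesses from the paragraphs above, not measured values.

```python
import math

# Exponents for forehead, two eyebrows, two eyes, nose, two cheeks, mouth, jawline
feature_exponents = [0, 1, 1, 10, 10, 6, 1, 1, 4, 2]
log10_one_face = -sum(feature_exponents)      # 10^-36 for a single face
log10_p = 4 * log10_one_face                  # P(T|H) = 10^-144 for four faces

log10_phi_s = 5 * 3                           # "four human faces": (10^5)^3 = 10^15

chi = -(120 + log10_phi_s + log10_p) * math.log2(10)
print(round(chi))                             # about 30, as above
```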

Likewise, if human astronauts were to discover a monolith on the moon (as in the movie 2001), they should still be able to calculate a minimum value for its CSI, without knowing its history. Even if they were unable to figure out the purpose of the monolith, the astronauts would still realize that the likelihood of natural processes on the moon being able to generate a black cuboid figure with perfectly flat faces, whose lengths were in the ratio of 1:4:9, is very low indeed. To begin with, the astronauts might suppose that at some stage in the past, volcanic processes on the moon, similar to the volcanic processes that formed the Giants’ Causeway in Ireland, were able to produce a cuboid with fairly flat faces – let’s say to an accuracy of one millimeter, or 10^(-3) meters. However, the probability that the sides’ lengths would be in the exact ratio of 1:4:9 (to the level of precision of human scientists’ instruments) would be astronomically low, and the probability that the faces of the monolith would be perfectly flat would be infinitesimally low.

For instance, let’s suppose for simplicity’s sake that the length of each side of a naturally formed cuboid has a uniform probability distribution over a finite range of 0 to 10 meters, and that the level of precision of scientific measuring instruments is to the nearest nanometer (1 nanometer=10^(-9) meters). Then the length of one side of a cuboid can assume any of 10×10^9=10^10 possible values, all of which are equally probable. Let’s also suppose that the length of the shortest side just happens to be 1 meter, for simplicity’s sake. Then the probability that the other two sides would have lengths of 4 and 9 meters would be 6×(10^(-10))×(10^(-10)) (as there are six ways in which the sides of a cuboid can have lengths in the ratio of 1:4:9), or 6×10^(-20).

Now let’s go back to the faces, which are not fairly flat but perfectly flat, to within an accuracy of one nanometer, as opposed to one millimeter (the level of accuracy achieved by natural processes). At any particular point on the monolith’s surface, the probability that it will be accurate to that degree is (10^(-9))/(10^(-3)) or 10^(-6). The number of distinct points on the surface of the monolith which scientists can measure at nanometer accuracy is (10^9)×(10^9)×(surface area in square meters), or 98×(10^18), or about 10^20. Thus the probability that each and every point on the monolith’s surface will be perfectly flat, to within an accuracy of one nanometer, is (10^(-6))^(10^20), or about 10^(-(6×10^20)). This is vastly smaller than 6×10^(-20), so we’ll let 10^(-(6×10^20)) be our P(T|H), as a ballpark approximation.

This probability would then need to be multiplied by Phi_s(T). The simplest semiotic description of the pattern observed by the astronauts would be: flat-faced cuboid, sides’ lengths 1, 4, 9. Treating “flat-faced” as one word, this description has seven terms, so Phi_s(T) is (10^5)^7=10^35. Next, the astronauts would multiply the product Phi_s(T).P(T|H) by 10^120, but because the exponent 6×10^20 is so much greater in magnitude than the other exponents (120 and 35), the overall result will still be about 10^(-(6×10^20)). Thus the specified complexity Chi=-log2[10^120.Phi_s(T).P(T|H)] is approximately 3.321928×(6×10^20), or about 2×10^21. This is an astronomically large number, much greater than the cutoff point of 1, so the astronauts could be certain that the monolith was made by an intelligent agent, even if they knew nothing about its history and had only a basic knowledge of lunar geological processes.
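Because P(T|H) here is far too small to represent as an ordinary floating-point number, the arithmetic has to be kept in log space. Here is a sketch under the same simplifying assumptions as above (natural accuracy of one millimeter, measurements to the nearest nanometer, 98 square meters of surface area):

```python
import math

points_per_m2 = (10 ** 9) ** 2          # nanometer grid: 10^18 measurable points per square meter
surface_points = 98 * points_per_m2     # about 10^20 points over the whole monolith
log10_p_flat = surface_points * (-6)    # 10^-6 per point, so P(T|H) is about 10^(-(6 x 10^20))

log10_phi_s = 5 * 7                     # "flat-faced cuboid, sides' lengths 1, 4, 9": (10^5)^7 = 10^35

log10_product = 120 + log10_phi_s + log10_p_flat
chi = -log10_product * math.log2(10)
print(f"{chi:.2e}")                     # roughly 2 x 10^21
```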

Having said that, it has to be admitted that sometimes, a lack of knowledge about the history of a complex system can skew CSI calculations. For example, if a team of aliens visiting Earth after a nuclear holocaust found the body of a human being buried in the Siberian permafrost, and managed to sequence the human genome using cells taken from that individual’s body, they might come across a duplicated gene. If they did not know anything about gene duplication – which might not occur amongst organisms on their planet – they might at first regard the discovery of two neighboring genes having virtually the same DNA sequence as proof positive that the human genome was designed – like lightning striking in the same place twice – causing them to arrive at an inflated estimate for the CSI in the genome. Does this mean that gene duplication can increase CSI? No. All it means is that someone (e.g. a visiting alien scientist) who doesn’t know anything about gene duplication will overestimate the CSI of a genome in which a gene is duplicated. But since modern scientists know that gene duplication does occur as a natural process, and since they also know the rare circumstances that make it occur, they know that the probability of duplication for the gene in question, given these circumstances, is exactly 1. Hence, the duplication of a gene adds nothing to the probability of the original gene occurring by chance. P(T|H) is therefore the same, and since the verbal descriptions of the two genomes are almost exactly the same – the only difference, in the case of a gene duplication, being “x2” plus brackets that go around the duplicated gene – the CSI will be virtually the same. Gene duplication, then, does not increase CSI.

Even in this case, where the aliens, not knowing anything about gene duplication, are liable to be misled when estimating the CSI of a genome, they could still adopt a safe, conservative strategy of ignoring duplications (as they generate nothing new per se) and focusing on genes that have a known, discrete function, which is capable of being described concisely, thereby allowing them to calculate Phi_s(T) for any functional gene. And if they also knew the exact sequence of bases along the gene in question, the number of alternative base sequences capable of performing the same function, and finally the total number of base sequences which are physically possible for a gene of that length, the aliens could then attempt to calculate P(T|H), and hence calculate the approximate CSI of the gene, without a knowledge of the gene’s history. (I am of course assuming here that at least some genes found in the human genome are “basic” in their function, as it were.)
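To make that last step concrete, here is a sketch of how such a calculation might run. Every number in it is a made-up assumption for illustration only: a 900-base gene, a purely hypothetical estimate of 10^50 base sequences capable of performing the same function, and a three-word description of that function.

```python
import math

gene_length = 900
log10_total_sequences = gene_length * math.log10(4)            # 4^900 physically possible sequences
log10_functional_sequences = 50                                # hypothetical estimate
log10_p = log10_functional_sequences - log10_total_sequences   # P(T|H)

log10_phi_s = 5 * 3                                            # a three-word functional description

chi = -(120 + log10_phi_s + log10_p) * math.log2(10)
print(round(chi))    # far above the cutoff of 1, on these assumptions
```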

(iii) CSI should be calculable by independent agents, in a consistent manner.

This, too, is an essential requirement for any meaningful discussion of CSI. Beauty may be entirely in the eye of the beholder, but CSI is definitely not. The following illustration will serve to show my point.

Suppose that three teams of scientists – one from the U.S.A., one from Russia and one from China – visited the moon and discovered four objects there that looked like alien artifacts: a round mirror with a picture of what looks like Pinocchio playing with a soccer ball on the back; a calculator; a battery; and a large black cube made of rock whose sides are equal in length, but whose faces are not perfectly smooth. What I am claiming here is that the various teams of scientists should all be able to rank the CSI of the four objects in a consistent fashion – e.g. “Based on our current scientific knowledge, object 2 has the highest level of CSI, followed by object 3, followed by object 1, followed by object 4” – and that they should be able to decide which objects are very likely to have been designed and which are not – e.g. “Objects 1, 2 and 3 are very likely to have been designed; we’re not so sure about object 4.” If this level of agreement is not achievable, then CSI is no longer a scientific concept, and its assessment becomes more akin to art than science.

We can appreciate this point better if we consider the fact that three art teachers from the same cultural, ethnic and socioeconomic backgrounds (e.g. three American Hispanic middle class art teachers living in Miami and teaching at the same school) might reasonably disagree over the relative merits of four paintings by different students at their school. One teacher might discern a high degree of artistic maturity in a certain painting, while the other teachers might see it as a mediocre work. Because it is hard to judge the artistic merit of a single painting by an artist, in isolation from that artist’s body of work, some degree of subjectivity when assessing the merits of an isolated work of art is unavoidable. CSI is not like this.

First, Phi_s(T) depends on the basic concepts in your language, which are public and not private, as you share them with other speakers of your language. These concepts will closely approximate the basic concepts of other languages; again, the concepts of other languages are shareable with speakers of your language, or translation would be impossible. Intelligent aliens, if they exist, would certainly have basic concepts corresponding to geometrical and other mathematical concepts and to biological functions; these are the concepts that are needed to formulate a semiotic description of a pattern T, and there is no reason in principle why aliens could not share their concepts with us, and vice versa. (For the benefit of philosophers who might be inclined to raise Quine’s “gavagai” parable: Quine’s mistake, in my view, was that he began his translation project with nouns rather than verbs, and that he failed to establish words for “whole” and “part” at the outset. This is what one should do when talking to aliens.)

Second, your estimate for P(T|H) will depend on your scientific choice of chance hypothesis and the mathematics you use to calculate the probability of T given H. A scientific hypothesis is capable of being critiqued in a public forum, and/or tested in a laboratory; while mathematical calculations can be checked by anyone who is competent to do the math. Thus P(T|H) is not a private assessment; it is publicly testable or checkable.

Let us now return to our illustration regarding the three teams of scientists examining four lunar artifacts. It is not necessary that the teams of scientists are in total agreement about the CSI of the artifacts, in order for it to be a meaningful scientific concept. For instance, it is possible that the three teams of scientists might arrive at somewhat different estimates of P(T|H), the probability of a pattern T with respect to the chance hypothesis H, for the patterns found on the four artifacts. This may be because the chance hypotheses considered by the various teams of scientists may be subtly different in their details. However, after consulting with each other, I would expect that the teams of scientists should be able to resolve their differences and (eventually) arrive at an agreement concerning the most plausible chance hypothesis for the formation of the artifacts in question, as well as a ballpark estimate of its magnitude. (In difficult cases, “eventually” might mean: over a period of some years.)

Another source of potential disagreement lies in the fact that the three teams of scientists speak different languages, whose basic concepts are very similar but not 100% identical. Hence their estimates of Phi_s(T), or the number of patterns for which a speaker S’s semiotic description is at least as simple as S’s semiotic description of a pattern T identified in a complex system, may be slightly different. To resolve these differences, I would suggest that as far as possible, the scientists should avoid descriptions which are tied to various cultures or to particular individuals, unless the resemblance is so highly specific as to be unmistakable. Also, the verbs employed should be as clear and definite as possible. Thus a picture on an alien artifact depicting what looks like Pinocchio playing with a soccer ball would be better described as a long-nosed boy kicking a black and white truncated icosahedron.
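Under the convention used throughout this post (a lexicon of at most 10^5 basic concepts, so that an n-word description bounds Phi_s(T) by (10^5)^n), the cost of such caution is easy to compute: the more concrete nine-word description carries a larger, and therefore more generous, Phi_s(T) than the culture-bound six-word one.

```python
# Phi_s(T) under this post's convention: at most 10^5 basic concepts in the
# lexicon, so an n-word description bounds Phi_s(T) by (10^5)^n.
def phi_s(description, lexicon_size=10 ** 5):
    return lexicon_size ** len(description.split())

print(phi_s("Pinocchio playing with a soccer ball"))                             # 6 words: 10^30
print(phi_s("long-nosed boy kicking a black and white truncated icosahedron"))   # 9 words: 10^45
```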

(iv) CSI should be knowable with absolute certainty.

Science is provisional. Based on what scientists know, it appears overwhelmingly likely that the Earth is 4.54 billion years old, give or take 50 million years. A variety of lines of evidence point to this conclusion. But if scientists discovered some new astronomical phenomena that could only be accounted for by positing a much younger Universe, then they’d have to reconsider the age of the Earth. In principle, any scientific statement is open to revision or modification of some sort. Even a statement like “Gold has an atomic number of 79”, which expresses a definition, could one day fall into disuse if scientists found a better concept than “atomic number” for explaining the fundamental differences between the properties of various elements.

Hence the demand by some CSI skeptics for absolute ironclad certainty that a specified complex system is the product of intelligent agency is an unscientific one.

Likewise, the demand by CSI skeptics for an absolutely certain, failproof way to measure the CSI of a system is also misplaced. Just as each of the various methods used by geologists to date rocks has its own limitations and situations where it is liable to fail, so too the various methods that Intelligent Design scientists come up with for assessing P(T|H) for a given pattern T and chance hypothesis H, will have their own limitations, and there will be circumstances when they yield the wrong results. That does not invalidate them; it simply means that they must be used with caution.

(v) CSI should be precisely calculable (within reason) by independent agents.

In a post (#259) on the recent thread, On the calculation of CSI, Jemima Racktouey throws down the gauntlet to Intelligent Design proponents:

If “CSI” objectively exists then you should be able to explain the methodology to calculate it and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact.

On the surface this seems like a reasonable request. For instance, the same rock dating methods are used by laboratories all around the world, and they yield consistent results when applied to the same rock sample, to a very high degree. How sure can we be that a lab doing Intelligent Design research in, say, Moscow or Beijing, would yield the same result when assessing the CSI of a biological sample as the Biologic Institute in Seattle, Washington?

The difference between the procedures used in the isochron dating of a rock sample and those used when assessing the CSI of a biological sample is that in the former case, the background hypotheses that are employed by the dating method have already been spelt out, and the assumptions that are required for the method to work can be checked in the course of the actual dating process; whereas in the latter case, the background chance hypothesis H regarding the most likely process whereby the biological sample might have formed naturally has not been stipulated in advance, and different labs may therefore yield different results because they are employing different chance hypotheses. This may appear to generate confusion; in practice, however, I would expect that two labs that yielded wildly discordant CSI estimates for the same biological sample would resolve the issue by critiquing each other’s methods in a public forum (e.g. a peer-reviewed journal).

Thus although in the short term, labs may disagree in their estimates of the CSI in a biological sample, I would expect that in the long term, these disagreements can be resolved in a scientific fashion.

(vi) CSI should be readily computable, given a physical description of the system.

In a post (#316) on the recent thread, On the calculation of CSI, a contributor named Tulse asks:

[I]f this were a physics blog and an Aristotelian asked how to calculate the position of an object from its motion, … I’d expect someone to simply post:

y = x + vt + 1/2at**2

If an alchemist asked on a chemistry blog how one might calculate the pressure of a gas, … one would simply post:

p=(NkT)/V

And if a young-earth creationist asked on a biology blog how one can determine the relative frequencies of the alleles of a gene in a population, … one would simply post:

p² + 2pq + q² = 1

These are examples of clear, detailed ways to calculate values, the kind of equations that practicing scientists uses all the time in quotidian research. Providing these equations allows one to make explicit quantitative calculations of the values, to test these values against the real world, and even to examine the variables and assumptions that underlie the equations.

Is there any reason the same sort of clarity cannot be provided for CSI?

The answer is that while the CSI of a complex system is calculable, it is not computable, even given a complete physical knowledge of the system. The reason for this fact lies in the formula for CSI.

On page 24 of his essay, Specification: The Pattern That Signifies Intelligence, Professor Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)]

where Phi_s(T) is the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T, and P(T|H) is the probability of a pattern T with respect to the chance hypothesis H.

The problem here lies in Phi_s(T). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define Kolmogorov complexity and descriptive complexity as follows (p. 311):

Kolmogorov complexity is a form of computational complexity that measures the length of the minimum program needed to solve a computational problem. Descriptive complexity is likewise a form of computational complexity, but generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern. (Emphasis mine – VJT.)

In a comment (#43) on the recent thread, On the calculation of CSI, I addressed a problem raised by Mathgrrl:

While I understand your motivation for using Kolmogorov Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable.

To which I replied:

Quite so. That’s the point. Intelligence is non-computational. That’s one big difference between minds and computers. But although CSI is not computable, it is certainly measurable mathematically.

The reason, then, why CSI is not physically computable is that it is not only a physical property but also a semiotic one: its definition invokes both a semiotic description of a pattern T and the physical probability of a non-foresighted (i.e. unintelligent) process generating that pattern according to chance hypothesis H.

(vii) CSI should be computable by some general algorithm that can be applied to an arbitrary system.

In a post (#263) on the recent thread, On the calculation of CSI, Jemima Racktouey issues the following challenge to Intelligent Design proponents:

If CSI cannot be calculated then the claims that it can are bogus and should not be made. If it can be calculated then it can be calculated in general and there should not be a very long thread where people are giving all sorts of reasons why in this particular case it cannot be calculated. (Emphasis mine – VJT.)

And again in post #323, she writes:

Can you provide such a definition of CSI so that it can be applied to a generic situation?

I would like to note in passing how the original demand of ID critics that CSI should be calculable has grown into a demand that it should be physically computable, which has now been transformed into a demand that it should be computable by a general algorithm. This demand is tantamount to putting CSI in a straitjacket of the materialists’ making. What the CSI critics are really demanding here is a “CSI scanner” which automatically calculates the CSI of any system, when pointed in the direction of that system. There are two reasons why this demand is unreasonable.

First, as I explained earlier in part (vi), CSI is not a purely physical property. It is a mixed property – partly semiotic and partly physical.

Second, not all kinds of problems admit of a single, generic solution that can be applied to all cases. An example of this in mathematics is the Halting problem. I shall quote here from the Wikipedia entry:

In computability theory, the halting problem is a decision problem which can be stated as follows: Given a description of a program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines. (Emphasis mine – VJT.)
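For concreteness, here is the standard diagonal argument in Python form: assume someone hands you a general decider halts(program, data), and you can immediately build a program it must get wrong. The naive_halts stub below is only a placeholder; the point is that no implementation can be correct on every input.

```python
def naive_halts(program, data):
    # Stand-in for the supposed general decider. This one just answers "yes"
    # to everything; the argument shows that ANY implementation must be wrong
    # on some program-input pair.
    return True

def troublemaker(program):
    # Do the opposite of whatever the decider predicts about running
    # 'program' on its own source.
    if naive_halts(program, program):
        while True:   # decider says "halts", so loop forever
            pass
    return "halted"   # decider says "loops forever", so halt at once

# Consider troublemaker(troublemaker): it halts exactly when the decider says
# it doesn't, so no decider can be right about every program-input pair.
```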

So here’s my counter-challenge to the CSI skeptics: if you’re happy to acknowledge that there’s no generic solution to the halting problem, why do you demand a generic solution to the CSI problem – that is, the problem of calculating, after being given a complete physical description of a complex system, how much CSI the system embodies?

Comments
Thanks a bunch Kairosfocus. You helped a great deal in putting the mentioned research into a much needed perspective for me. Always a pleasure listening to your insightful comments. above
PS: I add, that "template copying" is here used to suggest that there is an accounting for the step by step, information coded translation of mRNA information into a protein chain. Even if that were so, it would not account for how the DNA comes to store the relevant information, how mRNA is created by cellular machines step by step, and how the resulting proteins are so ordered that we get correct folding and function. But worse, the process that creates a protein is a step by step algorithmic one, not a case of some sort of catalysis on a template. kairosfocus
Above Now that you give a more direct link, that works, here is the key excerpt from the abstract: ___________________ >> . . . Fatty acids and their corresponding alcohols and glycerol monoesters are attractive candidates for the components of protocell membranes because they are simple amphiphiles that form bilayer membrane vesicles3–5 that retain encapsulated oligonucleotides3,6 and are capable of growth and division7–9. Here we show that such membranes allow the passage of charged molecules such as nucleotides, so that activated nucleotides added to the outside of a model protocell spontaneously cross the membrane and take part in efficient template copying in the protocell interior. The permeability properties of prebiotically plausible membranes suggest that primitive protocells could have acquired complex nutrients from their environment in the absence of any macromolecular transport machinery; that is, they could have been obligate heterotrophs. >> ___________________ This is a suggestion about the chemical composition of the membrane bag for an imagined protocell. Unfortunately, it is not only speculative and uses terms like growth and division in ways that fudge the difference between chemical processes and the information controlled step by step process of cell growth and division, but ducks the material point that what is to be accounted for in the origin of observed cell based life is a metabolising entity that integrates an information-storing, von Neumann self replicator centred on DNA with the code of life in it. The paper discusses little more than one or two of the scenarios long since discussed and evaluated by Thaxton et al in ch 10 of TMLO in 1984. That you can form a "plastic bag" using a version of fatty molecules, and that these may break up into two different bags [much as a soap bubble can sometimes break into two], is utterly irrelevant to real cell division. That such globules can contain chemicals relevant to life, does not explain the origin of the observed information system based operation of life, especially the coded DNA information, the code, the algorithms, the regulation of expression of genes and so on. That, for decades, we routinely see the sort of gross exaggeration of actual results into claimed justification for a grand metaphysical story of the origin of life dressed up in a lab coat, is telling. Indeed, it is a mark of desperation. GEM of TKI kairosfocus
Thanks for the help Kairosfocus. Here's the link to the article in case you wanted to have a look: http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Mansy_et_al_Nature_2008.pdf I just tried it and it works for me. above
Above: First check: if there really was a solution to the OOL problem on evolutionary materialist grounds, or something that looked close, it would be all over every major news network. So, you can be sure that the claims are grossly exaggerated. Here's the lead for the Wiki article you clipped: ____________ >> Telomerase is an enzyme that adds DNA sequence repeats ("TTAGGG" in all vertebrates) to the 3' end of DNA strands in the telomere regions, which are found at the ends of eukaryotic chromosomes. This region of repeated nucleotide called telomeres contains non-coding DNA material and prevents constant loss of important DNA from chromosome ends. As a result, every time the chromosome is copied only 100-200 nucleotides are lost, which causes no damage to the organism's DNA. Telomerase is a reverse transcriptase that carries its own RNA molecule, which is used as a template when it elongates telomeres, which are shortened after each replication cycle. The existence of a compensatory shortening of telomere (telomerase) mechanism was first predicted by Soviet biologist Alexey Olovnikov in 1973,[1] who also suggested the telomere hypothesis of aging and the telomere's connections to cancer. Telomerase was discovered by Carol W. Greider and Elizabeth Blackburn in 1984 in the ciliate Tetrahymena.[2] Together with Jack W. Szostak, Greider and Blackburn were awarded the 2009 Nobel Prize in Physiology or Medicine for their discovery.[3] >> _____________ Not very promising relative to the origin of a self-replicating entity that uses a von Neumann self-replicator tied to a metabolic entity. A Nobel Prize announcement article at Harvard -- your second link will not work for me -- says in part:
Jack Szostak, a genetics professor at Harvard Medical School and Harvard-affiliated Massachusetts General Hospital (MGH), has won the 2009 Nobel Prize in physiology or medicine for pioneering work in the discovery of telomerase, an enzyme that protects chromosomes from degrading. The work not only revealed a key cellular function, it also illuminated processes involved in disease and aging . . . . The three won the prize for work conducted during the 1980s to discover and understand the operation of telomerase, an enzyme that forms protective caps called telomeres on the ends of chromosomes. Subsequent research has shown that telomerase and telomeres hold key roles in cell aging and death and also play a part in the aging of the entire organism. Research has also shown that cancer cells have increased telomerase activity, protecting them from death.
In short, the two issues -- telomerase activity and the origin of cell based life with a vNSR joined to a metabolic entity -- are almost completely irrelevant. The commenter at Amazon is plainly in gross and distractive error. GEM of TKI kairosfocus
PAV: A bit woozy from a rougher than expected return ferry trip to Montserrat, Yellow Hole having lived up to its reputation. Wasn't even able to get a glimpse of the Green Flash by way of compensation on the way home due to some clouds low on the W horizon. Anyway, let's pick up quickly:
What Dembski has in mind, I believe, is the criticism leveled at ID that goes like this: “You say that life is highly improbable. But there it is. This is just like a lottery ticket. It’s likelihood is very low. Yet they have a lottery and someone always wins.”
Lotteries are winnable, of course, because they are designed to be winnable. There is no comparison to the challenge for origin of FSCI by chance plus mechanical necessity without intelligent direction. For that, the infinite monkeys theorem is the killer. GEM of TKI kairosfocus
@ Kairosfocus -"Until the advocates of abiogenesis can show a reasonable, empirically supported pathway to first cell based life that does not require searches deeper than 1 in 10^50 or so, and cumulate to give a system with integrated metabolism and von Neumann type informationally based self-replication, they have no root to their imagined tree of life." Yesterday I ran into a poster on Amazon that claimed the following: “If you’re dealing with the Origins of Life on Earth, we actually have discovered (in 2009 in fact) how life began on earth. This has been CONFIRMED in Dr. Jack Szostak’s LAB – 2009 Nobel Laurette in medicine for his work on telomerase. (http://en.wikipedia.org/wiki/Telomerase) The scientific research documentation can be read here: http://genetics.mgh.harvard.ed.....e_2008.pdf” I asked this elsewhere and UprightBiped told me there's not much to the claim. I also wanted to hear what you have to say, as you have helped me a lot in putting the whole darwinism/ID issue into perspective in the past. Is there any truth to the claim that Szostak's work has provided evidence of abiogenesis? Much appreciated. above
markf: @[147]:
Unfortunately Dembski introduces the formula on page 18 as a general way of calculating specificity when it is not known whether n is large or small compared to p.
On page 18, in SP, Dembski uses the example of the bacterial flagellum, and he identifies N= 10^20 specification resources for its description. So we know what N is in this case. And, if p > 10^-120, then "specified complexity" is out of the question. So, it is safe to assume that p = P(T|H) is extremely small. And therefore, p^2 is of no importance to our consideration. And, of course, even if N = 1, this is hugely greater than p. @ [10]:
The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of single event having outcome x is p, then the probability of at least one event having outcome x is not np . It is (1–(1-p)^n). So the calculation 10^120.Phi_s(T).P(T|H) is wrong .
You're assuming that he wants to calculate the probability of "at least one event having outcome x". That's not his intention. This is what he says about the relevant probability: Factoring in these N specificational resources then amounts to checking whether the probability of hitting any of these targets by chance is small, which in turn amounts to showing that the product Np is small. This, obviously, is not "the probability of at least one event having outcome x". What Dembski has in mind, I believe, is the criticism leveled at ID that goes like this: "You say that life is highly improbable. But there it is. This is just like a lottery ticket. It's likelihood is very low. Yet they have a lottery and someone always wins." N = the specification resources involved. So, if the probability of a single lottery ticket winning is 1 in 100 million, and you sell 100 million tickets, then the probability of someone winning is 10^8 x 10^-8 = approx 1. That is, someone is going to win. Yes, there's all kinds of variables, and this number may not be precise; but it makes clear that the more specificational resources that are available (printed lottery tickets) the less improbable it is that someone is going to hit the right "target" (the "winning" lottery numbers). PaV
PAV: The basic problem with ev and similar things, is that they START on an island of function, based on intelligent design. At most, they are capable of showing how intelligent design can drive evolutionary adaptation of a base design in an established functional environment. In terms of search capacity of such a system, the Wiki infinite monkeys theorem page comments:
The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[20] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
Now, 128^25 = 4.79*10^52, i.e. we see that a feasible search can approach something that is isolated to 1 in 10^50 or so of a config space. But, when we move the space to 1 in 10^301 or more [1,000 bits worth of configs], the entire resources of the observable cosmos, working at the fastest physically plausible rates, for its thermodynamic lifespan cannot credibly sample over 1 in 10^150 of the space. A practical zero. Until the advocates of abiogenesis can show a reasonable, empirically supported pathway to first cell based life that does not require searches deeper than 1 in 10^50 or so, and cumulate to give a system with integrated metabolism and von Neumann type informationally based self-replication, they have no root to their imagined tree of life. We already have a known source of functionally specific, complex, information-rich organisation: intelligence. The infinite monkeys type analysis backs that up. So, on inference to best explanation, the best, most credible and warranted explanation for origin of life is design. When we then turn to the next level, origin of major body plans, we find much larger increments of integrated, regulated bio information: 10's to 100's of millions of bits as a reasonable minimum, dozens of times over, not just 100 - 1,000 k bits. It is reasonable to also infer that such body plans were designed. (And no, this is not "cows are designed," but that plans ranging from arthropods to trees to banana plants, to whales, bats and birds as well as ourselves, are designed.) Let us hear from the objectors that they have empirically based, reasonable grounds for showing that life's origin and that of major body plans is adequately explained on blind chance plus mechanical necessity. Failing that, the inference to design is as well warranted as any empirical inference to best explanation we make. Regardless of hair-splitting debates on quantitative models, analyses and metrics for CSI and/or FSCI. G'day GEM of TKI kairosfocus
Thanks KF: Your analysis reminds me of something. When it comes to the supposed Shannon information, there is as much "Shannon Information" when the 265 bit string is selected randomly at the start, as at the finish. The real claim is that "specificity" was brought about; i.e., that the first half of the bit string, which is to represent the protein to be "bound to", matches, in places, the second half of the bit string. And, indeed, this does happen. But the complexity, as I have pointed out countless times already, does not rise to the UPB. And, the lingering question is: what influence do the "weight matrix" Schneider uses, and the fact that "mistakes" are calculated, have on the true "chance" character of the final output? So, clearly "specificity" has arisen; but is it due, truly, to pure chance? Very likely not. PaV
PAV, 169:
MathGrrl [151]: Schneider has demonstrated that known evolutionary mechanisms can create Shannon information. [PAV, 169:] So does flipping a coin sequentially, and generating a bit string by letting 1 equal “heads”, and 0 equal “tails”.
Shannon info is a metric of info carrying capacity with a particular code pattern and system where symbols si have probabilities of occurence pi. So, we do a sum over i of pi log pi metric, H. (Please note my summary here; which is linked from every comment-post I have ever made at UD.) That info carrying capacity metric has nothing in itself to do with the meaningful functionality of information, except that the highest case of H is with a string where there is no correlation between symbols in a string, i.e flat random distribution. A meaningful message is not going to peak out H, where the point of most communication systems is to store, carry or process just such meaningful or functional information. That is where the idea of functionally specific, complex information comes from, and it is why being able to identify its properties is important. As, we are usually interested in working -- meaningful -- information. For instance, when I prepared a tutorial note some years ago, I put the matter this way:
[In the context of computers] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]
I also note again that signal to noise ratio is an important characteristic of communication systems, and it pivots on distinct characteristics of intelligent signals vs meaningless noise. Indeed, every time one infers to signal as opposed to noise, one is making an inference to design. GEM of TKI PS: Have been having some difficulties with communication access, so pardon gappiness. kairosfocus
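As a concrete illustration of the H metric just described (and of the coin-flip example quoted above), a few lines of Python: the per-symbol measure H is maximal for fair coin flips and is blind to whether the string means anything.

```python
import math
import random
from collections import Counter

def entropy_per_symbol(s):
    # H = sum over symbols of p_i * log2(1/p_i), estimated from frequencies
    counts = Counter(s)
    return sum((n / len(s)) * math.log2(len(s) / n) for n in counts.values())

random.seed(0)
fair_flips = "".join(random.choice("HT") for _ in range(10_000))
biased_flips = "".join(random.choices("HT", weights=[9, 1], k=10_000))

print(entropy_per_symbol(fair_flips))     # close to 1 bit per flip, the maximum for two symbols
print(entropy_per_symbol(biased_flips))   # roughly 0.47 bits per flip
print(entropy_per_symbol("HHHHHHHHHH"))   # 0.0 bits: no uncertainty, no capacity used
```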
Collin [173]:
Perhaps arrowheads and rocks that look like arrowheads.
I suspected (and hoped) you would say this. Why? Because we often contend with the Darwinists who say, "Well, who Designed the Designer?" We point out to them that if you find rocks that "look" like they've been cut to form arrowheads, then you're assuming design without knowing who designed the 'arrowheads'. Your comment suggests that they could be wrong. But nevertheless, despite the fact that they might be confusing the 'natural' for the 'designed', they will call this kind of work "science", and give it a name, "paleontology". But, of course, ID is not a science. They just know these things! Ask them. PaV
Collin, I am sure MathGrrl knows natural selection when she sees it. :cool: Joseph
Re MF, 128:
Dembski’s paper and definition of CSI makes no references to outcomes being valuable (or functional). He seeks to define the specification purely in terms of KC simplicity. The issue of using function or value as a specification is a different one.
Functional specifications are of course, just that: specifications. That is, FSCI is a subset of CSI. Cf. Orgel and Wicken, in the 1970's, as repeatedly linked and excerpted. Also, cf the Abel, Trevors, Chiu and Durston et al work on FSC [cf the paper on distinguishing OSC, RSC and FSC here, especially the figure here, and the onward development and application of a metric of FSC to 35 protein families here as has been cited and/or linked repeatedly in recent discussions], which builds on the same principles Dembski uses, and focuses specifically on functionality as specification. KC complexity is a way of saying that the pattern in the specifications, is distinct from the simple reproduction of the sequence in question, by quoting it, or the mere repetition of a given block. Notice Thaxton Bradley and Olsen in TMLO 1984 [the very first modern design theory technical book; cf here -- fat pdf] in ch 8, contrasting:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . . 2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides). 3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
(Think, how the number picked as winner has to be fully quoted in a lottery. A truly random sequence has no redundancy -- no correlation between any one digit and any other -- and is therefore highly resistant to compression; the digit value at any one place in a string is maximally contingent. An orderly -- thus low contingency -- sequence will normally be highly compressible and periodic, typically: "repeat BLOCK X n times." A functional sequence will normally be aperiodic, thus fairly close to random in resistance to string compression, but will have some redundancy, reflecting the underlying linguistic rules and/or data structures required to specify a functional entity and communicate the information in a usable manner. Function may typically be algorithmic, linguistic or structural. Recall, a structural entity or mechanism can as a rule be converted into a net list with nodes, interfaces and connecting arcs; i.e. the Wicken wiring diagram.) KC complexity is an index of being simply describable/compressible, for instance, cf. what VJT has done in the worked out examples above. It is mainly a way to give an indicator of the complexity in the first instance, and does not exclude functional specificity. Describing the function to be carried out by a particular body of information, can easily be a way of specifying it, e.g. a particular mRNA gives the sequence of amino acids for a particular protein used to do X in the cell. As a second example, each of the 20 or so tRNA's will carry a particular AA, and will fit a specific codon with its anticodon sub-string. In turn, the key-lock functionality of such RNA's is required to step by step -- i.e. algorithmically, by definition -- chain a given protein in the ribosome. This brings to bear structure, function, algorithm and code aspects of functionality, and we see as well that we can give a functional description independent of quoting the strings. That the resulting protein has to fold properly and have the right functional elements in the right place, shows that we are dealing with islands of function in large configuration spaces. Relatively few of the 2.04 * 10^390 possible AA sequences for a 300 AA string will do the required job in the cell. Cells of course use hundreds of different proteins to do their jobs, and in turn the required mRNAs and regulatory controls are coded for in the DNA. That serves to indicate the particular fold domain for the protein, and the specific role it fulfills. The mRNA therefore fits on an island of function in a wider config space of possible chained codons, the vast majority of which would carry out no relevant function in a living cell. The attempt to drive a rhetorical wedge between the specification by functionality and specification by KC compressibility, reflects poorly indeed on the level of thought involved. GEM of TKI kairosfocus
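The three classes can be illustrated crudely with a general-purpose compressor standing in for descriptive complexity (true Kolmogorov complexity is uncomputable, so a compressor only ever gives an upper bound). The English sample is a lightly adapted passage from the opening of this post, and the exact byte counts depend on the compressor and samples chosen.

```python
import random
import string
import zlib

LENGTH = 400
ordered = ("THE END " * 64)[:LENGTH]                      # Class 1: periodic, specified
random.seed(0)
unspecified = "".join(random.choice(string.ascii_letters + string.digits)
                      for _ in range(LENGTH))             # Class 2: aperiodic, unspecified
functional = ("It would be very nice if there was a magic scanner that automatically "
              "gave you a readout of the total amount of complex specified information "
              "in a system when you pointed it at that system. Of course, you would "
              "want one that could calculate the CSI of any complex system, be it a "
              "bacterial flagellum, an ATP synthase enzyme, a Bach fugue, or the faces "
              "on Mt. Rushmore, by following some general algorithm. It would make CSI "
              "so much more scientifically rigorous, wouldn't it?")[:LENGTH]   # Class 3: aperiodic, specified

for label, text in (("ordered", ordered), ("random", unspecified), ("functional", functional)):
    print(label, len(text), len(zlib.compress(text.encode(), 9)))
# Typical result: the periodic string compresses drastically, the random string
# barely compresses, and the meaningful English text lands in between,
# reflecting the partial redundancy of natural language.
```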
Joseph, I hope that MathGrrl knows that natural selection is a heuristic, not any kind of a law or principle. It is not rigorous (mathematically or otherwise) because it depends on ever-changing environmental signals. Collin
Collin, You are absolutely correct and it looks like MathGrrl "knows" blind watchmaker evolution when she sees it. :o She needs to pull her head out of her bias... :) Joseph
http://en.wikipedia.org/wiki/Marine_archaeology_in_the_Gulf_of_Cambay Collin
Thanks PaV. I'll admit I'm not sure what could be used. Perhaps arrowheads and rocks that look like arrowheads. Or things like Saturn's rings versus ripples from asteroid strikes on planets, compared to city lights viewed from space. VJtorley pointed this out: http://en.wikipedia.org/wiki/Yonaguni_Monument Collin
Substitute "Jon" for "Joe" in the previous post. Oops. PaV
Collin [95]:
Perhaps an experiment can be done to verify or falsify CSI. A group should gather 100 objects of known origin. 50 of them known to be man made but look like they might not be and 50 known to be natural but look like they might be designed. Then gather several individuals who independently use CSI to test which objects are artificial and which are natural. If they are consistent and correct, then CSI has resisted falsification.
Collin, I think you would have trouble finding one such object either way. I can't think of a single example, either way. So, when Joe Specter characterizes your view as: "I know it when I sees it," I don't think Joe's thought this through much, because you're actually saying "I DON'T know it when I sees it." But, of course, this example, as far as I can see, is strictly hypothetical: just like Darwinism. Darwin: "I know that most scientists see sterility in hybrids. But I think, really, it doesn't exist." "I know that fossil intermediates have not been found (seen); but I'm sure they're there. Just dig around more." "I know that scientists believe that domesticated animals can regress to wild species. But I think that's just an illusion." Let's hear it for science! Right, Joe?! PaV
Joseph, So, for example, if a key fits only one lock and that lock accepts only that one key, then you have a tight specification? I guess my position is that this is readily observable and that MathGrrl should be able to recognize it even if she can't calculate it. Collin
MathGrrl [151]:
Schneider has demonstrated that known evolutionary mechanisms can create Shannon information.
So does flipping a coin sequentially, and generating a bit string by letting 1 equal "heads", and 0 equal "tails". PaV
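PaV's point can be made concrete with a few lines of code: under a fair-coin model every flip carries exactly one bit of Shannon information, whatever the resulting string "means". The fair-coin model and the 500-flip length below are assumptions for illustration only.

```python
import random
from math import log2

# Generate a bit string by flipping a fair coin: 1 = heads, 0 = tails.
random.seed(1)
flips = [random.randint(0, 1) for _ in range(500)]

# Under the fair-coin model each outcome has probability 1/2, so each flip
# contributes -log2(1/2) = 1 bit of Shannon (self-)information.
total_bits = sum(-log2(0.5) for _ in flips)
print(f"{len(flips)} coin flips carry {total_bits:.0f} bits of Shannon information")
```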
MathGrrl: An example just came into my mind. F = mg (note that mgh gives gravitational potential energy, not force). We know "g", and we know "m" for a baseball. We set the baseball on a measuring device and, from our knowledge of "m" and "g", we calculate the force F it should exert. When we measure it, we find that instead of it being 5.6788 pounds of force, it's actually 5.67423 pounds. Should we then conclude that F = mg is NOT a "rigorous mathematical definition" of force in a gravitational field? PaV
Collin, Thanks, but there are varying degrees of specification that need to be accounted for. That is why I said that if you have a 200 amino acid long polypeptide that forms a functioning protein -- if any arrangement gets you that function, then it ain't so specified. However, if only one sequence gives you that function, then you have a very tight specification. And then there are degrees in between. The same goes for 500 bits -- if those 500 bits can be arranged in any order and provide the same specification, then it ain't that specified. That said, if one can do those 3 steps they are on their way. Joseph
MathGrrl [151]:
Your explanation demonstrates one significant problem with calculating CSI -- the dependence on the knowledge of the person doing the calculation. If the strings were longer, to get past the 500 bit limit you specify, you could easily calculate that there is sufficient specified complexity (assuming, arguendo, that these strings are somehow functional) to constitute CSI. Subsequent discoveries could lower that number to below the 500 bit threshold. That subjectivity in the calculation makes CSI prone to false positives and, again, contradicts Dembski's claim that CSI can be calculated ahistorically.
In the particular case I've given, that isn't possible, since ASCII is ASCII, and letters are letters. What I mean is that if, indeed, you have a pattern, i.e., recognizable letters constructed in a way that is specified (I can understand them), then there will only be one way of spelling it correctly. So simple familiarity with both ASCII and English would rule anything else out. Now if someone who ONLY spoke Spanish decided to try to interpret the pattern using ASCII and putting a '1' in the middle as they went along, then, to them, this wouldn't be "specified" (it would look like gibberish), and they would conclude that it wasn't CSI.

But you say "CSI [is] prone to false positives." That's a head-scratcher. Are you trying to say that if you had a bit string 500 digits long, and it inscribes some English phrase, but it's exactly 500 digits long, then if someone were to say, without knowing the 'history' of the bit string, that this constitutes CSI, you would then come along and say, e.g., "Well, 'Methinks it is a weasel' (assuming this was part of the complete phrase) can easily be written 'I think it is a weasel', which is a character -- 8 bits -- shorter, therefore we have a false positive"? Do you really think this is being "prone to false positives"? Are you really going to say, "Well, just flipping a coin randomly could have produced this"? That would mean someone would have to flip 500 coins all at once on the order of 10^150 times to reach the pattern by chance. Is this really "prone to false positives"? Maybe we can say that the UPB is 10^180. That would take care of false positives. And proteins would still be CSI based on this level of improbability/complexity. As a rough guess, a protein coded by about 300 bases (4^300, or roughly 10^180), i.e. about 100 amino acids, would reach this higher limit. Cytochrome C, which is ESSENTIAL to cell division (i.e., no Cytochrome C, no replication; hence no nothing, and certainly no NS), is about 110 a.a.s long.

I use the biological example because of your choice of words: "subsequent discoveries". This is the great argument from ignorance that Darwinists like to use: some day we'll understand just how NS is able to do this; we just haven't discovered it yet. Well, it is an argument from ignorance, while in the meantime we can calculate the tremendous improbabilities involved in cellular realities. Seth Lloyd gave 10^120 as the maximum number of quantum calculations that could take place in the entire universe. Using that computer to "flip coins" would not allow us to reach a binary string that could be translated by ASCII into a meaningful English phrase 234 letters long simply by chance. Isn't reality just staring us in the face? Isn't the "Design Inference" the most intelligible, reasonable, logical conclusion to make? And, if we wanted to be really logical and reasonable, we would conclude that nothing will ever be discovered that can overcome these improbabilities. PaV
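For what it's worth, the rough thresholds PaV appeals to can be computed directly. The uniform-probability assumption below is the same one used in the comment, and the two bounds are just the 10^150 and 10^180 figures mentioned above.

```python
from math import ceil, log10

# How long must a uniformly random sequence be before its configuration
# space exceeds a given bound? Bounds follow the figures in the comment.
for bound_exp in (150, 180):
    bases = ceil(bound_exp / log10(4))       # nucleotides: 4 options each
    aas = ceil(bound_exp / log10(20))        # amino acids: 20 options each
    print(f"10^{bound_exp}: ~{bases} bases (~{bases // 3} codons) "
          f"or ~{aas} amino acids counted directly")
```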
MathGrrl [151]:
ev is just a model of some simplified evolutionary mechanisms. It sounds like you’re saying that knowledge of how those mechanisms resulted in the pattern is necessary to calculate the CSI of the pattern. That contradicts Dembski’s claim that CSI can be calculated “even if nothing is known about how they arose”.
Nice try, but it won't work. In the case of ev we're dealing with artificial intelligence, and it is uncertain just what is "random" and what is not. To come up with a "chance hypothesis" that is realistic, and meaningful, requires digging into the programming and then determining various probability measures for the individual steps. You would have to take it step by step. This isn't the case with biological systems. From the time of Watson and Crick, as Stephen Meyer illuminates so well in Signature in the Cell, it's been known that there are no chemical/quantum mechanical laws or forces that show any kind of bias at all when it comes to nucleotide base selection. Hence, in the case of a protein sequence, each amino acid has roughly a 1 in 20 chance of being selected, and the probability of any particular sequence of length N is (1/20)^N. Now, because of mutations, and because there are parts of a protein sequence that aren't as essential as others, there will be more than one "T": that is, there is more than one way to arrive at a functional protein sequence of any given length. This corresponds in the biological case to added "specificational resources" (which is a SP way of looking at it), and so the improbability associated with any given functional protein would be the number of these functional sequences of length N divided by 20^N. Nevertheless, in the case of ev, since the binary string is less than 500 digits, it fails to rise to the needed level of improbability/complexity. So, really, why bother. Even if it were completely the product of chance events, which we know isn't the case, it would not constitute CSI as it is properly defined. PaV
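The arithmetic PaV sketches can be written out explicitly. The count of functional sequences below is a placeholder assumption on my part (a Durston-style analysis would supply an empirical estimate); the point is only to show how the improbability, and hence the bit measure, falls out of the ratio.

```python
from math import log2

# PaV's uniform chance hypothesis: p = M / 20^N, where M is the number of
# functional amino-acid sequences of length N. Work in log space, since the
# raw numbers underflow ordinary floats.
N = 300                        # sequence length in amino acids (illustrative)
log2_M = 40 * log2(10)         # assume ~10^40 functional sequences (placeholder)

log2_p = log2_M - N * log2(20)     # log2 of M / 20^N
bits = -log2_p                     # specified-complexity bits under this model
print(f"log2 P(functional) = {log2_p:.0f}, i.e. about {bits:.0f} bits (threshold: 500)")
```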
MathGrrl, 3 questions: 1. Can you recognize Shannon information when you see it? 2. Can you tell when Shannon information has meaning or function? 3. Can you count to 500? If your answer is yes to all 3, then you can recognize CSI (according to Joseph's definition). If you can recognize its presence, then you can introduce variables to see what happens to the CSI. You can also do correlational studies. This is science. Collin
QuiteID [149]: No, it's not problematic. I could have chosen a longer bit string---but that would have meant that I had to toss a coin five hundred times to make my point when the exchange took place that prompted these bit strings. PaV
Alex73 at post 153. If she says yes, then all economists are out of a job. Collin
Not every definition is rigorously and mathematically definable, yet many are precise enough for scientific usefulness. Like I said, people are thrown in jail (or exonerated) over concepts like schizophrenia and major depressive disorder. Pills are prescribed based on much squishier definitions than CSI. Why don't you tell me why Joseph's definition of CSI is imprecise? It may not be calculable mathematically, but it is still tightly defined. Collin
Mathgrrl, That is not a contradiction. Light is readily observable even when it is not calculable. Before light was measurable (due to technological limitations), people could observe it and even conduct scientific experiments to test it. Collin
Mathgrrl: I have to disagree with you on one point:
You’ve contradicted yourself in a single paragraph. Either CSI is a mathematically rigorous concept that can be used to reliably identify intelligent agency or it is not. You first claim that it is identifiable and observable, but then immediately admit that it does not have a rigorous mathematical definition.
I don't believe Collin has contradicted himself when he says CSI is identifiable and observable, but not calculable. But, to understand how those two statements are reconcilable you need to see Collin's position on CSI as follows: "I knows it when I sees it." I suspect, however, you won't find this particularly useful. jon specter
BTW MathGrrl thank you for admitting the anti-ID position is not science- no math, no figures, pure opinion. Life is good... Joseph
MathGrrl, What does the theory of evolution have that is mathematically rigorously defined? Does your position have anything that we can compare to CSI? Or are you just another intellectual coward? And do you realize that people use CSI every day? Joseph
MathGrrl:
"Neither of those is CSI as discussed by Dembski. Schneider has demonstrated that known evolutionary mechanisms can create Shannon information."
1- "Evolutionary mechanisms" is an equivocation
2- Shannon information is not CSI
3- IDists freely admit that blind, undirected processes can produce Shannon information
4- You don't know what you are talking about and just make stuff up
Joseph
Alex73,
Do you mean that only mathematically rigorously defined functions are reliable?
What I mean, and I think I've been very clear about this through the CSI thread and this one, is that unless a metric is clearly and unambiguously defined, with clarifying examples, such that it can be objectively calculated by anyone so inclined, it cannot be used as the basis for claims such as those being made by some ID proponents. If my Lord Kelvin quote didn't make that clear, here's a shorter one from Robert Heinlein: "If it can't be expressed in figures, it is not science; it is opinion." MathGrrl
MathGrrl:
"First, I am not making a claim in these threads discussing CSI, I am attempting to evaluate the claims of ID proponents. If a theory of ID is to grow out of the hypotheses being put forth by those proponents, it must stand on its own, explaining the available evidence and making testable predictions that could serve to falsify it. Right now, the metric of CSI does not meet those criteria."
CSI does meet those criteria, and ID does not stand on its own. It has to be contrasted with necessity and chance.
Second, methodological (not philosophical) naturalism is essential to the scientific method.
That's it? Just a bald assertion? Strange the things you blindly accept and the things you refuse to accept even though they have been thoroughly explained to you.
Third, there is no a priori assumption that intelligent agents were not involved in evolution on this planet. There is, however, no empirical evidence that would suggest such involvement.
Actually there is plenty of such evidence. OTOH there isn't any evidence for your position's claims.
If ID proponents can produce such evidence, it can be assessed using the scientific method, just as any other empirical evidence is assessed.
Strange how some scientists are assessing that evidence. OTOH there still isn't any evidence to assess from your position. Ya see, it is the total failure of your position that has allowed ID to persist. I can see that bothers you. :cool: Joseph
MathGrrl says: Either CSI is a mathematically rigorous concept that can be used to reliably identify intelligent agency or it is not. Do you mean that only mathematically rigorously defined functions are reliable? Alex73
Collin,
“We have seen in another thread that a rigorous mathematical definition of CSI is not readily available, so the prediction cannot be tested in this case.” Not true. CSI is readily identifiable and observable. I proposed a scientific test of CSI’s reliability in detecting design. See comment 95. This experiment is as rigorous as many I’ve seen/read about. While it would not conclusively settle the matter, it would be a scientific experiment that would shed light on it even without a rigorous mathematical definition of it. (But it DOES have a rigorous definition, just not a rigorous MATHEMATICAL definition).
You've contradicted yourself in a single paragraph. Either CSI is a mathematically rigorous concept that can be used to reliably identify intelligent agency or it is not. You first claim that it is identifiable and observable, but then immediately admit that it does not have a rigorous mathematical definition. "In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." -- Lord Kelvin MathGrrl
PaV,
The pattern under consideration is the bit string that serves as a genome for the digital organism. You don’t need to analyze ev, just what ev is modeling.
Yes, that is the pattern. But, as in SP, there is a “chance hypothesis” associated with this pattern. To understand what chance mechanisms are actually in play, you would have to understand the ev program at great depth; then analyze where chance enters into it, and then formulate some kind of chance hypothesis based upon all this pain-staking work.
ev is just a model of some simplified evolutionary mechanisms. It sounds like you're saying that knowledge of how those mechanisms resulted in the pattern is necessary to calculate the CSI of the pattern. That contradicts Dembski's claim that CSI can be calculated "even if nothing is known about how they arose".
Do you know of such a metric? If so, could you please provide a rigorous mathematical definition for it and some examples of how to calculate it? If such a metric existed, I’d be happy to apply it to your two strings.
Well, I imagine you know about Shannon information. You know about Chaitin-Kolmogorov information. Can’t you use those metrics?
Neither of those is CSI as discussed by Dembski. Schneider has demonstrated that known evolutionary mechanisms can create Shannon information. No one has provided any evidence that Shannon information or Chaitin-Kolmogorov information are reliable indicators of intelligent agency.
If we look at String #1, and then, using ASCII code to convert letters into binary code while inserting the integer '1' after the first four digits of the code for each letter, the binary string represents, "Methinks it is a weasel".
I started lurking here around about the time of the weasel wars. Please, in the name of all you hold dear, don't start those again. ;-) Your explanation demonstrates one significant problem with calculating CSI -- the dependence on the knowledge of the person doing the calculation. If the strings were longer, to get past the 500 bit limit you specify, you could easily calculate that there is sufficient specified complexity (assuming, arguendo, that these strings are somehow functional) to constitute CSI. Subsequent discoveries could lower that number to below the 500 bit threshold. That subjectivity in the calculation makes CSI prone to false positives and, again, contradicts Dembski's claim that CSI can be calculated ahistorically. MathGrrl
William J. Murray,
“You are mistaken about the purpose of the scientific method and the use of a null hypothesis. A scientific hypothesis must make testable predictions.” No, I’m not mistaken about it; you are attempting to avoid meeting the same obligation you wish to enforce on ID advocates: it was asserted by Darwin, and is asserted throughout evolutionary literature ever since, that chance and non-intelligent (blind) processes can sufficiently account for biological diversity and success – IOW, that intelligence (teleology) is not needed.
First, I am not making a claim in these threads discussing CSI, I am attempting to evaluate the claims of ID proponents. If a theory of ID is to grow out of the hypotheses being put forth by those proponents, it must stand on its own, explaining the available evidence and making testable predictions that could serve to falsify it. Right now, the metric of CSI does not meet those criteria. Second, methodological (not philosophical) naturalism is essential to the scientific method. It is not specific to biology. Third, there is no a priori assumption that intelligent agents were not involved in evolution on this planet. There is, however, no empirical evidence that would suggest such involvement. If ID proponents can produce such evidence, it can be assessed using the scientific method, just as any other empirical evidence is assessed.
It seems to me that you are saying that neither claims of X or not-X can be supported
Not at all. What I am saying is that if you are making a claim, you need to support it. There is, as yet, no support for the claim that intelligent agency was involved in evolution on this planet. Perhaps a metric superior to CSI will allow testing of that claim. MathGrrl
PaV, wouldn't a comparison of two longer strings be problematic -- that is, if the escape hatch here is that the CSI is less than 500 bits? QuiteID
Well, it looks like I caught a couple of whoppers, but they are FoS and brain-dead. And look- JR the sock puppet shows up here spewing more meaningless nonsense! Unfortunately for JR my adversaries are incapable of debate as evidenced by their comments. So sure, please check it out. You will see how intellectually barren evos are- as if anyone needed more evidence for that. Life is good... Joseph
#145 vj Unfortunately Dembski introduces the formula on page 18 as a general way of calculating specificity when it is not known whether n is large or small compared to p. markf
Ah, I now read that you said "in this case." Perhaps I should amend my comment. Too bad this blog doesn't let you amend comments. Collin
Markf (#124) I've realized that Professor Dembski's formula for CSI uses a very good approximation. It is not a mistake. In the case he discusses, np works fine as a first order approximation for (1–(1-p)^n). In (#10) above, you wrote:
The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of single event having outcome x is p, then the probability of at least one event having outcome x is not np . It is (1–(1-p)^n). So the calculation 10^120.Phi_s(T).P(T|H) is wrong . The answer is still very small if p is small relative to n. But it does illustrate the lack of attention to detail and general sloppiness in some of the work on CSI.
The binomial expansion of (1-p)^n is: 1 + (nC1).(-p) + (nC2).(-p)^2 + (nC3).(-p)^3 + ..... + (nC(n-1)).(-p)^(n-1) + (nCn).(-p)^n. (nC1) is of course n, so the second term is -np. Each successive term is smaller than the one before it by a factor of roughly np, so in the case where np is much less than 1 -- which is precisely the regime that matters when asking whether Chi crosses the threshold -- the third and all subsequent terms are very small relative to np and may safely be neglected. For example, if np is of the order of 10^-100, then (nC2).p^2 is of the order of 10^-200, which is negligible by comparison. So for all practical intents and purposes we can approximate (1-p)^n by 1-np. But in that case (1-(1-p)^n) can be approximated by (1-(1-np)), which is np. Perhaps this was what you meant when you wrote:
The answer is still very small if p is small relative to n.
But in that case, all you are saying is that Dembski should have spelt out the fact that he was using an excellent approximation more clearly in his paper. Fair enough. However, he was writing for a mathematical audience. It would be wrong to accuse him of "general sloppiness" here, when in the case he is discussing, his formula is essentially correct. I hope this answers your question. vjtorley
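The quality of the np approximation is easy to check numerically when np is small; the (n, p) pairs below are illustrative values, not numbers taken from Dembski's paper.

```python
from math import expm1, log1p

# Check that 1 - (1-p)^n is well approximated by n*p when n*p << 1.
# expm1 and log1p keep the tiny quantities numerically stable.
for n, p in [(10**6, 1e-9), (10**9, 1e-12), (10**20, 1e-24)]:
    exact = -expm1(n * log1p(-p))      # 1 - (1-p)^n
    approx = n * p
    print(f"n={n:.0e}  p={p:.0e}  exact={exact:.6e}  n*p={approx:.6e}")
```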
Mathgrrl, you said "We have seen in another thread that a rigorous mathematical definition of CSI is not readily available, so the prediction cannot be tested in this case." Not true. CSI is readily identifiable and observable. I proposed a scientific test of CSI's reliability in detecting design. See comment 95. This experiment is as rigorous as many I've seen/read about. While it would not conclusively settle the matter, it would be a scientific experiment that would shed light on it even without a rigorous mathematical definition of it. (But it DOES have a rigorous definition, just not a rigorous MATHEMATICAL definition). Collin
I encourage everybody to check out Joseph's blog. He's a fascinating character. His arguments are precise and to the point. Rarely repeating himself, he asserts his viewpoint confidently, directly and concisely. His intellectual vigor is matched only by the breadth of his argumentation; from the physical sciences to the metaphysical, his recall is unmatched and instant. His grip on point and counterpoint is unnerving. Joseph, in short, is an unmatched, peerless artist working in a realm, at a level, I'm only starting to truly appreciate now. Bit by bit, blog post by blog post, Joseph is clarifying the important and vital issues of the day. Check his blog out! He has many more posts (look for the ones that have many comments, for posts sparking intense debate between him and his adversaries) than the one he links to in 141. His erudite essays simply cannot be missed! JemimaRacktouey
I went fishing- Of MathGrrl, CSI and...- so far it is going as predicted. Joseph
BTW, any CSI or FSCO/I analysis only makes a provisional finding of "best explanation" under current knowledge. As PaV's example shows, we can find false negatives all the time simply by not knowing about the pattern that a sequence describes; it might appear totally random until we find the pattern (in cryptanalysis, that would be the "key") that reveals the functional specificity of the sequence. William J. Murray
I should have included this at the point where I say: "This also constitutes the event, E." The 'pattern' T is also "Methinks it is a weasel" in ASCII with the integer '1' interspersed after the first four digits of each letter's code. This is the "descriptive" part of the pattern T. PaV
MathGrrl [137]: Let me begin by thanking you for responding. You had a choice to do otherwise.
The pattern under consideration is the bit string that serves as a genome for the digital organism. You don’t need to analyze ev, just what ev is modeling.
Yes, that is the pattern. But, as in SP, there is a "chance hypothesis" associated with this pattern. To understand what chance mechanisms are actually in play, you would have to understand the ev program at great depth; then analyze where chance enters into it, and then formulate some kind of chance hypothesis based upon all this pain-staking work.
Do you know of such a metric? If so, could you please provide a rigorous mathematical definition for it and some examples of how to calculate it? If such a metric existed, I’d be happy to apply it to your two strings.
Well, I imagine you know about Shannon information. You know about Chaitin-Kolmogorov information. Can't you use those metrics? To anticipate the correct answer---I don't want to waste time---they can't give you ANY information at all as to which of the two might be "designed". So, what shall we do? Why not apply the concepts of CSI as found in Dembski's No Free Lunch?

Well, the process begins with the ability to discern a "pattern". To the naked eye, both of these appear to be randomly generated. As noted above, traditional informational 'metrics' can't help us. But, to reach the conclusion that either of the strings rises to the level of CSI, the 'chance hypothesis' associated with the pattern must generate a rejection region that is so extremal that any element of that rejection region has a probability of less than 10^-150. Don't bother counting, but I believe that there are 196 digits laid out in binary form. For a 'pattern' of this length, the chance of any digit appearing is 1/2. The improbability of this 'pattern', then, based on a 'pattern' of 196 binary digits, is (1/2)^196. This is a much, much greater probability than 10^-150. Hence, patently, on the face of it, without knowing the "causal history" of this pattern, we can eliminate the possibility of it being CSI. That is, using the concept of CSI, we would rule out "intelligent agency" in the case of this 'pattern', simply because any pattern (binary string) of this length (196) could never be improbable enough. So, using the "metric" of CSI, we would conclude that neither of the strings is "designed". This turns out to be wrong; BUT, it is NOT a false positive, which would render CSI suspect, and of limited use.

So, was one of these strings really "designed"? Well, if we want to work this out as an example of CSI, we're going to need to discern what the pattern is. Naturally, I didn't want to make things easy for computer people to figure out. But I didn't want to make it too difficult either. However, someone with a passing familiarity with Dawkins' Blind Watchmaker should have had an easy time of it. (Specificational resources per SP) [I've given a hint. If you want to play around with that hint, you might stumble upon the pattern. I give it away explicitly below. So, if you want to take a stab at guessing, stop here.]

xxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx

If we look at String #1, and then, using ASCII code to convert letters into binary code while inserting the integer '1' after the first four digits of the code for each letter, the binary string represents, "Methinks it is a weasel". Since we now know the 'pattern', and since we now know how it was generated, clearly it is CSI---since it was produced by an intelligent agent. However, without knowing the "causal history" of the "pattern", we could not make this claim, as noted above. The chance hypothesis, H, is as above: a fair-coin binary string of length 196. This also constitutes the event, E. So, to arrive at a "rejection region", we note that P(T|H) is the ratio of the total number of ways of writing "Methinks it is a weasel" in ASCII with the integer '1' interspersed after the first four digits of each letter's code, to the total number of possible binary strings of that length. Now, there is only ONE way that "Methinks it is a weasel" can be translated as I've indicated.
However, the number of possible binary strings of this length---each a possible outcome of the event E---is 2^196. So, what is the probability attached to this "rejection region"? It is 1/(2^196), which is roughly 10^-59. Does the 'pattern' T fall into this "rejection region"? Yes. So, its CSI is 196 bits; far less than the needed 500. Therefore, we cannot conclude---without knowing its causal history---that it is "designed". Q.E.D. PaV
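For readers who want to reproduce PaV's construction, here is a sketch of the encoding as he describes it (7-bit ASCII with a '1' inserted after the first four digits of each letter's code). The exact variant PaV used to build String #1 may differ in detail, so treat this as illustrative only.

```python
from math import log10

# Encode a phrase the way PaV describes: take each character's 7-bit ASCII
# code and insert a '1' after its first four digits, then concatenate.
# (Whether this matches String #1 bit-for-bit is not guaranteed; the exact
# insertion point PaV used is an assumption here.)
def encode(phrase):
    bits = ""
    for ch in phrase:
        code = format(ord(ch), "07b")        # 7-bit ASCII code
        bits += code[:4] + "1" + code[4:]    # insert '1' after the first 4 bits
    return bits

s = encode("Methinks it is a weasel")
log10_p = -len(s) * log10(2)                 # fair-coin chance hypothesis: 2^-len
print(f"{len(s)} bits; P(T|H) = 2^-{len(s)} ~ 10^{log10_p:.0f}")
```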
Mathgrrl wrote: "You are mistaken about the purpose of the scientific method and the use of a null hypothesis. A scientific hypothesis must make testable predictions." No, I'm not mistaken about it; you are attempting to avoid meeting the same obligation you wish to enforce on ID advocates: it was asserted by Darwin, and is asserted throughout evolutionary literature ever since, that chance and non-intelligent (blind) processes can sufficiently account for biological diversity and success - IOW, that intelligence (teleology) is not needed. The only way to support that claim is to provide a metric for X, or for determining what characteristics of the phenomena in question would demonstrate intelligent manipulation, and then show that such characteristics are not present. The claim that intelligent guidance (teleology) is not necessary is a positive claim, and can only be supported by showing that intelligence (teleology) is not (at least theoretically) necessary. It seems to me that you are saying that neither claims of X or not-X can be supported; therefore, it seems to me that you agree that it was an error on Darwin's part, and it has been an erroneous assertion on the part of evolutionary literature ever since, to assert or imply that the processes necessary to produce the biological diversity we see today are fairly described as "unintelligent" or "chance" or "non-teleological", since there is no "not-X" metric for making such a determination. Is that your position? William J. Murray
PaV,
she wants us to do her dirty work.
I don't find math to be dirty, but that's a personal aesthetic. What I want is for ID proponents to show the work behind their claims. That's not an unreasonable request in scientific circles.
Can you please tell me how I can give a rigorous mathematical definition of CSI for the ev program?
The pattern under consideration is the bit string that serves as a genome for the digital organism. You don't need to analyze ev, just what ev is modeling.
So, if you’re interested in how a “chance hypothesis” works, let’s take a look at those two strings: String #1: 1001110111010101111101001 1011000110110011101111011 0110111111001101010000110 1100111110100010100001101 1001111100110101000011010 0010101000011110111110101 0111010001111100111101010 11101110001011110 String #2: 1001001101101000101011111 1111110101000101111101001 0110010100101100101110101 0110010111100000001010101 0111110101001000110110011 0110100111110100110101011 0010001111110111111011010 00001110100100111 Now, MathGrrl, which is which?
If only there were a metric I could apply to each of these strings to determine, without any knowledge of their history, whether or not either of them is the result of intelligent agency. Do you know of such a metric? If so, could you please provide a rigorous mathematical definition for it and some examples of how to calculate it? If such a metric existed, I'd be happy to apply it to your two strings. MathGrrl
vjtorley,
Thank you for your post. Please see my comments in #100 above, paragraph 2.
Comment numbers seem a bit fluid today. I think you are referring to this paragraph:
I’ll have a post up soon on an alternative metric for CSI which is much more hard-nosed and empirical. That should please you and Mathgrrl.
I look forward to seeing that. My question, though, was: "How exactly would one formulate a falsifiable hypothesis for a metric that cannot be measured even in theory?" Based on your recognition that CSI as discussed by Dembski is not "measurable in a laboratory", would you agree that it, as opposed to your forthcoming metric, cannot be used to formulate a falsifiable hypothesis?
You also write:
Either CSI can be calculated without reference to the historical provenance of the object under investigation or that history must be considered. You can’t have it both ways.
In my post, I defended the claim that for an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system. I should add that when performing this ahistorical calculation of CSI, duplications occurring within the pattern should be assumed NOT to be independent events. Only later, when we become familiar with the history of the pattern, can we assess whether this assumption is in fact correct.
I'm still confused about what you're saying. Either CSI can be calculated without knowing anything about the history of the object or it cannot. By introducing terms such as "independent events" you seem to be suggesting that the history of the object can change the CSI measurement. This is a contradiction of the claim that CSI can be computed ahistorically. MathGrrl
William J. Murray,
if there is no metric that can measure or validate X (ID CSI), how can one reach a finding of not-X?
You are mistaken about the purpose of the scientific method and the use of a null hypothesis. A scientific hypothesis must make testable predictions. The testable prediction of CSI is that it is a reliable indicator of the involvement of intelligent agency. We have seen in another thread that a rigorous mathematical definition of CSI is not readily available, so the prediction cannot be tested in this case. This does not mean that not-X is proven, or even better supported. It simply means that the ID proponents who make claims about CSI have not supported their arguments. The real explanation for biological diversity may still be intelligent agency, it may be modern evolutionary theory, or it may be any of an infinite number of other explanations. MathGrrl
I suspect she does not have a physics-mathematics-physical chemistry background
Seriously? You haven't figured out who Mathgrrl is yet? jon specter
Mel: Prezactly. G kairosfocus
OOPS: Pardon the half post then full post. I am struggling with net access this morning, and saw an odd error message. kairosfocus
PAV: You seem to be right, or on the right track. Sadly. Had MG simply asked the question the right [straight] way around, we would have had a very different and much more productive discussion. I suspect she does not have a physics-mathematics-physical chemistry background, and has not done much of statistical thermodynamics, the underlying field for all of the issues on the table. BTW, want of such a background is exactly why there has been a major misunderstanding it seems of Hoyle's Tornado in a Junkyard assembles a Jumbo Jet example. He is actually scaling up and using a colourful metaphor on molecular scale interactions, and is giving an equivalent form of the infinite monkeys theorem. But, the issue is not to construct a jumbo jet by a tornado passing through Seattle and hitting a junkyard; it starts long before that. Namely, at even the level of 125 bytes worth of functional information, a relatively small amount to do anything of consequence, we are already well beyond the credible search capacity of our cosmos, once the search is not an intelligent and informed one.
(NOTE: Here, using the idea of assembly of a micro-jet from tiny parts sufficiently small to be subject to random fluctuations in a liquid medium, I scale the matter back down to molecular scale, and enlist brownian motion and nanobots, to draw out the implications in ways that are more in line with what is going on on the molecular scale. What happens is that to clump parts in near vicinity, to click together in any arbitrary fashion, requires a huge amount of specifically directed and information-rich work, as the number of ways of arranging scattered parts vastly outnumbers the number of ways that parts may be clumped. So, parts under brownian forces will be maximally unlikely to spontaneously clump. Then, the number of clumped states vastly outnumbers the number of Wicken functional wiring diagram ones, and in turn, there is a huge directed work input that would be required in the real world to move from clumped to functionally organised states. Notice, this is not dependent on what particular way you do the work, as entropy is a STATE function, not a path function. Indeed, in thermodynamic analysis, it is routine to propose an idealised and unrealistic but analytically convenient path from an initial to a final state to estimate shift in entropy.)
The root problem on understanding the challenge facing chance hypotheses [or chance plus blind mechanical forces] is therefore that the underlying analysis is thermodynamic, specifically, statistical-thermodynamic. As a perusal of the just linked will show, once we have clustering of states discernible by an observer per some macro-variable or other, we move away from an underlying per microstate distribution view. (Notice how MF blunders into exactly this confusion, in his objection that a Royal Flush is no more special or improbable in itself than any arbitrary hand of cards. Of course, the very point of the game is that such a RF is a recognisably special hand indeed, as opposed to the TYPICAL run of the mill. Cf the analysis of hands of cards as already excerpted, and as was presented in the UD weak argument corrective no 27. This analysis was originally brought to MF's attention some years ago, at his earlier blog, in response to a challenge he posed on -- surprise [not] -- calculating values of CSI. So he knows, or should know about it. Let me put that another way: the calculation seen in summary form is in answer to a question posed by MF about three years ago in his Clapham Omnibus blog . . . ) Once we see the implication of recognisable and distinct clusters, with very different relative statistical weights in the set of all possible configs, we then face the question of how likely are we to be in one or the other of these distinct clusters of recognisably distinguish-able states within the wider space of possibilities. Especially, relative to unintelligent processes such as trigger random walks and/or trial and error on arbitrary initial conditions. In particular, we now see the significance of deeply isolated zones of interest, or target- or hot- zones or -- especially -- islands of function, which then can be compared in one way or another to the space of possibilities. And, the question then becomes: how does one best explain arrival at such an island. If a space is sufficiently large, and the available resources are limited, the best explanation of getting to an island, is typically that you have a map and a means of navigation, or there is a beacon that attracts/magnetises attention, or you were wafted there under directed control of forces that push one to an island. That is why I chose the brute-force threshold of 1,000 bits of info-storage capacity, measured as the number of basic yes-no decisions cascaded to specify the system, i.e. its wiring diagram. As I showed in outline here, any particular wiring diagram can be reduced to a specifying list of the network, on its nodes, arcs and interfaces. In particular, textual sequences of symbols are functionally ordered strings wired together like so: S-T-R-I-N-G-S. For, 1,000 basic yes/no decisions specifies a space of 1.07*10^301 possibilities. Whether by converting the entire cosmos into terrestrial planets orbiting appropriate class stars in the right habitable zones, with banana plantations and armies of monkeys banging away at keyboards in cubicles, or otherwise, it can be shown that the number of possibilities for the observed cosmos across its thermodynamic lifespan [~ 50 mn times longer than is held to have already lapsed since the big bang] would not exceed 10^150 possibilities. 
And as Abel shows through his peer-reviewed, published universal, galactic and solar system level plausibility analysis -- which again MG shows no sign of profitably interacting with [she should also look at the Durston analysis here and the underlying FSC, RSC and OSC analysis here] -- one planet would be far, far less than that. 10^150 is 1 in 10^151 of 1.07*10^301. In short, a cosmic scope search rounds down very nicely to zero scale. The best comparison I can think of is to mark a single atom at random in our cosmos for 10^-45 s [about as fast as a physical process can conceivably happen]. Then, imagine a lottery where a single atom is to be picked at random, any time, any place in the cosmos, on just one trial. 1 in 10^150 is the odds of picking that single marked atom just when it is marked. The odds of that are practically zero. So once functional states based on wiring diagram organisation are rare in the space of possibilities [which is known to be so], no unintelligent search on the gamut of the cosmos is likely ever to hit upon any such island of function. Especially, one based on a metabolising entity integrated with a coded, stored information based von Neumann self-replicating facility, that reasonably requires at least 10 to 100 thousand bytes of information based on analysis of requisites of observed life. Remember, such a vNSR implicates an irreducibly complex entity involving:
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[ . . . in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader [[called "the constructor" by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with "tool tips" controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
(v) either:
(1) a pre-existing reservoir of required parts and energy sources, or
(2) associated "metabolic" machines carrying out activities that, as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
And, yes, I am agreeing with Orgel, Wicken, Yockey, Hoyle et al. that the origin of life based on the C-Chemistry, self-replicating cell is absolutely pivotal in all our considerations on scientific exploration of origins. I also notice hints of the long since abandoned biochemical predestination thesis of Kenyon, put up in 1969. Directly, if biochemistry and life-functional DNA and/or protein chains are written into the underlying physics that drives the creation, abundance and environments of H, He, C, O and N -- the main atoms involved -- then that would be the ultimate proof that the laws of physics are a program designed to create life. But in fact, the strong evidence is that for both D/RNA and proteins, there are no stringent constraints on chaining sufficient to account for the information. This was already investigated by Bradley et al. in the mid 1980s, and is a big part of why Kenyon chose to take the opportunity of writing a preface to The Mystery of Life's Origin [the first technical ID work] to recant publicly from biochemical predestination. Non-starter. So, you can easily see why I am so deeply suspicious of the tendency to want to sweep this issue under the carpet in analyses on the origin of biologically functional, complex, wiring diagram based organisation and related information. The OOL question must not be begged; it decides the balance of plausibilities. The only credible alternatives are: intelligently directed search, or an effectively infinite array of sub-cosmi, such that the search space challenge is overwhelmed by having infinite monkeys, so to speak. Of such a quasi-infinity of sub-cosmi, there is nowhere the faintest empirical trace. But, routinely, we know intelligences create FSCI. So, we have excellent inductive and analytical reason to infer from FSCI to intelligence as its most reasonable source.

Going up to the origin of body plans, we are dealing with large scale additional increments of functional information, expressed starting with the embryo, and generally speaking assembling new wiring diagrams [body plans] that imply much larger sets of possibilities and much deeper isolation of islands of function. Can such be assembled incrementally, step by step by trial and error -- chance variation, plus culling on differential reproductive success of sub populations and related mechanisms -- that improves function until a transformational change appears? On the face of it, utterly implausible. To see why, consider the challenge to transform a Hello World into ev or a similar complex program, step by step, improving or even just preserving function all along the way. I think the program would break down at the first complex loop with a count or comparison constraint. For, such is irreducibly complex and would not arise by increments or by co-opting something else. Similarly, "See Spot run" is not transmuted into a doctoral dissertation step by trial and error step, preserving function all the way. To move from hello world to ev, or from see spot run to a PhD dissertation, requires a lot of learning, and serious, information-rich, knowledgeable intelligent input. So, if MG or others are prepared to argue that chance variation and natural selection etc. account for the origin of dozens of body plans starting with the Cambrian era fossil life forms, then it is incumbent on them to show this empirically. Just as those who claim that a perpetual motion machine of the second kind is feasible need to show this in the teeth of the existing body of observations and analysis.
(And yes, I am claiming that at micro level, there is a reasonable connexion between thermodynamic analysis and information issues. Cf the summary in appendix 1 of my always linked, as was linked in the above already.) So far, they have not been forthcoming. GEM of TKI kairosfocus
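The bare arithmetic behind KF's 1,000-bit threshold is easy to reproduce exactly; the 10^150 figure is the cosmic-trials bound used in the comment above.

```python
# Exact arithmetic behind the 1,000-bit threshold discussed above.
config_space = 2 ** 1000        # possibilities for 1,000 cascaded yes/no decisions
cosmic_bound = 10 ** 150        # upper bound on cosmic-scale trials, as used above

print(f"2^1000 ~ {config_space:.3e}")                                   # ~1.07e+301
print(f"searchable fraction: 1 in {config_space // cosmic_bound:.3e}")  # ~1 in 1.07e+151
```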
If there was a game where the card series A(H), 4(C), J(C), 8(D), 4(S) was the best series of cards to obtain, and one was playing that game and kept getting that sequence, then we would again suspect something or someone was gaming the system. The string itself, as MarkF points out, is no more or less probable than any other 5-card string; the important aspect of the evaluation is the specificity to an extraneous pattern. That is the target that is referred to; the extraneous pattern offers the target values that the physical system in question is either hitting via a chance distribution or via a rigged system. If it is a rigged system, it is either intelligently rigged or not; if it is not intelligently rigged, then we should have a physical explanation of why the materials hit the target above what a chance distribution describes. If all known physical explanations (chemical attractions, natural selection, random mutation, etc) fail to account for why the target pattern is acquired as often as it is, there is no reason not to suspect an empirically-known agency of such rigging: intelligence. Meleagar
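Meleagar's five-royal-flushes intuition is easy to quantify; the sketch assumes independently dealt five-card hands from a standard 52-card deck.

```python
from math import comb

# Probability of a royal flush in one five-card deal, and of five in a row
# (independent deals from a full 52-card deck assumed).
hands = comb(52, 5)          # 2,598,960 possible five-card hands
p_rf = 4 / hands             # one royal flush per suit
print(f"P(royal flush) = {p_rf:.3e}")
print(f"P(five royal flushes in a row) = {p_rf ** 5:.3e}")
```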
#127 meleagar
"It is when a supposedly chance distribution of materials repeatedly forms these kinds of valuable specifications when they are under no chemical or physical compulsion to do so that one can reasonably infer that a teleological process is involved in ordering the specifications, just as we would suspect intelligence or a gamed system of some sort to be the culprit if we are dealt 5 royal flushes in a row."
Dembski's paper and definition of CSI make no reference to outcomes being valuable (or functional). He seeks to define the specification purely in terms of KC simplicity. The issue of using function or value as a specification is a different one. markf
Joseph, I appreciate the correction. That was an interesting read. One wonders what kind of regulatory system must exist in order for a dynamic non-folding protein or partial sequence to perform valuable work. I also wonder if this significantly expands the number of protein sequences capable of performing work. MarkF: In a post above you referred us to your column, where you say: "For those who don't know the rules of Poker - this hand is known as a Royal Flush and it's the highest Hand You can get" And then you elaborate: "It is also an important question for the intelligent design movement and its proponents believe they have the answer. They would claim the first hand is not just improbable but also that it is specified. That is, it conforms to a pattern and this is what makes it so special." The pattern that the royal flush specifies is "the best hand one can get in poker"; IOW, it serves a function in a system that is not a necessary extrapolation or consequence of the physical system in question. Cards can exist without the game of poker. Sequences of cards by themselves do not necessarily invent or generate poker games or rules. The game of poker is a separate system of rules that specifies which sequences of cards count as winning and losing. The royal flush is a sequence that is specified in terms of the pattern of winning and losing hands as defined by the rules of poker; thus, the royal flush functions in that system as a winning hand. It is when a supposedly chance distribution of materials repeatedly forms these kinds of valuable specifications when they are under no chemical or physical compulsion to do so that one can reasonably infer that a teleological process is involved in ordering the specifications, just as we would suspect intelligence or a gamed system of some sort to be the culprit if we are dealt 5 royal flushes in a row. Meleagar
Here ya go markF: But anyway- Claude Shannon provided the math for information. Specification is Shannon information with meaning/function (in biology specified information is cashed out as biological function). And Complex means it is specified information of 500 bits or more- that math being taken care of in "No Free Lunch". That is it- specified information of 500 bits or more is Complex Specified Information. It is that simple. The point is, we use CSI in our everyday lives. So why do evos have convulsions when they try to discuss it? Joseph
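(To make the simple rule just stated concrete, here is a minimal sketch -- Python, with illustrative numbers of my own -- that converts a chance-hypothesis probability into Shannon bits and applies the 500-bit cutoff. Whether the information is specified is a judgment supplied as an input, not something the code decides.)

```python
# Sketch of the rule "specified information of 500 bits or more is CSI".
# The 'specified' flag is an input judgment; only the bit count is computed.
from math import log2

def is_csi(p_chance: float, specified: bool, threshold_bits: float = 500.0) -> bool:
    bits = -log2(p_chance)            # Shannon information of the outcome, in bits
    return specified and bits >= threshold_bits

print(is_csi(2.0 ** -600, specified=True))   # True: 600 bits and specified
print(is_csi(2.0 ** -100, specified=True))   # False: only 100 bits
print(is_csi(2.0 ** -600, specified=False))  # False: complex but not specified
```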
markf, Your use of the card analogy is way off the mark- pun intended. IOW your use is unsatisfactory. Joseph
vj #101 Thanks for accepting that there is indeed a fundamental error in the maths of Dembski's CSI calculation. I imagine others have noticed this before - but I am not aware of it. I look forward to your alternative definition of CSI. I almost hope that it will take you several days so I am able to study it when it does come out. markf
#121 meleager MF’s point that one state is the chance equivalent of any other state would be meaningful if every string of amino acids could perform work – i.e., function. I guess MF is me. I am not sure what the context is for your remark. I certainly don't think all strings of amino acids are functionally equivalent. I don't think they are equally probable either. The analogy between playing cards and living phenomena is Dembski's, not mine. He uses it to try to define a sense of specified which is independent of function. And this sense of specified proves to be unsatisfactory. (To demonstrate this I tried to pick up on his analogy). The issue of functional specification is different and I have not gone into that on this thread. markf
Meleager: "Not only must target strings fold into stable 3D objects, there must be a form-fitting receptacle it fits into where the fit generates significant work." Apologies but not all proteins fold into stable 3D objects. see- Understanding protein non-folding Joseph
MF's point that one state is the chance equivalent of any other state would be meaningful if every string of amino acids could perform work - i.e., function. Not only must target strings fold into stable 3D objects, there must be a form-fitting receptacle it fits into where the fit generates significant work. So, not all strings of cards are the same when evolution is dealing hands; if the analogy is to hold true, then out of an incredibly huge potential assortment of hands, the vast majority of them cannot even be entered into the game; they must be discarded, because they do not fold into stable shapes. Of that tiny fraction left, the vast majority can do no significant work unless there happens to be, at the time, a corresponding receptacle that happens to perform a function when the folded protein is applied.

The correct analogy is that the chance distribution of cards into hands, for any length of time, sorted by any blind process (blind to the future), can take those hands and successfully manufacture the functioning equivalent of a computerized battleship (the human body & brain). That is an outrageous hypothesis that appears only to be a case of atheist-materialist wish fulfillment. It should only be taken seriously if it is accompanied by a rigorous demonstration that chance, known natural processes, and the sorting mechanism offered are indeed at least theoretically up to the job. Darwin and all biologists since have offered no evidence that their categories of proposed processes (unintelligent, blind, random) are even theoretically up to the task; yet they insist that others disprove them by coming up with the very metric they have failed and refused to provide, and which they insist does not and cannot exist!

They claim to have observed variation in the lab; one cannot discern if such variation is generated by chance or non-chance forces unless one has a metric for making such a determination. One cannot "see" chance acting on anything; one cannot "see" intelligence acting on anything. One can only see physical commodities interacting and, without the X-metric, assume or offer a best guess based on other factors that it is an intelligent or non-intelligent occurrence. Even if one finds the current CSI or FSCO/I metric wanting, at least ID theorists have offered a means of evaluating the actual capacity of the proposed materialist processes to accomplish the product they are claimed to have produced. What have proponents of materialist Darwinism offered? Shifting of the burden and appeals to chance and deep time and infinite universes. Those are not "explanations", they are bare possibilities. While it is a bare possibility that a chance distribution of materials sorted by a non-teleological process can produce a fully functioning human body or a 747 or a battleship, the bare possibility that it could happen is not a scientific theory of how it happened. Meleagar
utidjian: I think you will see why in the UD Weak Argument Corrective 27 [pace JR at 8 above who artfully clipped off at a point that turns what is there into a strawman caricature . . . ], we started with the intuitive, common sense concept then went on to a simple brute force metric before linking the Durston work on a functional extension to Shannon's H metric and the Dembski model: __________________ >> 27] The Information in Complex Specified Information (CSI) Cannot Be Quantified That’s simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible. As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits “functionally specific bits.” [ADDED: This is the basis of the X = C*S*B metric; if the complexity is beyond 1,000 bits AND the information is functionally specific, then the number of bits to express it is the FSCI metric in functionally specific bits.] Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly based on the reasonable probability of finding such a sequence through a random walk based search or its functional equivalent. This approach is based on the observation that functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, beyond a certain scope of search, it becomes implausible that such a search on a planet wide scale or even on a scale comparable to our observed cosmos, will find it. But, we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a “typical” random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;’.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.]) Another empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper “Measuring the functional sequence complexity of proteins”, and is based on an application of Shannon’s H (that is “average” or “expected” information communicated per symbol: H(Xf(t)) = -[SUM]P(Xf(t)) logP(Xf(t)) ) to known protein sequences in different species. A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: “Specification: The Pattern That Signifies Intelligence”. For instance, on pp. 17 – 24, he argues:
define p_S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [Chi, let's use X] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases p_S(T) and also by the maximum number of binary search-events in our observed universe 10^120] X = – log2 [10^120*p_S(T)* P(T|H)]. To illustrate, consider a hand of 13 cards, all spades, which is unique. 52 cards dealt 13 at a time yield about 635*10^9 possible combinations, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so p_S(T) = 4. Calculation yields X = -361, i.e. [less than] 1, so that such a hand is not improbable enough that the – rather conservative — X metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.) Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design. >> ___________________

In short, there is a wider conceptual case that has a stronger logical force than the specifics and limits of any particular metric model applied. That greater logical force is essentially the same as what grounds the second law of thermodynamics, in its statistical form, cf. here, i.e. on chance and blind mechanical necessity, the statistically dominant clusters of specific states will overwhelmingly dominate the observed outcomes, especially once we are dealing with systems that have very large numbers of possible configurations. I add to this, that if this basic point is missed, and the point of the X-metric is dismissed, the more sophisticated models will be similarly dismissed, because of failure to think through the basic issue of isolation of islands of specific function in vast spaces dominated by non-functional configurations. Indeed, I believe MF, in this and/or a previous thread, was trying to make the objection that any one state is as improbable as any other single state. True, but irrelevant to the point of being a red herring.
For, clusters of microstates can be distinguished on observables such as functionality or failure of such function [as a relevant example of a pattern], and the observationally distinguishable clusters of states are NOT equi-probable on blind chance plus blind mechanical necessity. So much so, that for the case of deeply isolated islands of function, the best empirically supported explanation for their occurrence, is design. For instance, functional text in English in posts in this thread, computer programs, and arguably DNA code. As to what isolation means, consider that 125 bytes of info storage capacity can accommodate 2^1,000 distinct possibilities, from 0000 ... 0 to 11111 . . . 1 inclusive. That is 1.07*10^301 possibilities, where the 10^80 or so atoms of our observed cosmos, changing state every 10^-45 s [about 10^20 times faster than the fastest, strong force interactions], for the thermodynamic lifetime of our observed cosmos [about 50 mn times the time said to have elapsed since the big bang], would only be able to sample some 1 in 10^150 of those states. Or, if the cosmos were converted into terrestrial planets with banana plantations and monkeys at keyboards, typing at any feasible rate, for the thermodynamic lifespan of the cosmos, they could not exhaust as much as 1 in 10^150 of the possibilities for just 1,000 bits. After all that time, and effort, they would have completed only a fraction practically indistinguishable from a zero fraction of possibilities. The likelihood of getting as much as one full length tweet full of meaningful, properly configured information [143 ASCII characters] would be a practical zero. 125 bytes being a very small amount of meaningful information indeed. And yet, intelligent observers routinely and easily produce that much. So, we have excellent reason -- much obfuscatory, distractive and dismissive rhetoric and talking points notwithstanding -- to infer from FSCI as reliable sign to its empirically credible cause, intelligence. Regardless of the final balance on the merits of the debates over Dembski's particular model, analysis and metric. Or, Durston's for that matter. G'evening. GEM of TKI
kairosfocus
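(For readers who want to reproduce the card-hand arithmetic in the WAC 27 excerpt above, here is a short Python sketch of the quoted metric X = -log2[10^120 * p_S(T) * P(T|H)]. The second call uses made-up numbers purely to show the sign turning positive; it is not drawn from the thread.)

```python
# The chi/X metric as quoted above, applied to the 13-spades hand.
from math import comb, log2

def x_metric(p_s: float, p_t_given_h: float) -> float:
    return -log2(1e120 * p_s * p_t_given_h)

p_all_spades = 1 / comb(52, 13)                    # 1 in ~6.35 * 10^11 possible 13-card hands
print(x_metric(p_s=4, p_t_given_h=p_all_spades))   # ~ -361: well below 1, no design inference

# A hypothetical pattern at odds of 1 in 10^180 with p_S = 10^20, for contrast:
print(x_metric(p_s=1e20, p_t_given_h=1e-180))      # ~ +133: above 1, a specification
```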
vjtorley [112]:
I should add that when performing this ahistorical calculation of CSI, duplications occurring within the pattern should be assumed NOT to be independent events. Only later, when we become familiar with the history of the pattern, can we assess whether this assumption is in fact correct or not.
vjt: I don't share your reticence when it comes to gene duplications. As you've pointed out above in your analysis, you would have some pattern and then, per algorithmic information theory (Chaitin-Kolmogorov), the complexity increase is a handful of bits. Now I know that MathGrrl wants to shove C-K theory aside, but in Dembski's SP it is there when you're dealing with "phi", so to presume that C-K theory need be jettisoned in this instance is to succumb to a kind of extremism. Let's face it, the classic definition of CSI that Dembski gives, in terms of improbability, and hence, complexity, is a simple variant of Shannon information. These are all tools when it comes to our handling of information, and there is no need to jettison them at all. CSI simply attempts to get at a kind of information that is intuitive to us, and not to machines. So, in the end, I'm curious as to your reticence. PaV
KF: You're doing great work. Keep it up. Just a thought---prompted by KF's quotes from Crick: Here we have Darwinist/naturalists telling us that we don't know what kind of probability distribution is at work when it comes to the cell's DNA (tantamount to saying that some kinds of natural laws are at work in the formation of the DNA that is hidden to us), and then telling us that the genome could be assembled through "random processes". To anticipate future inanities, let's pretend the "Life" magically appeared---some aliens spit it down from the skies. "Well, then," we would be told, "once this life appeared, Darwinian processes took over." And what were those "processes"? RANDOM mutations. So, once again, they assume that mutations can take place randomly, while at the same time maintaining that there MUST BE some kind of "law-like" behavior present in DNA and which remains 'hidden' to us. Just like those pesky "intermediate fossils" are "hidden"! PaV
utidjian [113]: So uh, why not do that one? If you've read my posts---all of them---then you would know that Schneider considers his ev program the cream-of-the-crop when it comes to such EA programs. And, his output falls below the needed level of bits---per Dembski's NFL definition---to rise to CSI. So, it becomes an impossible task. I can't give you a definition for CSI in a specification that DOESN'T contain CSI. Now, as I understood from the very beginning, MathGrrl was only interested in how one generates the 'chance hypothesis'. She's now admitted that. In the case of ev, it is very involved because you're dealing with random and non-random processes, which means that you would almost have to 'derive' a chance hypothesis. Well, that's a lot of work. If MathGrrl is interested in learning "how" CSI works, then she could have---should have---asked for one example. Or, quite simply, she could have just come out and said: "I have a hard time seeing what the chance hypothesis is in these 'scenarios'. Can you help?" But, instead, she makes a demand---in a way that can only be described as acting with great hubris---thinking that because she's having a hard time, they'll have a hard time too: "Let's put them on the spot and see just how slippery a concept CSI is." Now, admittedly, it is indeed hard. But where's the humility and courtesy? I've now included two bit strings which can be a helpful exercise. Then, if she wants, she can apply it to ev, or whatever she wants to apply it to. But, to a thinking, reasoning individual, this would be a big waste of time. Why? Because her concern, apparently, is either to show that these scenarios contain CSI (which they don't), or to show that a rigorous mathematical definition of CSI isn't possible. In both these instances, these are poor examples. Now, if some computer program could be shown to generate an output containing sufficient improbability to warrant a "design" assignation, and she could demonstrate that, indeed, the chance hypothesis associated with the program did not reduce the improbability of the result---that is, that it was "effectively" random throughout---this then would be remarkable in many ways, and at many levels. Till then, she should save her time and energy for other matters. Just my advice. PaV
F/N: Please see my last comment on the calculation thread. The gap between the new talking point -- no "rigorous" mathematical def'n, therefore no reality/meaningfulness or utility to the CSI concept -- and reality, per Orgel and Wicken in the 1970's, is becoming blatant. Who are we to believe, objectors who pretend that the genetic code is not a code, and who want to pretend that the validity of the CSI concept depends on models being to their taste [while studiously ignoring the Durston metric with 35 published values of FSC in FITS in the peer reviewed literature], or Crick, Orgel and Wicken: _____________ Crick, March 19, 1953 in letter to son, Michael: >> Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another) . . . >> Orgel, 1973 on specified complexity: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> Wicken, 1979, on functionally specific, complex, wiring diagram organisation: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >> ____________________ It is indeed interesting to discuss models, analyses and metrics, but we must first be clear about fundamental realities and their implications. What is the only empirically known source of codes, language, algorithms and programs? What is the empirically known source of wiring diagram based functional organisation? What is the empirically known source of functionally specific complex organised patterns of symbols in strings that express messages in language beyond say 143 ASCII characters? What do we know from the infinite monkeys analysis about the capacity of chance and trial and error based on chance configurations? GEM of TKI kairosfocus
Joseph: The two bit-strings I posted will give a very straightforward look at what a pattern is, and the chance hypothesis that is associated with it. We can even go the route of a rejection region (which, of course, will not be enough since the bit string is not long enough; but the exercise will be good to do). It's now a question, I believe, of MathGrrl's sincerity. Does she sincerely want to learn? PaV
PaV @ 111:
ONE WOULD HAVE BEEN SUFFICIENT.
So uh, why not do that one? I think Mathgrrl claims she didn't even get one of her questions answered... though perhaps she knows more about what CSI might be. I want to thank Mathgrrl and all the UD regulars for such an interesting discussion. I really learned quite a bit about CSI from these threads here and meta-discussions elsewhere. Especially since, by your own admission, none of you were up to doing the math (as it were) but gave it your best shot anyhow. -DU- utidjian
PaV- I feel your pain but I did try to warn people about what they were getting into. Joseph
Onlookers, please note this as well: If MathGrrl came to UD wanting to really "know more" about CSI so she could "understand it better", then she would have: (1) informed herself better before coming here; (2) would not have made the unpardonable mistake of asking for a definition for CSI in instances where anyone with a rudimentary understanding of CSI would, just on the face of it, realize that CSI isn't present in some of the "scenarios" she espoused; (3) would NOT HAVE ASKED that FOUR SCENARIOS be given a "rigorous mathematical definition" of CSI. ONE WOULD HAVE BEEN SUFFICIENT. PaV
MathGrrl [66]:
The two broad areas where I find Dembski’s description wanting are the creation of a specification and the determination of the chance hypothesis.
I want every onlooker, and everyone here at UD, to notice that: (1) I was exactly right about MathGrrl's intentions: she wants us to do her dirty work. I have contended from the outset that her request was outrageous given the difficulty involved in formulating the "chance hypothesis". She has just stated that. She doesn't know how. She doesn't want to learn how. She wants us to do it for her. (2) please notice, as I've written before, that she wants us to do the IMPOSSIBLE. MathGrrl, there is NO CSI, as defined in NFL directly, or "specified complexity" as defined indirectly, in the ev program of Thomas Schneider. It is fabulously easy to ascertain. All you have to do is to look at his "output" string. There it is: a bit string that is 265 bits long. Thus, its "complexity" is no more (and actually quite a bit less) than 2^265. This is well below the UPB of 10^150 in NFL. In SP (the Specification Paper of Dembski), he uses the 10^120 limit of total quantum computational steps in the entire universe used by Seth Lloyd. In a footnote (I know you don't read footnotes), he says that the 10^150 figure is, IIRC, still the "stable" UPB for CSI. Can you please tell me how I can give a rigorous mathematical definition of CSI for the ev program? If you can't, or won't give an answer to that, then you don't deserve a minute's more attention here at UD. Why don't you be honest and just admit your real reason for coming here: you're hoping someone will work out the chance hypothesis for the ev program for you. This is what Schneider says of his ev program: "An advantage of the ev model over previous evolutionary models, such as biomorphs, Avida, and Tierra, is that it starts with a completely random genome, and no further intervention is required." So, ev is the best you can do. And it doesn't rise to the level of CSI. So, if you want us to come up with the "chance hypothesis" for ev, then just tell us, instead of laying down the gauntlet by demanding a definition of CSI for the "descriptive" specifications you gave---which I'm sure you did because you were emboldened by Dembski's three-fold understanding of a "pattern". Here's my answer to your request: Go jump in a lake! @[68]
Interestingly, this has been noted before, but no ID proponents have addressed the problem. Wesley Elsberry and Jeffrey Shallit reviewed Dembski’s CSI concept back in 2003 and noted a number of challenges for ID proponents: 12.1 Publish a mathematically rigorous definition of CSI . . . (That first one sounds really familiar for some reason.) Each of these is explained in more detail in the paper.
Yes, indeed, it does sound familiar. And it identifies you for what you are: not someone interested in CSI, but a foe of CSI, as are Shallit and Elsberry. Again, @[68]:
If an ID proponent were interested in demonstrating the scientific usefulness of CSI, he or she could do worse than to address Elsberry’s and Shallit’s challenges.
That's not true. I challenged Shallit years ago. I had read his paper---which I was very unimpressed with---and told him that there were a number of areas where I thought he was wrong, and would like to discuss any one of them with him. I asked him to choose an area. He wouldn't choose. So, I picked an obvious place where he was wrong: the "pseudo-unary" algorithm. As the discussion unfurled, lo and behold, I proved beyond a doubt that the rejection regions involved were the same both 'before' and 'after' the conversion, completely negating his claim that "information" had been "created", contrary to Dembski's Law of Conservation of Information. At the same time, I gave him two bit strings to examine using his vaunted SAI (which is a hoot, it is so ill founded). One string was randomly-generated by tossing a coin; the other was "designed" by me. He couldn't tell the one from the other. Interesting. So, if you're interested in how a "chance hypothesis" works, let's take a look at those two strings: String #1: 1001110111010101111101001 1011000110110011101111011 0110111111001101010000110 1100111110100010100001101 1001111100110101000011010 0010101000011110111110101 0111010001111100111101010 11101110001011110 String #2: 1001001101101000101011111 1111110101000101111101001 0110010100101100101110101 0110010111100000001010101 0111110101001000110110011 0110100111110100110101011 0010001111110111111011010 00001110100100111 Now, MathGrrl, which is which? Are you game? PaV
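(Out of curiosity, here is the kind of crude, assumption-laden sketch -- Python, and emphatically not Shallit's SAI or a CSI calculation -- that one might run on PaV's two bit strings after pasting them in with the spaces removed. Strings this short are really too small for such summary statistics to settle anything; this only shows what a first-pass look could involve.)

```python
# Crude summary statistics for the two 192-bit strings quoted above.
import zlib

string_1 = "..."   # paste String #1 from the comment above, spaces removed
string_2 = "..."   # paste String #2 from the comment above, spaces removed

def longest_run(bits: str) -> int:
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def summarize(bits: str) -> dict:
    return {
        "length": len(bits),
        "fraction_of_ones": round(bits.count("1") / len(bits), 3),
        "longest_run": longest_run(bits),
        "zlib_bytes": len(zlib.compress(bits.encode())),  # rough compressibility proxy
    }

for name, s in (("String #1", string_1), ("String #2", string_2)):
    print(name, summarize(s))
```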
Joseph, Computer programs must not exist. Or maybe they are supernatural so they are not within the purview of science. Collin
Can anyone produce a mathematically rigorous definition of a house? Do you have a 'house' equation? I asked MathGrrl for a mathematically rigorous definition of a computer program, but she refused my request. Computer programs, computers, cars, houses (built to code), etc., etc., all contain and are made from Complex Specified Information. If CSI needs a mathematically rigorous definition then it follows that everything containing / made from CSI should have the same 'fate'. If not how can you say those things exist? :o So how about it- any equations for a computer program? IOW if I give you a program can you, MathGrrl, JR, MarkF, reduce it to an equation? Joseph
From my experience in the "soft sciences" (sociology, psychology etc) I would say that if Mathgrrl demanded such rigorous standards of experts in those fields, then they would never publish anything. 90% of what they do would not be counted as science to her because the concepts (like "motivation" or "having a father in the home") cannot be as rigorously defined as she would require. I would also point to the use of medicines like anti-depressants that often only work for 50% of people and the FDA does not know exactly how or why they work. For example see lamotrigine's use in bipolar disorder (http://en.wikipedia.org/wiki/Lamictal - look under "mechanisms of action"). The FDA seems to have lower standards for drug prescriptions than Mathgrrl has for CSI. Collin
uoflcard (#101) Thanks for your post in response to Jemima Racktouey (#60). I couldn't have put it better. vjtorley
I remember when the argument against ID was that "no real scientist supported it", then "they don't publish papers", then "they don't do research", then "they don't make predictions." Now, they don't publish enough; they don't predict enough; they don't research enough; not enough scientists support it. Translation: they'll accept it when the so-called "scientific consensus" accepts it. William J. Murray
Mathgrrl (#96) Thank you for your post. Please see my comments in #100 above, paragraph 2. You also write:
Either CSI can be calculated without reference to the historical provenance of the object under investigation or that history must be considered. You can't have it both ways.
In my post, I defended the claim that for an arbitrary complex system, we should be able to calculate its CSI as being (very likely) greater than or equal to some specific number, X, without knowing anything about the history of the system. I should add that when performing this ahistorical calculation of CSI, duplications occurring within the pattern should be assumed NOT to be independent events. Only later, when we become familiar with the history of the pattern, can we assess whether this assumption is in fact correct or not. vjtorley
A footnote, while waiting for customer service . . . VJT: a good effort, but the exchanges serve to amply illustrate the basic point that the root objections are deeper than actual provision of a CSI metric, a calculation and a rationale. It seems that there is still the a priori fixed concept that the CSI concept is inherently dubious, and would only be satisfactory if it can jump arbitrarily high hurdles, one after the other. Hurdles that go far beyond what a concept or description answering to empirical reality should have to face. So, it is still necessary to highlight that the CSI and related FSCI concepts are rooted in the observations of Orgel and Wicken in the technical literature in the 1970's, and that more broadly, they answer to commonly observed features of objects in the real world, the joint appearance of complexity and specification, leading to a distinction from mere complexity.

Further, I note that it is often possible to have an empirical criterion, such as observable function, that can cluster particular configurations in relevant groups. And, when islands of function are sufficiently isolated in a space of possible configs, then being on the island is significant. Now, too, the trivial objection that any one of n configs is 1/n of all possibilities ignores the functional/nonfunctional clustering distinction: the macroscopically distinct clusters of states are not one state in size, so the same issue of overwhelming relative statistical weight of one macrostate over another prevails as in statistical thermodynamics; the foundation of the 2nd law. Namely, if we are not picking states intelligently, states from clusters sufficiently isolated in the space of configs will be too rare to show up. (BTW, the reason why lotteries are won is that they are designed to be won . . . they do not run into the config space scope issue we are facing.)

Similarly, to arrive at a self-replicating, metabolising entity that stores coded instructions to build a fresh copy of itself, one must meet the von Neumann kinematic replicator cluster, which is irreducibly complex. Once such a construct exceeds 125 bytes [=1,000 bits] worth of specification, it is beyond the reasonable reach of the blind search resources of our cosmos. 125 bytes is a very short stretch of space to build a program to do anything of consequence (much less a self-replication facility that specifies as well a separate metabolic entity that is to be replicated), and so the "747 threshold" is actually far more complicated than is needed to be beyond the credible search capacity of the observed cosmos. Accordingly, the attempted brushing aside above, on a tornado in a junkyard forming a 747, ducks the material point. (In praxis, the observed von Neumann replicators start at about 10,000 bytes, and the sort of novel body plans we see in the Cambrian fossils, run to probably 1-10+ mn bytes, dozens of times over.)

Finally, I note that the WAC as noted is on the simple X-metric for FSCI, which is different from the Dembski metric or the Durston et al metric, but all three make the same basic point. The X-metric does this by using a brute force approach: at 125 bytes, the number of states for the cosmos as a whole [which is where the Dembski 500 bits threshold comes from] is no more than 1 in 10^150 of the possible configs. There is therefore excellent reason to expect a search of 1 in 10^150 of a space, uninstructed by intelligence, to round down to zero in practical terms.
If we see a functionally specific complex organised entity that has in it at least 125 bytes of FSCI -- information that has to have a fairly tight cluster of possible specific patterns, to work -- it is best explained on the only observed source for such FSCI: design. The objections we have been seeing for weeks now pivot on not having an observationally anchored answer to this challenge. That is why there is a pretence that the concepts CSI and FSCI cannot be meaningful absent an exacting mathematical definition and metric, why there are all sorts of ever-rising hurdle objections to models, and metrics and calculations [whether simple as in the X-metric, or more complex as VJT has provided], and why the Durston case of FSC values for 35 protein families on an extension of the Shannon H metric of average info per symbol, is passed over in silence. It may not be politically correct, but it is empirically well warranted to conclude that FSCI, or more broadly CSI, is an excellent, reliable indicator of design. GEM of TKI kairosfocus
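(A sketch of the simple X = C*S*B metric described in the comment above and in WAC 27, in Python. Both judgments -- whether the item is functionally specific, and how many bits of storage capacity it needs -- are supplied as inputs, and the example bit counts are illustrative only.)

```python
# Simple brute-force FSCI metric: X = C * S * B, with C = 1 only past the
# 1,000-bit threshold and S = 1 only if judged functionally specific.

def x_fsci(bits: int, functionally_specific: bool, threshold: int = 1000) -> int:
    C = 1 if bits >= threshold else 0
    S = 1 if functionally_specific else 0
    return C * S * bits      # FSCI in functionally specific bits, else 0

print(x_fsci(600, True))     # 0: specific but below the 1,000-bit threshold
print(x_fsci(1200, True))    # 1200 functionally specific bits
print(x_fsci(1200, False))   # 0: complex but not judged functionally specific
```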
Mathgrrl states: "How exactly would one formulate a falsifiable hypothesis for a metric that cannot be measured even in theory?" I again ask: if there is no metric that can measure or validate X (ID CSI), how can one reach a finding of not-X? If there can be no reasonable, scientific means to come to a conclusion of X, then there can be no reasonable, scientific means of coming to a conclusion of not-X, which makes Darwin's theory that evolution could be accomplished without intelligent guidance non-scientific, and not reasonable. "It doesn't look designed" is no more an argument against design than "It looks designed" is an argument for it. The claim: "Unintelligent nature can compile small variations over time into the eventual formation of an organized, complex, functioning feature" is no more valid a statement than the converse, because, according to you, there is no metric for making such a determination. While it is fine to assume such a premise as the heuristic for one's investigation, it is not fine to pronounce that assumption as a scientific fact. In other disciplines, it is not claimed as scientific fact, for instance, that "the behavior of all celestial objects can be completely described through natural law and chance". Chance and natural law are not asserted as factually complete explanations in any other scientific discipline that I'm aware of. However, when it comes to biological evolution, it is positively asserted and vehemently defended as scientific fact that chance and non-intelligent, non-teleological processes, like random mutation and natural selection, are sufficient explanations. In order to make that positive claim, that chance and non-teleological, non-intelligent processes are sufficiently explanatory, there must be a "not-X" metric, and consequently a metric for determining X. If, as you claim, there is no such metric, then it cannot be claimed as fact that unintelligent forces are sufficient to explain evolution. William J. Murray
JemimaRacktoney #60, First...
If the “process” was teleological I think we’d see a bit more evidence of it. After all, the entire universe empty of life despite teleological guidance? Not much teleological guidance going on there if you ask me. Perhaps it’s local to our solar system? Or how do you explain that apparent contradiction – is the universe designed for life, but just 1 planet’s worth? Seems like a bit of a waste of a universe to me. More likely the universe is designed for gas clouds and black holes then us, if designed at all…
This is an unscientific argument. You are arguing that the designer did not create a very efficient system, if it was designed for humans. You are making arguments (or simply assumptions) about the designer's intentions. Perhaps they intended to create humans and to give them a vast Universe to explore and marvel at? ID does not attempt to address this issue as it is only interested in scientific questions. And from ID's standpoint, even a single example of CSI is validation of the theory. But that doesn't mean ID advocates aren't interested in or have opinions about these types of issues, independent of ID theory. And now about...
vjtorley
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
Heads you win, heads you win eh?
and..
(vjtorley)Darwinists don’t like this conclusion, as they want their theory to be non-teleological.
Perhaps they don’t like it because it’s not supported by any evidence? After all, when I said:
"But, as I say, such biases were built in from the start."
Then you said:
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
But earlier you said
Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix.
So which is it? Nature either has a hidden bias or it does not. I’d call “teleological guidance” the ultimate “hidden bias”.
First, judging solely from the quotes you cited, VJ does not claim that nature contains hidden biases and then contradict that claim by saying that it doesn't. He first says that if a process DOES have a bias to produce CSI, then it's teleological, but then affirms that nature (from what we currently know) does not contain any of these biases. They are not contradictory statements. Combining the two, it might read something like: Nature contains no hidden biases towards creating CSI, but if it is discovered that it does, then nature itself must be teleological. Now about your apparent argument that the following idea is circular (or some other type of logical fallacy): any process with a bias to produce the specified information we find in living things must itself be teleological. If the laws of physics turn out to repeatedly produce the genome of the first living organism in the correct environment, and if that genome is CSI as defined by an agreed-upon calculation, then the laws of physics themselves are now complex and specified, and hence teleological. The specification does not just disappear into nature, the materialist's bosom; it just raises the question of where THAT specification came from. Look at another example. Let's say you have a lump of clay and a falling brick. The brick hits the clay and bounces off. Now the clay reads "HELLO". The last step to produce this CSI was simply gravity acting between the Earth and the brick, then electromagnetic surface forces preventing the clay molecules and brick molecules from becoming mixed together. So gravity and electromagnetism, teleologically unbiased laws of nature, are responsible for the production of CSI? ID is defeated! But wait, it doesn't stop there. It turns out that the negative imprint of "HELLO" is raised on the brick. So the brick is biased to produce CSI, therefore it is teleologically biased, therefore designed. Where there is CSI, there must be design somewhere back in the chain of causation. uoflcard
Markf (#90) Re point (1) of the five points you raised earlier in #9: I think you've established your point, on the basis of the quotes you presented. There does seem to have been an error in the formula used. By the way, I'll have a post up soon on an alternative metric for CSI which is much more hard-nosed and empirical. That should please you and Mathgrrl. Good luck with your assignment! vjtorley
MathGrrl:
How exactly would one formulate a falsifiable hypothesis for a metric that cannot be measured even in theory? Information can be measured. Specification can be observed and perhaps measured. Complexity can be measured.
Joseph
Collin, SETI is looking for artificiality- i.e. something that nature, operating freely, could not produce. MathGrrl is stuck on the fact that CSI is just ONE tool in ID's tool-box of design detection. She thinks it is the only tool. Joseph
Collin,
Does anyone know what criteria SETI uses? Do they have a “rigorous mathematical” formula to detect intelligent signals from space?
SETI is actually looking for very simple signals. There are details available in their FAQ: http://www.seti.org/page.aspx?pid=558 MathGrrl
vjtorley,
With respect to my modified version of your condition (viii), which required a demonstration that a CSI of greater than 1 is a reliable indicator of intelligent agency, you asked:
Do you agree that it is essential?
Yes. I would however add that the demonstration need not be an a priori mathematical one, but an experimental one – i.e. the statement has yet to be falsified, despite repeated attempts to do so.
How exactly would one formulate a falsifiable hypothesis for a metric that cannot be measured even in theory?
Nowhere has CSI been calculated objectively and rigorously for any natural system. This claim is baseless.
Please define “objective” and “rigorous.” If you mean “measurable in a laboratory” then I’m afraid you’re wasting your time; that was the whole point of my thread.
That is what I meant, and I appreciate your clear statements about the limitations of CSI as a metric.
You add:
Wesley Elsberry and Jeffrey Shallit reviewed Dembski’s CSI concept back in 2003 and noted a number of challenges for ID proponents
I agree. CSI should be able to meet these challenges.
I'm just going to bask in the glow of our agreement here for a moment (seriously, it's nice). Okay, back to the rest of your post.
CSI should indeed be able to identify the features of intelligent agency in an object "even if nothing is known about how they arose", but where duplication of a feature in a complex system occurs, there may be legitimate uncertainty as to whether the two occurrences of the feature in the system are dependent or independent of one another.
I can't see how to interpret this without concluding that you're contradicting yourself. Either CSI can be calculated without reference to the historical provenance of the object under investigation or that history must be considered. You can't have it both ways. MathGrrl
Does anyone know what criteria SETI uses? Do they have a "rigorous mathematical" formula to detect intelligent signals from space? Collin
VJTorley, Perhaps an experiment can be done to verify or falsify CSI. A group should gather 100 objects of known origin: 50 of them known to be man-made but looking like they might not be, and 50 known to be natural but looking like they might be designed. Then gather several individuals who independently use CSI to test which objects are artificial and which are natural. If they are consistent and correct, then CSI has resisted falsification. Some of these objects could be codes or hieroglyphs, which would be fairly easy to fake (or which could be real). By the way, I think that we could all benefit from a professional cryptologist's take on this discussion. Anyone know one? I used to... Collin
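(One way the blind test Collin proposes could be scored, sketched in Python with entirely made-up verdicts just to show the bookkeeping: tally each rater's accuracy against the known origins, and check how well the raters agree with one another.)

```python
# Scoring a hypothetical 100-object blind test: 50 designed, 50 natural.
ground_truth = ["designed"] * 50 + ["natural"] * 50

def accuracy(verdicts):
    return sum(v == t for v, t in zip(verdicts, ground_truth)) / len(ground_truth)

# Made-up verdicts from two independent raters applying a CSI-based test:
rater_a = ["designed"] * 48 + ["natural"] * 52
rater_b = ["designed"] * 51 + ["natural"] * 49

print("rater A accuracy:", accuracy(rater_a))   # 0.98
print("rater B accuracy:", accuracy(rater_b))   # 0.99
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print("inter-rater agreement:", agreement)      # 0.97
```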
“Elsberry and Shallit would be better served focusing on their position. After all it is their failure to support their position that has allowed ID to persist.” QuiteID:
Joseph, this is strategically wrong.
I strongly disagree.
Responses such as Elsberry and Shallit constitute a real advance, because at least they take ID seriously.
No, they don't. Their paper demonstrates they don't.
The proper scientific way to respond is to take apart those arguments carefully and with respect, not dismiss them cavalierly.
Their "argument" should be producing positive evidence for their position. THAT is how you properly attack ID. Strange how long-time atheist Antony Flew took ID seriously... Joseph
QuiteID:
My feeling is that ID has misspent its energy by focusing on popularization before establishing impact in the scientific community.
Methinks your alleged scientific community is full of bias, and other stuff. Joseph
QuiteID:
Joseph, with all due respect, ID has largely been a non-starter within the scientific community.
That same "scientific community" that cannot produce positive evidence for their position? That same "scientific community" that has to erect a strawman of ID and attack the strawman? Do you have any idea what you are talking about? Joseph
vj #55 I am going to have to drastically cut back on the time I put into this – ironically, I am falling behind on an assignment in a maths course. So I am going to stick to the most trivial point, because it is easy and also because it is new to me.
Dembski is not trying to calculate the probability of at least one event having outcome x. As I see it, the n serves as a multiplier, to give the expected number of events having outcome x (E=np), given the long history of the universe. That’s why the 10^120 multiplier is used.
I think not.  (a) Throughout the paper he refers to Phi_s(T).P(T|H) as being a probability. For example, on page 18 he writes:
consider first that the product Phi_s(T).P(T|H) provides an upper bound on the probability (with respect to chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is not more than T and whose probability is no more than P(T|H).
This theme is continued later in the paragraph, where he writes:
That’s what Phi_s(T).P(T|H) computes, namely, whether of all the other targets T~ for which P(T~|H) ≤ P(T|H) and Phi_s(T~) ≤ Phi_s(T), the probability of any of these targets being hit by chance according to H is still small.
On page 19 he writes:
Note that putting the logarithm to the base 2 in front of the product Phi_s(T).P(T|H) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information.
(b) If Phi_s(T) were meant to refer to the expected value then the value of the "information" –log2(Phi_s(T).P(T|H)) would have no relation to other similar definitions of information which are negative logs of probabilities – including his own writing on such things as the law of conservation of information. (c) If Phi_s(T) were meant to refer to the expected value then whenever it exceeded 2 the negative logarithm would be negative and we would have negative CSI – a concept which has never cropped up before in my experience. Hey - even geniuses make elementary errors. I am just surprised it has taken so long for this to emerge (or maybe someone else did spot it and I didn't know). markf
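(A small numeric illustration, in Python, of the quantity in dispute here, reusing the all-spades numbers from earlier in the thread. However one reads the product 10^120 * Phi_s(T) * P(T|H) -- as an upper bound on a probability or as an expected count -- the metric goes negative whenever that product exceeds 1.)

```python
# Sign behaviour of chi = -log2(10^120 * Phi_s(T) * P(T|H)) for the all-spades hand.
from math import comb, log2

phi_s = 4
p = 1 / comb(52, 13)            # ~1.57e-12
product = 1e120 * phi_s * p     # ~6.3e108, far greater than 1
chi = -log2(product)
print(f"product = {product:.2e}, chi = {chi:.1f}")   # chi ~ -361.4, i.e. negative
```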
oops, question answered. paragwinn
vjtorley, how did you arrive at your definitions of 'calculable' and 'computable'? paragwinn
Mathgrrl (#68, 69, 70) Thank you for your posts. With respect to my modified version of your condition (viii), which required a demonstration that a CSI of greater than 1 is a reliable indicator of intelligent agency, you asked:
Do you agree that it is essential?
Yes. I would however add that the demonstration need not be an a priori mathematical one, but an experimental one - i.e. the statement has yet to be falsified, despite repeated attempts to do so. Concerning this assertion, you wrote:
Nowhere has CSI been calculated objectively and rigorously for any natural system. This claim is baseless.
Please define "objective" and "rigorous." If you mean "measurable in a laboratory" then I'm afraid you're wasting your time; that was the whole point of my thread. But if you mean something less demanding than that, please explain. You add:
Wesley Elsberry and Jeffrey Shallit reviewed Dembski’s CSI concept back in 2003 and noted a number of challenges for ID proponents...
I agree. CSI should be able to meet these challenges. I gave uoflcard a list of "grey areas" (uncertain artifacts) in an earlier post - e.g. Yonaguni in Japan. I'd be happy to have a stab at calculating the CSI, but it will be rough, based on the little I know. In #69 you ask:
If you agree with Dembski that CSI can be calculated with some degree of precision for a particular artifact, why do you raise the issue of calculable vs. computable, using your definitions?
The difference between these terms, as I define them, lies in the fact that a computation requires only a physical description of the system, whereas a calculation may require more - in this case, a semiotic description. If you like, a computation is more "mechanical" than a calculation. That's just the way I use these terms; others may use them differently.
Finally, in #70, on the subject of gene duplication, you write:
That is, however, how CSI works, according to Dembski. It should be able to identify the features of intelligent agency in an object "even if nothing is known about how they arose". If you modify the definition of CSI to eliminate this capability, you are once again simply measuring our ignorance about how an object came to be rather than clearly identifying intelligent agency.
Short answer: CSI should indeed be able to identify the features of intelligent agency in an object "even if nothing is known about how they arose", but where duplication of a feature in a complex system occurs, there may be legitimate uncertainty as to whether the two occurrences of the feature in the system are dependent or independent of one another. vjtorley
Quite, one more thing before I sign off, 'true' science is not even possible without God! Thus since science is in fact based on the reality of God, I REALLY don't think ID will ever be 'outside of science', ,,,when you really think about the implications of the materialistic/atheistic worldview, it is a wonder that a naturalist/materialist is capable of any consistency in his thoughts and behaviors at all! Can atheists trust their own minds? – William Lane Craig – video http://www.youtube.com/watch?v=byN38dyZb-k ‘But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?’ – Charles Darwin THE HISTORIC ALLIANCE OF CHRISTIANITY AND SCIENCE Excerpt: Christian philosopher Alvin Plantinga gives his opinion: “Modern science was conceived, and born, and flourished in the matrix of Christian theism. Only liberal doses of self-deception and double-think, I believe, will permit it to flourish in the context of Darwinian naturalism.” http://www.reasons.org/historic-alliance-christianity-and-science notes: This following site is a easy to use, and understand, interactive website that takes the user through what is termed ‘Presuppositional apologetics’. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place. Proof That God Exists – easy to use interactive website http://www.proofthatgodexists.org/index.php Nuclear Strength Apologetics – Presuppositional Apologetics – video http://www.answersingenesis.org/media/video/ondemand/nuclear-strength-apologetics/nuclear-strength-apologetics Materialism simply dissolves into absurdity when pushed to extremes and certainly offers no guarantee to us for believing our perceptions and reasoning within science are trustworthy in the first place: Dr. Bruce Gordon – The Absurdity Of The Multiverse & Materialism in General – video http://www.metacafe.com/watch/5318486/ THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010 Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes. http://www.faqs.org/periodicals/201008/2080027241.html bornagain77
QuiteID, I'm sure my personal effect on science is probably as pointless as the nihilistic philosophy of materialist :) well maybe not that bad, but close :) bornagain77
All true. And yet your approach will leave us outside of science, smugly sure that we're right but having no effect on science itself. QuiteID
QuiteID, Buddy we ain't even on the same page as far as thinking about this :) , for I think Darwinism is absolutely the dumbest idea ever taken seriously by science!!!, and you are concerned about ID looking bad??? :) You have got to be kidding me!!! Have you looked at the cell recently? Molecular Biology Animations - Demo Reel http://www.metacafe.com/watch/5915291/ QuiteID, you go ahead and worry about ID looking bad; I'll just let the evidence make Darwinists look bad. bornagain77
bornagain77, do you want ID to be taken seriously as science or do you want it to remain on the margins? Your path leads to the latter result. QuiteID
QuiteID, let's see: scientifically, Darwinists have no materialistic basis to stand on in the first place on which to make their materialistic conjectures, which was the point of my 'other issues' cites; and though their foundation in science is completely removed by this vein of evidence I was alluding to, which I could extend much further, you are having trouble seeing the point??? QuiteID, if Darwinism has no foundation within empirical science it has no right to have respect, period!!! I don't know, QuiteID, perhaps you somehow think Darwinism can operate dangling in thin air with no materialistic foundation in which to support its conjectures, but their position kind of reminds me of this: Wile E. Coyote vs Road Runner http://www.youtube.com/watch?v=hz65AOjabtM bornagain77
bornagain77, "etc. etc." is right. I have a hard time understanding the point of your posts, which tend to be long lists of links to videos. I'm talking about the best way for ID to respond to specific critiques as science. Gesturing triumphantly toward a whole set of other issues is not the way to get respect from scientists no matter what the issue. QuiteID
"Elsberry and Shallit would be better served focusing on their position. After all it is their failure to support their position that has allowed ID to persist." Joseph, this is strategically wrong. Responses such as Elsberry and Shallit constitute a real advance, because at least they take ID seriously. The proper scientific way to respond is to take apart those arguments carefully and with respect, not dismiss them cavalierly. QuiteID
QuiteID, scientifically the burden really is on Darwinists, no matter what the 'popularity may or may not be of ID, to actually prove that material processes can generate information,, for we know for a fact that 'mind' generates information,, almost as a force of habit; i.e. ID has the upper-hand over Darwinism as far as science is concerned, in reasoning, just as Darwin did in originally presenting his theory, i.e. ID is inferring from the only known presently acting cause sufficient to explain events in the remote past! Stephen C. Meyer - The Scientific Basis For Intelligent Design - video http://www.metacafe.com/watch/4104651/ ------- further notes, Despite what Darwinists claim,,, the mechanism for Theistic ID is certainly in place; "It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness." Eugene Wigner (1902 -1995) from his collection of essays "Symmetries and Reflections – Scientific Essays"; Eugene Wigner laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963. http://eugene-wigner.co.tv/ Here is the key experiment that led Wigner to his Nobel Prize winning work on quantum symmetries: Eugene Wigner Excerpt: To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another. http://www.reak.bme.hu/Wigner_Course/WignerBio/wb1.htm i.e. In the experiment the 'world' (i.e. the universe) does not have a ‘privileged center’. Yet strangely, the conscious observer does exhibit a 'privileged center'. This is since the 'matrix', which determines which vector will be used to describe the particle in the experiment, is 'observer-centric' in its origination! Thus explaining Wigner’s dramatic statement, “It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness.” Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - John Hopkins University Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper - “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 - “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007 (personally I feel the word "illusion" was a bit too strong from Dr. 
Henry to describe material reality and would myself have opted for his saying something a little more subtle like; "material reality is a "secondary reality" that is dependent on the primary reality of God's mind" to exist. Then again I'm not a professor of physics at a major university as Dr. Henry is.) http://henry.pha.jhu.edu/aspect.html Wheeler's Classic Delayed Choice Experiment: Excerpt: Now, for many billions of years the photon is in transit in region 3. Yet we can choose (many billions of years later) which experimental set up to employ – the single wide-focus, or the two narrowly focused instruments. We have chosen whether to know which side of the galaxy the photon passed by (by choosing whether to use the two-telescope set up or not, which are the instruments that would give us the information about which side of the galaxy the photon passed). We have delayed this choice until a time long after the particles "have passed by one side of the galaxy, or the other side of the galaxy, or both sides of the galaxy," so to speak. Yet, it seems paradoxically that our later choice of whether to obtain this information determines which side of the galaxy the light passed, so to speak, billions of years ago. So it seems that time has nothing to do with effects of quantum mechanics. And, indeed, the original thought experiment was not based on any analysis of how particles evolve and behave over time – it was based on the mathematics. This is what the mathematics predicted for a result, and this is exactly the result obtained in the laboratory. http://www.bottomlayer.com/bottom/basic_delayed_choice.htm With the refutation of the materialistic 'hidden variable' argument and with the patent absurdity of the materialistic 'Many-Worlds' hypothesis, then I can only think of one sufficient explanation for quantum wave collapse to photon; Psalm 118:27 God is the LORD, who hath shown us light:,,, etc..etc.. bornagain77
"If that majority had the evidence to support their position ID would have been a non-starter." Joseph, with all due respect, ID has largely been a non-starter within the scientific community. A few scattered papers in low- to medium-tier journals over the past twenty years do not constitute impact. My feeling is that ID has misspent its energy by focusing on popularization before establishing impact in the scientific community. Some seem to think that is a lost cause; I don't. QuiteID
QuiteID, The burden is on anyone making a claim. And even if ID didn't exist the anti-IDists still wouldn't have any positive evidence for their position. Also science isn't a democracy. If that majority had the evidence to support their position ID would have been a non-starter. Joseph
Joseph, the burden of proof is really on ID; given evolution's dominance of biology, that burden falls on those who dispute the views held by the vast majority of scientists. QuiteID
MathGrrl:
If an ID proponent were interested in demonstrating the scientific usefulness of CSI, he or she could do worse than to address Elsberry’s and Shallit’s challenges.
Elsberry and Shallit would be better served focusing on their position. After all it is their failure to support their position that has allowed ID to persist. What does your position have to offer so that we can compare it to CSI? Joseph
MathGrrl:
That is, however, how CSI works, according to Dembski. It should be able to identify the features of intelligent agency in an object “even if nothing is known about how they arose”.
Right: if CSI is present, it tells us it arose via a designing agency. What he is saying is that we do not need direct observation of the event.
If you modify the definition of CSI to eliminate this capability, you are once again simply measuring our ignorance about how an object came to be rather than clearly identifying intelligent agency.
The point is, as Dembski, Meyer, et al. have explained, that every time we have observed CSI and known the cause, it has always been via some designing agency. Always. We have never observed mother nature producing CSI. Never. And again, CSI is defined as 500 bits or more of specified information. Shannon took care of information, and specification is equivalent to meaning/function. In the case of biology it is function, just as Dembski stated. Joseph
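For a rough sense of scale on that 500-bit figure, here is a minimal sketch in Python, assuming the simplest possible model (uniform, independent symbols), which real sequences need not satisfy:

```python
import math

# Bits per symbol under a uniform model (an assumption, not a claim about
# real genomes): 2 bits per DNA base, log2(20) bits per amino acid.
bits_per_dna_base = math.log2(4)        # 2.0
bits_per_amino_acid = math.log2(20)     # ~4.32

# The 500-bit cutoff cited above (Dembski's universal probability bound of
# roughly 10^-150 corresponds to about 500 bits).
threshold_bits = 500

print(threshold_bits / bits_per_dna_base)    # ~250 DNA bases
print(threshold_bits / bits_per_amino_acid)  # ~116 amino acid residues
```

On this simplest model, 500 bits corresponds to a sequence on the order of 250 bases or a bit over a hundred residues; any bias away from uniformity lowers the per-symbol figure, so the length needed to reach 500 bits only grows.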
Collin, I don't know of any religious conversion, just that he accepted ID based on the scientific evidence. Joseph
MathGrrl, I have my own problem that I need help with, could you please show me a single violation of the 'fitness test'? For if you could show me a single violation of 'genetic entropy' by passing the fitness test I would be more than willing to calculate the 'FITS' (Functional Information Bits) gained since Darwinism would at least have a tenuous leg to stand on as far as empirical science itself is concerned!!! Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248 Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html For me MathGrrl, you are playing silly games until you address the actual evidence (or poverty of evidence from your position)!! ------------- OT notes to JemimaRacktouey, in conjunction with the 4-Dimensional evidence I cited at post 65, both 4-D power scaling in biology and 'transcendent' quantum information in biology, I think the following lines up extremely well for confirming that man has a 'higher dimensional' component to his being that is inexplicable to the Darwinian framework; It is also very interesting to point out that the 'light at the end of the tunnel', reported in many Near Death Experiences(NDEs), is also corroborated by Special Relativity when considering the optical effects for traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world 'folds and collapses' into a tunnel shape around the direction of travel as an observer moves towards the 'higher dimension' of the speed of light, with the 'light at the end of the tunnel' reported in very many Near Death Experiences: Traveling At The Speed Of Light - Optical Effects - video http://www.metacafe.com/watch/5733303/ The NDE and the Tunnel - Kevin Williams' research conclusions Excerpt: I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven.(Barbara Springer) Near Death Experience - The Tunnel, The Light, The Life Review - view http://www.metacafe.com/watch/4200200/ and this; Blind Woman Can See During Near Death Experience (NDE) - Pim von Lommel - video http://www.metacafe.com/watch/3994599/ Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of who reported vision during their Near Death Experiences (NDEs). 
21 of these people had had an NDE while the remaining 10 had had an out-of-body experience (OBE), but no NDE. It was found that in the NDE sample, about half had been blind from birth. (of note: This 'anomaly' is also found for deaf people who can hear sound during their Near Death Experiences(NDEs).) http://findarticles.com/p/articles/mi_m2320/is_1_64/ai_65076875/ Speed Of Light - Near Death Experience Tunnel - Turin Shroud - video http://www.vimeo.com/18371644 bornagain77
vjtorley,
As the foregoing article shows, I have changed my mind about whether gene duplication can increase CSI. I don’t think it can. Please see part (ii) of my article. I originally thought that P(T|H) was lower for a genome with a duplicated gene, but that’s because I was mentally picturing a longer string and reasoning that the probability of all the base characters arising randomly in the longer string is less than the probability of the characters arising randomly in a shorter string. But that’s not how gene duplication works.
That is, however, how CSI works, according to Dembski. It should be able to identify the features of intelligent agency in an object "even if nothing is known about how they arose". If you modify the definition of CSI to eliminate this capability, you are once again simply measuring our ignorance about how an object came to be rather than clearly identifying intelligent agency. MathGrrl
vjtorley,
Could you please clarify what you mean by this statement? Is it or is it not possible to measure, with some degree of precision, the CSI present in a particular artifact?
In answer to your second question: Yes, it is. In my post I gave the examples of Mt. Rushmore and the discovery of a monolith on the moon. In answer to your first question: as I use the terms, “calculable” means “capable of being assigned a specific numeric value on the basis of a mathematical formula whose terms have a definite meaning that everyone can agree on,” whereas “computable” means “calculable on the basis of a suitable physical description alone.”
If you agree with Dembski that CSI can be calculated with some degree of precision for a particular artifact, why do you raise the issue of calculable vs. computable, using your definitions? MathGrrl
vjtorley,
I would amend (viii) to read:
viii) It must be demonstrated that a CSI of greater than 1 is a reliable indicator of the involvement of intelligent agency.
That obviously depends on having a rigorous mathematical definition of CSI, but I don't think it changes my proposed criterion materially. Do you agree that it is essential?
Second, the demonstration already exists: it is an empirical one. The CSI Chi of a system is a number which is 400 or so bits less than what Professor Dembski defines as the specificity sigma, which is -log2[Phi_s(T).P(T|H)].
As demonstrated in the CSI thread, there is currently no mathematically rigorous definition of CSI. Dembski's terms are more than problematic to apply to real world systems.
Nowhere in nature has there ever been a case of an unintelligent cause generating anything with a specificity in excess of 400 bits.
Nowhere has CSI been calculated objectively and rigorously for any natural system. This claim is baseless.
This is a falsifiable statement; but it has never been falsified experimentally.
You've got the burden of proof backward. Scientists making claims of this nature are not only responsible for demonstrating that their hypothesis explains certain data, they also must attempt to falsify it themselves. Thus far, there are no objective calculations of CSI for any biological systems. In order to support your claim, you would need to show how to calculate CSI for some systems that are known to be the result of intelligent agency and some that are not. Interestingly, this has been noted before, but no ID proponents have addressed the problem. Wesley Elsberry and Jeffrey Shallit reviewed Dembski's CSI concept back in 2003 and noted a number of challenges for ID proponents:
12.1 Publish a mathematically rigorous definition of CSI
12.2 Provide real evidence for CSI claims
12.3 Apply CSI to identify human agency where it is currently not known
12.4 Distinguish between chance and design in archaeoastronomy
12.5 Apply CSI to archaeology
12.6 Provide a more detailed account of CSI in biology
12.7 Use CSI to classify the complexity of animal communication
12.8 Animal cognition
(That first one sounds really familiar for some reason.) Each of these is explained in more detail in the paper. If an ID proponent were interested in demonstrating the scientific usefulness of CSI, he or she could do worse than to address Elsberry's and Shallit's challenges. MathGrrl
bornagain77,
MathGrrl, I admire your tenacity for trying to get any leeway you can for showing that material processes may possibly be able to create functional information, even if you have to use Evolutionary Algorithms that are jerry-rigged to converge on that solution you so desparately want!
You misunderstand my intention. I simply want to learn the mathematically rigorous definition of CSI and get some detailed examples of how to calculate it for the scenarios I describe in the CSI thread. Can you assist me? MathGrrl
QuiteID,
MathGrrl, you may have already done this, so forgive me if this is a dumb question. But where, precisely, does Dr. Dembski’s “Specification” paper go wrong? I think people here might understand your challenge more if you pointed out the places where it’s particularly confusing or at odds with what you think.
The two broad areas where I find Dembski's description wanting are the creation of a specification and the determination of the chance hypothesis. The semiotic description underlying a specification is subjective, highly dependent on the background knowledge of the agent. This makes it very difficult, if not impossible, to calculate CSI objectively. It also increases the probability of false positives, since new knowledge can dramatically alter the calculation. Dembski sometimes seems to use a uniform probability distribution for the chance hypothesis, but he also defines "chance" so broadly that he includes evolutionary mechanisms which are not based on chance in the usual sense. Bringing in these historical contingencies seems to contradict the premises of his original question: "Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?" In addition to those two issues, the lack of detailed calculations for biological systems makes it very difficult to understand how to apply Dembski's concepts to that realm. I would also note that the confusion is not mine alone. On the CSI thread we are over 400 comments without anyone directly addressing the questions I raised in the original post. I find that level of disagreement and lack of evidence very surprising for such a core ID concept. MathGrrl
JemimaRacktouey, if you want a 'teleological' signature for life, a signature that signifies 'higher dimensional' origination for life that is over and above the finely tuned 3-Dimensional material constraints of this universe, I suggest this: notes: 4-Dimensional Quarter Power Scaling In Biology - video http://www.metacafe.com/w/5964041/ The predominance of quarter-power (4-D) scaling in biology Excerpt: Many fundamental characteristics of organisms scale with body size as power laws of the form: Y = Yo M^b, where Y is some characteristic such as metabolic rate, stride length or life span, Yo is a normalization constant, M is body mass and b is the allometric scaling exponent. A longstanding puzzle in biology is why the exponent b is usually some simple multiple of 1/4 (4-Dimensional scaling) rather than a multiple of 1/3, as would be expected from Euclidean (3-Dimensional) scaling. http://www.nceas.ucsb.edu/~drewa/pubs/savage_v_2004_f18_257.pdf “Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. Quarter-power scaling laws are perhaps as universal and as uniquely biological as the biochemical pathways of metabolism, the structure and function of the genetic code and the process of natural selection.,,, The conclusion here is inescapable, that the driving force for these invariant scaling laws cannot have been natural selection." Jerry Fodor and Massimo Piatelli-Palmarini, What Darwin Got Wrong (London: Profile Books, 2010), p. 78-79 https://uncommondescent.com/evolution/16037/#comment-369806 Though Jerry Fodor and Massimo Piatelli-Palmarini rightly find it inexplicable for 'random' Natural Selection to be the rational explanation for the scaling of the physiology, and anatomy, of living things to four-dimensional parameters, they do not seem to fully realize the implications this 'four dimensional scaling' of living things presents. This 4-D scaling is something we should rightly expect from a Intelligent Design perspective. This is because Intelligent Design holds that ‘higher dimensional transcendent information’ is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional 'expectation' for life, from a Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an 'emergent' property of the 3-D material realm. Earth’s crammed with heaven, And every common bush afire with God; But only he who sees, takes off his shoes, The rest sit round it and pluck blackberries. - Elizabeth Barrett Browning Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/journals.asp?iid=47 Quantum entanglement holds together life’s blueprint - 2010 Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. 
http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ Further evidence that quantum entanglement/information is found throughout entire protein structures: https://uncommondescent.com/intelligent-design/we-welcome-honest-exchanges-here/#comment-374898 It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology, for how can the quantum entanglement effect in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Yet it is also very interesting to note, in Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space. i.e. Theism has always postulated a 'eternal soul' for man that lives past the death of the body. Quantum no-hiding theorem experimentally confirmed for first time - March 2011 Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html JemimaRacktouey so what do you think? Pretty neat huh? Or will you just scoff at this as well even though it is such a powerful 'signature'? bornagain77
Joseph, Did Flew ever convert to Christianity? I thought he died a sort of non-religious theist. Or maybe I'm thinking of someone else. Collin
Jemima, the presence of a biological operating system is evidence against ID? Collin
JR:
If the “process” was teleological I think we’d see a bit more evidence of it.
How much do you need? Do you know what evidence is? JR:
Perhaps they don’t like it because it’s not supported by any evidence?
Yet there was enough evidence to convince long-time atheist Antony Flew (talk about bias), and the people who don't like ID need to suck it up, because it is their failure to produce positive evidence for their position that has allowed ID to persist. Thank you, you are a fine representative of the anti-ID position. Joseph
VJ, The Old Man is long gone, but there are hundreds of other, less publicized natural rock formations with similar patterns to choose from in New Hampshire. Joseph
vjtorley
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
Heads you win, heads you win, eh? Well, I would reply that your conclusion is not supported by the available evidence, i.e. the universe. No object observed in the universe so far, beyond Earth, shows any signs of life. If the "process" was teleological I think we'd see a bit more evidence of it. After all, the entire universe is empty of life despite teleological guidance? Not much teleological guidance going on there if you ask me. Perhaps it's local to our solar system? Or how do you explain that apparent contradiction - is the universe designed for life, but just one planet's worth? Seems like a bit of a waste of a universe to me. More likely the universe is designed for gas clouds and black holes than for us, if designed at all...
Darwinists don’t like this conclusion, as they want their theory to be non-teleological.
Perhaps they don't like it because it's not supported by any evidence? After all, when I said:
But, as I say, such biases were built in from the start.
Then you said:
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological.
But earlier you said
Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix.
So which is it? Nature either has a hidden bias or it does not. I'd call "teleological guidance" the ultimate "hidden bias". JemimaRacktouey
uoflcard (#51) By the way, I'd agree with the point of your parable, which is the opposite of aliens coming upon Mt. Rushmore, in that we're the alien explorers. The error arose because our dictionary of concepts was incomplete, leading us to err on the side of chance, not design. vjtorley
uoflcard (#51) Here's another one: The Old Man of the Mountain (The natural equivalent of Mt. Rushmore) http://www.epodunk.com/cgi-bin/genInfo.php?locIndex=30 vjtorley
Jemima Racktouey (#54) Thank you for your post and links. Concerning the evolution of life from non-living matter, you write:
But, as I say, such biases were built in from the start. The fact that you don’t appear to notice them there now is a testament to the power of evolution.
I would reply that any process with a bias to produce the specified information we find in living things must itself be teleological. See Professor William Dembski and Robert Marks II's paper, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information. Darwinists don't like this conclusion, as they want their theory to be non-teleological. If it's teleological then it still requires an Intelligent Designer. vjtorley
uoflcard (#50) Some examples that might help you: Eoliths (not genuine tools) http://en.wikipedia.org/wiki/Eolith Oldowan (the earliest recognizable tools) http://en.wikipedia.org/wiki/Oldowan Yonaguni, the Japanese Atlantis - or is it natural? http://news.nationalgeographic.com/news/2007/09/070919-sunken-city.html http://news.nationalgeographic.com/news/bigphotos/5467377.html See also: Alleged human tracks in Carboniferous rocks in Kentucky http://www.paleo.cc/paluxy/berea-ky.htm (Man-made carvings made recently by Native Americans, in all likelihood.) Food for thought. vjtorley
Markf (#9) I now have a little time to address the five questions you raised. Let's look at (1) and (2). You write:
1) The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of single event having outcome x is p, then the probability of at least one event having outcome x is not np . It is (1 – (1-p)^n). So the calculation 10^120.Phi_s(T).P(T|H) is wrong . The answer is still very small if p is small relative to n. But it does illustrate the lack of attention to detail and general sloppiness in some of the work on CSI. 2) There is some confusion as to whether Phi_s(T) includes all patterns that are "at least as simple as the observed pattern" or whether it is only those patterns that are "at least as simple as the observed pattern AND are at least as improbable". If we use the "at least as simple" criterion then some of the other patterns may be vastly more probable than the observed pattern. So we really have to use the "at least as simple AND are at least as improbable" criterion. However, there is no adequate justification for only using the patterns that are less probable.
In reply: (1) Dembski is not trying to calculate the probability of at least one event having outcome x. As I see it, the n serves as a multiplier, to give the expected number of events having outcome x (E=np), given the long history of the universe. That's why the 10^120 multiplier is used. (2) I'll try to make my point with a story. Imagine for argument's sake that you have a reputation for being something of a card sharp. (I have no idea whether you play - I only know strip jack, gin rummy and UNO off the top of my head, although I have played blackjack in Las Vegas. I only had $24 to gamble with, mind you - I was backpacking, and I had to budget. Anyway, I managed to visit 34 of states of the U.S.A. in just three months, courtesy of Greyhound buses and "Let's Go USA.") Anyway, you're playing poker, and you happen to bring up a royal flush right away. Your partner accuses you of cheating, citing the high specificity of the result: royal flush (describable in just two words). You reply by saying that "single pair" is just as verbally specific (two words) and that there are lots of two word-descriptions of card hands - some probable, some not - and that if you add them all up together, the chances of satisfying some two-word description is not all that low. Undaunted, your opponent points out that that's not relevant. All of these other hands are much more probable than a royal flush. But then your opponent relents a little. He allows you to multiply the probability of your getting a royal flush by the number of two-word descriptions of card hands (e.g. "Full house", "single pair") that are commonly in use. If you can demonstrate that this product is not a very low number, then he will continue to play cards with you. Why does your opponent do this? Because he is trying to take into account the fact that whereas the probability of a royal flush is low, it's not the only hand with that level of verbal specificity (two words). On the other hand, adding the probabilities of the various hands that can be specified in two words would be too generous to you. To get a good sense of whether you are cheating or not, it seems more reasonable to multiply the number of card hands that can be specified in two words by the probability of getting a royal flush, in order to determine whether the royal flush which you got was outrageously improbable. Putting it more formally: we're not trying to just calculate the probability of getting a royal flush, and we're not trying to calculate the probability of getting some card-hand that can be described in two words (e.g. royal flush, full house, single pair). Rather, we're trying to calculate a notional figure: the probability of getting A card-hand which is just as improbable as a royal flush AND just as verbally specific. Since the only other card-hands with the same verbal specificity are much more probable than the royal flush, we have to pretend (for a moment) that all these card-hands have the same probability as a royal flush, count them up and multiply the number of these hands by the probability of getting a royal flush. That, I think, is a fairer measure of whether you're cheating. So in answer to your question: Phi_s(T) does include all patterns that are "at least as simple as the observed pattern", even though "some of the other patterns may be vastly more probable than the observed pattern." But in order not to be overly generous, we don't just sum the probabilities of all the patterns with the same level of verbal simplicity. 
Rather, we multiply the number of patterns that are at least as simple as the observed pattern by the very low probability of the observed pattern; the sketch below puts some illustrative numbers on this.
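To put some rough numbers on points (1) and (2), here is a minimal sketch in Python; the single-event probability, the number of opportunities, and the count of twenty "two-word" hand types are all assumed, purely illustrative figures, not values from Dembski:

```python
import math
from math import comb

# Point (1): for tiny p, the expected count n*p and the probability of at
# least one occurrence, 1 - (1 - p)**n, are numerically almost identical.
p = 1e-30                # illustrative single-event probability
n = 1e10                 # illustrative number of opportunities
expected_count = n * p
at_least_once = -math.expm1(n * math.log1p(-p))   # stable form of 1 - (1-p)**n
print(expected_count, at_least_once)              # both ~1e-20

# Point (2): the royal flush story. There are 4 royal flushes among all
# C(52, 5) five-card hands.
p_royal = 4 / comb(52, 5)            # ~1.54e-6
n_two_word_patterns = 20             # assumed count of two-word hand descriptions
bound = n_two_word_patterns * p_royal
print(p_royal, bound)                # ~1.5e-6 and ~3.1e-5
```

The multiplied bound (about 3 x 10^-5 here) is more generous to the player than the bare royal-flush probability, but far less generous than summing the actual probabilities of every two-word hand, which is exactly the compromise described above. OK, let's go on to your objection (3).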
(3) When Dembski (and you) estimate Phi_s(T) you use a conceptual definition of T: "bidirectional rotary motor-driven propeller". This is not necessarily (in fact is almost certainly not) the same as the exact configuration of proteins. You do attempt to address this for the ATP case with a note in brackets. You say that any other configuration of proteins would be a lot more complex and therefore vastly more improbable. I am not a biochemist (are you?). I think you have to admit this is unproven.
I studied chemistry and physics for two years at university but not biology, so I can't give a definite answer to your question. Here's what I'd ask a biologist: assuming that there are other bidirectional rotary motor-driven propellers in Nature, (a) how many of them are there, and (b) how much bigger than a bacterial flagellum is the second smallest one? If the answer to (a) is "a half-dozen at the most", and the answer to (b) is "more than twice as big", I'd be inclined to neglect the other cases, as it would be much more difficult for them to arise by a non-foresighted "chance" process. All your objection shows is that P(T|H) is revisable, if we find a large number of other structures in Nature with the same function, and having a comparable probability of arising by "chance" as I've defined it. But I'd be the first one to admit that P(T|H) is revisable, and ditto for Chi. There's no such thing as absolute certitude in science. Next, you write:
4) The attempt to identify simple or simpler patterns through number of concepts is an absolute minefield. A concept is not the same as a word. For example, a "motor" can be broken down into many other concepts e.g. machine that converts other forms of energy into mechanical energy and so imparts motion.
You are quite right to say that a concept is not the same as a word, but wrong to infer that a word which is capable of being defined using several words is not basic. Any word can be defined in this way. The question is: which words are best learned holistically, rather than by breaking them down into conceptual parts? These words I'd consider to be epistemically basic. For human beings, the word "human" is surely epistemically basic, but of course a zoologist would take a paragraph to define it properly. Is "motor" basic? I'd say yes. The great physicist James Clerk Maxwell, was a very curious toddler. By the age of three, everything that moved, shone, or made a noise drew the same question: "What's the go o' that?" Although he didn't know the word "motor", he had a strong, deep-seated urge to find out what made things move. If I were trying to find the basic concepts of a language, I might try to find the smallest set of words that can be (practicably) used to define all the other words of the language. I believe some dictionaries published by Longman now use a list of 2,000 words for defining every other word. Actually, the number 2,000 sounds about right to me, because it's the same as the number of Japanese characters (kanji) that students are expected to be able to read, after 12 years of schooling. Of course, a few individuals can read as many as 10,000 of the more obscure kanji, but the standard kanji number 2,000, altogether. Personally I think that most young children would have no trouble understanding the four terms that Professor Dembski used to define a bacterial flagellum. Of course, "bidirectional" would be a new word to them, but they could pick it up immediately if you showed them something that could rotate clockwise and anti-clockwise. I'm sure of that. Finally, you write:
5) The definition of H is very flaky. You admit that you are not certain what Dembski means. So you adopt your own definition – "a process which does not require the input of information". But as we are currently using this formula to clarify what we mean by information this is circular. In one case you want to include the possibility of gene duplication in the chance hypothesis so you don't end up with the awkward result that gene duplication doubles the CSI. But once you admit that knowing about gene duplication radically affects the level of CSI you are open to the possibility that other unspecified or unknown events such as gene duplication can have enormous affects on the supposed CSI. In other words we cannot even make a rough estimate of CSI without having a good account of all possible natural processes.
In response to the charge of circularity: when I wrote the words "a process which does not require the input of information", I had in mind not CSI, but Professor Dembski's concept of active information, which he explains in his paper, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information (pages 13-14):
In such discussions, it helps to transform probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information I_omega as –log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space omega to locate the target T. We then define the exogenous information I_s as –log(q), which measures the difficulty of the alternative search S in locating the target T. And finally we define the active information I+ as the difference between the endogenous and exogenous information: I+ = I_omega – I_s = log(q/p). Active information therefore measures the information that must be added (hence the plus sign in I+) on top of a null search to raise an alternative search's probability of success by a factor of q/p.
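To make those definitions concrete, here is a minimal sketch in Python with assumed, purely illustrative success probabilities (they are not figures from Dembski's paper):

```python
import math

# Assumed probabilities of success, for illustration only:
p = 1e-12    # blind (null) search
q = 1e-3     # alternative search

I_omega = -math.log2(p)       # endogenous information, ~39.9 bits
I_s = -math.log2(q)           # exogenous information, ~10.0 bits
I_plus = I_omega - I_s        # active information, log2(q/p), ~29.9 bits

print(I_omega, I_s, I_plus)
```

On these made-up numbers, roughly 30 bits of active information would have to be supplied to lift the search's probability of success from p to q.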
Dembski shows that contrary to what Darwinians often maintain about their own theory, NDE is deeply teleological, in that it requires active information to make it work. His parable of the sailor on page 31 is worth a read. You also argue that the correction I make for gene duplication (a process that appears at first glance to raise CSI) leaves me "open to the possibility that other unspecified or unknown events such as gene duplication can have enormous affects on the supposed CSI." Yes, that's always possible. But there are different senses of "possible." Theoretically, someone could demonstrate that life is a lot less specific than we all imagined - but on a practical level, the demonstration would have to simplify the specificity of life by so many orders of magnitude that I don't lose any sleep over the prospect. And from the theoretical possibility that my estimates of the CSI in a bacterial flagellum may be out by several orders of magnitude (e.g. 0.2126 instead of 2126), it simply does not follow that "we cannot even make a rough estimate of CSI without having a good account of all possible natural processes" (italics mine), as you claim. vjtorley
vjtorley
In other words, Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix.
There are obviously some biases, hidden or not. Currently apparent or not. For example, some reactions needed for "DNA-like" substances work better at a temperature found on the earth. Many factors would change the available sequence path for how and of what specific makeup a "DNA-like" substance could come about. See the recent "Nasa says alien life on earth" story for instance.
If there were, these biases would serve to reduce the Shannon information content in DNA and proteins, leading to a simple redundant, repetitive order, as opposed to complexity, which is required for living things.
But, as I say, such biases were built in from the start. The fact that you don't appear to notice them there now is a testament to the power of evolution. DNA appears to operate in a space, a "biological operating system", particularly suited to it. Replication is largely error-free; there are no "biases", as you say, that reduce the information content in unpredictable (from the DNA's point of view) ways. DNA can do its thing largely uninterrupted.
as opposed to complexity, which is required for living things.
It is apparently so. And yet I'm unconvinced. If nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix, then perhaps "hidden biases" is the wrong place to be looking. NASA did a great workshop on the origin of life (information). http://astrobiology.nasa.gov/nai/ool-www/program/ Have you checked it out? Some fantastic materials there. And they've been looking specifically at prebiotic chemistry, including the now somewhat notorious "Alternative Biochemistry and Arsenic, or Life as We Might Not Expect It", but it's all good stuff. So for me, just because there are no such biases, as you say, does not automatically mean that the "information" must have been designed in. That is too easy, too shallow and simple, and I don't see how the one follows from the other. You don't say it in that post, but you might as well have. JemimaRacktouey
But to save the royal flush analogy, I would just say that the royal flush is the key while the person (having knowledge of poker) is the lock. Collin
A royal flush obviously has no meaning without humans. But it is a good analogy. What might be a yet better analogy is a key and lock system. If you discover a key and want to know if it is designed and then you discover a lock that it fits perfectly into, then you can infer design. DNA seems to be the key and proteins (and other things) the lock. Collin
The other thing, besides the 10^120 events (I'm still assuming that's what that is), that makes this conservative is that if you don't know of a function that an object actually has, you automatically assume it to be less complex than it actually is. Let me use the opposite of the Mt. Rushmore example. Let's say we're the alien explorers. We stumble upon a planet which we know nothing of, and we find a mountain face with some curious features. Perhaps we recognize two objects that look something like eyes, and some kind of potential orifice that might be a mouth of some type, but that's all we recognize. It is pretty eroded, so our Chi calculation ends up suggesting that it could have developed by chance. But after some time studying the planet and artifacts found on it, we learn about some intelligent creatures that lived there. We find something like a medical record that describes some small feature on the "face" of these creatures that is nothing like we've seen on Earth, like a symmetrical lobe that senses temperature accurately. We go back to Mt. Alien and sure enough, there is a weird little rock jutting out of the face of the mountain in the same location as described in the medical texts. We recalculate Chi and it now triumphantly declares Design. The point is that originally we were misinformed, so our error was on the side of Chance, not Design. This should not be an arguing point for ID critics regarding this calculation. The only way I see that this calculation could err on the side of Design is P(T|H), which is simply a difficult probability to estimate in biological systems. Increased knowledge could raise this probability, shifting Chi towards the Chance side of this spectrum. FYI - This was mainly me thinking aloud, so I welcome corrections. uoflcard
Are there any known examples whose calculated specified complexity Chi
Chi=-log2[10^120.Phi_s(T).P(T|H)]
is around 1? I tend to believe that this formula is severely conservative in favor of Darwinism, given the staggering 10^120 events assumption (That is what that number is, right? The max number of "events" in the history of the Universe?). I'm just curious as to what events have a Chi that comes in around 1, and then what Chi is for events that we could decide are somewhat on the "border" of our intuitions about their origins, regarding the Explanatory Filter. So, some obvious selections:
Law: Motion of the Earth around the Sun
Chance: Order of sand on a beach
Design: iPad 2.0
But what about something on the border between chance and design, intuitively? I'm having trouble thinking of something, so feel free to suggest something... Maybe a severely eroded arrowhead? I'm just curious as to where something like that would come out from the Chi equation. I'm guessing less than one, which would, to me, bode well for the conservativeness of this calculation. uoflcard
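On the parenthetical question above: the 10^120 is Dembski's cap on the number of opportunities, taken from Seth Lloyd's estimate of the maximum number of elementary bit operations in the history of the observable universe. A minimal sketch of the resulting arithmetic, using the formula as quoted:

```python
import math

# The universal-bound factor in Chi = -log2(10^120 * Phi_s(T) * P(T|H)):
bound_bits = 120 * math.log2(10)
print(bound_bits)                 # ~398.6 bits

# Chi > 1 requires 10^120 * Phi_s(T) * P(T|H) < 1/2, i.e.
# Phi_s(T) * P(T|H) < 10^-120 / 2:
print(10.0 ** -120 / 2)           # 5e-121
print(bound_bits + 1)             # ~399.6 bits of combined improbability
```

So a pattern sits at the Chi = 1 borderline when its specificity, -log2[Phi_s(T).P(T|H)], is right around 400 bits, which is the same roughly-400-bit gap between Chi and the specificity sigma noted elsewhere in the thread.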
vjtorley: Excellent throughout. PaV
Jemima Racktouey (#7) Thank you for your post. With regard to the calculation of P(T|H) you ask:
Does your reference to "what scientists currently know" refer to an ID scientist who presumably does not believe that Evolution can create the life we see about us or to a non-ID scientist who does understand that Evolution can create such life?
It refers to scientific knowledge acquired on the basis of observations either of nature or of experiments in laboratories. Personal beliefs don't come into it. You also write:
It seems to me if you are calculating probabilities based on the spontaneous formation of (for example) a given protein (tornado in a junkyard) you’ll get a different answer to assuming it evolved.
Yes. But as my new post shows, I'm willing to count as a chance hypothesis any process that lacks foresight of long-term results and that does not require the input of information from outside - either at the beginning (front-loading) or during the process itself (manipulation). And Professor Dembski in his article on specification expressly includes Darwinian evolution as a chance hypothesis - even though natural selection is, as we all know, non-random. So "chance" as Dembski uses the term does not mean "totally random." You continue:
And my question is what are the options available to that process? Random iteration through the total available possibility space or a gradual step by step process?
Both. Please see my remarks above. You add:
Which one makes a big difference. And don't forget that no actual biologist claims that the components of cells came together randomly and so the "tornado in a junkyard" calculations so beloved of Kairosfocus and others are simply irrelevant and I'd like to think they were not deliberately misleading but they've been corrected so many times by now its a reasonable assumption.
There are some situations where "tornado in a junkyard" calculations are relevant, and that's where no unintelligent non-random process has been shown to achieve better results. A good example of this is protein formation. After showing in chapter 9 of Signature in the Cell that the chance formation of a single protein is mathematically out of the question, he then goes on to consider other alternatives - e.g. biochemical predestination - before rejecting them on empirical grounds. Biochemical predestination can be rejected for DNA as well, on the same grounds:
In sum, two features of DNA ensure that "self-organizing" bonding affinities cannot explain the specific arrangement of nucleotides in the molecule: (1) there are no bonds between bases along the information-bearing axis of the molecule and (2) there are no differential affinities between the backbone and the specific bases that could account for variations in the sequence. (p. 244)
In other words, Nature contains no hidden biases that help explain the sequence we see in a protein, or for that matter in a DNA double helix. If there were, these biases would serve to reduce the Shannon information content in DNA and proteins, leading to a simple redundant, repetitive order, as opposed to complexity, which is required for living things. I hope that helps. vjtorley
markf: [41] Can anyone on this discussion explain to me what Joseph is trying to say!!! Look at Dembski's Specification paper on "prespecifications". Section 5, I believe. PaV
Wm. J. Murray [24]: How about challenging Darwinists to provide the math that demonstrates non-intelligent processes to be up to the task that they are claimed as fact to be capable of? That's exactly right. In fact, Motoo Kimura, one of, if not the, brightest and best of population geneticists, came up with his Neutral Theory because the level of protein variation found through gel electrophoresis during the sixties was vastly too high to be accounted for by strictly Darwinian processes. And it's gotten worse ever since. And with whole-genome analysis, the level of variation within species themselves (intra-species variation) is staggering, and completely unexplainable using supposed Darwinian mechanisms. PaV
The book is "Probability's Nature and Nature's Probbility"- it exposes the fallacy of Mark Frank's position. Joseph
markf (#36) As the foregoing article shows, I have changed my mind about whether gene duplication can increase CSI. I don't think it can. Please see part (ii) of my article. I originally thought that P(T|H) was lower for a genome with a duplicated gene, but that's because I was mentally picturing a longer string and reasoning that the probability of all the base characters arising randomly in the longer string is less than the probability of the characters arising randomly in a shorter string. But that's not how gene duplication works. See my remarks above. Here are three helpful articles on gene duplication that might be of use to you and Mathgrrl: http://www.evolutionnews.org/2009/10/jonathan_wells_hits_an_evoluti026791.html http://www.discovery.org/a/4278 http://www.discovery.org/a/14251 vjtorley
I can splain it ferya mark- You sed:
This doesn’t conflict with anything I wrote. I say of a Royal Flush “is in some sense special”. This would be even more true of 5 Royal Flushes. I assume that you understand that 5 Royal Flushes is no more improbable than any other sequence of 65 cards? This is after all the whole reason for Dembski’s work on specification. The whole discussion is over why, in that case, do we find a Royal Flush (or 5 Royal Flushes) special.
I responded with: And if I were to receive the same 5 cards for 5 hands in a row- whatever those cards are- I would have a problem with that. Dembski has no problem with getting one of something. The problem would come in if A) the dealer called the hands before they were dealt and the dealer was right and B) You keep getting the same thing over and over. So again getting one royal flush dealt on the first hand isn't so surprising. Calling it and then dealing it to yourself would be questionable and getting more than one royal flush in a row would also be questionable. Does that flow any better for you or is there something specific you don't get? Any hand is highly improbable- I agree but when playing cards the probability you will get a hand dealt to you is ONE. It is unavoidable. But if I were to get the same cards dealt to me for five hands in a row I would suspect foul play. If one person gets a royal flush on the first deal of the night, that is not an issue. But getting 5 in a row would be. The odds of getting a hand are ONE. The odds of getting a specific hand are much lower. The odds of getting that same hand again are even lower. Joseph
markf:
I assume that you understand that 5 Royal Flushes is no more improbable than any other sequence of 65 cards?
Why 65? Getting the same cards dealt to you for five hands in a row is more improbable than getting any other combination of 5 cards dealt to you 5 hands in a row. The same goes for one person hitting a 5-number lottery 5 times in a row. If that happened people would question the system. Joseph
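For concreteness, a minimal sketch of the card probabilities being argued over (assuming each hand is five cards from a freshly shuffled standard deck):

```python
from math import comb, log2

hands = comb(52, 5)                    # 2,598,960 distinct five-card hands

# Probability that hands 2 through 5 each repeat whatever hand 1 was:
p_same_five_times = (1 / hands) ** 4
print(p_same_five_times)               # ~2.2e-26

# For comparison, five royal flushes in a row (4 royal flushes exist):
p_five_royals = (4 / hands) ** 5
print(p_five_royals)                   # ~8.6e-30

print(-log2(p_same_five_times), -log2(p_five_royals))   # ~85 and ~96.5 bits
```

Both events are staggeringly improbable by everyday standards, yet on the numbers alone neither reaches the 500-bit universal probability bound discussed elsewhere in the thread.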
#39 Joseph Can anyone on this discussion explain to me what Joseph is trying to say!!! markf
markf:
This must be true because as soon as you discover that a non-directed process can easily generate a pattern you drastically reduce the level of CSI. Gene duplication is a perfect example.
Sorry markf, but invoking gene duplications in the origin of life is just plain misleading. Also, there isn't any evidence that says gene duplications are non-directed. You would have to demonstrate that the OoL was undirected- which is one reason CSI pertains to origins, just as Dembski said. Joseph
Mark Frank, And if I were to receive the same 5 cards for 5 hands in a row- whatever those cards are- I would have a problem with that. Dembski has no problem with getting one of something. The problem would come in if A) the dealer called the hands before they were dealt and the dealer was right and B) You keep getting the same thing over and over. So again getting one royal flush dealt on the first hand isn't so surprising. Calling it and then dealing it to yourself would be questionable and getting more than one royal flush in a row would also be questionable. Joseph
AHH but markf, contrary to your claim, there is a 'specialness' to a royal flush (a specific 'optimal' sequence) in biology, as is demonstrated here: Common Design in Bat and Whale Echolocation Genes? "The natural world is full of examples of species that have evolved similar characteristics independently, such as the tusks of elephants and walruses," said Stephen Rossiter of the University of London, an author on one of the studies. "However, it is generally assumed that most of these so-called convergent traits have arisen by different genes or different mutations. Our study shows that a complex trait -- echolocation -- has in fact evolved by identical genetic changes in bats and dolphins."[...]"We were surprised by the strength of support for convergence between these two groups of mammals and, related to this, by the sheer number of convergent changes in the coding DNA that we found," Rossiter said http://www.evolutionnews.org/2011/01/common_design_in_bat_and_whale042291.html As well markf, there is no chemical reason for why the sequences in DNA should be in any particular order, but to drive this point home, Darwinism is shown to be 'historically contingent'; Lenski's Citrate E-Coli - Disproof of Convergent Evolution - Fazale Rana - video (the disproof of convergence starts at the 2:45 minute mark of the video) http://www.metacafe.com/watch/4564682 Thus markf, you go to Vegas and explain the reason why you always get royal flushes right when you need them! Then say hello to Venny in the back room! :) Markf, since you're a 'poker playing' man, how much do you want to bet that even a single beneficial mutation can become fixed in the hypothesized whale lineage? Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.metacafe.com/watch/4165203 further note: Assessing the NCSE’s Citation Bluffs on the Evolution of New Genetic Information - Feb. 2010 http://www.evolutionnews.org/2010/02/assessing_the_ncses_citation_b.html How to Play the Gene Evolution Game - Casey Luskin - Feb. 2010 http://www.evolutionnews.org/2010/02/how_to_play_the_gene_evolution.html bornagain77
Mathgrrl (#17) Thank you for your post. Referring to my earlier comment:
The answer is that while the CSI of a complex system is calculable, it is not computable, even given a complete physical knowledge of the system.
... you ask:
Could you please clarify what you mean by this statement? Is it or is it not possible to measure, with some degree of precision, the CSI present in a particular artifact?
In answer to your second question: Yes, it is. In my post I gave the examples of Mt. Rushmore and the discovery of a monolith on the moon. In answer to your first question: as I use the terms, "calculable" means "capable of being assigned a specific numeric value on the basis of a mathematical formula whose terms have a definite meaning that everyone can agree on," whereas "computable" means "calculable on the basis of a suitable physical description alone." As I explained in my post, Kolmogorov complexity is semiotic rather than physical. I hope that helps. vjtorley
#34 vjtorley "No undirected process will demonstrate the capacity to generate 500 bits of new information starting from a non-biological source." This must be true, because as soon as you discover that a non-directed process can easily generate a pattern you drastically reduce the level of CSI. Gene duplication is a perfect example. markf
#31 Joseph "What happens if every hand the computer deals you is a Royal Flush, or maybe just the first 5 hands are all a Royal Flush? How about you never get anything worth betting- never- no pair of anything, not even an Ace high? I would say in either case there should be cause for concern." This doesn't conflict with anything I wrote. I say that a Royal Flush "is in some sense special". This would be even more true of 5 Royal Flushes. I assume that you understand that 5 Royal Flushes is no more improbable than any other sequence of 65 cards? This is, after all, the whole reason for Dembski's work on specification. The whole discussion is over why, in that case, we find a Royal Flush (or 5 Royal Flushes) special. markf
Mathgrrl (#16) I'd like to amend your proposed eighth criterion:
viii) It must be demonstrated that CSI is a reliable indicator of the involvement of intelligent agency.
First of all, everything in the world has CSI; the question is how much. In his paper, Dembski argues that when Chi, the CSI of a pattern T, exceeds 1, then we have a reliable indicator of the involvement of intelligent agency in the production of the pattern. So I would amend (viii) to read:
viii) It must be demonstrated that a CSI of greater than 1 is a reliable indicator of the involvement of intelligent agency.
Second, the demonstration already exists: it is an empirical one. The CSI Chi of a system is a number which is 400 or so bits less than what Professor Dembski defines as the specificity sigma, which is -log2[Phi_s(T).P(T|H)]. Nowhere in nature has there ever been a case of an unintelligent cause generating anything with a specificity in excess of 400 bits. This is a falsifiable statement; but it has never been falsified experimentally. You continue:
This requirement is as essential as your first three. The most straightforward way to meet it would be to calculate CSI for systems known to be created by intelligent agents and those known to be of non-intelligent provenance.
By all means do. I might mention that on page 496 of his book, Signature in the Cell (HarperOne, New York, 2009), Dr. Stephen Meyer makes the following falsifiable ID-inspired prediction:
No undirected process will demonstrate the capacity to generate 500 bits of new information starting from a non-biological source.
vjtorley
Paul, Why don't we make it super easy and assume that all of the 747's mechanical parts are in a large container and a giant shakes it until it forms into an airplane? Maybe someone should do an ev type program along those lines to see how probable or improbable that is. Collin
Of note to my previous post @ 30: it is very interesting to point out that quantum computation/information is found in molecular biology on a massive scale, while researchers are having such an extremely difficult time achieving even the first steps of quantum computation, even though the payoff, and investment, is huge! Scientists take another step towards quantum computing using flawed diamonds Excerpt: Scientists have for years been intrigued by the idea of a quantum computer,,, Such a machine would dwarf the capabilities of modern computers,,, http://www.physorg.com/news/2011-03-scientists-quantum-flawed-diamonds.html bornagain77
My apologies Mark Frank, but your Talk Reason article is laughable. What happens if every hand the computer deals you is a Royal Flush, or maybe just the first 5 hands are all a Royal Flush? How about you never get anything worth betting- never- no pair of anything, not even an Ace high? I would say in either case there should be cause for concern. And I googled MathGrrl but didn't find anything impressive. She could impress us by reading "No Free Lunch". Joseph
MathGrrl, I admire your tenacity in trying to get any leeway you can for showing that material processes may possibly be able to create functional information, even if you have to use Evolutionary Algorithms that are jerry-rigged to converge on the solution you so desperately want! :) But another question pops up from this recent verification of 'Conservation of Quantum Information': Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html So MathGrrl, with 'conservation of quantum information' now verified, exactly how does materialistic evolution propose to explain the novel 'creation' of the quantum information that we find at the most foundational levels of molecular biology? Quantum Information In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ Or MathGrrl, does Darwinism exempt itself from falsification in these matters? The Failure Of Local Realism/Materialism by Quantum Entanglement - Alain Aspect - video http://www.metacafe.com/w/4744145 MathGrrl, please tell me exactly why Darwinism is above falsification from Quantum Mechanics? bornagain77
#25 QuiteID "But where, precisely, does Dr. Dembski's 'Specification' paper go wrong? I think people here might understand your challenge more if you pointed out the places where it's particularly confusing or at odds with what you think. (BTW, that might also illustrate your mathy credentials.)" Mathgrrl has plenty on her plate, so let me see if I can save her some effort. Some years ago I wrote a criticism of the paper, which is here. For a summary of some (but not all) of the problems see my comment #9 above. A few minutes of Googling should satisfy you of Mathgrrl's maths credentials. markf
Another question is: if there is no method that acceptably demonstrates that a phenomenon is the result of intelligent agency, how can it be claimed as a matter of scientific fact that something is not the result of intelligent agency? If one cannot calculate or scientifically describe X, one certainly cannot calculate or scientifically describe not-X. If intelligence cannot be claimed, neither can chance (non-intelligence) be claimed to be factually responsible. William J. Murray
QuiteID- You are welcome. It may need to be tweaked- CJYman uses "can be processed by an information processor" where I use meaning/ function:
"If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information."--CJYman
Joseph
Perhaps we need a thread about why some people have convulsions whenever the word "information" is discussed. In this world in which information technology rules and communication is commonplace, you would think that everyone who doesn't live a secluded existence would understand what "information" is. Even those people who are confused by Shannon should understand that what he was talking about wasn't "information" as commonly used. However, we can use his methodology for measuring specified information- specified information is Shannon information with meaning/function. It should be applicable. So we have the math and the definition, which also has a math content (UPB). So why the convulsions? Joseph
MathGrrl, you may have already done this, so forgive me if this is a dumb question. But where, precisely, does Dr. Dembski's "Specification" paper go wrong? I think people here might understand your challenge more if you pointed out the places where it's particularly confusing or at odds with what you think. (BTW, that might also illustrate your mathy credentials.) QuiteID
Darwinists demand calculable and computational models for proposed ID models, yet happily provide none to describe the creative functional capacity of random mutation processes or the functionally-relevant sorting power of natural selection. Where is the probability analysis, based upon real-world, computational mutation and sorting values, that predicts Darwinian mechanisms to be capable of generating what they are claimed to have generated as a matter of scientific fact? IOW, they get to assume chance and non-intelligence, claim it as a scientific fact without providing one whit of evidence that it is so, then shift the burden and demand that ID theorists "prove them wrong" according to a standard they themselves cannot even approach. How about challenging Darwinists to provide the math that demonstrates non-intelligent processes to be up to the task that they are claimed as fact to be capable of? William J. Murray
Joseph (21), that's the shortest and clearest definition of CSI I've ever read. Thanks! I'm going to work with that. QuiteID
That said, biological function is specified information. So 500 bits (or more) worth of biological function is CSI. To refute the claim that CSI is an indicator of a designing agency, just demonstrate that 500 bits of biological functionality can arise via necessity and chance. Joseph
MathGrrl:
You have yet to define CSI rigorously, so that claim is not even wrong.
It has been defined rigorously. That you can act all obtuse about it doesn't faze me any. Shannon rigorously defined information. Specified information is Shannon information with meaning/function. Complex Specified Information is SI of 500 or more bits. It is that simple. Now stop whining and read "No Free Lunch"- your "criticisms" of CSI are hollow, meaning devoid of content and without foundation. Joseph
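For readers wondering what the 500-bit figure amounts to, here is a minimal sketch (Python; the 300-base gene length is purely illustrative, not taken from any comment above) of how such a bit count is usually tallied and how it compares with the universal probability bound:

    from math import log2

    # 500 bits corresponds to a probability of roughly 1 in 10^150,
    # the order of Dembski's universal probability bound
    print(2.0 ** -500)          # ~3.05e-151

    # A naive Shannon-style tally: a nucleotide carries at most 2 bits
    # (4 equiprobable bases), so a hypothetical 300-base coding region
    # could carry at most 600 bits
    gene_length = 300
    bits = gene_length * log2(4)
    print(bits, bits >= 500)    # 600.0 True

This is only the raw storage-capacity count; whether those bits are "specified" is, of course, the point under dispute in the thread.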
MathGrrl:
Is it or is it not possible to measure, with some degree of precision, the CSI present in a particular artifact?
1- It would depend on what the artifact is. 2- I don't believe that was what the concept was designed to do, but that doesn't mean it can't be done. Joseph
Joseph,
every time we have observed CSI and knew the cause it has always been via some designing agency
You have yet to define CSI rigorously, so that claim is not even wrong. MathGrrl
MathGrrl:
viii) It must be demonstrated that CSI is a reliable indicator of the involvement of intelligent agency
Ummm, that is the whole point- meaning every time we have observed CSI and knew the cause, it has always been via some designing agency. We don't have any observations of, or experience with, necessity and chance producing CSI. MathGrrl:
The most straightforward way to meet it would be to calculate CSI for systems known to be created by intelligent agents and those known to be of non-intelligent provenance.
What do you have to represent the known non-intelligent side? I can point to thousands of examples from known designing agencies. Joseph
vjtorley,
The answer is that while the CSI of a complex system is calculable, it is not computable, even given a complete physical knowledge of the system.
Could you please clarify what you mean by this statement? Is it or is it not possible to measure, with some degree of precision, the CSI present in a particular artifact? Your first three criteria suggest that it is, while this statement suggests that it is not. MathGrrl
vjtorley, There is one more criterion that I think you need to add: viii) It must be demonstrated that CSI is a reliable indicator of the involvement of intelligent agency. This requirement is as essential as your first three. The most straightforward way to meet it would be to calculate CSI for systems known to be created by intelligent agents and those known to be of non-intelligent provenance. MathGrrl
This post has helped to clarify in my mind issues that I raised on the previous mathgrrl-related thread posted by O'Leary. Essentially, I was asking in that thread for a "CSI scanner". And, apparently... there isn't one. Even in principle. My feelings are essentially those of markf [9], mathgrrl or even Jemima. CSI needs to be spelt out (much) more clearly. I like the idea of simple algorithmic description combined with improbability serving as an indicator of design. It's a nice idea... but it doesn't work. Actually, I will contest one point. I believe that objections of the form "no [real] biologist believes that DNA came about like a tornado etc. etc." miss the point. Effectively many biologists do believe exactly this, and when pressed, seek to avoid the charge by falling back on word-pictures, or complaining that their opponents are asking the wrong questions, etc., much as CSI advocates do here. equinoxe
JemimaRacktouey, you state: 'but the non-ID scientist "does understand that Evolution can create such life".' Please do tell how he 'understands' this, since he has no compelling evidence whatsoever that purely material processes randomly created life.
To get a range on the enormous challenges involved in bridging the gaping chasm between non-life and life, consider the following:
"The difference between a mixture of simple chemicals and a bacterium is much more profound than the gulf between a bacterium and an elephant." (Dr. Robert Shapiro, Professor Emeritus of Chemistry, NYU)
Ilya Prigogine (Nobel Prize in Chemistry, 1977) once wrote, "let us have no illusions... [we] are unable to grasp the extreme complexity of the simplest of organisms."
The DNA of a bacterium (the simplest type of living organism known to have existed) contains an encyclopedic amount of pure digitally encoded information that directs the highly sophisticated molecular machinery within the cell membrane. "The machine code of the genes is uncannily computer-like... DNA characters are copied with an accuracy that rivals anything that modern engineers can do... DNA messages are pure digital code."
JemimaRacktouey, if you do have compelling evidence that this 'quantum leap' from non-life to life occurred by purely material processes, please present it and I will listen. At least for myself, I can say that I am willing to follow the evidence wherever it may lead, but can you truthfully look yourself in the mirror and say the same thing, JemimaRacktouey? For the evidence certainly does not lead to a purely 'naturalistic' origin for life! In fact the evidence, as it now sits, shows that a purely materialistic/naturalistic origin of life is impossible from first principles of science and reason:
Quantum entanglement holds together life's blueprint - 2010 Excerpt: "If you didn't have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA," says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/
The Failure Of Local Realism/Materialism by Quantum Entanglement - Alain Aspect - video http://www.metacafe.com/w/4744145
The falsification of local realism (materialism) was recently greatly strengthened:
Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html
Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009 Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables. http://www.sciencedaily.com/releases/2009/07/090722142824.htm
JemimaRacktouey, please tell me how purely material processes can explain the quantum entanglement of the DNA molecule when quantum entanglement falsified local realism (reductive materialism) in the first place? You cannot explain an effect by a cause that was falsified by the effect in the first place!
JemimaRacktouey, if you are following the evidence, you are forced to postulate a 'transcendent' cause that is sufficient to explain the entanglement effect! bornagain77
Could anyone supply a reason, not requiring higher math, for using a semiotic description [Phi_s(T)]? It would seem that (for the very same item being described) this value could vary greatly based on whether the description was done in English, Spanish, Swahili, Chinese or some other language. I do see how, within a single language, more complex objects would need longer descriptions than simpler ones. SteveGoss
And I will say it again- evolutionists need not worry about CSI. All they have to do is step up and actually produce positive evidence for their position, and CSI and ID will fade away. Evos shouldn't even be allowed to comment on it, as they have had their opportunity to produce and have failed miserably. Now they should step aside and stop being pimples on the arse of progress. Don't get me wrong- criticism is good- but rock throwing, which is all "they" have, isn't. So how about it, evos? Can you produce some methodology that your position uses, so that we can compare it to CSI, or is your intellectual cowardice preventing you from actually putting up? Joseph
JR:
Does your reference to “what scientists currently know” refer to an ID scientist who presumably does not believe that Evolution can create the life we see about us or to a non-ID scientist who does understand that Evolution can create such life?
ID is not anti-evolution, so you have a huge problem right off the bat. And there isn't any evidence that non-living matter and energy can give rise to living organisms via blind, undirected chemical processes. IOW you are just plain wrong, but that is what happens when one tries to argue from ignorance. JR:
And don’t forget that no actual biologist claims that the components of cells came together randomly
By definition biologists deal with living organisms, so I doubt they have much to say on the OoL (beyond speculation). That said, if it wasn't by chance, i.e. random, then what else is there? Natural selection comes into play only once there is a living organism. Joseph
JemimaRacktouey (#7), Your prejudice is showing. You state that an ID scientist (good to know that they exist) "presumably does not believe that Evolution can create the life we see about us", but the non-ID scientist "does understand that Evolution can create such life". Why not "does believe"? Your prejudice may turn out to be correct, but you really don't know which understanding will turn out to be correct. You say,
It seems to me if you are calculating probabilities based on the spontaneous formation of (for example) a given protein (tornado in a junkyard) you’ll get a different answer to assuming it evolved.
You're right that one gets a different answer if one assumes it evolved. But that is begging the question. A more legitimate different answer would be after demonstrating a probabilistically plausible way that it could evolve. For example, there is the virus cited by Behe in his 2010 paper in the Quarterly Review of Biology, "Experimental evolution, loss-of-function mutations, and "the first rule of adaptive evolution."" There, a virus had 4 RNA bases (with sugars and phosphates) removed, which caused its fitness to be drastically reduced. The virus found several ways to partially recover, one of which was to replace the four missing bases with four bases which were duplicates of nearby bases, then changing them gradually back to almost the wild type (and presumably with one mutation to the wild type itself). Each change improved the fitness of the virus, and every change but the first was a reasonable point mutation. That's a probabilistically plausible pathway. But the fact of the matter is that such demonstrations are extremely rare, and do not presently exist for whole proteins. That's why James Shapiro said, "There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations." (National Review, Sept. 16, 1996). You say,
And my question is what are the options available to that process? Random iteration through the total available possibility space or a gradual step by step process?
I can't speak for Dr. Torley, but my answer would be, a gradual step-by-step process is definitely allowed, but not an imaginary process where the probabilities are assigned to fit a theory instead of being either experimentally determined, or at least determined by a mathematically defensible process. You say,
Which one makes a big difference. And don’t forget that no actual biologist claims that the components of cells came together randomly and so the “tornado in a junkyard” calculations so beloved of Kairosfocus and others are simply irrelevant and I’d like to think they were not deliberately misleading but they’ve been corrected so many times by now it’s a reasonable assumption.
Your complaint about Kairosfocus is misplaced until you can show that those explanations are not just wishful speculations. Does it really significantly improve the odds of a 747 being formed if instead of a tornado, we have a series of small landslides or creek floodings in the junkyard? Paul Giem
vj
I agree that (i) to (iii) are reasonable and (iv) to (vii) not required. I suspect that Mathgrrl would also agree on consideration. However, I disagree that (i) has been done satisfactorily for real-life systems. I have written a response, but it is far too long for a comment, so I have put it on my blog and will summarise it here.
For me the main benefit of Mathgrrl's challenge was to clarify what is meant by CSI. I don't think any reasonable person can dispute there is some confusion, even amongst the ID community, about what it means. Just look at the number of conflicting comments following her challenge. And indeed your attempts to estimate the CSI for real situations have shown up a number of ways in which the concept is unclear. When you did the calculation for the bacterial flagellum on the previous thread I made five objections, which you kindly recognised as substantial. So I am going to present a revised list:
1) The formula Chi=-log2[10^120.Phi_s(T).P(T|H)] contains a rather basic error. If you have n independent events and the probability of a single event having outcome x is p, then the probability of at least one event having outcome x is not np; it is 1 - (1-p)^n. So the calculation 10^120.Phi_s(T).P(T|H) is wrong. The answer is still very small if p is small relative to n, but it does illustrate the lack of attention to detail and general sloppiness in some of the work on CSI. (A short numerical check follows this comment.)
2) There is some confusion as to whether Phi_s(T) includes all patterns that are "at least as simple as the observed pattern" or only those patterns that are "at least as simple as the observed pattern AND at least as improbable". If we use the "at least as simple" criterion then some of the other patterns may be vastly more probable than the observed pattern, so we really have to use the "at least as simple AND at least as improbable" criterion. However, there is no adequate justification for using only the patterns that are less probable.
3) When Dembski (and you) estimate Phi_s(T) you use a conceptual definition of T: "bidirectional rotary motor-driven propeller". This is not necessarily (in fact is almost certainly not) the same as the exact configuration of proteins. You do attempt to address this for the ATP case with a note in brackets. You say that any other configuration of proteins would be a lot more complex and therefore vastly more improbable. I am not a biochemist (are you?); I think you have to admit this is unproven.
4) The attempt to identify simple or simpler patterns through the number of concepts is an absolute minefield. A concept is not the same as a word. For example, a "motor" can be broken down into many other concepts, e.g. a machine that converts other forms of energy into mechanical energy and so imparts motion.
5) The definition of H is very flaky. You admit that you are not certain what Dembski means, so you adopt your own definition - "a process which does not require the input of information". But as we are currently using this formula to clarify what we mean by information, this is circular. In one case you want to include the possibility of gene duplication in the chance hypothesis so you don't end up with the awkward result that gene duplication doubles the CSI. But once you admit that knowing about gene duplication radically affects the level of CSI, you are open to the possibility that other unspecified or unknown events such as gene duplication can have enormous effects on the supposed CSI.
In other words, we cannot even make a rough estimate of CSI without having a good account of all possible natural processes.
Yours Mark markf
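As a numerical check of markf's first point, here is a minimal sketch (Python; the values of p are illustrative, with n set to the 10^120 probabilistic resources used in the Chi formula):

    from math import expm1, log1p

    n = 10**120    # the 10^120 factor treated as a count of independent trials

    for p in [1e-150, 1e-121, 1e-119]:
        shortcut = n * p                    # the n*p product used in the formula
        exact = -expm1(n * log1p(-p))       # 1 - (1-p)^n, computed stably
        print(p, shortcut, exact)

For very small p the shortcut and the exact value agree (both are about 1e-30 in the first row), but once n*p approaches or exceeds 1 the shortcut stops being a probability at all (the last row gives 10.0 versus roughly 0.99995), which is the substance of the objection.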
vjtorley, The UD FAQ https://uncommondescent.com/faq/#csiqty has this to say about CSI:
"The Information in Complex Specified Information (CSI) Cannot Be Quantified": That's simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible. As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits "functionally specific bits." Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly based on the reasonable probability of finding such a sequence through a random walk based search or its functional equivalent. This approach is based on the observation that functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, beyond a certain scope of search, it becomes implausible that such a search on a planet wide scale or even on a scale comparable to our observed cosmos, will find it. But, we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a "typical" random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;'.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.])
Will you be updating that FAQ? JemimaRacktouey
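As an aside on the FAQ's contrast between a random-looking string and an orderly one, here is a minimal sketch (Python, using zlib compression as a rough stand-in for descriptive complexity; the strings are the FAQ's own examples, lightly normalised to plain ASCII) showing that the orderly string admits a much shorter description:

    import zlib

    # The FAQ's two example strings (apostrophe substituted for the curly quote)
    random_like = b"kbnvusgwpsvbcvfel;'.. jiw[w;xb xqg[l;am"
    ordered = b"at" * 20    # "atatat..." of comparable length

    for label, s in [("random-like", random_like), ("ordered", ordered)]:
        packed = zlib.compress(s, 9)
        # the ordered string shrinks markedly; the random-looking one does not
        # (it can even grow a little, because of the compressor's fixed overhead)
        print(label, len(s), "->", len(packed))

This is only an analogy for the FAQ's notion of low descriptive complexity, not a calculation of CSI itself.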
vjtorley
Nevertheless, it should be possible to calculate a provisional upper bound for P(T|H), based on what scientists currently know about chemical and biological processes.
Does your reference to "what scientists currently know" refer to an ID scientist who presumably does not believe that Evolution can create the life we see about us, or to a non-ID scientist who does understand that Evolution can create such life? It seems to me that if you are calculating probabilities based on the spontaneous formation of (for example) a given protein (tornado in a junkyard), you'll get a different answer than if you assume it evolved.
the physical probability of a non-foresighted (i.e. unintelligent) process generating that pattern according to chance hypothesis H.
And my question is: what are the options available to that process? Random iteration through the total available possibility space, or a gradual step-by-step process? Which one makes a big difference. And don't forget that no actual biologist claims that the components of cells came together randomly, so the "tornado in a junkyard" calculations so beloved of Kairosfocus and others are simply irrelevant. I'd like to think they were not deliberately misleading, but they've been corrected so many times by now that it's a reasonable assumption. JemimaRacktouey
Collin (#4) Good question. In part (ii) I offered my own ballpark estimate for P(T|H), based on my own "gut feel":
To illustrate the point, I’ll plug in some estimates that sound intuitively right to me, given my limited background knowledge of geological processes occurring over the past 4.54 billion years: 1*(10^-1)*(10^-1)*(10^-10)*(10^-10)*(10^-6)*(10^-1)*(10^-1)*(10^-4)*(10^-2), for the forehead, two eyebrows, two eyes, nose, cheeks, mouth and jawline respectively, giving a product of 10^(-36) – a very low number indeed. Raising that probability to the fourth power – giving a figure of 10^(-144) – would enable the alien scientists to calculate the probability of four faces being carved at a single location by chance, or P(T|H).
I could imagine something like a forehead (a flat area) forming just about anywhere on earth's surface over a 4.54 billion year time period. The question I had to ask myself next was: given a forehead, what's the probability of something shaped like an eyebrow forming immediately below it, over a period of 4.54 billion years? I said 10^-1. Then I asked: given an eyebrow, what's the probability of something shaped like an eye forming immediately below it, over a period of 4.54 billion years? The eyes on Mt. Rushmore are quite detailed - they even have irises and pupils, as well as eyelids - so I said 10^-10. And so on for the eyebrow and eye on the other side of the face, and the nose, mouth and jawline. Note that the aliens could make these estimates, and thereby estimate P(T|H), without ever having seen a human face or dug up any remains. All they'd need is a rough working knowledge of geological processes, and perhaps a good computer that could make better-informed estimates (based on models) than my top-of-the-head stuff. (Although sometimes I wonder - perhaps this kind of estimation is what humans excel at, compared to computers?) They would only need to find human remains in order to estimate Phi_s(T). vjtorley
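For anyone who wants to check the arithmetic in the comment above, a minimal sketch (Python; the ten factors are vjtorley's own illustrative guesses quoted above, not measurements):

    from math import log10, prod

    # Conditional probability guesses for the facial features listed above
    factors = [1, 1e-1, 1e-1, 1e-10, 1e-10, 1e-6, 1e-1, 1e-1, 1e-4, 1e-2]

    one_face = prod(factors)      # ~1e-36
    four_faces = one_face ** 4    # ~1e-144

    print(log10(one_face))        # -36.0 (to within floating-point error)
    print(log10(four_faces))      # -144.0

The product of the exponents is -36, and raising that to the fourth power gives the 10^(-144) figure used for P(T|H) in the comment.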
Given part III, I think that the scientists would at least be able to rank the artifacts consistently even if they were unable to agree on the exact "amount" of CSI. Collin
Couple of questions. Does the calculation of Mt. Rushmore depend on the aliens finding human remains as a reference point? How do the aliens calculate the probability of the faces arising through chance and necessity before calculating CSI? That seems like a gloss over. Collin
"Why do you demand a generic solution to the CSI problem?" Mathgrrl wasn't 'demanding' a general solution, she(?) was asking (oh so politely) for a definition and calculation as applied to the 4 cases. The ID critics would be happy if you could calculate CSI for just a few real-life cases, whether chosen by MG or others. So far, however, after 400 comments we have your conclusion stated above that CSI is not only non-computable (ever? sometimes?) but is even non-describable (it's a 'mixed' property). If CSI cannot be calculated for even simple real-world cases (more than a smiley), then it doesn't seem to be very useful. Graham
Dr. Torley, I am impressed :) especially with the smiley face calculation :) bornagain77
Point mutations, not typos. :cool: Joseph
Hi everyone. I've been up for the past two nights working on this post. If anyone finds any typos, my apologies. I'll try to correct them as soon as I can. vjtorley
