Uncommon Descent Serving The Intelligent Design Community

CSI Revisited


Over at The Skeptical Zone, Dr. Elizabeth Liddle has put up a post for Uncommon Descent readers, entitled, A CSI Challenge (15 May 2013). She writes:

Here is a pattern:

It’s a gray-scale image, so it is just one 2D matrix. Here is a text file containing the matrix:

MysteryPhoto

I would like to know whether it has CSI or not.

The term complex specified information (or CSI) is defined by Intelligent Design advocates William Dembski and Jonathan Wells in their book, The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), as being equivalent to specified complexity (p. 311), which is then defined as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

In some comments on her latest post, Dr. Liddle tells readers more about her mysterious pattern:

There are 658 x 795 pixels in the image, i.e. 523,110. Each one can take one of 256 values (0:255). Not all values are represented with equal probability, though. It’s a negatively skewed distribution, with higher values more prevalent than lower…

I want CSI not FSC or any of the other alphabet soup stuff…

Feel free to guess what it is. I shan’t say for a while ☺ …

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design…

Clearly it’s going to take a billion monkeys with pixel writers a heck of a long time before they come up with something as nice as my photo. But I’d like to compute just how long, to see if my pattern is designed…

tbh [To be honest – VJT], I think there are loads of ways of doing this, and some will give you a positive Design signal and some will not.

It all depends on p(T|H) [the probability of a specified pattern T occurring by chance, according to some chance hypothesis H – VJT] which is the thing that nobody ever tells us how to calculate.

It would be interesting if someone at UD would have a go, though.

Looking at the image, I thought it bore some resemblance to contours (Chesil beach, perhaps?), but I’m probably hopelessly wrong in my guess. At any rate, I’d like to make a few short remarks.

(1) There is a vital distinction that needs to be kept in mind between a specified pattern’s being improbable as a configuration, and its being improbable as an outcome. The former does not necessarily imply the latter. If a pattern is composed of elements, then if we look at all possible arrangements or configurations of those constituent elements, it may be that only a very tiny proportion of these will contain the pattern in question. That makes it configurationally improbable. But that does not mean that the pattern is unlikely to ever arise: in other words, it would be unwarranted to infer that the appearance of the pattern in question is historically improbable, from its rarity as a possible configuration of its constituent elements.
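To make the distinction concrete, here is a toy illustration of my own (not from the post): a fully sorted ordering is configurationally rare (one arrangement in n!), yet a process with a built-in bias toward order reaches it on every trial, so its historical probability is 1.

```python
import math
import random

n = 13
# Configurational improbability: a sorted ordering is just one of n!
# equally-weighted arrangements of n distinct items.
config_prob = 1 / math.factorial(n)          # about 1.6e-10 for n = 13

# Historical probability: a biased process (here, simply sorting)
# produces that same rare configuration on every trial.
trials = 1_000
hits = sum(
    sorted(random.sample(range(n), n)) == list(range(n))
    for _ in range(trials)
)
print(config_prob)      # ~1.6e-10: rare as a configuration
print(hits / trials)    # 1.0: certain as an outcome of this process
```

Sorting is of course a caricature of a natural process, but it shows why rarity among configurations, by itself, licenses no conclusion about how often the configuration will actually arise.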

(2) If, however, the various processes that are capable of generating the pattern in question contain no built-in biases in favor of this specified pattern arising – or more generally, no built-in biases in favor of any specified pattern arising – then we can legitimately infer that if a pattern is configurationally improbable, then its emergence over the course of time is correspondingly unlikely.

Unfortunately, the following remark by Elizabeth Liddle in her A CSI Challenge post seems to blur the distinction between configurational improbability and what Professor William Dembski and Dr. Jonathan Wells refer to in their book, The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), as originational improbability (or what I prefer to call historical improbability):

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design.

By itself, the configurational improbability of a pattern cannot tell us whether the pattern was designed. In order to assess the probability of obtaining that pattern at least once in the history of the universe, we need to look at the natural processes which are capable of generating that pattern.

(3) The “chance hypothesis” H that Professor Dembski discussed in his 2005 paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005), was not a “pure randomness” hypothesis. In his paper, he referred to it as “the chance hypothesis most naturally associated with this probabilistic set-up” (p. 7) and later declared, “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 18).
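For reference, the 2005 paper quantifies specified complexity as chi = -log2[10^120 · phi_S(T) · P(T|H)], with design inferred when chi exceeds 1. A minimal sketch of that arithmetic follows; the input numbers are purely illustrative assumptions, not measurements of any real pattern.

```python
import math

def chi(phi_s, p_t_given_h, resources=1e120):
    """Specified complexity per Dembski (2005):
    chi = -log2(resources * phi_S(T) * P(T|H)), where
      phi_S(T)   -- number of patterns at most as hard to describe as T,
      P(T|H)     -- probability of T under the chance hypothesis H,
      resources  -- bound on probabilistic resources (10^120).
    chi > 1 is the paper's threshold for inferring design."""
    return -math.log2(resources * phi_s * p_t_given_h)

# Purely illustrative inputs:
print(chi(phi_s=1e5, p_t_given_h=1e-150))   # ~83: above the threshold
print(chi(phi_s=1e5, p_t_given_h=1e-100))   # ~-83: below the threshold
```

Note that P(T|H) is exactly the quantity Dr. Liddle complains nobody shows how to calculate; the formula is only as good as the chance hypothesis H plugged into it.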

In a comment on Dr. Elizabeth Liddle’s post, A CSI Challenge, ID critic Professor Joe Felsenstein writes:

The interpretation that many of us made of CSI was that it was an independent assessment of whether natural processes could have produced the adaptation. And that Dembski was claiming a conservation law to show that natural processes could not produce CSI.

Even most pro-ID commenters at UD interpreted Dembski’s CSI that way. They were always claiming that CSI was something that could be independently evaluated without yet knowing what processes produced the pattern.

But now Dembski has clarified that CSI is not (and maybe never was) something you could assess independently of knowing the processes that produced the pattern. Which makes it mostly an afterthought, and not of great interest.

Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.” However, this is old news: Professor Dembski acknowledged as much back in 2005, in his paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005). Now, it is true that in his paper, Professor Dembski repeatedly referred to H as the chance hypothesis. But in view of his remark on page 18, that “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms,” I think it is reasonable to conclude that he was employing the word “chance” in its broad sense of “undirected,” rather than “purely random,” since Darwinian mechanisms are by definition non-random. (Note: when I say “undirected” in this post, I do not mean “lacking a telos, or built-in goal”; rather, I mean “lacking foresight, and hence not directed at any long-term goal.”)

I shall argue below that even if CSI cannot be assessed independently of knowing the processes that might have produced the pattern, it is still a useful and genuinely informative quantity, in many situations.

(4) We will definitely be unable to infer that a pattern was produced by Intelligent Design if:

(a) there is a very large (possibly infinite) number of undirected processes that might have produced the pattern;

(b) the chance of any one of these processes producing the pattern is astronomically low; and

(c) all of these processes are (roughly) equally probable.

What we then obtain is a discrete uniform distribution, which looks like this:

In the graph above, there are only five points, corresponding to five rival “chance hypotheses,” but what if we had 5,000 or 5,000,000 to consider, and they were all equally meritorious? In that case, our probability distribution would look more and more like this continuous uniform distribution:

The problem here is that taken singly, each “chance hypothesis” appears to be incapable of generating the pattern within a reasonable period of time: we’d have to wait for eons before we saw it arise. At the same time, taken together, the entire collection of “chance hypotheses” may well be perfectly capable of generating the pattern in question.

The moral of the story is that it is not enough to rule out this or that “chance hypothesis”; we have to rule out the entire ensemble of “chance hypotheses” before we can legitimately infer that a pattern is the result of Intelligent Design.

But how can we rule out all possible “chance hypotheses” for generating a pattern, when we haven’t had time to test them all? The answer is that if some “chance hypotheses” are much more probable than others, so that a few tower above all the rest, and the probabilities of the remaining chance hypotheses tend towards zero, then we may be able to estimate the probability of the entire ensemble of chance processes generating that pattern. And if this probability is so low that we would not expect to see the event realized even once in the entire history of the observable universe, then we could legitimately infer that the pattern was the product of Intelligent Design.
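One way to cash this out (a sketch, under the assumption that the rival hypotheses are mutually exclusive with prior weights w_i) is the law of total probability: P(T) = sum over i of w_i · P(T|H_i), compared against a universal probability bound. The weights and probabilities below are hypothetical.

```python
# Sketch: pricing the whole ensemble of "chance hypotheses", assuming
# they are mutually exclusive alternatives with prior weights w_i.
# Law of total probability: P(T) = sum_i w_i * P(T | H_i).

def ensemble_prob(weights, p_given_h):
    return sum(w * p for w, p in zip(weights, p_given_h))

UNIVERSAL_BOUND = 1e-150   # Dembski's universal probability bound

# Case 1: every hypothesis is hopeless -> the ensemble is ruled out.
p1 = ensemble_prob([0.7, 0.2, 0.1], [1e-160, 1e-158, 1e-155])
# Case 2: one modestly capable hypothesis rescues the ensemble.
p2 = ensemble_prob([0.7, 0.2, 0.1], [1e-160, 1e-60, 1e-155])

print(p1 < UNIVERSAL_BOUND)   # True: the whole ensemble falls below the bound
print(p2 < UNIVERSAL_BOUND)   # False: the ensemble cannot be ruled out
```

The second case makes the point in the text: a single chance hypothesis with a non-negligible probability of producing the pattern is enough to block the design inference, no matter how hopeless the others are.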

(5) In particular, if we suppose that the “chance hypotheses” which purport to explain how a pattern might have arisen in the absence of Intelligent Design follow a power law distribution, it is possible to rule out the entire ensemble of “chance” hypotheses as an inadequate explanation of that pattern. In the case of a power law distribution, we need only focus on the top few contenders, for reasons that will soon be readily apparent. Here’s what a discrete power law distribution looks like:

The graph above depicts various Zipfian distributions, which are discrete power law probability distributions. The frequency of words in the English language follows this kind of distribution; little words like “the,” “of” and “and” dominate.

And here’s what a continuous power law distribution looks like:

An example of a power-law graph, being used to demonstrate ranking of popularity (e.g. of actors). To the right is the long tail of insignificant individuals (e.g. millions of largely unknown aspiring actors), and to the left are the few individuals that dominate (e.g. the top 100 Hollywood movie stars).

This phenomenon whereby a few individuals dominate the rest is also known as the 80–20 rule, or the Pareto principle. It is commonly expressed in the adage: “80% of your sales come from 20% of your clients.” Applying this principle to “chance hypotheses” for explaining a pattern in the natural sciences, we see that there’s no need to evaluate each and every chance hypothesis that might explain the pattern; we need only look at the leading contenders, and if we notice the probabilities tapering off in a way that conforms to the 80-20 rule, we can calculate the overall probability that the entire set of hypotheses is capable of explaining the pattern in question.
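The point can be checked numerically. Assuming the hypothesis weights follow a Zipf law, w_i proportional to 1/i^s, the head of the distribution carries almost all of the prior mass, so the tail's total contribution to P(T) is bounded by its small residual weight (the exponent s = 2 here is an arbitrary choice for illustration):

```python
# Zipf-weighted hypothesis space: w_i proportional to 1/i^s.
s = 2.0
N = 1_000_000                                   # a huge hypothesis space
norm = sum(1 / i**s for i in range(1, N + 1))
top10 = sum(1 / i**s for i in range(1, 11)) / norm

print(top10)   # ~0.94: ten hypotheses carry ~94% of the prior weight
# The remaining ~6% of weight bounds how much the entire
# million-hypothesis tail can contribute to P(T), whatever the
# individual tail probabilities happen to be.
```

This is why, under a power-law assumption, evaluating only the leading contenders can legitimately bound the whole ensemble.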

Is the situation I have described a rare or anomalous one? Not at all. Very often, when scientists discover some unusual pattern in Nature, and try to evaluate the likelihood of various mechanisms for generating that pattern, they find that a handful of mechanisms tend to dominate the rest.

The Chaos Computer Club used a model of the monolith in Arthur C. Clarke’s novel 2001, at the Hackers at Large camp site. Image courtesy of Wikipedia.

(6) We can now see how the astronauts were immediately able to infer that the Monolith on the moon in the movie 2001 (based on Arthur C. Clarke’s novel) must have been designed. The monolith in the story was a black, extremely flat, non-reflective rectangular solid whose dimensions were in the precise ratio of 1 : 4 : 9 (the squares of the first three integers). The only plausible non-intelligent causes of a black monolith being on the Moon can be classified into two broad categories: exogenous (it arrived there as a result of some outside event – i.e. something falling out of the sky, such as a meteorite or asteroid) and endogenous (some process occurring on or beneath the moon’s surface generated it – e.g. lunar volcanism, or perhaps the action of wind and water in a bygone age when the moon may have had a thin atmosphere).

It doesn’t take much mental computing to see that neither process could plausibly generate a monument of such precise dimensions, in the ratio of 1 : 4 : 9. To see what Nature can generate by comparison, have a look at these red basaltic prisms from the Giant’s Causeway in Northern Ireland:

In short: in situations where scientists can ascertain that there are only a few promising hypotheses for explaining a pattern in Nature, legitimate design inferences can be made.

The underwater formation or ruin called “The Turtle” at Yonaguni, Ryukyu islands. Photo courtesy of Masahiro Kaji and Wikipedia.

(7) We can now see why the Yonaguni Monument continues to attract such spirited controversy. Some experts, such as Masaaki Kimura of the University of the Ryukyus, maintain that it is man-made; Kimura claims: “The largest structure looks like a complicated, monolithic, stepped pyramid that rises from a depth of 25 meters.” Certain features of the Monument, such as a 5 meter-wide ledge that encircles the base of the formation on three sides, a stone column about 7 meters tall, a straight wall 10 meters long, and a triangular depression with two large holes at its edge, are often cited as unmistakable evidence of human origin. There have even been claims of mysterious writing found at the underwater site. Other experts, such as Robert Schoch, a professor of science and mathematics at Boston University, insist that the straight edges in the underwater structure are geological features. “The first time I dived there, I knew it was not artificial,” Schoch said in an interview with National Geographic. “It’s not as regular as many people claim, and the right angles and symmetry don’t add up in many places.” There is an excellent article about the Monument by Brian Dunning at Skeptoid here.

The real problem here, as I see it, is that the dimensions of the relevant features of the Yonaguni Monument haven’t yet been measured and described in a rigorously mathematical fashion. For that reason, we don’t know whether it falls closer to the “Giant’s Causeway” end of the “design spectrum,” or the “Moon Monolith” end. In the absence of a large number of man-made monuments and natural monoliths that we can compare it to, our naive and untutored reaction to the Yonaguni Monument is one of perplexity: we don’t know what to think – although I’d be inclined to bet against its having been designed. What we need is more information.

(8) Turning now to Dr. Elizabeth Liddle’s picture, there are three good reasons why we cannot determine how much CSI it contains.

First, Dr. Liddle is declining to tell us what the specified pattern is, for the time being. Until she does, we have no way of knowing for sure whether there is a pattern or not, short of spotting it – which might take a very long time. (Some patterns, like the Champernowne sequence in Professor Dembski’s 2005 essay, are hard to discern. Others, like the first 100 primes, are relatively easy.)

Second, we have no idea what kind of processes were actually used by Dr. Liddle to generate the picture. We don’t even know what medium it naturally occurs in (I’m assuming here that it exists somewhere out there in the real world). Is it sand? hilly land? tree bark? We don’t know. Hence we are unable to compute P(T|H), or the probability of the pattern arising according to some chance hypothesis, as we can’t even formulate a “chance hypothesis” H in the first place.

Finally, we don’t know what other kinds of natural processes could have been used to generate the pattern (if there is one), as we don’t know what the pattern is in the first place, and we don’t know where in Nature it can be found. Hence, we are unable to formulate a set of rival “chance hypotheses,” and as a result, we have no idea what the probability distribution of the ensemble of “chance hypotheses” looks like.

In short: there are too many unknowns to calculate the CSI in Dr. Liddle’s example. A few more hints might be in order.

(9) In the case of proteins, on the other hand, the pattern is not mathematical (e.g. a sequence of numbers) but functional: proteins are long strings of amino acids that actually manage to fold up, and that perform some useful biological role inside the cell. Given this knowledge, scientists can formulate hypotheses regarding the most likely processes on the early Earth for assembling amino acid strings. If a few of these hypotheses stand out, scientists can safely ignore the rest. Thus the CSI in a protein should be straightforwardly computable.
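As a sketch of what such a computation looks like (my illustration, not the authors’; the 10^-77 figure is Axe’s oft-cited estimate for one ~150-residue protein domain and should be treated as an assumption, not a settled value):

```python
import math

# Functional information of a protein fold: I = -log2(p_functional),
# where p_functional is the fraction of amino-acid sequences of the
# given length that fold up and perform the function.
p_functional = 1e-77        # assumed, per Axe's estimate for one domain
info_bits = -math.log2(p_functional)

print(round(info_bits))     # 256 bits for this single domain
# Against a 500-bit threshold (used elsewhere in this thread), one
# such domain falls short on this estimate taken alone.
print(info_bits > 500)      # False
```

Whether this counts as CSI then turns entirely on how plausible the assumed chance hypotheses for assembling such a sequence are, which is the point at issue in the main text.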

I have cited the recent work of Dr. Branko Kozulic and Dr. Douglas Axe in recent posts of mine (see here, here and here). Suffice it to say that the authors’ conclusion that the proteins we find in Nature are the product of Intelligent Design is not an “Argument from Incredulity” but an argument based on solid mathematics, applied to the most plausible “chance hypotheses” for generating a protein. And to those who object that proteins might have come from some smaller replicator, I say: that’s not a mathematical “might” but a mere epistemic one (as in “There might, for all we know, be fairies”). Meanwhile, the onus is on Darwinists to find such a replicator.

(10) Finally, Professor Felsenstein’s claim in a recent post that “Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve” with their recent paper on the law of conservation of information, is a specious one, as it rests on a misunderstanding of Intelligent Design. I’ll say more about that in a forthcoming post.

Recommended Reading

Specification: The Pattern That Signifies Intelligence by William A. Dembski (version 1.22, 15 August 2005).

The Conservation of Information: Measuring the Cost of Successful Search by William A. Dembski (version 1.1, 6 May 2006). Also published in IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 39(5) (September 2009): 1051-1061.

Conservation of Information Made Simple (28 August 2012) by William A. Dembski.

Before They’ve Even Seen Stephen Meyer’s New Book, Darwinists Waste No Time in Criticizing Darwin’s Doubt (4 April 2013) by William A. Dembski.

Does CSI enable us to detect Design? A reply to William Dembski (7 April 2013) by Joe Felsenstein at Panda’s Thumb.

NEWS FLASH: Dembski’s CSI caught in the act (14 April 2011) by kairosfocus at Uncommon Descent

Is Darwinism a better explanation of life than Intelligent Design? (14 May 2013) by Elizabeth Liddle at The Skeptical Zone.

A CSI Challenge (15 May 2013) by Elizabeth Liddle at The Skeptical Zone.

Comments
And no, it does NOT matter how the thing came to be- CSI is present or not REGARDLESS of the process. However it just so happens that every time we have observed CSI and knew the process it has always been via agency involvement- always, 100% of the time. We have never observed mother nature producing CSI-> never, ie 0% of the time. And THAT is why when we observe CSI and don't know the process, we can safely infer an agency was involved. And that matters to the investigation. However none of that will change the fact that Lizzie doesn't understand what Dembski is saying...Joe
May 17, 2013, 12:19 PM PDT
It's very sad but Lizzie really thinks a specification is present and she sez that Dembski agrees- albeit that is because she doesn't understand anything Dembski wrote. True, she may understand some or even most of the words he used but for some reason she just can't seem to put it all together. Just look at her Dr Nim thread- she doesn't understand that Dr Nim's responses trace back to the creator(s) and designer(s), ie actual intelligent agencies. Far from chance and necessity, Dr Nim operates as DESIGNED, as intended. Meaning the intent can also be traced back to the creator(s) and designer(s). "No but it's just a plastic thing with gates!" Right, but it was designed to do something so no one should be surprised when it does what it is designed to do. Except you...Joe
May 17, 2013, 12:15 PM PDT
Actually, as I thought more about my own question: "Perhaps I am thinking about this too simplistically, but wouldn’t a (sufficiently complex) pattern be a valid specification?" I concluded that only a sufficiently complex and sufficiently specified pattern would be a valid specification. :P *sigh*Phinehas
May 17, 2013, 12:09 PM PDT
franklin:
I am glad to see that you agree with me that IDists aren’t able to make a design determination unless they know what the object is.
No one can- archaeologists require an object in order to determine whether or not it is an artifact. Forensic scientists need something in order to determine if a crime has been committed. SETI needs to receive a signal before they can determine whether or not it is from ET.
Makes CSI quite worthless as a metric.
You, being scientifically illiterate, makes CSI worthless as a metric? How does that work exactly?Joe
May 17, 2013, 12:05 PM PDT
Hey EA: Thanks for the response. I'm curious about this:
I’d say in the case of a Google image search, or even facial recognition, what is being matched in the actual pixel search is not so much a specification in the sense of complex specified information, but just a pattern match.
Perhaps I am thinking about this too simplistically, but wouldn't a (sufficiently complex) pattern be a valid specification? This isn't to say that all specifications must be patterns. I think functionality is another valid specification, though I can see how it might be more difficult to calculate, especially as anything but a binary value. I'd like to better understand the distinction you are making.Phinehas
May 17, 2013, 12:04 PM PDT
One aspect of facial recognition is recognizing something as a face, as opposed to any other object. There is certainly general specificity in faces, otherwise this would not be possible. After a face is recognized and parameterized, it can be searched in a database and compared with others.Chance Ratcliff
May 17, 2013, 11:55 AM PDT
Thanks, Phineas. I'd say in the case of a Google image search, or even facial recognition, what is being matched in the actual pixel search is not so much a specification in the sense of complex specified information, but just a pattern match. In the case of the Google image search, once the pattern (a particular picture) is found, we still have to ascertain whether it meets a specification (in the case of the glacier deposits, no). In the case of facial recognition, the additional specification (beyond a simple pattern match) is built into the algorithm or the search parameters, namely, it is programmed to look for specifically identifiable physical features.Eric Anderson
May 17, 2013, 11:47 AM PDT
EA: Has anyone in ID ever made the claim that you can calculate specificity? I'm not aware of any such claims. Still, in principle, I suppose it might be possible to do so. Intuitively, you might assign a 1.0 value to a grayscale picture of Ben Franklin at the highest fidelity for a particular resolution. You could then sum the number of pixels in any given picture that corresponds to the Ben Franklin picture and divide the result by the total pixel count and call that the specificity. In reality, however, we tend to pattern match with a good deal more sophistication. When pattern matching, we naturally process against every image we've ever seen as well as against conceptual images we haven't seen, but can easily imagine. The Google Image search is a bit closer to this in that it can compare against a large library of pictures on the internet. Though it doesn't show a specificity calculation in what it returns, I'm betting one is generated behind the scenes that could quite easily be exposed. In any case, as has been demonstrated above, it is quite powerful. I'd think that facial recognition or fingerprint comparison software would have a similar concept to specificity, though it might be called something else. Even so, I can pretty much guarantee that a human will vet what the software returns before any serious action is taken. The fact is: we are pattern matching fiends. We even see patterns where none is intended. On a grilled cheese sandwich. In a cloud. Even so, we are extremely adept at doing internal specificity calculations to arrive rather trivially at various determinations of, "probably not designed, maybe designed, or absolutely designed." I think we do this so trivially that it leads the less introspective to conclude CSI is simply a matter of saying something "looks designed." But this merely glosses over how absolutely mind-boggling the mind's pattern matching capabilities are. 
We might well look at face recognition software as simply a matter of saying someone "looks like Fred." So it is useless? If critics of CSI are claiming that specificity is a concept that could be better nailed down mathematically, then I'd agree wholeheartedly. Here's hoping for better algorithms in the future. Even so, I think our minds are not only sufficiently capable of assigning specificity to patterns, I'd argue that, at our current level of technology, our minds are by far more capable of doing so than any other method. And this is certainly the case when it comes to avoiding false positives. This is why we can assert with such confidence that ID critics will not succeed in offering up a picture where design will be identified, and will not be present. This is also why their complaints about calculating specificity are hollow red herrings.Phinehas
May 17, 2013, 11:35 AM PDT
Phineas @37: Well said. ----- franklin: Sorry, but you are wrong. See kf @43: ----- Lizzie, via Joe @39:
If CSI is any use, then it ought to be possible to compute it for my pattern.
What does that mean, Lizzie. Are you asking for a calculation of specificity? If so, then you don't know what you are talking about and demonstrate that you still don't understand the design inference.Eric Anderson
May 17, 2013, 10:29 AM PDT
joe: So science via psychics? No one makes the claim that we can determine design or not with no knowledge of what the object is.
I am glad to see that you agree with me that IDists aren't able to make a design determination unless they know what the object is. Makes CSI quite worthless as a metric.franklin
May 17, 2013, 10:27 AM PDT
What is needed to really test the design inference is a case where design will be identified and it is not present
^^^So much this!^^^Phinehas
May 17, 2013, 10:22 AM PDT
F/N: On doing a CSI calc on the case. We had an image of some 500 kbits. There was no evidence of specificity:

Chi_500 = 500 kbits * 0 - 500 = -500 bits

That is, on absence of evident specificity -- as has been pointed out and explained long since -- we are at the baseline, 500 bits short of the threshold. Sorry, that one won't wash either. KFkairosfocus
May 17, 2013, 09:52 AM PDT
F: You are simply wrong; go up to 2 above, BEFORE I knew the image was a snow pattern. (I saw Phinehas' post AFTER I posted.) Notice how I compared the case to wood grain, and pointed out how complexity and specificity were not apparently coupled? Notice how I drew the inference that unless and until there was evidence of such a coupling of complexity and specificity, there would be a default to chance and necessity? Notice how I accepted that the design inference process is quite willing to misdiagnose actual cases of design that do not pass the criterion of specificity? WHY ARE YOU TRYING TO REVISE THE FACTS AFTER THE FACT?

You will notice that I then saw Phinehas' comment, and remarked on that, highlighting WHY such a case would not couple specificity to complexity. Thereafter, I did a Google search, which is a TARGETED search, and from that identified the credible source. I then was able to fit the clip from TSZ into the image more or less, though I think there is a bit of distortion there. This confirmed the assessment.

So, the truth is that the EF did work, and did what it was supposed to do. It identified that complexity without specificity to a narrow zone T will not be enough. It was clear that this could be a case where actual design is such that it cannot be detected -- recall my remarks on not being a general decrypting algorithm? -- and then we were able to confirm the evident absence of such a match. Unless there is some steganography hiding in the code that I do not have time or inclination to try to hunt down.

What else is clear is that the test is a strawman. What is needed to really test the design inference is a case where design will be identified and it is not present. But that, I am afraid -- as the random document generation tests show -- will be very hard to do. What you have succeeded in doing is to show us that we are not dealing with a reasonable-minded, fair process or people.

Which, unfortunately, on long experience, we have come to expect by now. I think you have some self-examination to do, sir. KFkairosfocus
May 17, 2013, 09:46 AM PDT
The purpose of the exercise was to see if it is indeed possible to make the calculations with no knowledge of what the object is...
So science via psychics? No one makes the claim that we can determine design or not with no knowldge of what the object is. That has to be one of the stupidest things I have ever heard.Joe
May 17, 2013, 09:34 AM PDT
EA: Steganography, methinks. A point of worry for intel agencies just now. Indeed, if what was otherwise an image of a natural scene now pops up with steganography, the first aspect would be nature, but the latter would be design. As for the case of a molecular join, the problem is of course to set up the conditions and the absence of actual self-replication. But, as Johnson pointed out, if you are locked in the materialist a priori circle of thought, any slightest relevance to the conclusion already held looks ever so much like confirmation. The problem of the cartoon characters going in circles in the woods, and seeing more and more footprints, thinking they are on the right track again. And that also shows the reluctance to accept just how reliable the EF is when it does rule design, and how often it will be right when it rules not design, too. Which is the default side. For, if things MUST be otherwise, one imagines, this is just a fluke, yet again. (There are billions of these "flukes" and no genuine counter-instances, but hope springs eternal.) KFkairosfocus
May 17, 2013, 09:33 AM PDT
franklin,, Only scientifically illiterate people think that science is conducted via a photo. The design inference requires an examination of the actual evidence. Also CSI wouldn't be the tool to use anyway. So you chumps are just proud of your inability to investigate.Joe
May 17, 2013, 09:32 AM PDT
Lizzie:
If CSI is any use, then it ought to be possible to compute it for my pattern.
And if a chainsaw can't help with my kid's math homework it isn't of any use. Again, Lizzie, I suggest you email Dembski before running around like a drunken sailor...Joe
May 17, 2013 at 9:29 AM PDT
eric: Further, the photo was put forward, it was analyzed, the design filter worked.
The revisionist history is amusing, but surely you realize that the 'filter' did nothing and that it was Google that found the image. Everything that follows is post hoc hand-waving, which to all onlookers is transparently obvious. Once the identity of the image was revealed, it isn't that impressive for you to state whether the object was designed or not designed. The purpose of the exercise was to see if it is indeed possible to make the calculations with no knowledge of what the object is or its history, in fact knowing nothing at all about the object. This is the claim that IDists make for their/your alleged metric, and the answer is quite clear: you can't do it, as VJT has pointed out.

franklin
May 17, 2013 at 9:27 AM PDT
Querius: After finding the image online, I was thinking last night along the exact lines you laid out. What if someone took a simple substitution cipher and used it to modify the pixel data in order to embed a message in the picture? This would be a pretty typical example of steganography. One might argue that, before the cipher was revealed, the calculation of CSI would give one result, and after the cipher was revealed, it would give another. But this would be an argument against claims about CSI that have never been made. It has ALWAYS been recognized that tricking CSI into giving a false negative is rather trivial. CSI is set up specifically to prevent false positives; negative outcomes are always provisional and can change when additional information is brought to light. Inexplicably, it seems that TSZ lacks the sophistication to even make it to that sort of objection. Liz:
That glacier produced that pattern. That pattern is pretty cool, and complex, and specified (the glacier even more so than the photo). So that glacier “found” that pattern.
Seriously? It is difficult to imagine a more appropriate response to this than your favorite Star Trek facepalm gif. Still, I'll try.

The glacier "found" that pattern just like a fair dealer "finds" the improbable pattern of cards in each and every one of your hands. Without an independent specification, no set of five cards is more or less improbable than any other. Nor do you typically make an inference that the dealer is somehow designing exactly what cards are in your hand. You trust that random processes are at work as advertised. However, if an opponent suddenly starts getting Royal Flush after Royal Flush, each and every hand, you WILL make a design inference. I guarantee it. You WILL suspect that the advertised random processes are no longer in effect and that the dealer is somehow designing the outcome. HOW can you do this, given that each hand is just as improbable as any other? If you are honest and open-minded, you will conclude that your inference arises from your recognition that these particular cards are consistently lining up with an independent specification. (If you've got a better explanation as to how you'd infer design, I'd love to hear it. My point is that we both know that you WOULD infer design, and we both know that it WOULD be a valid inference.)

Bringing this back around to pictures: if what is advertised as the random accumulation of volcanic ash on ice starts to resemble Ben Franklin (an independent specification) with more and more fidelity, there WILL once again be a point where you infer design. You KNOW that this sort of inference is valid, whether you resort to a formal calculation of CSI or not. Why keep acting like you don't understand? It makes no sense and only serves to call your own faculties and credibility into question.

Phinehas
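The arithmetic behind the poker analogy is easy to make concrete. Every particular 5-card hand has the same tiny probability, 1/C(52,5); "a royal flush" is a pre-specified target covering only 4 of those hands. A short sketch (illustrative only, not anyone's formal CSI calculation):

```python
# Every exact 5-card hand is equally improbable; "royal flush" names a
# tiny independently specified target of 4 hands (one per suit).
from math import comb

total_hands = comb(52, 5)        # 2,598,960 possible 5-card hands
p_specific = 1 / total_hands     # any one exact hand, royal flush or not
p_royal = 4 / total_hands        # probability the hand is a royal flush

print(total_hands)   # 2598960
print(p_specific)    # ~3.85e-07
print(p_royal)       # ~1.54e-06

# Two royal flushes in a row is already past one in a trillion,
# which is why repetition of a specified pattern triggers suspicion:
print(p_royal ** 2)  # ~2.37e-12
```

The point the comment makes is visible in the numbers: p_specific is identical for a royal flush and for junk, so only the match to an independent specification, repeated hand after hand, licenses the inference.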
May 17, 2013 at 9:26 AM PDT
Eric, Lizzie really thinks she is using Dembski's definition of specification. I know she isn't. And if she had any integrity she would bring it up with Dembski. But then again, Lizzie thinks we do science with only one photograph. Science isn't a parlor game, Lizzie, and all you are doing is trying to engage in parlor games.

Joe
May 17, 2013 at 9:25 AM PDT
Re: Joe's various comments above: I agree that in most cases we do not need to know the process in order to infer design. To be sure, a process itself can be designed, so in those cases we would need to know what the process was in order to even analyze it. But we do not need to know exactly how, say, the pyramids were built in order to infer design. That kind of thinking is completely wrong. --- If Lizzie thinks a glacier pattern or some similar natural phenomenon is "specified," then this means she simply doesn't understand what is meant by a specification. Further, the photo was put forward, it was analyzed, the design filter worked. But now there seems to be a lot of backpedaling. Why can't they say, "Well done. Looks like the filter worked in this case"? BTW, the filter works fine for images. I'm not sure what you mean by it being the wrong tool in some cases. ----- Oh, boy. Not the alleged, hypothetical, never-yet-found, self-replicating molecule again . . .

Eric Anderson
May 17, 2013 at 8:48 AM PDT
Querius @20: Great example, I love it! In your case, without someone discovering the encoded Bible in the numbers, the design filter would not infer design. That is OK. The design inference has never claimed that it can identify all instances of design, particularly not those that are purposely made to be hidden or to mimic natural processes. However, it is also the case that with more context, a bit of sleuthing, or someone stumbling on the embedded code, design would then leap forth. In your example, we really have two things. The first is the undesigned natural process that produced the major image; without more knowledge of what is there, the design filter -- correctly -- identifies this as undesigned. The second is an embedded designed code (not an image, but a code embedded in an image); when just looking at the pixels, this code is not even seen, so by definition it won't be recognized as designed. Only once it is seen or identified can the filter be applied, at which point it will confirm design. So there are two things going on, and we need to carefully keep them separate and analyze them separately.

Eric Anderson
May 17, 2013 at 8:42 AM PDT
EL: Indeed, the glacier -- or the tree growing and then being cut -- resulted in a phenomenon exhibiting complexity, coming from a wide space of possibilities, W. However, in neither case is there any constraint that locks the outcomes to a simply, separately/independently describable narrow zone T. You can see that by examining a stack of plywood sheets or planks at your local shop: the patterns vary all over the place, and that makes but little difference. That would be sharply different from a cluster of evident sculptural portraits at a certain mountain in the US. And in the case of parts that have to fit and work together to achieve a function, such is even more evident. KF

kairosfocus
May 17, 2013 at 7:50 AM PDT
Now Lizzie is title-hunting. She doesn't realize that the link to the self-replicating peptide doesn't demonstrate self-replication. All that occurs is that ONE peptide bond is catalyzed. IOW, the experiment requires a pool of peptides, one 15 and one 17 amino acids long. Then the existing peptide facilitates the bonding of the two pieces. Lizzie's link: self-replicating peptide. How gullible are you, Lizzie?

Joe
May 17, 2013 at 5:55 AM PDT
Lizzie spews:
That glacier produced that pattern. That pattern is pretty cool, and complex, and specified (the glacier even more so than the photo). So that glacier “found” that pattern.
Nope, it is NOT specified. Obviously you have no idea what that means, to be specified. And no, Darwin doesn't get a break because the evidence is against him -- see Lenski.

Joe
May 17, 2013 at 5:42 AM PDT
Lizzie is totally clueless. She doesn't understand that science is not conducted via one photo. Scientists would go to the actual location and observe the actual formation before making any inferences, Lizzie. Are you really that daft?

Joe
May 17, 2013 at 5:38 AM PDT
In fairness to RTH, he couldn't recognize anything in action wrt science and investigation.

Joe
May 17, 2013 at 5:26 AM PDT
F/N: I see that despite explicit use of the explanatory filter in inferring not-designed, some over at TSZ -- RTH, this means you in particular -- are unable to recognise it in action. Sadly but unsurprisingly revealing. KF

kairosfocus
May 17, 2013 at 5:13 AM PDT
OK, all of that said, CSI is the wrong tool for determining the design of an object or a photo -- science is not done via one photo. IOW, Lizzie's "challenge" is totally bogus. CSI is a good tool to use when the thing in question is readily amenable to bits -- like a coded message or DNA. But when just given an object, then counterflow is the tool to use. No one would use CSI to solve a murder. We have to be able to use the proper tool for the job.

Joe
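On the "amenable to bits" point, the raw information content of the image itself is easy to bound from the numbers Dr. Liddle gives in the post (658 x 795 pixels, 256 gray levels). A hedged sketch (the skewed histogram below is a made-up stand-in, since the post does not publish the real one):

```python
# Upper bound on the Shannon information of Dr. Liddle's image:
# 658 x 795 gray-scale pixels, each taking one of 256 values,
# gives at most 8 bits per pixel. A skewed histogram (as she
# describes) pushes the per-pixel entropy below that maximum.
from math import log2

pixels = 658 * 795
print(pixels)        # 523110
print(pixels * 8)    # 4184880 bits, the equiprobable upper bound

def entropy_bits(hist):
    """Shannon entropy in bits/pixel from a histogram of value counts."""
    n = sum(hist)
    return -sum(c / n * log2(c / n) for c in hist if c)

# Hypothetical negatively skewed distribution (high values more common),
# standing in for the real histogram, which is not published.
skewed = [1] * 128 + [10] * 64 + [100] * 64
h = entropy_bits(skewed)
print(h)             # < 8.0 bits/pixel for this toy histogram
assert h < 8.0
```

Note that none of this is a CSI calculation: the entropy measures only probabilistic complexity under an independence assumption, and says nothing about whether the pattern matches a low-descriptive-complexity specification, which is the other half of Dembski's definition quoted above.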
May 17, 2013 at 4:48 AM PDT
By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause. (Ibid., p. 28)
Can anyone post anything from Dembski that supports what Felsenstein claimed and VJT supported? Anyone?

Joe
May 17, 2013 at 4:41 AM PDT