
CSI Revisited


Over at The Skeptical Zone, Dr. Elizabeth Liddle has put up a post for Uncommon Descent readers, entitled, A CSI Challenge (15 May 2013). She writes:

Here is a pattern:

It’s a gray-scale image, so it is just one 2D matrix. Here is a text file containing the matrix:

MysteryPhoto

I would like to know whether it has CSI or not.

The term complex specified information (or CSI) is defined by Intelligent Design advocates William Dembski and Jonathan Wells in their book, The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), as being equivalent to specified complexity (p. 311), which is then defined as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

In some comments on her latest post, Dr. Liddle tells readers more about her mysterious pattern:

There are 658 x 795 pixels in the image, i.e 523,110. Each one can take one of 256 values (0:255). Not all values are represented with equal probability, though. It’s a negatively skewed distribution, with higher values more prevalent than lower…

I want CSI not FSC or any of the other alphabet soup stuff…

Feel free to guess what it is. I shan’t say for a while ☺ …

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design…

Clearly it’s going to take a billion monkeys with pixel writers a heck of a long time before they come up with something as nice as my photo. But I’d like to compute just how long, to see if my pattern is designed…

tbh [To be honest – VJT], I think there are loads of ways of doing this, and some will give you a positive Design signal and some will not.

It all depends on p(T|H) [the probability of a specified pattern T occurring by chance, according to some chance hypothesis H – VJT] which is the thing that nobody every tells us how to calculate.

It would be interesting if someone at UD would have a go, though.

Looking at the image, I thought it bore some resemblance to contours (Chesil beach, perhaps?), but I’m probably hopelessly wrong in my guess. At any rate, I’d like to make a few short remarks.

(1) There is a vital distinction that needs to be kept in mind between a specified pattern’s being improbable as a configuration, and its being improbable as an outcome. The former does not necessarily imply the latter. If a pattern is composed of elements, then if we look at all possible arrangements or configurations of those constituent elements, it may be that only a very tiny proportion of these will contain the pattern in question. That makes it configurationally improbable. But that does not mean that the pattern is unlikely to ever arise: in other words, it would be unwarranted to infer that the appearance of the pattern in question is historically improbable, from its rarity as a possible configuration of its constituent elements.
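
To make the notion of configurational improbability concrete, here is a minimal sketch in Python. It uses the image dimensions Dr. Liddle has given us (658 x 795 pixels, 256 gray levels); the size of the "matching" region of configuration space is an invented figure, purely for illustration, since we do not yet know what her specification is.

```python
from math import log2

# Dr. Liddle's image: 658 x 795 pixels, each taking one of 256 gray levels.
pixels = 658 * 795                 # 523,110 pixels
total_log2 = pixels * log2(256)    # log2 of the number of possible images (~4.2 million bits)

# Suppose (purely for illustration) that only 2^1000 of those configurations
# would count as matching some specified pattern.
matching_log2 = 1000

# Configurational improbability of the pattern, expressed in bits:
config_improbability_bits = total_log2 - matching_log2
print(f"log2(total configurations)     = {total_log2:,.0f} bits")
print(f"configurational improbability ~ 2^-{config_improbability_bits:,.0f}")

# Note: this says nothing about historical improbability, i.e. how likely the
# pattern is to arise via the processes that actually operate in Nature.
```

The second number is enormous, but, as point (2) below explains, it only licenses a design inference if no relevant process is biased towards producing the specified pattern.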

(2) If, however, the various processes that are capable of generating the pattern in question contain no built-in biases in favor of this specified pattern arising – or more generally, no built-in biases in favor of any specified pattern arising – then we can legitimately infer that if a pattern is configurationally improbable, then its emergence over the course of time is correspondingly unlikely.

Unfortunately, the following remark by Elizabeth Liddle in her A CSI Challenge post seems to blur the distinction between configurational improbability and what Professor William Dembski and Dr. Jonathan Wells refer to in their book, The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), as originational improbability (or what I prefer to call historical improbability):

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design.

By itself, the configurational improbability of a pattern cannot tell us whether the pattern was designed. In order to assess the probability of obtaining that pattern at least once in the history of the universe, we need to look at the natural processes which are capable of generating that pattern.

(3) The “chance hypothesis” H that Professor Dembski discussed in his 2005 paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005), was not a “pure randomness” hypothesis. In his paper, he referred to it as “the chance hypothesis most naturally associated with this probabilistic set-up” (p. 7) and later declared, “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 18).
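
For readers who want the formal expression, the (context-independent) specified-complexity measure that paper works with is:

\[
\chi \;=\; -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\right]
\]

where φ_S(T) is the number of patterns whose descriptive complexity is no greater than that of T, and 10^120 is Dembski's bound on the number of bit operations available in the history of the observable universe. A design inference requires χ > 1, which is why everything turns on being able to estimate P(T|H) under the relevant chance hypothesis H.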

In a comment on Dr. Elizabeth Liddle’s post, A CSI Challenge, ID critic Professor Joe Felsenstein writes:

The interpretation that many of us made of CSI was that it was an independent assessment of whether natural processes could have produced the adaptation. And that Dembski was claiming a conservation law to show that natural processes could not produce CSI.

Even most pro-ID commenters at UD interpreted Dembski’s CSI that way. They were always claiming that CSI was something that could be independently evaluated without yet knowing what processes produced the pattern.

But now Dembski has clarified that CSI is not (and maybe never was) something you could assess independently of knowing the processes that produced the pattern. Which makes it mostly an afterthought, and not of great interest.

Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.” However, this is old news: Professor Dembski acknowledged as much back in 2005, in his paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005). Now, it is true that in his paper, Professor Dembski repeatedly referred to H as the chance hypothesis. But in view of his remark on page 18, that “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms,” I think it is reasonable to conclude that he was employing the word “chance” in its broad sense of “undirected,” rather than “purely random,” since Darwinian mechanisms are by definition non-random. (Note: when I say “undirected” in this post, I do not mean “lacking a telos, or built-in goal”; rather, I mean “lacking foresight, and hence not directed at any long-term goal.”)

I shall argue below that even if CSI cannot be assessed independently of knowing the processes that might have produced the pattern, it is still a useful and genuinely informative quantity, in many situations.

(4) We will definitely be unable to infer that a pattern was produced by Intelligent Design if:

(a) there is a very large (possibly infinite) number of undirected processes that might have produced the pattern;

(b) the chance of any one of these processes producing the pattern is astronomically low; and

(c) all of these processes are (roughly) equally probable.

What we then obtain is a discrete uniform distribution, which looks like this:

In the graph above, there are only five points, corresponding to five rival “chance hypotheses,” but what if we had 5,000 or 5,000,000 to consider, and they were all equally meritorious? In that case, our probability distribution would look more and more like this continuous uniform distribution:

The problem here is that taken singly, each “chance hypothesis” appears to be incapable of generating the pattern within a reasonable period of time: we’d have to wait for eons before we saw it arise. At the same time, taken together, the entire collection of “chance hypotheses” may well be perfectly capable of generating the pattern in question.

The moral of the story is that it is not enough to rule out this or that “chance hypothesis”; we have to rule out the entire ensemble of “chance hypotheses” before we can legitimately infer that a pattern is the result of Intelligent Design.

But how can we rule out all possible “chance hypotheses” for generating a pattern, when we haven’t had time to test them all? The answer is that if some “chance hypotheses” are much more probable than others, so that a few tower above all the rest, and the probabilities of the remaining chance hypotheses tend towards zero, then we may be able to estimate the probability of the entire ensemble of chance processes generating that pattern. And if this probability is so low that we would not expect to see the event realized even once in the entire history of the observable universe, then we could legitimately infer that the pattern was the product of Intelligent Design.
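
Here is a minimal sketch of that reasoning in Python. The hypothesis weights and the per-hypothesis probabilities below are invented purely for illustration; the point is simply how the probability contributed by the whole ensemble is assembled and then compared with a probability bound.

```python
from math import log2

# A hypothetical ensemble of chance hypotheses H_i for generating a pattern T.
# weight: prior plausibility of H_i;  p: P(T | H_i).  All values are invented.
ensemble = [
    (0.60, 1e-180),
    (0.30, 1e-175),
    (0.08, 1e-160),
    (0.02, 1e-155),
]

# Probability that T arises under *some* member of the ensemble
# (law of total probability).
p_total = sum(weight * p for weight, p in ensemble)

# Dembski-style bound on the probabilistic resources of the observable universe.
UNIVERSAL_BOUND = 1e-150

print(f"P(T | ensemble) ~ {p_total:.3e} (~2^{log2(p_total):.0f})")
print("Below the universal bound?", p_total < UNIVERSAL_BOUND)
```

With these invented numbers the ensemble probability still falls far below the bound; with different numbers it need not, which is exactly why the shape of the ensemble matters.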

(5) In particular, if we suppose that the “chance hypotheses” which purport to explain how a pattern might have arisen in the absence of Intelligent Design follow a power law distribution, it is possible to rule out the entire ensemble of “chance” hypotheses as an inadequate explanation of that pattern. In the case of a power law distribution, we need only focus on the top few contenders, for reasons that will soon be readily apparent. Here’s what a discrete power law distribution looks like:

The graph above depicts various Zipfian distributions, which are discrete power law probability distributions. The frequency of words in the English language follows this kind of distribution; little words like “the,” “of” and “and” dominate.

And here’s what a continuous power law distribution looks like:

An example of a power-law graph, being used to demonstrate ranking of popularity (e.g. of actors). To the right is the long tail of insignificant individuals (e.g. millions of largely unknown aspiring actors), and to the left are the few individuals that dominate (e.g. the top 100 Hollywood movie stars).

This phenomenon whereby a few individuals dominate the rest is also known as the 80–20 rule, or the Pareto principle. It is commonly expressed in the adage: “80% of your sales come from 20% of your clients.” Applying this principle to “chance hypotheses” for explaining a pattern in the natural sciences, we see that there’s no need to evaluate each and every chance hypothesis that might explain the pattern; we need only look at the leading contenders, and if we notice the probabilities tapering off in a way that conforms to the 80-20 rule, we can calculate the overall probability that the entire set of hypotheses is capable of explaining the pattern in question.
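
Here is a small sketch of why a power-law ensemble lets us focus on the leading contenders. The Zipf exponent and the size of the ensemble below are arbitrary choices for illustration; the pattern of the output is what matters.

```python
# How much of the total prior weight do the leading hypotheses carry when the
# ensemble of chance hypotheses is Zipf-distributed (a discrete power law)?
N = 1_000_000      # number of rival chance hypotheses (arbitrary)
s = 1.5            # Zipf exponent (arbitrary; s > 1 gives a convergent tail)

weights = [1 / k**s for k in range(1, N + 1)]
total = sum(weights)

for top in (1, 5, 20, 100):
    share = sum(weights[:top]) / total
    print(f"top {top:>3} hypotheses carry {share:.1%} of the ensemble's weight")

# With a heavy-tailed but convergent distribution like this, bounding the
# probability contributed by the leading hypotheses bounds almost all of the
# probability contributed by the entire ensemble.
```

With this particular exponent the top hundred hypotheses already carry roughly ninety per cent of the ensemble's weight out of a million contenders; the 80-20 pattern described above is the same phenomenon.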

Is the situation I have described a rare or anomalous one? Not at all. Very often, when scientists discover some unusual pattern in Nature, and try to evaluate the likelihood of various mechanisms for generating that pattern, they find that a handful of mechanisms tend to dominate the rest.

The Chaos Computer Club used a model of the monolith in Arthur C. Clarke’s novel 2001, at the Hackers at Large camp site. Image courtesy of Wikipedia.

(6) We can now see how the astronauts were immediately able to infer that the Monolith on the moon in the movie 2001 (based on Arthur C. Clarke’s novel) must have been designed. The monolith in the story was a black, extremely flat, non-reflective rectangular solid whose dimensions were in the precise ratio of 1 : 4 : 9 (the squares of the first three integers). The only plausible non-intelligent causes of a black monolith being on the Moon can be classified into two broad categories: exogenous (it arrived there as a result of some outside event – i.e. something falling out of the sky, such as a meteorite or asteroid) and endogenous (some process occurring on or beneath the moon’s surface generated it – e.g. lunar volcanism, or perhaps the action of wind and water in a bygone age when the moon may have had a thin atmosphere).

It doesn’t take much mental computing to see that neither process could plausibly generate a monument of such precise dimensions, in the ratio of 1 : 4 : 9. To see what Nature can generate by comparison, have a look at these red basaltic prisms from the Giant’s Causeway in Northern Ireland:

In short: in situations where scientists can ascertain that there are only a few promising hypotheses for explaining a pattern in Nature, legitimate design inferences can be made.

The underwater formation or ruin called “The Turtle” at Yonaguni, Ryukyu islands. Photo courtesy of Masahiro Kaji and Wikipedia.

(7) We can now see why the Yonaguni Monument continues to attract such spirited controversy. Some experts, such as Masaaki Kimura of the University of the Ryukyus, claim that “The largest structure looks like a complicated, monolithic, stepped pyramid that rises from a depth of 25 meters.” Certain features of the Monument, such as a 5 meter-wide ledge that encircles the base of the formation on three sides, a stone column about 7 meters tall, a straight wall 10 meters long, and a triangular depression with two large holes at its edge, are often cited as unmistakable evidence of human origin. There have even been claims of mysterious writing found at the underwater site. Other experts, such as Robert Schoch, a professor of science and mathematics at Boston University, insist that the straight edges in the underwater structure are geological features. “The first time I dived there, I knew it was not artificial,” Schoch said in an interview with National Geographic. “It’s not as regular as many people claim, and the right angles and symmetry don’t add up in many places.” There is an excellent article about the Monument by Brian Dunning at Skeptoid here.

The real problem here, as I see it, is that the dimensions of the relevant features of the Yonaguni Monument haven’t yet been measured and described in a rigorously mathematical fashion. For that reason, we don’t know whether it falls closer to the “Giant’s Causeway” end of the “design spectrum,” or the “Moon Monolith” end. In the absence of a large number of man-made monuments and natural monoliths that we can compare it to, our naive and untutored reaction to the Yonaguni Monument is one of perplexity: we don’t know what to think – although I’d be inclined to bet against its having been designed. What we need is more information.

(8) Turning now to Dr. Elizabeth Liddle’s picture, there are three good reasons why we cannot determine how much CSI it contains.

First, Dr. Liddle is declining to tell us what the specified pattern is, for the time being. Until she does, we have no way of knowing for sure whether there is a pattern or not, short of spotting it – which might take a very long time. (Some patterns, like the Champernowne sequence in Professor Dembski’s 2005 essay, are hard to discern. Others, like the first 100 primes, are relatively easy.)
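
(As an aside, here is a short sketch showing why a pattern like the Champernowne sequence counts as "easily described" despite looking irregular: the whole string is generated by a program only a few lines long, which is the sense in which its descriptive complexity is low. The same goes for the first 100 primes.)

```python
def champernowne_binary(n_bits):
    """First n_bits of the binary Champernowne sequence: the binary numerals
    1, 10, 11, 100, 101, ... written one after another."""
    digits = []
    k = 1
    while sum(len(s) for s in digits) < n_bits:
        digits.append(format(k, "b"))
        k += 1
    return "".join(digits)[:n_bits]

def first_primes(count):
    """A simple (inefficient) way to list the first `count` primes."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(champernowne_binary(64))   # looks irregular, yet has a tiny description
print(first_primes(10))          # an 'easy' specification by comparison
```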

Second, we have no idea what kind of processes were actually used by Dr. Liddle to generate the picture. We don’t even know what medium it naturally occurs in (I’m assuming here that it exists somewhere out there in the real world). Is it sand? hilly land? tree bark? We don’t know. Hence we are unable to compute P(T|H), or the probability of the pattern arising according to some chance hypothesis, as we can’t even formulate a “chance hypothesis” H in the first place.

Finally, we don’t know what other kinds of natural processes could have been used to generate the pattern (if there is one), as we don’t know what the pattern is in the first place, and we don’t know where in Nature it can be found. Hence, we are unable to formulate a set of rival “chance hypotheses,” and as a result, we have no idea what the probability distribution of the ensemble of “chance hypotheses” looks like.

In short: there are too many unknowns to calculate the CSI in Dr. Liddle’s example. A few more hints might be in order.

(9) In the case of proteins, on the other hand, the pattern is not mathematical (e.g. a sequence of numbers) but functional: proteins are long strings of amino acids that actually manage to fold up, and that perform some useful biological role inside the cell. Given this knowledge, scientists can formulate hypotheses regarding the most likely processes on the early Earth for assembling amino acid strings. If a few of these hypotheses stand out, scientists can safely ignore the rest. Thus the CSI in a protein should be straightforwardly computable.
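
As a rough sketch of how such a computation would go, consider the following (the "functional information" measure associated with Szostak). The numbers are placeholders, not measured values; in practice the fraction of functional sequences would have to come from experimental and statistical work of the kind Dr. Axe and others have undertaken.

```python
from math import log2

# Functional information in the Szostak sense:
#   I(function) = -log2( fraction of sequences that perform the function )
# The fraction below is an ASSUMED placeholder, not a measured value.
sequence_length = 150                                 # residues (illustrative)
raw_space_bits = sequence_length * log2(20)           # ~648 bits of raw sequence space

assumed_functional_fraction = 1e-40                   # placeholder only
functional_bits = -log2(assumed_functional_fraction)  # ~133 bits for this placeholder

print(f"raw sequence space : ~{raw_space_bits:.0f} bits")
print(f"functional info    : ~{functional_bits:.0f} bits (for the assumed fraction)")

# A design inference in Dembski's framework would further require comparing a
# figure like this against the probabilistic resources of the relevant chance
# hypotheses, as discussed in sections (4) and (5) above.
```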

I have cited the work of Dr. Kozulic and Dr. Douglas Axe in recent posts of mine (see here, here and here). Suffice it to say that the authors’ conclusion that the proteins we find in Nature are the product of Intelligent Design is not an “Argument from Incredulity” but an argument based on solid mathematics, applied to the most plausible “chance hypotheses” for generating a protein. And to those who object that proteins might have come from some smaller replicator, I say: that’s not a mathematical “might” but a mere epistemic one (as in “There might, for all we know, be fairies”). Meanwhile, the onus is on Darwinists to find such a replicator.

(10) Finally, Professor Felsenstein’s claim in a recent post that “Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve” with their recent paper on the law of conservation of information, is a specious one, as it rests on a misunderstanding of Intelligent Design. I’ll say more about that in a forthcoming post.

Recommended Reading

Specification: The Pattern That Signifies Intelligence by William A. Dembski (version 1.22, 15 August 2005).

The Conservation of Information: Measuring the Cost of Successful Search by William A. Dembski (version 1.1, 6 May 2006). Also published in IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(5) (September 2009): 1051-1061.

Conservation of Information Made Simple (28 August 2012) by William A. Dembski.

Before They’ve Even Seen Stephen Meyer’s New Book, Darwinists Waste No Time in Criticizing Darwin’s Doubt (4 April 2013) by William A. Dembski.

Does CSI enable us to detect Design? A reply to William Dembski (7 April 2013) by Joe Felsenstein at Panda’s Thumb.

NEWS FLASH: Dembski’s CSI caught in the act (14 April 2011) by kairosfocus at Uncommon Descent

Is Darwinism a better explanation of life than Intelligent Design? (14 May 2013) by Elizabeth Liddle at The Skeptical Zone.

A CSI Challenge (15 May 2013) by Elizabeth Liddle at The Skeptical Zone.

Comments
And, taking a known enzyme family we can compare rates within the family, but also look at what happens as we make random substitutions or deletions etc, to see how function falls off, until we have effectively no function . . .
Sure. And notice that we can't even talk rationally about the effect of substitutions without mentioning and first having clear in our mind the relevant function, which is an observed physical phenomenon, not a mathematical abstraction. That function is the specification, not the various comparative reaction rates that we can run as an ancillary exercise. Eric Anderson
EA: standard scientific measurement units are built up in a traceable chain from the seven base SI units: mass, length, time, amount of substance [a measure of number of particles counted in moles: 6.022 * 10^23 particles . . . ], current, temperature from Abs Zero, luminous intensity. IIRC, all other units are constructed via equations going back to base quantities. Chem eqns work in molecules, which scales up to moles. Time is based on standard oscillations of one form or another. Rates are in effect dQ/dt, which is dependent on inverse time, per unit time. And, taking a known enzyme family we can compare rates within the family, but also look at what happens as we make random substitutions or deletions etc, to see how function falls off, until we have effectively no function -- maybe because folding fails or the pocket for the rxn is no longer effective, etc. That would, in principle, give us a picture of an island of function. Does that help? KF kairosfocus
Phinehas: 1) Function is easily definable in machines, which are objects that perform some task that we can recognize as useful in a context. Biological machines are machines, and therefore the simple concept of function is perfectly apt for them. An outboard motor that does not work is simply a machine that does not work. We can maybe recognize its potentiality (if we fix it, it can work), but the function, either potential or manifest, is always our reference. A mutated protein that does not work is similar to that. Darwinists love to speculate that if we change a fundamental amino acid in a functional protein, so that it no longer works, and then we change back the mutation, the protein acquires dFSCI with only one mutation. That is not true. The function is already potentially there, almost. We just fix the wrong amino acid, and we get the full functionality, in the same way as we get the function if we fix the motor. 2) Function, in the machine sense, is not the only expression of purpose in design. For example, in language I would speak of meaning more than function. Meaning, however, can be "measured" functionally (for example, we can measure the capacity of some phrase to convey specific information to a group of readers). If you refer to a painting, obviously, it becomes even more complex. We could measure functionally the capacity of the painting to convey specific information on its subject, but the beauty, which is certainly an aspect of purpose, is certainly more difficult to "measure". Luckily, for the biological context, we don't really need all that. Machine functionality is more than enough for our purposes. There is certainly a lot of beauty in the biological world, and I believe it is very much evidence of design, but frankly I prefer to fight darwinists with an enzyme, where I can define function in a very objective way, and define specific quantitative methods to assess its presence or absence. gpuccio
The only objections I have with "functional specification" as a label is that it could possibly cloud the issue in some cases. 1) I haven't been around a lot of outboard motors, but anecdotally, their functional nature is not what I'd call a certainty. But a non-functioning outboard motor is nearly as recognizable as an artifact of design as one that functions. Still, no one designs an outboard motor to fail, so I suppose it is the specification that must be functional and not the artifact itself. Even so, this could generate a bit of confusion. 2) Going further, it seems a stretch to call something like a work of art "functional," even though most clearly exhibit design. I'm not saying it is impossible to imagine that a work of art "functions" to provoke some emotion or thought, but this does seem categorically different to how an outboard motor "functions." In both of these cases, it seems to me that the core issue is purpose more so than function. It is the purpose behind the crafting of even the broken outboard motor that signals design. And it is the purpose behind a work of art that does the same. Unfortunately, purpose seems even more fuzzy than function when it comes to producing a mathematically rigorous inference to design. Further, purpose and design seem so closely linked as concepts that stating one in terms of the other might start to sound a bit tautological. I'm guessing this is why we stick with "functional" despite its evident shortcomings? Phinehas
kf @108: I fear you are not understanding my point. Or, more likely, I'm not explaining myself very well, in which case I apologize. At the risk of beating a dead horse, let me try to articulate my point about the enzymes. I completely understand that reaction rates can be measured. And they can be mapped. And they can be compared. But in doing so, they have to be measured against something; they have to be compared against something; they have to be mapped based on something. And what is that something? When we are dealing with a specification we have to talk about the actual specification. What are the specifics? As I asked above in #102, are we just looking for the fastest reaction rate that could exist in the known universe under ideal conditions, or some other reaction rate that is more optimal in our particular biological system? Until we identify our specification, there is nothing to calculate against to compare and map reaction rates. Furthermore, if enzyme A has a reaction rate of X and enzyme B has a reaction rate of Y, and assuming both are sufficiently complex, would we really take the position that A is designed (because it is sufficiently "specified") and B is not designed (because it is less "specified" -- in your terms has a slower reaction rate)? Of course not. They both have an identifiable function: A reacts at rate X and B reacts at rate Y. They are both sufficiently complex. They both are designed. The fact that A does its function better than B does A's function, or that B does its function better than A does B's function doesn't have any impact on whether each enzyme constitutes a specification. And you can't calculate that based on math alone. You have to look at each enzyme on its own merits. Once you've done so, then sure, you can compare enzymes and run comparative calculations. But that isn't to find the specification in the first place; it is just a comparative exercise after the specification has already been identified. I've admitted that there may be some rare exceptions (I'm not sure what they might be at this point) in which a mathematical formula can identify up front in an objective way a specification, but as a general matter there is not a formula or a calculation we can throw out at the world that will come back and identify for us the many things that are specified. Rather, we look at a system, we identify a function/meaning/specification. We do it from our experience and ability to identify function, meaning, goal-driven activities, engineering acumen, and so on. Then at that point -- when we have adequately defined the specification -- we can start ascertaining whether the complexity related to that specification is sufficient to exclude chance and necessity. In other words, if I were to describe the design inference to someone I wouldn't say that it is simply based on two mathematical calculations (one for specification; one for complexity). Rather, it is based on (i) a real-world assessment of function/meaning/specification, using our logic, experience, understanding of purpose-driven outcomes, plus (ii) a mathematical calculation of complexity. Eric Anderson
gpuccio @109:
I am not sure I understand Dembski’s attitude towards functional specification, or specification at all. He is probably worried about the subjective aspect of function definition, and is searching some totally objective, mathematical definition of specification.
I'm concerned as well that he may be going down this path. I don't think it is a fruitful approach and will likely just confuse things. Eric Anderson
Eric Anderson: I believe we are in perfect agreement. You say: "I argue that this specification is recognized due to our ability to recognize function, meaning, and purpose-driven outcomes, as well as some logic and experience that are brought to the table. I don’t think we recognize a specification because we’ve done some calculation to determine whether the thing is specified enough." That is exactly the point. Functional specification is a categorical, binary value. It is either present or absent. But its assessment is based on some objective definition and quantitative rule, such as a minimal threshold of activity for an enzyme. So, specification is both subjective (it requires recognition by a conscious observer) and objective (the observer who recognizes the function must objectively define it and the criteria for its assessment, so that anybody can verify the binary value of its presence or absence in an object, according to the given definition). I am not sure I understand Dembski's attitude towards functional specification, or specification at all. He is probably worried about the subjective aspect of function definition, and is searching some totally objective, mathematical definition of specification. I am not a mathematician, so I cannot comment on the technical aspects of that problem. But I believe that the definition of functional specification I have given here is completely empirical: it can be applied objectively, and it is a proper basis for design detection. Moreover, as purpose is the essence of design, that definition relies on the same fundamental quality that defines the target to be detected: conscious recognition of purpose. So, I believe that the empirical definition of functional specification is also cognitively consistent and satisfying. gpuccio
EA: Reaction rates and rate constants are routinely measured in Chemistry. It is a commonplace that as a result of such measurements, enzymes are known to accelerate attainments of equilibrium by orders of magnitude, sometimes from essentially zero speed to relevant functional outcomes in a reasonable time for a living cell. No need for relative metrics, we can use standard reaction kinetics and metrics for such cases. of course, we can also compare and take ratios to see how relatively effective something is, but that is secondary. KF kairosfocus
gpuccio @99: Thanks for the quick review of the issues from the paper. Very helpful. @ 105: If I'm understanding you, I think I am essentially in agreement. The specification is recognized and defined as a result of a function or a property of the item in question. I argue that this specification is recognized due to our ability to recognize function, meaning, and purpose-driven outcomes, as well as some logic and experience that are brought to the table. I don't think we recognize a specification because we've done some calculation to determine whether the thing is specified enough. I agree with you that once a specification is identified and defined, then we can bring a complexity calculation to bear to see if we can rule out chance (typically using something like the universal probability bound). ----- Incidentally, your example of the tablet is a good one. One of the problems Bill Dembski got into (in my opinion) is that in some of his examples he was sloppy with how he defined the specification. As a result, it looked like in some of his examples that natural forces could easily account for the item in question (like, say, your example A of a paperweight). Bill's particular examples I'm thinking of were a city and a stool. My response was in the below link. It is terribly long and somewhat dry for most people, so I hesitate to inflict it on anyone, but might be worth checking if you have an hour to kill on a plane sometime: http://www.iscid.org/papers/Anderson_ICReduced_092904.pdf Eric Anderson
I'm sure that this is probably very late to the discussion, but isn't it tiresome that EL is still challenging the concept of CSI? I remember exchanges on the boards here at UD where EL and assorted anti-ID partisans were challenging the very notion that DNA sequences could even properly be described as digital information!! Has she ever acknowledged being proven wrong on that score? If not, what intellectual authority can she claim to possess? It seems that she will happily advocate any view that opposes ID. Optimus
Eric Anderson (#102): I have not followed your discussion in detail, but I would like to comment on your last post. Functional specification starts with a conscious observer that recognizes some function in a material object and defines it objectively, so that anyone can measure that function in any possible object and assess it as present or absent. In that sense, the observer must also provide, in his definition, a minimal threshold for the function to be assessed as present. Only then can a computation of the complexity necessary to generate that function, as defined, be performed. The resulting functional complexity is relative to the function as defined, not absolute. I often give the example of a tablet on a desktop. I observe it and I define for it two different functions: a) acting as a desktop paperweight, and b) performing a specific list of computing actions. The object is the same, but I am defining two very different functions. The complexity needed to provide each of them is completely different (very low for a, very high for b). The same is true for an enzyme. I observe that it can accelerate some reaction, and I define that function as the capacity to provide at least such acceleration in specific lab conditions. Then I can compute the functional complexity tied to that definition. So, I can compute different functional complexities tied to different minimal levels of activity. I hope that helps. gpuccio
Phinehas @103, Good points, especially with regard to compressing a whole class of objects into a single evocative label. However I might be inclined to call that description underdetermined, to use vjtorley's vocabulary, because there would be nothing in that description which would specify how a certain piano is to be constructed; it's not specific enough. So if we consider a specification to be a description by which an object could be reproduced, or a prescription for reproducing it by way of a context, such as a computer program that generates a certain output, then we are dealing with a different "space" of complexity. A set of blueprints, a materials list, a construction plan, and a construction schedule, might constitute a prescription for constructing a building. But that specification is not as complex as the building itself. And the specification would be amenable to being digitized and compressed. So I think that complexity crops up not just in the thing which is specified, but in the specification as well, albeit at different levels. Perhaps KF could give his opinion on how specification relates to the thing specified. Chance Ratcliff
InVivoVeritas: I'm not sure you haven't slipped back into describing the complexity of a piano instead of its specification. The simplest and shortest specification of a piano is, "a piano." There are an infinite number of complex arrangements of matter that will fit this specification, and we can be confident that every single one of them is designed. This is the power of language and functional concepts: that all of the complexity of a finely tuned instrument like a piano can be compressed into a single logos that when invoked can evoke both your complex "specification" and a million others. It seems to me that it might be this very compressibility of complexity that is a powerful part of the design inference. The potency in looking at what is represented at the top of this page and realizing, "Hey, that's basically an outboard motor," should not be underestimated. But trying to calculate specificity in any sort of formal way seems to be a very slippery thing. Phinehas
kf @94: Sure, the enzymes can be compared and scaled; I've acknowledged that. But against what are they being scaled and compared? Against the fastest possible reaction rate that could exist in the known universe under ideal conditions? Against a non-catalyzed rate? Against a median rate? Against a rate that would be optimal for a particular result in a particular biological context? We can't even start calculating anything until after we have decided -- on the basis of function, logic, engineering analysis and experience; not on the basis of math -- that X is the ideal 100% specification. Then we can rate other enzymes against this ideal 100% specification. Some too fast, others too slow, some requiring too much material, some not faithful enough, and on and on. But even then, are we saying that an enzyme that matches our criteria by, say, 90% is specified or not specified? What about one that matches 80% or 50%? I think we need to distinguish between comparative calculations (which can certainly be done once we have agreed upon various criteria) and a calculation that would in theory allow us to determine whether X is "specified" (which I question whether it can be done, except perhaps in rare cases). Eric Anderson
Hi gpuccio, Thank you very much for your comments on Miller's response. Greatly appreciated. Thanks once again. vjtorley
InVivoVeritas @97, well said. Looking at it from a natural language perspective is illuminating, especially when considering that language (logos) is a fundamental property of intelligence. Chance Ratcliff
VJ (#84): Unfortunately, my time is very limited at present, and I don't feel like analyzing Miller's response to your essay. I have already discussed many of these things with Miller himself and other TSZers, some time ago, and in great detail. I think, however, that I can give a few brief comments about the papers that he quoted and you linked. The first paper is scarcely significant. It just shows a simple enzymatic activity of a short RNA sequence. And so? Let's go to the second. In essence, what it says is: the RNA world is a bad theory, but a protein first world, or other alternatives, are even worse. Obviously, the authors forget to add: except for a design theory of OOL, which is very good! :) If the RNA world theory is the best neo darwinism can offer to explain OOL, then I am very, very happy that I am on the other side. Just a couple of words on the RNA world theory: it would never qualify as a credible scientific theory, if it were not the best non design theory they have. There is absolutely no evidence to justify even the hypothesis of an RNA world. Just think: there is no trace in the whole world of autonomous living beings based on RNA only. They have never been observed, and there is no indirect trace of their existence. If we want to stick to facts, what can we say about OOL? It is very simple. In the beginning life was not present on our planet. At some point in time (probably very early) life appears. What life? If we stick to facts, and do not let our imagination wander in self-created worlds that never existed, there is only one credible answer: prokaryotes, with DNA, RNA, and proteins, and a very structured core of fundamental functions that have survived, almost identical, up to now, including the functions of DNA duplication, DNA transcription, mRNA translation, the genetic code, and so on. And hundreds of complex proteins, perfectly working. IOWs, the first living being of which we have at least indirect evidence on our planet is LUCA. And LUCA, as far as we can know from facts, was essentially a prokaryote. This is the simple truth. All the rest is imagination. So, could LUCA emerge as such a complex being? Yes, if it was designed. Let's go to the last paper, about proteins. First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems: a) "We designed and constructed a collection of artificial genes encoding approximately 1.5×10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins" b) "Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein." c) "We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective auxotrophs, but were unable to detect activity that was reproducibly above the controls." And now, my comments: a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hard-won knowledge about the biochemical properties of proteins. b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is especially big, because we know nothing of how the "evolved" proteins work to allow the minimal rescue of function in the complex system of E. coli (see next point). c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don't know what they do, and how they act at biochemical level. IOWs, we know of no "local function" for the sequences, and have no idea of the functional complexity of the "local function" that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived. The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remains unproved, because no biochemical activity of that kind could be shown in vitro for them. These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves. gpuccio
Hi kairosfocus, On reflection, I'd agree with your point about it being sufficient to establish the existence of local fine tuning. Appealing to a higher level won't make the problem go away, of course, because a multiverse generator would itself have to be fine-tuned, as Dr. Robin Collins has argued. InVivoVeritas, I completely agree with your comment about language being the hallmark of intelligent designers, which is why it's appropriate to use text as the primary vehicle for capturing functional specification (FCSI). Food for thought. vjtorley
Eric Anderson @89:
Also, there are many examples of specification — perhaps the vast majority of them — that don’t lend themselves to calculation. Does a piano have a specification? Sure. How do we calculate it? What about a phone or a car or an airplane? Is it possible even in principle to calculate the amount of specification in most designed things?
Chance Ratcliff:
If I’m not misunderstanding you, it should be possible to specify how to reproduce such a thing. A while back I toyed with the idea of a programming language that would specify items for manufacture in a theoretical manufacturing environment. It’s probably not a foolproof way to measure specification, but it would provide a way to specify a thing like a piano for manufacture, and would result in a digital program that could be used to assess complexity. With the advent of 3D printing, this is easier to imagine nowadays.
I tend to agree with Chance Ratcliff. A specification for a piano consists – as suggested – in a thorough, precise, detailed description of how to build, construct and assemble all its parts – in the proper working relationships – such that it will manifest the function/purpose for which it was designed: to produce certain sounds when its keys and pedals are pressed by whatever operator. We should note that this detailed description should comprise – among other things – the precise specification of all materials used to construct the piano – including the steel strings, bronze plaque, ivory keys and – if we were to go to extreme – how these materials and parts are produced/manufactured – or at least procured. Now let's assume that the whole above description is expressed in English natural language. That may assume that if any manufacturing and assembly diagrams are needed, there is a professional method to translate them unequivocally into plain English. Then one conventional (preliminary) way to measure the FCSI of the piano is to count the number of characters used by this textual description. Using the same method we can create an English natural language description (equivalent to its FCSI) for a 2013 Ford Mustang in a (huge) text file. The size of the text file will represent a measurement of the FCSI for our Ford Mustang. The sizes of the "descriptor" file for the piano and the descriptor file for the Ford will give an idea how their FCSI measurements compare. I would suggest that since the manifest ability of us humans to be intelligent designers is based on our ability to use language (logos), then this can be the justification to use text as the primary vehicle for capturing functional specification (FCSI) and to measure it. For specialized domains (programming, hardware design, music composition, knitting patterns, mechanical drawings, etc.) there might be specialized languages or formalisms to precisely define the specific actions, sequences, interactions, that lead to achieving the intended goal of that domain activity. Let's touch briefly on what it would mean to create an accurate FCSI for an iPhone. Assuming that all software for the iPhone was written using 3 programming languages: Objective C, HTML and SQL, then the totality of all software (source files) written in the three languages above is considered an important part of the FCSI "archive" for the iPhone. However, considering that the manufactured iPhone also needs hardware, integrated circuits, printed circuit boards, sensors, codecs, battery, wiring, LCD screen, plastic or metal enclosures, then the FCSI archive must comprise thorough, precise descriptions for the design, manufacturing, assembly and test of all these parts and components. A complete iPhone FCSI archive should also comprise thorough descriptions of all services on which the iPhone relies, like CDMA, WiFi, networking and communication protocols. This way of stuffing into a FCSI textual archive all details at all levels of a product seems to be perfectly legitimate, since all design efforts and results with all their dependencies make up together the intelligent design of that product. Any omission of a part or aspect of the design from the FCSI 'archive' is equivalent to removal of a necessary part from a system and thus a failure in providing the envisioned functionality (see Behe's irreducibly complex system). The next interesting exercise would be to sketch – along the same lines – what should be in the FCSI 'archive' for a single cell organism. I anticipate that such an exercise will show from a new perspective why natural processes and random events have no chance of synthesising 'ex nihilo' such a complex FCSI 'archive' – and thus no chance of creating life. InVivoVeritas
KF @95, yes I think so. Perhaps nodes and arcs would provide less ambiguity, and hence be more appropriate. I'm not sure. But it's true that in my model at #93, there is likely more than one program which could account for the output. However I think we're wanting to look for the simplest possible program for any given output. Would this be provable? That's another question. It may not be, and that might be the crux of Eric Anderson's issues with quantification of specificity. Regardless, I think it's progress, and if not, it's thought provoking. Chance Ratcliff
CR: A nodes and arcs network describes configuration. Think of it as a wiring diagram or exploded view. Such allows us to reduce organisation to description, i.e. set of structured strings, sim. to AutoCAD etc. Specification comes from degree of variability in the near neighbourhood. Tolerance for component variability, orientation and connexion. Also, this can be set up through modest low order digit noise bombing. BTW the network pattern is also a guide to assembly. One of the biggest points with cell structures is self assembly. That for the flagellum is a masterpiece of contrivance in itself. KF kairosfocus
EA: enzymes promote rxn rates under conditions, so activity can be measured on that basis, and compared. KF kairosfocus
Food for thought. Imagine a cube wrought of 1024^3 smaller cubes. Now imagine that each little cube could be any one of an arbitrary number of materials, say 64 just for kicks. This gives us a 3D block object with a resolution of 1024^3 cubes, each of which could be any one of 64 different materials. The config space for this output is astronomical. Now we might imagine different ways to arrive at specific configuration. For example, a sculptor could readily fashion such an object like he might fashion clay; only for each little cube, he could choose one of 64 different materials. Here we have a theoretical object which has a resolution sufficient for a great number of real objects. And we can increase or decrease the resolution arbitrarily. We could also imagine that an algorithm and a data set should be able to output objects in the cube's config space. If the complexity of the program for producing a given object is less than the complexity of the object in the cube's configuration space, then we have a candidate for the specification; and the specification's complexity could be cashed out in terms of the program's complexity. There are undoubtedly other issues to consider, and I haven't thought through it sufficiently, but that's a start, perhaps. Chance Ratcliff
Eric, well it's definitely calculating the complexity of the specification. I think the complexity of the output is in another config space. Chance Ratcliff
. . . and would result in a digital program that could be used to assess complexity.
We're not talking about assessing complexity. We're talking about assessing specification. I think there is a tendency to view the latter as collapsing into the former, but it doesn't -- or at least mustn't, if we are to take the ability to detect design seriously. I think in some cases it might be possible to assess, measure, calculate specification. But I'm having a hard time wrapping my head around when that would be applicable and how it would be accomplished. So far the examples people have given, on closer inspection, I believe are really examples of calculating complexity. I need to think through kf's comments a bit more . . . yours too . . . Eric Anderson
Eric,
"Does a piano have a specification? Sure. How do we calculate it?"
If I'm not misunderstanding you, it should be possible to specify how to reproduce such a thing. A while back I toyed with the idea of a programming language that would specify items for manufacture in a theoretical manufacturing environment. It's probably not a foolproof way to measure specification, but it would provide a way to specify a thing like a piano for manufacture, and would result in a digital program that could be used to assess complexity. With the advent of 3D printing, this is easier to imagine nowadays. Chance Ratcliff
kf @72: Thanks for your thoughts.
Also, degree of function of say an enzyme can be measured on a suitable scale and correlated to the particular string config of the underlying entity.
I'm not sure I'm on board with this example. I agree we can take an enzyme function we have identified (or even a simple protein function like binding affinity) and assign it a 1 value. Then we could map other enzymes or proteins and calculate their deviation from 1 and end up with some sort of scale of the various enzymes/proteins. But that is not really a calculation of the underlying specification itself. It is just a comparative calculation of the various enzymes/proteins, based on our assignment of a '1' value to an ideal function. And note that faster catalytic action or more binding affinity does not necessarily mean better function in a particular instance. So the function/specification has to be determined largely independent of any calculation and then assigned an arbitrary value. Once that is done then, yes, we can calculate any comparative deviation, or amount, or percentage of specification a particular enzyme/protein might exhibit with respect to the identified function/specification. Also, there are many examples of specification -- perhaps the vast majority of them -- that don't lend themselves to calculation. Does a piano have a specification? Sure. How do we calculate it? What about a phone or a car or an airplane? Is it possible even in principle to calculate the amount of specification in most designed things? Eric Anderson
PS: Insofar as the framework of cosmology can be represented as a nodes-arcs network of interacting mathematical entities, variables, relationships etc, we can generate an information estimate and could see how perturbation disturbs. But we already know, very fine tuned. kairosfocus
VJT: Pardon, but the cosmological design inference is not a design inference per the filter. This pivots on the issue of scope, where we may see a multiverse suggested with 10^500 sub-cosmi. That is, the speculative scope -- and there is simply no empirical observational warrant here, this is phil not sci -- is indefinitely huge. That calls for a different look. What we are dealing with instead is primarily fine tuning, where the observed cosmos sits at a LOCALLY deeply isolated operating point. This means that we face the lone fly on a portion of wall -- an attractive target -- swatted by a bullet argument of John Leslie. It matters not if elsewhere parts of the immense wall may be carpeted with flies. That is, LOCAL fine tuning is sufficiently remarkable. The concept of functionally specific complex organisation applies, where our cosmos bears signs of being a put-up job, as Hoyle put it. KF kairosfocus
Hi everyone, I just had another thought. Obviously, in order for cosmological Intelligent Design arguments to work as well as biological ID arguments, we need a definition of complex specified information that is broad enough to encompass not only the functional information we find in biological organisms, but also the information contained in the life-permitting properties of the cosmos. Since these designed properties of the cosmos can themselves be described as having an ultimate purpose which can be cashed out in functional terms (namely, to permit the generation of living, sentient and sapient beings) they can therefore be said to fall within the ambit of gpuccio's remark that "the functional definition of CSI is more natural and powerful: it recognizes purpose and meaning, which are the natural output of conscious beings, and design is the natural output of conscious beings." My two cents. vjtorley
VJT- "They" have self-sustained replication of RNAs and nothing new evolved. There wasn't any darwinian evolution. And that stopped once the resources ran out. They require just-so scenarios just to slow deamination enough to make it sound feasible. Designed proteins that work in E. coli? Sounds like a winner for ID to me. Joe
Hi gpuccio, Thank you very much for your post (#71), which contained a wealth of sensible advice and penetrating insights. I find myself very much in agreement with your conclusion:
So, in brief, I think we should stick to functional specification, and to its digital expressions in biology. Apart from the obvious advantages in procedure, there is one important objective reasons why the functional definition of CSI is more natural and powerful: it recognizes purpose and meaning, which are the natural output of conscious beings, and design is the natural output of conscious beings. So, we recognize design for the natural features of design.
The functional specificity of the amino acid sequences in proteins (and other biological molecules) is a good place to start. It's mathematically tractable. Of course, ID critics will continue positing non-random mechanisms with a built-in bias that could have led to the formation of proteins or RNA, or whatever their favorite prebiological molecule is. The way I see it, our job is to stay ahead of the opposition, and make sure we have the better case, mathematically speaking. Speaking of critics, I notice that Allan Miller [by the way, does anyone know his job title, or where he teaches?] has written a very meaty, substantive response to my recent paper, Build me a protein – no guidance allowed! A response to Allan Miller and to Dryden, Thomson and White. Among the papers Miller cites are the following: Multiple translational products from a five-nucleotide ribozyme by Rebecca M. Turk, Nataliya V. Chumachenko and Michael Yarus, in Proceedings of the National Academy of Sciences USA, 2010 March 9; 107(10): 4585–4589. The RNA world hypothesis: the worst theory of the early evolution of life (except for all the others) by Harold S. Bernhardt in Biology Direct 2012, 7:23, doi:10.1186/1745-6150-7-23. De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia coli and Enable Cell Growth by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, in PLoS ONE (2011) 6(1): e15364. doi:10.1371/journal.pone.0015364. I believe Miller's paper merits a well-considered response on the part of the ID community. vjtorley
Joe: thermodynamic counterflow through an organised entity is indeed a good index of design at work. Cf here at no 2 in the ID foundations. KF kairosfocus
Oh goody- Lizzie said she has emailed Wm Dembski pertaining to her recent CSI posts... Joe
Yes, he does say that algorithms are ideally suited for transmitting already existing CSI- they just are incapable of explaining its origin. (page 149) Joe
Joe said,
Algorithms cannot produce CSI- read “No Free Lunch”, Dembski covers that.
I believe the brain does produce "non-symbolic algorithms", and that this is a key part of intelligence, especially when it comes to generating and maximizing CSI output. computerist
kairosfocus- Counterflow- read "Nature, Design and Science" by Del Ratzsch. All I am saying is that CSI is not a design detection tool for every scenario. CSI and FSCO/I are great design detection tools in the right situation. Joe
gpuccio@71, well said. computerist
Joe: In some cases we can see that the shaping of an object is outside the realm of un-programmed nature and can infer that such shaping is probably functional. If we find a clear bone fossil with a ball-and-socket joint, that says a lot. It then raises the issue, whence cometh the programming. Similarly, if we see a cluster of arranged elements such as at Stonehenge, or even a circular ditch or a network of linear canals or the like, even without knowing specific function, we can infer design due to the precise shaping. Likewise, the shaping of certain stone artifacts. What guides us in such cases is that blind, unprogrammed, non-purposeful forces normally do not do things like that, as opposed to, say, column-jointed basalt: order, not organisation. The suggested Moon monolith is a good case. To assess, we do a nodes-and-arcs mesh and see how much perturbation is acceptable. Anyone who thinks on it will realise that a precision-shaped structure such as a tombstone-like monumental slab not subject to, say, crystal-forming forces is not a simple entity. Even without inscriptions. Obvious text would be a dead giveaway. And of course that gets us back to the digitally coded algorithmic text in DNA. KF kairosfocus
Look it is obvious that Lizzie doesn't understand what Dembski is saying so she erected a strawman and will stand by it until she dies. So I suggest we just ignore her and her strawman because giving her attention is what she craves. Joe
And Alan Fox continues to amuse us:
I still can’t get beyond the simple question “how can you possibly say what function an unknown object might have?”
That doesn't have anything to do with ID or science, Alan. Science works based on observations, Alan. So we would observe an object and its function and then try to figure out how it came to be. Duh. Joe
KF: It's really special to be with the old friends! I completely agree with you, as usual. gpuccio
GP: Always good to see you pop by and comment. I agree, and that is why I have so strongly supported your stress on function. The very intensity that lurks behind the dismissive terms used shows just how the TSZ folks do not want to touch FSCO/I, especially in the form of functional digital strings that bear a code or the direct manifestation of a code as we see with proteins especially enzymes: dFSCI, digitally coded, functionally specific complex information. Notice too how in the above at 2 etc I have underscored how Dembski in NFL made it plain that in the world of living things, CSI is cashed out as FUNCTION. That evasiveness on the clearest point, which would allow them to make sense of wider understandings, is itself highly revealing. Sadly so. KF kairosfocus
EA: A clear example is Durston et al in 2007. Their metric incorporates a redundancy measure that captures the specificity of AA strings that function as a given protein family across the world of life. At a more basic level, a measure assigns a value on a scale. So, even the simple approach that assigns function on a threshold basis, in Chi_500 = Ip*S - 500, uses S as a measure of specificity. 1/0 is a nominal scale, the first of the set: nominal, ordinal, interval, ratio. Beyond that, the generic concept that we have islands of function, such that perturbation of a string beyond a certain point takes us out of the zone of relevant function T, is an in-principle metric. The relevant range can be incorporated in metrics of info content, through the sort of average info per symbol for functional sequences that Durston et al used. Where also, functionality in strings is WLOG, as a functionally organised entity can be reduced to a nodes-and-arcs model and represented as a cluster of structured strings. Much as, say, AutoCAD does. Also, the degree of function of, say, an enzyme can be measured on a suitable scale and correlated to the particular string config of the underlying entity. KF kairosfocus
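For readers who want to see what the Durston-style "average info per symbol" measure looks like in practice, here is a minimal Python sketch of a functional-bits (fits) estimate over a toy alignment. The toy sequences and the uniform 20-amino-acid ground state are illustrative assumptions, not Durston et al.'s actual data or code:

import math
from collections import Counter

def column_entropy(column):
    # Shannon entropy (in bits) of the residues observed at one aligned site.
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment):
    # Durston-style estimate: ground-state entropy (log2 20 per site, all 20
    # amino acids equiprobable) minus the entropy actually observed in the
    # functional sequences, summed over all aligned columns ("fits").
    h_ground = math.log2(20)
    return sum(h_ground - column_entropy(col) for col in zip(*alignment))

# Toy alignment of four hypothetical functional sequences, five sites long.
alignment = ["MKTAY", "MKSAY", "MRTAY", "MKTAF"]
print(round(functional_bits(alignment), 2), "fits")

The more conserved a column is across the functional family, the closer its contribution gets to the full log2(20) bits per site.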
VJ: Just a few simple comments, which will be obvious for those who know my position (including our friends at TSZ). a) I really believe that there is no need at all, for the discussion about the origin of biological information, to generalize specification beyond the very simple definition of functional specification. Function and meaning are the true specific outputs of consciousness, and the intervention of consciousness is the specific feature of design. b) Functional specification is easy to detect and define, and the quantity of information necessary to generate the function is often easy to compute. I think you can agree on these two points. You yourself state: "In the case of proteins, on the other hand, the pattern is not mathematical (e.g. a sequence of numbers) but functional: proteins are long strings of amino acids that actually manage to fold up, and that perform some useful biological role inside the cell. Given this knowledge, scientists can formulate hypotheses regarding the most likely processes on the early Earth for assembling amino acid strings. If a few of these hypotheses stand out, scientists can safely ignore the rest. Thus the CSI in a protein should be straightforwardly computable." So, my simple question is: what need is there to have a "mathematical" specification, when all we need to discuss the origin of biological information is functional specification? Let's go on: c) The following statement by Elizabeth is, IMO, very telling: "I want CSI not FSC or any of the other alphabet soup stuff…" So, why does she want "CSI", and not "any of the other alphabet soup stuff"? IMO the answer is very simple. Our friends at TSZ know too well that they cannot deal with functional specification, especially in digital form. They have been shown very clearly that dFSCI is a very simple and effective tool to detect design. That's why they have to deal with more general concepts of CSI, where it is easier for them to create confusion. d) So, in brief, I think we should stick to functional specification, and to its digital expressions in biology. Apart from the obvious advantages in procedure, there is one important objective reason why the functional definition of CSI is more natural and powerful: it recognizes purpose and meaning, which are the natural output of conscious beings, and design is the natural output of conscious beings. So, we recognize design from the natural features of design. That is the fundamental concept. All the mathematical procedures to assess complexity have only one purpose: to distinguish true design from apparent design: those cases of simple recognizable function emerging from random or deterministic systems, which were never willed by a conscious agent. In those cases, the assessment of the complexity tied to the function very easily allows us to distinguish between cases where design can be legitimately inferred, and cases where that is not possible. It's as simple as that. And Elizabeth and the other TSZers know that very well. That's why they "want CSI not FSC or any of the other alphabet soup stuff". gpuccio
BTW, has anyone measured specification in a mathematical way? I've read an awful lot of ID literature, but it is certainly possible that I may have missed it, so I'm sincerely asking. I suppose there might be some specifications that are themselves mathematical in nature so that one could be measured, but that would be the exception, not the rule. Complexity is pretty easy to measure, particularly if we have some background knowledge (either up front, or through diligent investigative work) about the relevant probability space. Specification, not so much . . . Eric Anderson
vjtorley @62:
At the opposite end [of the] scale, natural selection is sometimes interpreted as a random process. This is also a misconception. The genetic variation that occurs in a population because of mutation is random – but selection acts on that variation in a very non-random way: genetic variants that aid survival and reproduction are much more likely to become common than variants that don’t. Natural selection is NOT random!
This is pure propaganda. Natural selection is not a mechanism. It doesn't do anything. It is just a label attached to the result of processes that are rarely, if ever, carefully defined. Further, if we know what the process was that produced the differential survival, then we can explain what happened quite nicely without ever referring to the term "natural selection." If the underlying process that produced the result is random (as the passage you quote admits), then the result is random. It doesn't make any difference whether you then attach the label "natural selection" to the outcome. Finally, this statement: "genetic variants that aid survival and reproduction are much more likely to become common than variants that don’t" is nothing more than a circular restatement of what natural selection supposedly is, namely survival of the fittest. Saying that variants that aid survival are more likely to become common (meaning, more likely to survive) than those that don't (meaning, less likely to survive) is singularly unhelpful. It is a meaningless tautology. There is great effort on the part of evolutionists to claim that natural selection somehow eliminates the randomness of evolutionary processes by guiding, or selecting, or moving the population toward some non-random outcome. Nonsense. Natural selection isn't a mechanism. It doesn't do anything. Eric Anderson
Chance and Phinehas, Thank you Joe
So people do read what I post. Thanks VJT, however... CSI is not the only tool in the tool box. Just plain ole counterflow does nicely, especially in the case of the monolith. Have you read Del Ratzsch "Nature, Design and Science"?
However, it does not follow from the fact that design inferences can be made irrespective of how an object arose, that the CSI of an object ought to be computable even if nothing is known about how it arose.
Excuse me but the purpose of "No Free Lunch" was to apply CSI to biological organisms in order to determine if they were designed. You're messin' with me, aren't you? Then you say:
The point of CSI is indeed to determine whether something is the product of intelligent agency.
But you just said we already have to know how it arose, meaning we would already know if it was via agency or not. So which is it? Next you quote:
At the opposite end [of the] scale, natural selection is sometimes interpreted as a random process. This is also a misconception. The genetic variation that occurs in a population because of mutation is random – but selection acts on that variation in a very non-random way: genetic variants that aid survival and reproduction are much more likely to become common than variants that don’t. Natural selection is NOT random!
Mayr said whatever is good enough survives to reproduce, and it is a given that natural selection doesn't act. So yeah, I would say natural selection is about as non-random as the spread pattern from a 12-gauge sawed-off shotgun shooting birdshot. And that may be giving it too much credit. Look, NS is just differential reproduction due to heritable random (as in chance/happenstance (Mayr)) variation. That's it. And it all varies. Joe
"The police might disagree with your last statement (bad pun, I know)."
Lol! I am imagining David Caruso applying the explanatory filter...in dark sunglasses. :D Chance Ratcliff
VJT:
Berkeley U Web Page: At the opposite end [of the] scale, natural selection is sometimes interpreted as a random process. This is also a misconception. The genetic variation that occurs in a population because of mutation is random – but selection acts on that variation in a very non-random way: genetic variants that aid survival and reproduction are much more likely to become common than variants that don’t. Natural selection is NOT random!
VJT: Surely you don’t mean to contest this?
I don't want to speak for Joe, but I'm on the fence as to whether or not Natural Selection can be said to be a Darwinian mechanism. It seems more like an ad hoc description than a mechanism. (You'll recall that Joe's challenge was specifically about Darwinian mechanisms.) Phinehas
Joe:
“…the descriptive part means that someone else can come along, take that description and reproduce exactly the same pattern.”
I think I've read where this was also thought of in terms of compression. I'm thinking the algorithm needed to create Lizzie's 2D matrix isn't going to be much smaller than the matrix itself. Still, I'm pretty sure talking in these terms will get you no further in discussions with those who aren't interested in understanding. Phinehas
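The compression idea can be sketched quickly. A general-purpose compressor gives only a crude upper bound on how short a description of the matrix could be, but it illustrates the point Phinehas is making: a noisy 658 x 795 matrix barely compresses at all, while a trivially describable one collapses. A rough Python sketch, with zlib and the stand-in matrices as illustrative choices:

import zlib
import numpy as np

def compression_ratio(matrix):
    # Crude upper bound on descriptive complexity: how well a general-purpose
    # compressor shrinks the raw pixel bytes. A ratio near 1.0 means no short
    # description was found; a tiny ratio means a short description exists.
    raw = matrix.astype(np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(658, 795))   # stand-in: pure pixel noise
flat = np.full((658, 795), 200)                 # stand-in: a featureless image
print(compression_ratio(noisy))   # close to 1.0
print(compression_ratio(flat))    # tiny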
Joe,
"...the descriptive part means that someone else can come along, take that description and reproduce exactly the same pattern."
That's very well put. Chance Ratcliff
Hi Joe, Thank you for your comments. You cited a passage by Professor William Dembski to support your claim that an object's CSI should be computable even if nothing is known about its origin:
Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?
I would be the first to agree that they can. For example, the presence of a digital code is a reliable signal of an intelligent agent, as Dr. Stephen Meyer argued in his book, Signature in the Cell. Indeed, I would go further, and say that we can legitimately infer Intelligent Design for anything that Rev. William Paley defines as a contrivance, namely a system possessing the following three features: "relation to an end, relation of parts to one another, and to a common purpose." (Darwin's theory failed to refute Paley on this point; all it proposed was a mechanism whereby one kind of contrivance might, over the course of time, transform itself into another kind of contrivance, without answering the question of where the first contrivance came from.) However, it does not follow from the fact that design inferences can be made irrespective of how an object arose, that the CSI of an object ought to be computable even if nothing is known about how it arose. That would only follow if design inferences invariably depended on measurements of an object's CSI. And they don't, as the case of the Monolith in 2001 shows. We can immediately tell it's designed from a simple knowledge of its dimensions. In the passage you quoted above, Professor Dembski does not state that an object's CSI is computable even if nothing is known about its origin. He simply talks about design inferences. You also write:
We need to be able to assess CSI independently of the process.
That would only follow if measurements of an object's CSI were the only way of determining whether or not it was designed. For some objects, no measurement is needed. You add:
That is the whole point of CSI-> to determine it had to be the product of an intelligent agency...
The point of CSI is indeed to determine whether something is the product of intelligent agency. And it is an excellent tool in those cases where the design inference is not self-evident, or where it is contested by skeptics with a vested ideological stake in denying design - as occurs, for example, in the case of life. You continue:
Dembski's 2005 paper on specification is supposed to be about figuring out if something is designed or not without knowing how it arose.
That may be your interpretation of the paper's purpose; it is not one which I share. I would say it's about how we can decide whether or not an object is designed, if we make some reasonable suppositions regarding its alleged origin through "undirected" (i.e. non-foresighted) processes (the alternative hypothesis). Usually scientists can narrow down the range of hypotheses regarding how an object arose naturally, without intelligent guidance, as kairosfocus pointed out above. You go on to add:
I would also like VJT to produce the definition of darwinian mechanisms that demonstrates they are non-random.
Are you serious? Or are you employing a very funny definition of "non-random"? You might like to have a look at the following Web page by the University of California, Berkeley, entitled, Misconceptions about natural selection. Here's an excerpt:
At the opposite end [of the] scale, natural selection is sometimes interpreted as a random process. This is also a misconception. The genetic variation that occurs in a population because of mutation is random - but selection acts on that variation in a very non-random way: genetic variants that aid survival and reproduction are much more likely to become common than variants that don't. Natural selection is NOT random!
Surely you don't mean to contest this? Finally, in the light of your comments above, I am surprised to find you making the following concessions in a later comment:
OK all of that said, CSI is the wrong tool for determining CSI of an object or a photo - science is not done via one photo... CSI is a good tool to use when the thing in question is readily amenable to bits - like a coded message or DNA... No-one would use CSI to solve a murder.
The police might disagree with your last statement (bad pun, I know). If you are simply claiming here that a single photo cannot always tell us what an object is, then I'd say you're making a valid point, which certainly applies to the image that Dr. Liddle threw down as a challenge in her post. But your statement that "CSI is the wrong tool for determining CSI of an object," apart from being somewhat muddled (CSI is the wrong tool for determining CSI?), also appears to be at odds with your citation of Professor Dembski - "Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?" [emphasis mine - VJT] - to support your claim that CSI should be computable even if nothing is known about an object's origin. That said, I would agree with your statement that "CSI is a good tool to use when the thing in question is readily amenable to bits." However, I also think that the concept of CSI can be legitimately applied to things like smiley faces and the images on Mt. Rushmore, as I argued in my post, Why there's no such thing as a CSI scanner. Hope that helps. vjtorley
And Lizzie, the descriptive part means that someone else can come along, take that description and reproduce exactly the same pattern. And with your "description" of the glacial pattern, that would be impossible. Therefore you lose. Joe
Dear Lizzie, If you really think that the glacier meets Dembski's criteria for specification, it makes me very happy to know that you are not a detective, forensic scientist, archaeologist nor a SETI researcher. Joe
Phinehas @51 and 53: Good thoughts. I think you've correctly answered your own question. The complexity is only one part of "complex specified" information. There also needs to be a specification (which, incidentally, can be specified either before or after the fact -- in many, perhaps most, cases we are looking at something after the fact and ascertaining whether it was designed or not). One of the problems anti-ID folks have is they fail to understand the difference between complexity and specification. They obsess over the complexity aspect and think that just because something is complex (which is really equivalent to a measure of probability in most cases), then it falls within the design inference and can kick out false positives. Of course, as you've correctly stated, there must also be a separate specification, which allows us to avoid false positives. Eric Anderson
Earth to Richarthughes- Yes, let's be honest-> you don't know anything about science. So perhaps you should start by getting an education above the 3rd grade level. Joe
The following from OM proves they are clueless:
IDers, what could be provided to you such that you would then be able to perform “the design inference”?
Just the actual thing we are trying to make that determination about, in context (meaning with all relevant evidence). You know, just as with regular scientists who get to actually examine the thing they are trying to figure out. Does anyone there @TSZ know anything about science? Joe
One has to wonder why a concept that's supposedly already thoroughly debunked (CSI) continues to be a source of veritable obsession for EL and others. However it's no wonder why they constantly try to use the output of natural processes as a stand in for specified complexity. Chance Ratcliff
And no, it does NOT matter how the thing came to be- CSI is present or not REGARDLESS of the process. However it just so happens that every time we have observed CSI and knew the process it has always been via agency involvement- always, 100% of the time. We have never observed mother nature producing CSI-> never, i.e. 0% of the time. And THAT is why when we observe CSI and don't know the process, we can safely infer an agency was involved. And that matters to the investigation. However none of that will change the fact that Lizzie doesn't understand what Dembski is saying... Joe
It's very sad but Lizzie really thinks a specification is present and she sez that Dembski agrees- albeit that is because she doesn't understand anything Dembski wrote. True, she may understand some or even most of the words he used but for some reason she just can't seem to put it all together. Just look at her Dr Nim thread- she doesn't understand that Dr Nim's responses trace back to the creator(s) and designer(s), i.e. actual intelligent agencies. Far from chance and necessity, Dr Nim operates as DESIGNED, as intended. Meaning the intent can also be traced back to the creator(s) and designer(s). "No but it's just a plastic thing with gates!" Right, but it was designed to do something so no one should be surprised when it does what it is designed to do. Except you... Joe
Actually, as I thought more about my own question: "Perhaps I am thinking about this too simplistically, but wouldn’t a (sufficiently complex) pattern be a valid specification?" I concluded that only a sufficiently complex and sufficiently specified pattern would be a valid specification. :P *sigh* Phinehas
franklin:
I am glad to see that you agree with me that IDists aren’t able to make a design determination unless they know what the object is.
No one can- archaeologists require an object in order to determine whether or not it is an artifact. Forensic scientists need something in order to determine if a crime has been committed. SETI needs to receive a signal before they can determine whether or not it is from ET.
Makes CSI quite worthless as a metric.
You being scientifically illiterate makes CSI worthless as a metric? How does that work, exactly? Joe
Hey EA: Thanks for the response. I'm curious about this:
I’d say in the case of a Google image search, or even facial recognition, what is being matched in the actual pixel search is not so much a specification in the sense of complex specified information, but just a pattern match.
Perhaps I am thinking about this too simplistically, but wouldn't a (sufficiently complex) pattern be a valid specification? This isn't to say that all specifications must be patterns. I think functionality is another valid specification, though I can see how it might be more difficult to calculate, especially as anything but a binary value. I'd like to better understand the distinction you are making. Phinehas
One aspect of facial recognition is recognizing something as a face, as opposed to any other object. There is certainly general specificity in faces, otherwise this would not be possible. After a face is recognized and parameterized, it can be searched in a database and compared with others. Chance Ratcliff
Thanks, Phinehas. I'd say in the case of a Google image search, or even facial recognition, what is being matched in the actual pixel search is not so much a specification in the sense of complex specified information, but just a pattern match. In the case of the Google image search, once the pattern (a particular picture) is found, we still have to ascertain whether it meets a specification (in the case of the glacier deposits, no). In the case of facial recognition, the additional specification (beyond a simple pattern match) is built into the algorithm or the search parameters, namely, it is programmed to look for specifically identifiable physical features. Eric Anderson
EA: Has anyone in ID ever made the claim that you can calculate specificity? I'm not aware of any such claims. Still, in principle, I suppose it might be possible to do so. Intuitively, you might assign a 1.0 value to a grayscale picture of Ben Franklin at the highest fidelity for a particular resolution. You could then sum the number of pixels in any given picture that corresponds to the Ben Franklin picture and divide the result by the total pixel count and call that the specificity. In reality, however, we tend to pattern match with a good deal more sophistication. When pattern matching, we naturally process against every image we've ever seen as well as against conceptual images we haven't seen, but can easily imagine. The Google Image search is a bit closer to this in that it can compare against a large library of pictures on the internet. Though it doesn't show a specificity calculation in what it returns, I'm betting one is generated behind the scenes that could quite easily be exposed. In any case, as has been demonstrated above, it is quite powerful. I'd think that facial recognition or fingerprint comparison software would have a similar concept to specificity, though it might be called something else. Even so, I can pretty much guarantee that a human will vet what the software returns before any serious action is taken. The fact is: we are pattern matching fiends. We even see patterns where none is intended. On a grilled cheese sandwich. In a cloud. Even so, we are extremely adept at doing internal specificity calculations to arrive rather trivially at various determinations of, "probably not designed, maybe designed, or absolutely designed." I think we do this so trivially that it leads the less introspective to conclude CSI is simply a matter of saying something "looks designed." But this merely glosses over how absolutely mind-boggling the mind's pattern matching capabilities are. We might well look at face recognition software as simply a matter of saying someone "looks like Fred." So it is useless? If critics of CSI are claiming that specificity is a concept that could be better nailed down mathematically, then I'd agree wholeheartedly. Here's hoping for better algorithms in the future. Even so, I think our minds are not only sufficiently capable of assigning specificity to patterns, I'd argue that, at our current level of technology, our minds are by far more capable of doing so than any other method. And this is certainly the case when it comes to avoiding false positives. This is why we can assert with such confidence that ID critics will not succeed in offering up a picture where design will be identified, and will not be present. This is also why their complaints about calculating specificity are hollow red herrings. Phinehas
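As a toy illustration of the back-of-envelope specificity fraction Phinehas describes (count the candidate pixels that match a reference image and divide by the total), here is a minimal Python sketch. The function name, the tolerance parameter, and the random stand-in images are illustrative assumptions, not anyone's actual algorithm:

import numpy as np

def naive_specificity(candidate, reference, tolerance=10):
    # Count candidate pixels within `tolerance` gray levels of the reference
    # image and divide by the total pixel count.
    if candidate.shape != reference.shape:
        raise ValueError("images must have the same dimensions")
    close = np.abs(candidate.astype(int) - reference.astype(int)) <= tolerance
    return close.mean()

# Hypothetical usage: `reference` stands in for the 'Ben Franklin' target image.
reference = np.random.randint(0, 256, size=(100, 100))
candidate = reference.copy()
candidate[:50, :] = np.random.randint(0, 256, size=(50, 100))  # scramble half
print(naive_specificity(candidate, reference))   # about 0.5 plus chance matches

Real image-recognition systems do far more sophisticated matching than this, which is part of the point being made above.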
Phinehas @37: Well said. ----- franklin: Sorry, but you are wrong. See kf @43: ----- Lizzie, via Joe @39:
If CSI is any use, then it ought to be possible to compute it for my pattern.
What does that mean, Lizzie? Are you asking for a calculation of specificity? If so, then you don't know what you are talking about and demonstrate that you still don't understand the design inference. Eric Anderson
joe: So science via psychics? No one makes the claim that we can determine design or not with no knowledge of what the object is.
I am glad to see that you agree with me that IDists aren't able to make a design determination unless they know what the object is. Makes CSI quite worthless as a metric. franklin
What is needed to really test the design inference is a case where design will be identified and it is not present
^^^So much this!^^^ Phinehas
F/N: On doing a CSI calc on the case. We had an image of some 500 kbits. There was no evidence of specificity, so: Chi_500 = (500 kbits x 0) - 500 = -500 bits. That is, in the absence of evident specificity -- as has been pointed out and explained long since -- we are at the baseline, 500 bits short of the threshold. Sorry, that one won't wash either. KF kairosfocus
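For concreteness, the threshold arithmetic above can be written out as a few lines of Python; the 500,000-bit capacity figure and the 1/0 specificity dummy variable are taken from the thread, and the function name is just illustrative:

def chi_500(info_bits, specified):
    # Chi_500 = Ip * S - 500, with Ip the information-carrying capacity in
    # bits and S = 1 only if independent specificity is in evidence.
    # Positive values pass the 500-bit threshold; negatives default to
    # chance and/or necessity.
    return info_bits * (1 if specified else 0) - 500

# Roughly 500 kbits of capacity, no evident specificity (S = 0):
print(chi_500(500_000, specified=False))   # -500 bits, below the threshold
# The same capacity, had specificity been in evidence (S = 1):
print(chi_500(500_000, specified=True))    # 499500 bits, far past the threshold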
F: You are simply wrong, go up to 2 above, BEFORE I knew the image was a snow pattern. (I saw Phinehas's post AFTER I posted.) Notice how I compared the case to wood grain, and pointed out how complexity and specificity were not apparently coupled? Notice how I drew the inference that unless and until there was evidence of such a coupling of complexity and specificity, there would be a default to chance and necessity? Notice how I accepted that the design inference process is quite willing to misdiagnose actual cases of design that do not pass the criterion of specificity? WHY ARE YOU TRYING TO REVISE THE FACTS AFTER THE FACT? You will notice that I then saw Phinehas's comment, and remarked on that, highlighting WHY such a case would not couple specificity to complexity? Thereafter, I did a Google search, which is a TARGETED search, and from that identified the credible source. I was then able to fit the clip from TSZ into the image, more or less; I think there is a bit of distortion there. This confirmed the assessment. So, the truth is that the EF did work, and did what it was supposed to do. It identified that complexity without specificity to a narrow zone T will not be enough. It was clear that this could be a case where actual design is such that it cannot be detected -- recall my remarks on not being a general decrypting algorithm? -- and then we were able to confirm the evident absence of such a match. Unless there is some steganography hiding in the code that I do not have time or inclination to try to hunt down. What else is clear is that the test is a strawman. What is needed to really test the design inference is a case where design will be identified and it is not present. But that, I am afraid -- as the random document generation tests show -- will be very hard to do. What you have succeeded in doing is showing us that we are not dealing with a reasonable-minded, fair process or people. Which, unfortunately, on long experience, we have come to expect by now. I think you have some self-examination to do, sir. KF kairosfocus
The purpose of the exercise was to see if it is indeed possible to make the calculations with no knowledge of what the object is...
So science via psychics? No one makes the claim that we can determine design or not with no knowledge of what the object is. That has to be one of the stupidest things I have ever heard. Joe
EA: Steganography, methinks. A point of worry for intel agencies just now. Indeed, if what was otherwise an image of a natural scene now pops up with steganography, the first aspect would be nature, but the latter would be design. As for the case of a molecular join, the problem is of course to set up the conditions and the absence of actual self-replication. But, as Johnson pointed out, if you are locked in the materialist a priori circle of thought, any slightest relevance to the conclusion already held looks ever so much like confirmation. The problem of the cartoon characters going in circles in the woods, and seeing more and more footprints, thinking they are on the right track again. And that also shows the reluctance to accept just how reliable the EF is when it does rule design, and how often it will be right when it rules not design, too. Which is the default side. For, if things MUST be otherwise, one imagines, this is just a fluke, yet again. (There are billions of these "flukes" and no genuine counter-instances, but hope springs eternal.) KF kairosfocus
franklin, Only scientifically illiterate people think that science is conducted via a photo. The design inference requires an examination of the actual evidence. Also CSI wouldn't be the tool to use anyway. So you chumps are just proud of your inability to investigate. Joe
Lizzie:
If CSI is any use, then it ought to be possible to compute it for my pattern.
And if a chainsaw can't help with my kid's math homework it isn't of any use. Again, Lizzie, I suggest you email Dembski before running around like a drunken sailor... Joe
eric: Further, the photo was put forward, it was analyzed, the design filter worked.
The revisionist history is amusing, but surely you realize that the 'filter' did nothing and that it was Google that found the image. Everything that follows is post hoc hand-waving, which to all onlookers is transparently obvious. Once the identity of the image was revealed, it isn't that impressive for you to state whether the object was designed or not designed. The purpose of the exercise was to see if it is indeed possible to make the calculations with no knowledge of what the object is or its history, in fact knowing nothing at all about that object; this is the claim that IDists make for their/your alleged metric, and the answer is quite clear: you can't do it, as VJT has pointed out. franklin
Querius: After finding the image online, I was thinking last night along the exact lines you laid out. What if someone took a simple substitution cipher and used it to modify the pixel data in order to embed a message in the picture? This would be a pretty typical example of steganography. One might argue that, before the cipher was revealed, the calculation of CSI would give one result, and after the cipher was revealed, it would give another. But this would be an argument against claims about CSI that have never been made. It has ALWAYS been recognized that tricking CSI into giving a false negative is rather trivial. CSI is set up specifically to prevent false positives; negative outcomes are always provisional and can change when additional information is brought to light. Inexplicably, it seems that TMZ lacks the sophistication to even make it to that sort of objection. Liz:
That glacier produced that pattern. That pattern is pretty cool, and complex, and specified (the glacier even more so than the photo). So that glacier “found” that pattern.
Seriously? It is difficult to imagine a more appropriate response to this than your favorite star trek facepalm gif. Still, I'll try. The glacier "found" that pattern just like a fair dealer "finds" the improbable pattern of cards in each and every one of your hands. Without an independent specification, no set of five cards is more or less improbable than any other. Nor do you typically make an inference that the dealer is somehow designing exactly what cards are in your hand. You trust that random processes are at work as advertised. However, if an opponent suddenly starts getting Royal Flush after Royal Flush each and every hand, you WILL make a design inference. I guarantee it. You WILL suspect that the advertised random processes are no longer in effect and that the dealer is somehow designing the outcome. HOW can you do this, given that each hand is just as improbable as any other? If you are honest and open-minded, you will conclude that your inference arises from your recognition that these particular cards are consistently lining up with an independent specification. (If you've got a better explanation as to how you'd infer design, I'd love to hear it. My point is that we both know that you WOULD infer design, and we both know that it WOULD be a valid inference.) Bringing this back around to pictures, if what is advertised as the random accumulation of volcanic ash on ice starts to resemble Ben Franklin (an independent specification) with more and more fidelity, there WILL once again be a point where you infer design. You KNOW that this sort of inference is valid, whether you resort to a formal calculation of CSI or not. Why keep acting like you don't understand? It makes no sense and only serves to call your own faculties and credibility into question. Phinehas
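Phinehas's card-dealing point can be made concrete with a short calculation (Python, assuming a standard 52-card deck and fair dealing): every specific five-card hand is equally improbable, but only four of the 2,598,960 possible hands satisfy the independent specification "royal flush".

from math import comb

hands = comb(52, 5)              # 2,598,960 equally likely 5-card hands
p_royal_flush = 4 / hands        # four hands match the specification
print(p_royal_flush)             # about 1.5e-06

# Royal flush after royal flush, hand after hand:
n_hands = 5
print(p_royal_flush ** n_hands)  # about 8.6e-30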
Eric, Lizzie really thinks she is using Dembski's definition of specification. I know she isn't. And if she had any integrity she would bring it up with Dembski. But then again Lizzie thinks we do science with only one photograph. Science isn't a parlor game, Lizzie. And all you are doing is trying to engage in parlor games. Joe
Re: Joe's various comments above: I agree that in most cases we do not need to know the process in order to infer design. To be sure, a process itself can be designed, so in those cases we would need to know what the process was in order to even analyze it. But we do not need to know exactly how, say, the pyramids were built in order to infer design. That kind of thinking is completely wrong. --- If Lizzie thinks a glacier pattern or some similar natural phenomenon is "specified," then this means she simply doesn't understand what is meant by a specification. Further, the photo was put forward, it was analyzed, the design filter worked. But now there seems to be a lot of backpedaling. Why can't they say, "Well done. Looks like the filter worked in this case." BTW, the filter works fine for images. I'm not sure what you mean by it being the wrong tool in some cases. ----- Oh, boy. Not the alleged, hypothetical, never-yet-found, self-replicating molecule again . . . Eric Anderson
Querius @20: Great example, I love it! In your case, without someone discovering the encoded Bible in the numbers, the design filter would not infer design. That is OK. The design inference has never claimed that it can identify all instances of design, particularly not those that are purposely made to be hidden or to mimic natural processes. However, it is also the case that with more context, a bit of sleuthing, or someone stumbling on the embedded code, design would then leap forth. In your example, we really have two things: the first is the undesigned natural process that produced the major image. Without more knowledge of what is there, the design filter -- correctly -- identifies this as undesigned; the second is an embedded designed code (not an image, but a code embedded in an image). When just looking at the pixels, this code is not even seen, so by definition it won't be recognized as designed. Only once it is seen or identified can the filter be applied, at which point it will confirm design. So there are two things going on, and we need to carefully keep them separate and analyze them separately. Eric Anderson
EL: Indeed, the glacier -- or the tree growing and then being cut -- resulted in a phenomenon exhibiting complexity, coming from a wide space of possibilities, W. However, in neither case is there any constraint that locks the outcomes to a simply separately/independently describable narrow zone T. You can see that by examining a stack of plywood sheets at your local shop, or planks: the patterns vary all over the place and that makes but little difference. That would be sharply different from a cluster of evident sculptural portraits at a certain mountain in the US. And in the case of parts that have to fit and work together to achieve a function, such is even more evident. KF kairosfocus
Now Lizzie is title-hunting. She doesn't realize that the link to the self-replicating peptide doesn't demonstrate self-replication. All that occurs is ONE peptide bond is catalyzed. IOW the experiment requires a pool of peptides, one 15 and one 17 amino acids long. Then the existing peptide facilitates the bonding of the two pieces. Lizzie's link: self-replicating peptide How gullible are you Lizzie? Joe
Lizzie spews:
That glacier produced that pattern. That pattern is pretty cool, and complex, and specified (the glacier even more so than the photo). So that glacier “found” that pattern.
Nope, it is NOT specified. Obviously you have no idea what that means, to be specified. And no, Darwin doesn't get a break because the evidence is against him- see Lenski. Joe
Lizzie is totally clueless. She doesn't understand that science is not conducted via one photo. Scientists would go to the actual location and observe the actual formation before making any inferences, Lizzie. Are you really that daft? Joe
In fairness to RTH, he couldn't recognize anything in action wrt science and investigation. Joe
F/N: I see that despite explicit use of the explanatory filter in inferring not-designed, some over at TSZ -- RTH, this means you in particular -- are unable to recognise it in action. Sadly but unsurprisingly revealing. KF kairosfocus
OK all of that said, CSI is the wrong tool for determining CSI of an object or a photo- science is not done via one photo. IOW Lizzie's "challenge" is totally bogus. CSI is a good tool to use when the thing in question is readily amenable to bits- like a coded message or DNA. But when just given an object then counterflow is the tool to use. No one would use CSI to solve a murder. We have to be able to use the proper tool for the job. Joe
By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.- Ibid page 28
Can anyone post anything from Dembski that supports what Felsenstein claimed and VJT supported? Anyone? Joe
Having defined specification, I want next to show how this concept works in eliminating chance and inferring design.- Wm Dembski page 25 of "Specification..."
Wow, that flies in the face of the claim made by Felsenstein and agreed to by VJT. Joe
Dembski's 2005 paper on specification is supposed to be about figuring out if something is designed or not without knowing how it arose. Therefore the following is total bunk:
But now Dembski has clarified that CSI is not (and maybe never was) something you could assess independently of knowing the processes that produced the pattern. Which makes it mostly an afterthought, and not of great interest.
Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.” However, this is old news: Professor Dembski acknowledged as much back in 2005, in his paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005).
Will VJT explain himself or not? I would also like VJT to produce the definition of darwinian mechanisms that demonstrates they are non-random. So VJT has some explaining to do because he has made some claims that do not jibe with what has already been stated. Joe
computerist:
In my opinion we should also assess CSI relative to a process (which usually implies an algorithm).
Algorithms cannot produce CSI- read "No Free Lunch", Dembski covers that. Joe
Q: That is why I pointed out that the design filter was never intended as a puzzle solver that can decode any and all encryption cases. It will by design assign highly contingent complexity to chance unless in context we see a good reason to assign the case to a narrow zone T in a large enough field W. KF PS: Humans and similar artists may also generate images by drawing them. kairosfocus
bornagain77 @17, thanks for that live streaming link. I just watched the debate and enjoyed it. I'm sure that a download and/or YouTube version will be available at some point. Chance Ratcliff
Consider the following possibility--assume each statement below to be true: 1. The image was generated from a natural process. 2. Each pixel has a color value associated with it--gray scales in this case. 3. The least significant digit of each pixel was painstakingly changed to a different value. 4. These least-significant digits are a numerically coded and encrypted version of the entire King James Bible separated by random stretches of white noise. Thus, the modified image looks essentially identical with the original, yet it contains intelligent information. Then, you shuffle the images and hand one to a professor who, after studying it, becomes convinced that the image was never influenced by any intelligent process, that over millions of years a natural evolutionary biological process generated the microscopic image. And then she posts it on a website . . . Querius
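Querius's least-significant-digit scenario is essentially textbook LSB steganography. A minimal Python sketch (the function names, the random stand-in image and the short sample message are all hypothetical) shows how a message can ride in the low bits while changing each pixel by at most one gray level:

import numpy as np

def embed_lsb(pixels, message):
    # Hide the message bytes in the least significant bit of each pixel.
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten().astype(np.uint8)   # works on a copy
    if bits.size > flat.size:
        raise ValueError("image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    # Recover n_bytes previously hidden by embed_lsb.
    bits = pixels.flatten().astype(np.uint8)[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

image = np.random.randint(0, 256, size=(658, 795), dtype=np.uint8)
secret = b"In the beginning"
stego = embed_lsb(image, secret)
print(extract_lsb(stego, len(secret)))                              # b'In the beginning'
print(int(np.max(np.abs(stego.astype(int) - image.astype(int)))))   # at most 1 gray level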
Joe,
"We need to be able to assess CSI independently of the process. that is the whole point of CSI-> to determine it had to be the product of an intelligent agency."
I think that is correct, but in the context of images or digital strings on a computer, they're both designed, regardless of the contents. Real facts need to be known about the subject of the image or the contents of the string, otherwise it could just pass through the filter as a negative. Rather than a photograph or a string, consider a painting, just for kicks. It's designed, regardless of what it depicts. For inferring design, we're really talking about the subject of the painting. There's a layer of ambiguity here that would produce a lot of false negatives. If it's a painting of a Model T, we could infer design for the Model T, but only by knowing something about real Model Ts. What about paintings of people, or landscapes, or fantasy paintings? Context rules here, and I think similar problems creep up for strings. Something needs to be known about the string if it can't be discovered by the contents. At the very least, we would need to know the function a string is supposed to perform. If it had no function, but was only a recording of some output, then we would need to know something about how the output was generated. Otherwise, it can happily end up as a false negative. So I would say that in my opinion, needing to know something about the underlying process would be context dependent, and probably only relevant for certain classes of digital strings. Chance Ratcliff
We need to be able to assess CSI independently of the process.
In my opinion we should also assess CSI relative to a process (which usually implies an algorithm). The reason I feel this way is that when we measure CSI, we have already skipped the actual design process and determined CSI (or not) at the endpoint, relative to intelligence. Intelligence is highly likely to produce CSI, and this is why, naturally, we don't need to test it against the process, while the test itself remains perfectly valid. However, for biology we need a bit more. I believe that CSI coupled with some clever algorithms and existing knowledge of biological systems will give us (very soon) a definite objective number the Darwinists will no longer be able to question. computerist
OT: live video debate about to start in about 10 minutes Michael Ruse vs. Fuz Rana - The Origin of Life: The Great God Debate II http://watch.biola.edu/origin-of-life-debate-live bornagain77
Eric and computerist, here's my $0.02. In the plainest sense I think we can say that all images are designed. I say this because whether we're talking about a physical photograph or a digital image, it's something which requires a designed system to produce, namely either a camera or computer algorithm. So whether the picture is of an apple, rock formation, or Lamborghini, we could trivially infer design by appealing to the system which recorded the image. However this isn't really satisfying. ;) What we really want to know is if the subject of the image is designed. In the case of Mt. Rushmore, we can certainly infer design by intuition, but for objectively evaluating whether the subject of the image is designed we either need to employ some sort of image analysis, such as image recognition -- which would indeed require a context -- or we would need to know something about the specifics of the subject, such as its identity, composition, location, age, etc. So images are definitely tricky. In the case of the referenced image of volcanic ash deposits, we can explain it naturalistically and so we don't infer design. However before Phinehas identified the source, I considered that the image could conceivably have been generated by a computer algorithm. In this case we would need to know the algorithm, which would be the actual subject of measurement. And as for all algorithms of note, they are specified and complex. Cameras and algorithms are the only systems I'm aware of that can generate images, as long as our definition of image is specific enough. Chance Ratcliff
OK wait: Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.” We need to be able to assess CSI independently of the process. That is the whole point of CSI-> to determine it had to be the product of an intelligent agency.
Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?-Wm Dembski
And the presence of CSI is supposed to be an indicator of design. If we know something arose by design then we sure as heck don't need to see if CSI is present. So I don't know what Joe Felsenstein is saying- it doesn't jibe with Dembski and I don't understand why VJT is agreeing with him. Someone explain that to me please. Joe
All that said, notwithstanding any difficulties in application, it is still the case that the design inference applies to images and drawings and pixels and schematics and renderings, just as much as it applies to language and code and machines.
I don't disagree with you at all. I just think this is not a useful route unless other additional measures (as I stated) are taken. computerist
computerist, I don't disagree that context is important and that in some cases additional analysis would be required beyond just looking at the pixels. Also, in some cases we would need to distinguish between whether the image itself was designed or whether it is simply an image of something that was designed. For example, if we have a photograph of Mt. Rushmore, could we really say the image was designed? Clearly the underlying subject of the photograph was designed, but what does it mean to say that a particular photograph was designed? All that said, notwithstanding any difficulties in application, it is still the case that the design inference applies to images and drawings and pixels and schematics and renderings, just as much as it applies to language and code and machines. Eric Anderson
There are some images that would be clearly designed. We can’t just say that the design inference doesn’t apply to images. It does.
In my opinion you would need to somehow translate that image into functional context. For example, complex schematics would need to be translated into actual function (whether through software rendering or an actual built circuit in the case of electronics). Otherwise we will never know whether there was actual knowledge/intelligence propagated to form the schematic or whether it was generated randomly. In both cases, of course, the image was "designed", but whether there is actual knowledge that translated into functional specification is the key question. The test must be performed in a well-defined objective context. computerist
F/N: on P(T|H), kindly notice the log reduction EL et al have repeatedly been pointed to. That is, we can impose a reasonable upper threshold and simplify. This has been repeatedly pointed out, and the consequences drawn out. The objection cited, per fair comment, is of the mountain out of a molehill variety; sadly, reinforced by drumbeat repetition of cogently answered objections as though that would drown out the relevance of the answer. Not good. KF kairosfocus
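For reference, a sketch of the log reduction being alluded to, reconstructed from Dembski's 2005 chi metric and the Chi_500 = Ip*S - 500 expression used earlier in this thread (the rounding of the resource and specification terms up to a joint 500-bit bound is the simplifying step, and is an assumption of this sketch rather than a quotation):

\chi \;=\; -\log_2\!\left[\,10^{120}\,\varphi_S(T)\,P(T\mid H)\,\right] \;=\; \underbrace{-\log_2 P(T\mid H)}_{I_p} \;-\; \log_2\!\left[\,10^{120}\,\varphi_S(T)\,\right], \qquad \log_2 10^{120} \approx 398.6 \text{ bits}.

Bounding the bracketed term (the 398.6-bit resource count plus an allowance for \varphi_S(T)) by 500 bits, and writing S = 1 or 0 for whether an independent specification is in evidence, gives

\chi_{500} \;\approx\; I_p \cdot S \;-\; 500 \text{ bits}.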
computerist: There are some images that would be clearly designed. We can't just say that the design inference doesn't apply to images. It does. The key in this case is that the filter worked and did not return a false positive, which is really the only logical (though not supported) complaint that can be levied against the design inference anyway. Eric Anderson
To increase the utility of FCSI I believe the solution (especially with respect to biology) is to make the threshold parameter the minimal determined interdependent function. This way we stay within the context of what we're measuring against. We can then plug this into algorithms and make better conclusions regarding the processes involved. computerist
F/N: Cf here, on a solution. KF kairosfocus
As an ID proponent I don't see how CSI is useful for mere images. Images are very subjective. Even if we know for a fact that an image of Mt. Rushmore, and subsequently Mt. Rushmore itself, could not have been formed via chance and/or law, CSI would never be able to tell us this unless there was a database of images mapped out from the entire universe to compare against. But even if we found such an image, it would be unlikely that the context would be equal (i.e., mountain). Any measurement should take into account the context since CSI is context-dependent. I don't think you can just take any arbitrary image and make a case (or expect someone else to) either way for CSI. FCSI, on the other hand, is from the outset an objective measurement that is context-dependent. computerist
F/N: A comparatives page from Google Image search: https://diggingintheclay.wordpress.com/2012/08/15/ash-on-ice-earth-art/ (BTW, some very interesting results.) KF kairosfocus
LOL does Liz have you guys staring at ink blots now? :) bornagain77
OT: On this episode of ID the Future, hear the second part of Casey Luskin's interview with Dr. Stephen C. Meyer, author of the forthcoming book Darwin's Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design. Dr. Stephen C. Meyer: The Mystery of the Origin of Genetic Information http://www.idthefuture.com/2013/05/dr_stephen_c_meyer_the_mystery.html bornagain77
Phinehas: Okie, that image would be of the same general type as wood grain revealed by a saw, and reflecting chance and mechanical necessity. Complex -- a wide zone W -- but not specific to a zone T. (Any similar pattern or even a vastly dissimilar one would work.) Not CSI. I keep hoping that some day, objectors would take time to really understand how the inference to complex and specified configs works and why it is maximally unlikely to be arrived at blindly, but is routinely produced by intelligence. KF kairosfocus
VJT: The pattern looks rather like wood grain, and we can easily take it to be such until further notice. The issue in inferring complex specified info, right from the beginning, is the COUPLING of high complexity [tantamount to large numbers of possible configurations, W] with a separate constraining description that confines it to a narrow zone T of the field of possibilities W. In the case of wood grain, for instance, the grain may well be like a fingerprint, essentially unique in a given case, but because there is not a strong constraint that allows rejection unless confined to a narrow zone, it is not a case of complex specified info. As to the case of FUNCTIONALLY specified complex info, what happens there is simple: the confining principle is a function dependent on organisation and associated with an info metric linked to the constraints specified by organisation. As to the strawman tactic imposed by EL, it should be noted that WmAD explicitly indicates that in the case of most interest, life, in biology, CSI's specificity is cashed out as function. So, unless there is a clear reason to imagine the pattern constrained in such a way that only cases fitting into a narrow zone T in and about the given case E will do, we can safely rule this not CSI. Remember, a false negative is not a problem. The issue of CSI, especially in cases such as biology where the issue is function, is that complexity must be coupled to specificity, so we have a narrow zone T in a wide field W. On the gamut of our solar system, W is 10^150 or so possibilities. That is, the needle-in-haystack challenge for blind chance plus mechanical necessity is comparable to blindly picking a one-straw-sized sample from a cubical haystack 1,000 light years on the side. If such were superposed in our galactic neighbourhood, and we were given just the one try, with all but absolute certainty, we will get straw and nothing else. So, let us not bother with great mysteries and hidden agenda puzzle games. Absent any other reasonable answer, we here have the equivalent of a photo of wood grain. Complex, certainly, but absent constraining specification, complexity is not enough to be CSI. And if there is a relevant constraining specification, we may revise our estimate reasonably and without harm to the design inference. And if there is one hidden up Dr EL's sleeve, all that is meant is that we have had no obvious constraint so we default to the standard, chance and/or blind necessity. As EL should full well know long since. And if there is, unbeknownst to us, a description of T that specifies, then we should note that -- contrary to an old objection -- the design inference is not a general decoding algorithm. It deliberately defaults to a mechanical explanation -- blind chance and/or mechanical necessity (such as the specific circumstances of the growth of a tree and the uncorrelated decision and action of the cutting saw) unless there is something that objectively assigns outcomes to a narrow T in a very large W. KF kairosfocus
Google Image recognizes the specified pattern as volcanic ash on ice. Phinehas
