
CSI Revisited


Over at The Skeptical Zone, Dr. Elizabeth Liddle has put up a post for Uncommon Descent readers, entitled, A CSI Challenge (15 May 2013). She writes:

Here is a pattern:

It’s a gray-scale image, so it is just one 2D matrix. Here is a text file containing the matrix:

MysteryPhoto

I would like to know whether it has CSI or not.

The term complex specified information (or CSI) is defined by Intelligent Design advocates William Dembski and Jonathan Wells in their book, The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), as being equivalent to specified complexity (p. 311), which is then defined as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

In some comments on her latest post, Dr. Liddle tells readers more about her mysterious pattern:

There are 658 x 795 pixels in the image, i.e 523,110. Each one can take one of 256 values (0:255). Not all values are represented with equal probability, though. It’s a negatively skewed distribution, with higher values more prevalent than lower…

I want CSI not FSC or any of the other alphabet soup stuff…

Feel free to guess what it is. I shan’t say for a while ☺ …

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design…

Clearly it’s going to take a billion monkeys with pixel writers a heck of a long time before they come up with something as nice as my photo. But I’d like to compute just how long, to see if my pattern is designed…

tbh [To be honest – VJT], I think there are loads of ways of doing this, and some will give you a positive Design signal and some will not.

It all depends on p(T|H) [the probability of a specified pattern T occurring by chance, according to some chance hypothesis H – VJT] which is the thing that nobody every tells us how to calculate.

It would be interesting if someone at UD would have a go, though.

Looking at the image, I thought it bore some resemblance to contours (Chesil beach, perhaps?), but I’m probably hopelessly wrong in my guess. At any rate, I’d like to make a few short remarks.

(1) There is a vital distinction that needs to be kept in mind between a specified pattern’s being improbable as a configuration, and its being improbable as an outcome. The former does not necessarily imply the latter. If a pattern is composed of elements, then if we look at all possible arrangements or configurations of those constituent elements, it may be that only a very tiny proportion of these will contain the pattern in question. That makes it configurationally improbable. But that does not mean that the pattern is unlikely to ever arise: in other words, it would be unwarranted to infer that the appearance of the pattern in question is historically improbable, from its rarity as a possible configuration of its constituent elements.
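
To make the distinction concrete, here is a minimal Python sketch (my own illustration, with a made-up biased process): a string of 100 ones is configurationally rare, yet a process with a strong built-in bias toward 1s produces it routinely.

```python
import random

# The "pattern": a string of 100 ones. As a configuration it is rare:
# only 1 of the 2**100 possible 100-bit strings matches it.
TARGET = "1" * 100
config_probability = 1 / 2**100   # about 8e-31

def biased_process(p_one=0.999):
    """A hypothetical undirected process with a strong built-in bias toward 1s."""
    return "".join("1" if random.random() < p_one else "0" for _ in range(100))

trials = 10_000
hits = sum(biased_process() == TARGET for _ in range(trials))
print(f"Configurational probability: {config_probability:.1e}")
print(f"Frequency under the biased process: {hits / trials:.2f}")   # roughly 0.90
```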

(2) If, however, the various processes that are capable of generating the pattern in question contain no built-in biases in favor of this specified pattern arising – or more generally, no built-in biases in favor of any specified pattern arising – then we can legitimately infer that if a pattern is configurationally improbable, then its emergence over the course of time is correspondingly unlikely.

Unfortunately, the following remark by Elizabeth Liddle in her A CSI Challenge post seems to blur the distinction between configurational improbability and what Professor William Dembski and Dr. Jonathan Wells refer to in their book, The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), as originational improbability (or what I prefer to call historical improbability):

Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design.

By itself, the configurational improbability of a pattern cannot tell us whether the pattern was designed. In order to assess the probability of obtaining that pattern at least once in the history of the universe, we need to look at the natural processes which are capable of generating that pattern.

(3) The “chance hypothesis” H that Professor Dembski discussed in his 2005 paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005), was not a “pure randomness” hypothesis. In his paper, he referred to it as “the chance hypothesis most naturally associated with this probabilistic set-up” (p. 7) and later declared, “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 18).
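
For readers who like to see the machinery, here is a rough Python rendering of the quantity Dembski calls specified complexity in that paper, as I read it; the inputs at the bottom are purely illustrative numbers, not measurements.

```python
import math

def specified_complexity(p_T_given_H, phi_S_T, replicational_resources=1e120):
    """Rough rendering of Dembski (2005): chi = -log2( R * phi_S(T) * P(T|H) ),
    where R bounds the probabilistic resources of the observable universe,
    phi_S(T) counts patterns at least as simply describable as T, and
    P(T|H) is the probability of T under the relevant chance hypothesis H
    (which, as noted above, must take Darwinian and other material
    mechanisms into account)."""
    return -math.log2(replicational_resources * phi_S_T * p_T_given_H)

# Illustrative inputs only: a pattern with P(T|H) = 1e-160 and phi_S(T) = 1e5
# gives chi of roughly 116 bits; chi > 1 is the paper's threshold for design.
print(specified_complexity(1e-160, 1e5))
```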

In a comment on Dr. Elizabeth Liddle’s post, A CSI Challenge, ID critic Professor Joe Felsenstein writes:

The interpretation that many of us made of CSI was that it was an independent assessment of whether natural processes could have produced the adaptation. And that Dembski was claiming a conservation law to show that natural processes could not produce CSI.

Even most pro-ID commenters at UD interpreted Dembski’s CSI that way. They were always claiming that CSI was something that could be independently evaluated without yet knowing what processes produced the pattern.

But now Dembski has clarified that CSI is not (and maybe never was) something you could assess independently of knowing the processes that produced the pattern. Which makes it mostly an afterthought, and not of great interest.

Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.” However, this is old news: Professor Dembski acknowledged as much back in 2005, in his paper, Specification: The Pattern That Signifies Intelligence (version 1.22, 15 August 2005). Now, it is true that in his paper, Professor Dembski repeatedly referred to H as the chance hypothesis. But in view of his remark on page 18, that “H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms,” I think it is reasonable to conclude that he was employing the word “chance” in its broad sense of “undirected,” rather than “purely random,” since Darwinian mechanisms are by definition non-random. (Note: when I say “undirected” in this post, I do not mean “lacking a telos, or built-in goal”; rather, I mean “lacking foresight, and hence not directed at any long-term goal.”)

I shall argue below that even if CSI cannot be assessed independently of knowing the processes that might have produced the pattern, it is still a useful and genuinely informative quantity, in many situations.

(4) We will definitely be unable to infer that a pattern was produced by Intelligent Design if:

(a) there is a very large (possibly infinite) number of undirected processes that might have produced the pattern;

(b) the chance of any one of these processes producing the pattern is astronomically low; and

(c) all of these processes are (roughly) equally probable.

What we then obtain is a discrete uniform distribution, which looks like this:

In the graph above, there are only five points, corresponding to five rival “chance hypotheses,” but what if we had 5,000 or 5,000,000 to consider, and they were all equally meritorious? In that case, our probability distribution would look more and more like this continuous uniform distribution:

The problem here is that taken singly, each “chance hypothesis” appears to be incapable of generating the pattern within a reasonable period of time: we’d have to wait for eons before we saw it arise. At the same time, taken together, the entire collection of “chance hypotheses” may well be perfectly capable of generating the pattern in question.

The moral of the story is that it is not enough to rule out this or that “chance hypothesis”; we have to rule out the entire ensemble of “chance hypotheses” before we can legitimately infer that a pattern is the result of Intelligent Design.

But how can we rule out all possible “chance hypotheses” for generating a pattern, when we haven’t had time to test them all? The answer is that if some “chance hypotheses” are much more probable than others, so that a few tower above all the rest, and the probabilities of the remaining chance hypotheses tend towards zero, then we may be able to estimate the probability of the entire ensemble of chance processes generating that pattern. And if this probability is so low that we would not expect to see the event realized even once in the entire history of the observable universe, then we could legitimately infer that the pattern was the product of Intelligent Design.
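
Here is a minimal sketch of that reasoning in Python (my own illustration: the prior weights, the per-trial probabilities and the number of opportunities in cosmic history are all made-up inputs).

```python
import math

def prob_under_ensemble(priors, per_trial_probs, trials=1e45):
    """Probability that the specified pattern T arises at least once under the
    whole ensemble of chance hypotheses: sum over H_i of
    P(H_i) * P(T at least once | H_i). The figure of 1e45 trials is a made-up
    stand-in for the opportunities available in cosmic history."""
    total = sum(priors)
    p = 0.0
    for prior, p_trial in zip(priors, per_trial_probs):
        p_at_least_once = -math.expm1(-trials * p_trial)   # ~ 1 - (1 - p)^N
        p += (prior / total) * p_at_least_once
    return p

# Five equally weighted "chance hypotheses", each with a per-trial probability
# of 1e-50: even pooled, the chance of seeing T once in cosmic history is ~1e-5.
print(prob_under_ensemble([1] * 5, [1e-50] * 5))
```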

(5) In particular, if we suppose that the “chance hypotheses” which purport to explain how a pattern might have arisen in the absence of Intelligent Design follow a power law distribution, it is possible to rule out the entire ensemble of “chance” hypotheses as an inadequate explanation of that pattern. In the case of a power law distribution, we need only focus on the top few contenders, for reasons that will soon be readily apparent. Here’s what a discrete power law distribution looks like:

The graph above depicts various Zipfian distributions, which are discrete power law probability distributions. The frequency of words in the English language follows this kind of distribution; little words like “the,” “of” and “and” dominate.

And here’s what a continuous power law distribution looks like:

An example of a power-law graph, being used to demonstrate ranking of popularity (e.g. of actors). To the right is the long tail of insignificant individuals (e.g. millions of largely unknown aspiring actors), and to the left are the few individuals that dominate (e.g. the top 100 Hollywood movie stars).

This phenomenon whereby a few individuals dominate the rest is also known as the 80–20 rule, or the Pareto principle. It is commonly expressed in the adage: “80% of your sales come from 20% of your clients.” Applying this principle to “chance hypotheses” for explaining a pattern in the natural sciences, we see that there’s no need to evaluate each and every chance hypothesis that might explain the pattern; we need only look at the leading contenders, and if we notice the probabilities tapering off in a way that conforms to the 80-20 rule, we can calculate the overall probability that the entire set of hypotheses is capable of explaining the pattern in question.
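
The short sketch below (my own, with an arbitrarily chosen exponent) shows why a steep power-law weighting lets us concentrate on the leading contenders: the top hundred hypotheses carry nearly all of the weight, so bounding their contribution very nearly bounds the whole ensemble.

```python
def zipf_weights(n, s=2.0):
    """Unnormalized power-law weights 1/k**s for ranks k = 1..n."""
    return [1.0 / k**s for k in range(1, n + 1)]

n = 5_000_000
w = zipf_weights(n, s=2.0)          # exponent chosen for illustration only
share_top_100 = sum(w[:100]) / sum(w)
print(f"Top 100 of {n:,} hypotheses carry {share_top_100:.1%} of the weight")
# ~99.4% with s = 2.0; a shallower exponent (e.g. s = 1.0) leaves much more
# weight in the tail, so the "evaluate only the leaders" shortcut is not automatic.
```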

Is the situation I have described a rare or anomalous one? Not at all. Very often, when scientists discover some unusual pattern in Nature, and try to evaluate the likelihood of various mechanisms for generating that pattern, they find that a handful of mechanisms tend to dominate the rest.

The Chaos Computer Club used a model of the monolith in Arthur C. Clarke’s novel 2001, at the Hackers at Large camp site. Image courtesy of Wikipedia.

(6) We can now see how the astronauts were immediately able to infer that the Monolith on the moon in the movie 2001 (based on Arthur C. Clarke’s novel) must have been designed. The monolith in the story was a black, extremely flat, non-reflective rectangular solid whose dimensions were in the precise ratio of 1 : 4 : 9 (the squares of the first three integers). The only plausible non-intelligent causes of a black monolith being on the Moon can be classified into two broad categories: exogenous (it arrived there as a result of some outside event – i.e. something falling out of the sky, such as a meteorite or asteroid) and endogenous (some process occurring on or beneath the moon’s surface generated it – e.g. lunar volcanism, or perhaps the action of wind and water in a bygone age when the moon may have had a thin atmosphere).

It doesn’t take much mental computing to see that neither process could plausibly generate a monument of such precise dimensions, in the ratio of 1 : 4 : 9. To see what Nature can generate by comparison, have a look at these red basaltic prisms from the Giant’s Causeway in Northern Ireland:

In short: in situations where scientists can ascertain that there are only a few promising hypotheses for explaining a pattern in Nature, legitimate design inferences can be made.

The underwater formation or ruin called “The Turtle” at Yonaguni, Ryukyu islands. Photo courtesy of Masahiro Kaji and Wikipedia.

(7) We can now see why the Yonaguni Monument continues to attract such spirited controversy. Some experts, such as Masaaki Kimura of the University of the Ryukyus, argue that it is man-made; Kimura claims: “The largest structure looks like a complicated, monolithic, stepped pyramid that rises from a depth of 25 meters.” Certain features of the Monument, such as a 5 meter-wide ledge that encircles the base of the formation on three sides, a stone column about 7 meters tall, a straight wall 10 meters long, and a triangular depression with two large holes at its edge, are often cited as unmistakable evidence of human origin. There have even been claims of mysterious writing found at the underwater site. Other experts, such as Robert Schoch, a professor of science and mathematics at Boston University, insist that the straight edges in the underwater structure are geological features. “The first time I dived there, I knew it was not artificial,” Schoch said in an interview with National Geographic. “It’s not as regular as many people claim, and the right angles and symmetry don’t add up in many places.” There is an excellent article about the Monument by Brian Dunning at Skeptoid here.

The real problem here, as I see it, is that the dimensions of the relevant features of the Yonaguni Monument haven’t yet been measured and described in a rigorously mathematical fashion. For that reason, we don’t know whether it falls closer to the “Giant’s Causeway” end of the “design spectrum,” or the “Moon Monolith” end. In the absence of a large number of man-made monuments and natural monoliths that we can compare it to, our naive and untutored reaction to the Yonaguni Monument is one of perplexity: we don’t know what to think – although I’d be inclined to bet against its having been designed. What we need is more information.

(8) Turning now to Dr. Elizabeth Liddle’s picture, there are three good reasons why we cannot determine how much CSI it contains.

First, Dr. Liddle is declining to tell us what the specified pattern is, for the time being. Until she does, we have no way of knowing for sure whether there is a pattern or not, short of spotting it – which might take a very long time. (Some patterns, like the Champernowne sequence in Professor Dembski’s 2005 essay, are hard to discern. Others, like the first 100 primes, are relatively easy.)
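
Both of the patterns just mentioned are "easily described" in Dembski's sense, however hard or easy they are to spot: a few lines of code suffice to generate either. For instance:

```python
def champernowne_digits(n):
    """First n digits of the Champernowne sequence: the positive integers
    written out in order (1, 2, 3, ... concatenated)."""
    digits = []
    k = 1
    while len(digits) < n:
        digits.extend(str(k))
        k += 1
    return "".join(digits[:n])

def first_primes(n):
    """The first n prime numbers, by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(champernowne_digits(30))   # 123456789101112131415161718192
print(first_primes(10))          # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```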

Second, we have no idea what kind of processes were actually used by Dr. Liddle to generate the picture. We don’t even know what medium it naturally occurs in (I’m assuming here that it exists somewhere out there in the real world). Is it sand? hilly land? tree bark? We don’t know. Hence we are unable to compute P(T|H), or the probability of the pattern arising according to some chance hypothesis, as we can’t even formulate a “chance hypothesis” H in the first place.
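
What we can compute from the numbers Dr. Liddle has supplied is the sheer size of the configuration space her image lives in; what we cannot compute, for the reason just given, is P(T|H). A short sketch:

```python
import math

pixels = 658 * 795   # 523,110 pixels, the figure given in Dr. Liddle's comment
levels = 256         # possible gray-scale values per pixel

capacity_bits = pixels * math.log2(levels)   # 523,110 * 8 = 4,184,880 bits
print(f"Configuration space: 256^{pixels} images, "
      f"a Shannon capacity of {capacity_bits:,.0f} bits")

# None of this is CSI. Without a specification T and a chance hypothesis H,
# there is no P(T|H) to plug in, and so nothing to weigh against this capacity.
```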

Finally, we don’t know what other kinds of natural processes could have been used to generate the pattern (if there is one), as we don’t know what the pattern is in the first place, and we don’t know where in Nature it can be found. Hence, we are unable to formulate a set of rival “chance hypotheses,” and as a result, we have no idea what the probability distribution of the ensemble of “chance hypotheses” looks like.

In short: there are too many unknowns to calculate the CSI in Dr. Liddle’s example. A few more hints might be in order.

(9) In the case of proteins, on the other hand, the pattern is not mathematical (e.g. a sequence of numbers) but functional: proteins are long strings of amino acids that actually manage to fold up, and that perform some useful biological role inside the cell. Given this knowledge, scientists can formulate hypotheses regarding the most likely processes on the early Earth for assembling amino acid strings. If a few of these hypotheses stand out, scientists can safely ignore the rest. Thus the CSI in a protein should be straightforwardly computable.
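
In that functional setting, the usual move is to estimate the fraction of amino-acid sequences that can perform the function and convert it into bits. A minimal sketch (the 1-in-10^77 figure is the order of magnitude Douglas Axe published for one enzyme domain, discussed below; treat it here purely as an illustrative input):

```python
import math

def functional_bits(fraction_functional):
    """Functional information, in bits, implied by an estimated fraction of
    sequences that perform the function: -log2(fraction)."""
    return -math.log2(fraction_functional)

# Illustrative input only: a prevalence of functional sequences of 1 in 10^77
# corresponds to roughly 256 bits of functional information.
print(f"{functional_bits(1e-77):.0f} bits")
```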

I have cited the work of Dr. Kozulic and Dr. Douglas Axe in recent posts of mine (see here, here and here). Suffice it to say that the authors’ conclusion that the proteins we find in Nature are the product of Intelligent Design is not an “Argument from Incredulity,” but an argument based on solid mathematics, applied to the most plausible “chance hypotheses” for generating a protein. And to those who object that proteins might have come from some smaller replicator, I say: that’s not a mathematical “might” but a mere epistemic one (as in “There might, for all we know, be fairies”). Meanwhile, the onus is on Darwinists to find such a replicator.

(10) Finally, Professor Felsenstein’s claim in a recent post that “Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve” with their recent paper on the law of conservation of information, is a specious one, as it rests on a misunderstanding of Intelligent Design. I’ll say more about that in a forthcoming post.

Recommended Reading

Specification: The Pattern That Signifies Intelligence by William A. Dembski (version 1.22, 15 August 2005).

The Conservation of Information: Measuring the Cost of Successful Search by William A. Dembski (version 1.1, 6 May 2006). Also published in IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(5) (September 2009): 1051-1061.

Conservation of Information Made Simple (28 August 2012) by William A. Dembski.

Before They’ve Even Seen Stephen Meyer’s New Book, Darwinists Waste No Time in Criticizing Darwin’s Doubt (4 April 2013) by William A. Dembski.

Does CSI enable us to detect Design? A reply to William Dembski (7 April 2013) by Joe Felsenstein at Panda’s Thumb.

NEWS FLASH: Dembski’s CSI caught in the act (14 April 2011) by kairosfocus at Uncommon Descent

Is Darwinism a better explanation of life than Intelligent Design? (14 May 2013) by Elizabeth Liddle at The Skeptical Zone.

A CSI Challenge (15 May 2013) by Elizabeth Liddle at The Skeptical Zone.

Comments
And, taking a known enzyme family we can compare rates within the family, but also look at what happens as we make random substitutions or deletions etc, to see how function falls off, until we have effectively no function . . .
Sure. And notice that we can't even talk rationally about the effect of substitutions without mentioning and first having clear in our mind the relevant function, which is an observed physical phenomenon, not a mathematical abstraction. That function is the specification, not the various comparative reaction rates that we can run as an ancillary exercise.Eric Anderson
June 25, 2013, 05:20 PM PDT
EA: standard scientific measurement units are built up in a traceable chain from the seven base SI units: mass, length, time, amount of substance [a measure of number of particles counted in moles: 6.023 * 10^23 particles . . . ], current, temperature from Abs Zero, luminous intensity, IIRC, all other units are constructed via equations going back to base quantities. Chem eqns work in molecules, which scales up to moles. Time is based on standard oscillations of one form or another. Rates are in effect dQ/dt, which is dependent on inverse time, per unit time. And, taking a known enzyme family we can compare rates within the family, but also look at what happens as we make random substitutions or deletions etc, to see how function falls off, until we have effectively no function -- maybe because folding fails or the pocket for the rxn is no longer effective, etc. That would, in principle, give us a picture of an island of function. Does that help? KFkairosfocus
May 20, 2013, 12:13 PM PDT
Phineas: 1) Function is easily definable in machines, which are objects that perform some task that we can recognize as useful in a context. Biological machines are machines, and therefore the simple concept of function is perfectly apt for them An outboard motor that does not work is simply a machine that does not work. We can maybe recognize its potentiality (if we fix it, it can work), but the function, either potential or manifest, is always our reference. A mutated protein that does not work is similar to that. Darwinists love to speculate that is we change a fundamental aminoacid in a functional protein, so that it does not work no more, and then we change back the mutation, the protein acquires dFSCI with only one mutation. That is not true. The funtion is already potentially there, almost. We just fix the wrong aminoacid, and we get the full functionality, in the same way as we get the function if we fix the motor. 2) Function, in the machine sense, is not the only expression of purpose in design. For example, in language I would speak of meaning more than function. Meaning, however, can be "measured" functionally (for example, we can meausre the capacity of some phrase to convey a specific information to a group of readers). If you refer to a painting, obviously, it becomes even more complex. We could measure functionally the capacity of the painting to convey specific information on its subject, but the beauty, which is certainly an aspect of purpose, is certainly more difficult to "measure". Luckily, for the biological context, we don't really need all that. Machine functionality is more than enough for our purposes. There is certainly a lot of beauty in the biological world, and I believe it is very much evidence of design, but frankly I prefer to fight darwinists with an enzyme, where I can define function in a very objective way, and define specific quantitative methods to assess its presence or absence.gpuccio
May 20, 2013, 11:13 AM PDT
The only objections I have with "functional specification" as a label is that it could possibly cloud the issue in some cases. 1) I haven't been around a lot of outboard motors, but anecdotally, their functional nature is not what I'd call a certainty. But a non-functioning outboard motor is nearly as recognizable as an artifact of design as one that functions. Still, no one designs an outboard motor to fail, so I suppose it is the specification that must be functional and not the artifact itself. Even so, this could generate a bit of confusion. 2) Going further, it seems a stretch to call something like a work of art "functional," even though most clearly exhibit design. I'm not saying it is impossible to imagine that a work of art "functions" to provoke some emotion or thought, but this does seem categorically different to how an outboard motor "functions." In both of these cases, it seems to me that the core issue is purpose more so than function. It is the purpose behind the crafting of even the broken outboard motor that signals design. And it is the purpose behind a work of art that does the same. Unfortunately, purpose seems even more fuzzy than function when it comes to producing a mathematically rigorous inference to design. Further, purpose and design seem so closely linked as concepts that stating one in terms of the other might start to sound a bit tautological. I'm guessing this is why we stick with "functional" despite its evident shortcomings?Phinehas
May 20, 2013, 10:02 AM PDT
kf @108: I fear you are not understanding my point. Or, more likely, I'm not explaining myself very well, in which case I apologize. At the risk of beating a dead horse, let me try to articulate my point about the enzymes. I completely understand that reaction rates can be measured. And they can be mapped. And they can be compared. But in doing so, they have to measured against something; they have to be compared against something; they have to be mapped based on something. And what is that something? When we are dealing with a specification we have to talk about the actual specification. What are the specifics? As I asked above in #102, are we just looking for the fastest reaction rate that could exist in the known universe under ideal conditions, or some other reaction rate that is more optimal in our particular biological system? Until we identify our specification, there is nothing to calculate against to compare and map reaction rates. Furthermore, if enzyme A has a reaction rate of X and enzyme B has a reaction rate of Y, and assuming both are sufficiently complex, would we really take the position that A is designed (because it is sufficiently "specified") and B is not designed (because it is less "specified" -- in your terms has a slower reaction rate)? Of course not. They both have an identifiable function: A reacts at rate X and B reacts at rate Y. They are both sufficiently complex. They both are designed. The fact that A does its function better than B does A's function, or that B does its function better than A does B's function doesn't have any impact on whether each enzyme constitutes a specification. And you can't calculate that based on math alone. You have to look at each enzyme on its own merits. Once you've done so, then sure, you can compare enzymes and run comparative calculations. But that isn't to find the specification in the first place; it is just a comparative exercise after the specification has already been identified. I've admitted that there may be some rare exceptions (I'm not sure what they might be at this point) in which a mathematical formula can identify up front in an objective way a specification, but as a general matter there is not a formula or a calculation we can throw out at the world that will come back and identify for us the many things that are specified. Rather, we look at a system, we identify a function/meaning/specification. We do it from our experience and ability to identify function, meaning, goal-driven-activities, engineering acumen, and so on. Then at that point -- when we have adequately defined the specification -- we can start ascertaining whether the complexity related to that specification is sufficient to exclude chance and necessity. In other words, if I were to describe the design inference to someone I wouldn't say that it is simply based on two mathematical calculations (one for specification; one for complexity). Rather, it is based on (i) an real-world assessment of function/meaning/specification, using our logic, experience, understanding of purpose-driven outcomes, plus (ii) a mathematical calculation of complexity.Eric Anderson
May 20, 2013, 08:47 AM PDT
gpuccio @109:
I am not sure I understand Dembski’s attitude towards functional specification, or specification at all. He is probably worried about the subjective aspect of function definition, and is searching some totally objective, mathematical definition of specification.
I'm concerned as well that he may be going down this path. I don't think it is a fruitful approach and will likely just confuse things.Eric Anderson
May 20, 2013, 08:21 AM PDT
Eric Anderson: I believe we are in perfect agreement. You say: "I argue that this specification is recognized due to our ability to recognize function, meaning, and purpose-driven outcomes, as well as some logic and experience that are brought to the table. I don’t think we recognize a specification because we’ve done some calculation to determine whether the thing is specified enough." That is exactly the point. Functional specification is a categorical, binary value. It is either present or absent. But its assessment is based on some objective definition and quantitative rule, such as a minimal threshold of activity for an enzyme. So, specification is both subjective (it requires recognition by a conscious observer) and objective (the observer who recognizes the function must objectively define it and the criteria for its assessment, so that anybody can verify the binary value of its presence or absence in an object, according to the given definition). I am not sure I understand Dembski's attitude towards functional specification, or specification at all. He is probably worried about the subjective aspect of function definition, and is searching some totally objective, mathematical definition of specification. I am not a mathematician, so I cannot comment on the technical aspects of that problem. But I believe that the definition of functional specification I have given here is completely empirical: it can be applied objectively, and it is a proper basis for design detection. Moreover, as purpose is the essence of design, that definition relies on the same fundamental quality that defines the target to be detected: conscious recognition of purpose. So, I believe that the empirical definition of functional specification is also cognitively consistent and satisfying.gpuccio
May 20, 2013, 05:36 AM PDT
EA: Reaction rates and rate constants are routinely measured in Chemistry. It is a commonplace that as a result of such measurements, enzymes are known to accelerate attainments of equilibrium by orders of magnitude, sometimes from essentially zero speed to relevant functional outcomes in a reasonable time for a living cell. No need for relative metrics, we can use standard reaction kinetics and metrics for such cases. of course, we can also compare and take ratios to see how relatively effective something is, but that is secondary. KFkairosfocus
May 20, 2013, 12:56 AM PDT
gpuccio @99: Thanks for the quick review of the issues from the paper. Very helpful. @ 105: If I'm understanding you, I think I am essentially in agreement. The specification is recognized and defined as a result of a function or a property of the item in question. I argue that this specification is recognized due to our ability to recognize function, meaning, and purpose-driven outcomes, as well as some logic and experience that are brought to the table. I don't think we recognize a specification because we've done some calculation to determine whether the thing is specified enough. I agree with you that once a specification is identified and defined, then we can bring a complexity calculation to bear to see if we can rule out chance (typically using something like the universal probability bound). ----- Incidentally, your example of the tablet is a good one. One of the problems Bill Dembski got into (in my opinion) is that in some of his examples he was sloppy with how he defined the specification. As a result, it looked like in some of his examples that natural forces could easily account for the item in question (like, say, your example A of a paperweight). Bill's particular examples I'm thinking of were a city and a stool. My response was in the below link. It is terribly long and somewhat dry for most people, so I hesitate to inflict it on anyone, but might be worth checking if you have an hour to kill on a plane sometime: http://www.iscid.org/papers/Anderson_ICReduced_092904.pdfEric Anderson
May 19, 2013, 11:20 PM PDT
I'm sure that this is probably very late to the discussion, but isn't it tiresome that EL still challenging the concept of CSI? I remember exchanges on the boards here at UD where EL and assorted anti-ID partisans were challenging the very notion that DNA sequences could even properly be described as digital information!! Has she ever acknowledged being proven wrong on that score? If not, what intellectual authority can she claim to possess? It seems that she will happily advocate any view that opposes ID.Optimus
May 19, 2013, 10:39 PM PDT
Eric Anderson (#102): I have not followed your discussion in detail, but I would like to comment on your last post. Functional specification starts with a conscious observer that recognizes some function in a material object and defines it objectively, so that anyone cam measure that function in any possible object and assess it as present or absent. In that sense, the observer must also provide, in his definition, a minimal threshold for the function to be assessed as present. Only then can a computation of the complexity necessary to generate that function, as defined, be performed. The resulting functional complexity is relative to the function as defined, not absolute. I often make the example of a tablet on a desktop. I observe it and I define for it two different functions: a) acting as a desktop paperweight b) performing a specific list of computing actions The object is the same, but I am defining two very different functions. The complexity needed to provide each of them is completely different (very low for a, very high for b). The same is true for an enzyme. I observe that it can accelerate some reaction, and I define that function as the capacity to provide at least such acceleration in specific lab conditions. Then I can compute the functional complexity tied to that definition. So, I can compute different functional complexities tied to different minimal levels of activity. I hope that helps.gpuccio
May 19, 2013, 10:29 PM PDT
Phinehas @103, Good points, especially with regard to compressing a whole class of objects into a single evocative label. However I might be inclined to call that description underdetermined, to use vjtorley's vocabulary, because there would be nothing in that description which would specify how a certain piano is to be constructed; it's not specific enough. So if we consider a specification to be a description by which an object could be reproduced, or a prescription for reproducing it by way of a context, such as a computer program that generates a certain output, then we are dealing with a different "space" of complexity. A set of blueprints, a materials list, a construction plan, and a construction schedule, might constitute a prescription for constructing a building. But that specification is not as complex as the building itself. And the specification would be amenable to being digitized and compressed. So I think that complexity crops up not just in the thing which is specified, but in the specification as well, albeit at different levels. Perhaps KF could give his opinion on how specification relates to the thing specified.Chance Ratcliff
May 19, 2013, 06:01 PM PDT
InVivoVeritas: I'm not sure you haven't slipped back into describing the complexity of a piano instead of its specification. The simplest and shortest specification of a piano is, "a piano." There are an infinite number of complex arrangements of matter that will fit this specification, and we can be confident that every single one of them is designed. This is the power of language and functional concepts: that all of the complexity of a finely tuned instrument like a piano can be compressed into a single logos that when invoked can evoke both your complex "specification" and a million others. It seems to me that it might be this very compressiblility of complexity that is a powerful part of the design inference. The potency in looking at what is represented at the top of this page and realizing, "Hey,that's basically an outboard motor," should not be underestimated. But trying to calculate specificity any any sort of formal way seems to be a very slippery thing.Phinehas
May 19, 2013, 04:56 PM PDT
kf @94: Sure, the enzymes can be compared and scaled; I've acknowledged that. But against what are they being scaled and compared? Against the fastest possible reaction rate that could exist in the known universe under ideal conditions? Against a non-catalyzed rate? Against a median rate? Against a rate that would be optimal for a particular result in a particular biological context? We can't even start calculating anything until after we have decided -- on the basis of function, logic, engineering analysis and experience; not on the basis of math -- that X is the ideal 100% specification. Then we can rate other enzymes against this ideal 100% specification. Some too fast, others too slow, some requiring too much material, some not faithful enough, and on and on. But even then, are we saying that an enzyme that matches our criteria by, say, 90% is specified or not specified? What about one that matches 80% or 50%? I think we need to distinguish between comparative calculations (which can certainly be done once we have agreed upon various criteria) and a calculation that would in theory allow us to determine whether X is "specified" (which I question whether it can be done, except perhaps in rare cases).Eric Anderson
May 19, 2013, 02:02 PM PDT
Hi gpuccio, Thank you very much for your comments on Miller's response. Greatly appreciated. Thanks once again.vjtorley
May 19, 2013, 01:02 PM PDT
InVivoVeritas @97, well said. Looking at it from a natural language perspective is illuminating, especially when considering that language (logos) is a fundamental property of intelligence.Chance Ratcliff
May 19, 2013, 10:51 AM PDT
VJ (#84): Unfortunately, my time is very limited at present, and I don't feel like analyzing Miller's response to your essay. I have already discussed many of these things with Miller himself and other TSZers, some time ago, and in great detail. I think, however, that I can give a few brief comments about the papers that he quoted and you linked. The first paper is scarcely significant. It just shows a simple enzymatic activity of a short RNA sequence. And so? Let's go to the second. In essence, what it says is: the RNA world is a bad theory, but a protein first world, or other alternatives, are even worse. Obviously, the authors forget to add: except for a design theory of OOL, which is very good! :) If the RNS world theory is the best neo darwinism can offer to explain OOL, then I am very, very happy that I am on the other side. Just a couple of words on the RNA world theory: it would never qualify as a credible scientific theory, if it were not the best non design theory they have. There is absolutely no evidence to justify even the hypothesis of an RNA world. Just think: there is no trace in the whole world of autonomous living beings based on RNA only. They have never been observed, and there is no indirect trace of their existence. If we want to stick to facts, what can we say about OOL? It is very simple. In the beginning life was not present on our planet. At some point in time (probably very early) life appears. What life? If we stick to facts, and do not let our imagination wonder in self-created worlds that never existed, there is only one credible answer: prokaryotes, with DNA, RNA, and proteins, and a very structured core of fundamental functions that have survived, almost identical, up to now, including the functions of DNA duplication, DNA transcription, mRNA translation, the genetic code, and so on. And hundreds of complex proteins, perfectly working. IOWs, the first living being of which we have at least indirect evidence on our planet is LUCA. And LUCA, as far as we can know from facts, was essentially a prokaryote. This is the simple truth. All the rest is imagination. So, could LUCA emerge as such a complex being? Yes, if it was designed. Let's go to the last paper, about proteins. First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems: a) "We designed and constructed a collection of artificial genes encoding approximately 1.5×106 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins" b) "Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein." c) "We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective auxotrophs, but were unable to detect activity that was reproducibly above the controls." And now, my comments: a) This is the main fault of the paper, if it is intepreted (as Miller does) as evidence that functional proteins can evolve from random sequences. 
The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins. b) The second problem is that the paper is based on function rescue, not on the appearance of a mew function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is specially big, because we know nothing of how the "evolved" proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point). c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don't know what they do, and how they act at biochemical level. IOWs, with know no "local function" for the sequences, and have no idea of the functional complexity of the "local function" that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived. The fact remains that the hypotyhesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remain unproved, because no biochemical activity of that kind could be shown in vitro for them. These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.gpuccio
May 19, 2013, 10:12 AM PDT
Hi kairosfocus, On reflection, I'd agree with your point about it being sufficient to establish the existence of local fine tuning. Appealing to a higher level won't make the problem go away, of course, because a multiverse generator would itself have to be fine-tuned, as Dr. Robin Collins has argued. InVivoVeritas, I completely agree with your comment about language being the hallmark of intelligent designers, which is why it's appropriate to use text as the primary vehicle for capturing functional specification (FCSI). Food for thought.vjtorley
May 19, 2013, 02:55 AM PDT
Eric Anderson @89:
Also, there are many examples of specification — perhaps the vast majority of them — that don’t lend themselves to calculation. Does a piano have a specification? Sure. How do we calculate it? What about a phone or a car or an airplane? Is it possible even in principle to calculate the amount of specification in most designed things?
Chance Ratcliff:
If I’m not misunderstanding you, it should be possible to specify how to reproduce such a thing. A while back I toyed with the idea of a programming language that would specify items for manufacture in a theoretical manufacturing environment. It’s probably not a foolproof way to measure specification, but it would provide a way to specify a thing like a piano for manufacture, and would result in a digital program that could be used to assess complexity. With the advent of 3D printing, this is easier to imagine nowadays.
I tend to agree with Chance Ratcliff. A specification for a piano consists – as suggested - in a thorough, precise, detailed description of how to build, construct and assemble all its parts – in the proper working relationships - such that it will manifest the function/purpose for which it was designed: to produce certain sounds when its keys and pedals are pressed by whatever operator. We should note that this detailed description should comprise – among other things - the precise specification of all materials used to construct the piano – including the steel strings, bronze plaque, ivory keys and – if we were to go to extreme – how these materials and parts are produced/manufactured – or at least procured. Now let’s assume that the whole above description is expressed in English natural Language. That may assume that if any manufacturing and assembly diagrams are needed, there is a professional method to translate them unequivocally in plain English. Then one conventional (preliminary) way to measure the FCSI of the piano is to count the number of characters used by this textual description. Using the same method we can create an English natural language description (equivalent with its FCSI) for a 2013 Ford Mustang in a (huge) text file. The size of the text file will represent a measurement of the FCSI for our Ford Mustang. The sizes of the “descriptor” file for the piano and the descriptor file for the Ford will give an idea how their FCSI measurements compare. I would suggest that since the manifest ability of us human to be intelligent designers is based on our ability to use language (logos) then this can be the justification to use text as primary vehicle for capturing functional specification (FCSI) and to measure it. For specialized domains (programming, hardware design, music composition, knitting patterns, mechanical drawings, etc.) there might be specialized languages or formalisms to precisely define the specific actions, sequences, interactions, that lead to the achieving the intended goal of that domain activity. Let’s touch briefly on what would mean to create an accurate FCSI for an IPhone. Assuming that all software for the IPhone was written using 3 programming languages: Objective C, HTML and SQL, then the totality of all software (source files) written in the three languages above are considered an important part of the FCSI “archive” for the IPhone. However considering that the manufactured IPhone needs also hardware, integrated circuits, printed circuit boards, sensors, codecs, battery, wiring, LCD screen, plastic or metal enclosures, then the FCSI archive must comprise thorough, precise descriptions for the design, manufacturing, assembly and test of all these parts and components. A complete IPhone FCSI archive should comprise also thorough descriptions of all services on which IPhone relies like CDMS, HiFi, networking and communication protocols. This way of staffing into a FCSI textual archive all details at all levels of a product seems to be perfectly legitimate since all design efforts and results with all their dependencies make-up together the intelligent design of that product. Any omission of a part or aspect of the design from the FCSI 'archive' is equivalent with removal of a necessary part from a system and thus a failure in providing the envisioned functionality (see Behe's irreducibly complex system). Next interesting exercise would be to sketch - on the same lines - what should be in the FCSI 'archive' for a single cell organism. 
I anticipate that such an exercise will show from a new perspective why natural processes and random events have no chance in synthesising 'ex nihilo' such a complex FCSI 'archive' - and thus to create life.InVivoVeritas
May 18, 2013, 11:57 PM PDT
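
One crude way to operationalize the "count the characters" proposal in the comment above is to measure both the raw length of a description and its compressed length, since a redundant description overstates what it carries. A minimal sketch; the description string is a made-up placeholder, not a real piano specification:

```python
import zlib

description = (
    "Cut a cast-iron plate to the drawing; string 230 steel wires at the "
    "tensions listed in the schedule; fit 88 weighted key levers to the action "
    "frame; ..."  # placeholder text standing in for a full build specification
)

raw_bits = len(description.encode("utf-8")) * 8
compressed_bits = len(zlib.compress(description.encode("utf-8"), 9)) * 8
print(f"raw: {raw_bits} bits, compressed: {compressed_bits} bits")
```
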
KF @95, yes I think so. Perhaps nodes and arcs would provide less ambiguity, and hence be more appropriate. I'm not sure. But it's true that in my model at #93, there is likely more than one program which could account for the output. However I think we're wanting to look for the simplest possible program for any given output. Would this be provable? That's another question. It may not be, and that might be the crux of Eric Anderson's issues with quantification of specificity. Regardless, I think it's progress, and if not, it's thought provoking.Chance Ratcliff
May 18, 2013, 11:04 PM PDT
CR: A nodes and arcs network describes configuration. Think of it as a wiring diagram or exploded view. Such allows us to reduce organisation to description, i.e. set of structured strings, sim. to AutoCAD etc. Specification comes from degree of variability in the near neighbourhood. Tolerance for component variability, orientation and connexion. Also, this can be set up through modest low order digit noise bombing. BTW the network pattern is also a guide to assembly. One of the biggest points with cell structures is self assembly. That for the flagellum is a masterpiece of contrivance in itself. KFkairosfocus
May 18, 2013, 10:55 PM PDT
EA: enzymes promote rxn rates under conditions, so activity can be measured on that basis, and compared. KFkairosfocus
May 18, 2013, 10:48 PM PDT
Food for thought. Imagine a cube wrought of 1024^3 smaller cubes. Now imagine that each little cube could be any one of an arbitrary number of materials, say 64 just for kicks. This gives us a 3D block object with a resolution of 1024^3 cubes, each of which could be any one of 64 different materials. The config space for this output is astronomical. Now we might imagine different ways to arrive at specific configuration. For example, a sculptor could readily fashion such an object like he might fashion clay; only for each little cube, he could choose one of 64 different materials. Here we have a theoretical object which has a resolution sufficient for a great number of real objects. And we can increase or decrease the resolution arbitrarily. We could also imagine that an algorithm and a data set should be able to output objects in the cube's config space. If the complexity of the program for producing a given object is less than the complexity of the object in the cube's configuration space, then we have a candidate for the specification; and the specification's complexity could be cashed out in terms of the program's complexity. There are undoubtedly other issues to consider, and I haven't thought through it sufficiently, but that's a start, perhaps.Chance Ratcliff
May 18, 2013, 10:29 PM PDT
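
To put numbers on the configuration space described in the comment above, here is a quick sketch using exactly the resolution and material count stated there:

```python
import math

cubes = 1024 ** 3        # resolution stated in the comment: 1024^3 small cubes
materials = 64           # choices of material per small cube, as stated

config_space_bits = cubes * math.log2(materials)   # 1,073,741,824 * 6 bits
print(f"{config_space_bits:,.0f} bits (~{config_space_bits / 8 / 2**20:,.0f} MiB) "
      "to specify one object exhaustively")
# A generating program much shorter than this would be a candidate specification,
# as the comment suggests; its length lives in a much smaller configuration space.
```
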
Eric, well it's definitely calculating the complexity of the specification. I think the complexity of the output is in another config space.Chance Ratcliff
May 18, 2013, 09:52 PM PDT
. . . and would result in a digital program that could be used to assess complexity.
We're not talking about assessing complexity. We're talking about assessing specification. I think there is a tendency to view the latter as collapsing into the former, but it doesn't -- or at least mustn't, if we are to take the ability to detect design seriously. I think in some cases it might be possible to assess, measure, calculate specification. But I'm having a hard time wrapping my head around when that would be applicable and how it would be accomplished. So far the examples people have given, on closer inspection, I believe are really examples of calculating complexity. I need to think through kf's comments a bit more . . . yours too . . .Eric Anderson
May 18, 2013, 09:50 PM PDT
Eric,
"Does a piano have a specification? Sure. How do we calculate it?"
If I'm not misunderstanding you, it should be possible to specify how to reproduce such a thing. A while back I toyed with the idea of a programming language that would specify items for manufacture in a theoretical manufacturing environment. It's probably not a foolproof way to measure specification, but it would provide a way to specify a thing like a piano for manufacture, and would result in a digital program that could be used to assess complexity. With the advent of 3D printing, this is easier to imagine nowadays.Chance Ratcliff
May 18, 2013, 07:59 PM PDT
kf @72: Thanks for your thoughts.
Also, degree of function of say an enzyme can be measured on a suitable scale and correlated to the particular string config of the underlying entity.
I'm not sure I'm on board with this example. I agree we can take an enzyme function we have identified (or even a simple protein function like binding affinity) and assign it a 1 value. Then we could map other enzymes or proteins and calculate their deviation from 1 and end up with some sort of scale of the various enzymes/proteins. But that is not really a calculation of the underlying specification itself. It is just a comparative calculation of the various enzymes/proteins, based on our assignment of a '1' value to an ideal function. And note that faster catalytic action or more binding affinity does not necessarily mean better function in a particular instance. So the function/specification has to be determined largely independent of any calculation and then assigned an arbitrary value. Once that is done then, yes, we can calculate any comparative deviation, or amount, or percentage of specification a particular enzyme/protein might exhibit with respect to the identified function/specification. Also, there are many examples of specification -- perhaps the vast majority of them -- that don't lend themselves to calculation. Does a piano have a specification? Sure. How do we calculate it? What about a phone or a car or an airplane? Is it possible even in principle to calculate the amount of specification in most designed things?Eric Anderson
May 18, 2013, 07:49 PM PDT
PS: Insofar as the framework of cosmology can be represented as a nodes-arcs network of interacting mathematical entities, variables, relationships etc, we can generate an information estimate and could see how perturbation disturbs. But we already know, very fine tuned.kairosfocus
May 18, 2013, 02:50 PM PDT
VJT: Pardon, but the cosmological design inference is not a design inference per the filter. This pivots on the issue of scope, where we may see a multiverse suggested with 10^500 sub cosmi. That is, the speculative scope -- and there is simply no empirical observational warrant here, this is phil not sci -- is indefinitely huge. That calls for a different look. What we are dealing with instead is primarily fine tuning, where the observed cosmos sits at a LOCALLY deeply isolated operating point. This means that we face the lone fly on a portion of wall -- an attractive target -- swotted by a bullet argument of John Leslie. It matters not if elsewhere parts of the immense wall may be carpeted with flies. That is LOCAL fine tuning is sufficiently remarkable. The concept of functionally specific complex organisation applies, where our cosmos bears signs of being a put-up job, as Hoyle put it. KFkairosfocus
May 18, 2013, 02:47 PM PDT
Hi everyone, I just had another thought. Obviously, in order for cosmological Intelligent Design arguments to work as well as biological ID arguments, we need a definition of complex specified information that is broad enough to encompass not only the functional information we find in biological organisms, but also the information contained in the life-permitting properties of the cosmos. Since these designed properties of the cosmos can themselves be described as having an ultimate purpose which can be cashed out in functional terms (namely, to permit the generation of living, sentient and sapient beings) they can therefore be said to fall within the ambit of gpuccio's remark that "the functional definition of CSI is more natural and powerful: it recognizes purpose and meaning, which are the natural output of conscious beings, and design is the natural output of conscious beings." My two cents.vjtorley
May 18, 2013, 02:22 PM PDT