This is a post about complex specified information (CSI). But first, I’d like to begin with a true story, going back to the mid-1960s. A Cambridge astronomer named Anthony Hewish had designed a large radio telescope, covering more than four acres, in order to pick out a special group of objects in the sky: compact, scintillating radio sources called quasars, which are now known to be the very active and energetic cores of distant galaxies. Professor Hewish and his students were finally able to start operating their telescope by July 1967, although it was not completely finished until later on. At the time, Hewish had a Ph.D. student named Jocelyn Bell. Bell had sole responsibility for operating the telescope and analyzing the data, under Hewish’s supervision.

Six or eight weeks after starting the survey, Jocelyn Bell noticed that a bit of “scruff” was occasionally appearing in the data records. However, it wasn’t one of the scintillating sources that Professor Hewish was searching for. Further observations revealed that it was a series of pulses, spaced 1.3373 seconds apart. The pulses could not be man-made, as they kept to sidereal time (the time-keeping system used by astronomers to track stars in the night sky). Subsequent measurements of the dispersion of the pulse signal established that the source was well outside the solar system but inside the galaxy. Yet at that time, a pulse period of 1.3373 seconds seemed far too short for a star, and on top of that, the signal was uncannily regular. Bell and her Ph.D. supervisor were forced to consider the possibility of extraterrestrial life. As Bell put it in her recollections of the event (after-dinner speech, published in *Annals of the New York Academy of Sciences*, vol. 302, pp. 685-689, 1977):

We did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission.

The observation was half-humorously designated *Little green men 1* until a famous astronomer, Thomas Gold, identified these signals as rapidly rotating neutron stars with strong magnetic fields, in 1968. The existence of these stars had been postulated as far back as 1934, by Walter Baade and Fritz Zwicky, but no-one had yet confirmed their existence when Bell made her observations in 1967, and only a few astronomers knew much about them.

Here’s a question for readers: was Bell wrong to consider the possibility that the signals might be from aliens? Here’s another one: if you were searching for an extra-terrestrial intelligence, what criteria would you use to decide whether a signal came from aliens? As we’ll see, SETI’s criterion for identifying alien signals makes use of one form of *complex specified information.* The criterion – narrow bandwidth – looks very simple, but it involves picking out a sequence of events which is highly *surprising*, and therefore very complex.

My previous post, entitled Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information, dealt with **complex specified information** (CSI), as defined in Professor William Dembski’s paper, Specification: The Pattern that Signifies Intelligence. It was intended to answer some common criticisms of complex specified information, and also to explain why CSI, although defined in a mathematically rigorous manner, is *not* a physically computable quantity. Briefly, the reason is that Professor Dembski’s formula for CSI contains not only the physically computable term P(T|H), but also the *semiotic* term Phi_s(T). Specifically, Dembski defines the specified complexity Chi of a pattern T given chance hypothesis H, minus the tilde and context sensitivity, as:

Chi=-log2[10^120.Phi_s(T).P(T|H)],

where Chi is the specified complexity (or CSI) of a system,

Phi_s(T) is the number of patterns whose semiotic description by speaker S is at least as simple as S’s semiotic description of T,

P(T|H) is the probability of a pattern T with respect to the most plausible chance hypothesis H, and

10^120 is the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history, as calculated by theoretical computer scientist Seth Lloyd (“Computational Capacity of the Universe,” *Physical Review Letters* 88(23) (2002): 7901–4).
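Because the product 10^120.Phi_s(T).P(T|H) underflows ordinary floating-point arithmetic for any interesting case, Chi is best evaluated in log space. Here is a minimal Python sketch of the formula above; the sample inputs (Phi_s(T) = 10^20 and P(T|H) = 10^(-780), figures quoted later in this post for the bacterial flagellum) are illustrative, not measured values:

```python
from math import log2

def chi(log10_phi: float, log10_p: float) -> float:
    """Dembski's specified complexity, Chi = -log2(10^120 * Phi_s(T) * P(T|H)),
    evaluated from the base-10 logarithms of Phi_s(T) and P(T|H) so that
    the astronomically small product never has to be formed directly."""
    return -(120 + log10_phi + log10_p) * log2(10)

# Illustrative figures for the bacterial flagellum (discussed later in the post):
# Phi_s(T) = 10^20, P(T|H) = 10^-780
print(round(chi(20, -780)))  # about 2126 bits
```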

Some of the more thoughtful skeptics who regularly post comments on Uncommon Descent were not happy with this formula, so I’ve come up with a simpler one – **call it CSI-lite, if you will** – which I hope will be more to their liking. This post is therefore intended for people who are still puzzled about, or skeptical of, the concept of complex specified information.

The CSI-lite *calculation* I’m proposing here doesn’t require any semiotic descriptions, and it’s based on purely physical and quantifiable parameters which are found in natural systems. That should please ID critics. These physical parameters should have known *probability distributions.* A probability distribution is associated with each and every quantifiable physical parameter that can be used to describe each and every kind of natural system – be it a mica crystal, a piece of granite containing that crystal, a bucket of water, a bacterial flagellum, a flower, or a solar system. Towards the end of my post, I shall also outline a proposal for obtaining as-yet-unknown probability distributions for quantifiable parameters that are found in natural systems. But although CSI-lite, as I have defined it, is a physically computable quantity, the ascription of some physical feature of a system to *intelligent agency* is not. *Two conditions* need to be met before some feature of a system can be unambiguously ascribed to an **intelligent agent:** *first*, the physical parameter being measured has to have a value corresponding to a probability of 10^(-150) or less, and *second*, the system itself should also be capable of being described very briefly (low Kolmogorov complexity), in a way that either explicitly mentions or implicitly entails the surprisingly improbable value (or range of values) of the physical parameter being measured.

There are two things I’d like to say at the outset. First, I’m *not* going to discuss *functional* complex specified information (FCSI) in this post. Readers looking for a rigorous definition of that term will find it in the article, Measuring the functional sequence complexity of proteins by Durston, Chiu, Abel and Trevors (*Theoretical Biology and Medical Modelling* 2007, 4:47, doi:10.1186/1742-4682-4-47). This post is purely about CSI, and it can be applied to systems which lack any kind of functionality. Second, I should warn readers that I’m not a mathematician. I would therefore invite readers with a strong mathematical background to refine any proposals I put forward, as they see fit. And if there are any mathematical errors in this post, I take full responsibility for them.

Let us now return to the topic of how to measure complex specified information (CSI).

**What the CSI skeptics didn’t like**

One critic of Professor Dembski’s metric for CSI (Mathgrrl) expressed dissatisfaction with the inclusion of a *semiotic* term in the definition of complex specified information. She wrote:

[U]nless a metric is clearly and unambiguously defined, with clarifying examples, such that it can be objectively calculated by anyone so inclined, it cannot be used as the basis for claims such as those being made by some ID proponents.

She then quoted an aphorism by the science fiction writer Robert Heinlein: “If it can’t be expressed in figures, it is not science; it is opinion.”

From a purely mathematical standpoint, the key problem with the term Phi_s(T) is that descriptive simplicity is language-dependent; hence the number of patterns whose semiotic description by speaker S is at least as simple as S’s semiotic description of T will depend on the language which S is using. To be sure, mathematics can be considered a universal language which is shareable across all cultures, but even if all mathematicians could agree on a list of basic mathematical operations, there would still remain the problem of how to define a list of *basic predicates.* For instance, is “is-a-motor” a *basic* predicate when describing the bacterial flagellum, or does “motor” need to be further defined? And does “basic” mean “ontologically basic” or “epistemically basic”?

Another critic (Markf) faulted Professor Dembski’s use of a *multiplier* in the calculation of CSI, pointing out that if there are n independent events (e.g. 10^120 events in the history of the observable universe) and the probability of a single event having outcome x (e.g. a bacterial flagellum) is p, then the probability of at least one event having outcome x (i.e. at least one bacterial flagellum arising in the history of the cosmos) is not np, but (1-(1-p)^n). In reply, I argued that when the probability p is very small – as it is for the formation of a bacterial flagellum as a result of mindless, unintelligent processes – the binomial expansion gives (1-(1-p)^n) ≈ 1-(1-np) = np. However, Markf responded by observing that Dembski introduces a multiplier into his formula on page 18 of his article as a *general* way of calculating specificity, before it is even known whether p is large or small.

The criticisms voiced by Mathgrrl and Markf are not without merit. Today’s post is intended to address their concerns. The concept I am putting forward here, which I’ve christened “CSI-lite”, is not as full-bodied as the concept employed in Dembski’s 2005 paper on specification. Nevertheless, it *is* a physically computable quantity, and I have endeavored to make the underlying mathematics as rigorous as could be desired.

The modifications to CSI that I’m proposing here can be summarized under three headings.

**Modification 1: Replace the semiotic factor Phi_s(T) with the constant 10^30**

First, my definition of CSI-lite removes Phi_s(T) from the actual formula and replaces it with a *constant figure* of 10^30. The requirement for low descriptive complexity still remains, but as an *extra condition* that must be satisfied before a system can be described as a specification. So Professor Dembski’s formula now becomes:

CSI-lite=-log2[10^120.10^30.P(T|H)]=-log2[10^150.P(T|H)]. (I shall further refine this formula in Modification 2 below.)

Readers of *The Design Inference: Eliminating Chance through Small Probabilities* and *No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence* will recognize the 10^150 figure, as 10^(-150) represents the universal probability bound which Professor Dembski argued was impervious to any probabilistic resources that might be brought to bear against it. Indeed, in Addendum 1 to his 2005 paper, Specification: The Pattern that Signifies Intelligence, Dembski wrote: “as a rule of thumb, 10^(-120)/10^30 = 10^(-150) can still be taken as a reasonable (static) universal probability bound.”

Why have I removed Phi_s(T)? First, Phi_s(T) is not a physical quantity as such, but a *semiotic* one. Replacing it with the constant figure of 10^30 makes CSI-lite *physically computable*, once P(T|H) is known. Second, Phi_s(T) is relatively insignificant in the calculation of CSI, anyway, as the exponent of 10 in Phi_s(T) is *dwarfed* in magnitude by the exponents of the other terms in the CSI formula. For instance, in my computation of the CSI of the bacterial flagellum, Phi_s(T) was only 10^20, which is much smaller than 10^120 (the maximum number of events in the history of the observable universe), while P(T|H) was tentatively calculated as being somewhere between 10^(-780) and 10^(-1170). The exponents of both these figures are very large in terms of their *absolute* magnitudes: even 780 is *much* larger than 20, the exponent of Phi_s(T). Thus Phi_s(T) will not usually make a big difference to CSI calculations, except in “borderline” cases where P(T|H) is somewhere between 10^(-120) and 10^(-150). Third, the overall effect of including Phi_s(T) in Professor Dembski’s formulas for a pattern T’s specificity, sigma, and its complex specified information, Chi, is to *reduce* both of them by a certain number of bits. For the bacterial flagellum, Phi_s(T) is 10^20, which is approximately 2^66, so sigma and Chi are both reduced by 66 bits. My formula makes that 100 bits (as 10^30 is approximately 2^100), so my CSI-lite computation represents a very conservative figure indeed.
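The bit equivalences quoted above are quick to check, since converting an exponent of ten into an exponent of two is just a matter of multiplying by log2(10) ≈ 3.32:

```python
from math import log2

# Converting powers of ten to powers of two: 10^k = 2^(k * log2(10))
print(round(20 * log2(10)))   # 10^20 is roughly 2^66
print(round(30 * log2(10)))   # 10^30 is roughly 2^100
```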

Readers should note that although I have removed Dembski’s specification factor Phi_s(T) from my *formula* for CSI-lite, **I have retained it as an additional requirement**: in order for a system to be described as a *specification*, it is not enough for CSI-lite to exceed 1; the system itself must *also* be capable of being described *briefly* (low Kolmogorov complexity) in some *common language*, in a way that either explicitly mentions pattern T, or entails the occurrence of pattern T. (The “common language” requirement is intended to exclude the use of artificial predicates like grue.)

**Modification 2: Instead of multiplying the “chance” probability p of observing a pattern T here and now by the number of trials n, use the formula (1-(1-p)^n)**

My second modification of Professor Dembski’s formula relates to Markf’s argument that if we wish to calculate the probability of the pattern T’s occurring at least once in the history of the universe, then the probability p of pattern T occurring at a particular time and place as a result of some unintelligent (so-called “chance”) process should **not** be *multiplied* by the total number of trials n during the entire history of the universe. Instead one should use the formula (1-(1-p)^n), where in this case p is P(T|H) and n=10^120. Of course, my CSI-lite formula uses Dembski’s original conservative figure of 10^150, so my *corrected* formula for CSI-lite now reads as follows:

CSI-lite=-log2(1-(1-P(T|H))^(10^150)).

If P(T|H) is very low, this formula is very closely approximated by: CSI-lite=-log2[10^150.P(T|H)]. Indeed, *I would strongly advise readers to use this approximation* when calculating CSI-lite online, because it will at least yield meaningful answers, whereas the exact formula will not, owing to the current limitations of online calculators. For instance, if P(T|H) equals 10^(-780), which was the naive upper bound I used in my last post for the probability of a bacterial flagellum arising as a result of a “blind” (i.e. non-foresighted) process, then (10^150).P(T|H) equals 10^(-630), and since log2(10) is 3.321928…, log2(10^(-630))=-2092.8147, so the minimum value of the CSI-lite for a bacterial flagellum should be approximately 2093, which is well in excess of the cutoff threshold of 1. But if you try to calculate (1-(1-(10^(-780)))^(10^150)) online, all you will get is a big fat zero, which isn’t much help.
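For readers who would like to verify that the exact formula and its approximation really do agree, here is a sketch using Python’s arbitrary-precision decimal module, which sidesteps the underflow that defeats ordinary calculators. The 10^(-780) input is the illustrative upper bound quoted above, not a measured value:

```python
from decimal import Decimal, getcontext

getcontext().prec = 1500   # enough digits to distinguish 1 - 10^-630 from 1

p = Decimal(10) ** -780    # illustrative P(T|H) for the bacterial flagellum
n = Decimal(10) ** 150     # Dembski's conservative number of trials

# Exact formula, computed through logarithms: (1-p)^n = exp(n * ln(1-p))
q = 1 - (n * (1 - p).ln()).exp()          # 1 - (1-p)^n, roughly 10^-630
csi_exact = -q.ln() / Decimal(2).ln()     # -log2(1 - (1-p)^n)

# The recommended approximation: -log2(n * p)
csi_approx = -(n * p).ln() / Decimal(2).ln()

print(round(csi_exact), round(csi_approx))  # both come out near 2093
```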

**Modification 3: Broaden the scope of CSI to include supposedly “simple” cases – e.g. a highly singular observed value for a physical parameter**

The third and final modification I’ve made to Professor Dembski’s definition of CSI is that I’ve attempted to broaden its scope. Professor Dembski’s formula was powerful enough to be applied to any highly specific pattern T, be it a complex structure (e.g. a bacterial flagellum) or a specified sequence of digits (e.g. a code). However, Mathgrrl pointed out that SETI doesn’t look for complex signals when searching for life in outer space: instead, it looks for signals whose range of frequencies falls within a very narrow band. I also started thinking about Arthur C. Clarke’s example of the monolith on the moon in his best-selling novel *2001*, and I suddenly realized that the definition of CSI could easily be generalized to cover these cases. In a nutshell: the “surprise factor” attaching to some physical parameter of a system can be used to represent its CSI. This was a very belated recognition on my part, which should have been obvious to me much earlier on.

As far back as 1948, the mathematician Claude Shannon pointed out that we can define the amount of information provided by an event A having probability p(A) by the formula I(A)=-log2(p(A)), where the information is measured in bits (A Mathematical Theory of Communication, *Bell System Technical Journal*, Vol. 27, pp. 379–423, 623–656, 1948). Subsequently, engineers such as Myron Tribus (*Thermostatics and Thermodynamics*, D. van Nostrand Company, Inc., Princeton NJ, 1961) linked the notion of *surprise* with improbability. Tribus coined the term “surprisal” to describe the unpredictability of a single digit or letter in a word, and hence how surprising it is – the idea being that a highly improbable outcome is very surprising, and hence more informative. More recently, Intelligent Design theorists, including Professor Dembski, have linked complex specified information (CSI) to the *improbability* of a system, as the presence of P(T|H) in the formula shows.
And in 2002, Professor Paul Davies publicly praised Dembski for linking the notion of “surprise” to that of “design”:

“…I think that Dembski has done a good job in providing a way of mathematizing design. That is really what we need because otherwise, if we are just talking about subjective impressions, we can argue till the cows come home.

It has got to be possible, or it should be, to quantify the degree of “surprise” that one would bring to bear if something turned out to be the result of pure chance. I think that that is a very useful step.” (Interview with Karl Giberson, 14 February 2002, in *Christianity Today*. Emphasis mine – VJT.)

All I’m doing in this post is explicitly linking CSI-lite to the notion of *surprise*. What I’m proposing is that the “surprise factor” attaching to some physical parameter of a system (or its probability distribution) can be used to represent its CSI-lite.

I would now like to explain how my definition of CSI-lite can be applied to supposedly “simple” but nevertheless surprising cases – a highly singular observed value for a single physical parameter, an extremely narrow range of values for one or more physical parameters, and an anomalous locally observed probability distribution for a physical parameter – in addition to the complex patterns already discussed by Professor Dembski.

My explanation makes use of the notion of a **probability distribution**. Readers who may have forgotten what a probability distribution is, might like to think of a curve showing the height distribution for American men, which roughly follows a Bell curve, or what’s known as a *normal* distribution, in mathematical circles. Many other probability distributions are possible too, of course. If the variable being measured is *continuous* (like height), then the probability distribution is called a *probability density function*. (The terms *probability distribution function* and *probability function* are also sometimes used.) A probability density function for a continuous variable is always associated with a *curve.* Most people, when they think of a probability distribution function, imagine a *symmetrical* curve such as a Bell curve, but for other functions, the curve may be asymmetrical – for instance, it may be skewed to one side. For *discrete* variables (e.g. the numbers that come up when a die is rolled), mathematicians use the term probability mass function, instead of the term “probability density function.” For a fair die, the probability mass function assigns a probability of 1/6 to each of the six faces. With a probability mass function, there is no curve, but there is a set of probabilities (e.g. 1/6 for each face on a fair die), which add up to 1. Even though the probability mass function for a discrete variable has no curve, we can still refer to this function as a **probability distribution**.
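As a concrete illustration of a probability mass function, the fair-die distribution can be written down in a few lines of Python:

```python
# Probability mass function for a fair six-sided die:
# six isolated values, each with probability 1/6
pmf = {face: 1 / 6 for face in range(1, 7)}

# The defining property of any probability distribution:
# the probabilities sum to 1
total = sum(pmf.values())
print(abs(total - 1) < 1e-12)  # True
```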

In the discussion that follows, readers might like to keep the image of a Bell curve in their minds, for simplicity’s sake. Readers who are curious can find out more about various kinds of probability distributions here if they wish. In the exposition that follows, the physical parameter being measured may be either *discrete* or *continuous*. However, *I make no assumptions* about the *shape* of the probability distribution for the parameter in question. One notion I do make use of, however, is that of *salience.* I assume that it is meaningful to speak of a probability distribution as having one or more salient points. A **salient point** would include the following: an *extreme* or end-point (e.g. a uniform probability distribution function over the interval [0,1] has two salient points, at x=0 and x=1), an *isolated point* (e.g. the probability distribution for a die has six salient points – the values 1 to 6), a *point of discontinuity* or sudden “jump” on a probability distribution curve, and a *maximum* (e.g. the peak in a Bell curve). The foregoing list is not intended to be an exhaustive one.

**Definitions of CSI-lite for a physical parameter, a probability distribution, a sequence and a structure**

**CASE 1: CSI-Lite for a locally observed value of a SINGLE PHYSICAL PARAMETER**

**Case 1(a): An anomalous value for a single physical parameter.** (The parameter may be discrete or continuous.)

CSI-lite=-log2(1-(1-p)^(10^150)). Where p is very small, we can approximate this by: CSI-lite=-log2[(10^150).p].

If the parameter is *discrete*, p represents the probability of the parameter having that value.

If the parameter is *continuous*, p represents the probability of the parameter having that value, *within the limits of the accuracy of the measurement*.

Additionally, the value has to be *easily describable* (low Kolmogorov complexity) *and* a *salient point* in the probability distribution.

**Example: A monolith found on the moon, as in Arthur C. Clarke’s novel, 2001.**

In this example, I’ll set aside the fact that the lengths of the monolith’s sides in Arthur C. Clarke’s novel were in the exact ratio 1:4:9 (the squares of the first three natural numbers) – a striking coincidence which I discussed in my previous post – and focus on just *one* physical property of the monolith: the physical property of *smoothness* associated with each of the monolith’s faces. Smoothness can be defined as a *lack* of surface roughness, which is a quantifiable physical parameter. No stone in nature is perfectly smooth: even diamonds in the raw are described as rough. Now ask yourself: how would you react if you found a perfectly smooth black basalt monolith on the moon? When I say “perfectly smooth” I mean: smooth to the nearest nanometer, which is only a few times the diameter of an atom. That’s about as smooth as you could possibly hope to get, even in principle, and it’s many orders of magnitude smoother than any of the columnar basalt formations found in Nature. If you were to plot the smoothness of all the *natural* pieces of basalt that had ever been measured, you would obtain a probability distribution for smoothness, and you would notice that the *perfect* smoothness of the monolith fell far outside the 10^(-150) cut-off point for your probability distribution. What should you conclude from this fact?

If the monolith were merely exceptionally smooth but not *perfectly* so, you might not conclude anything, although you would probably strongly suspect that intelligent agency was involved in the making of the monolith. Alternatively, if the smoothness of the monolith were not perfect, but could be quantified exactly in simple language (e.g. “as smooth as a diamond segment can cut it”), the ease of description (low Kolmogorov complexity) would further confirm your suspicion that it was the work of an intelligent agent, although you might wonder why it used diamond cutting tools too. However, all your doubts would vanish if you could describe the monolith on the moon in terms that entail that its smoothness has a *salient value:* “perfectly smooth monolith.” This is a *limiting value*, so it’s a *salient point* in the probability distribution. Thus the smoothness of the monolith is not only astronomically improbable, but also easily describable *and* mathematically salient. That certainly makes it a **specification**, which warrants the ascription of its smoothness to some intelligent agent. We can then calculate the CSI-lite according to the formula above, once we have a probability distribution for the smoothness of naturally formed basalt. I’ll discuss how we might obtain one, below.

**CASE 2: CSI-lite for a locally observed PROBABILITY DISTRIBUTION relating to a SINGLE PHYSICAL PARAMETER**

**Case 2(a): An anomalously high (or low) value within the locally observed probability distribution for a single physical parameter** (i.e. a blip in the curve, for the case of a continuous parameter), **or an anomalously high (or low) *range of values*** (i.e. a bump in the curve, for the case of a continuous parameter).

If the parameter is *discrete*, p represents the probability of the parameter having the anomalous value or range of values.

If the parameter is *continuous*, p represents the probability of the parameter having that anomalous value or range of values, *within the limits of the accuracy of the measurement*.

CSI-lite=-log2(1-(1-p)^(10^150)), where p is the probability of the locally observed probability distribution having the anomalous value or range of values. Where p is very small, we can approximate this by: CSI-lite=-log2[(10^150).p].

Additionally, the blip (or bump) in the locally observed probability distribution has to be *easily describable* (i.e. it should possess low Kolmogorov complexity), *and* it must also be a *salient point* (or salient range of points) in the probability distribution.

**Example: A bump in the math assignment score curve for a class of students (a true story)**

When I was in Grade 9, I had an excellent mathematics teacher. This teacher didn’t like to set exams. “You’re just regurgitating what you’ve already learned,” he would say. He preferred to set weekly assignments. Nevertheless, at the end of each term, students in all mathematics classes were required to take an exam. My teacher didn’t particularly like that, but of course he had to conform to the school’s assessment policies. After marking the exams, the students in my class had a review lesson. Our teacher showed us two graphs. One was a graph of the exam scores, and the other was a graph of our assignment scores. The first graph had a nice little Bell curve – just the sort of thing you’d expect from a class of students. *The second graph was quite different.* Instead of a tail on either side, there was **a rather large bump** at the high end of the graph. The teacher pointed this out, and correctly inferred that someone had been helping the students with their assignments. He was absolutely right. That someone was me. Of course, a bump in a curve is not the same thing as a narrow blip, but it still possesses mathematically salient values (e.g. the beginning and end of the range where it peaks), which my teacher was able to identify. Here, we have a case where an inference to intelligent agency was made on the basis of an unexpected local bias in a probability distribution curve which one would ordinarily expect to look something like a Bell curve.

My teacher was perfectly right to suspect the involvement of an agent, but was his case a watertight one? Not according to the criteria which I’m proposing here. To *absolutely* nail the case, one would need to mathematically demonstrate that the probability of a locally observed probability distribution having that bump fell below Professor Dembski’s universal probability bound of 10^(-150) – quite a tall order. One would also need a *short description* of the probability distribution for the students’ assignment work in terms which *specified* the results obtained – e.g. “Students A, B and C always get the same result as student X.” (As it happened, my assistance was not *that* specific; I only helped the students with questions they asked me about.) Still, for all *practical* intents and purposes, my mathematics teacher’s inference that someone had been helping the students was a rational one.

**Case 2(b): An anomalously narrow range for the locally observed probability distribution for a single physical parameter – i.e. a narrow band of values.**

CSI-lite=-log2(1-(1-p)^(10^150)), where p is the probability of the curve being that narrow. Where p is very small, we can approximate this by: CSI-lite=-log2[(10^150).p].

Additionally, the narrow band has to be *easily describable* (i.e. it should possess low Kolmogorov complexity), both in terms of the *value* of the parameter inside the band, and its narrow *range.*

**Example 1: The die with exactly 1,000,000 rolls for each face (Discrete parameter)**

This case has already been discussed by Professor Dembski in his paper, Specification: The Pattern that Signifies Intelligence. Here, the anomaly is that the locally observed probability distribution value for each of the die’s faces is *exactly* 1/6. Using Fisher’s approach to significance testing, Dembski calculates the probability of this happening by chance as 2.475×10^(-17). Dembski comments: “This rejection region is therefore highly improbable, and its improbability will in most practical applications be far more extreme than any significance level we happen to set.” However, since the rejection region I am using for my calculation of CSI-lite is an *extremely conservative* one – 10^(-150) – **I would need to observe a great many more throws before I would be able to unambiguously conclude that the die was not thrown by chance.** Finally, my *additional condition* is also satisfied, since the observed probability distribution is easily describable: the probability distribution *value* for each face is *identical* (1/6) and the *range* of observed values is precisely zero. Thus if we observe a sequence of rolls, with exactly equal counts for each face of the die, whose probability falls below 10^(-150), then this fact, combined with the ease of description of the outcome, warrants the ascription of the result to intelligent agency.
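Dembski’s figure can be reproduced directly: the chance probability of every face coming up exactly N times in 6N rolls of a fair die is the multinomial term (6N)!/(N!^6 . 6^(6N)). The following sketch evaluates its base-10 logarithm with math.lgamma, which avoids ever forming the huge factorials:

```python
from math import lgamma, log

def log10_prob_equal_counts(n_per_face: int, faces: int = 6) -> float:
    """Base-10 log of the probability that each face of a fair die
    comes up exactly n_per_face times in faces * n_per_face rolls."""
    total = faces * n_per_face
    ln_p = (lgamma(total + 1)                 # ln((6N)!)
            - faces * lgamma(n_per_face + 1)  # ln(N!^6)
            - total * log(faces))             # ln(6^(6N))
    return ln_p / log(10)

# One million throws per face: p is about 2.475 x 10^-17
print(round(log10_prob_equal_counts(1_000_000), 3))  # about -16.606
```

Note that this probability falls off only polynomially in N, so very much longer runs of exactly equal counts are needed before it dips below the 10^(-150) bound.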

**Example 2: SETI – The search for the narrow signal bandwidth (Continuous parameter)**

Mathgrrl argued in a previous post that SETI does not look for complex features of signals, when hunting for intelligent life-forms in outer space. Instead, it looks for one very simple feature: a narrow signal bandwidth. She included a link to the SETI Website, and I shall reproduce the key extract here:

How do we know if the signal is from ET? Virtually all radio SETI experiments have looked for what are called “narrow-band signals.” These are radio emissions that are at one spot on the radio dial. Imagine tuning your car radio late at night… There’s static everywhere on the band, but suddenly you hear a squeal – a signal at a particular frequency – and you know you’ve found a station.

Narrow-band signals, say those that are only a few Hertz or less wide, are the mark of a purposely built transmitter. Natural cosmic noisemakers, such as pulsars, quasars, and the turbulent, thin interstellar gas of our own Milky Way, do not make radio signals that are this narrow. The static from these objects is spread all across the dial.

In terrestrial radio practice, narrow-band signals are often called “carriers.” They pack a lot of energy into a small amount of spectral space, and consequently are the easiest type of signal to find for any given power level. If E.T. is a decent (or at least competent) engineer, he’ll use narrow-band signals as beacons to get our attention.

Personally, I think SETI’s strategy is an excellent one. However, Mathgrrl’s objection regarding complexity misses the mark. In fact, **a narrow band signal is very complex**, precisely because it is extremely *improbable*, and hence extremely *surprising.* For instance, suppose that according to our standard model, the probability of a particular radio emission at time t falling entirely within a narrow band of frequencies is 10^(-6), assuming that it is the result of a natural, unintelligent process. Using that model, if we receive 25 successive emissions within the same narrow band from the same point in space, then we have a succession of very surprising events whose combined probability is (10^(-6))^25=10^(-150), on the assumption that these events are independent of one another, and that they proceed from the same natural, unintelligent source. This figure of 10^(-150) represents Dembski’s *universal probability bound.* Additionally, the sequence is *easily describable* (i.e. it possesses low Kolmogorov complexity) in terms of its *range*, since all of the values fall within *the same* narrow band. The combination of astronomically low probability and easy describability of the band range certainly makes it highly plausible to ascribe the sequence of signals to an intelligent agent, but the case for intelligent agency being involved is still not an airtight one. What’s missing?
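The arithmetic in the paragraph above is easy to check; note that the 10^(-6) single-emission band probability is an assumption of the toy model, not a measured value:

```python
import math

p_band = 1e-6   # assumed probability that one natural emission falls in the narrow band
n = 25          # successive emissions observed from the same point in space

# Assuming independence, the log-probabilities of the emissions add.
log10_combined = n * math.log10(p_band)
print(log10_combined)   # ≈ -150: the combined probability is 10^-150, the universal bound
```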

A skeptic might reasonably object that the *value* of the band frequency is still mathematically arbitrary, and that it possesses *high* Kolmogorov complexity. For instance, what’s so special about a band with a frequency of 37,184.93578286 Hertz? However, if in addition, the *value* of this narrow band could be *easily described* in non-arbitrary terms – for example, if its frequency were exactly pi times the natural frequency of hydrogen – then that would place the ascription of the narrow band to an intelligent agent beyond all rational doubt. The Wow! signal, detected by Dr. Jerry Ehman on August 15, 1977, while he was working on a SETI project at the Big Ear radio telescope of The Ohio State University, is therefore particularly interesting, because its frequency very closely matched that of the hydrogen line (an easily describable, non-arbitrary value) *and* its bandwidth was very narrow. **The Wow! signal therefore satisfies the requirement for low Kolmogorov complexity.** Dr. Ehman’s detailed report on the Wow! signal, written on the 30th anniversary of its detection and updated in May 2010, is available here. Unfortunately, the Wow! signal was observed on only one occasion, and only six measurements (data points) were made over a 72-second period. Astronomers therefore do not have a sufficient volume of data to place the natural occurrence of the Wow! signal below Dembski’s universal probability bound of 10^(-150). Hence it would be premature to conclude that the signal was sent by an intelligent agent. Dr. Ehman reaches a similar verdict in the conclusion of his 30th anniversary report on the Wow! signal. As he puts it:

Of course, being a scientist, I await the reception of additional signals like the Wow! source that are able to be received and analyzed by many observatories. Thus, I must state that the origin of the Wow! signal is still an open question for me. There is simply too little data to draw many conclusions. In other words, as I stated above, I choose not to “draw vast conclusions from ‘half-vast’ data”.

Returning to Jocelyn Bell’s “Little Green Men” signal: I would argue that *in the absence of* an alternative natural model that could account for the rapidity and regularity of the pulses she observed, the inference that they were produced by aliens was not an unreasonable one. On the other hand, the *value* of the pulse rate (1.3373 seconds) was not easily describable: its Kolmogorov complexity is high. Later, Bell and other astronomers found other objects in the sky, with pulse rates that were very precise in their *range* – in some cases, as precise as an atomic clock – but whose *values* lacked the property of low Kolmogorov complexity: the pulse rates for these objects (now known as pulsars) varied from 1.4 milliseconds to 8.5 seconds. Since the observed *values* of these pulse rates lacked the property of low Kolmogorov complexity, then even if astronomers at that time had known nothing about pulsars, it would have been premature to conclude that aliens were producing the signals. An *open verdict* would have been a more reasonable one.

**CASE 3: CSI-lite for a SEQUENCE**

CSI-lite=-log2(1-(1-p)^(10^150)), where p is the probability of a sequence having a particular property, under the most favorable naturalistic assumptions (i.e. assuming the occurrence of the most likely known unintelligent process that might be generating the sequence). Where p is very small, we can approximate this by: CSI-lite=-log2[(10^150).p].
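If one wanted to evaluate this formula numerically, the exact form needs care for very small p, since naively computing (1-p)^(10^150) in floating point loses all precision. A sketch of both the exact formula and the small-p approximation, using log1p and expm1 for numerical stability:

```python
import math

D = 1e150   # the reciprocal of Dembski's universal probability bound

def csi_lite_exact(p):
    """CSI-lite = -log2(1 - (1-p)^D), computed stably:
    (1-p)^D = exp(D * log(1-p)), via log1p/expm1."""
    return -math.log2(-math.expm1(D * math.log1p(-p)))

def csi_lite_approx(p):
    """Small-p approximation: CSI-lite ≈ -log2(D*p)."""
    return -math.log2(D * p)

p = 1e-155
print(csi_lite_exact(p), csi_lite_approx(p))   # both ≈ 16.61 bits
```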

Additionally, the property which characterizes the sequence has to be *easily describable* (i.e. it should possess low Kolmogorov complexity). Since we are talking about a sequence here, it should be expressible according to a short mathematical formula.

**Example: The Champernowne sequence.**

This case was discussed at length by Professor Dembski in his paper, Specification: The Pattern that Signifies Intelligence. If we confine ourselves to the first 100 digits, the probability p of a binary sequence matching the Champernowne sequence by chance is 2^(-100), which is still well above Dembski’s universal probability bound of 10^(-150). But if we make the sequence 500 digits long instead of 100, the probability p of a binary sequence matching the Champernowne sequence by pure chance falls below 10^(-150). The use of “pure chance” likelihoods is fair here, because we know of no unintelligent process that is capable of generating this sequence with any greater likelihood than pure chance. Additionally, the sequence is *easily describable*: the concatenation of all binary strings, arranged in lexicographic order. The Champernowne sequence would therefore qualify as a *specification*, if the first 500 binary digits were observed in Nature, and we could be certain beyond reasonable doubt that its occurrence was due to an intelligent agent.
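For concreteness, here is one standard construction of the binary Champernowne sequence (the binary numerals 1, 10, 11, 100, … concatenated; Dembski's lexicographic variant differs in detail but is equally easy to describe), together with a check that 500 bits take the chance probability below the universal bound:

```python
from itertools import count
from math import log10

def champernowne_binary(n_bits):
    """First n_bits of the binary Champernowne sequence:
    the binary numerals 1, 10, 11, 100, ... concatenated."""
    bits = ""
    for i in count(1):
        bits += format(i, "b")
        if len(bits) >= n_bits:
            return bits[:n_bits]

seq = champernowne_binary(500)
print(seq[:20])        # 11011100101110111100
print(500 * log10(2))  # ≈ 150.5: so 2^-500 falls below the 10^-150 bound
```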

**CASE 4: CSI-lite for a STRUCTURE**

CSI-lite=-log2(1-(1-p)^(10^150)), where p is the probability of the structure arising with the property in question, according to the *most plausible* naturalistic hypothesis for the process that generates the structure. Note: I am speaking here of a process which *does not require* the input of any **local information** either *during* or *at the start of* the process – input of information being the hallmark of an intelligent agent. Where p is very small, we can approximate this by: CSI-lite=-log2[(10^150).p].

Additionally, the property which defines this structure has to be *easily describable* (i.e. it should possess low Kolmogorov complexity).

The calculation of CSI-lite for the structure of an *artifact* relies on treating the artifact *as if* it were a natural system (just as we assumed that the monolith on the Moon was a columnar basalt formation, in Case 1 above), and then calculating the probability that it arose naturally, by the most plausible unintelligent process. If this probability turns out to be less than 10^(-150), *and* if the structure is easily describable, then we are warranted in ascribing it to an intelligent agent.

**Example 1: Mt. Rushmore (a man-made artifact)**

I discussed this case in considerable detail in my previous post, entitled Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information. In that post, I generously estimated the probability of geological processes giving rise to *four human faces* at a single location, at varying times during the Earth’s history, at 10^(-144). My estimate was a generous one: I assumed that each face, once it arose, would last forever, and that erosion and orogeny would never destroy it; hence the four faces could arise independently, at completely separate points in time. I also neglected the problem of size: I simply *assumed* that all of the faces would turn out to be roughly the same in size. Taking these two complicating factors (size and impermanence) into account, a good argument could be made that the probability of natural, unintelligent forces giving rise to four human faces at the same place, at some point in the Earth’s history, falls below Dembski’s universal probability bound of 10^(-150). Moreover, a brief description has already been provided (four human faces), so the Kolmogorov complexity is low. Hence we can legitimately speak of the carvings on Mt. Rushmore as a *specification*.

**Example 2: The bacterial flagellum (a structure occurring in Nature)**

This case was discussed by Professor Dembski in his paper, Specification: The Pattern that Signifies Intelligence as well as his earlier paper, The Bacterial Flagellum: Still Spinning Just Fine. The current (naive) estimate of p falls somewhere between 10^(-1170) and 10^(-780), which is far below Dembski’s universal probability bound of 10^(-150). Additionally, the description is very short: “bidirectional rotary motor-driven propeller”. A bacterial flagellum possesses *functional* rather than geometrical salience, so the requirement for salience in some purely mathematical sense (as in Cases 1, 2 and 3 above) does not arise.

Of course, skeptics are likely to question the provisional estimate for p, arguing that our level of ignorance is too profound for us to even formulate a ballpark estimate of p. I shall deal with this objection below, when I put forward a proposal for estimating formation probabilities for complex structures.

**A Worldwide Project for collecting Probability Distributions**

My definition of CSI-lite in cases 1, 2 and 3 above presupposes that we have probability distributions for each and every quantifiable physical parameter, belonging to each and every kind of natural system. Once realized, such a collection would enable someone to compute the complex specified information (CSI-lite) for any quantifiable parameter associated with any natural feature on Earth.

But how do we obtain all these probability distribution functions for each and every physical parameter that can be used to describe the various kinds of natural systems we find in our world? To begin with, we would need a **list of the various kinds of systems** that are found in nature. Obviously this list will encompass *multiple levels* – e.g. the various bacterial colonies that live inside my body, the bacteria that live in those colonies, the components that make up a bacterium, the molecules that are found in these components, and so on. Additionally, some items in the natural world will belong to *several different systems* at once. Moreover, some systems will have *fuzzy boundaries*: for instance, where exactly is the edge of a forest, and what counts as a genuine instance of a planet? But Nature is complicated, and we have to take it as we find it.

The next thing we would need is a **list of measurable, quantifiable physical parameters** that can be meaningfully applied to *each* of these natural systems. For instance, a mineral sample has quantifiable properties such as Mohs scale hardness, specific gravity, relative size of the constituent atoms, crystal structure and so on.

Finally, we need a **probability distribution** for *each* of these parameters. Since it is impossible to predict what the probability distribution for a given physical parameter will be on the basis of theorizing alone, a very large number of observations will be required. To collect this kind of data, one would need a worldwide program. Basically, it’s a very large set of global observations, carried out by thousands of dedicated people who are willing to contribute time and energy (think of Wikipedia). Incidentally, I’d predict that people will happily sign up for this program, once they realize its scope.

Here’s how it would work for one kind of system found in nature: maple trees. We already have a pretty complete map of the globe, thanks to Google Earth, so we should be able to identify the location and distribution of maple trees on Earth, and then take a very large representative sample of these trees at various locations around the globe, and measure all of the quantifiable physical parameters that can be used to describe a maple tree, for each individual tree in the sample (e.g. the height, weight, age etc. of every tree in the sample), in order to obtain a probability distribution for each of these parameters. A large sample is required, so that we can get a clear picture of the probability distribution for the *extreme values* of these parameters – e.g. does the probability distribution for the parameter in question really follow a Bell curve, or are there natural “cutoff points” at each end of the curve? (Trees can only grow so high, after all.)
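The mechanics of such a survey can be sketched in a few lines. The data below are synthetic (a hypothetical sample of maple-tree heights, standing in for measured field data), but they illustrate the two quantities the project would extract: the observed natural range, and the empirical probability of an extreme value:

```python
import random

random.seed(0)

# Synthetic stand-in for the survey: a hypothetical sample of maple-tree
# heights in metres (the real project would use measured field data).
raw = [random.gauss(20.0, 5.0) for _ in range(100_000)]
heights = [h for h in raw if 0.5 < h < 35.0]  # natural cutoffs: trees can only grow so high

def tail_probability(sample, cutoff):
    """Empirical probability of observing a value at or above the cutoff."""
    return sum(1 for x in sample if x >= cutoff) / len(sample)

print(min(heights), max(heights))       # the observed natural range
print(tail_probability(heights, 30.0))  # empirical upper-tail probability
```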

Having amassed all these probability distributions for each of the parameters associated with each kind of system in the natural world, one would want to store them in a giant database. Possessing such a database, one would then be in a position to *physically compute* the CSI-lite associated with *any* locally observed physical parameter, probability distribution, *or* sequence of events that applies to a particular system belonging to some known natural kind. If the CSI-lite exceeds 1 *and* the parameter, probability distribution, or sequence observed further exhibits low Kolmogorov complexity, then we can refer to it as a specification and ascribe its occurrence to an intelligent agent.

Thus the Worldwide Project for collecting Probability Distributions which I have described above would make it possible to physically compute the CSI-lite of *any* system covered by Cases 1, 2 and 3. What about Case 4 – complex structures? How could one physically compute the CSI-lite of these? To do this, one would need to implement a further project: a Worldwide Origins Project.

**A Worldwide Origins Project**

Imagine that you have a complete set of probability distributions for each and every physical parameter corresponding to each and every kind of natural system you can think of. To calculate the CSI-lite associated with *structures*, there’s one more thing you’ll need to do: for each and every kind of natural system occurring in the world, you’ll need to write an account of how the system typically originates, as well as an account of how the first such system probably arose. How do planets form? Where do clouds come from? How do igneous rocks form? What about the minerals they contain? Where did hippos originally come from, and how do they reproduce today?

Some of your accounts of origins will be provisional and even highly speculative, referring to conditions that cannot (at the present time) be reproduced in a laboratory. But that’s OK. In science, all conclusions are to some extent provisional, as I’ve argued previously. The important thing is that they are correctable.

What about multi-step processes occurring in the distant past – e.g. the origin of the first living cell? Here the most important thing is to break down the process into tractable bits. There are several influential hypotheses for the origin of the first living cell, but each hypothesis, if it is to be taken seriously, needs to identify *stages* along the road leading to the first life-forms. Surveys of the proponents of each hypothesis should also be able to identify a consensus as to:

(i) roughly how many *currently known stages* we can identify, along the road leading from amino acids and nucleotides to the first living cells, where any *chemical reaction*, or for that matter, any *movement of chemicals* from one place to another that’s required for a subsequent stage to occur, or any *environmental transformation* required for a subsequent stage in the emergence of life to occur, would be counted as a separate stage;

(ii) to the nearest order of magnitude, how many *missing stages* there would have been (counting the length of the longest chemical reaction paths, in those cases where there were multiple chemical reactions occurring in parallel);

(iii) a *relative ranking* of the difficulty of the known stages – i.e. which stages would have been the *easiest* (and hence most likely to occur, in a given period of time), which would have been of *intermediate* difficulty, and which would have been the *most difficult* to accomplish;

(iv) the probability of the easiest stages occurring, during the time window available for the emergence of life, and a ballpark estimate of the probability of some of the intermediate stages;

(v) a *quantitative ranking* comparing the degree of difficulty of the known stages – i.e. how many times harder (and hence less likely) stage X would have been than stage Y, over a given period of time, in the light of what we know.

The same process could be performed for each of the known *subsidiary processes* that would have had to have occurred in the evolution of life – e.g. the evolution of RNA, or evolution of the first proteins. My point is that at *some level* of detail, there will be a sufficient degree of agreement among scientists to enable a realistic probability estimate – or at least, upper and lower bounds – to be computed. At higher levels, the uncertainty will widen considerably, but if we go up one level at a time, we can control the level of uncertainty, and eventually arrive at an informed estimate for the upper and lower probability bounds on the likelihood of life emerging by naturalistic processes.
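The bookkeeping behind steps (i) to (v) reduces to multiplying per-stage probability bounds, which is most conveniently done in log space. A minimal sketch, with invented placeholder bounds standing in for the surveyed consensus estimates:

```python
import math

# Hypothetical per-stage probability bounds (lower, upper) over the
# available time window; placeholders for the surveyed consensus values.
stages = [
    (1e-3, 1e-1),    # an "easy" stage
    (1e-8, 1e-4),    # a stage of intermediate difficulty
    (1e-20, 1e-12),  # a difficult stage
]

# If the stages are sequential and independent, the log-probabilities add,
# giving overall lower and upper bounds on the whole pathway.
log10_lower = sum(math.log10(lo) for lo, hi in stages)
log10_upper = sum(math.log10(hi) for lo, hi in stages)
print(f"overall probability between 10^{log10_lower:.0f} and 10^{log10_upper:.0f}")
```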

So could there ever be a CSI-lite scanner? We might imagine a giant computer database that stored all this information on probability distributions, origins scenarios and their calculated likelihoods. Using such a database, we could then compute the CSI-lite associated with *any* locally observed physical parameter, probability distribution, sequence of events *or complex structure* that applies to a particular system belonging to some known natural kind. This is an in-principle computable quantity, given enough information. Kolmogorov complexity cannot be computed in this fashion, but by inputting enough linguistic information, we could imagine a machine that could come up with some good guesses – even if, like Watson on Jeopardy, it would sometimes make disastrous blunders.

I hope that CSI skeptics will ponder what I have proposed here, and I look forward to honest feedback.

VJT:

Some serious thinking.

Of course, what you have done is more or less to propose a fourth model for analysis and an associated metric, a modification of Dembski in light of other factors linked to the phenomena of making judgements about the world and evaluating likely causal factors on best explanation.

A good effort, and worth the try.

(Perhaps, someone could look at the odds of getting, on an alleged fair shuffle of a standard pack of cards, the A, K, Q, J of Hearts in a four-card serving. Then look at the odds of doing so 2, 5, 10, 100, . . . n times in succession [with the set of parallels being getting the same for the other three suits] and look at how the thresholds for thinking design vs chance look in that context:

at what point would a reasonable person — i.e. the judging observer as semiotic agent — cry cheat, and why? I suspect, for the constricted universe of a deck of cards, on earth in a given typical situation, that would come on that happening three times for sure, and would be highly plausible if it happened twice in succession in the same game: once is luck, twice coincidence, three times, a plan. A toy example like that is sometimes helpful. We see here search resources in a typical situation, a judging agent in action, and a rule of responsible inference on specified complexity that is deceptively simple looking.)

However, as a footnote: while the observing, judging semiotic agent is not explicitly onstage, s/he/it is still very much present in the process. Think, for instance, of the S/N ratio used in estimating channel capacity, where we have an implicit design inference to be able to distinguish meaningful — and designed — signal from meaningless noise due to forces of chance and/or necessity that we have to live with.
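Kairosfocus's card example can be made quantitative. The probability of a four-card serving being exactly the A, K, Q, J of Hearts is 1/C(52,4), and one can ask how many consecutive such deals it would take to cross the 10^(-150) bound (a reasonable judging observer would, of course, cry cheat long before that):

```python
from math import comb, log10, ceil

# Probability of a 4-card deal being exactly the A, K, Q, J of Hearts
p = 1 / comb(52, 4)      # 1/270,725, about 3.7e-6
print(p)

# Number of independent consecutive such deals needed for the run's
# combined probability p^n to drop below 10^-150
n = ceil(150 / -log10(p))
print(n)                 # 28 deals in a row
```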

That’s reality: semiotic agents exist and are inextricably a part of affairs including science.

We need to get over it.

GEM of TKI

PS: I think the onlooker can see now why I have taken as the simplest approach, the X-metric where we specify a bit depth that gives a config space that dwarfs the search capacity of the observable cosmos, 1,000 bits. Once we see something specific and functional towards a plausible purpose of a possible agent in a config space of that sort of degree of complexity, it is not at all likely to be coincidence, but instead a plan. That is how we look at a 747 and intuitively know that this was designed. (I am implying that we have a deep rooted intuitive concept of odds, risk and uncertainty. That is exploited in management decision making on a routine basis, and it is for instance the root of our concept of a dangerously risky act that if done shows irresponsibility. It is also the root of our recognition of the courage of a man who like Horatio is willing to stand in “yon path where a thousand can be stopped by three,” to — if successful — buy time to drop the bridge leading into the city. A chance, however slim, is much better than an utterly adverse certainty.)
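Kairosfocus's 1,000-bit threshold can be put in numbers: a 1,000-bit configuration space holds 2^1000 configurations, which dwarfs the 10^150 figure used throughout this thread:

```python
import math

bits = 1000
log10_configs = bits * math.log10(2)
print(log10_configs)          # ≈ 301.03: about 10^301 configurations

bits_for_bound = 150 / math.log10(2)
print(bits_for_bound)         # ≈ 498.3: 10^150 corresponds to roughly 500 bits
```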

PS 2: Your CSI-Lite scanner would of course embed the stored collective judgements of semiotic agents. Funny, how we are more willing to trust a machine that stores the results than the agents. The machine is no better than its algorithm, its input data and the underlying assumptions, judgments and biases: GIGO and QIQO. (Quality in, quality out.)

Kairosfocus

Thank you for your kind comments. I was intrigued to read about your X-metric, so I googled it and found more information here. Very interesting reading, I must say, kairosfocus. Thank you.

By the way, I entirely agree with your conclusion that my CSI-Lite scanner would embed the stored collective judgements of semiotic agents. It is strange how much trust people repose in mechanical devices, isn’t it?

Yup,

quite strange.

Hide the string-pulling, puppet controlling hands behind the curtain of a mathematical and mechanical apparatus, and we are comfortably reassured.

But, pull the curtains, and is the Wizard behind the scenes as impressive as the dancing puppet?

Even, if s/he wears a comforting lab-coat?

But GIGO and thankfully QIQO.

G

PS: That X-metric is the brute force simple metric [it even evades being a direct probability estimate, it is a search limitations metric . . . what happens when a cosmos scale search rounds down to a practical zero . . . ] I have been using to give a first measure of FSCI, without having to elaborate on Shannon’s H and extensions [a la Durston et al], or studies on families of proteins etc, or on the complexities of Dembski’s metric. Your link goes to the always linked through my name. I chose X, in tribute to Dembski’s CHI (and with a hint as to which is less sophisticated).

vjtorley,

I agree with markf that 1-(1-p)^n is a better measure of the probability of getting at least one event with probability p in n trials than n·p. In fact, it is exact. I am therefore pleased to see you agree with markf.

Your math equation for large numbers can be simplified by noting that

(1 + 1/t)^t

approaches e as t approaches infinity. Some idea of this behavior can be deduced from the following table:

t —— (1 + 1/t)^t
1 —— 2
2 —— 2.25
4 —— 2.44
10 —— 2.593
100 —— 2.7048
1000 —— 2.7169
10^6 —— 2.71828047
infinity —— 2.718281828+

One can even set bounds for this: e is between (1 + 1/t)^t and (1 + 1/t)^(t + 1).

By a little algebraic manipulation, we can obtain:

(1 - 1/t)^t ~ 1/e

and

(1 - 1/t)^(at) ~ e^(-a)

Thus if we define D as 10^150, your formula

CSI-lite = -log2(1 - (1-p)^(10^150))

can be rewritten and very closely approximated by

CSI-lite = -log2(1 - (1-p)^D)
= -log2(1 - (1-p)^((1/p)*(D*p)))
~ -log2(1 - e^(-D*p))

For D*p much smaller than 1 (but above zero), we have as a very close approximation (because the slope of e^x is 1 at x = 0):

e^(-D*p) ≈ 1 - D*p

and so we have, approximately,

CSI-lite = -log2(D*p),

which is of course the approximation you gave.

So for very small values of p, markf is making a distinction without a difference except down in the umpteenth decimal place.

It would be nice if your detractors could make more substantial objections.

vj

I would love to respond to this, but it will be a day or two before I can read and digest 13 pages of material, and I fear the topic will have died by then.

#5 Paul Giem

It would be nice if your detractors could make more substantial objections.

I do have time to respond to this jibe. When I wrote about the incorrect use of n·p as opposed to 1-(1-p)^n, it wasn’t directed at what vj wrote. It was a criticism of Dembski’s paper, where it was introduced without the assumption that p was small compared to 1/n and without any recognition that it was an approximation. I admitted at the time that it was not a major problem but was sloppy in what purported to be a mathematically rigorous paper. I have far more substantial objections to vj’s paper but I fear I will not find time to articulate them.

markf,

You said,

By all means find the time. It amazes me that you would lead with something that you term “not a major problem” when you “have far more substantial objections”.

Do not fear that the topic will have died by one or two days. At least some of us will be interested in those major objections.

Paul,

Perhaps a better way to proceed would be for the paper to be submitted to a journal where the subject matter would be relevant. Then the peer review process can provide the critique that markf indicates is required but which he does not have time to create.

This, to me, does not seem an unreasonable way to proceed. In fact, it seems the only way to proceed when such contentious issues are being talked about.

While peer review is not perfect it does put people who understand the subject matter in a position where they can make informed criticism.

The fact is the system works.

So how about it, vjtorley? Will you formalize your work and get it into a proper journal where it can create the impact that it may well deserve?

If not, why not? If you are worried, like KF seems to be, that you’d be censored from the start then I have the following suggestions.

A) Should your paper be rejected out of hand because it supports ID then the rejection letter will be valuable evidence that such papers are rejected out of hand for no technical reason, but solely because they support ID. This will be valuable evidence for the claim of censorship by the Darwinists. To my knowledge no such evidence currently exists.

B) You can submit to an ID friendly Journal.

VJT:

Let me pick up your story about your Math class:

________________

>> After marking the exams, the students in my class had a review lesson. Our teacher showed us two graphs. One was a graph of the exam scores, and the other was a graph of our assignment scores. The first graph had a nice little Bell curve – just the sort of thing you’d expect from a class of students. The second graph was quite different. Instead of a tail on either side, there was a rather large bump at the high end of the graph. The teacher pointed this out, and correctly inferred that someone had been helping the students with their assignments. He was absolutely right. That someone was me. Of course, a bump in a curve is not the same thing as a narrow blip, but it still possesses mathematically salient values (e.g. the beginning and end of the range where it peaks), which my teacher was able to identify. Here, we have a case where an inference to intelligent agency was made on the basis of an unexpected local bias in a probability distribution curve which one would ordinarily expect to look something like a Bell curve.

My teacher was perfectly right to suspect the involvement of an agent, but was his case a watertight one? >>
_________________

This of course highlights the significance of underlying dynamics at work on alternative explanations, and the degree of reliability of an inference; also, the question of responsibility to decide and act in the face of evidence supportive of a case, but with possible error.

On the “normal class” expectation, classically we should see a near-Gaussian Bell type distribution with a peak — in the old days, at grade C. Nowadays, I guess classes are “set” to give a B to the “average” student. [And yes, the level of expected grade and degree of peakedness are more or less consciously built into the design of a curriculum once its content, approaches, time to complete learning, intake criteria and underlying pool of candidates are set. We all know about easy courses and suicide ones.]

So, the teacher saw the UNEXPECTED, which was now informative, by virtue of salience of the difference from the expected.

Enter, stage ~~left~~ right: the per-aspect explanatory filter that is the implicit backdrop to ever so much of real world inferential reasoning and action, including in science.

We normally see causal effects tracing to one or more of chance, necessity and art. And, we routinely have to decide on signs as a basis for action.

That there is a peak is a natural function of the level of intake, the scope-sequence, and the typical intake. That there is a scatter is a typical result of the variability of intake and the various accidents of the course and the test taken. (Grades and tests or assignments are not a final judgement beyond dispute . . . )

But also, sometimes things don’t fit our expectations. And we make risky decisions that are often right.

How?

By doing a filter analysis. We expect that different aspects of an object or situation or process will have observable signs traceable to chance, necessity and/or art. In this case the appearance of a superposed second distribution suggests that a second population is present. Most likely, people who got help.

But, in principle, it could be that by chance, we could have a class where lo and behold, we got some smarties and some average folks; by the luck of the draw, after all we probably are dealing with a smallish distribution where fluctuations from pop norms are to be expected. (That is probably why there was no earlier intervention. If there had been a bump on the other end, there probably would have been a special needs ed intervention.)

But then a cross check happened, under circumstances that would reduce the likelihood of external help, and the brightness bump vanished.

So, we see that there were three hyps at work, and we have signs that pointed to one of them.

Why all of this talk about educators and how they rate their classes?

It helps us see that we use explanatory filters all the time, and we understand that even imperfect tests are often good enough for real world responsible practice.

It also helps us see that we may easily have several competing hypotheses floating around, and that the superior one stands out on explaining the material facts, coherence and explanatory power. So, one is most unwise to exclude a material possibility on an imposed a priori. In fact we saw that: the teacher was inclined to discount the utility of tests, only to find that a test was most revealing.

Now, did the results rise to the level of proofs beyond all doubt?

Nope, I doubt that even significance tests were explicitly used. (Or were they, VJT? In any case, these tests are as a rule a second tier of the same: looking for what does or does not fit with the expected on hypotheses 1, 2, . . .)

Similarly, we can see that the judging observer, AKA semiotic agent, is a factor we should not lightly dismiss.

Nor are all things that are not expressed in numerical quantities meaningless. Indeed, strictly, that criterion imposes an infinite regress, as MG probably does not recognise. As in: is that criterion itself a mathematically defined entity . . . ?

So, we need to rethink.

I also quietly suggest that we should appreciate the wisdom of the underlying principle that looks at config spaces and their scopes relative to accessible search resources, islands of function or otherwise definable target zones, and then asks: is this accessible to chance and/or necessity, or to intelligent action that picks out the island on knowledge, skill and creative imagination?

Then, we ask, which is the more reasonable explanation for say a 747: (a) a tornado in a junkyard, or (b) Boeing?

Which, for a blog post like this: (c) lucky noise on the net, or (d) an intelligent poster?

Which for a Hello world program: (e) a monkey at the keyboard over in Redmond, or (f) an intelligent programmer?

Which for the complex specificity of the living cell with integrated metabolism and a von Neumann type self replicator: (g) chance and necessity in the chemistry of a still warm little pond or the like, or (h) a designer with the capacity of at least that of a high tech molecular nanomachine lab?

I choose (b), (d), (f) and (h), on grounds of config space search and the possibility of a designer. (If a designer cannot be ruled out on solid reasons, one is possible. For me, the contingency of our cosmos and its fine tuning, which set up a zone fit for C-chemistry, cell-based intelligent life, make such a candidate designer very possible. And the CSI, specified on specific function, is to my mind as semiotic agent a strong and empirically reliable sign of such intelligence.)

Which do you choose, why?

GEM of TKI

PS: MF’s reserved criticism, announced as such, is little more than a blind appeal to dismissive authority. I would suggest that he should put up at least an abstract, given the effort VJT has put in.

PPS: And JR ducks out behind an ink cloud of appeal to the authority of a Journal review panel.

There is a substantial argument on the table, so let’s look at it on the merits.

For me, I think we need to start with basic concepts first and give them a reality check. Hence my cases above and the simple brute force X-metric that builds on cases like that.

I read the CSI metric above [and Dembski’s] as saying that 10^80 atoms, across the observed cosmos’ lifespan, would be doing something like 10^150 rolls of the cosmic dice [WmD’s 10^120 was based on Lloyd’s 10^90 items doing bit operations]. So, if something is much deeper in a config space than that, it is implausible to expect it to be observable on chance contingency plus blind forces that are more or less stochastically controlled by relative weights of clusters of microstates compatible with the general framework; but not on design. After all, we routinely see things deeper than that picked out by designers.
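As a sanity check, here is the conventional order-of-magnitude arithmetic behind the 10^150 figure; the per-atom rate and timespan below are the usual generous round-number assumptions, not measured values:

```python
# Order-of-magnitude arithmetic behind the ~10^150 "cosmic dice rolls":
# ~10^80 atoms, each changing state at most ~10^45 times per second
# (roughly the Planck-time rate), over ~10^25 seconds. All values are
# assumed round numbers, chosen for a generous upper bound.
atoms = 10**80
states_per_second = 10**45   # assumed maximum per-atom state-change rate
seconds = 10**25             # assumed generous cosmic timespan

total_events = atoms * states_per_second * seconds
print(total_events == 10**150)  # True
```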

We may debate approximations and model parameters as we will, of course.

And, BTW, on the Specification paper, I see that on p.1, the ABSTRACT begins:

That looks to me rather like the underlying frame for the relevant probabilities is that they are extremely low, so that if something is a low probability approximation, it is valid. In that context, a technical footnote is a technical footnote, corrective or not.

Onlookers:

Sorry to be O/T, but this is so morally significant and such a warning of what we are facing from those tanked up on rhetoric and rage from the fever swamps, that I have to highlight it.

This, after several instances of censorship and career busting, which finally led to the Gaskell case where U of K just had to pay US$ 125,000 in settlement of a suit over biased job selection.

Also, in a context where over the past few days, JR has been confronted with direct quotations and summaries of rhetorical practice that point out just such a priori censorship, up to and including an attempt to redefine science as being implicitly materialistic.

But then, she routinely indulges Saganian evidentialism, which demands “extraordinary” evidence for the claims she is inclined to reject; boiling down to: no reasonable evidence will suffice.

This is not good enough when issues of injustice are on the table.

Telling, sadly telling.

GEM of TKI

KF,

I specifically made the point that no evidence exists that journals are rejecting papers based solely on the fact that they claim to support ID. My point was very clear and specific and you have chosen to distort. No surprise there.

If such evidence exists, please provide it.

The point is that people are not submitting such papers, because such papers do not exist.

Not, as you claim, that such papers would not be published should they be submitted to such a journal because of Darwinian bias.

So, Gordon, please show me a rejection letter that explains that a paper cannot be published because it supports ID and for no other reason.

You can’t.

No, Gordo, what I’m asking for is evidence that journals routinely reject papers solely because they purport to support ID rather than because of any errors in the paper itself.

You may now go off at a tangent and talk about anything at all that is irrelevant to the point I just made. Much as how you are pretending to ignore Indium’s question, specifically:

You are ignoring it because you can’t explain it. If the issue is forced you’ll respond that you don’t need to explain it because Indium can’t explain the origin of life or the origin of a quantity that you invented. Or perhaps you’ll demand he explains the origin of consciousness and because he can’t you’ll claim victory anyway.

VJT:

Now, in the OP above you make the following remark, after various mods to Dembski’s Chi-metric for CSI; I insert Eqn numbers:

Following up, Dr Giem observed that:

Now, let us look a bit more closely at that approximation in light of the Hartley-Shannon view of information as a negative log metric:

C = – log2(D*p) = – log2(D) – log2(p)

That is, C = I – K, . . . Eqn 5

where I is the [Hartley] info metric for p, in bits.
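For concreteness, the identity C = – log2(D*p) = I – K can be checked numerically; the values of D and p below are illustrative assumptions only, not drawn from any particular case:

```python
import math

# Numerical check of C = -log2(D*p) = I - K, with illustrative values.
D = 2**500    # threshold factor, so K = log2(D) = 500 bits
p = 2**-750   # assumed probability of the observed pattern

I = -math.log2(p)       # Hartley information in bits (750.0)
K = math.log2(D)        # threshold in bits (500.0)
C = -math.log2(D * p)   # direct evaluation

print(C == I - K, C)    # True 250.0
```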

What we are doing above is specifying a threshold K, beyond which we are confident in inferring that the relevant info-set is a product of art, not chance and/or necessity.

This point is underscored by the further point that you, VJT, made on your first mod:

That can be captured easily enough, by using the technique I have used in my X-metric.

Define a metric Q for K-C compressibility [being functional in a specific, reducible-to-algorithm-or-data-structure way will fit in such] and set it to 0/1 (or even a sliding scale where 1 is a peak value), and multiply the above by it; i.e. if the constraint is not met the metric is forced to zero, and if we use the sliding scale version it forces a higher and higher complexity threshold as specificity falls:

C’ = [Q] * [-log2(D*p)] . . . Eqn 6

Now, let us revert to the case where D = 10^150, or more helpfully, D = 2^500:

Where C = I – K, . . . Eqn 5

and K = 500 bits

C’ = I – 500 bits . . . Eqn 7
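A minimal sketch of that gating idea in code (the function name and example values are mine, for illustration): Q zeroes out the metric when specificity fails, and otherwise C’ is simply information beyond the 500-bit threshold.

```python
# Sketch of C' = Q * (I - K) (Eqns 6-7 above): Q is a 1/0 specificity
# flag; only specified outcomes with I beyond K = 500 bits score
# positively. Names and values here are illustrative assumptions.
def c_prime(info_bits, q, threshold_bits=500):
    """Return Q * (I - K); a positive value passes the design threshold."""
    return q * (info_bits - threshold_bits)

print(c_prime(750, 1))   # 250: specified and beyond the threshold
print(c_prime(750, 0))   # 0: complex but not specified, forced to zero
print(c_prime(300, 1))   # -200: specified but not complex enough
```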

THAT IS, THE C’ METRIC IS BASED ON IDENTIFYING A THRESHOLD BEYOND WHICH THE SPECIFIC OUTCOME OF LOW PROBABILITY ON A CHANCE HYP, IS INFERRED AS MAXIMALLY UNLIKELY TO BE THE RESULT OF BLIND WATCHMAKER TYPE PROCESSES OF CHANCE AND/OR NECESSITY.

[ . . . ]

Comparing, the “simple” X-metric runs thusly:

________________

>> a] Let complex contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die that on similar tosses may come up in any one of six states: 1/ 2/ 3/ 4/ 5/ 6; joined to having at least 1,000 bits of information storage capacity. That is, diverse configurations of the component parts or of outcomes under similar initial circumstances must be credibly possible, and there must be at least 2^1,000 possible configurations.

b] Let specificity [S] be identified as 1/0 through specific functionality [FS] or by compressibility of description of the specific information [KS] or similar means that identify specific target zones in the wider configuration space. [Often we speak of this as “islands of function” in “a sea of non-function.” (That is, if moderate application of random noise altering the bit patterns will beyond a certain point destroy function [notoriously common for engineered systems that require working parts mutually co-adapted at an operating point, and also for software and even text in a recognisable language] or move it out of the target zone, then the topology is similar to that of islands in a sea.)]

c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, where from [a] we already see how 500 – 1,000 bits serves as the threshold for “probably” to “morally certainly” sufficiently complex to meet the FSCI/CSI threshold by which a random walk from an arbitrary initial configuration backed up by trial and error is utterly unlikely to ever encounter an island of function, on the gamut of our observed cosmos. (Such a random walk plus trial and error is a reasonable model for the various naturalistic mechanisms proposed for chemical and body plan level biological evolution. It is worth noting that “infinite monkeys” type tests have shown that a search space of the order of 10^50 or so is searchable so that functional texts can be identified and accepted on trial and error. But searching 2^1,000 = 1.07 * 10^301 possibilities for islands of function is a totally different matter.)

d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t: {D, t}], and take the element product C*S*B [as we would take the element ratio D/t to get speed].

e] Now we identify the simple FSCI metric,

X: C*S*B = X, . . . Eqn 8

the required FSCI/CSI-metric in [functionally] specified bits. Once we are beyond 500 – 1,000 functionally specific bits, we are comfortably beyond a threshold of sufficiently complex and specific functionality that the search resources of the observed universe would by far and away most likely be fruitlessly exhausted on the sea of non-functional states if a random walk based search (or generally equivalent process) were used to try to get to shores of function on islands of such complex, specific function. >>

________________
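As a minimal sketch (my own illustrative code, not part of the derivation above), the X-metric of Eqn 8 reduces to a product of two flags and a bit count:

```python
# Brute-force X-metric, X = C * S * B (Eqn 8 above): C and S are 1/0
# judgements of complex contingency and specificity; B is storage in bits.
def x_metric(contingent, specific, bits):
    c = 1 if contingent else 0   # complex contingency flag
    s = 1 if specific else 0     # specificity flag
    return c * s * bits

# 143 seven-bit ASCII characters of functional text: 143 * 7 = 1001 bits
x = x_metric(contingent=True, specific=True, bits=143 * 7)
print(x, x > 1000)   # 1001 True: past the 1,000-bit threshold
print(x_metric(True, False, 143 * 7))  # 0: no specificity, no inference
```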

So, the X-metric does much the same, but using a brute force approach on the underlying physics.

[ . . . ]

Reverting to the Dembski Metric from the Specification paper:

Chi=-log2[10^120.Phi_s(T).P(T|H)], . . . Eqn 9

. . . we see the same structure C = – log2[D*p], with only the value of K = log2(D) being differently arrived at. In this case, we have a compound factor: one term is a metric of the number of bit operations in the observed cosmos, and the other (expanding the threshold bit depth) is a metric of “the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” 10^120 is basically a multiplier taking into account “where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120.”
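For what it is worth, the Chi-metric of Eqn 9 is straightforward to evaluate once Phi_s(T) and P(T|H) are assigned; the example values below are assumptions for illustration only:

```python
import math

# Dembski's Chi = -log2[10^120 * Phi_s(T) * P(T|H)] (Eqn 9 above),
# evaluated with illustrative assumed values for Phi_s(T) and P(T|H).
def chi(phi_s, p_t_given_h):
    return -math.log2(1e120 * phi_s * p_t_given_h)

# e.g. Phi_s(T) = 10^20 patterns at least as simple, P(T|H) = 10^-200
print(round(chi(1e20, 1e-200), 1))  # 199.3 bits beyond the threshold
```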

So, whatever the technical details and critiques involved, the metrics all boil down to identifying a reasonable threshold beyond which, once we have specified complexity by KC-compressibility or functionality etc., we can be confident that the hot zone or the like is maximally unlikely to have been hit upon by chance.

Thus, the critical factor, once specification and complexity jointly apply, is a threshold. And, it turns out that the range 500 – 1,000 bits is a credible threshold.

At the upper end, we are talking about 125 bytes, or 143 ASCII characters, which specify a field of 1.07*10^301 possibilities. A cosmos such as we observe, whose 10^80 or so atoms could not on a generous estimate go through 10^150 states, could not seriously survey a fraction of those possibilities appreciably different from zero; so such a specified complex entity is comfortably beyond the reasonable blind watchmaker search capacity of the cosmos.

At the same time, 143 ASCII characters is about 18 – 20 words of typical English, and only a few lines of typical code or data structure description. So, any significant object that has in it text beyond that, or a nodes, arcs and interfaces description or algorithmic code beyond that, is quite reasonably to be seen as designed.
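The arithmetic in the last two paragraphs checks out directly (7-bit ASCII assumed):

```python
import math

# Verifying the figures above: 1,000 bits = 125 bytes; at 7 bits per
# ASCII character that is 143 characters; the config space is 2^1000.
print(1000 // 8)              # 125 bytes
print(math.ceil(1000 / 7))    # 143 ASCII characters
print(f"{2**1000:.2e}")       # 1.07e+301 possible configurations
```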

Based on abundant observation of such systems, such a threshold, though mathematically quite demanding, is conceptually and linguistically easy to surpass.

Our intuition is sound: complex functional systems that are specific (a moderate bit of noise or random alteration will fairly easily disrupt function) are best explained as designed.

While we may now proceed to further examine this “what does it mean” look at the models, or even critique the metrics and the analysis that gets to them, I think this backstop is in place. We need to address it if one is claiming that blind watchmaker processes can give rise to the specified complexity and functionality that are, for instance, manifested in life, in body plans, and in the rise of language based on the physical provisions in brain, vocal tract etc.

So, now, let us look at the basic idea and the more sophisticated renderings, metrics and issues.

GEM of TKI

PS: On the peer review jeer, we can respond that

VJT is doing “open workbench science”; with his working notebook in public, live, before the watching world.

MS JR:

RE:

A FORMAL COMPLAINT TO THE UD MANAGEMENT TEAM ON YOUR UNCIVIL BEHAVIOUR

You know or should know that I have repeatedly asked that my personal name not be used in this blog or other contexts;

as this tends to trigger spam waves at my mailbox.

That is a simple, easily met request; one that requires no great effort. One that many other commenters here at UD easily comply with.

We now know as well that with ever more personal info online, providing personal information online is an invitation to identity theft and all the harm that flows from that.

Since you know or should know this — it is fairly obvious that your handle is not your real name (not many Jemimas around these days); there is no excuse for your uncivil behaviour. Behaviour that shows plainly the sort of ruthless amoral factionalism that Plato warned of as a direct result of the rise of evolutionary materialism among the avant garde of a culture, 2,350 years ago.

Moreover, we are not in a neutral situation.

You know, or should know that design theory advocates are often subjected to workplace harassment or worse, so “outing” tactics such as you have indulged above through disrespectfully using my personal name, can do real harm, and exerts a chilling effect.

Thankfully, I am beyond the reach of the NCSE and similar thought police. But the tactic could easily hurt others, so I have to take it seriously when it is exerted against me.

You have plainly allowed obvious rage and hostility to lead you into utterly inexcusable behaviour. Yesterday’s childish insults were not enough to satisfy your anger, so today you have resorted to outing tactics.

For shame!

I trust you will understand the consequences of such behaviour.

Good day, madam

GEM of TKI

F/N: Onlookers, on the sort of harassment that ID friendly behaviour in the context of journals can lead to, I give you Sternberg and Gonzalez, also Bergman’s Slaughter of the dissidents. I have already alluded to the case of Gaskell, where U of K had to pay US$ 125,000 in settlement. And, long since, we have documented the censorship of science right up to the level of the NAS trying to redefine science materialistically, the ultimate form of censorship. The usual blame the victim excuses will be trotted out, but that simply compounds the seriousness of what is going on. Remember, the early Christians went to the lions, not because they were worshiping Christ, but because they were enemies of humanity, and traitors to Rome who refused to take the loyalty oath in the local temple, KAISER KURION. Oh, what flimsy excuses we can dress ourselves up with when we do the inexcusable!

KF:

Really, the persecution complex is getting old. Between the complaints of privacy violations and claims of slander against you, it gets very tiring.

You may also want to work harder on keeping your ID secret ON YOUR OWN WEBSITES.

For that matter, you may want to get a better spam filter for your e-mail.

Muramasa:

Sorry, but you are upholding a wrong.

You know or should know the difference between a relatively low traffic reference web site and a high traffic blog that has to deal with continual spamming attacks.

My email spam filter is in fact one of the best, but the problem is still real.

And, the persecution problem for design proponents is real; denial of it is what is called enabling behaviour. You will observe, too, that I point out that thankfully outing will not hurt me, but it will create an annoyance.

As to JR’s rising trend of incivility, that is a matter of open and plain record.

Remember, all I am asking for here, is to respect my privacy by using the handle that is there in public.

That is not much to ask for.

Good day

GEM of TKI

JemimaRacktouey,

vjtorley has chosen to put his post here. That is his prerogative. If he later publishes it, or submits it for publication, that will also be his prerogative. If you wish to comment, you may do so, as may markf or myself. If you want to say that he should publish it instead, that’s okay, although others may differ, most importantly vjtorley.

However, it still seems odd to me that markf should make a comment about an approximation that vjt uses as an estimate of probability, when according to markf there are much more substantial objections. Some of us would like to know what those are, and if they really are substantial or simply amount to disguised atheist dogma.

vjtorley,

I’ve finally caught up in my real life work enough to visit here again. I appreciate the effort you’ve put into this post and would like to discuss it further. Given how quickly new posts are added to UD, this will be pushed off the front page in short order and that makes it much harder to follow the discussion.

Do you have your own blog where we could discuss this topic further?

MG:

Pardon a note.

One can simply bookmark the page, or (messier) use browser library and tab features.

GEM of TKI

Follow up:

As we continue with some open working notebook science . . .

Let us observe (on Giem’s reduction C = – log2(D*p), so C = I – K, in bits) the potentially emerging integration of the different metrics that have been under discussion, once one is dealing with rather low values of p that allow the approximations; which, given the circumstances, is without loss of validity.

In short, I argue from 14 – 16 above, that we are really looking at metrics of information beyond a threshold, K; on the Hartley log-probability approach to information.

Cf on this, Harry Robertson (exploring the link from information theory to statistical thermodynamics):

Also, Jaynes has some crisp words on the integration of the semiotic agent observer into the explicit reckoning of science:

So, with some rabbit trails hopefully out of the way, discussion may now focus on reasonableness of threshold, and on assigning reasonable probability metrics to convert to information.

I note that the X-metric approach is also brought in by this approach; as the issue really is whether it is reasonable that non-intelligent approaches can find something in a hot zone so isolated in a space of possible configs.

Durston et al’s H-metric is related in concept, but different in execution (using families of observed proteins to estimate variability while preserving function sufficient to be a viable organism).

Mathgrrl, Jemima Racktouey and Markf:

Thank you for your interest in my post. I’ll be posting something in the near future on a practical test for CSI, and we can continue the discussion there. By the way, Jemima Racktouey, where do you suggest I publish my proposal?

Kairosfocus

Thank you for your very detailed comments on the X-metric, and how the different CSI metrics all tie in together. I look forward to your forthcoming contribution to the ongoing discussion on CSI.

vj

A bit of time to think about this. I will try to write something longer later in the week.

(1) You have succeeded in making CSI easier to estimate by replacing the rather strange Phi_s(T) with a very large number. Phi_s(T) was intended to capture the number of ways an outcome could be at least as surprising as the observed outcome. You have essentially given up on that and said let’s assume it is not as big as 10^30. I don’t see any justification for this – just a feeling that it is big enough.

(2) You have added the subjective concept of salience in addition to KC. I think the whole reason Dembski introduced KC was to avoid subjective concepts of what is surprising and try to substitute a more objective criterion (even though KC is not always computable). So I see this as a backward step.

(3) Most importantly, both your definition of CSI and Dembski’s lack a justification. You come up with a number and then announce that it is reason for rejecting chance and assuming design. Yes, you observe that only things that have no known plausible natural explanation have this number – but part of the definition is that P(T|H) should be incredibly low – so this is hardly surprising – it essentially means “no known plausible natural explanation”.

Why complicate the issue with all this mathematics? If all you want to claim is “if there is no currently known plausible natural explanation then it must be designed ” then why not just put it like that?

Throughout all this remember that there is are widely accepted methods of deciding if something is designed based no established principles of evidence e.g. Bayesian inference and comparative likelihood. The only issue being that they require an estimate of P(T|design) as well as P(T|chance). And this requires saying something about who, how, when the design took place. You have to ask what various CSI contortions add to this and how they are justified. I firmly believe that in the end all this maths is a (possibly unconscious) smokescreen to hide the fact that the ID community wants to avoid saying anything about the design side and simply conclude – can’t see how it happened naturally therefore it was designed.

Just noticed an important typo above. Should read:

Throughout all this remember that there are widely accepted methods of deciding if something is designed based on established principles of evidence.

Sorry – too much of a rush

MF of course leaves off one aspect: CSI is a highly recognisable, often observed fact [think, Hoyle’s Jumbo Jet], and in every case where we directly know the cause, it is intelligent. So it is not an inference from ignorance to infer from CSI to intelligent cause. The metrics simply provide quantification of that estimation process, especially by providing a threshold of relevant complexity.

markf (#26),

You are partly correct that half of the bottom line of ID is close to “if there is no currently known plausible natural explanation then it must be designed”. But there are some important mistakes in your formulation, and it is critically incomplete.

First, the “must be” should be corrected. ID is not a proof; it is a fallible inference. Thus the formulation should be amended first to be, “If there is no currently known plausible natural explanation, then the best current inference is that it is designed.”

Second, the term “plausible” needs defining. It refers not to “what a proponent of unguided evolution as a complete explanation can persuade him/herself to believe”. Nor does it mean “what an ID proponent can persuade him/herself to believe”. It means “what can be supported by well-designed experiments, or can be reasonably argued from neutral theory.”

Third, the term “natural” needs defining. There is a debate on whether humans are natural or not. There is not, or at least should not be, a debate over whether they (we) are intelligent. So the true differentiation is not between natural and something else (supernatural?), but between intelligent and unintelligent. The terminology should reflect this.

Finally, and most importantly, as kairosfocus (#28) has implied, the objects in question look designed, as recognized by such people as Richard Dawkins and George Gaylord Simpson. In some cases, including that of long strings of functional DNA, it has been demonstrated that they can be created by intelligent design. So it is not just the absence of a “plausible natural explanation” but also the presence of a plausible, and in some cases demonstrated, explanation involving intelligence.

Thus if you wish to be accurate, your formulation should read, “If there is no currently known demonstrated or theoretically persuasive explanation not involving intelligent design, and there is a demonstrated or theoretically persuasive explanation involving intelligent design, then the best current inference is that it is (at least partly) intelligently designed.”

#29 Paul

If there is no currently known demonstrated or theoretically persuasive explanation not involving intelligent design, and there is a demonstrated or theoretically persuasive explanation involving intelligent design, then the best current inference is that it is (at least partly) intelligently designed.

I have no problem with this. Now please supply the demonstrated or theoretically persuasive explanation involving intelligent design for life.

#30 I should be more explicit – can you supply a demonstrated or theoretically persuasive explanation involving design for life other than the small elements of DNA that humans designed? To say that life in general was designed by humans is not very persuasive!

To return to the subject of CSI. There is no attempt in any of the manifestations of CSI to capture the existence of a persuasive explanation involving design. CSI is just a mathematical way of expressing the view that there is no currently known plausible natural explanation.

markf (#30-31),

Good. We’re making progress. We agree that

Because of your qualification in #31, I am guessing that you concede that this paragraph legitimately applies to “the small elements of DNA that humans designed”.

“Small” is a relative term here. We are talking about a genome roughly 1/10th the size of a standard bacterium. The string of DNA is much larger than that required to produce a flagellum, so it would appear that we have demonstrated in principle that intelligent designers could create a flagellum. So would you concede that it makes more sense to believe that a flagellum was the product of intelligent design than of unintelligent processes?

For life itself, your objection might still hold weight for one believing that all truth can be supported by peer-reviewed literature. But because I am not completely bound by ID strictures, there is another argument that I can make that does not belong to ID proper.

If one believes that there is evidence for a human-like intelligence before the advent of humans (implicit in the DNA strings argument), then we cannot limit events in history to undirected forces, and it is possible that a similar intelligence may have acted in some portions of history. That means that we cannot rule out miracles by appeal to natural law.

And once we get rid of that stricture, there are reliable reports that non-living matter, specifically dead bodies, in some cases dead for over 24 hours, have come to life again, apparently by intelligent design. Thus we can show evidence for the idea that not just strings of DNA, but life itself, can be created by intelligent design. That would seem to be experiential, if not experimental, evidence for the intelligent origin of life.

To return to CSI, you are right that CSI in isolation is mostly a negative argument. But if you put it in the context of the positive arguments, it fulfills the requirements of the statement upon which we agreed.

Paul

No. I don’t concede this. First let’s put aside the issue of size. The length of the string is hardly the issue. It is presumably easy enough for a human to create an enormously long string that doesn’t do much. As far as a real flagellum is concerned I want a demonstration or a theoretically persuasive explanation of how a real flagellum came about through intelligence.

Put it another way – Behe demands that a nonintelligent explanation provide considerable detail about the probabilities and viability and the selection pressures of each stage. I don’t ask anything like as much detail of a proposed design explanation but I do want enough so I can evaluate whether it is theoretically persuasive. This has to go beyond “some intelligence influenced it”.

Here we get on different buses. I have never seen anything approaching reliable reports that non-living matter, specifically dead bodies, in some cases dead for over 24 hours, have come to life again, apparently by intelligent design. In these cases who was the designer and how did they do it?

I don’t see any positive arguments I am afraid.

(ROFL)

markf (#33),

Let me get this straight. You seriously want me to believe that the DNA that Venter’s group created “doesn’t do much.” Yea, other than supply the DNA needed to run the entire protein manufacturing apparatus for a cell, plus code for all the proteins, you’re right. You seriously want us to believe that a chain that codes for some 400 proteins plus ribosomes is harder to make than a chain that codes for some 60 proteins? It is just amazing what you will say (I’d say with a straight face, but then I can’t really see your face) in defense of your position. (Recovers self, still smiling)

On the dead being raised to life, I’m surprised that you don’t catch the allusion. If you disbelieve in it, I wouldn’t be surprised, because materialism has no room to accept the truth of such reports, and can be counted on to challenge their veracity at every turn. But you should be at least able to recognize what I was talking about. You’re not that ignorant, are you?

That is more like the debating form I was expecting. I don’t know if there is anything I can do to help you see the positive arguments, but, hey, I’ll give it a shot. Humans can create complex harmonious art. They can create motors, destination tags (as at airports), decoding devices, code storage, and code itself. In fact, in one specific case, they have created enough DNA, long enough, and specific enough, that it can function as if it were natural DNA. There are apparently only 19 mistakes in the entire artificial genome from what was planned. Are you trying to tell me that you can’t see that this is positive evidence that a genome can be, and possibly has been, designed?

#34 Paul

Paul

I am sorry, I didn’t explain this well. My point was simply that the size of the string is irrelevant – one example being that a scientist could produce a string of almost any length that didn’t do anything. I was not trying to imply that Venter’s DNA does nothing.

A bit disappointed that you are starting to make personal remarks.

I guess you are talking about the Bible. But I wasn’t sure. I am aware of other stories about people being raised from the dead. As I said we get on different buses here – about what counts as evidence.

It is evidence that it might be possible for people to design a full working genome one day. It is not evidence that the first genomes were designed, because I am pretty convinced there were no humans around when the first genomes came into existence. Until someone makes a hypothesis about what did the designing and how it implemented the design, there is nothing to provide evidence for. I come back to the detail Behe demands for an evolutionary explanation of the immune system – compare what he demands of evolution with what you provide for design.

markf:

“It is not evidence that the first genomes were designed because I am pretty convinced there were no humans around when the first genomes came into existence”

I am also pretty convinced that there weren’t any humans around when the first genome came into existence. That being said, it poses no problem for believing life was designed. This is how I look at it: what did it take for Venter to create an artificial genome? It required intelligence. He didn’t rely on the random forces of nature to do the work for him.

Now you obviously believe that no causal agency other than the undirected, unguided forces of physics and chemistry is needed to create a genome. What evidence is there that the undirected forces of nature can accomplish such a feat?

In short, there is zilch evidence for that proposition.

You ask for evidence concerning the who and the how of the design. That is not what ID theory is about. How could we know without being there?

ID concerns itself with identifying what causal agency can reasonably be credited with the origination of certain objects, for example living organisms. Can life be attributed to necessity (law), chance, a combination of those two causal agencies (colloquially known simply as nature), or design?

We know the properties and characteristics of life. Even Dawkins admits that life looks designed; of course, he says it is just illusory, since he believes nature can do it. But that is the thing: there is absolutely no evidence that nature can produce life from scratch. None. We observe living things and see that there is deep teleology involved in living organisms. Why does your stomach growl? To make you aware that you need food. Why is it making you aware that you need food? Because without food you die. There is a purpose to everything in living processes. How could nature produce systems that are purposeful if natural phenomena are themselves non-teleological? Nature doesn’t work with any goal in mind, but we know of a causal agency which does operate with purpose and foresight: intelligence. Nature couldn’t care less if you live or die, but a designing mind would.

#36 kuartus

Your comment makes a number of points that have been discussed many, many times on this forum – so please forgive me if I do not respond to all of them in detail.

I do believe there is evidence that non-intelligent causes can create a genome because many of the individual parts of the process have been observed and recreated in the laboratory. But even if you don’t accept this evidence, at least there are concrete hypotheses out there for which the evidence can be assessed.

You write of ID:

You ask for evidence concerning the who and the how of the design. That is not what ID theory is about. How could we know without being there?

Well, that is my point. Because ID is not about any specific design hypothesis, it is not possible to assess the evidence for or against it.

Here is a little thought experiment. Suppose we eventually find a bizarre set of circumstances which cause amino acids to form into functioning/replicating RNA without direction. Circumstances which could not possibly have existed before on earth. I then announce the non-ID theory that non-intelligent causes created the first genome. You then say – but these circumstances could not have applied on earth. I then respond “but that’s not what non-ID is about – how could we know without being there. It is now about how non-intelligent processes formed life. It is simply the theory that non-intelligent processes formed life.”

#37 Sorry – yet another typo

That should read:

I then respond “but that’s not what non-ID is about – how could we know without being there. It is not about how non-intelligent processes formed life. It is simply the theory that non-intelligent processes formed life.”

markf:

“many of the individual parts of the process have been observed and recreated in the laboratory”

I always find it amusing that origin-of-life researchers basically engage in chemical engineering when creating biomolecules and then claim that unguided nature can accomplish the same thing. It’s like me claiming that televisions can come about by purely natural phenomena like lightning and erosion because we are able to make them. After all, I have heard materialistic scientists say that nature is smarter than us. Seriously? A mindless, undirected, unguided process, with no hindsight, no foresight, no learning capability, and no care in the world can accomplish things like making life?

Something not even the smartest people in the world can do or even come close to replicating? Call me incredulous. But aside from that, simply creating biomolecules doesn’t solve the problem. Jonathan Wells conducted an experiment where he basically blew up a living cell to see if its basic components would come together to form a cell again. All the necessary molecules for life were there, in an ideal environment perfect for life. And what happened? The cell didn’t come back from the dead. Why? Because there is no innate tendency for non-living materials to come together and form life, just as there is no innate tendency for calcium carbonate to arrange itself in the form of precise geometric pyramids. Just as the form of the pyramids must be imposed from outside, so must the organization of the molecules of life. They need all the codes and machinery and tools in the cell to serve any living purpose. The question becomes: where did all this functional organization come from, if natural tendency cannot account for it? Seeing that function and organization are always imposed by an intelligence, it is reasonable to credit a mind for their origin.

“Because ID is not about any specific design hypothesis it is not possible to assess the evidence for it or against it.”

You are confusing two things: a design implementation hypothesis, and ID. You are right, you cannot assess evidence for or against a non-existent design implementation hypothesis. None is provided.

But ID doesn’t depend on a design implementation hypothesis. ID searches for any type of SIGNATURE of design, and then seeks to find such signatures in the objects in question. A signature of design would be a characteristic that is consistently possessed by objects where design is involved. If an object or event or circumstance in question does not possess any design signatures, then it is reasonable to discard ID as an explanation, since there is no good reason to entertain the design hypothesis.

ID is about extrapolating the knowledge we have of designed objects and applying that knowledge to objects whose origins are not directly observable.

I hope you forgive me if this comment is too long. I have a lot more to say, but I’ll wait for your response.

#39 kuartus

You are a lot more polite than many of your colleagues, and I appreciate your sincere comments, but if we were to continue this discussion I can predict:

(1) We will not cover any new ground

(2) Neither of us will change our mind about anything

Let’s just leave it, shall we?

markf,

Far be it from me to force a dialogue, so sure.

Though I have to say that in principle I would change my mind if convincing evidence were presented on a particular subject, I have to admit that we all interpret evidence according to our worldview, so my overall worldview could only be challenged by undeniable proof.

markf (35),

No, actually you expressed yourself quite well. The position you are currently defending is just in a tight spot, and you are doing your best to defend it. To review:

In #31 you said,

You knew that the DNA that Venter and company made was coming, so you wished to make their achievement seem as small as possible, and your choice of words was apt for that purpose. I then pointed out (#32) that

This was a big warning sign. Now I would have two examples where either experiment (mycoplasmal DNA) or well-founded theory (DNA for the flagellum) suggested that it was within the capabilities of at least some intelligent designers, and we can’t have multiple examples of design for biological structures that do not currently have a plausible (in the sense described in #29) naturalistic explanation. So you again tried to minimize Venter’s accomplishment. In answer to my question (#32),

You said (#33),

Presumably you had now dropped the part about “small” DNA, without explicitly conceding the point, and were now arguing that Venter’s DNA “didn’t do much”. These are great debating tactics: never admit you’re wrong; just move on to something else and contest every point you can. But this time you painted yourself into a corner. I feel for you. I’m glad I don’t have to defend that position.

The point is that Venter’s DNA was both very specific and functionally determined (one specific mutation was fatal, or perhaps more accurately, a non-starter). It’s hard for me to fathom why someone would maintain that DNA coding for 400+ proteins plus ribosomes and transcription promoters and repressors is harder to reproduce than the DNA coding for some 60 proteins, unless the person was fighting desperately to maintain a position. Seriously, you will grant that we have reasonably shown that a flagellum is possible for humans to reproduce, won’t you?

_

You go on to say,

Well, I didn’t think so. Again the history is important. My first comments were (#32),

You came back with #33:

It was as if you had no clue what I was talking about. I thought that you really should know, and hoped it was just a debating stance, and asked, hoping for a negative answer (#34),

Notice that I wasn’t implying that you were stupid, and even hoped that you weren’t ignorant. I wasn’t trying to insult you.

And sure enough, you did know, or strongly suspect, to what I had reference (#35):

There is probably not much to discuss here. Readers will have to decide whether I believe in non-existent evidence, or whether you have discounted valid evidence because it goes against a cherished theory, or worldview, or whatever. We’re not likely to solve this question on this thread. But at least the issues are clear.

_

We finally get to the crux of the debate. I said (#34),

You replied (#35),

If humans had been around when life on earth got started, then you could accept them as designing life. But first, there is (other than life itself) no evidence of humans, and second, they wouldn’t have to design life; they could just defecate and E. coli would take care of the rest.

Remember that one of the arguments used to blunt the force of the “non-living forces and objects have not been shown to produce life” argument is the “that was 3 billion years ago and we can’t find out exactly how it happened” argument. I agree with the argument as far as it goes. That’s why I don’t ask for the pathway, only a plausible one.

But by the same token, you can’t ask ID to produce the designer or the method of creation (I agree that design entails some method of creation), only a plausible one. And here is where the rub comes.

I can argue till I’m blue in the face, and clear out all the other obstacles, that, as you agreed in #30,

I can get you to agree that intelligent beings can create long specified strings of DNA, because we have experimental evidence for this. And I can get you to agree that there is no experimental evidence for such long specified strings of DNA to have come from nature without intelligent guidance, and the theory is incomplete at best; certainly not as complete as the evidence for intelligent design.

But none of that will matter. That is because, while I allow for the possibility of there being an intelligence at the time life started, and count the origin of life as further evidence of that possibility, you, I suspect, do not. (Correct me if I am wrong.) Therefore, since there cannot be such an intelligence, no amount of evidence can truly count as evidence for such an intelligence.

In fact, you may go further (clarify if you need to). You may actually argue that, since there was once no life on earth, and since there cannot have been an intelligence that started it, life being here is probative evidence for the spontaneous generation of life. So there must be a pathway; we just haven’t found it yet.

In that case, we will probably have clarified the issues as well as we can, and might as well move on.

markf:

One specific design hypothesis would be that when designing agencies act, they leave traces of their actions behind. And via our knowledge of cause-and-effect relationships, we can determine what those traces are.

And to refute any given design inference, all one has to do is demonstrate that nature, operating freely, can produce the effect in question.

That would be a good step for your position, but living organisms are more than that.

markf,

Not to take the place of comment #42, which I would appreciate you responding to first, but now that Joseph has requoted it in #43, I have to comment on your thought experiment in #37. I think the general point you are making is a valid one.

But it would take a pretty bizarre set of circumstances to cause amino acids to form into RNA, let alone functioning/replicating RNA. Proteins, maybe, but RNA?