Uncommon Descent Serving The Intelligent Design Community

genetic-id, an instance of design detection? (topic revisited)

(In an effort to help my IDEA comrades at Cornell, I revisit the issue of Genetic-ID. My previous post on the issue caused some confusion, so I'm reposting it with some clarifications. I post the topic as something I recommend their group discuss and explore.)

The corporation known as Genetic-ID (ID as in IDentification, not ID as in Intelligent Design) is able to distinguish a Genetically Modified Organism (GMO) from a “naturally occurring” organism. At www.genetic-id.com they claim:

Genetic ID can reliably detect ALL commercialized genetically modified organisms.

I claim that detecting man-made artifacts (like a GMO) is a valid instance of applying the Explanatory Filter.

The Explanatory Filter is used all the time (implicitly):

The key step in formulating Intelligent Design as a scientific theory is to delineate a method for detecting design. Such a method exists, and in fact, we use it implicitly all the time. The method takes the form of a three-stage Explanatory Filter.

I want to emphasize: the Explanatory Filter (EF) is used ALL the time. When ID critics say the EF has never been used to detect anything, they misrepresent what the EF is.

The Explanatory Filter faithfully represents our ordinary practice of sorting through things we alternately attribute to law, chance, or design. In particular, the filter describes

how copyright and patent offices identify theft of intellectual property
….
Entire industries would be dead in the water without the Explanatory Filter. Much is riding on it. Using the filter, our courts have sent people to the electric chair.

(bolding mine)

When we detect design in a physical artifact, we detect the Complex Specified Information (CSI) the artifact evidences. That means we see that a physical artifact conforms to an independent blueprint.

In Bill Dembski's book No Free Lunch (NFL), the concept of CSI is formalized. CSI is detected when the information from a physical artifact (physical information) conforms to an independent blueprint or conception (conceptual information). CSI is defined as:

The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.

It is important to note that CSI is defined by two pieces of information, not just one:

CSI is consistent with the basic idea behind information, which is the reduction of possibilities from a reference class of possibilities. But whereas the traditional understanding of information is unary, conceiving of information as a single reduction of possibilities, complex specified information is a binary form of information. Complex specified information, and specified information more generally, depends on a dual reduction of possibilities, namely a conceptual reduction (i.e., conceptual information) combined with a physical reduction (i.e., physical information).

Genetic-ID uses PCR (polymerase chain reaction) to detect whether an organism has physical characteristics (physical information) which match a known blueprint (conceptual information) for a GMO. This is a relatively simple case of design detection since the pattern matching method is exact and highly specific. Genetic-ID’s technique is a somewhat trivial example of design detection, but I put it on the table to help introduce the concept of the Explanatory Filter in detecting designs at the molecular level.
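
To make the pattern-matching step concrete: a screen of this kind reduces to exact substring search, with the lab holding a library of known transgene signature sequences and asking whether a sample's DNA contains any of them. A minimal sketch in Python, with made-up signature sequences (a real screen would use validated transgene elements supplied by the GMO's designers):

    # Toy sketch of exact-signature GMO screening. The signature
    # sequences below are invented for illustration; a real screen
    # would use validated transgene elements supplied by the GMO's
    # designers.

    GMO_SIGNATURES = {
        "hypothetical-resistance-cassette": "ATGGCTTACCGTTAGGCA",
        "hypothetical-promoter-fragment": "GGTCCATGCATTACGGAT",
    }

    def screen_sample(sample_dna):
        """Return the names of all known signatures found in the sample."""
        return [name for name, sig in GMO_SIGNATURES.items() if sig in sample_dna]

    sample = "TTTATGGCTTACCGTTAGGCATTT"  # contains the first toy signature
    print(screen_sample(sample) or "No known signatures found")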

But how about less specific pattern matches to detect GMOs? Do you think we could detect a GMO such as this:

Data stored in multiplying bacteria

The scientists took the words of the song It’s a Small World and translated it into a code based on the four “letters” of DNA. They then created artificial DNA strands recording different parts of the song. These DNA messages, each about 150 bases long, were inserted into bacteria such as E. coli and Deinococcus radiodurans.
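
The article does not specify the encoding the researchers used; as an illustration only, here is one obvious scheme that packs two bits of text into each DNA base:

    # Toy text-to-DNA encoder: two bits per base (A=00, C=01, G=10, T=11).
    # One possible scheme, not necessarily the one the researchers used.

    BASES = "ACGT"

    def text_to_dna(text):
        bits = "".join(format(b, "08b") for b in text.encode("ascii"))
        return "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))

    def dna_to_text(dna):
        bits = "".join(format(BASES.index(base), "02b") for base in dna)
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode("ascii")

    strand = text_to_dna("It's a small world")  # 18 characters -> 72 bases
    print(strand)
    print(dna_to_text(strand))                  # round-trips the original text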

Or how about this kind of GMO, a terminator/traitor which does not have a published specific architecture: Terminate the Terminator.

Terminator technology (sometimes called TPS-Technology Protection System or GURTs-Genetic Use Restriction Technologies) refers to plants that are genetically engineered to produce sterile seeds. If commercialized, the technology will prevent farmers from saving seed from their harvest for planting the following season. These “suicide seeds” will force farmers to return to the seed corporations every year and will make extinct the 12,000-year tradition of farmers saving, adapting and exchanging seed in order to advance biodiversity and increase food security.

Extending these ideas, can we in principle detect nano-molecular designs such as a nano-molecular computer? If we find a physical molecular artifact conforming to the blueprints of a computer, should we infer design?

With that question in mind, I point to the fact that biological systems are computers, and self-replicating computers on top of that! This fact was not lost upon Albert Voie, who tied the problem of the origin of life to the fact that the physical artifacts of biology conform to a known blueprint, namely, a self-replicating computer. I commented on Voie's landmark outline of the origin-of-life problem here.

Inasmuch as biology conforms to the blueprints of a computer, are we justified in inferring design? And finally, are not the claims of Darwinian evolution ultimately claims that blind watchmakers can create "Genetically Modified Organisms" (so to speak) from pre-existing organisms? What then do we make of Darwinian evolution's claims?

Comments

Dave, you're mischaracterizing hypermoderate's argument. A more accurate analogue would be as follows:

Given: Snukldorf is defined to include only things that sparrows can't make.
Therefore: if you see snukldorf, it wasn't made by a sparrow.

I submit that hypermoderate is correct, and that the tautological nature of the CSI argument has been pointed out by several people and never addressed by Dembski. You can do one of four things with this assertion:

1. Demand that I back it up with evidence.
2. Explain why it's incorrect.
3. Proclaim it wrong, but offer no explanation.
4. Censor it and ban me.

I'm hoping you'll take one of the first two options, but I suspect you'll go with #4.

You missed the fifth option. Pat you on the head and say "that's nice, sonny". -ds secondclass

great_ape,

Your concern about sampling bias is a valid one. I would answer it this way: In science, we sometimes have to choose between two hypotheses which fit the data equally well. If the two hypotheses have a chance element, it makes sense to prefer the one which is more probable. If we choose the more probable explanation, we're more likely to be correct, although there is no guarantee. We might be wrong. In any case, we keep our eyes open and are prepared to modify or abandon our chosen hypothesis as more data comes in.

In Dembski's framework, there are two possible explanations for why we are here. One of them, as you suggest, is that a statistical fluke occurred which created CSI via sheer luck, leading to us. Another is that we were designed. Dembski would presumably argue that the second is overwhelmingly more probable than the first. Though you cannot rule out the first absolutely, you're far more likely to be right by betting on the second.

It reminds me of something I've thought about in connection with the multiverse hypothesis. If there really exists an infinitude of universes with differing physical constants, laws, and starting conditions, then presumably there exists a universe somewhere much like ours, but where every coin ever tossed has come up heads. Scientists there are convinced that there is some deep explanation for this regularity, but have been unable to find it. From our perspective we can say "You just got (extremely) lucky (or unlucky)."

Inside that universe, the right thing to do is to look for a deterministic explanation of the coin-tossing phenomenon, because the alternative is so improbable. From the outside, we know that the improbable alternative happens to be the true one.

Okay, that's it. Find somewhere else to babble. -ds hypermoderate

great_ape wrote:
"I do not see any fundamental circularity in Dembski’s argument. It basically boils down to “if it’s ludicrously unlikely that it was produced by unintelligent materialistic causes, it can be comfortably inferred that it was produced by some kind of intelligent agency.”

Hi great_ape,

If that were Dembski's argument, there would be little to object to. There would also be nothing new, as that argument has been around since long before Dembski. To restate, it simply says "Everything is either at least partially designed, or it is undesigned. If one of these alternatives is extremely unlikely, the other is overwhelmingly likely." It's just an application of Aristotle's Law of the Excluded Middle.

What Dembski is trying to do is different. He's trying to introduce a concept, CSI, as an independent, reliable indicator of design. Find something with CSI, and you know it was designed. Salvador certainly interprets Dembski's argument this way, which is why he suggests that "some architectures are recognized by engineers as designed, and it’s only a matter of asking if a biological system conforms to our pre-conceived pattern and if the pattern can be shown to have 500 bits of information."

But the very definition of CSI requires that unintelligent causes be incapable of producing it, and so via the excluded middle something that has CSI is by definition designed. So to say that CSI is a reliable indicator of design is simply to say "Something that is designed can be reliably inferred to have been designed." Quite true, but also quite circular.

And we're left with exactly the same question we had before the concept of CSI was introduced, which is "Could natural selection (or other unintelligent causes) have produced the living structures we see around us today?"

Translated into Dembski's terms, we would say "Structures with CSI are designed, but it's an open question whether living structures have CSI, by Dembski's definition of the term."

You're about to get the boot for stupidity. According to you the following is circular reasoning:

Given: Sparrows can't make bicycles.
Therefore: If you see a bicycle, it wasn't made by a sparrow.

This isn't circular reasoning. It's a simple deduction. Stop wasting comments with this idiocy. Last warning. -ds

hypermoderate
hypermoderate, I do not see any fundamental circularity in Dembski's argument. It basically boils down to "if it's ludicrously unlikely that it was produced by unintelligent materialistic causes, it can be comfortably inferred that it was produced by some kind of intelligent agency." False positives will occur, but they should be ludicrously infrequent. The informative, non-circular essence of the argument is that a probabilistic framework for this kind of thing might be arranged so that we can make reasonable inferences on these matters similar to the kind of inferences we make concerning whether the sun will come up tomorrow, etc. Think what you may about the logistical details involved in making such a calculation, but I don't see it as inherently circular or tautological.

As a scientist, I would be mildly concerned with sampling bias coming into play, though. Assuming they can be calculated--however ludicrous the probabilities might turn out to be--**if** our existence as questioners was, in fact, contingent upon such occurrences happening via nonintelligent mechanisms, then the probability of our, as reasonably complex sentient beings, observing such complex structures is shifted to 1. So ultimately I think Dembski may have to make an even stronger case: not only is achieving a certain threshold of specified complexity highly unlikely given the overall system, but it is, in fact, not possible at all. Only then can the anthropic "sample bias" argument finally be put to rest. I would be interested to hear people's thoughts on this. great_ape
Patrick, I have no problem with the fact that a design inference can't occur until someone notices the rocks. The question is whether the rock pattern constitutes CSI before anyone notices it. secondclass
secondclass, It's readily admitted that ID can produce false negatives. Patrick
Salvador, there's a problem with the "conceptual information" requirement for CSI. Consider Dembski's example of the rocks on the ground that match a certain constellation. If the rock pattern is not specified until an agent notices it, then CSI is created by the act of noticing. Is this your position? secondclass

"Is this anything like natural selection’s survival of the survivors? You meant to show a tautology, not circular reasoning. You accomplished neither. -ds"

ds,

1. A tautology is a form of circular reasoning (try Googling "tautology circular reasoning").
2. Why do you think the Dembski quotes are non-circular?

Regards,
Hypermoderate

You call what you wrote "reasoning"? :roll: -ds hypermoderate

Salvador, Mung:

It is Dembski, not me, who defines specified complexity in terms of the probability of producing a structure via material mechanisms:

From Chapter 12 of The Design Revolution:
"Indeed, to attribute specified complexity to something is to say that the specification to which it conforms corresponds to an event that is vastly improbable with respect to all material mechanisms that might give rise to the event."

Another quote from Chapter 10:
"For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity."

And from Chapter 12 again, regarding the possibility of false positives:
"Even though [the absence of] specified complexity is not a reliable criterion for eliminating design, it [the presence of specified complexity] is a reliable criterion for detecting design."

Thus Dembski's own words illustrate the circularity of the argument.

To recap:
1. According to Dembski, specified complexity is only present if the event is "vastly improbable with respect to all material mechanisms that might give rise to the event."
2. Specified complexity is "a reliable criterion for detecting design."

The circularity is obvious: if it wasn't produced by material mechanisms, then it wasn't produced by material mechanisms. Therefore it was designed.

Is this anything like natural selection's survival of the survivors? You meant to show a tautology, not circular reasoning. You accomplished neither. -ds hypermoderate
Hypermoderate: "1. To quantify the CSI contained in a structure, you need to know how probable it is for that structure to come about by non-intelligent means."

I believe this is incorrect.

"2. To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert."

Which would mean that this also is incorrect.

"3. Natural selection is one of the mindless mechanisms available for producing biological complexity."

And this is both unintelligible and unproven. 1. How does one establish the claim that "natural selection" is mindless? 2. How does one establish the claim that natural selection is a mechanism? 3. How does one establish the claim that natural selection is capable of producing biological complexity? 4. Since natural selection is just one of the mindless mechanisms available for producing biological complexity, what are the others, and why doesn't one need to take those into account as well? Mung

Hypermoderate: To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert.

I appreciate that point, but that is not how scientific theories are postulated. If we applied that standard to every theory, there would be no theory, and certainly no evolutionary theory. There is no theory that even attempts to account for every possible cause. It is ordinary practice to simply identify the magnitude of the space of possibilities, and make a reasonable estimate based on empirical evidence as to the probability. Seeing 500 coins all heads and inferring design does not require an accounting of every possibility.
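
The arithmetic behind the 500-coin illustration is worth spelling out: the chance of 500 heads in a row from a fair coin is 2^-500, roughly 3 x 10^-151, which already falls below Dembski's universal probability bound of 10^-150. A quick check:

    from math import log10

    p = 0.5 ** 500        # probability of 500 heads in a row
    print(p)              # ~3.05e-151
    print(log10(p))       # ~-150.5, below Dembski's 10^-150 bound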

And finally, regarding Turing machines, they cannot arise out of self-organizing systems. They must arise where improbability is guaranteed, as uncertainty is an essential element of an information processing system (Shannon defined information as the reduction of uncertainty). Thus a highly probable Turing Machine is an oxymoron. The Designer chose an architecture which would resist the complaint that we don't know enough!

Salvador

scordova
jaredl, I agree that they must be known mechanisms. After all, without knowledge of the mechanism, you can't estimate the probability of producing the structure in question. But restricting consideration to known mechanisms does not eliminate the circularity. The tautology remains. Inserting the word "known" into my previous comment: "5. In other words, it is a tautology to say “Any structure containing 500 bits of CSI is designed”, because the very definition of CSI implies it. In other words, all you are saying is “If natural selection (or any other known mindless mechanism) couldn’t have produced something, then natural selection couldn’t have produced it.”" hypermoderate
"No. All we need are the relevant known non-telic mechanisms. To appeal to unknown non-telic mechanisms constitutes an argument from ignorance." --jaredl Fair enough. But you're still in a precarious position because what is to be done concerning *plausibly relevant* non-telic processes that are *known* to exist and impact the outcome in some fashion, but for which the exact parameters and associated dynamic interactions involved are either unknown and/or the entire (putatively) generative system (i.e. the universe) is too large/complex/nonlinear to assess whether these (ostensibly) nontelic processes could in fact yield a specified level of complexity? This, I believe, more accurately reflects our current situation and state of knowledge and this is why, specifically concerning the question of complexity, ID and darwinism are effectively at an impass in terms of achieving air-tight indisputable arguments for their respective positions. This will continue to be the case for the forseeable future in IMHO. great_ape
"you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert...." No. All we need are the relevant known non-telic mechanisms. To appeal to unknown non-telic mechanisms constitutes an argument from ignorance. jaredl

Salvador,

There's a fatal circularity in the idea of using 500 bits of CSI as a criterion of design:

1. To quantify the CSI contained in a structure, you need to know how probable it is for that structure to come about by non-intelligent means.

2. To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert.

3. Natural selection is one of the mindless mechanisms available for producing biological complexity.

4. To say that a biological structure has 500 bits of CSI, you therefore must already know that the probability of producing it by natural selection (or any other mindless mechanism) is extremely low.

5. In other words, it is a tautology to say "Any structure containing 500 bits of CSI is designed", because the very definition of CSI implies it. In other words, all you are saying is

"If natural selection (or any other mindless mechanism) couldn't have produced something, then natural selection couldn't have produced it."

hypermoderate

Michaels7 wrote:

A longer view question; does every new pattern recognition algorithm established take another brick from the wall of evolution? My ignorance in genetics I'm sure shows. My bunny comment was intended humor. But I think Salvador has hit on something here, especially as es58's point about intellectual property values relates to the debate. I'm curious how lawyers see this issue and businesses like GeneticID. Market forces created GeneticID. Lawyers I'd think will see similar opportunities.

Thank you Michaels7 and es58 and others. I did scant work in nano-molecular machines. The time will come when it might be helpful to be able to distinguish a molecular artifact from the work of blind purposeless forces.

I pose these questions to thoughtful individuals: "What kinds of molecular architectures would suggest one is dealing with a design of intelligent origins? If one's goal is to make a molecular machine that would signal design to a human observer, what characteristics would it possess? Does biology conform to these architectures? Would it be hard to distinguish a man-made nano-molecular machine from a naturally occurring one if we did not have a database of existing "naturally" occurring machines?"

I emphasize again, the case of detecting a Monsanto GMO via a direct sequence was meant to be a starting point, not an ending point, for the discussion. I chose the example because it illustrates important concepts that must be mastered before tackling the far greater issues at hand.

At ARN when I put up a similar thread, it showed how much the critics of ID misrepresented Dembski's concepts. Most did not even realize CSI dealt with two items of information (conceptual and physical), not just one (conceptual is what most think CSI deals with exclusively, and they are mistaken).

I hope if nothing else, the readers have a better idea that CSI is composed of two sets of information, not one! In answer to whether the flagellum is designed, here is the case for CSI:

1. the conceptual pattern (conceptual information) is the set of numerous lock-key and login-password systems in the flagellum's architecture and construction. There are bit values associated with lock-key and login-password systems.

2. the physical pattern (physical information) is the set of lock-and-key patterns evidenced by the flagellum.

The lock-and-key metaphor is an independent detachable pattern. Bill Dembski calls this metaphor "interface compatibility".

Computing the bits for this is more difficult than calculating for an explicit pattern for a GMO that is given by the designer, but it is not impossible. I refer the reader to my thread at ISCID where I give hints for calculating lock-key probabilities without knowing the exact pattern in advance! The example I give for dice, with some modification, is extensible to lock-and-key systems for which we do not have exact patterns in advance (unlike the Monsanto GMO). A generic sketch of this kind of calculation follows below. See:

Response to Elsberry and Shallit 2003
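
Salvador's ISCID calculation is not reproduced here, but the generic shape of a lock-and-key estimate is simple: if an interface requires k independent positions to match, each matching by chance with probability p, then a chance match has probability p^k, worth -k*log2(p) bits of specification. A hedged sketch with illustrative numbers (not Salvador's):

    from math import log2

    # Generic lock-and-key bound: k independent matching positions, each
    # matching by chance with probability p. Illustrative numbers only;
    # not Salvador's actual ISCID calculation.

    def chance_match_bits(k, p):
        return -k * log2(p)

    # e.g., a 30-position interface where each position matches one of
    # 20 possibilities (as with amino acids) by chance:
    print(chance_match_bits(30, 1 / 20))  # ~129.7 bits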

Salvador

scordova

Mung and Jon_e,

Welcome to my friends from ARN!

Everyone,

I'd like to thank everyone for their comments. The reason this thread was re-opened was that I asked Bill Dembski's permission, and he granted it. He felt detecting a grain of wheat as a GMO versus a wildtype grain was indeed design detection, thus the closure on my earlier thread was removed and this discussion was permitted to go forward. That is not to say there is necessarily a right or wrong answer to these issues, but these issues will likely surface again.

The Genetic-ID case is trivial. We have access to the designers (like Monsanto), who give us the detachable specifications which allow us to detect design in an unknown grain of wheat or corn, etc.

The reason I put the example out was to educate the readers in understanding design detection. There are several issues in detecting design, for starters:

1. what architectures would signal design?
2. does an artifact conform to an architecture?

When #1 AND #2 are decided, one detects CSI.

For the case of GMOs, #1 was fairly trivial, as the designer (Monsanto or other genetic engineering companies) gave us the blueprint or specification in advance of the detection. #2 easily followed.

But what if we do not have the designer giving us the independent, detachable specification? What architectures then would satisfy #1? We humans surprisingly have archetypal architectures in our consciousness which would signal design even if we do not have them explicitly written down like a Monsanto GMO:

1. patterns of yet-to-be heard music (indeed we know music when we hear it, even if the tune is novel!)

2. patterns of software or language (how did we know hieroglyphics and cuneiform were designed even before we had the Rosetta Stone?)

3. transcendent engineering designs such as a computer (we recognize a computer irrespective of whether it is made of vacuum tubes, silicon transistors, germanium transistors, magnetic relays, DNA, or other materials)

4. our subconscious conception of what would be designed (such as glowing fish)

Regarding the glowing fish, or green rabbit, indeed these are genetic innovations. We are very tempted to infer design in such cases! But cannot the same be said for every other major innovation in an organism from a supposed prior ancestor in the fossil record? I would argue yes. The case is made rigorous if the independent pattern can be assigned a number of bits (a glowing fish innovation requires a certain amount of information increase which may be measurable in bits).

The case for GMO detection was the case of #2 above, not #1.

OK, how about more difficult issues with design detection, where we have to answer the question posed by #1: what patterns in biology would signal design, absent the designer handing us the blueprint?

Here is my suggestion: we already have working examples in human engineering that conform to biological systems. Thus, in some cases #1 has been solved, in that some architectures are recognized by engineers as designed, and it's only a matter of asking if a biological system conforms to our pre-conceived pattern and if the pattern can be shown to have 500 bits of information. Here are some examples:

1. bat and whale sonar (bat echolocation is absolutely cool)
2. optical sensing and vision processing (I worked at Army Night Vision, and I can tell you the human eye is non-trivial)
3. computers
4. software
5. digital-to-analog and analog-to-digital conversion
6. spectral analyzers and advanced signal processors (the ear!)
7. error-correction
8. software search heuristics (immune system)
9. self-replicating automata
10. digital control circuits
11. feedback control circuits
12. adaptive neural networks
13. fail-safe systems
14. information security
15. lock-and-key, login-password metaphors (protein interaction)
16. coders/decoders
17. complex navigation (monarch butterflies)

etc.

Note the architectures above transcend the underlying materials which build the system. (Shakespeare transcends the chemistry of the ink and paper or screen pixels used to convey his writings.)

A sonar system is a sonar system whether it is made of man-made materials for a submarine or biological materials such as in whales or bats. (The founder of IDEA FUMA (Fork Union Military Academy) was a Naval Academy grad in Electrical Engineering. He recognizes the intricacies in sonar systems. They are non-trivial designs. Biological sonar is an example of a designed pattern recognizable to electrical engineers, just as biological computers are recognizable to computer engineers.)

The challenge is affixing a number of bits to each of these transcendent architectures. The self-replicating computer has a defined architecture, and my preliminary analysis says its bit count exceeds the Universal Probability Bound. Therefore, in much the same way that we have the Monsanto GMO blueprint, we have the computer (Turing Machine) blueprint and the self-replicating automata blueprint. We merely need to see if a biological system fits the blueprint to reasonably infer design, and the answer is yes. That was the point of Albert Voie's Peer-Reviewed Paper.
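
For readers wondering how the 500-bit threshold used elsewhere in this thread relates to the Universal Probability Bound: Dembski's bound of 10^-150 corresponds to roughly 500 bits, since log2(10^150) is about 498. A one-line check:

    from math import log2

    print(150 * log2(10))  # ~498.3, i.e., why 10^-150 is quoted as "500 bits"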

Salvador

PS
jon_e I fixed your blockquote comments.

scordova
"But admitting even that much would be a concession to an idea - simple as it is - that originated with a Prominent ID Person." --Jon I don't think labelling this as EF would entail a concession of anything. The idea of searching for evidence of *human* tampering has been around for quite some time. It's not unlike looking for boards, soda cans, and rope at the sight of a crop circle. Genetic-ID just does it with modern technology, and it happens to involve organisms and their DNA. This can not be seen in any way as the culmination, or in any way shape or form the product, of Dembski's work. I suspect that the folks at genetic-ID (and their lawyers) would agree with me here. great_ape
Jon, When quoting here don't use quote and /quote in square brackets. Instead use blockquote and /blockquote in angle brackets. Mung

If anything, you'll lose points when the rank and file try to pass off examples as ID-supportive that are merely child's play when compared to the more restrictive usage of "design inference," the unknown case, the one everyone discusses in the context of ID. Many will not understand--as has already been made evident here--that just because you include this in the design inference definition for formal theoretical and/or semantic completeness, you can't run around using such examples to support design inference in the more restrictive sense. Because of this danger you should invoke the ultimately arbitrary nature of naming to carefully and constructively delineate "design inference," when in the context of ID, more narrowly.

Salvador himself has already stated this is a "trivial case." The problem ID folks face is that anything with Wm. Dembski's name on it (viz the EF) is open to misrepresentation by the critics. The genetic-ID case is simple, and trivial, on one plane, but very instructive and revealing on another. If it is so trivial as to be obvious, then why do the critics prolong the discussion interminably arguing that it is not a valid demonstration of the EF? Either it is a trivial, yet valid, demo of the EF, or it is an invalid case for the EF. It is bleedingly obvious that, when concerning issues of patent protection, for example, what Genetic-ID does is entirely relevant to the EF. But admitting even that much would be a concession to an idea - simple as it is - that originated with a Prominent ID Person.

The same type of rigamarole is invoked when other simple, basic IDeas are discussed (most notoriously, Behe and the concept of IC). Therefore, ID proponents find themselves arguing endlessly with critics about whether mousetraps are really IC. You'd think the ICness of mousetraps would be blatantly obvious, and one should move on to the more interesting and difficult cases of IC in biology - yet the critics (generally establishment scientists) have spent a great deal of energy and time stubbornly refusing to give quarter to the concept in its most basic and trivial form.

Jon_Ensminger

hypermoderate,

Excellent points and illustration. As you indicated, the genetic-id approach can be encompassed within the larger usage of "design inference." And that has been at the heart of the confusion here. But to me it seems evident that it is only the more restricted, nontrivial usage of design inference that is meaningful and relevant to ID.

mung, you're certainly free to include such "known" cases if you wish to define "design inference" that broadly. But employing a working definition that is broad enough to include trivial displays of "pattern matching" will ultimately injure your cause. Don't expect anyone to give ID any additional credibility for extending definitions so widely that these trivial cases could be treated as evidence of "design inference" in action. If anything, you'll lose points when the rank and file try to pass off examples as ID-supportive that are merely child's play when compared to the more restrictive usage of "design inference," the unknown case, the one everyone discusses in the context of ID. Many will not understand--as has already been made evident here--that just because you include this in the design inference definition for formal theoretical and/or semantic completeness, you can't run around using such examples to support design inference in the more restrictive sense. Because of this danger you should invoke the ultimately arbitrary nature of naming to carefully and constructively delineate "design inference," when in the context of ID, more narrowly.

I agree. This is a counterproductive example. -ds great_ape
The *inference* is based upon what we *know* of the cause and effect structure of the world. This is why ID theory must include cases of known design and known relationships between artifacts and intelligence. If it didn't, then there would be no basis upon which to make an inference in the cases where the relationship is not clearly known. Mung
There is a typo in my post #40. In the last line, it should be this paper. If I messed up the link, try http://tinyurl.com/em28c Xavier

I have a few slightly picky points to make regarding the most recent posts.

1. Genetically-modified plants/crops are modified for a reason: to confer resistance to herbicides, to increase shelf life, or whatever. Therefore, it is not necessarily a simple case of comparing one bland rock to another; there might be other indicators that distinguish the plant from "nature." If a patch of canola plants survives a heavy dose of Roundup spraying, chances are good that the plant contains the Roundup-Ready gene. A PCR test to detect the presence of the RR gene would merely be confirming something already suspected.

2. GMO detection is moving beyond relatively simple PCR pattern-matching from a database of known sequences, to heuristic methods for detecting unknown GMO sequences (for example, this).

I'm not sure how (2) is related to detecting pre-existing (not human-made) design in nature. The unknown GMO detection relies on using known natural reference genomes, characterizing PCR hybridization on a number of natural reference variants, then in the actual test popping up a GMO red flag when hybridization patterns not matching anything produced by known natural variants are observed. The former case (1) relies on knowing ahead of time what human-made DNA sequences look like, and the latter case (2) is a subtractive method that relies on knowing ahead of time what non-human-made DNA sequences look like and flagging all others as possible human insertions. Such pre-knowledge is not available in inferring non-human design, and so it isn't comparable to what ID attempts to demonstrate. -ds Jon_Ensminger
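
A toy sketch of the subtractive idea described above: profile known natural variants, then flag any sample containing patterns never seen in the natural references. Everything here (the sequences, the k-mer profiling) is invented for illustration and is not Genetic-ID's actual method:

    # Toy "subtractive" screen: profile known natural variants by k-mer
    # content; flag samples containing k-mers absent from every natural
    # reference. All sequences invented for illustration.

    def kmers(seq, k=4):
        return {seq[i:i+k] for i in range(len(seq) - k + 1)}

    NATURAL_REFERENCES = ["ACGTACGTTGCA", "ACGTACGATGCA"]  # toy variants
    natural = set().union(*(kmers(ref) for ref in NATURAL_REFERENCES))

    def flag_possible_gmo(sample):
        """True if the sample contains any k-mer unseen in the references."""
        return bool(kmers(sample) - natural)

    print(flag_possible_gmo("ACGTACGTTGCA"))  # False: matches a natural variant
    print(flag_possible_gmo("ACGTCCCCTGCA"))  # True: novel k-mers, red flag
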
I am totally in agreement with DaveScot on the point that Genetic-ID do not (nor do they claim to) use the EF when comparing test samples to their database of known GMO material. Rather than pursue this red herring, has anyone else thought that this paper may have some relevance? Xavier

It seems that much of the confusion on this thread stems from different meanings of the phrase "design inference":

1. Some folks are using "design inference" to refer to any process that allows you to examine an object and conclude that it was (at least partially) designed.

2. The other folks use "design inference" to mean the process of examining an object's structure and concluding that such a structure could not come about except as the result of an intelligent teleological process. Therefore it was designed.

Meaning #1 is broader and includes meaning #2 as a subset.

To highlight the difference between these two meanings of the phrase "design inference", imagine that we're hiking in the desert and we come across an irregularly shaped boulder. Being extremely ID-conscious, we ask ourselves, "Was this boulder designed?" Everything about the boulder appears compatible with natural, mindless geological processes, so we conclude that it was not designed.

A little further up the trail, we come across a dented tubular container. Opening it up, we find an exquisitely detailed blueprint of a boulder, with an Acme Boulder Co. logo in the corner. Taking it back to our boulder, we find that it matches every indentation, crack, and protrusion with uncanny accuracy.

Calling the Acme Boulder Co. the next day, we learn that they have manufactured thousands of these boulders for installation in rock gardens from Vegas to Tokyo. The one we found in the desert is one that fell from a cargo plane (along with the blueprints) and was never recovered.

Now we believe that the boulder was designed. What made the difference? Not the shape of the boulder itself -- after all, we examined it upon first encountering it and concluded that it was not designed. The difference is that we now know that the boulder matches a design manufactured by the Acme Boulder Co.

Which of the two kinds of "design inference" have we performed? Clearly the first kind, but not the second. Before we knew about the blueprint, there was nothing about the structure of the boulder that suggested design.

We concluded that the boulder was designed, not based on the design itself, but on other information we acquired about the source of the design.

GeneticID is making the first kind of design inference. ID theory is attempting to make the second kind. The second kind is much harder than the first to demonstrate.

This brings up a valid point I overlooked. Absent the design specification (artificial DNA sequences) Genetic ID acquires from the GMO producers, they can't tell a genetically engineered tomato from a natural tomato. The artificial DNA is indistinguishable from the naturally occurring DNA. Depending on whose worldview you choose to speak from, one can say "It all looks equally designed" or "It all looks equally undesigned". In your analogy, hypermoderate, you chose the equally undesigned viewpoint. Any reason for that? -ds hypermoderate

Alas, it was Ogg's *deer.* I must have lost that particular brain cell last night during my sleep.

It seems there is no use trying to convince folks that this is an unremarkable thing as far as ID goes. One last time, though, because I'm stubborn: this is not a new industry with its foundations in the design inference paradigm. This is a fancy--it's not even that sophisticated--tamper detection scheme. Nothing conceptually new. We know the precise nature of the contamination to look for. If you try to offer this as a legitimate working example of your approach, you have my sympathies...

great_ape
"this adds weight to design inference" As DaveScot has already pointed out whatever ramifications this has for law etc it is a really bad example if you are then going to extend the process to look at nature. You can say this is a good use of Dembski's method but the problem is it can be calculated accurately that a certain gene in an organism had a very low probability of arising naturally, because we know exactly what and how the designer designs. Chris Hyland

mung: "It adds to the existing knowledge that certain effects can be attributed to intelligent causation."

Maybe so, but I don't think that the fact that such attributable effects *exist* was ever in serious doubt. The question--the *difficult* question--is how on earth to define rigorous criteria to diagnose those effects. Here you have a case where this central question doesn't even pertain. How about formulating the genetic-ID situation like this:

How does *man* know when man has modified an organism?
Man look for things man puts in organism.
Man finds these things -> man content he modified.
Man no find -> man content he not modify.

How about a parable? Ogg told big chief he was on a hunt, injured a dear with arrow, but alas he failed to recover it. Big chief goes into woods alone to find Ogg's dear. Loh! Chief find 3 dear dead instead!! One dear has an arrow in its heart; the other two have no marks. Chief ponders which is dear felled by Ogg? Chief recalls Ogg shoots dear with arrows. Wise chief chooses dear with arrow in heart, and brings back to tribe. Later that night the chief recounts the story to the tribe as they sit around the tribal fire. He boasts of his clever "inference from arrow technique." Tribe is underwhelmed. They beat chief senseless and install Ogg as new chief. Everyone eats dear and there is much rejoicing.

Now replace "injure" with "genetically modify." Substitute "arrow" with "defined dna sequences." Replace "dead" with, say, "orange". What you get is a disturbingly primitive group of geneticists, but I think my point holds nevertheless.

I hope it was really Ogg's deer, not his dear, he wounded. -ds

great_ape
Summation of issues/problems/insights by posters.... possible research areas?

1) Probability/stats of GMO vs nature - can data attributes be rated and appropriate tables of CSI be determined for informational boundaries? This ultimately comes down to math and vindicates Dembski's involvement in this debate, as well as mathematicians on both sides.

2) Identification of trivial vs non-trivial in RM/NS vs Design - where on the scale? One single point mutation within species vs gene splicing of proteins across multiple levels of taxa. Ambiguous mutations (with cost) vs targeted beneficials without cost (great_ape's sickle cell allele example).

3) Laws/patents/lawsuits - does this ultimately force the issue from a new perspective neither side predicted in the ID/Evo debate? I'm reminded of commentary on the OPFOR blog re: the Cold War. Each side geared up for a battle which did not materialize head on; instead unforeseen circumstances hit each nation from the side. I think this is the case now for ID/Evo advocates - business and law will drive the future debate, not academics (except as expert witness testimony). What will be the ramifications as technology moves forward in science and education, crime? Example: future branding technologies by GMO companies? Future gene hackers who remove branding? Classes offered at leading universities in design recognition of GMOs vs nature? Specialized genetic law degrees and case law studies of Evo vs Design? Business is saying, yes, we detect design and will use it. This makes it "non-trivial" imo, certainly as lawsuits will erupt in the future. Great_ape, Dave, I think Mung is correct that this adds weight to design inference. Plus, billions are already being leveraged on the new design paradigm. They do not care about teleological debates or materialist views. But it still pushes design to the forefront. They will ultimately want to protect their investments.

4) Establishing test procedures and QA of evolvability in the lab for case law of Evo vs Design. Can tests be developed to induce genes to evolve at rapid rates vs designed products? An example might be nylonase-eating bacteria in nature vs the lab. Business will ultimately want design laws to win, not evolution, because if evolution succeeds in the lab, it could cost them vast profits. Sorta the generic vs brand name cost?

5) Because business and market forces drive the new design paradigm - will actuaries find new positions in risk ventures of GMO vs old case law precedents and anticipatory design cost vs evolution?

A longer view question; does every new pattern recognition algorithm established take another brick from the wall of evolution? My ignorance in genetics I'm sure shows. My bunny comment was intended humor. But I think Salvador has hit on something here, especially as es58's point about intellectual property values relates to the debate. I'm curious how lawyers see this issue and businesses like GeneticID. Market forces created GeneticID. Lawyers I'd think will see similar opportunities. It "appears" that design detection methodology is a valid future specialty minor, if not maybe a specialized major for genetics. It appears a whole new level of forensics ID will develop. Am I too far off in some of these conjectures? It seems like fertile ground for ResearchID topics. Finally, can someone answer my transgenic question vs simpler GMOs? Are there not different levels of complex manipulation? Therefore, is a table of evolutionary vs design rates a valid metric for future litigation and property rights? Michaels7
I primarily view GeneticID as a good possibility for double-checking the accuracy of the methods of ID (see how many false negatives are generated, etc.). Thinking on this issue brought up a question: with the spread and cheapening of technology, what is to prevent an ID-hating fanatic from eventually planting an instance of CSI in an organism and then falsely claiming to have documented that this CSI came about by RM+NS, thus falsifying ID by means of identifying a false positive? After all, if ID is such a danger to science as some claim, then the ends would justify the means... not to mention in some circles being known as the "person who killed ID" would be quite a career booster. Patrick
Mung, the toughest issue, as I see it: no one has the foggiest idea how to calculate the probabilities of a flagellum evolving. To do so, one would have to condition on things like horizontal transfer, recombination, epistasis, effective population size, as well as the prior state of the system. DS is totally wrong that this isn't an example of a design inference. It is completely analogous to a case in which there are 50 decks of cards in the room, all of them are shuffled but one, and one is asked to pick out the unshuffled deck. The unshuffled deck fits a specified pattern that is unlikely to occur by chance. The sequence of DNA that is present in the GMO organism is also unlikely to occur in the organism by chance (i.e., horizontal transfer or spontaneous evolution). DS's distinction between a "search for code sequences already known" and identification of "specified complexity" is totally wrong. According to Dembski, methods used to identify plagiarism (based on the identification of near-identical texts) are a design inference. If that is true, then this is a design inference. bdelloid
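
bdelloid's deck analogy is easy to quantify: a deck has 52! (about 8 x 10^67) orderings, so the chance that a shuffled deck lands in the factory order is roughly 1.2 x 10^-68. A quick check:

    from math import factorial

    orderings = factorial(52)
    print(orderings)       # ~8.07e67 possible deck orders
    print(1 / orderings)   # ~1.24e-68: chance of hitting the factory order
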
It seems we have yet another objection. This objection is that yes, this is design detection, it may even be design detection in accordance with the EF, but it's trivial. "The Genetic-ID case is trivial b/c it doesn’t entail any of the toughest issues at the core of the design inference." What are these toughest issues? It seems to me that the toughest issue at the core of the design inference is cases where we cannot discover or trace the causal history back to a designing intelligence. How does identifying a case when we *CAN* discover and trace the causal history become trivial? Why is this not a confirmation of the theory? It adds to the existing knowledge that certain effects can be attributed to intelligent causation. This can only strengthen the design inference, so it is hardly trivial from an ID point of view. Mung
es58 - well put. You hit the nail squarely on the head! antg
This may seem somewhat tangential, but it ultimately gets back to the genetic-ID issue, and I think this may help clarify some key aspects of design inference I don't yet understand: what is the ID-based position on the design status of DNA sequences such as those associated with sickle cell anemia? As most of you are no doubt familiar, in certain ecological contexts, namely tropical, being heterozygous for the sickle cell allele confers resistance to malaria. Of course, the prevalence of heterozygotes in these populations necessitates that homozygous individuals will be born who have this painful disease. So the question is this: is the malaria resistance designed in the allele? If so, then does this indicate the designer was not capable or otherwise unwilling to employ a superior design that did not entail pain and suffering? Alternatively, if the malaria resistance allele/phenotype is not designed, it would seem a fortuitous byproduct of an otherwise unfortunate mutation. In *that* case, however, an important and functional phenotype in humans was the byproduct of a random mutation. (While the immediate biological change itself does not constitute irreducible complexity, arguably for the malaria resistance to express as a coherent phenotype an elaborate tapestry of IC molecular biology envelopes and supports it.) I would like to hear some of your thoughts on the matter.

For me it invokes the notion of "functional ambiguity." Functional ambiguity (related to pleiotropy) is a problem when teleology can't be presumed and when you don't have a pre-defined template from the designers, such as Genetic-ID has (from other humans) for its PCR assay. Genetic-ID's approach is:

1. If it's modified, it will have one or more of these pre-defined signatures.
2. The probability of the signature arising independent of modification is very small.
3. If sample X has one or more DNA signatures, then it is most likely the case that its existence is due to human modification. Modification is inferred.

Dembski's approach, which seems to me on another plane altogether, is more like: I'm looking at data... Is it a signature? What's the probability that *this* showed up on its own? If excessively/stupendously low, design should be inferred. While *superficially* similar, the second question is on a different plane entirely b/c it must first identify the signal/system, then come up with criteria for defining the "chance of *this* occurring on its own" given our entire state of knowledge of how the natural world works. That's a whole other level of difficulty, in my opinion. By having a pre-defined list, Genetic-ID variety inference gets teleology (via humans) for free, functional ambiguity is circumvented, and the only relevant probability to be calculated is the chance of finding those PCR primers (in appropriate orientations and spacings) by chance given the genome size and relative nucleotide frequencies of the organism in question (which is not as hard to calculate as it is to say three times rapidly, b/c the relevant parameters are easily assayed). The Genetic-ID case is trivial b/c it doesn't entail any of the toughest issues at the core of the design inference. As such, it seems from my perspective to be trivial to the point of irrelevance. great_ape
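
great_ape's "easily assayed" calculation can be sketched as follows: under an independent-bases model, the probability that a specific primer of length L matches at a given genome position is the product of the base frequencies along the primer, and the expected number of chance matches is roughly that probability times the genome size. A toy version with made-up numbers:

    # Toy estimate of chance matches of a primer in a genome, assuming
    # independent bases with given frequencies. The primer, frequencies,
    # and genome size are invented for illustration.

    BASE_FREQ = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}

    def per_site_probability(primer):
        p = 1.0
        for base in primer:
            p *= BASE_FREQ[base]
        return p

    primer = "ACGTACGTACGTACGTACGT"  # a made-up 20-mer
    genome_size = 2_500_000_000     # made-up genome size, for scale

    p = per_site_probability(primer)
    print(p)                # per-site match probability (~6e-13)
    print(p * genome_size)  # expected chance matches genome-wide (~1.5e-3)
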
We might someday see a proverbial apple that falls up instead of down and at that time we’ll have evidence that the law of gravity has exceptions. Until then, it’s pretty well written in granite that we won’t see any apples falling up.
Every time an apple falls it falls "up" as well as "down." It's just a matter of perspective. There is no violation of gravity in an apple falling "up." Were the sun to move closer to the earth we might observe apples falling "up" all the time. Can we get back to Genetic-ID, GMO and the design argument? Mung
#22, es58, excellent points, and it puts both sides to the test. You hit on a current/future problem! I'm sure we'll see the use of new nano-technology like IBM's soon in branding. But... Maybe we should expand the GM test to animals, specifically to transgenic green pigs: http://news.bbc.co.uk/2/hi/asia-pacific/4605202.stm In nature, are the transgenic pigs and their proteins comparable to frogs with green pigmentation? Or is it just specific jellyfish? Or, how many other species contain the same DNA for fluorescent green? Or, can GeneticID build a test specifically for this transfer? "Transgenic" - human intervention and purposeful design. Building upon your question re: patents. If I mass produce "green bunny rabbits" after someone else files a patent, can't I just say I found a green bunny in my garden one day and it must have evolved the green color from eating meee spinach? Wraaaskally wrabbits! If not, why not? What specific boundaries are being recognized here? Do transgenic green pigs provide a better example for utilizing the EF? Michaels7
I'm waiting for the day that Intelligent Design comes up with a PCR reaction to test whether or not the flagellum was designed. Inoculated Mind
As this was posted with a note to the IDEA Club at Cornell I thought you might be interested to note that Allen MacNeill has a reply up here Cornellian
If we had a time machine, and we could transport every evolutionist who existed prior to the discovery of the technology that allows us to create these GMOs to today's world, and locked them in a room with all the newly designed GMOs as data, and neglected to tell them they were designed, we could sit back and watch them generate tens of thousands of papers explaining how every one of them evolved. And they'd be fighting with each other, because each could see the colossal mistakes of the other, but not his own. But to the rest of the world they'd present a single face, and insist they're only debating the "details", and that there's certainly no controversy whatsoever that the organisms they were studying had evolved. And the "evidence" would be the tens of thousands of papers that they had produced. Does this sound familiar to anyone? es58
If the "designer" of a GMO component of arbitrary complexity tries to patent or copyright it, and I mass produce it, he'll sue me. And if I say in court, "you can't prove design" because it's impossible to prove design (ie: there is no validity to the EF), all the Dawkins types say so, would Dawkins be on my side and say: "he's right, it's only an 'appearance' of design" I'm sure he wouldn't. But, any evolutionist finding this *exact same pattern* 30 years ago (before we had the technical capabiliity to do this stuff) would be *forced* to conclude that it was the product of RM&NS. So the same pattern which today is "beyond a reasonable doubt" seen as the product of design, would, 30 years ago, have been unquestionably seen as the product of evolution. If there is no such thing as a valid EF, how can they *insist* it didn't evolve in court and find me at fault? Maybe I found a version that *did* evolve? For them to *prove* I didn't would seem to require them to admit that design can be detected es58

Also,

DS - you are pointing out one of the major problems that many have raised with the EF. The EF is dependent on the idea that the categories are mutually exclusive. As you have shown with the person with the artificial hip joint, the categories design and not-design are not mutually exclusive. This is a major problem with the EF that has not been fully addressed.

One only has to show one instance of non-human design. It is not a problem for the EF if it cannot positively identify every scrap of code on the DNA molecule as designed or not designed. The EF is not dependent on the categories being mutually exclusive. It is dependent on finding one case, and one case only, giving a positive. The possibility of false negatives is understood. The real problem area is in probabilistic resources. The EF must at some point presume that all probabilistic resources are known and accounted for. The objection has been made, both by myself and others, that one cannot describe and account for resources one is unaware of. In the case of a protein, for instance, we don't know how many alternative amino acid sequences will function as well, or well enough, to work in place of the sequence under scrutiny. Computer modeling of protein folding should go a long way towards characterizing that particular probabilistic resource problem. So far, such modeling capability has eluded us. At some point, however, one presumes that the search for unknown resources is exhaustive and thus the description is complete. This is always tentative in science and is no insurmountable impediment. We might someday see a proverbial apple that falls up instead of down, and at that time we'll have evidence that the law of gravity has exceptions. Until then, it's pretty well written in granite that we won't see any apples falling up. So while I feel that a reasonably certain positive from the EF is not here yet, it's certainly a live possibility until proven otherwise. -ds bdelloid
ds makes a valid point that one cannot jump from "contains" to "is." However, the point is hardly a substantive one. The substance is to be found elsewhere in Dave's comment. ds admits that GMO wheat "contains a designed artifact." But for some reason or other (or lack thereof), when we actually go about examining wheat to see if it is GMO wheat, we are not performing design detection. So could someone please clarify for me what the objection actually is? Is it that we are not doing design detection? Or is it that we are doing design detection, but we are not doing design detection in accordance with the EF? Is the argument that Dembski's EF is not a reliable way to detect design, or is the argument agnostic on that point, and rather that the EF is not the method of design detection being employed in the case of detecting GMOs? Can we at least try to identify the fundamental objections here? So who is arguing that there is no design detection of any sort whatsoever going on here, and who is arguing that there is, but it's not according to the EF? Mung

This is absolutely design detection as explained in The Design Inference. The DNA sequence at issue here is an example of specified complexity. Sal is 100% correct. Plus, one can reasonably calculate the probabilities at hand here.

One can calculate the exact probabilities using the same method that was employed by the IBM engineers who were lauded here for finding putative function in non-coding DNA.

Unlike the flagellum, this is an example of a Design Inference based on reasonable probability arguments. The probability calculation for the flagellum, as written in NFL, doesn't come anywhere close to being an example of design detection. For one, the NFL calculation doesn't show how the flagellum is specified (other than the bald assertion that it looks like an outboard motor), so that argument is equivalent to saying a shuffled deck can't happen by chance because of low probability. In the case pointed out by Sal, we have a SPECIFIED PATTERN of nucleotides, so it is a much better example for the design inference as discussed in Dembski's first book.

This is not design detection. It is a search for code sequences already known to be designed. -ds bdelloid
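For what it's worth, the probability alluded to above is easy to state for an exact nucleotide match. A back-of-envelope sketch (my own illustration, with an assumed insert length, not a figure from NFL or from Genetic-ID):

import math

n = 500                # ASSUMED length of the engineered insert, in bases
p_chance = 0.25 ** n   # uniform-chance probability of one exact n-base sequence
bits = 2 * n           # log2(4^n) = 2n bits of specified information
UPB = 1e-150           # Dembski's universal probability bound

print(f"P(exact {n}-base match by chance) = 4^-{n} ~ 10^{n * math.log10(0.25):.0f}")
print(f"{bits} bits; below the UPB: {p_chance < UPB}")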

Tiax wrote:

But they aren’t looking at the organisms and asking if they exhibit evidence of design. They’re asking if they’re on the list of GMOs. That’s as far as it goes. You could put a ‘natural’ organism on the GMO list, and they could tell you if what you give them is a match. Design never enters the picture.

To take the car analogy, they’re taking an object and asking, “Is this a car?” They aren’t finding the car and then asking if it’s designed.

Whenever one discriminates a lump of matter as a designed artifact from lumps of matter that are not designed, it is still design detection. The way one realizes a lump of matter might be designed is that it conforms to a pattern such as the conceptual pattern of "car" or "watch" or "GMO". I pointed out where you equivocated, and you simply repeated the equivocation.

The question is not whether cars, watches, and GMOs are designed; the question is whether a physical artifact is designed because it conforms to the conceptual pattern of a car, watch, GMO, etc.

Let me spell it out for you:

One infers that a lump of matter is a car because it conforms to the pattern of a car. A car is designed; therefore that lump of matter is considered designed.

One infers a grain of wheat is a GMO because it passes Genetic-ID's test. GMOs are designed; therefore that grain of wheat is designed (as far as its being a GMO versus a naturally occurring grain).

Here are quotes from one of Bill's books:

Taken in its most fundamental sense, the word design denotes a pattern or blueprint.

The Design Inference, page 8

Just about anything that happens is a highly improbable event, but when a highly improbable event is also specified (i.e., conforms to an independently given pattern), undirected natural causes lose their explanatory power.

Opening of The Design Inference

The word "event" I use interchangeably with artifact, but hopefully the reader who has Bill's books will see that "event" applies equally well to physical static artifacts such as mount rushmore.

The blueprints for computers and self-replicating automata existed long before we realized cells conformed to them so well. The blueprint is therefore considered detachable, independent, and not post-dictive. A cell conforms to a self-replicating computer; self-replicating computers are designed; therefore the cell is designed.

The same can be said of blueprints for GMOs, copyrighted material, or patented objects. (Note: the blueprints are independent in the sense that they are not post-dictive. And just for clarification, independence and detachability do not mean the blueprint could not have been used to guide the physical fabrication of an artifact (like a car, watch, or GMO); they mean that the blueprint is not post-dictively drawn to fit the physical artifact.)

At least one basic mistake you're making is calling GMOs "designed artifacts". GMO wheat, for instance, is not a designed artifact. It CONTAINS a designed artifact. By your logic a person with an artificial hip joint becomes a "designed artifact". Does that sound logical to you? -ds scordova
"as I understand it, the PCR technique is not really specific enough to indicate a pattern match at the nucleotide level across an entire target nucleotide sequence. It is the combination of the specificity of the primers (a relatively small subset of the actual target sequence) and the length of the PCR product that returns a positive match." Good point, although they don't go in to too much detail on their website. It is possible that if they get a correct length match with the PCR they then sequence it. Thinking about it, this could be said to be using the explanatory filter if we say that a particular sequence known to be a GMO, once found, has a very low probability of occuring randomly, and confroms to some kind of specification. This doesn't appear to be a good example if you are going to argue for detecting non-human design in organisms though, becuase in this case we are able to calculate the relevant probabilities with some confidence. Chris Hyland
This is simply not comparable with the problem of determining if a naturally occurring DNA sequence was designed by a non-human designer.
The DNA in "naturally" occurring organisms conforms to a vital component in a self-replicating computer. We do not say "there is no design detection in the cell because we already know in advance that self-replicating computers are designed"; actually, quite the opposite. scordova
Scordova- But they aren't looking at the organisms and asking if they exhibit evidence of design. They're asking if they're on the list of GMOs. That's as far as it goes. You could put a 'natural' organism on the GMO list, and they could tell you if what you give them is a match. Design never enters the picture. To take the car analogy, they're taking an object and asking, "Is this a car?" They aren't finding the car and then asking if it's designed. The same with the watch. They're picking up items in the forest and saying, "Are any of these watches?" They aren't saying that anything looks designed, or doesn't look designed. All they're doing is taking two things and comparing them. Is the sequence in the DNA the same as the sequence on our list of known GMOs? As I believe has been pointed out, this is the same technology as a paternity test, in which they do the exact same thing. They take the child's DNA and ask, "Does this match the DNA in the father?" Tiax

Tiax wrote:

they aren’t detecting design. What they’re doing is detecting known sequences which they already know to be designed. In other words, they never have to ask whether or not something is designed, because the list of what is and is not designed is a given.

There is a Fallacy of Equivocation in the above quotation which I will point out with a bit of hyperbole.

Let's assume that out in the forest you find an object made of metal, glass, and other substances. You conclude the physical object conforms to the architecture of a watch. In fact, given that it's ticking and doing all sorts of things, it is surely a watch. I doubt anyone would say, "we're not really detecting that this object is designed, since we know watches are designed already."

How about stumbling on a lump of metal and silicon and other things in the grass? It turns out this object conforms to the architecture of a cell phone (and modern cell phones, by the way, have computers to manage their many features, and motors for the vibrating ring). I doubt anyone would say, "we're not really detecting design since we already know cell phones and computers and motors are designed, therefore we won't say this cell phone is designed."

How about stumbling upon a computer or a motor in a cell. I hope no one will say, "we really don't see design since we already know motors and computers are designed."

The error in Tiax's assertion is a Fallacy of Equivocation. This fallacy occurs many times in these discussions.

The question is NOT whether the GMO conceptual architecture was designed; the question is whether the physical object (such as a piece of corn) has physical evidence indicating it is designed versus naturally occurring.

Tiax's assertion equivocated between conceptually designing a pattern (conceiving an architecture for a GMO) and physically designing the artifact (manufacturing the GMO).

There are three aspects of design in a GMO:

1. design of the sequence and blueprint of the GMO (conceptual design)
2. design of the physical artifact (physical design)
3. the coincidence of #1 and #2 (CSI)

#1 was being equivocated with #3. CSI deals with detecting #3. I have already pointed out that the definition of CSI entails two pieces of information, not just one.

Genetic-ID allows us to tell whether a physical object conforms to a conceptual pattern; thus we are able to tell whether a piece of corn is a GMO or a naturally occurring artifact. The question is not whether GMOs are designed (that is a given); the question is whether a particular PHYSICAL artifact is a GMO and therefore designed, much in the same way we would conclude an object is designed because it conforms to the architecture of a watch.

That's utter nonsense. Tiax's response contains no logical fallacies. What he said is exactly right and painfully obvious. -ds scordova
Here's another angle to consider: as I understand it, the PCR technique is not really specific enough to indicate a pattern match at the nucleotide level across an entire target nucleotide sequence. It is the combination of the specificity of the primers (a relatively small subset of the actual target sequence) and the length of the PCR product that returns a positive match. That is, you're talking about a relatively high probability threshold for concluding a true positive result in the case of GMO detection. And yet the critics are claiming this is simple detection of a known design. That's probably why they are so eager to label this a "red herring." Put another way, the probability that basic Genetic-ID-style PCR methodology will return a false positive is very low, but not *that* low. Yet it's obviously considered to be reliable. Jon_Ensminger
How to Really Detect Design While I'm picking on Dembski's blog, I figured I'd critique another entry there, this one written by Salvador Cordova. Sal is a congenial fellow, most of the time at least, so I'll spare him the snark and get right to the point. Sunbeams From Cucumbers

ds: "What Genetic ID does is no more design detection than it would be looking for a Circle C brand on a steer to detect whether it belongs to the Circle C ranch."

But does a trademark need to be CSI in order to be effective as a trademark? For example, for the Circle C brand:

1. What's the probability of the steer encountering a naturally occurring object as hot as a branding iron (for example) and living to 'tell' about it? -- Brush fire? Lightning?
2. Given #1, what's the probability of the object creating only a surface burn wound?
3. Given #'s 1 & 2, what's the probability of the wound being in the form of a precise circle?
4. Given #'s 1, 2, & 3, what's the probability that located within the circle would be a pattern corresponding to one (or a few) of the 26 characters of the Roman alphabet?
Etc.

Is the overall probability less than the UPB? How about if one sees two steers with the Circle C? How about a herd?

1. Steers are very likely to have odd-shaped scars. 2. The mark needs to be what everyone from small children up would intuitively call "unique". 3. Probably quite good - circles are common in nature. 4. A C inside a perfect circle makes it unique and unlikely to be a natural scar (people know this without being told). Are you done belaboring the obvious now? -ds j
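j's chain of conditional probabilities can be made concrete. Every input below is an invented guess for illustration only; the point is simply that independent conditions multiply, and multiple branded steers multiply again:

import math

# Invented conditional probabilities for the Circle C scenario (illustration only).
p_hot_and_survived = 1e-4   # natural branding-iron-hot encounter, steer lives
p_surface_burn     = 1e-1   # given that, wound is a clean surface burn
p_precise_circle   = 1e-3   # given that, scar is a precise circle
p_letter_inside    = 1e-4   # given that, a Roman letter sits inside the circle

p_one_steer = p_hot_and_survived * p_surface_burn * p_precise_circle * p_letter_inside
herd = 50
log10_herd = herd * math.log10(p_one_steer)  # log space avoids float underflow

print(f"one steer: {p_one_steer:.0e}; herd of {herd}: ~ 10^{log10_herd:.0f} (UPB is 10^-150)")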
"All Genetic ID does is search for >>> ALREADY KNOWN TO BE ARTIFICIAL Mung
This is a sample of what the Cornell IDEA club will be using to argue against Professor MacNeill? If so then I'm afraid I underestimated the thrashing MacNeill is going to deliver unto them. DaveScot

All Genetic ID does is search for >>>ALREADY KNOWN TO BE ARTIFICIAL<<< DNA sequences in DNA samples from foods suspected of being genetically engineered. The only problem being solved here is the technical difficulty of testing a DNA sample quickly and cheaply for dozens of known genetically engineered DNA sequences. This is simply not comparable with the problem of determining if a naturally occurring DNA sequence was designed by a non-human designer. What Genetic ID does is no more design detection than looking for a Circle C brand on a steer to detect whether it belongs to the Circle C ranch.

Unfortunately I've been overruled about closing this comment thread so let the ridicule from the peanut gallery begin. Just keep it civil and it won't be deleted.

DaveScot
Forgive me if I have some doubts about the ability to detect design. It seems to me that humans are so clever, they could easily design something that would fool very sophisticated design-detection. Take quantum theory. Clearly a mathematical model that has been designed by humans. We know it is not the "truth" because it doesn't properly account for gravity, so it is an artefact. Would a pattern generated by a simulation model based on quantum theory be able to pass the test? What I'm trying to say is that I believe certain "unsophisticated" designs might be identified as such, but certainly not all, perhaps only a vanishingly small subset of all designs that occur in nature. Raevmo
"Genetic-ID (ID as in IDentification, not ID as in Intelligent Design) is able to distinguish a Genetically Modified Organism (GMO) from a “naturally occurring” organism." Were this true, I might agree with you. However, they aren't really distinguishing between a GMO and a naturally occuring organism. They're distinguishing -known- GMOs from everything else, including unknown GMOs and naturally occuring organisms. Because of this, they aren't detecting design. What they're doing is detecting known sequences which they already know to be designed. In other words, they never have to ask whether or not something is designed, because the list of what is and is not designed is a given. Tiax
Thank you everyone for your comments so far. Bill said he expected vigorous discussion of this topic. I think what you all offer here will be informative to him regarding his work and how he can convey the ideas of the EF. The reason I offered up quotations from his writings such as "Such a method exists, and in fact, we use it implicitly all the time" is to show that the implicit application of the EF happens all the time. If so, are calculations done in all such cases? I personally think explicit calculations are a sufficient but not necessary part of making a design detection. For example, consider this scenario: let's say Genetic-ID is presented with 1000 blind samples, and they achieve a 100% true positive identification rate with a few false negatives (which is permissible for an instance of the EF). That would implicitly show the artifacts have sufficient complexity to allow a successful detection method within a certain degree of reliability. I would presume a mathematician could affix a minimum complexity score on the artifact based on the efficacy of the detection method. Thus one has an empirical means of qualifying an instance of the EF without that instance of the EF explicitly making a probability calculation. If the number of detections increased, the complexity score affixed to the artifacts in question would rise as our confidence in the design detection grew. This however is my take on the issue; Bill would be the best resource to comment on whether GMO detection as done by Genetic-ID is a valid instance of the EF. I believe it is, and it is my hope he is reading our comments and will weigh in. Salvador scordova
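One way Salvador's blind-trial calibration might be quantified is with the standard "rule of three" for zero observed false positives. The bit conversion below is my own illustration of "affixing a minimum complexity score", not a method from Bill's books, and both trial counts are assumed outcomes:

import math

n_blind_natural = 1000   # ASSUMED blind non-GMO samples screened
false_positives = 0      # ASSUMED trial outcome

# Rule of three: with 0 events in n trials, the event rate is below ~3/n at 95% confidence.
fp_rate_bound = 3.0 / n_blind_natural
min_bits = -math.log2(fp_rate_bound)   # specificity of the test, expressed in bits

print(f"false-positive rate < {fp_rate_bound:.3f} (95% CL)")
print(f"implied specificity >= {min_bits:.1f} bits; more blind trials raise the bound")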
Chris -- I agree completely, I'm not sure how this relates to Dembski's proposals. Perhaps later we could hypothetically apply Dembski's criterion to the problem and see what the numbers churn out. I think this would be a great future research application for ID. JosephCCampana
"Finding previously known artificial patterns amid natural patterns is indeed design detection." I agree, I just don't think it is analogous to Dembski's method because it requires knowledge of the designer. Even if we imagine that the processes involved in genetic engineering leave some kind of trace, so you don't need knowledge of all commerically available GMOs, this still holds. Chris Hyland
If I understand ID correctly, then a signature would be an example of an extremely low probability sequence. Although I don't know how one would calculate it, the low probability would be determined by the likelihood that there would be naturally selective processes present which would select that specific sequence. If, for example, you could show that certain proteins are created from DNA sequences which correspond to, as in one example above, a musical code, then perhaps there would be reason to suspect evolution of "accidentally" creating something which we might only have expected from a modern designer, since evolution would act on the protein's fitness only. It wouldn't be hard to demonstrate the existence of some such code, if it were true. However, we're only just now scratching the surface in being able to understand how natural selection can act on genes. Without positive evidence of another ancient actor besides evolution, the only answer to scordova's last question remains that ID might simply be detecting evolution. It's not a conclusion that is immutable, but changing it requires more than simply demonstrating that some things are complex and that we can *only now* make these structures directly ourselves. Of course the explanatory filter works in a situation where we already know of the existence of a designer. That's too easy. curtrozeboom
Chris -- It seems to me that comparing effects against one another is a form of design detection; comparative detection. Finding previously known artificial patterns amid natural patterns is indeed design detection. The method employed may not be what we typically think of, but it is pattern-finding, which is a basic premise of ID. Our ability to do this is an axiomatic postulate of ID research in general (e.g. comparing the effects of human intelligence to the effects of any possible intelligence). Once ID has a better grasp on recognizing which parts of the universe can be reliably detected or fruitfully researched as designed, (possibly through design constraints, retrodictive ceilings, or something of the like) I see this comparative approach being used as a "hypothetical imperative" more frequently since it doesn't require a protracted probabilistic calculation. JosephCCampana
Doesn't the Explanatory Filter work by calculating the odds of natural causation, and then concluding design if the odds are too small? It seems that this company works the other way round, by seeing if they can detect a signature that they know is a product of a genetic engineering process. Presumably they have a database of PCR primers that correspond to known genetically modified organisms. Chris Hyland
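For readers new to the thread, the decision procedure Chris is summarizing can be written in a few lines. This is a bare-bones rendering of the three-stage filter; the hard part, as the whole discussion shows, is supplying the probability and the specification, which this sketch simply takes as inputs:

UPB = 1e-150  # Dembski's universal probability bound

def explanatory_filter(explained_by_law: bool, p_chance: float, specified: bool) -> str:
    """Three-stage filter: attribute an event to law, chance, or design."""
    if explained_by_law:
        return "law"       # stage 1: a regularity/necessity accounts for it
    if p_chance >= UPB or not specified:
        return "chance"    # stage 2: not improbable enough, or no independent pattern
    return "design"        # stage 3: small probability plus specification

print(explanatory_filter(False, 1e-200, True))  # -> "design"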
Well, Salvador, I can hear the objections already. They go something like this: Genetic-ID is not ID detection, nor is it a use of the EF, because all Genetic-ID does is match a known pattern to a pattern in DNA, that's all. Some might even claim that the patterns are not even independent, therefore it's not ID. Or that the Genetic-ID pattern was known in advance, therefore it's not ID. In any event, the objections will no doubt incorporate the claim that Genetic-ID does not rule out either chance or necessity, nor the interworking of chance and necessity; all Genetic-ID does is look for a pattern match, therefore this is not the EF, nor is it ID. Of course, the critics will do everything they can to ignore why the pattern-matching is taking place, or what can be inferred from a match. Some will even go so far as to claim that the only thing that can be inferred from a match is that a match was found. Others, or maybe even the same people, will claim that the only why of the pattern-matching is the fact of the pattern-matching, or that the why is because that's what Genetic-ID does. Some will try to divert the argument using the red herring that Genetic-ID also matches natural patterns, and not just engineered patterns. Of course, what they will avoid is what you point out through the Dembski quote, and that is what is actually going on intuitively. Why match patterns at all? What would the purpose be of trying to find a match for a specific pattern if organisms regularly produce that pattern as a matter of course? What would the significance be of locating a pattern in the DNA if that pattern can easily get there "by chance"? How long are these GMO sequences? How long do they need to be in order for us to rule out chance and necessity as possible explanations for their appearance in a DNA sample? All good questions. Let's see if we get any answers. Mung
