Uncommon Descent Serving The Intelligent Design Community


I often see misunderstanding of what ID is about. It’s about inferring design by critical analysis of a pattern and the ways that pattern could have come to exist. I find a comparison with a lottery to be the easiest way to understand this.

Suppose there is a state lottery and each month for 12 consecutive months 10 million tickets are sold and one winning ticket is drawn at random. Obviously there must be 12 winners at the end of the year. While each winner beats odds of 10 million to 1, there's nothing unusual about that, as someone must beat the odds each time.

Now suppose that the 12 winners are all siblings in order from oldest to youngest.

This lottery result constitutes a pattern.

First of all we have complexity in the pattern. The odds of any particular ordered sequence of 12 winners are 1 in 10^84 (that's a 1 followed by 84 zeroes). Any single pattern drawn from trillions upon trillions of possible patterns is complex. But complex things like this happen all the time because the result must be one of those many sequences. A sequence of 10 coin flips, no matter the result, is not complex, as there are only 1024 possible results. This is roughly how we define complexity. Complex results happen all the time and in themselves are no indication of design.
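These two figures can be checked directly. A minimal sketch in Python (the variable names are mine, and it assumes the winners form an ordered sequence, as the post later clarifies):

```python
# Odds of one particular ordered sequence of 12 monthly winners,
# each drawn from a fresh pool of 10 million tickets: (10^7)^12.
tickets_per_month = 10 ** 7
months = 12
sequence_odds = tickets_per_month ** months

# Number of possible outcomes for 10 coin flips: 2^10.
coin_flip_outcomes = 2 ** 10

print(f"1 in {sequence_odds:.0e}")  # 1 in 1e+84
print(coin_flip_outcomes)           # 1024
```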

Next, the pattern has specification. The pattern conforms to an independently given specification. In this case siblings from the same family is the independently given specification.

Now we have identified the lottery result as a complex specified pattern (or complex specified information if you will). This is a reliable indicator of design. The more complex the result and the more definitive the pattern the more reliable the design inference.

No matter how convincingly we are told that the lottery was secure from cheating, no reasonable person will believe that no cheating was involved. So we can rest almost certainly assured that the result of the lottery was not random but was the result of design (cheating; a rigged drawing).

However, even though we know the result was rigged we have no clue how it was rigged (the mechanism) nor who did the rigging (the designer).

ID is the theory that certain patterns in nature exhibit specified complexity that can only reasonably be attributed to design. ID does not and cannot reveal how the design was accomplished nor what entity or entities did the designing. ID is nothing more or less than design inference based upon high improbability of independently given patterns arising by chance.

Now let’s quickly look at the flagellum. There’s no room for debate about complexity. It’s a precise arrangement of millions of protein molecules from a set of dozens of different proteins, each protein itself a complex pattern. There’s little room for debate that it conforms to an independently given pattern. It’s a propulsion device. Where there is room for debate is in what Bill Dembski calls “probabilistic resources”. These are the resources that “chance” (or unintelligent cause) has to draw upon in forming the pattern. This is why ID seems to be an attack on mutation & selection. Mutation & selection are the leading known probabilistic resource that could form the specified complexity of the flagellum.

Logically one can never prove a negative. ID proponents will never be able to prove that some unknown probabilistic resource wasn't the source of design in the flagellum. However, this is a problem with nearly every hypothesis in science, and it's why you often hear that all of science is tentative. Some bits are just more tentative than others. This is why most philosophers of science say a hypothesis has to be, at least in principle, falsifiable. If we can't prove something true but can at least in principle prove it false, then it's science. The ID hypothesis of the flagellum is falsifiable. In principle a neoDarwinian pathway for its evolution can be plotted on paper and confirmed in a laboratory.

The greater question in my mind regarding falsifiability is whether there’s any method in principle of falsifying a hypothetical neoDarwinian pathway for the flagellum. The only real contender for falsification is a design inference! So you see, if ID didn’t exist, neoDarwinists would have to invent it just so they have a method of falsification in principle for random mutation plus natural selection in creating things like the flagellum.

I was just wondering, has anyone come up with a good layman's explanation of Dembski's Design Inference? I have read "Intelligent Design" and to be honest the chapter on the Design Inference lost me. It seems that Dembski has come up with a way to determine if something is specified, but I have been unable to grasp his mathematical arguments. Can anyone help me? mathezar

"Actually it's 1 in 10^84 ........ 1/((10^7)^12). Looks like your math is worse than mine. At least I knew that the correct answer had to be a power of ten. I did it in my head and added 12 zeroes to 10 million when I should have raised it to the 12th power. I'm not sure what your excuse is but I'm dying to hear it. -ds"

Still off by 9 orders of magnitude, but getting closer. You overlook that there are many ways to draw the same 12 people from 10^7. In fact there are n!/(k!(n-k)!) [where n! = n·(n-1)·(n-2)·...·1] ways to draw k people from a group of size n. Maple 9.5 tells me that's roughly 2.1 × 10^75. The reason is as follows: you can order n people in n! ways. There are k! times (n-k)! ways to split those orderings into subgroups of sizes k and n-k, so you divide n! by that to get the result (aka the binomial coefficient).
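The binomial-coefficient figure above can be reproduced with Python's standard library rather than Maple (a sketch of the same calculation; `unordered_sets` is my own variable name):

```python
import math

# Ways to choose an unordered set of 12 winners from a pool of
# 10 million ticket holders: C(10^7, 12) = n! / (k! * (n - k)!).
unordered_sets = math.comb(10 ** 7, 12)

print(f"{unordered_sets:.2e}")  # roughly 2.1e+75, matching the Maple result
```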

You're right. I didn't explicitly say the winners were an ordered set. It's now explicitly an ordered set. Thanks for pointing out the ambiguity. -ds Raevmo

Actually, the odds "of any particular set of 12 people winning the lottery" are not "1 in 10,000,000,000,000,000,000", but rather 1 in 2.1 times 10^75. That's roughly 55 orders of magnitude off the mark. I know, I know, it's pedantic, but typical of certain probabilistic calculations in the context of ID.

Actually it's 1 in 10^84 ........ 1/((10^7)^12). Looks like your math is worse than mine. At least I knew that the correct answer had to be a power of ten. I did it in my head and added 12 zeroes to 10 million when I should have raised it to the 12th power. I'm not sure what your excuse is but I'm dying to hear it. -ds

Raevmo
Secondclass, there is a difference between a specification and specified complexity. "My understanding of the detachability requirement is that the specification should be recognizable independent of any facts behind the occurrence of the event..." Perhaps..."independent of the factors behind the occurrence..." Irving
Well Dave, it looks like you and I have very different definitions of specificity, detachment, etc. My guess is that we won't be able to find enough common ground for a discussion, so I'll let it go. Keep up the good work! secondclass

Dave, now I'm more confused than ever.

1) My understanding of the detachability requirement is that the specification should be recognizable independent of any facts behind the occurrence of the event, which is why I presented the sequence without mentioning the Caputo story.

You didn't present the specification. You presented the sequence.

2) My understanding is that specifications are descriptions or patterns, not motivating factors.

Specifications can be anything that sets the pattern in question apart.

3) I don't understand what chance hypotheses have to do with specifications.

Chance hypotheses are a step in inferring design.

Sorry to keep bugging you, but I need some help understanding this.

secondclass

Dave, it seems I'm completely confused regarding specificity. I see that the advantage of being first on the ballot is a motive for cheating, but I don't understand how it constitutes a specification. My understanding of an independently given specification would entail a description of or pattern in the sequence that is recognizable even if the source of the sequence is unknown. Am I way off track?

The source of DDDRDDD need not be known. In fact I don't know the source. As far as I know it could be a faulty random number generator or a 1 in 1 trillion odd happenstance. The point is that further investigation of chance hypotheses is warranted. -ds secondclass

"No matter how convincingly it can be told that the lottery was secure from cheating no reasonable person will be convinced that there was no cheating involved. So we can almost certainly rest assured that the result of the lottery was not random but was the result of design (cheating; rigged)."

I don't see this. All we can reliably infer from your lottery example is that the outcome is almost certainly not due to random chance. But cheating suggests intention, and a mechanism used in the service of that intention.

Perhaps the family works at the ticket factory and managed a not-so-clever swindle? Then our pattern would have been generated by cheating. But perhaps there was a printing error at the ticket factory and 12 identical tickets were all sent to the same store in sequence. If all the family members went to the store together or (more likely) one or two of them bought twelve tickets for the others, then we'd get the outcome you describe, but without cheating.

If you want to infer intentional design in the lottery case, you're going to have to look at more than outcomes and likelihoods: you'll have to dig further, finding out whether there were machine errors, and if not, then investigating the family members in question, and probably other lottery employees. We know the outcome is too improbable to result from simple chance, but until we figure out the mechanism generating the distribution we observe, that's all we can reliably say about the outcome.

"However, even though we know the result was rigged we have no clue how it was rigged (the mechanism) nor who did the rigging (the designer)."

Again, we don't know that the result was rigged: all we know is that it almost certainly wasn't due to random chance. Something systematic, non-random is at work here, but we don't know whether this is a machine error and a lucky shopping day for one family, or a not-so-clever inside job designed to cheat the lottery commission.

"ID is the theory that certain patterns in nature exhibit specified complexity that can only reasonably be attributed to design."

Then it isn't a very interesting theory. We study intelligent design in the historical and social sciences all the time, and if we stopped at simply saying "that education policy definitely didn't arise from chance" then we wouldn't be taken very seriously by anyone. It isn't until historians and archeologists, for instance, tell plausible, evidence-based, and independently verifiable stories about designers and mechanisms that they do interesting descriptive and explanatory work.

"ID does not and cannot reveal how the design was accomplished nor what entity or entities did the designing."

Then it's not a very interesting scientific approach. Science isn't just the business of speculating about patterns (although a lot of good science starts that way ... of course, plenty of bad science starts that way as well). Science is about understanding causal mechanisms. If you think important biological systems are designed, then start figuring out ways to identify and study designers and their mechanisms.

You completely overlooked the following:

Suppose there is a state lottery and each month for 12 consecutive months 10 million tickets are sold and one winning ticket is drawn at random.

Ten million tickets are sold for each monthly lottery, and one winning ticket is drawn from that number. The scenario you outline, 12 winning tickets sold at one time, was not possible in that circumstance. -ds

lorenk

Dave, Dembski attributes CSI to the above sequence. (See here) What independent meaning does he see that you and I don't see?

DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

"D" represents "Democrat first on the ballot" and R is "Republican first on the ballot". There is an advantage in being first on the ballot. That advantage is an independently given specification. A string of 41 Democrat advantages and 1 Republican advantage on a ballot is an approximate 1 in 1 trillion possibility and is therefore fairly complex. Thus we have complexity that conforms to an independently given pattern. The only reason I didn't see it when presented with it before is the definition and significance of "D" and "R" were withheld. What now follows in a design inference is analysis of chance hypotheses (also called probabilistic resources) that could compose this CSI without intelligent agency involved. -ds
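The odds quoted above can be sanity-checked under a fair-coin chance hypothesis (a sketch; the sequence is copied verbatim from the comment, and the variable names are mine):

```python
# The ballot sequence quoted above, copied verbatim.
seq = "DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD"

n = len(seq)
p_exact = 0.5 ** n      # this exact ordered sequence under a fair coin
p_one_r = n * p_exact   # any sequence with exactly one R, anywhere in it

print(f"1 in {1 / p_exact:,.0f}")  # about 1 in 4 trillion for 42 draws
print(f"1 in {1 / p_one_r:,.0f}")  # about 1 in 100 billion
```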

secondclass

Dave, is independent meaning always a requirement? For instance, does the following sequence have a pattern with independent meaning? DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

If incomplete descriptions are allowed, then is "string of a million bits" a valid specification? How about "10 ton rock"?

Independent meaning is always a requirement for a design inference. The sequence you give has no meaning to me so I can't begin to make a design inference. However, it may have meaning I don't know about. Maybe it's the password someone used for their Swiss bank account. String of a million bits has no independently given meaning and neither does 10 ton rock. -ds secondclass

Dave, I think if we'd both properly read each other's post, you would realize that a "Face on Mars" was not my concern. A discussion regarding the nature of (wholly valid) pro-design vs. (will you at least consider considering) natural "apparent design" is a worthwhile, a reasonable, position, no?

I added more commentary to yours. I think you misunderstand my position. I don't discount material mechanisms. However, science is about demonstration and no material mechanism has been demonstrated capable of producing novel cell types, tissue types, organs, organelles, or body plans. Extrapolation of a mechanism demonstrably able to produce small changes is a reasonable position as long as one doesn't lose sight of the fact that it is extrapolation, might not be the correct answer, and has not even in theory been shown to have a plausible way of creating the observed complexity in living systems in the time and space and environment available. Intelligent design is the only other option on the table that many people see as a viable alternative and people have recognized it as an option for millennia. Furthermore there's no reason in principle why the intelligent agency can't be material in origin. In fact as a materialist I believe that the source of intelligence, when and if it is discovered and characterized, will be comprehensible in material terms. It's difficult for me to grasp why any objective, rational person would exclude design as a live possibility for the origin of complexity in life on earth. I can only conclude that resistance is driven by irrational and/or subjective motives such as fear, ignorance, hubris, and financial and philosophical concerns. -ds

jjj

Dave, I'm sorry if I came across as argumentative. Irving said that specifications could be complex. You seem to be saying that specifications are simple. I'm in your camp, but I'm trying to be agreeable to everyone here.

My question as to whether a specification must contain a complete description of the event is a sincere one.

Specification is some distinguishing characteristic that gives meaning to a pattern. Independence means the meaning isn't a tautology - the pattern must mean something independent of itself. Simple and complex, complete and incomplete description, are irrelevant since meaning can be construed independent of those terms. -ds secondclass

Irving, you make a good point. I suppose a complex pattern could constitute a specification if it's independently given, although recognition of such a pattern may, in some cases, be difficult or even intractable. This raises a question of whether CSI requires that the specification be smaller than the event.

This brings up another question I have regarding specifications. Must a specification contain a complete description of an event? In other words, must a spec contain enough information to reconstruct an event exactly? I see problems attached to both a yes and a no answer.

You've totally lost the plot here. A family of blood relatives is the specification in the lottery example. A propulsion device is the specification for a flagellum. These are not intractable or difficult. I suspect you're just being argumentative. In any case your comments are not constructive. Take a break from this thread. -ds secondclass
Secondclass, my impression is that Dembski sees specification in BOTH simple and complex patterns. From No Free Lunch: "What is specified complexity? An object, event, or structure exhibits specified complexity if it is both complex (i.e., one of many live possibilities) and specified (i.e., displays an independently given pattern). A long sequence of randomly strewn scrabble pieces is complex without being specified. A short sequence spelling the word "the" is specified without being complex." So "the" is specified without being complex... And from ISCID.ORG "The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex), it must also conform to an independently given, detachable pattern." One cannot work with Specified Complexity, absent a full appreciation of Independence. Irving
I think that Chris and I are driving at the same point, namely that complexity assessments are often poorly justified. This problem is aggravated by the frequent conflation of Dembski's definition of complexity with the more common definition, which leads some to attribute self-evident complexity to events with potentially complicated causal histories. (Note: This is one of 4 main issues that I have with CSI, this one being the least problematic. I can state the other 3 if anyone's interested.) secondclass
Irving, perhaps we should step back and reframe the issue. It is my impression that Dembski sees specification in simple, rather than complex, patterns. An example of the former would be a highly compressible bit string. Do you share this view? secondclass

Dave, regarding your comment in #21, I fully agree that simple, repeating patterns are abundant in nature. Furthermore, I see them as specified, as Dembski associates high compressibility with specification. For a string of a million 1's, a chance hypothesis of uniform noise would lead to a verdict of CSI, which would be problematic in many cases.

My point is that our choice of chance hypotheses is crucial to the reliability of a CSI-based design inference. I make this point only because most of the CSI examples that I've seen do not explicitly state or justify their set of chance hypotheses, which reduces my assurance that their conclusions are correct.

It's usually chance hypotheses (plural not singular, as there's usually more than one way to skin a cat). Our knowledge of chance hypotheses isn't just crucial to a design inference, it's everything. The whole shootin' match is wound up in probabilistic resources, which I prefer to "chance hypotheses". CSI is a highly improbable pattern that conforms to an independently given specification. Probabilistic resources are the set of processes that might hypothetically produce the pattern. If those processes are well understood and the set is believed to be complete, and none are reasonably capable of producing the pattern, a design inference is warranted. In patterns exhibited by cellular nanoscale machinery there is really only one chance hypothesis with empirical support and that is RM+NS. Of course there are other possibilities in the set such as undiscovered laws of nature (the structuralists I believe they are called) which cause self-organization but I tend to dismiss claims of undiscovered laws of nature until they are discovered. That's just wool gathering. RM+NS is the only real contender for chance hypotheses IMO. The problem is that under observation RM+NS is quite limited and certainly can't be seen to cause evolution of new genera nor can it be seen creating new cell types, tissue types, organs, or body plans. This capability of RM+NS is pure extrapolation of the observed capabilities. Its creative power beyond what's been observed is purely an argument from ignorance, i.e. "if not RM+NS then what?". This is illustrative that the set of chance hypotheses for biological evolution is a set with really only one member. So if we admit the possibility that intelligent design is among the set of all things that can fully explain evolution, the question then becomes "if not RM+NS and not ID then what?".
It's still an argument from ignorance because there might be something other than RM+NS and ID, or it might turn out that with better understanding an RM+NS pathway is possible to such things as flagella and ribosomes and eukaryotic nuclei. What bugs me is why the default and only acceptable explanation for evolution is a chance hypothesis when the appearance of design is universally acknowledged even by NeoDarwinian dogmatists. It seems to me that ID should be the null hypothesis in any objective analysis, i.e. rather than saying the appearance of design is an illusion, why not say the appearance of chance is an illusion? -ds secondclass
Chris Hyland: The flagellum is controlled by, in the standard case, a signalling cascade caused by sensors on the front of the bacteria, which cause the flagellum to rotate in response to one or various external stimuli. My question is how do you include all of this in a calculation of the complexity of just the flagellum. Alrighty then. IMHO, IDists generally don't include it because the issue is difficult enough to contend with just given the flagellum. And anything "deeper" than that would be the origin of life itself. However it does go to show that the issue is much deeper than just the physical aspect of the flagellum, which is something IDists have been saying for years. Joseph
Yes I know, this is why I included my remarks in brackets. The protein structure also depends on the translational machinery, possibly molecular chaperones, and the proteins involved in flagellar assembly. These are then of course encoded on the genome themselves and so on. The flagellum is controlled by, in the standard case, a signalling cascade caused by sensors on the front of the bacteria, which cause the flagellum to rotate in response to one or various external stimuli. My question is how do you include all of this in a calculation of the complexity of just the flagellum. Chris Hyland
"First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally is the information required for using it (the flagellum)" Chris Hyland: The information for assembling the proteins (if you don't count the translation machinery), and assembling the complex (assuming no molecular chaperones are involved) is exactly the same thing, as it is specified by the protein structure. That is false. Chapter XIII "What Teaches Proteins Their Shapes?" of geneticist Giuseppe Sermonti's book Why is a Fly Not a Horse? paints a different story than what you just posted.
The problem is that in order to get proteins to function one needs not only an orderly and correct sequence of amino acids, but also a spatial configuration that folds them into the proper association with each other and enables them to interact with the molecules on which they are supposed to work.
He goes on to say:
The spatial information necessary for specifying the three-dimensional structure of a protein is vastly greater than the information contained in the sequence.
And it still doesn't say where the information came from. Chris Hyland: The information for 'using it' is presumably the mechanism by which it is activated, which is a signalling cascade that I have not seen mentioned in these discussions (depending on which strain you are referring to). Is the "information" referring to the configuration of the proteins themselves or the genetic information required to encode them? Something controls the bacterial flagellum. It can rotate clockwise at varying speeds, stop, and then rotate counter-clockwise. Something is telling it to do so. IOW the bacteria has to know how to use the structure once it is in place. That means a communication link must also exist. Joseph
Secondclass, yes, I can see how that could get confusing. As you read the quote carefully, you see that it is the "pattern" that is the specification, and not the event... "...that makes the pattern exhibited by (ψR) - but not (R) - a specification." I believe he's referring to the "likelihood" of something occurring by chance as "event-complexity." In other words, the series of events required to build the pattern is complex... "...difficulty of reproducing the corresponding event by chance" Even though that pattern may be easily described... such as Pi = C/2r. I don't have time at the moment to find the greater context within the paper you linked to place your quote in context... but I'll take great risk in assuming that Dembski's defining Specification in "building up" to a definition of Specified Complexity? Given that one must first define "specification" before one describes a complex "specification." Irving
"First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally is the information required for using it (the flagellum)" The information for assembling the proteins (if you don't count the translation machinery), and assembling the complex (assuming no molecular chaperones are involved) is exactly the same thing, as it is specified by the protein structure. The information for 'using it' is presumably the mechanism by which it is activated, which is a signalling cascade that I have not seen mentioned in these discussions (depending on which strain you are referring to). Is the "information" referring to the configuration of the proteins themselves or the genetic information required to encode them? Also, how do we decide that the flagellum, or any biological feature, conforms to an independently given specification? Is it by reference to human structures, is it the molecular function, or the biological function, and if so, what particular level of function is used? Do we have examples of biological structures that do not exhibit CSI to make a comparison? Chris Hyland
Irving, I didn't convey my point very well, so we're probably talking past each other. My perception of CSI as a simple specification coupled with a complex event is taken from Dembski's articles, like this one: Thus, what makes the pattern exhibited by (ψR) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It's this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) - but not (R) - a specification. secondclass
Secondclass, I would suggest your perception is flawed. Not unusual; I find that Specified Complexity is the most routinely misunderstood concept within ID. If it were the "event," it would be known as CSE - Complex Specified Event! You say: "Dembski seems to characterize CSI specifications as simple, short descriptions." From Explaining Specified Complexity, by Dembski: "A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction-set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified." Irving

Joseph, your point is well taken, but I don't see how CSI requires meaningful content, unless you equate meaning with specification. My understanding is that specifications are characterized by short descriptions, so it would seem that a string of a million 1's can easily constitute CSI (depending, as I mentioned, on our choice of chance hypotheses).

In information theory a pattern is only as complex as the simplest way of expressing it. "One million ones" is the simplest way of representing a string composed of 1 million ones. It doesn't take anywhere near a million characters to represent it. This is basic information theory. There's no particular independently given specification for it and nature produces simple repeating patterns like this in abundance, so the realm of chance hypotheses would usually be very large. -ds secondclass
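The point that "one million ones" is a short description can be illustrated with an ordinary compressor as a rough stand-in for descriptive complexity (a sketch; zlib ratios only approximate this notion, and the variable names are mine):

```python
import os
import zlib

ones = b"1" * 1_000_000        # "one million ones": highly patterned
noise = os.urandom(1_000_000)  # random bytes: no pattern to exploit

packed_ones = zlib.compress(ones, 9)
packed_noise = zlib.compress(noise, 9)

print(len(packed_ones))   # a few thousand bytes at most: simple description
print(len(packed_noise))  # about a million bytes: no shorter description
```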
Secondclass, Shannon's "information" is useless here. It is useless because it doesn't care about content. Meaning is useless to Shannon. Here (ID) information is all about content and meaning is very relevant. First there is the information required to build proteins (for example). Then there is the information required to assemble the proteins in a precise sequence (as in the bacterial flagellum). And finally is the information required for using it (the flagellum). Joseph
Irving, my perception of CSI may be flawed. According to my understanding, it's the event, not the specification, that should exhibit high complexity. Dembski seems to characterize CSI specifications as simple, short descriptions. Joseph, it depends on our definition of information. Shannon's information measure is a function of the signal source. If we know nothing about the source and assume white noise, then all signals are information-rich, including those that exhibit a simple pattern. secondclass
secondclass: The question is vital to the reliability of CSI-based design inference. Under a uniform-noise chance hypothesis, cyclical radio signals exhibit CSI. Only if you don't know what CSI is. CSI, at the minimum, is 500 bits of information. What information do your uniform-noise, cyclical radio signals contain? Joseph
Secondclass. It's not just a pattern, but the complexity of the independent specification of that pattern. A constant signal does not infer design. Repetition of a low-CSI pattern does not infer design. Certainly recognizing a specification and determining its independence is not a trivial matter (in some cases). A pulsar is cyclical. Its repetition is a pattern, and might be considered an independently specified binary pattern... i.e. 10101010101. However, that specification is of low complexity. But if the signal conformed to an "independently specified" pattern, say... 3.1415926535 out to say 500 decimal places, then it is specified, complex, and independent... as there is no conceivable reason why natural, un-guided physics would build a pattern equal to a base 10 expression of Pi. It's conformance to external (independent) requirements, not local ones. Consider the rovers currently on Mars. Complex machines, built on Earth for a Martian environment. There's no reason why RM/NS operating in a Terran environment would build a complex machine suited for a Martian environment. The Martian environment is "independent." Now there may be similarities between the two environments, and those features of the rovers would be excluded. And, certainly, a random mutation on Earth might just, by happenstance, result in some feature better suited for Mars than Earth. BUT such a feature is mere mimicry, since there is no Terran selection pressure to build upon it. The question then is, what are the odds that highly complex, inter-related, Mars-specific features might be hit upon by mere chance? The likelihood decreases as the complexity increases. Irving

The basic notion of an inference to intelligence is not the issue. There is some question regarding the most plausible default assumption, however.

As shown by the general acceptance of SETI, this notion of an "intelligent signal" is not deeply contested. If, for example, a protracted sequence of prime numbers were broadcast to us via radio signal, most scientists would accept that it was plausibly of intelligent origin.

Though certainly no one would look down on those scientists (myself included) who would consider an unknown natural explanation. It is hard to fathom, but it might be possible. Peculiar radio signals have been received here that, while initially suspicious, later proved to be the result of a naturally occurring astronomical phenomenon. A long sequence of primes is awfully tough to imagine as a natural occurrence, though.
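To make the prime-signal idea concrete, here is a hedged sketch of the check being described. The helper names are hypothetical, and a real SETI pipeline would be vastly more involved, but the core test is just whether a series of pulse counts matches the consecutive primes:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test -- fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def looks_like_prime_broadcast(pulses: list[int]) -> bool:
    """True if the pulse counts are exactly the first consecutive primes."""
    if not pulses:
        return False
    primes = [n for n in range(2, max(pulses) + 2) if is_prime(n)]
    return pulses == primes[:len(pulses)]

print(looks_like_prime_broadcast([2, 3, 5, 7, 11, 13]))  # True
print(looks_like_prime_broadcast([2, 3, 4, 5, 6, 7]))    # False
```

The longer the matching run, the lower the probability of a chance match, which is exactly why a protracted prime sequence is so hard to attribute to an unguided source.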

And certainly if we, for example, received even a low-def broadcast of some strange green aliens introducing themselves in English, along with a quick description of themselves and their best understanding of humans based on the radio broadcasts they've received, I think just about everyone would accept the "intelligent design hypothesis" regarding that signal. The only debate would likely be whether it was a hoax (human intelligence) or real (some distant alien intelligence).

But for the moment, let us assume some prime numbers from space. We might reasonably assume the origin of the signal suggested an environment sustaining life not unlike ours. Further, let us assume that subsequent, detailed study of the origin star system suggested a young star and a largely dusty solar system (i.e., no planets). Let's even go so far as to imagine that we send a probe there and find no planets and no evidence of an intelligent species. Even worse, we find no evidence that some intelligence had visited that system prior to our probe (no sign of an exhaust trail, for example).

Then we scientists on Earth are faced with this: a sequence of prime numbers sent out to the cosmos, which we received, with no discernible evidence that an intelligence was the cause. Even then, the default assumption is that there was an intelligence, but we have not yet figured out how to detect its existence. But certainly, an entirely valid approach would be to seek a natural explanation for the origin of the prime numbers. In fact, it is the only obvious way forward. We have no way to understand the numbers from space except by natural, observable means. Even if it is hopeless and wrong, it's simply the only thing we could do.

Further, assume some scientist did find a plausible explanation for a natural origin of primes as radio signals, though not necessarily for the particular signal we received (though almost, and importantly, plausibly in a way we don't quite understand yet). Then the default assumption might be in debate. And if we found, over time, that prime-number emission is characteristic of a number of astronomical sources -- even if we didn't understand how that initial one worked in particular -- the default assumption might even shift to that of a natural mechanism.

Of course, the video transmission would blow the methodological-naturalism gasket of every scientist right from the start. The idea of a natural process emitting an NTSC broadcast with coherent English-speaking apparent aliens discussing their society in the context of observed human radio broadcasts... yeah, that's some alien intelligence. It might, somehow, still be possible that it was a natural occurrence, but I'd likely have won the lottery millions and millions of times before that occurred.

We can tell the difference between Mount Rushmore and the "Face on Mars," and if we had found a laptop in the fossil record -- even in the 19th century, before we'd made any laptops -- we'd all be blown away.

The difference is simply that the "interdependent, complex nanomachinery of the cell" is (a) not a revelation, and (b) rather unlike the known products of existing "top-down" intelligent engineering. That is, to the vast majority of professional biologists, it looks more like the "Face on Mars" than it does Mount Rushmore. I know you disagree with that, but your disagreement is largely over possible mechanism rather than pure incredulity.

I don't expect you to just believe me, and I completely understand the concern over believing that a purely RM/NS method settles everything. All I can say is that it is a work in progress, and that I think a new "layer of science" is forming. It doesn't replace RM/NS -- it builds on it -- and it provides the abstraction necessary to intuitively understand the process. At the moment, understanding complex evolution in the framework of specific genetic events is likely akin to understanding chemistry in the framework of particle physics. This is clearly already true of certain aspects of evolutionary theory (advanced hierarchical problem solving and endosymbiosis -- generally, co-option of function, the indirect mechanisms of evolution).

Obviously there will need to be more detailed experimental linkage and grounding of this "novel" discipline, but I think recourse to declaring the mechanism unexplainable is a tad premature.

An aside to those agnostic engineers who happen to read this: I suggest you read up on John Doyle's work (an engineer with expertise in control theory at Caltech). He basically came at this problem with a similar mindset, but developed a very intriguing and purely natural theory: that complex, "organically grown," human-engineered systems show many basic properties of biological systems because they are solving the same fundamental problem -- the accumulation of control systems (largely of a feedback variety) to accommodate common environmental noise. His basic argument is that a 747 (and a cell) is a fairly simple system, as long as you don't mind it crashing every few seconds. The not-crashing bit requires control systems. He describes a 747 as a massive, complex, computational control system that just, almost irrelevantly, happens to fly. He also has a theory regarding the "conservation of fragility," which is related to the oscillations in any simple feedback control system: basically, you can pick what you are robust to, but you cannot be robust to everything. The mindless patching (both in biology and, to a significant degree, in engineering beyond our intuitive level -- I've written some wacky computer programs in my day) is the same process, and results in the same issue: growing complexity.

I honestly believe that the engineers reading this forum will find more intellectual significance in Doyle's work than they do in Intelligent Design. It's fascinating work, and I'd love to see more engineering experts expand on it.

Justin

Comparing the nanomachinery in living cells to the Face on Mars is a false analogy, which I addressed just recently. It is like comparing the pyramids of Egypt, and everything in them, to a rock that resembles a stone axe. Your argument falls apart from there, as it depends on that logical fallacy. -ds

Further:
- A string of primes is a false analogy.
- A broadcast of little green men saying hello is a false analogy.
- The complexity of the cell IS a revelation, and it just keeps getting revealed as more complex every single day.
- It rather IS like the products of human engineering: a ribosome and its digital control program in DNA, basic machinery shared by every living cell, is a robotic assembler amazingly similar in form and function to human-designed robotic assemblers.
- A biologist knows nothing of human-engineered factory automation, so he can't see the congruence with cellular automata.
- There's nothing unexplainable in principle about intelligent design, any more than about other fundamental phenomena like what's beyond the visible universe or what caused the visible universe; some questions may not be answerable, and that invalidates neither the question nor the answers that lead up to it.
- Paley's watchmaker analogy is still a good one, but given what we know about cells today, a better analogy is the space shuttle and all its support infrastructure at Cape Canaveral instead of a watch.
- Tell Doyle to envision a 747 that can make copies of itself and uses sunflower seeds both as fuel to fly and as the raw material to copy itself; then I'll read what he thinks about it.

jjj