Uncommon Descent Serving The Intelligent Design Community

Lizzie Joins the ID Camp Without Even Knowing It!


Lizzie,

You continue to astonish.

In the first sentence of your reply to my prior post you wrote: “I know that it is possible for intelligent life-forms to send radio signals, because we do; my priors for a radio signal to have an intelligent origin are therefore substantially above zero.”

As I demonstrated earlier, the issue is not whether nature or intelligent agents can cause radio signals. We know that both can. The issue is whether we have any warrant to distinguish this particular signal from a natural signal.

Then you write: “I know of no non-intelligent process that might generate prime numbers (presumably expressed as binary code), and so my priors on that are low.”

Upon a moment’s reflection I am certain you will agree that this is not, strictly speaking, correct. It is easy to imagine such a process. Imagine (as you suggested) a simple binary code that assigns two “dots” to the number “two” and three “dots” to the number “three” and five “dots” to the number “five” and so on, and also assigns a “dash” to delimit each number (a cumbersome code to be sure, but a conceivable one). In this code the series “dot dot dash dot dot dot dash” denotes the first two prime numbers between 1 and 100. Surely you will agree that it is well within the power of chance and mechanical necessity to produce a radio signal with such a simple sequence.
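The cumbersome code described above is easy to sketch. This is a minimal illustration only; the helper names (`is_prime`, `encode_primes`) are mine, not anything from the post:

```python
# Sketch of the "cumbersome" code described above: each prime p is sent
# as p dots, and a dash delimits successive numbers.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def encode_primes(upper, count=None):
    """Encode the primes up to `upper` (or just the first `count` of them)."""
    primes = [n for n in range(2, upper + 1) if is_prime(n)]
    if count is not None:
        primes = primes[:count]
    return "-".join("." * p for p in primes) + "-"

# The first two primes between 1 and 100, as in the example above:
print(encode_primes(100, count=2))  # "..-...-"
```

The output, `..-...-`, is exactly the “dot dot dash dot dot dot dash” sequence in the text.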

So what do we now know? We know that nature sends out radio signals. But that is not all we know. We know that it is entirely within the realm of reason to suppose that nature could send out a radio signal that denotes the first two prime numbers between 1 and 100 given a particular binary code.

From this information we must conclude that if the signal we received were only the first two prime numbers, we would have no warrant to assign a high probability to “intelligent cause.”

Nevertheless, we both know that your calculation (and it is a very good calculation, for which I commend you) is correct: the probability that this particular signal has an intelligent source is, for all practical purposes, “one.”

Nature can send out a radio signal.

Nature can embed a pattern in that signal that appears to generate prime numbers under the binary protocol we have designated.

Why, then, are we warranted to infer intelligent agency and not the work of nature as the cause of this particular signal?

The answer has nothing to do with your or my “intuition” about the signal.

The answer is that we both know that nature can do two things. (1) It can generate highly improbable patterns. Imagine ANY 500 bit long series of dots and dashes, and you will have a pattern that could not reasonably be replicated by chance before the heat death of the universe. And (2) it can generate specified patterns (for example, the two prime numbers we saw above).
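The “500 bit” figure can be put in rough numbers. The trial counts below are illustrative assumptions of mine, chosen to be absurdly generous, and are not from the post:

```python
# Rough arithmetic behind the "500 bits" claim: the chance of hitting one
# specific 500-symbol dot/dash sequence at random is 2**-500.  Even granting
# 10**45 tries per second for 10**25 seconds (both figures are illustrative
# overestimates, not from the post), the expected number of hits is
# effectively zero.
space = 2 ** 500                # distinct 500-bit sequences (~3.3e150)
trials = 10 ** 45 * 10 ** 25    # a wildly generous total number of tries
expected_hits = trials / space  # expected successes for one fixed target

print(f"{space:.3e} sequences")
print(f"expected hits: {expected_hits:.3e}")
```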

We also know something about what nature cannot do. You said, “I know of no non-intelligent process that might generate prime numbers.” You were almost right. As I have already demonstrated, you should have said “I know of no non-intelligent process that might generate A COMPLEX PATTERN OF prime numbers.”

In other words, you and I know that while nature can do “specified,” and nature can do “complex,” it cannot do “specified and complex at the same time”! This is not your intuition speaking, Lizzie. Without seeming to know it, you have made an inference from the universal experience of the human race.

Here’s the most important “take away” for purposes of the discussion we have been having: As much as you have bucked against the idea, you were able to make this design inference based upon nothing more than the character of the embedded signal (i.e., that it contained complex and specified information at the same time, that is to say, complex specified information).

Welcome to the ID camp, Lizzie!

Comments
Dr Liddle: Kindly note my remarks, clips and onward links on your comment no 1. G'night, GEM of TKI

kairosfocus
August 14, 2011 at 11:07 PM PDT
F/N: Please notice the money clip on why the needle in the haystack approach is a reasonable one:
the “elimination” approach rests on the well known, easily observed principle of the valid form of the layman’s “law of averages.” Namely, that in a “sufficiently” and “realistically” large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from “typical” values characteristic of predominant clusters, are very rarely observed. [For instance, if one tosses a "fair" coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message or to be nearly all-heads or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski's "Law of Chance" tables, here.)]
There's more than one side to a story, in short.

kairosfocus
August 14, 2011 at 11:04 PM PDT
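The coin-toss version of the “law of averages” point in the comment above can be checked with a quick simulation. The run count, the deviation window, and the fixed seed are illustrative choices of mine:

```python
import random

# Quick check of the claim: in 500 tosses of a fair coin, large deviations
# from a 250/250 split are rare.  The 10,000-run count and the 50-head
# deviation window are illustrative; the seed just makes the run repeatable.
random.seed(0)

def heads_in_500():
    return sum(random.randint(0, 1) for _ in range(500))

runs = [heads_in_500() for _ in range(10_000)]
extreme = sum(1 for h in runs if abs(h - 250) > 50)  # roughly a 4.5-sigma event

print(f"min={min(runs)}, max={max(runs)}, extreme deviations={extreme}")
```

With a standard deviation of about 11 heads, deviations beyond 50 almost never appear in 10,000 runs, which is the comment's point.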
Let's give the bottom-line from the linked:
__________
>> L[E|T2]/ L[E|T1] = LAMBDA = {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1, is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the "assuming the theory" objection, as already noted), times the ratio of the probabilities of the theories being so. [In short, if we have relevant information we can move from probabilities of evidence given theories to in effect relative probabilities of theories given evidence, in light of an agreed underlying model.]

All of this is fine as a matter of algebra (and onward, calculus) applied to probability, but it confronts us with the issue that we have to find the outright credible real world probabilities of T1 and T2 (or onward, of the underlying model that generates a range of possible parameter values). In some cases we can get that; in others, we cannot; but at least we have eliminated p[E]. Then, too, what is credible to one may not at all be so to another. This brings us back to the problem of selective hyperskepticism, and possible endless spinning out of -- too often specious or irrelevant but distracting -- objections [i.e. closed-minded objectionism].

Now, by contrast, the "elimination" approach rests on the well known, easily observed principle of the valid form of the layman's "law of averages." Namely, that in a "sufficiently" and "realistically" large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from "typical" values characteristic of predominant clusters are very rarely observed. [For instance, if one tosses a "fair" coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message, or to be nearly all-heads, or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski's "Law of Chance" tables, here.)]

Elimination therefore looks at a credible chance hyp and the reasonable distribution across possible outcomes it would give [or more broadly the "space" of possible configurations and the relative frequencies of relevant "clusters" of individual outcomes in it]; something we are often comfortable in doing. Then, we look at the actual observed evidence in hand, and in certain cases -- e.g. Caputo -- we see it is simply too extreme relative to such a chance hyp, per probabilistic resource exhaustion.

So the material consequence follows: when we can "simply" specify a cluster of outcomes of interest in a configuration space, and such a space is sufficiently large that a reasonable random search will be maximally unlikely, within available probabilistic/search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [Thus the telling force of Sir Fred Hoyle's celebrated illustration of the utter improbability of a tornado passing through a junkyard and assembling a 747 by chance. By far and away, most of the accessible configurations of the relevant parts will most emphatically be unflyable. So, if we are in a flyable configuration, that is most likely by intent and intelligently directed action, not chance.]

We therefore see why the Fisherian, eliminationist approach makes good sense even though it does not so neatly line up with the algebra and calculus of probability as would a likelihood or full Bayesian type approach. Thence, we see why the Dembski-style explanatory filter can be so effective, too. >>
___________

In short, there is a reason why a needle in a haystack search analytical approach is not just easily dismissible error.

kairosfocus
August 14, 2011 at 10:59 PM PDT
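The odds form of Bayes' rule that the quoted lambda expression gestures at (posterior odds = likelihood ratio times prior odds) is easy to illustrate numerically. All the probability values below are made-up numbers of mine, chosen only for illustration:

```python
# Odds form of Bayes' rule, as discussed in the comment above:
#   posterior odds (T2:T1) = [p(E|T2) / p(E|T1)] * [P(T2) / P(T1)].
# Every probability here is an illustrative assumption, not a real estimate.

def posterior_odds(p_e_t2, p_e_t1, prior_t2, prior_t1):
    """Return (likelihood ratio, posterior odds of T2 over T1)."""
    lam = p_e_t2 / p_e_t1
    return lam, lam * (prior_t2 / prior_t1)

# E = "the signal encodes primes"; T2 = intelligent source, T1 = natural.
lam, odds = posterior_odds(p_e_t2=0.9, p_e_t1=1e-6,
                           prior_t2=0.01, prior_t1=0.99)
print(f"lambda = {lam:.3g}, posterior odds = {odds:.3g}")
```

Even with a small prior on T2, a large enough likelihood ratio drives the posterior odds strongly toward T2, which is the structure of the argument both sides are appealing to.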
PS: Ducking behind revisable Bayesian priors does not answer the problem. Cf the discussion here [from my always linked] -- this issue came up at UD already and has been answered long since, years ago.

kairosfocus
August 14, 2011 at 10:45 PM PDT
If you doubt me, think about the challenge of preserving a dot-dash pattern with consistency in duration, while listing the primes in succession. Just the structuring of an aperiodic message with that sort of consistency would be suspicious. Multiply by the first 100 primes and this is a morally certain message, not lucky noise.

kairosfocus
August 14, 2011 at 10:41 PM PDT
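The size of the first-100-primes message in the comment above is easy to compute under the dot/dash code discussed in the post (p dots per prime p, one dash per delimiter). This is a back-of-envelope sketch; the one-symbol-equals-one-bit reading is my simplifying assumption:

```python
# How long is the first-100-primes message under the post's code?
# Each prime p contributes p dots plus one dash delimiter.

def first_n_primes(n):
    """Return the first n primes by trial division (fine for small n)."""
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

primes = first_n_primes(100)
symbols = sum(primes) + len(primes)  # dots for each prime + one dash each

print(f"100th prime: {primes[-1]}, total symbols: {symbols}")
```

At one bit per dot/dash symbol the message runs to tens of thousands of bits, far past the 500- and 1,000-bit thresholds invoked in the thread.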
Dr Liddle: Can you, by incremental, rewarded accidental errors, turn "See Spot run" into a textbook on Java Programming, within the reasonable resources of the observable cosmos?

And in that context, for months now the simplest way to assess those resources is, on a solar system scale basis: Chi_500 = I*S - 500, bits beyond the threshold of our solar system's PTQS resources. (For the observable cosmos, go up to 1000 bits. You already know the scope of resources to the scope of possibilities for 500 bits is as a one-straw size sample to a cubical haystack over 1 light month across.)

The inference to design on specified complexity, esp functionally specific complexity, beyond a reasonable threshold is quite well warranted, and is also backed up by billions of successful cases in point and NIL credible counter-examples. Given the conservative nature of the threshold as just seen, that is no wonder -- our solar system out to Pluto could easily be lost in such a haystack. And Barry's example of the first 100 primes, as a structured signal, is well beyond the 500 bit threshold or even the 1,000 bit threshold.

GEM of TKI

kairosfocus
August 14, 2011 at 10:39 PM PDT
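The comment's threshold metric, Chi_500 = I*S - 500, can be sketched directly. Here I is an information measure in bits and S is a dummy variable (1 if the pattern is independently specified, else 0); the 24,233-symbol input value is my own illustrative figure for the first-100-primes message at one bit per symbol:

```python
# Sketch of the metric quoted in the comment above: Chi_500 = I*S - 500.
# I = information in bits; S = 1 if the pattern is independently specified,
# else 0.  A positive Chi_500 is read by the comment's author as "beyond the
# solar system's search resources."  The 24,233-bit figure below is an
# illustrative assumption (one bit per dot/dash symbol), not from the post.

def chi_500(info_bits, specified):
    """Threshold metric as quoted: I*S - 500."""
    return info_bits * (1 if specified else 0) - 500

print(chi_500(24_233, specified=True))    # far past the threshold
print(chi_500(24_233, specified=False))   # negative: no design inference
```

The second call shows why S matters to the argument: without an independent specification the metric never goes positive, no matter how long the string.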
Yikes! How was Lamarck partly right? Are you referring to gene switching as a result of environmental stress?

Querius
August 14, 2011 at 7:42 PM PDT
No, Lizzie. What you request requires an honest dialogue, and you have demonstrated that you are not willing to be honest with me, or even yourself. As I say, it saddens me, because I truly had hope for you.

Barry Arrington
August 14, 2011 at 6:51 PM PDT
Lamarck was at least partly right.

Elizabeth Liddle
August 14, 2011 at 6:50 PM PDT
Lizzie Joins the ID Camp Without Even Knowing It!
That's true of many, if not most, ostensibly anti-IDists; in much the same way that most Darwinists, while publicly anti-Lamarckian, are really ultra-Lamarckians under the hood.

Ilion
August 14, 2011 at 6:44 PM PDT
Detecting patterns from sentient sources among those generated by nature -- isn't that what SETI has been searching for? Here's a nightmare scenario:

SETI scientist: "There's some good news and some bad news. The good news is that we located and confirmed a structured extraterrestrial radio signal."

Science magazine editor: "And the bad news?"

SETI scientist: "It seems to be replicating the base pairs of the E. coli genome."

Querius
August 14, 2011 at 6:31 PM PDT
No, I do not think I am intellectually dishonest. Being as honest as I can be, which I try to be, I cannot rule out the possibility that there is some culpably unexamined dissonance lurking in my soul, but I am as honest as I can be. If you think otherwise, please explain exactly why you think so. It's the least you can do.

Elizabeth Liddle
August 14, 2011 at 6:26 PM PDT
Now you are being intellectually dishonest. It saddens me.

Barry Arrington
August 14, 2011 at 6:19 PM PDT
Barry, please read my posts. I said: "I know of no non-intelligent process that might generate prime numbers (presumably expressed as binary code), and so my priors on that are low." I did not say there could be no non-intelligent process, yadda yadda, but that I knew of none and so my priors on that are low.

My position is that some "natural" (i.e. non-intentional) systems can generate CSI: what is required is not intention but a system in which replication or repetition with variance is modulated by feedback from results. This is the basis of learning systems, and while the paradigm case of a learning system is a critter with a brain, it is also found in non-brain-possessing systems, most famously the "rm+ns" system postulated by Darwin as responsible for the adaptation of populations to environmental conditions, namely processes in which there is replication of a pattern with heritable (by any means) variation in the likelihood with which the pattern will be repeated. I started to discuss this in one of your other Lizzie-call-out threads, but as you've started yet another (! Should I be flattered? Ashamed? :) ) I'll say this here.

And so, faced with a pattern that has CSI (and not all designed things do), the first thing I'd ask is: does it replicate with heritable variance in the ability to replicate? If so, I have a candidate for CSI generation right there. If not, I look for an external designer. If there isn't an obvious candidate, then I start to look for any clues from the pattern that might tell me some details about the way it was designed. Does it have tool marks? Does it have an obvious use? Does it look like an imitation of something else? If so, might the something else be a clue to the designer? And so on. I go in with priors, certainly, but priors that will be adjusted in the light of new data, and new data I will search for by devising predictive hypotheses and seeing whether the data that turn up support the prediction.

That's the scientific method. It's also the way Bayesian inference works. But CSI is something else. It's basically a frequentist methodology applied to something for which frequencies are not actually available in most cases, and for which the null is badly specified, while the alpha value (the rejection criterion) is rolled up into the CSI number itself. As I keep saying, it's useless. When applicable, it is too conservative* (returns loads of false negatives), and when non-applicable it doesn't tell you anything about intentional design, at best telling you merely that some decision-tree-with-feedback was likely to be responsible, and at worst telling you nothing at all.

I'm not an enemy of ID, although I think it has very little in its favour. But I do think that, possibly inadvertently, the ID idea raises some very interesting questions about the nature of evolutionary processes, and, indeed, the nature of intelligence and intention. And, because I'm like that, I'd also like to see some specific hypotheses tested (like "front-loading", which would readily generate differential hypotheses). But when you say:

In other words, you and I know that while nature can do “specified,” and nature can do “complex,” it cannot do “specified and complex at the same time”!

no, I do not "know" that. I think it is false. I think "nature" can easily do "specified and complex at the same time" as long as the generator is a system that involves a repeater-with-variance and feedback. Like brains. Like evolutionary theory. Like weather. Like crystals. Like beaches. Like sand-dunes. Each of these produces patterns less complex than the previous (roughly), but they all do it to some extent, because the sufficient requirement is repetition with variance and feedback, not intentional intelligence.

That's my position. Now you know :) Site is looking good btw. Cheers, Lizzie

*and at least one critic claims it contains a mistake, and if computed with the right constant, would return negatives for every possible pattern in the universe!

Elizabeth Liddle
August 14, 2011 at 6:11 PM PDT
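The kind of system the comment above describes (replication with variation, modulated by feedback from results) can be sketched in a few lines. The target string, mutation rate, population size, and seed below are all illustrative choices of mine; the feedback here is simply "closeness to a specified target," which is the standard (and contested) simplification of such toy models:

```python
import random

# Minimal sketch of a repeater-with-variance-and-feedback system: copies of
# a string are made with occasional random errors, and the copy that scores
# best against a specified pattern seeds the next generation.  All
# parameters are illustrative; the seed makes the run repeatable.
random.seed(42)

TARGET = "..-...-.....-.......-"   # primes 2, 3, 5, 7 in the dot/dash code
ALPHABET = ".-"

def score(s):
    """Feedback signal: number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, randomizing each symbol with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while score(parent) < len(TARGET):
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=score)   # selective feedback step
    generation += 1

print(f"reached target in {generation} generations")
```

Whether reaching a target that the feedback function itself encodes counts as "nature doing specified and complex" is, of course, exactly what the two sides of this thread dispute; the sketch only shows the mechanism the comment has in mind.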
On that basis I would join the ID camp as well, insofar as radio signals comprising the first 100 primes were concerned. Likely the same for the first 100 Fibonacci numbers. That is because we know of no natural mechanism for producing radio signals of that nature.

But it's still provisional, as is all science, and subject to change if we do find evidence for a natural mechanism that could produce the same signals. Remember, we've been here before: when pulsar signals were first detected, the assumption was that they were artificial, for the same reason. It took a short time for a natural mechanism to supplant that view.

None of which alters the fact that ID does not provide any explanation for the origin of species, which evolution does.

Grunty
August 14, 2011 at 5:56 PM PDT