Intelligent Design

Lizzie Joins the ID Camp Without Even Knowing It!

Lizzie,

You continue to astonish.

In the first sentence of your reply to my prior post you wrote: “I know that it is possible for intelligent life-forms to send radio signals, because we do; my priors for a radio signal to have an intelligent origin are therefore substantially above zero.”

As I demonstrated earlier, the issue is not whether nature or intelligent agents can cause radio signals. We know that both can. The issue is whether we have any warrant to distinguish this particular signal from a natural signal.

Then you write: “I know of no non-intelligent process that might generate prime numbers (presumably expressed as binary code), and so my priors on that are low.”

Upon a moment’s reflection I am certain you will agree that this is not, strictly speaking, correct. It is easy to imagine such a process. Imagine (as you suggested) a simple binary code that assigns two “dots” to the number “two” and three “dots” to the number “three” and five “dots” to the number “five” and so on, and also assigns a “dash” to delimit each number (a cumbersome code to be sure, but a conceivable one). In this code the series “dot dot dash dot dot dot dash” denotes the first two prime numbers between 1 and 100. Surely you will agree that it is well within the power of chance and mechanical necessity to produce a radio signal with such a simple sequence.
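As a sketch, the cumbersome unary code described above could be written out like this (the function name and the "." / "-" symbols are illustrative choices, not from the post):

```python
# Hypothetical unary code from the example above: a prime n becomes n "dots"
# ("."), and each number is followed by a "dash" ("-") as a delimiter.
def encode_primes(primes):
    return "".join("." * p + "-" for p in primes)

# The first two primes, 2 and 3, yield "dot dot dash dot dot dot dash".
print(encode_primes([2, 3]))  # -> ..-...-
```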

So what do we now know? We know that nature sends out radio signals. But that is not all we know. We know that it is entirely within the realm of reason to suppose that nature could send out a radio signal that denotes the first two prime numbers between 1 and 100 given a particular binary code.

From this information we must conclude that if the signal we received were only the first two prime numbers, we would have no warrant to assign a high probability to “intelligent cause.”

Nevertheless, we both know that your conclusion is correct: the probability that this particular signal has an intelligent source is, for all practical purposes, “one” (and it is a very good calculation, for which I commend you).

Nature can send out a radio signal.

Nature can embed a pattern in that signal that appears to generate prime numbers under the binary protocol we have designated.

Why, then, are we warranted to infer intelligent agency and not the work of nature as the cause of this particular signal?

The answer has nothing to do with your or my “intuition” about the signal.

The answer is that we both know that nature can do two things. (1) It can generate highly improbable patterns. Imagine ANY 500 bit long series of dots and dashes, and you will have a pattern that could not reasonably be replicated by chance before the heat death of the universe. And (2) it can generate specified patterns (for example, the two prime numbers we saw above).
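The arithmetic behind the 500-bit claim can be sketched as follows; the trial budget uses the commonly cited (assumed) figures of 10^80 particles, 10^45 events per second, and 10^25 seconds:

```python
# A specific 500-bit dot-dash series has probability 2**-500 per random trial.
p_series = 2.0 ** -500           # roughly 3e-151

# Assumed upper bound on physical events: 1e80 particles * 1e45 events/s * 1e25 s.
max_trials = 1e80 * 1e45 * 1e25  # 1e150

expected_hits = max_trials * p_series
print(expected_hits < 1)  # -> True: fewer than one expected chance hit
```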

We also know something about what nature cannot do. You said, “I know of no non-intelligent process that might generate prime numbers.” You were almost right. As I have already demonstrated, you should have said “I know of no non-intelligent process that might generate A COMPLEX PATTERN OF prime numbers.”

In other words, you and I know that while nature can do “specified,” and nature can do “complex,” it cannot do “specified and complex at the same time”! This is not your intuition speaking, Lizzie. Without seeming to know it, you have made an inference from the universal experience of the human race.

Here’s the most important “take away” for purposes of the discussion we have been having: As much as you have bucked against the idea, you were able to make this design inference based upon nothing more than the character of the embedded signal (i.e., that it contained complex and specified information at the same time, that is to say, complex specified information).

Welcome to the ID camp, Lizzie!

105 Replies to “Lizzie Joins the ID Camp Without Even Knowing It!”

  1. Grunty says:

    On that basis I would join the ID camp as well, insofar as radio signals comprising the first 100 primes were concerned. Likely the same for the first 100 Fibonacci numbers. That is because we know of no natural mechanism for producing radio signals of that nature.

    But it’s still provisional, as is all science, and subject to change if we do find evidence for a natural mechanism that could produce the same signals. Remember, we’ve been here before: when pulsar signals were first detected the assumption was they were artificial, for the same reason. It took a short time for a natural mechanism to supplant that view.

    None of which alters the fact that ID does not provide any explanation for the origin of species, which evolution does.

  2. Elizabeth Liddle says:

    Barry, please read my posts. I said: “I know of no non-intelligent process that might generate prime numbers (presumably expressed as binary code), and so my priors on that are low.”

    I did not say there could be no non-intelligent process, yadda yadda, but that I knew of none and so my priors on that are low.

    My position is that some “natural” (i.e. non-intentional) systems can generate CSI: what is required is not intention but a system in which replication or repetition with variance is modulated by feedback from results.

    This is the basis of learning systems, and while the paradigm case of a learning system is a critter with a brain, it is also found in non-brain-possessing systems, most famously the “rm+ns” system postulated by Darwin as responsible for the adaptation of populations to environmental conditions: processes in which there is replication of a pattern with heritable (by any means) variation in the likelihood with which the pattern will be repeated.

    I started to discuss this in one of your other Lizzie-call-out threads, but as you’ve started yet another (! Should I be flattered? Ashamed? 🙂 ) I’ll say this here.

    And so, faced with a pattern that has CSI (and not all designed things do), the first thing I’d ask is: does it replicate with heritable variance in the ability to replicate? If so, I have a candidate for CSI generation right there. If not, I look for an external designer. If there isn’t an obvious candidate, then I start to look for any clues from the pattern that might tell me some details about the way it was designed. Does it have tool marks? Does it have an obvious use? Does it look like an imitation of something else? If so, might the something else be a clue to the designer?

    And so on. I go in with priors, certainly, but priors that will be adjusted in the light of new data, and new data I will search for by devising predictive hypotheses and seeing whether the data that turn up support the prediction.

    That’s the scientific method. It’s also the way Bayesian Inference works.

    But CSI is something else. It’s basically a frequentist methodology applied to something for which frequencies are not actually available in most cases, and for which the null is badly specified, while the alpha value (the rejection criterion) is rolled up into the CSI number itself.

    As I keep saying, it’s useless. When applicable, it is too conservative* (returns loads of false negatives) and when non-applicable it doesn’t tell you anything about intentional design, at best telling you merely that some decision-tree-with-feedback was likely to be responsible, and at worst telling you nothing at all.

    I’m not an enemy of ID, although I think it has very little in its favour. But I do think that, possibly inadvertently, the ID idea raises some very interesting questions about the nature of evolutionary processes, and, indeed, the nature of intelligence and intention.

    And, because I’m like that, I’d also like to see some specific hypotheses tested (like “front-loading” which would readily generate differential hypotheses).

    But when you say:

    In other words, you and I know that while nature can do “specified,” and nature can do “complex,” it cannot do “specified and complex at the same time”!

    no, I do not “know” that. I think it is false. I think “nature” can easily do “specified and complex at the same time” as long as the generator is a system that involves a repeater-with-variance and feedback.

    Like brains. Like evolutionary theory. Like weather. Like crystals. Like beaches. Like sand-dunes. Each of these produces patterns less complex than the previous (roughly) but they all do it to some extent because the sufficient requirement is repetition with variance and feedback, not intentional intelligence.
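The kind of "repetition with variance plus feedback" generator described here can be sketched with a toy selection loop, in the spirit of Dawkins' "weasel" illustration (the target string, mutation rate, and population size are all arbitrary assumptions):

```python
import random

random.seed(42)  # fixed seed so the sketch is deterministic

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Repetition with variance: copy the string, occasionally altering a character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s):
    # Feedback: how many characters already match the specification.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET and generations < 10000:
    generations += 1
    # Each generation: 100 varied copies, keep the one the feedback favours.
    current = max((mutate(current) for _ in range(100)), key=fitness)

print(current, generations)
```

Blind sampling of 28-character strings would need on the order of 27**28 trials to hit the target; the feedback loop reaches it in a modest number of generations.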

    That’s my position. Now you know 🙂

    Site is looking good btw.

    Cheers

    Lizzie

    *and at least one critic claims it contains a mistake, and if computed with the right constant, would return negatives for every possible pattern in the universe!

  3. Barry Arrington says:

    Now you are being intellectually dishonest. It saddens me.

  4. Elizabeth Liddle says:

    No, I do not think I am intellectually dishonest. Being as honest as I can be, which I try to be, I cannot rule out the possibility that there is some culpably unexamined dissonance lurking in my soul, but I am as honest as I can be.

    If you think otherwise, please explain exactly why you think so.

    It’s the least you can do.

  5. Querius says:

    Detecting patterns from sentient sources among those generated by nature–isn’t that what SETI has been searching for? Here’s a nightmare scenario:

    SETI scientist: “There’s some good news and some bad news. The good news is that we located and confirmed a structured extraterrestrial radio signal.”

    Science magazine editor: “And the bad news?”

    SETI scientist: “It seems to be replicating the base pairs of the E.coli genome.”

  6. Ilion says:

    Lizzie Joins the ID Camp Without Even Knowing It!

    That’s true of many, if not most, ostensibly anti-IDists; in much the same way that most Darwinists, while publicly anti-Lamarckian, are really ultra-Lamarckians under the hood.

  7. Elizabeth Liddle says:

    Lamarck was at least partly right.

  8. Barry Arrington says:

    No Lizzie. What you request requires an honest dialogue, and you have demonstrated that you are not willing to be honest with me, or even yourself. As I say, it saddens me, because I truly had hope for you.

  9. Querius says:

    Yikes! How was Lamarck partly right? Are you referring to gene switching as a result of environmental stress?

  10. kairosfocus says:

    Dr Liddle:

    Can you, by incremental, rewarded accidental errors, turn “See Spot run” into a textbook on Java Programming, within the reasonable resources of the observable cosmos?

    And in that context, for months now the simplest way to assess those resources is, on a solar system scale basis:

    Chi_500 = I*S – 500, bits beyond the threshold of our solar system’s PTQS resources.

    (For the observable cosmos go up to 1000 bits. You already know that the ratio of the scope of resources to the scope of possibilities for 500 bits is like a one-straw-sized sample to a cubical haystack over 1 light month across.)

    The inference to design on specified complexity, esp functionally specific complexity, beyond a reasonable threshold is quite well warranted, and is also backed up by billions of successful cases in point and NIL credible counter-examples. Given the conservative nature of the threshold as just seen, that is no wonder — our solar system out to Pluto could easily be lost in such a haystack.

    And Barry’s example of the first 100 primes, as a structured signal, is well beyond the 500 bit threshold or even the 1,000 bit threshold.

    GEM of TKI

  11. kairosfocus says:

    If you doubt me, think about the challenge of preserving a dot-dash pattern with consistency in duration, while listing the primes in succession. Just the structuring of an aperiodic message with that sort of consistency would be suspicious. Multiply by the first 100 primes and this is a morally certain message, not lucky noise.

  12. kairosfocus says:

    PS: Ducking behind revisable Bayesian priors does not answer the problem. Cf the discussion here [from my always linked] — this issue came up at UD already and has been answered long since, years ago.

  13. kairosfocus says:

    Let’s give the bottom-line from the linked:

    __________

    >> LAMBDA = L[E|T2]/ L[E|T1] = {p[E|T2]/ p[E|T1]} * {P(T2)/ P(T1)}

    Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1, is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the “assuming the theory” objection, as already noted), times the ratio of the probabilities of the theories being so. [In short if we have relevant information we can move from probabilities of evidence given theories to in effect relative probabilities of theories given evidence, and in light of an agreed underlying model.]

    All of this is fine as a matter of algebra (and onward, calculus) applied to probability, but it confronts us with the issue that we have to find the outright credible real world probabilities of T1, and T2 (or onward, of the underlying model that generates a range of possible parameter values). In some cases we can get that, in others, we cannot; but at least, we have eliminated p[E]. Then, too, what is credible to one may not at all be so to another. This brings us back to the problem of selective hyperskepticism, and possible endless spinning out of — too often specious or irrelevant but distracting — objections [i.e closed minded objectionism].

    Now, by contrast the “elimination” approach rests on the well known, easily observed principle of the valid form of the layman’s “law of averages.” Namely, that in a “sufficiently” and “realistically” large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from “typical” values characteristic of predominant clusters, are very rarely observed. [For instance, if one tosses a “fair” coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message or to be nearly all-heads or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski’s “Law of Chance” tables, here.)]

    Elimination therefore looks at a credible chance hyp and the reasonable distribution across possible outcomes it would give [or more broadly the “space” of possible configurations and the relative frequencies of relevant “clusters” of individual outcomes in it]; something we are often comfortable in doing. Then, we look at the actual observed evidence in hand, and in certain cases — e.g. Caputo — we see it is simply too extreme relative to such a chance hyp, per probabilistic resource exhaustion.

    So the material consequence follows: when we can “simply” specify a cluster of outcomes of interest in a configuration space, and such a space is sufficiently large that a reasonable random search will be maximally unlikely within available probabilistic/ search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [Thus the telling force of Sir Fred Hoyle’s celebrated illustration of the utter improbability of a tornado passing through a junkyard and assembling a 747 by chance. By far and away, most of the accessible configurations of the relevant parts will most emphatically be unflyable. So, if we are in a flyable configuration, that is most likely by intent and intelligently directed action, not chance. ]

    We therefore see why the Fisherian, eliminationist approach makes good sense even though it does not so neatly line up with the algebra and calculus of probability as would a likelihood or full Bayesian type approach. Thence, we see why the Dembski-style explanatory filter can be so effective, too. >>
    ___________

    In short, there is a reason why a needle-in-a-haystack search analytical approach is not just an easily dismissible error.
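The odds form quoted above (a likelihood ratio times a prior ratio) can be sketched with purely illustrative numbers; the function name and figures below are assumptions for the sake of the example:

```python
# LAMBDA = {p[E|T2]/p[E|T1]} * {P(T2)/P(T1)}, per the clipped algebra above.
def posterior_odds(p_e_t2, p_e_t1, prior_t2, prior_t1):
    likelihood_ratio = p_e_t2 / p_e_t1   # how much likelier E is under T2
    prior_odds = prior_t2 / prior_t1     # how much likelier T2 was a priori
    return likelihood_ratio * prior_odds

# Assumed numbers: evidence a million times likelier under T2 (design) than
# T1 (chance), but T2 given a prior of only 1 in 1000.
odds = posterior_odds(1e-3, 1e-9, 0.001, 0.999)
print(round(odds))  # -> 1001: the evidence overwhelms the low prior
```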

  14. kairosfocus says:

    F/N: Please notice the money clip on why the needle in the haystack approach is a reasonable one:

    the “elimination” approach rests on the well known, easily observed principle of the valid form of the layman’s “law of averages.” Namely, that in a “sufficiently” and “realistically” large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from “typical” values characteristic of predominant clusters, are very rarely observed. [For instance, if one tosses a “fair” coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message or to be nearly all-heads or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski’s “Law of Chance” tables, here.)]

    There’s more than one side to a story, in short.

  15. kairosfocus says:

    Dr Liddle:

    Kindly note my remarks, clips and onward links on your comment no 1.

    G’night

    GEM of TKI

  16. kairosfocus says:

    F/N 2: The Chi metrics are DESIGNED to give negative values for events in the observable cosmos [or at least our solar system] that are reachable by chance and/or mechanical necessity.

    For instance take Chi_500 = I*S – 500

    a: If I, estimated by something like I = – log p, by a weighted sum as in Shannon’s H, or by direct count, is under 500, this will be negative: not complex enough.

    b: If I is over 500, then there is a possibility of a positive result, i.e. a verdict of not plausible on chance, on the 1 light month needle-in-the-haystack search standard.

    c: If the matter is driven by necessity, e.g. a crystal face as opposed to an optically flat polycrystalline surface, I would be low or zero, so we would be negative.

    d: If any old combination would do, or something that is typical of the bulk of the set of possibilities W, S would be zero, putting Chi negative again.

    e: Only where the observed case E comes from a complex [beyond 500 bits], specific, describable and narrow zone T in W, such that needle-in-haystack issues apply, can we go positive.

    f: This is routinely achieved for intelligent products such as posts in this thread beyond 72 ASCII characters, which are in English and responsive to a context, while being complex beyond 500 bits.

    _______

    So, the asterisked objection in 1 fails.
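A minimal sketch of the threshold test in cases a-f, assuming I is measured in bits and S is a 0/1 specificity flag (the function and parameter names are illustrative, not canonical):

```python
def chi_500(info_bits, specified):
    # Chi_500 = I*S - 500; a positive value flags "not plausible on chance".
    s = 1 if specified else 0
    return info_bits * s - 500

print(chi_500(400, True))     # case a: under 500 bits -> -100 (negative)
print(chi_500(600, False))    # case d: unspecified -> -500 (negative)
print(chi_500(72 * 7, True))  # case f: 72 ASCII chars at 7 bits -> 4 (positive)
```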

  17. Prof. FX Gumby says:

    Barry, an accusation of dishonesty is a series one to level at a scientist. It completely undermines credibility and if it sticks, it ruins careers. As such, it’s now your strong moral obligation to explain why you think Elizabeth is being dishonest.

    For a blog that values civility highly, the levels of civility shown to Elizabeth by some commenters (and posters) here are disappointing.

  18. Prof. FX Gumby says:

    series = serious. It’s early here.

  19. paragwinn says:

    “most Darwinists, while publicly anti-Lamarckian, are really ultra-Lamarckians under the hood.”

    Citation needed.

  20. markf says:

    Barry

    I think that you have not understood Lizzie’s point and that is why you think she is being dishonest.  I have similar (but not identical) views to her  – so I hope she  won’t mind if I try to explain.

    As I understand it, you think that Lizzie is using ID principles to detect design without admitting it.  However, there are key differences in the way she and I conclude design from the way that e.g. William Dembski concludes design.  We might in specific cases, such as the prime number sequences, come to the same conclusion but we get there a different way. We both use Bayesian inference while Dembski (and presumably you) use a rather odd variation on old-fashioned Fisherian hypothesis testing. 

    You deduce design by calculating the probability of an outcome conforming to some specification given what you believe to be the only reasonable chance hypothesis.  In this case the probability of a string of prime numbers assuming all number strings are equally likely.

    Bayesians deduce design by considering a range of hypotheses and considering not only the relative likelihood of the outcome given each hypothesis but also the prior probability of each hypothesis.  In this case hypotheses under consideration might include:

    1. Alien civilisation sending radio message intentionally or accidentally

    2. Natural radio source generating numbers with no pattern which just happen to correspond to the first 100 prime numbers

    3. Natural radio source which is heavily biased to emit prime numbers (perhaps only emits prime numbers)

    4. Reflection of prime number sequence generated by our own planet so it appears to come from space

    There may well be others which have just not occurred to us.

    Each of these has its own prior probability which is then modified by the likelihood of receiving that specific sequence given that hypothesis. I think we all agree the likelihood of the outcome given hypothesis 2 is so low compared to the others we can dismiss it – even though the prior is very high.

    The prior probability of 1 is hard to estimate. It can only be based on things like the number of stars, how many are known to have planets, and personal estimates of the chances of alien civilisation developing on another planet.  But it is not zero.  (I personally think the likelihood of the outcome given 1 is lower than most take for granted – but that is my interpretation.)

    My personal favourite is 4 – which seems to have a highish prior and likelihood compared to the others.  But I don’t think the prior for 3 is so very low.  I think of the hypothesised processes which lead to the periodic cicadas having periods which are prime numbers.
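The weighing described in this comment can be sketched as prior-times-likelihood for each of the four hypotheses; every number below is a purely illustrative assumption, chosen only to show the mechanics:

```python
# Assumed prior probabilities for markf's four hypotheses.
priors = {
    "1_alien_civilisation": 1e-6,
    "2_patternless_natural_source": 0.9,
    "3_prime_biased_natural_source": 1e-8,
    "4_terrestrial_reflection": 1e-4,
}
# Assumed P(observing the first 100 primes | hypothesis).
likelihoods = {
    "1_alien_civilisation": 1e-2,
    "2_patternless_natural_source": 1e-150,  # vanishingly small, as agreed
    "3_prime_biased_natural_source": 0.5,
    "4_terrestrial_reflection": 1e-1,
}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: v / total for h, v in unnorm.items()}

# Hypothesis 2 is dismissed despite its high prior; with these numbers, 4 wins.
best = max(posteriors, key=posteriors.get)
print(best)  # -> 4_terrestrial_reflection
```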

    But the point is that this is a very different process from the idiosyncratic methodology which is ID.  It considers specific hypotheses – an alien civilisation, not design in general – and assesses each one using acknowledged processes for rational inference.

    I fear the result of this overlong comment will be something about intellectual dishonesty or faith or irrationality – but with luck others readers may disagree.

  21. Elizabeth Liddle says:

    Barry, you have accused me of dishonesty.

    It does not “require honest dialogue” for you to honour my request, which was to explain to me why you asserted that I was being dishonest.

    It simply requires you to do what anyone making such a charge is obligated to do – to explain the charge.

    As for the honest dialogue – you are assuming your consequent here, in a way that would make Kafka proud. You accuse me of dishonesty then refuse to tell me why you think I have been dishonest, on the grounds that for you to do this would require “honest dialogue”, the very thing you imply I am not capable of.

    huh?

    Barry, as I said in Vincent’s thread, I post under my own name here. I did so because I did not wish to hide my identity – I am honest enough, in other words (not that I think there is anything wrong with web anonymity, though it is usually easy enough to breach) to want to be completely open about who I am and what I think.

    I assume you also use your own name, as do Denyse, Vincent, Gil and, of course William Dembski.

    But you have made a serious allegation about my personal integrity, in an easily googled comment on a site with a high google ranking. The least you can do is to support it in a manner in which I can respond.

    All I can do, in the absence of such support, is to insist on my innocence.

    Please honour my request.

  22. Elizabeth Liddle says:

    Thanks Mark.

    Yes, that’s well put.

    The difference between a Bayesian and a Frequentist inference is important here, and CSI is a frequentist concept.

    I used the former, not the latter.

    That was my point.

  23. kairosfocus says:

    Darwin, as the editions of Origin went through, was. It can be seen in the commonly available 6th edition. I know there is some report of strange issues that have led to a sort of low-level neo-Lamarckianism today.

  24. kairosfocus says:

    Dr Liddle (& MF etal):

    Kindly see 1.1 ff above, re Bayesianism etc.

    This is an old issue at UD, and the balance on merits is not as you imagine or suggest.

    GEM of TKI

  25. DrBot says:

    Fortunately visitors can read the discussion that led to the accusation and determine that you are not being dishonest, but mud sticks, as they say, which is probably the point.

  26. Meleagar says:

    Elizabeth said:

    “And so, faced with a pattern that has CSI (and not all designed things do), the first thing I’d ask is: does it replicate with heritable variance in the ability to replicate? If so, I have a candidate for CSI generation right there.”

    Outside of the phenomena under debate (life), where do you find systems of heritable variance with feedback mechanisms?

  27. Meleagar says:

    MarkF said:

    As I understand it, you think that Lizzie is using ID principles to detect design without admitting it.

    What difference does it make if she’s using some other design inference method? The fact is, she just admitted that design detection is valid, and that intelligent design can be detected without knowledge of the particulars of the designing agency or process.

    What any design detection method does is locate a likely design candidate, and then changes the heuristic one uses in further examination of said phenomena from one of finding natural causes to one of finding intelligent purpose, origin, fabrication methods, etc.

  28. Elizabeth Liddle says:

    Meleagar, if you have been following my posts, you will know that at no time have I said that design detection is not “valid” in principle, or that intelligent design cannot be detected without knowledge of the particulars of the designing agency.

    I think a number of invalid methods have been advanced by ID proponents for inferring design, and ironically, on several occasions I’ve said that I think that CSI is far too conservative. It produces way too many false negatives. It also, IMO, produces false positives, but that is, I think, because it is not a signature of intentional design but of a particular system of contingencies-with-feedback.

    What any design detection method does is locate a likely design candidate, and then changes the heuristic one uses in further examination of said phenomena from one of finding natural causes to one of finding intelligent purpose, origin, fabrication methods, etc.

    Well, except that I think that you need to iterate between the two things, constantly adjusting your priors on both (likelihood of design; likelihood of fabrication method) to allow a well-fitting model to emerge.

    But, basically, yes.

  29. Elizabeth Liddle says:

    Well, in computer programs, obviously, but using “heritable” metaphorically, in the sense that characteristics from one iteration are preserved, with modification, in the next. Other examples might be beach formation, some crystal processes, weather systems.

    But biological systems are the most obvious, and at many scales, from within brains to within populations to within ecosystems.

    Prions are an interesting example.

  30. Elizabeth Liddle says:

    kf, thanks for your responses. I hugely appreciate your engagement with the actual content of my remarks.

    I will try to return the courtesy, but it may take me a while.

  31. markf says:

    What any design detection method does is locate a likely design candidate, and then changes the heuristic one uses in further examination of said phenomena from one of finding natural causes to one of finding intelligent purpose, origin, fabrication methods, etc

    Seems like a reasonable approach. It is not the one ID uses which makes it a matter of principle to detect design without locating a likely design candidate.

  32. Barry Arrington says:

    Dr. Liddle,
    With respect to the prime number example, by no means am I casting aspersions on your personal integrity. As I have said before, I do not believe you are lying. I believe your faith commitments force you away from glaringly obvious conclusions.

    The entire issue we have been exploring is why we would have any warrant to begin investigating this signal in the first place. Certainly we wouldn’t perform a Bayesian analysis on just any old radio signal. Why would we consider performing any sort of analysis on this one? Why not simply ignore it as a product of the background radiation? Why is it interesting to us to begin with? It is interesting to us to begin with because it specifies the primes between 1 and 100. Why are the primes between 1 and 100 interesting? There is one and only one reason they are interesting — they are a case of complex specified information. You refuse to admit the obvious — that the only reason we would think this signal to be interesting to begin with — the only reason you would start to perform Bayesian analysis — is that you have already made at least a preliminary design inference based on the fact that the signal contains CSI.

    Again, I do not believe you are lying to me or anyone else. I believe your faith commitments are so strong they do not allow you to see the obvious.

    Now, to suggest that Bill Dembski believes that intelligence can be non-intentional, I do believe you are being dishonest in the normal sense of the word about that.

  33. Bantay says:

    Ms. Liddle…welcome to the big tent.

  34. Elizabeth Liddle says:

    Dr. Liddle,
    With respect to the prime number example, by no means am I casting aspersions on your personal integrity. As I have said before, I do not believe you are lying. I believe your faith commitments force you away from glaringly obvious conclusions.

    And I asked you to explain why you believe this.

    The entire issue we have been exploring is why we would have any warrant to begin investigating this signal in the first place. Certainly we wouldn’t perform a Bayesian analysis on just any old radio signal. Why would we consider performing any sort of analysis on this one? Why not simply ignore it as a product of the background radiation? Why is it interesting to us to begin with? It is interesting to us to begin with because it specifies the primes between 1 and 100. Why are the primes between 1 and 100 interesting? There is one and only one reason they are interesting — they are a case of complex specified information. You refuse to admit the obvious — that the only reason we would think this signal to be interesting to begin with — the only reason you would start to perform Bayesian analysis — is that you have already made at least a preliminary design inference based on the fact that the signal contains CSI.

    Barry, your claim, in another thread (I have no idea why you have spread this over several threads; I’m going to have to hunt), was that I “made a design inference based on nothing more than the existence of CSI embedded in a radio signal. I based my inference on CSI alone”. I did not, as I explained here, in excruciating detail.

    As you can see from the Bayesian equation there, the probability that the signal came from a non-intelligent source is a prior, but not the only prior. And it was not a “CSI” probability, although I would agree that the fact that the signal is complex and readily compressible (has specified complexity) was a factor in my arriving at that prior. But the point about a prior probability is that it can be adjusted in the light of new data. CSI calculations generate a non-conditional probability value that is supposed to be the probability that the pattern was generated by a non-intentional source.

    That is why my reasoning is fundamentally different. I am not dishonestly (whether knowingly or unknowingly) refusing to admit that I am using CSI to make an inference and then refusing to peek over the brink; I am applying a different inferential approach, one that I consider much sounder, which leads me to no brink at all. As I keep saying: I have no problem with inferring design. I do have a huge problem with inferring it solely from a CSI estimate, and the reason is that IMO the probability calc in a CSI estimate is not the probability that a pattern could have been generated from a non-intentional source. In other words, I think it’s wrong.

    Again, I do not believe you are lying to me or anyone else. I believe your faith commitments are so strong they do not allow you to see the obvious.

    No, it’s because I have decent training in inferential statistics.

    Now, to suggest that Bill Dembski believes that intelligence can be non-intentional, I do believe you are being dishonest in the normal sense of the word about that.

    Except that I didn’t suggest it. In fact, I made it clear that I think he almost certainly does not believe it. Repeatedly.

  35.
    Elizabeth Liddle says:

    There is 🙂

    Let me link again to Denis Noble, he’s really worth listening to:

    http://videolectures.net/eccs07_noble_psb/

    Honestly, you should all watch it! It’s the antidote to The Selfish Gene 🙂

  36.
    Barry Arrington says:

    Barry: “Now, to suggest that Bill Dembski believes that intelligence can be non-intentional, I do believe you are being dishonest in the normal sense of the word about that.”

    Dr. Liddle: “Except that I didn’t suggest it. In fact, I made it clear that I think he almost certainly does not believe it. Repeatedly”

    Barry: No, you repeatedly suggest just the opposite when you say that a fair reading of Dembski’s definition of intelligence excludes intentionality. No, it cannot. To suggest, as you repeatedly do, that a wire mesh filter process “chooses” and an intelligent agent “chooses” and therefore “chooses” means the same thing in both contexts is a gross misuse of language.

  38.
    Ilion says:

    Not according to the Darwinistic Received Wisdom we are all “taught” in school.

  39.
    Elizabeth Liddle says:

    No, Barry, it is not.

    And you have just moved the goalposts.

    You accused me of suggesting that “Bill Dembski believes that intelligence can be non-intentional”. I have not suggested that. I’m sure he does not believe that intelligence can be non-intentional. I have made that clear.

    You need to retract the allegation that I said, or suggested any such thing. I did not. Therefore I did not act dishonestly. So you need to retract that charge.

    Now you accuse me of “gross misuse of language”. Which is a rather different charge.

    And I deny it because you have misunderstood, despite my repeated efforts to make it clear what I am actually saying.

    Dembski gives an operational definition, as a good scientist should. Using that operational definition I show (or I claim to show) that Dembski is correct – that CSI is generated by systems that conform to that definition. However, those systems include evolutionary systems – they generate CSI, AND they conform to the definition.

    I am not saying, oh look Dembski forgot to mention intention, let’s equivocate and claim that evolutionary systems can create CSI!

    I am saying: look here is how evolutionary systems create CSI, and, look, it’s just the same way as the rat in the maze does it, and look again, this is because Dembski has found something interesting – his definition nicely includes our intentional rat, but also includes our non-intentional evolutionary systems. In other words, it is not the intentional aspect of intelligence that is sufficient and necessary for CSI but the aspect of it that facilitates “informed choice” – not random selection, but selection informed by success, which is exactly what natural selection is.
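    What “selection informed by success” means can be sketched with the classic cumulative-selection toy (all parameters, including the target string and mutation rate, are arbitrary; this illustrates the inference pattern, not biology):

```python
import random

random.seed(1)  # fixed seed for reproducibility
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Copy the string, occasionally substituting a random character
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

def score(s):
    # "Success" = number of positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while score(current) < len(TARGET) and generations < 2000:
    pool = [current] + [mutate(current) for _ in range(100)]
    current = max(pool, key=score)  # selection informed by success, no foresight
    generations += 1

print(score(current), "of", len(TARGET), "characters matched")
```

Blind sampling of 28-character strings would essentially never hit the target; selection by success converges in a modest number of generations.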

    Dembski, having shown how choosing systems generate CSI, fails to notice that the choosing system that generates CSI doesn’t have to be intentional; that a similar choosing system that also fits his description despite being non-intentional also generates CSI.

    He’s correct, but for the wrong reason, just as Montgolfier successfully launched his balloon, but thought that it was the smoke doing the lifting, not the hot air.

    Dembski has produced a definition of a CSI generator that works, but it turns out that it’s not intention that does the lifting but informed choice which can result from any iterative feedback system.

    Feel free to disagree.

    But please engage with the argument, rather than impugn my intellectual (and, in this case, moral) integrity.

    Thanks.

  40.
    Barry Arrington says:

    Dr. Liddle, I am mulling your 12.1. Before I answer that, please tell me how the following statements can both be true:

    On August 12, 2011 Dr. Liddle wrote: “If you want to figure out whether at [sic] thing is designed or not, or, even, what the signature of design is, you have to ask something about the design process.”

    On August 15, 2011 Dr. Liddle wrote: “Meleager, if you have been following my posts, you will know that at no time have I said that design detection is not “valid”, in principle or that intelligent design cannot be detected without knowledge of the particulars of the designing agency.”

  41.
    Elizabeth Liddle says:

    Barry:

    Dr. Liddle, I am mulling your 12.1. Before I answer that, please tell me how the following statements can both be true:

    OK

    On August 12, 2011 Dr. Liddle wrote: “If you want to figure out whether at [sic] thing is designed or not, or, even, what the signature of design is, you have to ask something about the design process.”

    Let me rephrase in an attempt at greater clarity:

    If you want to know whether an item is an artifact or not, then you need to ask something about the processes it underwent in order to come to be as it is. You also need to do this if you want to understand what the signature of the products of such processes might be.

    So you ask: is there evidence that it is iterative, for instance, and does it vary? Does it involve trial-by-error learning? Is there evidence of foresight? Of retrofitting? What is the medium of the design? How is it built? Does it show signs of tooling, or shaping or polishing? Could it be a signal that forms part of a feedback process? Could it be a signal from an intelligent source to another? Does it appear to mimic another object? If so, in what way?

    All these things will inform your conclusion, which, like all scientific conclusions, will be provisional, and subject to change should more data become available that adjusts your priors on any unknown.

    Now for my (somewhat irritable, for which I apologise) response to Meleager:

    “Meleager, if you have been following my posts, you will know that at no time have I said that design [either sense] detection is not “valid”, in principle or that intelligent design cannot be detected without knowledge of the particulars of the designing agency.”

    I meant exactly what I said. No, you don’t need “particulars of the designing agency”. You need to ask, as I said in my first quote, questions about the processes by which the thing came to be the way it is.

    And if, for example, you can see no evidence that the thing is a self-replicator, then that is a strong clue that the thing has an external designer, even if you have no information whatsoever about the nature of the designer.

    In other words there is no (I would argue) simple method for establishing whether something is an artifact or the outcome of a non-intentional process such as evolution, or geology. As with any investigation you collect as much data as you can, think up possible theories, derive hypotheses from them, test them against the data.

    You can’t just do a probability calc and stop.

    F/N Ironically, perhaps, I spent a huge amount of time back in 2004 trying to persuade people that just because there was an “overwhelming” probability that the exit polls gave Kerry a majority, that did not mean that the votes were artifactual – “designed” – by Karl Rove to get Bush back into the White House.

    When you compute a probability you have to be very clear (and many who should be aren’t) about what your probability is a probability of. You are a lawyer, Barry – did you read the case of Sir Roy Meadow, whose expert evidence resulted in several convictions for infanticide, later quashed because he mistook the “overwhelming probability” that there was a familial factor when several cot deaths occurred in a single family for the “overwhelming probability” that the babies had been intentionally killed?

    To come to a confident conclusion, you need as much data as you can get, and testing your models is an iterative process, not something you do on day one and then leave set in stone for all time.

  42.
    Barry Arrington says:

    Dr. Liddle, now I am really confused. Today you say: “Meleager, if you have been following my posts, you will know that at no time have I said that design [either sense] detection is not “valid”, in principle or that intelligent design cannot be detected without knowledge of the particulars of the designing agency.”

    And then you reiterated for my benefit: “No, you don’t need ‘particulars of the designing agency.’”

    But on August 8 you wrote: “What I am asking is how, in the absense of ANY information about the designer you would spot that the a string of nucleotides contained a name? In other words, take that string of nucleotides with Craig Venter’s name in it, and say how you would distinguish it from any other randomly generated string without benefit of any knowledge regarding the designer. And it’s actually completely on point wrt the OP, as it demonstrates just how a design inference made on the basis of non-functional code depends on at least a reasonably detailed hypothesis concerning attributes of the designer. To put the problem more generally: how would you distinguish between a randomly generated string and one with a coded message without knowing anything about the sender of the message?”

    I am really trying to understand how the first two statements can be reconciled with the third.

  43.
    kairosfocus says:

    Onlookers, notice something in the likelihood ratio expression:

    P(T2|E)/ P(T1|E) = LAMBDA

    = {p(E|T2)/ p(E|T1)} * {P(T2)/P(T1)}

    See that direct ratio of probabilities of the theories T1, T2, and that in an essentially subjectivist context of perceiving degrees of probability?

    What happens if we assign P(T2) = 0, along the lines of Lewontin’s a priorism and claimed “self-evidence”?

    Do you see how the LHS is forced to zero, on essentially a begged question?
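    Numerically (the likelihood values here are illustrative; the only point is the effect of the zero prior):

```python
def posterior_odds(p_e_t2, p_e_t1, prior_t2, prior_t1):
    # Posterior odds = likelihood ratio * prior odds
    return (p_e_t2 / p_e_t1) * (prior_t2 / prior_t1)

# Even with evidence favouring T2 by a factor of a million...
odds = posterior_odds(1e-3, 1e-9, prior_t2=0.0, prior_t1=1.0)
print(odds)  # 0.0 -- a zero prior on T2 forces the posterior odds to zero
```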

    Do you see why a search of a config space of possibilities approach is better able to force out into the open the issues at stake?

    A space of possibilities, W — observable and/or calculable

    An observed case from that space E, plainly observed.

    A definable zone T that is narrow relative to the space W — reasonably identifiable and observable.

    Then, let us go for Chi_500 = I*S – 500, bits beyond the solar system PTQS (Planck-time quantum state) threshold.

    I — calculable on observation, or inferred by inspection as Shannon did in some cases.

    S — essentially looks at whether E comes from a zone T in W.

    That’s it.
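    On my reading of the metric just described, a minimal sketch (I and S as defined above; the 500-bit threshold as given):

```python
def chi_500(i_bits, s):
    # I = information in bits; S = 1 if E lies in a specified zone T, else 0
    return i_bits * s - 500

print(chi_500(1000, 1))  # 500: specified and beyond the threshold
print(chi_500(1000, 0))  # -500: not specified, no design inference
```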

    Cf cases here.

    In effect, once we are dealing with something whose complexity requires 500 bits’ worth of information, we are looking at the solar system’s resources being able to sample the equivalent of 1 straw from a cubical bale 1 light-month across [light travels about 1/5 mile per microsecond].

    Any reasonably blind search of that scope in that size of possibilities, is going to overwhelmingly be likely to be straw, not needle. In fact you could have our solar system out to Pluto in that bale and be utterly unlikely to hit on it by chance.

    That is the basis for inferring design on seeing CSI or better yet FSCI, especially digitally coded info. Intelligence routinely produces it; chance and necessity in a highly contingent situation will be utterly unlikely to.

    Which is exactly what is sitting in the nucleus of the living cell. (I forget, Dr Liddle has disputed the identification of this as digital code. It is: 4-state elements, with 2 bits of potential storage per symbol. Codon tables are all over the net.)

    GEM of TKI

  45.
    Elizabeth Liddle says:

    Barry:

    I am really trying to understand how the first two statements can be reconciled with the third.

    And I appreciate the effort 🙂 But of course it wasn’t a statement, it was a question:

    But on August 8 you wrote: “What I am asking is how, in the absense of ANY information about the designer you would spot that the a string of nucleotides contained a name? In other words, take that string of nucleotides with Craig Venter’s name in it, and say how you would distinguish it from any other randomly generated string without benefit of any knowledge regarding the designer. And it’s actually completely on point wrt the OP, as it demonstrates just how a design inference made on the basis of non-functional code depends on at least a reasonably detailed hypothesis concerning attributes of the designer. To put the problem more generally: how would you distinguish between a randomly generated string and one with a coded message without knowing anything about the sender of the message?”

    Right. Now here we have a very specific puzzle. We have a sequence of nucleotides that doesn’t appear to do anything (presumably): we see whether it makes any difference to the organism if we remove it. It doesn’t. It seems completely inert. So we have no specification in advance, we just have a string of nucleotides that we can express as a string of codons.

    And we want to know whether that string, now a string of symbols for codons, is just a bit of “junk” – a happenstance sequence of non-coding nucleotides – or contains information.

    We have no knowledge of human designers, we know nothing about their language, or even if they have one, their writing system, if they had one – no clue.

    Just a hunch, maybe, that there might be a coded something in the string.

    My question was: how would you set about finding out?

    I do not know of a method in this particular case. Do you?

  46.
    Elizabeth Liddle says:

    Just to make clear, in case this is the difficulty: when I said you did not need to know the identity or characteristics of the designer, I meant that it is perfectly possible, in many cases, to make an inference without those details. It’s not a universally necessary requirement. But in the case above, I cannot see any way of doing it without at least some information about Craig Venter.

  47.
    Elizabeth Liddle says:

    True. But science moves on.

  48.
    Barry Arrington says:

    Yes, actually. I do.

    See: http://www.wired.com/wiredscie.....-institut/

  49.
    William Dembski says:

    Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH.

    She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it).

    Liddle makes out that CSI is a fuzzy concept when her notion of prior probabilities when applied to design inferences is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That’s never the case in these discussions, where the evidence of design is purely circumstantial.
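    The medical-test case granted here as legitimate works like this (all numbers illustrative):

```python
# Priors established by direct empirical observation: Bayes' theorem
# applied to a screening test with a known disease incidence
prevalence = 0.01       # prior: 1% incidence of the disease
sensitivity = 0.99      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease = sensitivity * prevalence / p_positive  # P(disease | positive)
print(round(p_disease, 3))  # about 0.167: most positives are still false alarms
```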

    I refer readers to two articles of mine relevant to this thread:

    (1) “Design by Elimination vs. Design by Comparison” (available at http://www.designinference.com....._Bayes.pdf), in which I clearly spell out how the Bayesian approach to design inferences is parasitic on my generalization of Fisher to CSI.

    (2) “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” which answers Liddle’s vain hope that RV+NS can serve as a designer substitute. This article is available at http://evoinfo.org/publication.....ation-law/ and is also a chapter in THE NATURE OF NATURE.

    Closing thought: If Bayes were such a boon to design inferences, then why don’t we see more of them? When people in real life infer design on the basis of a small-probability event (and such events do regularly trigger design inferences), why don’t they factor in the priors? Is it that they’re just not properly educated in the logic of Bayes? Or perhaps it’s that estimating priors in such circumstances is simply an exercise in handwaving. In any case, if design in biology is real, then Bayes should long ago have uncovered it. The fact that it has not and that it is regularly used to insulate Darwinian evolution from probabilistic critique (Elliott Sober is the master of this) suggests that more objective probabilistic methods are called for — such as CSI.

  50.
    Mung says:

    Elizabeth Liddle:

    And so, faced with a pattern that has CSI (and not all designed things do), the first thing I’d ask is: does it replicate with heritable variance in the ability to replicate? If so, I have a candidate for CSI generation right there.

    Like humans?

    If not, I look for an external designer.

    Oh wait. Humans are external designers.

  51.
    markf says:

    I expect Lizzie will respond but I must add a comment because this is so misleading. I agree that the Bayes/Fisher distinction is different from the Subjective/Frequentist distinction.  As you say you can adopt a Fisherian approach and a wide range of views of probability.  Similarly you can be a Bayesian frequentist or whatever.

    But, as I am sure you know, there are deep conceptual problems with Fisherian hypothesis testing. There are any number of articles pointing this out e.g. The Insignificance of Null Hypothesis Significance Testing. In fact pure Fisherian hypothesis testing is not used overwhelmingly in the scientific community.  For example in a wide range of disciplines such as medical statistics Fisherian approaches would not be allowed and it is required to use something like Neyman/Pearson.  (Try getting a test for a drug through the authorities without calculating the power!). This tries to avoid the deep problems with the Fisherian approach by defining one more alternative hypothesis and comes closer to comparing likelihoods. However, I admit p-values are still, unfortunately, used all too often.
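    A sketch of the power calculation alluded to above, for the simplest case of a possibly biased coin (the alpha level and the alternative hypothesis p = 0.6 are arbitrary choices):

```python
from math import comb

def binom_tail(n, k0, p):
    # P(X >= k0) for X ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k0, n + 1))

n = 100
# Neyman-Pearson setup: fix the rejection region under the null p = 0.5
# so that alpha <= 0.05...
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= 0.05)
# ...then compute power against a specific alternative, here p = 0.6
power = binom_tail(n, k_crit, 0.6)
print(k_crit, round(power, 2))
```

Unlike a bare p-value, this forces you to name an alternative hypothesis before you can say anything about the test’s power.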

    The biggest problem with classical hypothesis testing in all its shades is that it answers the wrong question.  It doesn’t try to work out the probability of the hypothesis given the data.  Bayes answers the right question.  This follows from the maths.  As you say there is a pragmatic problem because it may not be possible to make a reasonable estimate of prior probabilities.  All other methods of hypothesis testing can be considered as heuristics to overcome the difficulty of doing the correct Bayesian calculation. But it is bizarre to respond to not being able to answer a question with certainty by answering a different question instead!  Especially when dealing with philosophically challenging issues such as the development of life and alien civilisations. What we should do is recognise the uncertainty in our answer and limit it as best we can. 

    Needless to say, I think your article Design by Elimination vs. Design by Comparison is also mistaken but that takes rather more space to address.

  52.
    markf says:

    I find that I already wrote a discussion of the paper “Design by Elimination vs. Design by Comparison” back in 2006 as part of this. The discussion comes at the end, and as it is not so very long I have repeated it here.

    So far we have established that the use of specifications to reject chance hypotheses has some problems of interpretation and has no justification, while comparing likelihoods seems to account for our intuitions and is justified. Dembski is well aware of the likelihood approach and has tried to refute it by raising a number of objections elsewhere, notably in chapter 33 of his book “The Design Revolution” which is reproduced on his web site (Dembski 2005b). But there is one objection that he raises which he considers the most damning of all and which he repeats virtually word for word in the more recent paper. He believes that the approach of comparing likelihoods presupposes his own account of specification.

    He illustrates his objection with another well worn example in this debate — the case of the New Jersey election commissioner Nicholas Caputo who is accused of rigging ballot lines. It was Caputo’s task to decide which candidate comes first on a ballot paper in an election and he is meant to do this without bias towards one party or another. Dembski does not have the actual data but assumes a hypothetical example where the party of the first candidate on the ballot paper follows this pattern for 41 consecutive elections (where D is democrat and R is republican)

    DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

    This clearly conforms to a pattern that is very improbable under the hypothesis that Caputo was equally likely to make a Republican or a Democrat the first candidate. In fact it conforms to a number of such patterns for 41 elections, for example:

    There is only one Republican as first candidate.
    One party is only represented once.
    There are two or fewer Republicans.
    There is just one Republican and it is between the 15th and 30th election.
    Includes 40 or more Democrats.
    And so on.

    Dembski has decided that the relevant pattern is the last one. (This is interesting in itself as it is a single-tailed test and assumes the hypothesis that Caputo was biased towards Democrats. Another alternative might simply have been that Caputo was biased — direction unknown — in which case the pattern should have been “one party is represented at least 40 times”). His argument is that when comparing the likelihoods of two hypotheses (Caputo was biased towards Democrats or Caputo was unbiased) generating this sequence, we would not compare the probability of the two hypotheses generating this specific event but the probability of the two hypotheses generating an event which conforms to the pattern. And we have to use his concept of a specification to know what the pattern is. But this just isn’t true. We can justify the choice of pattern simply by saying “this is a set of outcomes which are more probable under the alternative hypothesis (Caputo is biased towards Democrats) than under the hypothesis that Caputo is unbiased”. There is no reference to specification or even patterns in this statement.
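    The likelihood comparison being described can be made concrete (the bias value p = 0.95 is an arbitrary illustration of a pro-Democrat hypothesis):

```python
from math import comb

n = 41
# Likelihood of "40 or more Democrats first" under the fair hypothesis (p = 0.5)
p_fair = sum(comb(n, k) for k in (40, 41)) / 2**n

# Likelihood of the same pattern under a hypothetical pro-Democrat bias, p = 0.95
p = 0.95
p_biased = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in (40, 41))

print(p_fair)             # roughly 2e-11
print(p_biased / p_fair)  # the likelihood ratio favouring the bias hypothesis
```

The pattern is chosen not because it is a “specification” but because it is a set of outcomes far more probable under the bias hypothesis than under the fair one.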

    This is clearer if we consider a different alternative hypothesis. Suppose that instead of suspecting Caputo of favouring one party or another we suspect him of being lazy and simply not changing the order from one election to another — with the occasional exception. The “random” hypothesis remains the same – he selects the party at random each time. The same outcome:

    DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

    counts against the random hypothesis but for a different reason — it has only two changes of party. The string:

    DDDDDDDDDDDDDDDDDDDDDDRRRRRRRRRRRRRRRRRRRR

    would now count even more heavily against the random hypothesis – whereas it would have been no evidence for Caputo being biased.

    So now we have two potential patterns that the outcome matches and could be used against the random hypothesis. How do we decide which one to use? On the basis of the alternative hypothesis that might better explain the outcomes that conform to the pattern.

    The comparison of likelihoods approach is so compelling that Dembski himself inadvertently uses it elsewhere in the same chapter of The Design Revolution. When trying to justify the use of specification he writes “If we can spot an independently given pattern…. in some observed outcome and if possible outcomes matching that pattern are, taken jointly, highly improbable …., then it’s more plausible that some end-directed agent or process produced the outcome by purposefully conforming it to the pattern than that it simply by chance ended up conforming to the pattern.”

  53.
    markf says:

    This should have appeared as a response to comment 14 by William Dembski. I am sorry for any confusion.

  54.
    Elizabeth Liddle says:

    William Dembski:

    Thank you for responding to my posts.
    You write:

    Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH.

    Possibly, but I don’t find the discussion resolved by your more recent writing, I’m afraid.

    She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it).

    Well, no, I don’t so characterise it. Fisherian approaches are, of course, the workhorse of statistical hypothesis testing in science, but increasingly, Bayesian approaches are used to resolve questions as to which, of several hypotheses, all giving good fits to the data, is most likely to be true. The reason for this is, of course, that Bayesian methods pose the question: “what is the most likely explanation for these data?”, which is the question we actually want to know the answer to, whereas Fisherian approaches ask the highly counter-intuitive (and widely misunderstood) question: “given the null, how likely are the data?” That’s fine-ish for very circumscribed null hypotheses, for example the null hypothesis that two samples of data are drawn from the same, well-defined population, or that the number of correct responses on a multiple-choice test is no more than you’d expect under the null hypothesis that the candidate is guessing. But it’s absolutely useless for addressing the question, “what is the most likely explanation for these data?”. And the reason it’s useless becomes obvious as soon as you start to consider alternative theories. We can use Fisherian null hypothesis testing to conclude that a candidate actually knows the answers as long as we can be sure that that is the only alternative to the null hypothesis that she is guessing. But there are other possibilities – that she was given the answer numbers in advance and memorised them, or that she is copying them from another candidate. In the Fisherian model, the probability of these alternatives is set, a priori, at zero, hence the “illusion of theory confirmation”, cited in that Gill paper linked by MarkF.

    And when it comes to ID, we don’t have a clearly specified null. Is it that biological organisms just happen to assemble themselves from atoms that just happened to find themselves in close enough proximity to bind into the observed molecules with the observed conformations? We don’t even have to do a CSI calculation to reject that null. The entire science of evolutionary theory assumes the rejection of that null and searches for alternate hypotheses. So what we want to know is, of the various postulated mechanisms by which biological organisms might have come about, which is the most likely? Which is why the Bayesian approach is the right approach. But that takes more than just math. The beauty, and also the downside, of a Bayesian approach is that it tells you how probable an explanation is, given what you know. It isn’t an absolute value, like the p value you extract from a Fisherian hypothesis test, which, although appealingly precise, is illusory, because it hides the zero probability you have implicitly assigned to anything other than your research hypothesis.

    As for Barry’s example (actually I think you have priority on it :)) of a radio signal consisting of prime numbers in binary code: in order to reject a non-intelligent source as an explanation, using Fisherian hypothesis testing, you’d be asking: “how likely is the signal, given the null hypothesis that it comes from a non-intelligent source?” How do you even start to compute your null? We don’t even know what the population of non-intelligent radio sources is, and we have no clue as to what the distribution of signal patterns would be. To be honest, if an extraterrestrial signal had any kind of unpredictable radio pattern, I’d start to wonder: “aliens?” because the only kind of non-intelligent extra-terrestrial radio signal I’m aware of is that from pulsars, and if the signal were anything other than something likely to be generated by simple harmonic motion, I’d prick up my ears. But that might be my ignorance.

    Which is precisely the point: Bayesian probabilities tell you what is likely given what you know. They drive you to find out more. Your Fisherian approach, in contrast, leads you to assume (fallaciously, IMO) that you have a valid conclusion (a significant p value), and regard any further research into the nature of your hypothesised Designer as extraneous to the project.

    Liddle makes out that CSI is a fuzzy concept when her notion of prior probabilities when applied to design inferences is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That’s never the case in these discussions, where the evidence of design is purely circumstantial.

    Well, no, I didn’t say that CSI was a “fuzzy concept”, or, if I inadvertently implied that I thought so, I must clarify that I do not. My problem with it is that it is too precise. Or rather, that it is precise but not accurate, and the trouble with measures that are precise but not accurate, is that it is tempting to mistake the precision for accuracy. The lack of accuracy arises from a priori setting of the probability of any alternative hypothesis to zero. That’s as much a prior as any Bayesian prior, but its precision (exactly, and permanently, zero) is unwarranted, and, indeed, renders your whole ID inference circular. Of course Bayesian inferences are fuzzy. But better a fuzzy hit than a precise miss! There’s a reason why sawn-off shotguns are the weapon of choice for certain purposes 🙂

    I refer readers to two articles of mine relevant to this thread:
    (1) “Design by Elimination vs. Design by Comparison” (available at http://www.designinference.com….._Bayes.pdf), in which I clearly spell out how the Bayesian approach to design inferences is parasitic on my generalization of Fisher to CSI.

    Thank you. Yes, I had already read this paper. I have two problems with it. Firstly, you seem to set out to compare the two approaches as though they were approaches to answering the same problem. They aren’t. They address different problems, and which you use depends on what problem you want to solve. If you want to know which explanation is the most likely explanation for your data, you ask a Bayesian question. If you want to know whether your data are unlikely under some null hypothesis you ask a Fisherian question. This isn’t Cavaliers versus Roundheads; it’s hammers versus screwdrivers. I use both approaches in my work on a daily basis, and which I use depends very simply on the question I am asking. To give a practical example: if I want to know whether the brain activation induced by a task is different in different groups of participants, I ask a simple Fisherian question: if these participants are drawn from the same population (i.e. are not different on this measure), how likely would I be to observe the observed differences? And I get a nice low Fisherian p value, telling me it is very unlikely. However, if I now want to know: “what is the most likely explanation for the observed differences in activation between these two groups of participants?” then I have a Bayesian question, and so I use Bayesian Model Selection to tell me which of a set of possible models is the most likely explanation for each group’s data. In fact, my classic response to anyone who comes up to me with a stats question in relation to data analysis is: “what question are you asking?” Once they know that, they know the right test.

    But my second problem with this paper is your repeated reference to “the chance hypothesis”. That’s where the bodies are buried. “Chance” isn’t an explanation, and so it isn’t a hypothesis. When we test whether a coin is fair, and we reject the hypothesis that it is, we are not rejecting “chance” as an explanation for our data, we are rejecting the highly specific hypothesis “that the coin is fair”. We can reject it because we know the probability distribution for the tosses under that null hypothesis. We refer to “chance” simply because in order to infer from our sample to the population, we randomly sample. So when we say, loosely, that the data we observed are unlikely to have occurred “by chance”, we don’t mean that “chance” is unlikely to be the explanation for the data. What we mean is that if we randomly sampled from the distribution that we’d expect under our null, we would be unlikely to observe the data sample we want to test.
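    [Ed.: Lizzie’s point that “by chance” means “under random sampling from a specified null distribution” can be shown with a small simulation — again a toy sketch of my own, with invented numbers.]

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def heads_under_null(n):
    """One random sample of n tosses under the null 'the coin is fair'."""
    return sum(random.random() < 0.5 for _ in range(n))

observed = 78  # invented observation: 78 heads out of 100

# Draw many random samples from the null distribution and count how
# often a random sample looks at least as extreme as the observed one.
draws = [heads_under_null(100) for _ in range(20_000)]
empirical_p = sum(h >= observed for h in draws) / len(draws)

# The rarity belongs to the random-sample-under-the-null, not to
# "chance" as an explanation of anything.
print(empirical_p)  # essentially zero: such samples practically never occur
```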

    So when you say: “When the Bayesian approach tries to adjudicate between chance and design hypotheses…”, that makes no sense either, because chance is not an explanation. Just as a Fisherian approach does not attempt to determine whether data are unlikely to be “due to chance”, but rather whether, under some null hypothesis, a random sample would resemble our data sample, so the Bayesian approach does not try to adjudicate between any hypothesis and “chance”. What it does is adjudicate between two proposed explanations for your sample of data.

    And that is what is wrong with CSI – not that it is Fisherian, or not-Bayesian, but that the null hypothesis is not specified, and so there is no way of calculating the probability distribution under the null (as there is for the null hypothesis that the coin is fair).

    (2) “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” which answers Liddle’s vain hope that RV+NS can serve as a designer substitute. This article is available at http://evoinfo.org/publication…..ation-law/ and is also a chapter in THE NATURE OF NATURE.

    Again, thank you for the link, although again, I have read it. I won’t address it here, although I consider it flawed. However, it certainly poses a more coherent question IMO.

    Closing thought: If Bayes were such a boon to design inferences, then why don’t we see more of them? When people in real life infer design on the basis of a small probability event (and such events do regularly trigger design inferences), why don’t they factor in the priors? Is it that they’re just not properly educated in the logic of Bayes? Or perhaps it’s that estimating priors in such circumstances is simply an exercise in handwaving. In any case, if design in biology is real, then Bayes should long ago have uncovered it. The fact that it has not, and that it is regularly used to insulate Darwinian evolution from probabilistic critique (Elliott Sober is the master of this), suggests that more objective probabilistic methods are called for — such as CSI.

    IANAL, but I’d have thought that Bayesian inferences were the stuff of design inferences: “did he fall or was he pushed”? Fisherian hypothesis tests won’t help you with that kind of question, because it’s not the question they pose.
    But to address your implication that Bayesian methods are “less objective” than Fisherian methods, as I hope I have made clear above, this is simply not the case. Fisherian methods, unless the null is clearly stated, simply hide their bias and deliver theory confirmation that is illusory.

    So we remain in disagreement 🙂

    Cheers

    Lizzie

  55. 55
    Elizabeth Liddle says:

    I was taught stats by a somewhat eccentric professor who would fail papers if you gave a p value!

    He’d return the paper with red ink all over it,saying “DO NOT DO THIS”. And would withhold a pass mark until you’d deleted it.

    It was effect sizes or nothing. Needless to say I have never published a research paper that does not report a p value! But it did force us to think very hard about what our p values are a probability of 🙂

  56. 56
    lastyearon says:

    Elizabeth, Markf,
    From someone not involved in statistical analysis on a daily basis, thank you for your clear, concise and rigorous rebuttals of the concept of CSI as a marker of intelligent design.

  57. 57
    lastyearon says:

    It’s ironic (but not surprising) that the best collection of evidence of the scientific vacuity of Intelligent Design lies within the posts of a blog dedicated to its promotion.

  58. 58
    Elizabeth Liddle says:

    Barry, first of all, David Wheeler and Tao Tao started from the certain knowledge that the artificial genome was designed, and that it contained a “watermark”!

    Not only that, but they knew it was designed by a human being who spoke a language with a roman alphabet. And not only that, but they knew the alphabetic English shorthand for each codon!

    How can you possibly think this method could tell you whether a string of unknown provenance contained information or not?

    Seriously, my mind boggles!

  59. 59
    kairosfocus says:

    Cheap-shot emptily dismissive talking points. Please, do better than that.

  60. 60
    kairosfocus says:

    Onlookers:

    I am simply noting for record that the main issues raised in support of Bayesianism were addressed from here on at the top of the thread.

    Just reflect on the sound point WmAD made in 14 above, in light of a reduction to likelihoods, in the material linked onward from the above.

    Before I clip, let me cite Dembski:

    Elizabeth Liddle seems stuck with where the discussion over CSI was ten years ago when I published my book NO FREE LUNCH.

    She characterizes the Bayesian approach to probabilistic rationality as though that’s what science universally accepts when in fact the scientific community overwhelmingly adopts a Fisherian approach (which, by the way, is not wedded to a frequentist approach — epistemic probabilities, Popper’s propensities, and Bernoulli’s indifference principle all work quite well with it).

    Liddle makes out that CSI is a fuzzy concept when her notion of prior probabilities when applied to design inferences is nothing but an exercise in fuzzification. What, indeed, is the prior probability of a space alien sending a long sequence of primes to planet earth? In such discussions no precise numbers ever get reasonably assigned to the priors. Bayes works when the priors can themselves be established through direct empirical observation (as with medical tests for which we know the [prior] incidence of the disease in the population). That’s never the case in these discussions, where the evidence of design is purely circumstantial.

    Clipping my own remarks again: having done the algebra, we come to a point where we deduce a ratio of likelihoods that allows relative comparison of the degree of warrant for two theories in light of the evidence, E being observed evidence and T1 or T2 the relevant theories in contest:

    __________

    >> L[E|T2]/ L[E|T1] = LAMBDA

    = {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)} >>
    __________

    Now, here’s the key trick: who assigns P(T1) or P(T2), on what grounds, in a subjective context?

    That is, if I set P(T2) = 0, say [on whatever clever argument — cf. here Lewontinian a priori materialism], then I can stoutly insist that T2 is simply not good enough, i.e., I have a suitably mathematicised excuse for the fallacy of the closed mind in this context where there are no background epidemiology studies to set the values.

    Expanding somewhat on the clip, we can see why an approach based on the fact that most reasonable samples of a large population will represent its typical values, not its atypical values, is quite reasonable:

    ___________

    >> L[E|T2]/ L[E|T1] = LAMBDA

    = {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

    Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1, is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the “assuming the theory” objection, as already noted), times the ratio of the probabilities of the theories being so. [In short if we have relevant information we can move from probabilities of evidence given theories to in effect relative probabilities of theories given evidence, and in light of an agreed underlying model.]

    All of this is fine as a matter of algebra (and onward, calculus) applied to probability, but it confronts us with the issue that we have to find the outright credible real world probabilities of T1, and T2 (or onward, of the underlying model that generates a range of possible parameter values). In some cases we can get that, in others, we cannot; but at least, we have eliminated p[E]. Then, too, what is credible to one may not at all be so to another. This brings us back to the problem of selective hyperskepticism, and possible endless spinning out of — too often specious or irrelevant but distracting — objections [i.e closed minded objectionism].

    Now, by contrast the “elimination” approach rests on the well known, easily observed principle of the valid form of the layman’s “law of averages.” Namely, that in a “sufficiently” and “realistically” large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from “typical” values characteristic of predominant clusters, are very rarely observed. [For instance, if one tosses a “fair” coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message or to be nearly all-heads or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski’s “Law of Chance” tables, here.)]

    Elimination therefore looks at a credible chance hyp and the reasonable distribution across possible outcomes it would give [or more broadly the “space” of possible configurations and the relative frequencies of relevant “clusters” of individual outcomes in it]; something we are often comfortable in doing. Then, we look at the actual observed evidence in hand, and in certain cases — e.g. Caputo — we see it is simply too extreme relative to such a chance hyp, per probabilistic resource exhaustion.

    So the material consequence follows: when we can “simply” specify a cluster of outcomes of interest in a configuration space, and such a space is sufficiently large that a reasonable random search will be maximally unlikely within available probabilistic/ search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [Thus the telling force of Sir Fred Hoyle’s celebrated illustration of the utter improbability of a tornado passing through a junkyard and assembling a 747 by chance. By far and away, most of the accessible configurations of the relevant parts will most emphatically be unflyable. So, if we are in a flyable configuration, that is most likely by intent and intelligently directed action, not chance. ]

    We therefore see why the Fisherian, eliminationist approach makes good sense even though it does not so neatly line up with the algebra and calculus of probability as would a likelihood or full Bayesian type approach. Thence, we see why the Dembski-style explanatory filter can be so effective, too. >>
    ____________
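    [Ed.: For readers checking the algebra in the clip above: in standard notation, the λ expression is the odds form of Bayes’ theorem, with the left-hand side most naturally read as the relative support for the two theories given the evidence, i.e. the posterior odds:]

```latex
\underbrace{\frac{P(T_2 \mid E)}{P(T_1 \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid T_2)}{P(E \mid T_1)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(T_2)}{P(T_1)}}_{\text{prior odds}}
```

    [This is why setting P(T2) = 0 a priori forces the posterior odds to zero regardless of the evidence, which is the closed-mind worry the clip raises.]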

    So, the Bayes rabbit trail is a distraction, a storm in a teacup.

    There is an excellent reason tied to the valid form of the law of averages, why the explanatory filter works.

    And if the why of that is not obvious, think about how the solar-system-scope EF is saying that if at most you can take a straw-sized sample from a cubical hay bale a light month across, even if a great many needles are in it and even if it has our solar system in it out to Pluto, by overwhelming probability you are most likely going to pick up a straw, and nothing else.

    So much so that if someone claims to have picked up a needle at random in such a bale on a single trial — a trial equivalent to a sample of scope equal to the Planck-time quantum states of our solar system’s 10^57 atoms since the usually accepted date of the Big Bang — you would be entitled to disbelieve him.
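    [Ed.: The haystack numbers quoted here can be multiplied out in a few lines. This sketch uses only the order-of-magnitude figures cited in the thread — 10^57 atoms, roughly 10^45 Planck times per second, roughly 10^17 seconds since the Big Bang; whether 2^500 is the relevant target space is, of course, exactly what the thread disputes.]

```python
# Order-of-magnitude figures as quoted in the thread.
atoms = 10**57            # atoms in the solar system
planck_per_sec = 10**45   # Planck-time states per atom per second (approx.)
seconds = 10**17          # rough age of the universe in seconds

# Total Planck-time quantum states available to the solar system:
states = atoms * planck_per_sec * seconds   # 10^119

# Size of the configuration space at the 500-bit threshold:
space_500 = 2**500                          # about 3.3 * 10^150

# Fraction of the space sampled even at one configuration per state:
fraction = states / space_500
print(fraction)  # about 3e-32: a vanishingly small sample of the space
```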

    Some lotteries are not credibly winnable.

    GEM of TKI

  61. 61
    Elizabeth Liddle says:

    kf: I’m not sure whether you read my response to William Dembski, but your post does not address my key point which is that “chance”, in itself, is not a hypothesis.

    The reason it is invoked, in Fisherian statistics, is because it refers to the probability that a random (“chance”) sample from the population with the probability distribution under the null hypothesis would have looked like your observed data.

    The problem with the CSI formulation is that the null hypothesis is not specified, and all alternative hypotheses are given an a priori probability of zero.

    Now, you could argue, and Dembski and Marks do, that the probabilities of the alternative hypotheses are indeed zero, but that is the case that has to be made. You can’t just reject the (unspecified) null, produce a p value together with the cosmological alpha criterion, and say “therefore design”.

    That’s why I keep saying that CSI is useless. It doesn’t pose the question we want to know the answer to.

  62. 62
    junkdnaforlife says:

    “Now, you could argue, and Dembski and Marks do, that the probability of alternative hypotheses are indeed zero, but that is the case that has to be made.”

    Currently no remotely plausible scenario exists, as you yourselves know first hand from the Upright BiPed debacle, so why is setting pr = 0 for the alternative hypothesis not a valid provisional value?

  63. 63
    Elizabeth Liddle says:

    Because it would be assuming the consequent.

  65. 65
    kairosfocus says:

    A few notes:

    By now it is clear that there will be no agreement so I simply note for record so the astute onlooker can see where Dr Liddle goes off the rails, as has been pointed out to her over and over again, but repeatedly brushed aside.

    I expect nothing different this time around.

    I am now getting tired of having to repeat this over and over again, being convinced that Dr Liddle is simply not open to hear this message. She is too locked into the failed paradigm to hear the force of anything that would break it up. Pardon if that sounds harsh, but it is meant to be frank though respectful. Dr Liddle please re-read that incident in Jn 8 again.

    To my mind at present, the resort to Bayesianism is simply yet another confusing distractor, as the real issue is to find islands of function and not so much to try to identify how likelihood ratios can be estimated on prior probability estimates etc. DOWN THAT ROAD LIE ALL SORTS OF CONFUSIONS, DRIVEN BY LEWONTINIAN A PRIORISM WHICH INDEED AHEAD OF ANY EVIDENCE HAS DECIDED THAT DESIGN IS VERBOTEN, MOST STRICTLY VERBOTEN.

    So yes my point on closed minded refusal to even entertain design as a possibility is all too patently relevant. Prof Lewontin put this on the table for all to see, and the US NAs etc show that this is not just one or a few, it is the system.

    Mutabaruka drumming in the head: De System, de system, de system is a FRAUD . . .

    Never mind the other dust-ups on deciding how to put numbers into the parameters identified under the relevant circumstances. The algebra is nice and pretty, the application is not.

    I come from the old school of having to deal with less than neat and sweet realities, so I am not overly impressed by pretty math exercises that run into problems on getting down to the real world. Just like in management, you will see pretty academic exercises on net present value, only to run into the real world of practical finance where Internal Rate of Return is often king, because real world managers will have less of a dust up over comparing rates of interest equivalent to a project, never mind the potential pathologies in the math.

    The mathematically neater may not be the better on the ground, save where the pathologies are real. Let’s keep things simple, and reserve heavy artillery that is ever so hard to set up for when it is necessary.

    So, I comment:

    a: The null hyp in testing under Fisherian conditions exists in the context of an alternative. This is somewhat related to what is happening with the EF where high contingency is to be explained.

    b: For the Explanatory filter, the first default hyp is actually necessity, leading to low contingency under similar starting conditions, e.g. the dropped heavy object falls. This has been discussed with Dr L before, fruitlessly so I am simply noting for record.

    c: Where there is high contingency under initial conditions, then chance and/or agency may be involved, e.g. what side of a die comes up when it is dropped.

    d: the second default is chance, i.e. a sample of the config space that is random in some relevant sense. And chance can be seen as relevant in the same sense that it drives dice: it comes up in the inescapable noise in real-world measurements, it disturbs telecomms systems, and it is known to appear in chemical reactions where the particular outcome is a statistical pattern; cf. JM’s current discussion on pre-life chemistry and what happens in the real world outside the neatly controlled lab. It even drives time’s arrow, the second law of thermodynamics. So, pardon me, it may not be neat and sweet, but it is reality. Ever watched “grass” on a good old D52 CRO screen? Having done that many a day, please don’t try to tell me noise and chance are not neat, easily digested, cut and dry hypotheses. They are observable reality, as close as what temperature means: a measure of molecular chaos, and as significant as the resulting noise in telecomms systems, or the aging of a component vulnerable to activation processes.

    e: Where we have islands of function or specified zones in large config spaces, the logical thing is to ask whether there is something that pushes you to those zones; otherwise you have no good reason to infer that they will be reached. Absent a chemist and considerable manipulation, i.e. design, we have no reason to think a warm little pond etc. is going to be pushed in the desired directions. See JM’s remarks here. Watch proteins-first AND RNA-first disintegrate. The tree of life at best would have a root deep in a config space well beyond the search capacity of our observed cosmos. That molecular nanotech lab several generations beyond Venter is looking better and better as a hyp.

    f: Now, in evolutionary models ever since Darwin, the discussion is on hill climbing by differential reproductive success, but that presumes being ON an island of function already. Big begged question, for OOL and for body plans.

    g: Until you are on an island of function hill climbing adaptation mechanisms cannot kick in, so we see the issue already highlighted: origin of life and/or of major body plans requiring complex functional info.

    h: From observations it is credible that first cell-based life required 100+ k bits of genetic info, and new body plans required 10+ m bits. WENT OVER THAT ALREADY, DID YOU TAKE NOTE?

    i: These are so far beyond 500 – 1,000 bits to come from lucky noise that it is a no-brainer that we are looking at huge haystacks and tiny relative samples, on the gamut of accessible resources of our solar system or observed cosmos.

    j: And that is exactly where the EF comes in. Once we have quantified the PTQS resources of our solar system and observed cosmos, it is reasonable to ask whether special and unrepresentative zones can reasonably be hit on by the known molecular level chaotic forces at work in warm little ponds for OOL or in triggering accidents in the chemistry of the living cell.

    k: The wall of 500 to 1,000 bits as the upper limit of our cosmic resources jumps out at us. Adaptation within an island of function is possible, but getting to such deeply isolated islands of function is not credible on the gamut of our observed universe, much less our solar system.

    l: That is why the test of generating coherent text in English by chance is so crucial a test, one that evo mat advocates despise because they know the message. Spaces of 10^50 possibilities are searchable just barely, but spaces of 10^150 are patently not.

    m: Funny, when I was a kid, the monkeys at keyboards example was often used to persuade us that any config would eventually appear at random, but over the past few years that argument has vanished. No prizes for guessing why.

    n: Monkeys at keyboards is a dead icon of evo, and one that the evo mat advocates hope would vanish into the memory hole. Turns out the monkeys have switched sides, and are now an icon of Design, along with the burning match and spinning flagellum, the walking kinesin molecule [talk about Imperial AT AT walker tanks!] pulling a vesicle along a microtubule highway, the ATP Synthase rotary motor enzyme etc etc!

    o: Remember, the 500-bit threshold is equivalent to having a cubical haystack 1 light month across, and picking ONE straw-sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference; overwhelmingly, you are going to end up with straw.

    p: the search resources to get to OOL and onward to major body plans just are not there without programming or other intelligent direction, period.

    q: So, it is no surprise that when we actually test empirically — another point where the rhetoric on side issues obfuscates the actual real-world result — we keep on seeing that FSCI is indeed an excellent sign of design. The text of this post is sufficient to show that what chance could not reasonably do, intelligence does routinely and quickly.

    s: So blatant is this that the only explanation for why a plainly failed model prevails is that it has us in ideological captivity, to C19 positivism and its descendants.

    Okay, am I clear enough now.

    (And I won’t even bother to more than note that MF has long ago decided to studiously ignore instead of cogently address. That speaks volumes on what he is doing here at UD, and I have not forgotten where the one who threatened my family got his start before he ran totally out of control.)

    Okay, it should be plain enough onlookers.

    GEM of TKI

  66. 66
    kairosfocus says:

    I trust it should be plain — yet again — that CSI is most definitely not “useless.” Good night

  67. 67
    Upright BiPed says:

    Debacle?

  68. 68
    Timbo says:

    Nope. Not plain to me.

  69. 69
    junkdnaforlife says:

    So a hypothesis with no demonstrable plausibility should have a pr > 0?

  70. 70
    Elizabeth Liddle says:

    kairosfocus: Thank you for this summary of your argument. I appreciate that you consider my failure to appreciate it must be due to recalcitrance on my part. I do not believe it is, but having been in the equivalent position myself, I understand how it looks. So let me have a go at dealing with it point-by-point, which your point-by-point layout facilitates nicely:

    So, I comment:

    a: The null hyp in testing under Fisherian conditions exists in the context of an alternative. This is somewhat related to what is happening with the EF where high contingency is to be explained.

    Certainly null hypothesis testing is the testing of an alternative to the null.

    b: For the Explanatory filter, the first default hyp is actually necessity, leading to low contingency under similar starting conditions, e.g. the dropped heavy object falls. This has been discussed with Dr L before, fruitlessly so I am simply noting for record.

    The EF is interesting, kf, in that it has two successive hypothesis testing phases. I was in fact referring to Dembski’s later integrated formulation.

    But let’s say that the first hypothesis to be tested is: do biological organisms come about by the direct, one-stage action of a natural law?

    Well, it certainly doesn’t look like it, and no-one is claiming that, so, no! I think we can reject that without recourse to a probability distribution at all. Although some OOL researchers think that eventually life will be observed forming in a lab, nobody thinks it will be a non-contingent process – it will be contingent on a large number of variables that the scientists have yet to discover and tune.

    c: Where there is high contingency under initial conditions, then chance and/or agency may be involved, e.g. what side of a die comes up when it is dropped.

    Right – so stochastic processes are likely to be important, and whether life emerges from given initial conditions will have a probability distribution.

    d: the second default is chance, i.e. a sample of the config space that is random in some relevant sense. And chance can be seen as relevant in the same sense that it drives dice: it comes up in the inescapable noise in real-world measurements, it disturbs telecomms systems, and it is known to appear in chemical reactions where the particular outcome is a statistical pattern; cf. JM’s current discussion on pre-life chemistry and what happens in the real world outside the neatly controlled lab. It even drives time’s arrow, the second law of thermodynamics. So, pardon me, it may not be neat and sweet, but it is reality. Ever watched “grass” on a good old D52 CRO screen? Having done that many a day, please don’t try to tell me noise and chance are not neat, easily digested, cut and dry hypotheses. They are observable reality, as close as what temperature means: a measure of molecular chaos, and as significant as the resulting noise in telecomms systems, or the aging of a component vulnerable to activation processes.

    Well, here is where we disagree, kf. No, I don’t think “chance” is a “cut and dry hypothesis”. Or rather, if you want to present “chance” as the null, you need to actually compute the pdf of the chance hypothesis you want to model. For example, if you were modeling the “chance” hypothesis of a coin, you’d need the right pdf. The “chance” hypothesis for a fair coin would have a mean of .5. The “chance” hypothesis for a biased coin might have a mean of .6. Both could be “chance” hypotheses for different questions; for example, you might want to test the hypothesis that instead of your usual biased coin with a bias of .6 for tails, you’d accidentally picked up some fellow shyster’s coin with a bias of .6 for heads. That was the point I was trying to make – “chance”, in itself, is not a hypothesis. The pdf of the null has to be specified in order to make Fisher testing work, and that’s what I’m not seeing in EF or CSI calculations – any derivation of that pdf.
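    [Ed.: The two-coins example can be written out directly — a toy sketch of my own, with invented toss counts. Each “chance” hypothesis has a fully specified pdf, so each assigns the data a definite likelihood, and the data can adjudicate between them.]

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n tosses with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two fully specified hypotheses -- neither is just "chance":
# H1: my usual coin, biased .6 toward tails, so P(heads) = 0.4
# H2: the fellow shyster's coin, biased .6 toward heads, so P(heads) = 0.6
n, heads = 50, 32  # invented data

like_h1 = binom_pmf(heads, n, 0.4)
like_h2 = binom_pmf(heads, n, 0.6)

# Because each hypothesis has a definite pdf, the likelihood ratio is
# well defined and tells us which coin we probably picked up.
print(like_h2 / like_h1)  # far above 1: probably the shyster's coin
```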

    e: Where we have islands of function or specified zones in large config spaces, the logical thing is to ask whether there is something that pushes you to those zones; otherwise you have no good reason to infer that they will be reached. Absent a chemist and considerable manipulation, i.e. design, we have no reason to think a warm little pond etc. is going to be pushed in the desired directions. See JM’s remarks here. Watch proteins-first AND RNA-first disintegrate. The tree of life at best would have a root deep in a config space well beyond the search capacity of our observed cosmos. That molecular nanotech lab several generations beyond Venter is looking better and better as a hyp.

    Well, that is a separate issue, of course. Different people will have different priors regarding the plausibility of OOL hypotheses. As for the “islands of function” – perhaps there are – essentially an “island of function” is an “irreducibly complex” function, right? And I would seriously contest the claim that there are any IC functions in living things, Behe notwithstanding. As for “pushed in the desired direction” – the whole point of Darwin’s theory is that there is no “desired direction” and no “push”. Indeed, I’d say that “pull” is a better metaphor than “push” – populations roll into attractor basins posed by the environment, where anything that inhibits reproduction is minimised and anything that facilitates it is maximised. And as the environment changes, as it must, the population rolls with it, unless it gets stuck in a local minimum. “Hill climbing” is another metaphor, but it does tend to imply a higher energy state. But it’s essentially the same metaphor.

    f: Now, in evolutionary models ever since Darwin, the discussion is on hill climbing by differential reproductive success, but that presumes being ON an island of function already. Big begged question, for OOL and for body plans.

    Well, I don’t think it’s begged for body plans. For OOL, yes – we do not yet know how simple the first Darwinian-capable self-replicator was.

    g: Until you are on an island of function hill climbing adaptation mechanisms cannot kick in, so we see the issue already highlighted: origin of life and/or of major body plans requiring complex functional info.

    Well, this assumes that life is an archipelago. I do not regard this as demonstrated.

    h: From observations it is credible that first cell-based life required 100+ k bits of genetic info, and new body plans required 10+ m bits. WENT OVER THAT ALREADY, DID YOU TAKE NOTE?

    Well, I dispute these assertions! Yes, I take note, but disagree!

    i: These are so far beyond 500 – 1,000 bits to come from lucky noise that it is a no-brainer that we are looking at huge haystacks and tiny relative samples, on the gamut of accessible resources of our solar system or observed cosmos.

    Well, yes, if true. That’s the whole point – sure, if you are correct, life is impossible by Darwinian means. I don’t think you are correct. I think your logic is fine but your premise faulty!

    j: And that is exactly where the EF comes in. Once we have quantified the PTQS resources of our solar system and observed cosmos, it is reasonable to ask whether special and unrepresentative zones can reasonably be hit on by the known molecular level chaotic forces at work in warm little ponds for OOL or in triggering accidents in the chemistry of the living cell.

    Well, no – sure the EF might come in at some stage, but only when you know what the potential mechanisms are, and when you are sure about the height above sea level of the lowest shoreline, as it were. That’s what is in dispute.

    Nobody disputes that it is inconceivable that a highly complex biological object could come into being ex nihilo “by chance”. What they – we – dispute is that we are talking about highly complex biological objects arising that way. The reason there is all this talking past each other is that one side keeps shouting “look! You can’t get to these islands by chance!” and the other side is shouting “they aren’t islands!”.

    k: The wall of 500 to 1,000 bits as the upper limit of our cosmic resources jumps out at us. Adaptation within an island of function is possible, but getting to such deeply isolated islands of function is not credible on the gamut of our observed universe, much less our solar system.

    Sure. But it’s the deeply isolated islands that are contested, not the probability of getting to them were they to exist.

    l: That is why the test of generating coherent text in English by chance is so crucial a test, one that evo mat advocates despise because they know the message. Spaces of 10^50 possibilities are searchable just barely, but spaces of 10^150 are patently not.

    Right. Because those are deeply isolated islands.

    m: Funny, when I was a kid, the monkeys at keyboards example was often used to persuade us that any config would eventually appear at random, but over the past few years that argument has vanished. No prizes for guessing why.

    Because the proposal – Darwin’s proposal – was that the islands aren’t isolated – that they are contiguous, i.e. not islands, or, if islands, separated by wadeable water.

    n: Monkeys at keyboards is a dead icon of evo, and one that the evo mat advocates hope would vanish into the memory hole. Turns out the monkeys have switched sides, and are now an icon of Design, along with the burning match and spinning flagellum, the walking kinesin molecule [talk about Imperial AT AT walker tanks!] pulling a vesicle along a microtubule highway, the ATP Synthase rotary motor enzyme etc etc!

    Right! So let’s talk about those putative islands! The chance issue just isn’t in dispute. We all know that IF these functions are isolated islands, evolution can’t happen. The question is: are they isolated islands? From our side of the divide, the answer is no, and one of the reasons it is no, is that some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don’t.

    o: Remember, the 500 bit threshold is equivalent to having a cubical haystack 1 light month across, and picking ONE straw sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference, overwhelmingly you are going to end up with straw.

    Right – if the needles are unconnected. The Darwinian contention is that they are not.

    p: the search resources to get to OOL and onward to major body plans just are not there without programming or other intelligent direction, period.

    Only if OOL and “major body plans” are islands. If they aren’t, then no, you don’t need intelligent direction.

    q: So, it is no surprise that when we actually test empirically — another point where the rhetoric on side issues obfuscates the actual real world result — we keep on seeing that FSCI is indeed an excellent sign of design. The text of this post is sufficient to show that what chance could not reasonably do, intelligence does routinely and quickly.

    Well, what it means is that Behe and Meyer are making the right argument, and that the chance argument is only relevant once Behe is established to be right. At which point you don’t even need a Fisher test. The trouble is that Behe and Meyer’s arguments don’t, IMO, actually work.

    s: So blatant is this that the only explanation for why a plainly failed model prevails is that it has us in ideological captivity, to C19 positivism and its descendants.

    No, I think it’s that people think that evos are saying that these astronomically unlikely events are in fact likely. They aren’t. They are saying that the events postulated as being astronomically unlikely are not the events being postulated by evolutionary theory.

    In other words, we are arguing over different things.

    Anyway, hope your island is behaving itself, and not too battered by storms 🙂

    Cheers

    Lizzie

  71. 71
    kairosfocus says:

    Dr Liddle:

    Pardon, but instead I think the above aptly shows that you do not understand the issue that CSI highlights. This, I believe is due to prior commitments.

    For instance, I see a telling exchange in the just above:

    [KF:] o: Remember, the 500 bit threshold is equivalent to having a cubical haystack 1 light month across, and picking ONE straw sized sample at random through all the 10^57 atoms of our solar system working away for the lifespan of the cosmos since the usual date of the big bang. You could have a whole solar system in there and it would make no difference, overwhelmingly you are going to end up with straw.

    {EL:} Right – if the needles are unconnected. The Darwinian contention is that they are not.

    Not at all. Did you notice how I pointed out several times that a whole solar system could be lurking in the haystack and it would make but little difference to the search challenge?

    The problem is that you are taking so disproportionately small a sample of possibilities, due to the explosive exponentiation of possibilities vs the scope of Planck Time quantum states for the solar system since its founding, that you are overwhelmingly unlikely to pick up the UNrepresentative in any reasonable blind sample.

    That is why your earlier objection also fails:

    [EL:] So let’s talk about those putative islands! The chance issue just isn’t in dispute. We all know that IF these functions are isolated islands, evolution can’t happen. The question is: are they isolated islands? From our side of the divide, the answer is no, and one of the reasons it is no, is that some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don’t.

    This one goes to the heart of the problem, so let’s take it in steps:

    1 –> Take an arbitrary ASCII text string equal in length to the first 72 characters of this post. For all intents and purposes, that is 500 bits. There are many possible sense-making configs, some of which will be a few steps apart and can be imagined in aggregate to form an archipelago of islands.

    2 –> One thing is certain, the number of gibberish configs vastly outnumbers these, so we can be certain that we are dealing with islands of specific function in a sea of non-functional configs.

    3 –> This is demonstrated by the output of monkey-at-keyboard experiments, which I have excerpted again and again for literally months, only to be brushed aside again and again. One more time, citing Wiki testifying against interest, to put the matter on the table squarely, but this time I will extend the clip slightly:

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator “detects a match” (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text.

    More sophisticated methods are used in practice for natural language generation. If instead of simply generating random characters one restricts the generator to a meaningful vocabulary and conservatively following grammar rules, like using a context-free grammar, then a random document generated this way can even fool some humans

    4 –> See how it is a processing power challenge to get to coherent text, and a further challenge to detect it? Guess why: gibberish — as common sense driven by the “law of averages” tells us — is the overwhelming majority of the output.
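    To put the scale of that gibberish problem in numbers, here is a minimal sketch. It assumes purely random, uniform, independent draws from a 27-symbol alphabet (26 capitals plus space, a simplifying assumption), and counts the expected number of whole-string trials before an exact match on the first n characters:

```python
# Minimal sketch: expected random trials to match the first n characters
# of a target, assuming uniform, independent draws from a 27-symbol
# alphabet (26 capitals plus space). The exponential blow-up is the point.

def expected_trials(n_chars, alphabet_size=27):
    """Expected number of random strings drawn before an exact n-char match."""
    return alphabet_size ** n_chars

for n in (5, 24, 72):
    print(f"{n:2d} characters: roughly {expected_trials(n):.2e} trials")
```

    The 24-character case lands near 10^34 trials, broadly consistent with the monkey-years figure in the Wiki clip, and 72 characters lands near 10^103, far beyond the search resources discussed in this thread.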

    5 –> Next, notice what improves performance: programming that sets up an algorithm that then guides random variation towards function, and presumably may even improve the function by hill climbing, much as ID objector Zachriel boasts of and imagines is a fatal objection to the design inference. DESIGN outperforms chance and necessity without direction.

    6 –> But — on track record of objecting to DNA exhibiting a linguistic, algorithmic 4-state digital code — you will object, this has nothing to do with life forms.

    7 –> Here, your objection (which to my recall you have never withdrawn) is astonishingly ill-informed. The code is real and is easily accessible. It functions for protein manufacture in the ribosome, specifying in step by step algorithmic sequence:

    a: START (and lay down a Methionine AA),

    b: elongate step by step by using tRNA taxicab molecules as position-arm machines with pre-loaded AA’s based on coded assignment —

    recall the CCA — COOH link is generic, it is a recognising enzyme that loads a given tRNA with a given AA, and this can be reprogrammed, as has been demonstrated

    c: continue in a cycle until one of the STOP codons is reached,

    d: Release the protein, and perhaps pass it to a chaperone unit to ensure correct folding, and maybe taxicab it with kinesin to its work site
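    The a–d cycle can be put in code. The following is a toy sketch only: the codon table is heavily abridged (a real table has 64 entries), and the function illustrates the algorithmic START / elongate / STOP structure, not ribosome chemistry:

```python
# Toy sketch of the a-d reading cycle. The codon table is heavily abridged
# (a real table has 64 entries); this illustrates the algorithmic
# START / elongate / STOP structure, not ribosome chemistry.
CODON_TABLE = {
    "AUG": "Met",                           # START, also codes Methionine
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": None, "UAG": None, "UGA": None,  # STOP codons release the chain
}

def translate(mrna):
    """Read codon by codon from the first AUG until a STOP codon."""
    start = mrna.find("AUG")
    if start < 0:
        return []
    chain = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:  # STOP (or a codon missing from this toy table)
            break
        chain.append(residue)
    return chain

print(translate("GGAUGUUUGGCUAAGCU"))  # ['Met', 'Phe', 'Gly']
```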

    8 –> Protein fold domains, as has been pointed out to you over and over to the point of frustration, are deeply isolated, where the capacity to fold and to function crucially depends on the programmed AA sequence. THESE ARE ISLANDS OF ISOLATED FUNCTION.

    9 –> Similarly, a pile of bricks and the like do not a house make. And a tornado hitting a hardware store is utterly unlikely to build a house, for the same needle in a haystack reason.

    10 –> In short, there is an inherent basic plausibility to the islands of function model of complex configuration spaces that makes it plain that your reasoning in effect is that we “know” evo happened and works by forces of chance plus necessity, so we “know” there is an “exception” to the rule here.

    11 –> There is no exception. There is a reason why “missing links” is a part of the vocabulary of this debate. Namely, the fossil record — however we interpret it, it is the only actual direct observable evidence of the deep past — is one of gaps, stasis, and disappearance.

    12 –> There are a lot of triumphant headlines about found links, and there is a lot of rhetoric to obfuscate, but in fact Darwin recognised that the record (starting from the Cambrian explosion) was not in favour of gradualist, tree-branching evo, but hoped that future evidence would bear him out. It has not.

    13 –> Here is Gould’s summary — and recall that in the macro evo model, speciation is the gateway to all higher forms, so the talking point that tries to dismiss this as applicable to the higher levels is misleading and irresponsible (a distressingly common pattern with evo mat advocates and popularisers):

    . . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [[The Structure of Evolutionary Theory (2002), p. 752.]

    . . . . The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants.” [[p. 753.]

    . . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]

    14 –> Meyer’s summary in PBSW — a case of expulsion and blaming the victim by the reigning orthodoxy, where I have seen many irresponsible remarks that distract from the clear report on investigation that the paper “passed proper peer review by renowned scientists” — is even more pointed:

    The Cambrian explosion represents a remarkable jump in the specified complexity or “complex specified information” (CSI) of the biological world. For over three billion years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    [ . . . ]

  72. 72
    kairosfocus says:

    [oops, modded too many links, try again]

    15 –> Loennig of the Max Planck Institute, adds:

    examples like the horseshoe crab [supposedly, a 250 mn yr living fossil] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by ‘living fossils’ in the present world of organisms when applying the term more inclusively as “an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time” [85] . . . .

    One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if “several well-matched, interacting parts that contribute to the basic function” are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because “the removal of any one of the parts causes the system to effectively cease functioning”) such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process — or perish . . . .

    According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski’s criterion of specified complexity . . .

    16 –> It is plain that what is on the ground is a scrubland of bushes model, with roughly family level body plans adapting to environmental niches. That is precisely what would happen once one moves onto an island of function and then spreads out across it. The Darwinian tree of life is a dead icon.

    17 –> Now, have I made a blunder of thinking that “all parts of an island have to appear simultaneously”?

    18 –> Frankly, this is a highly misleading strawmannish caricature of the real and unavoidable challenge: functionally specific complex organisation often — indeed, typically — exhibits irreducible complexity, so that the core function (manifest in the basic body plan) has to arrive all at once based on the right sized, matching parts all put together in the right way or the function will simply not be there.

    19 –> We may have variations on the basic theme, but that core has to be there in the right config or there will be no function. That is a common fact of life for writing sentences, for programming computers, for building musical instruments or houses, and so on and so forth.

    20 –> In this case each body plan has to be embryologically feasible, based on early mutations that can affect the body plan — precisely those most likely to be lethal. (Hence the problem of miscarriages.)

    21 –> In the ID foundations series, no 3, I cited Angus Menuge:

    For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

    C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

    C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

    C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

    C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

    C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

    ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

    [ . . . ]

  73. 73
    kairosfocus says:

    [concluding]

    22 –> Menuge is obviously correct, and that is why the usual talking points about co-option are refuted by the reality of car parts stores. The part does not only have to be generically right, it has to be specifically right, and put in the right way in the right place for the machine to work again. Refuted to the point where such co-option rhetoric is plainly irresponsible and in some cases outright willfully deceptive. FRAUD, in one word.

    23 –> Where the problem does not rise to the level of fraud, I have begun to get the impression that I am dealing with people who have never had to design and develop a moderately complex, partly mechanical system that has to be properly integrated to work right, and/or who have never had to develop and debug a complex software program, and/or who are not open to see that there is a vast difference between a random string of gibberish and a 72+ ASCII character paragraph in contextually responsive, correctly spelled, grammatically correct English.

    24 –> So, pardon me but, for serious reasons, I do not think that declarations like:

    “some of the isolated-island proponents have made the error of thinking that all parts of an island have to appear simultaneously. They don’t”

    . . . are reasonable or responsible. Not after the past several months of discussions and patient, repeated explanations.

    25 –> In that context where the REASONS and empirical data for identifying that body plans starting with the first will be deeply isolated in genome and proteinome space, I am also much less than amused to see a remark like:

    I think it’s that people think that evos are saying that these astronomically unlikely events are in fact likely. They aren’t. They are saying that the events postulated as being astronomically unlikely are not the events being postulated by evolutionary theory.

    26 –> To respond in the terms of Dawkins’ Mt Improbable analogy — and yes, he is giving an argument by analogy; I am giving an argument on cutting down a phase space to a configuration or state space by leaving off momentum variables — Mt Improbable, on much evidence as already summarised and as has been discussed for months in painful detail and/or as linked, sits on an ISLAND of function. Until you get to the shores of that island, questions about the easy back slope don’t even arise. And the beyond-astronomical challenge is not to move from shoreline to niches and peaks within the island of function; it is to get to the island.

    27 –> Let me clip from my always linked, an apt remark by Gary Parker, via Royal Trueman:

    A cell needs over 75 “helper molecules”, all working together in harmony, to make one protein (R-group series) as instructed by one DNA base series. A few of these molecules are RNA (messenger, transfer, and ribosomal RNA); most are highly specific proteins. ‘When it comes to “translating” DNA’s instructions for making proteins, the real “heroes” are the activating enzymes. Enzymes are proteins with special slots for selecting and holding other molecules for speedy reaction. Each activating enzyme has five slots: two for chemical coupling, one for energy (ATP), and most importantly, two to establish a non-chemical three-base “code name” for each different amino acid R-group. You may find that awe-inspiring, and so do my cell-biology students! [Even more awe-inspiring, since the more recent discovery that some of the activating enzymes have editing machinery to remove errant products, including an ingenious “double sieve” system.[2],[3]] ‘And that’s not the end of the story. The living cell requires at least 20 of these activating enzymes I call “translases,” one for each of the specific R-group/code name (amino acid/tRNA) pairs. Even so, the whole set of translases (100 specific active sites) would be (1) worthless without ribosomes (50 proteins plus rRNA) to break the base-coded message of heredity into three-letter code names; (2) destructive without a continuously renewed supply of ATP energy [as recently shown, this is produced by ATP synthase, an enzyme containing a miniature motor, F1-ATPase.[4],[5],[6],[7]] to keep the translases from tearing up the pairs they are supposed to form; and (3) vanishing if it weren’t for having translases and other specific proteins to re-make the translase proteins that are continuously and rapidly wearing out because of the destructive effects of time and chance on protein structure! [8]

    28 –> To that, we can add the astonishing complexity of the ATP Synthase molecular factory that makes the steady supply of ATP molecules required to energise the cell, and many other associated nanomachines required to carry out the processes of life. To get to a viable self-replicating metabolic automaton is an exercise in the most complex and against the flow sort of molecular engineering and nanotechnology.

    29 –> Then, to move up to the body plans level, the best thing I can do is to point you to the remarks by the now expelled Sternberg, in this video, on how to make a whale. This one on the cichlids, will also be illuminating on built-in capacity for adaptive radiation. [Both of course are to be found in the IOSE page on body plan origins issues, which you may find useful to read, as I have suggested several times.]

    30 –> Translation: the pop genetics just does not add up within any reasonable estimate of the available time and resources on earth or in our observed cosmos.

    _________

    That is why the above remarks you have made above – after months of patient discussion and repeated explanation — come across as irresponsible, supercilious and willfully obtuse, indeed I can understand why some would see them as manifesting a passive aggressive strategy of resistance to the unwelcome.

    Please do better than the above. A lot better.

    GEM of TKI

  74. 74

    That is why the above remarks you have made above – after months of patient discussion and repeated explanation — come across as irresponsible, supercilious and willfully obtuse, indeed I can understand why some would see them as manifesting a passive aggressive strategy of resistance to the unwelcome.

    kf, I am not “resisting” the above, I’m trying to point out that the probability argument is a straw man:

    Of course there is an infinitesimal probability that complex biological structures will just pop into being because the right atoms or molecules happen to be next to each other at the right time. But no-one is claiming this. We all reject that hypothesis, but it tells us nothing. The question is which, of several candidate hypotheses, offers a plausible mechanism.

    So the probability arguments are irrelevant to any actual evolutionary argument.

    The argument isn’t that evolutionary processes can find deeply isolated islands of function, it’s that biological functions are not deeply isolated!

    But what there is no point in doing is keeping on trying to persuade me that evolutionary search can’t find deeply isolated islands of function, because I completely agree!

    Nobody disagrees.

    What you have there is an IC argument, not a probability argument.

    And the trouble with IC arguments is that they are, essentially, arguments from ignorance – if we don’t have a detailed account of how a given organism or function could have evolved incrementally, then it is potentially “IC”.

    There isn’t an obvious evolutionary counter-argument to that in evolutionary theory, each IC candidate throws up a different set of problems, and the best evolutionary biologists can do, mostly, is to provide circumstantial evidence that points to plausible pathways.

    And, right now, for OOL we don’t even have that, although OOL researchers seem quite excited at recent progress.

    What evolutionary scientists can do, however, is point to the power of Darwinian algorithms to deliver complex solutions when the solution space is not a series of isolated islands, and also draw attention to genetic and palaeontological evidence that suggests that the evolutionary fitness landscape is similar.

    Indeed the most important supporting evidence is the evidence Darwin himself drew attention to – that far from being “islands”, the pattern of distribution of structures in organisms forms a connected tree. Sure, there are islands, too, but they are conspicuously uninhabited! So we do not have mammals with bird lungs, or six-limbed lizards, or birds with rotational symmetry.

    Evolution can only, as you correctly state, find “solutions” that are connected; it can’t leap.

    It’s the view of evolutionary biologists, in general, that the data do not show leaps.

    I am aware that you, and many others in the ID movement, disagree, but that’s the issue that needs to be debated, not how improbable a leap would be. We all agree that a leap would be improbable.

  75. 75

    A short addendum, and hopefully summary of my point:

    There seem to me to be two quite separate issues here:

    1) Can Darwinian “search” find solutions that are connected?
    2) Are the solutions that we observe in nature connected?

    The answer to the first seems to me to be clearly “yes”.

    I think it is highly likely that the answer to the second is also yes, but there are certainly gaps in our knowledge. Whether these are also gaps in the connectivity is what we are debating, I think.
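    A minimal sketch of point 1, using Dawkins’ classic “weasel” illustration. The target string and the single-letter mutation scheme are assumptions of the sketch, not claims of evolutionary theory; the point is only that when each change can be kept on its own merits (a connected landscape), the search is fast:

```python
import random

# Cumulative selection on a connected landscape (Dawkins' "weasel" sketch).
# The target and one-letter mutation scheme are illustrative assumptions.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def hill_climb(seed=0):
    """Keep any single-letter change that does not reduce the match."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    steps = 0
    while current != TARGET:
        pos = rng.randrange(len(TARGET))
        mutant = current[:pos] + rng.choice(ALPHABET) + current[pos + 1:]
        if score(mutant) >= score(current):
            current = mutant
        steps += 1
    return steps

print(hill_climb())  # modest number of steps; blind draws would need ~27**28
```

    If instead the target were an isolated island reachable only by one lucky whole-string draw, on the order of 27^28 trials would be needed, which is exactly the contrast being debated here.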

  76. 76
  77. 77
    kairosfocus says:

    Dr Liddle:

    At this point, I think I will rest my case with the astute onlooker, who will be able to see for him or her self what is going on.

    GEM of TKI

  78. 78
    kairosfocus says:

    Dr Liddle:

    I am making a search-resources vs. search-space argument, which has been at the heart of modern design thought for the past decade and more.

    Have you noticed the discussion of taking a one-straw sized sample from a cubical hay bale 1 light month across, and what sort of result is likely from that, on search resources vs being able to get a typical vs an atypical observation from such a sample?

    Did you note the remarks on the valid form of the layman’s law of averages, in response to the Bayesian arguments above?

    I am NOT making a probability argument.

    And yet, that is what you thought you saw. (And in another thread, by demanding that I try to calculate the probability for the Grand Canyon forming, that is what a certain Dr Matzke thinks he sees too.)

    That is a measure of how off track the “strawman” accusation above is.

    Please, think again.

    GEM of TKI

  79. 79
    kairosfocus says:

    Dr Liddle

    There are a lot more than mere gaps in our knowledge that have to be faced.

    Moving from dilute solutions of hard-to-form monomers in ponds to endothermic, informational polymers organised into functioning automata is hard. Moving from the first such automaton to complex body plans is informationally harder still, and the fossil record bears that out, as does the population genetics.

    THE ONLY KNOWN, OBSERVED SOURCE OF FSCI, ROUTINELY, IS INTELLIGENCE.

    The issue is what we do know, not what we don’t.

    GEM of TKI

  80. 80

    Well, as I think I’ve said, kf, your point about “islands of function” is a potentially valid one.

    But I don’t understand why you say you aren’t making a probability argument. Anything involving the EF or CSI or the UPB is a probability argument. Isn’t it?

  81. 81
    kairosfocus says:

    Dr Liddle:

    I am discussing the implications of extremely small samples, relatively speaking, of large configuration spaces of possibilities.

    Such samples, as long as they are not intelligently directed, targeted searches based on a knowledge of the structure of the overall space, are overwhelmingly likely to be representative of the bulk of the population of configs, not the isolated narrow zones that are specially and separately describable.

    In short, if you have a cubical hay bale one light month across, and under relevant circumstances take a one-straw-sized sample, you are overwhelmingly likely to pick up straw, even if a solar system is hiding in there. That is the sample-to-population ratio comparing the ~10^102 Planck-time quantum states (PTQSs) of our solar system's 10^57 atoms since the beginning of the universe to the ~10^150 or so states for 500 bits, from 000 . . . 0 to 111 . . . 1. That is why random trial and error/success is not going to work as a search strategy for even modest information spaces.
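
This sampling claim can be illustrated with a toy simulation; the numbers below (a 100-bit space with a "special zone" occupying 2^-20 of it) are chosen purely for illustration and are vastly more generous than the 500-bit case:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Toy version of the haystack point: treat "first 20 bits all zero" as a
# separately describable special zone. It occupies 2**-20 (about one
# millionth) of a 100-bit configuration space.
N_BITS = 100
TARGET_PREFIX = 20
TRIALS = 10_000

def in_special_zone(sample: int) -> bool:
    # True if the top TARGET_PREFIX bits of the sample are all zero.
    return sample >> (N_BITS - TARGET_PREFIX) == 0

hits = sum(in_special_zone(random.getrandbits(N_BITS)) for _ in range(TRIALS))
print(f"{hits} special-zone hits in {TRIALS} blind samples")  # almost surely 0
```

Blind sampling lands in the bulk of the space, even though this toy zone, at one part in a million, is enormously wider relative to its space than a 500-bit island would be.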

    So, once we have narrow zones of interest T in very large configuration spaces W beyond 500 bits, chance-driven searches on the gamut of the solar system are not a reasonable explanation for instances of FSCI. Similarly, if we expand the scope to the cosmos as a whole, 1,000 bits overwhelms the PTQS resources of our observed cosmos. In that case the ratio is much worse: 10^150:1.

    And if you look carefully, you will see that I am not relying on probability or estimates thereof, but on the brute force of drowning out the resources of a given gamut of reasonable search. (Dembski's finesse is right: it takes 10^30 to 10^40+ PTQSs to carry out even the fastest reactions or a typical enzyme-enabled reaction, but that point is being lost in distractive talking points.) That is why my scope of search for 500 bits is the solar system, and it is why I take time to point out that our solar system is our effective universe for atomic interactions.

    If you want to make an anywhere-in-our-observed-cosmos argument, just move up to 1,000 bits; so the 500-bit discussion is without loss of generality, as there is little practical difference between 72 and 143 ASCII characters' worth of prescriptive, functionally specific information.
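
The arithmetic behind the 500- and 1,000-bit thresholds and the 72/143-character figures can be checked directly; the 10^102 PTQS figure for the solar system is the commenter's own estimate, taken as given here:

```python
from math import ceil, log10

# Configuration-space sizes for the two thresholds discussed above.
states_500 = 2 ** 500
states_1000 = 2 ** 1000

print(f"2^500  ~ 10^{log10(states_500):.1f}")   # 10^150.5
print(f"2^1000 ~ 10^{log10(states_1000):.1f}")  # 10^301.0

# 7-bit ASCII characters needed to carry each threshold's worth of bits:
print(ceil(500 / 7), ceil(1000 / 7))  # 72 143

# The quoted 10^102 Planck-time quantum states, as a fraction of the
# 500-bit configuration space:
print(f"sample:space ~ 10^{log10(10 ** 102 / states_500):.1f}")
```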

    If you want to swamp out the observed cosmos and shift to appeals to the imagined multiverse, you have changed the subject from physics to philosophy. In philosophy the full panoply of worldview issues and worldview warranting evidence and argument obtain, on comparative difficulties. Evolutionary materialism does not fare so well on such.

    And if you want to cloak your philosophical speculation by wearing a Lewontinian lab coat, dismissing others from the table of comparative difficulties — and we saw a case here just a few days ago — you are resorting to worldview level question-begging and censorship.

    The notion that all discussions about CSI etc. hinge on probability calculations that can then be derided through suitably hyperskeptical talking points is a fallacy. There is a search-challenge version of such arguments that is independent of any particular probability estimate, relying on the challenge of ending up with a tiny sample of the haystack while hoping to catch a needle. Such a sample, unless intelligently and specifically directed, will by the valid version of the layman's law of averages be overwhelmingly likely to pick up the typical, not the atypical.

    And that was demonstrably –NFL, pp. 144, 148 — Dembski’s underlying analysis, over the past decade.

    The dismissive argument has been based on red herrings led out to strawmen, too often laced with ad hominems and ignited through incendiary rhetoric. The better to poison and polarise the atmosphere, confusing, distracting and alienating participants, and distorting ability to hear and understand what would otherwise be simple and clear, so that the ability to think clearly is compromised.

    A set of tactics that I am all too familiar with, and which — having seen how they led my homeland to bloody ruin by economic collapse and civil war 30 years ago — I deplore.

    I doubt that the likes of the Anti Evo denizens who seem to love such tactics, understand the matches they are playing with. Here is the apostle James’ inspired warning:

    James 3:1 NOT MANY [of you] should become teachers (self-constituted censors and reprovers of others), my brethren, for you know that we [teachers] will be judged by a higher standard and with greater severity [than other people; thus we assume the greater accountability and the more condemnation].

    2 For we all often stumble and fall and offend in many things. And if anyone does not offend in speech [never says the wrong things], he is a fully developed character and a perfect man, able to control his whole body and to curb his entire nature.

    3 If we set bits in the horses’ mouths to make them obey us, we can turn their whole bodies about.

    4 Likewise, look at the ships: though they are so great and are driven by rough winds, they are steered by a very small rudder wherever the impulse of the helmsman determines.

    5 Even so the tongue is a little member, and it can boast of great things. See how much wood or how great a forest a tiny spark can set ablaze!

    6 And the tongue is a fire. [The tongue is a] world of wickedness set among our members, contaminating and depraving the whole body and setting on fire the wheel of birth (the cycle of man’s nature), being itself ignited by hell (Gehenna).

    7 For every kind of beast and bird, of reptile and sea animal, can be tamed and has been tamed by human genius (nature).

    8 But the human tongue can be tamed by no man. It is a restless (undisciplined, irreconcilable) evil, full of deadly poison.

    9 With it we bless the Lord and Father, and with it we curse men who were made in God’s likeness!

    10 Out of the same mouth come forth blessing and cursing. These things, my brethren, ought not to be so.

    11 Does a fountain send forth [simultaneously] from the same opening fresh water and bitter?

    12 Can a fig tree, my brethren, bear olives, or a grapevine figs? Neither can a salt spring furnish fresh water.

    13 Who is there among you who is wise and intelligent? Then let him by his noble living show forth his [good] works with the [unobtrusive] humility [which is the proper attribute] of true wisdom.

    14 But if you have bitter jealousy (envy) and contention (rivalry, selfish ambition) in your hearts, do not pride yourselves on it and thus be in defiance of and false to the Truth.

    15 This [superficial] wisdom is not such as comes down from above, but is earthly, unspiritual (animal), even devilish (demoniacal).

    16 For wherever there is jealousy (envy) and contention (rivalry and selfish ambition), there will also be confusion (unrest, disharmony, rebellion) and all sorts of evil and vile practices.

    17 But the wisdom from above is first of all pure (undefiled); then it is peace-loving, courteous (considerate, gentle). [It is willing to] yield to reason, full of compassion and good fruits; it is wholehearted and straightforward, impartial and unfeigned (free from doubts, wavering, and insincerity).

    18 And the harvest of righteousness (of conformity to God’s will in thought and deed) is [the fruit of the seed] sown in peace by those who work for and make peace [in themselves and in others, that peace which means concord, agreement, and harmony between individuals, with undisturbedness, in a peaceful mind free from fears and agitating passions and moral conflicts]. [AMP]

    Just think about how, after months, you have not seen that I am NOT making a probability calculation, but am discussing a search-space scope challenge. That is where I have been for years and years.

    GEM of TKI

  82.
    DrBot says:

    KF:

    THE ONLY KNOWN, OBSERVED SOURCE OF FSCI, ROUTINELY, IS HUMANS.

    You have been corrected on this many times yet you willfully persist with the drumbeat repetition of patent falsehoods.

    We observe intelligent behavior in other animals, but no generation of FSCI. It only crops up with humans, so perhaps this consistent observation suggests that FSCI generation is unique to humans rather than an expected outcome of intelligence.

  83.

    Yes, I know you are discussing a search space scope challenge, kf.

    IMO, it is the only valid approach.

  84.
    kairosfocus says:

    Dr BOT:

    Humans are instances of intelligent beings.

    We have no reason to infer that such intelligence is limited to or exhausted by human beings [a beaver dam or the behaviour of a dolphin or an elephant or some birds points to their having limited intelligence and design capacity too, for instance . . . ], and indeed the very evidence of codes and programming in life itself not to mention the signs of functionally specific and complex organisation of the cosmos point towards intelligence beyond humanity.

    So, I am entirely correct to highlight that the only known source of FSCI is intelligence, and leave open the possibility of intelligence beyond humans.

    Going beyond that, it is a longstanding fact of life that many serious thinkers have pointed to the possibility of intelligence beyond humanity, indeed, as foundational to the cosmos. That is a major issue, with significant evidence in its favour at worldviews level. So we should not beg questions by locking out intelligence beyond humanity.

    I’d say your friendly local beaver, or dolphin etc may also have a few choice things to say on that!

    So, the properly scientific approach is to observe, identify, test and respect the reliable signs of intelligence, and then let the signs speak for themselves where they appear.

    GEM of TKI

  85.
    kairosfocus says:

    In addition, Dr BOT, your resort to a turnabout and manifestly false, mocking dismissal is duly noted as a further regrettable descent in tone.

  86.
    kairosfocus says:

    Further point: I forgot to note above that if we, for instance, come across a car or a computer or a copy of LibreOffice, we do not infer to “human” but to a skilled, knowledgeable intelligent designer. Embodiment in a human body plainly does not define the matter. Indeed, I have said several times here at UD that if we are credibly designed intelligences ourselves, per the testimony of our cells, I see no reason why we cannot in turn be designers of intelligences, once we crack the techniques. So, I have repeatedly spoken in terms of R. Daneel Olivaw, of Asimov’s series. I take the Derek Smith two-tier controller model seriously, and see no reason why we should not be able to create a software supervisory controller that would be, to at least a significant degree, artificially intelligent. Such may well not be conscious [though I suspect sophisticated control looping, projective proprioception and memory techniques may well give a passable imitation of that], but it might be intelligent enough to be creative.

  87.
    DrBot says:

    I’m using the same language to you that you use to others, and you regard that use of language as bad manners!

    Sadly telling onlookers!

  88.
    DrBot says:

    A great example of circular reasoning. First off, we can certainly infer that human-like intelligence could be found elsewhere in the universe, but we don’t know it for certain, so based on what we DO know, humans are the only observed source of FSCI.

    You cite the example of FSCI in biology and the universe in general as evidence of another intelligence that can make FSCI, whilst arguing that the FSCI in biology and the universe must have come from intelligence because we observe humans producing it.

    Around and around we go 😉

  89.
    kairosfocus says:

    Onlookers:

    The above shows how Dr Bot is not being serious in any way that warrants further discussion.

    I speak for record.

    I pointed out above, again in outline, the specific reason for inferring from FSCI to intelligence. Dr Bot’s attempted dismissal was to say the inference was to humans, not to intelligence. I pointed out the reason why I draw the distinction, with evidence. He then accuses me of circularity, failing to address that evidence. For just one aspect, what part of, say, a beaver making a dam adapted to the particular trees and location is human? Or, since when do we infer to “human” instead of “computer engineer” when we see the functionally specific complex organisation and information in a computer? For that matter, when I pointed to the prospect of artificially intelligent machines, such as R. Daneel Olivaw, as illustrating how non-human intelligences on, say, the Smith cybernetic model could be developed, what did that show but that I am, and have long been, open to other cases?

    The first suffices to show that we have observed cases of relevant non-human intelligence building entities that exhibit FSCO/I, and the second to show that merely being human does not equip one to do particular instances of design. The third, where I point to an architecture that opens up a whole world of possibilities, conceptual and practical, shows that I am open to different architectures of intelligence.

    So, “human” and intelligent designer cannot be equivalent.

    The accusation of circularity on inferring inductively from tested, reliable signs of intelligence to intelligence as their most credible cause, through trying to substitute “human” for “intelligent,” is therefore absurd on its face.

    Failure to respond seriously shows that the intent is to ridicule and dismiss, not to deal seriously with serious matters that, for instance, impinge on whether soul/mind is real and independent of matter.

    He also tried a straight unwarranted turnabout to my highlighting a common uncivil rhetorical pattern [and remember I am currently dealing with a case where this has now amounted to making threats against my family]. When I pointed out that he was making unwarranted accusations in an obvious turnabout, which is atmosphere-poisoning, instead of dealing with matters on the merits, he tried another round of unwarranted accusation.

    Sorry, this is going nowhere where reasonable people want to go.

    And, Dr Bot the above is a sorry record that you have made; please, please, please do better than that.

    Good day.

    GEM of TKI

  90.
    kairosfocus says:

    F/N: Pardon, Dr Bot, but if you think that the issue is mere assertions, or “language,” instead of the underlying issues of demonstrated selective hyperskepticism and projection through unwarranted turnabout accusation, pardon but you missed the boat, here, here, and here as well as here leading on to here.

  91.
    molch says:

    So, beaver dams have FSCI?

  92.
    kairosfocus says:

    Molch:

    Have you noticed that such are specifically adapted to location and are functionally specific and organised?

    FSCO implies FSCI, via nodes and arcs analysis. And recall, the solar system threshold is 500 bits of such.
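
As a hedged sketch only: one toy way to put a number on a nodes-and-arcs description is to charge each node a type label and each arc its two endpoint names. The part counts below are hypothetical, chosen for illustration, not measurements of any real dam:

```python
from math import ceil, log2

def description_bits(n_nodes: int, n_node_types: int, n_arcs: int) -> int:
    """Toy description length: each node carries a type label, and each
    arc is specified by naming its two endpoint nodes."""
    node_bits = n_nodes * ceil(log2(n_node_types))
    arc_bits = n_arcs * 2 * ceil(log2(n_nodes))
    return node_bits + arc_bits

# Hypothetical structure: 40 parts drawn from 16 part types,
# joined by 60 connections.
bits = description_bits(40, 16, 60)
print(bits, bits > 500)  # 880 True
```

On this crude accounting the example clears the 500-bit figure; a defensible estimate would of course need a real parts-and-joints model of the structure in question.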

    Now where the beaver got that ability from is a great question.

    One that would be directly comparable to asking the same question about the AI in a robot.

    GEM of TKI

  93.
    kairosfocus says:

    PS: Cf here, noticing site-specificity and use of divergent architectures based on stream flow patterns. Observe:

    A beaver shapes a dam according to the strength of the water’s current. Relatively still water encourages dams that are almost straight; while dams in stronger currents are curved, bowed toward upstream. The beavers use driftwood, green willows, birch and poplars; and they mix in mud and stones that contribute to the dam’s strength. When some of the sticks used in the dam “truncheon” (start to grow) the tangled roots contribute more strength to the dam.

    Beavers are known to build very large dams. The largest known was discovered by satellite imagery in Northern Alberta in 2007, approximately 2,790 ft (850 m) long, beating the previous record holder found near Three Forks, Montana, at 2,140 ft (650 m) long, 14 ft (4.3 m) high, and 23 ft (7.0 m) thick at the base . . . . studies involving beaver habitual activities have indicated that beavers may respond to an array of stimuli (such as seeing water movement), not just the sound of running water. In two experiments Wilson and Richard (1967, 1980) demonstrate that, although beavers will pile material close to a loudspeaker emitting sounds of water running, they only do so after a considerable period of time. Additionally the beavers, when faced with a pipe allowing water to pass through their dam, eventually stopped the flow of water by plugging the pipe with mud and sticks. The beavers were observed to do this even when the pipe extended several meters upstream and near the bottom of the stream and thus produced no sound of running water. Beavers normally repair damage to the dam and build it higher as long as the sound continues. However, in times of high water, they often allow spillways in the dam to flow freely . . . .

    A beaver dam has a certain amount of freeboard above the water level. When heavy rains occur, the pond fills up and the dam gradually releases the extra stored water. Often this is all that is necessary to reduce the height of the flood wave moving down the river, and will reduce or eliminate damage to human structures. Flood control is achieved in other ways as well. The surface of any stream intersects the surrounding water table. By raising the stream level, the gradient of the surface of the water table above the beaver dam is reduced, and water near the beaver dam flows more slowly into the stream. This further helps in reducing flood waves, and increases water flow when there is no rain. Beaver dams also smooth out water flow by increasing the area wetted by the stream. This allows more water to seep into the ground where its flow is slowed. This water eventually finds its way back to the stream. Rivers with beaver dams in their head waters have lower high water and higher low water levels.

    Here, it is argued that beavers are a keystone species, raising issues onward of structures and fine-tuning of ecosystems:

    A keystone species is one that modifies the natural environment in such a way that the overall ecosystem builds upon the change. The ponds, wetlands, and meadows formed by beaver dams increases bio-diversity and improves overall environmental quality. It is our opinion that many environmental decision makers do not fully understand the positive effects that beavers and dams bring to ecosystems. This is understandable, because beavers had been virtually eradicated prior to the development of modern scientific methods. This site incorporates first principle engineering concepts in combination with environmental observations to illustrate the extent that our watersheds have changed with the removal of beavers. Beavers affected our ecosystems and land in a very extensive and positive way. Modern society has recently begun to realize the benefits of wetlands. This realization marks a turning point in over 300 years of extensive wetland eradication. Beaver dams are the primary natural method of establishing wetlands. Beaver dams represent the only natural methods of forming lakes, ponds, and wetlands in most watersheds. The exceptions to this would be glacial lakes, or lakes formed by geologic activity . . .

    In short, the humble beaver raises some very significant questions about designed designers, and designed environments.

    Which has things to say about powers of adaptation being designed into species and ecosystems. That is, we are back to the issue raised in Wallace’s “intelligent evolution” concept, as a designed means of creating particular features of the world of life in local ecosystems.

  94.
    Eugene S says:

    Hi All,

    I have seen mentions of onlookers, one of which I happen to be. Here is what I observed, if it is interesting to anyone.

    Unfortunately, I can see an unwillingness of Darwinist users of this forum to discuss the ID argumentation seriously (the rare exceptions prove the rule). I hazard a guess that this is for non-scientific reasons, one of which seems to be an unwillingness to be reconciled to two ideas:

    1. ID is science.
    2. ID does not refute religion, which is a no-no for materialist thinkers, who maintain that “proper science” must shut the door in front of any religious implications.

    It is my personal opinion, but after all I have read to date here and elsewhere (I have even participated in chats with some Darwinist users here, without name-calling), the Darwinist camp goes even further down in my estimation, regrettably. Of course, I can distinguish Darwinism from Darwinists 🙂 But as far as Darwinism is concerned, IMHO it is already a matter of the past.

    My special handshake to Kairosfocus.

  95.
    Joseph says:

    CSI has generated false negatives? Evidence please.

    Darwinian algorithms? Evidence please.

  96.
    Joseph says:

    Grunty,

    ID is not anti-evolution. And Darwinism doesn’t explain the origin of species.

    As for pulsars, there wasn’t much investigation behind the initial suggestion that they were artificial. Geez, pulsars don’t even meet SETI’s criteria for an alien radio signal.

  97.
    markf says:

    Eugene

    Unfortunately, I can see unwillingness of Darwinist users of this forum to discuss the ID argumentation seriously (the rare exceptions prove the rule)

    What are your criteria for a serious argument? Much of the debate above is about the relevance of Bayesian inference to detecting design – something which Dembski disagrees with but presumably thinks is a serious subject which is relevant to ID (he devotes a paper and a whole chapter of The Design Inference to it). You might disagree with the anti-ID case but why do you say it is not serious?

  98.
    Eugene S says:

    Markf,

    By “serious” I mean “scientific” (no personalities, passion, or personal likes/dislikes). As simple as that.

    The one you mention is serious. But such things get buried under non-scientific repartee. The fluff makes it difficult to follow the case. It is tiring to get through rounds and rounds of repetitive philosophising that something might or might not be possible. In particular, I mean the perpetual repetitions that “CSI can be generated spontaneously”. That is not science without concrete examples; it is philosophy. Of course, true philosophy is also serious, but I am afraid that is a different story.

    What else can an onlooker suppose apart from the unwillingness to hear the other side?

    Thanks.

  99.
    Joseph says:

    The anti-ID case is not serious for one simple reason-

    To refute any given design inference, all one has to do is step up and demonstrate that nature, operating freely, can produce the effect/structure deemed to be of agency origin. Anything short of that is just plain ole whining. And whining should never be taken seriously.

  100.
    markf says:

    Eugene

    If it helps – I would never claim that CSI can be generated spontaneously, because I believe that if you examine the definition of CSI (as in Dembski’s most recent writings) you find that it cannot be generated spontaneously, by definition. If a plausible natural cause is found, then it no longer counts as CSI.

  101.
    kairosfocus says:

    Joseph, exactly: show us the cases that do not turn out to be illustrations of intelligent design in disguise or unrecognised. GEM of TKI

  102.
  103.
    kairosfocus says:

    Dr ES, Thank you. Appreciated. GEM of TKI

    PS, DV, on the morrow I intend to post something on Beavers as Designers . . . all sorts of implications lurk in that.

  104.
    kairosfocus says:

    Are beavers designers? Are they intelligent? Cf here.
