Intelligent Design

Of Pulsars and Pauses


DrREC is not just any Darwinist. He holds a doctorate and has published on complex matters of biology in peer-reviewed journals. He is not stupid. That’s why I like to use his examples in my posts. I am not picking on a defenseless layman. He’s among the Darwinists’ best and brightest. So let’s get to his latest pronouncement from on high:

DrREC writes: 

Pulsars often have a complex behavior. But is it specified? If we took the pattern of pulses we detect as the ‘design specification’ — the pattern we search for, we would conclude yes. Totally and undeniably circular. Prove me wrong.

Here’s the problem with DrREC’s reasoning. He seems to assume (despite being told the contrary numerous times) that any “pattern” can be designated post hoc as “specified.” He does not seem to understand the most basic concepts of design theory. The answer is that not any pattern can legitimately be called a specification.

In a comment to my prior post Bruce David explains the concept nicely as follows:

Dembski’s work builds on that of earlier probability theorists’ who were wrestling with the problem that, for example, any pattern of heads and tails obtained by tossing a coin 100 times is equally improbable, yet intuitively, a pattern of 50 heads followed by 50 tails is in some sense far less probable than a ‘normal’ random pattern. In order to solve this conundrum, they came up with the idea of specification—if the pattern of heads and tails can be described independently of the actual pattern itself, then it is specified, and specified patterns can be said to be non-random. And note, the pattern does not have to be described ahead of time; the requirement is just that it is capable of being described independently of the actual pattern itself. In other words, a normal ‘random’ pattern can only be described by something equivalent to ‘the first toss was heads, the second heads, the third tails,’ and so on, whereas the example above is specified because it can be described as I already have, namely, ’50 heads followed by 50 tails’.
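A rough computational stand-in for Bruce’s criterion (my analogy, not Dembski’s formalism) is compressibility: a pattern with a short independent description compresses far better than a typical random run of the same length. A minimal Python sketch:

```python
import random
import zlib

# "500 heads followed by 500 tails" -- a pattern with a short independent description
specified = "H" * 500 + "T" * 500

# A 'normal' random run of the same length (seeded so the sketch is repeatable)
random.seed(42)
typical = "".join(random.choice("HT") for _ in range(1000))

# Both sequences are equally improbable (each has probability 2^-1000),
# but only the specified one admits a description much shorter than itself.
print(len(zlib.compress(specified.encode())))  # small: highly compressible
print(len(zlib.compress(typical.encode())))    # much larger: no short description
```

Compression is only a loose proxy for “describable independently of the actual pattern itself,” but it makes the asymmetry Bruce describes concrete.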

Back to DrREC’s question. The pulses from the pulsar are indeed highly complex (i.e., improbable). But they are never specified because they cannot be, as Bruce says, “described independently of the actual pattern itself.” Therefore if we “took the pattern of pulses we detect as the ‘design specification’” even though that pattern could not be described independently of the actual pattern itself, we would simply be wrong. That pattern does not conform to the definition of a specification.

DrREC basically says, “If we call any pattern we find a ‘specification,’ then any pattern we find will be a ‘specification,’ and that gets us nowhere.” Well, of course he is right as far as it goes. But at a deeper and more meaningful level he is wrong, because no one says you can call just any pattern you find a specification. The pattern must conform to a strict criterion before it can be considered a specification.

So DrREC, I answered your question. While we are on the issue of pulses you can answer mine. Suppose researchers detect a repeating series of 1,126 pulses and pauses of unknown origin. The pulses and pauses start like this (with ones corresponding to pulses and zeros to pauses): 110111011111011111110 . . . After analyzing the series they determine that the zeros are spaces between numbers and the ones add up to numbers. Thus, the excerpt I reproduced would be 2, 3, 5 and 7, the first four prime numbers. The researchers suddenly realize that the 1,126 pulses and pauses represent the prime numbers between 1 and 100. (Obviously, this was the series in the movie Contact.)
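The decoding the researchers perform can be sketched in a few lines (assuming, as the excerpt suggests, that single pauses separate the unary tallies):

```python
def decode_pulses(seq: str) -> list[int]:
    """Treat runs of 1s as unary tallies and 0s as separators between numbers."""
    return [len(run) for run in seq.strip("0").split("0") if run]

# The excerpt from the post: tallies of 2, 3, 5 and 7 pulses
print(decode_pulses("110111011111011111110"))  # -> [2, 3, 5, 7]
```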

My question for you DrREC is this:  Would you join Arch-atheist, uber-materialist, Darwinist Carl Sagan and conclude that this series is obviously designed by an intelligent agent?  If so, why?  After all, it is a hard fact that this series of 1,126 pulses and pauses is NO MORE IMPROBABLE than any other series of 1,126 pulses and pauses.

125 Replies to “Of Pulsars and Pauses”

  1. 1
    DrREC says:

    Wow! A third thread about me! And comments are on! Joy.

    “It is a hard fact that this series of 1,126 pulses and pauses is NO MORE IMPROBABLE than any other series of 1,126 pulses and pauses.”

    Indeed.

    “Thus, the excerpt I reproduced would be 2, 3, 5 and 7, the first four prime numbers”

    Yes-because it conforms to a recognized PRE-specified pattern-of the prime numbers. This is analogous to what I gave you earlier: “In the pulsar example, if the pattern was specified beforehand (say 30 digits of pi) we might conclude design.” Not particularly creative on your part, SETI and Contact and all…..

    What was my question in that comment? Oh yeah:

    “But in biology, you take post-hoc human specifications-”designs” that describe nature, after scientific investigation, and use them to detect “design.”

    Maybe someone should PREdict a biological design.”

    Try that one.

    I like the comments on 🙂

  2. 2
    DrREC says:

    But note that if the specification did not exist beforehand, using the complex pattern detected from the pulsar as the pattern used to detect design (because a la Dembski, design is all about pattern) is totally circular.

    This is what is going on in biological fcsi fsci fisaci fiasco? I forget. ….

  3. 3
    Barry Arrington says:

    DrREC, you never got around to giving a straight answer to the question. Would you make a design inference and if so why?

  4. 4
    DrREC says:

    For the third time,

    “Yes-because it conforms to a recognized PRE-specified pattern-of the prime numbers.”

    What biology have you detected that conforms to a pre-specified pattern, not a post-hoc detection?

  5. 5
    DrREC says:

    Or as you call it a “specification” not merely a pattern.

    Wondering what the criteria for that are.

  6. 6
    Barry Arrington says:

    We are making progress!!! That is so rare in these combox debates I wanted to stop and celebrate for a moment. Thank you DrREC.

    Let’s review the points on which we appear to agree.

    1. Information is “complex” if it is highly improbable.

    2. Mere improbability is insufficient to warrant a design inference.

    3. In the “Contact” example, the series of 1,126 pulses and pauses is clearly improbable, but it is no more improbable than any other series of 1,126 pulses and pauses.

    4. In the “Contact” example, however, the series conforms to a pattern, the prime numbers between 1 and 100.

    5. And because the series conforms to this pattern, we can confidently infer that the series was produced by an intelligent agency, i.e., a design inference is warranted.

    DrREC, please confirm that we are in agreement on these points and I will answer your question.

  7. 7
    DrREC says:

    Sure, but don’t act like this is some sort of victory. I set the example in referencing pi in pulsar sequences.

    “Mere improbability is insufficient to warrant a design inference.”

    Is one you should tell the “big number” crowd over and over.

    Curious that Dembski’s definition of specification has disappeared:
    “The distinction between specified and unspecified information may now be defined as follows: the actualization of a possibility (i.e., information) is specified if independently of the possibility’s actualization, the possibility is identifiable by means of a pattern.”

    And Barry has already granted that post-hoc pattern recognition is excluded.

    So I’m curious, what non post-hoc pattern equivalent to pi or prime numbers do you think exists in biology?

    And don’t just reference something improbable, because that fails 2.
    Hand-waving semiotic definitions are most unimpressive, also.

    Seriously, if you’ve found the prime numbers to 100 or the digits of pi in some organism, I’ll write your next ID article…..

  8. 8

    DrREC,

    Thanks for your thoughts and criticism of the specification criterion. Your point seems to be that we can detect design only when it conforms to a pre-specified pattern.

    Just to make sure I am understanding, is it your position that it would be impossible to identify design in the following instances (in each case where the specification is not known beforehand): (i) finding a new and unusual architectural structure at an archaeological dig site, (ii) cracking a previously-unknown communications code, such as was done in WWII, or (iii) determining homicide in an investigation where it is not known beforehand whether it was a homicide or how it might have been performed?

  9. 9
    DrREC says:

    “unusual architectural structure”

    You recognize something about it as architecture.

    “cracking a previously-unknown communications code, such as was done in WWII”

    Pretty sure the Brits knew who was behind that, no design inference required. Doubt they thought the transmissions were natural. The Polish and capturing a few Enigma machines didn’t hurt.

    “determining homicide in an investigation”

    Again, pre-determined patterns. Something not conforming to expectations?
    Call X-files? Note also that forensics has a strict adherence to methodological naturalism, but perhaps we’ll save that for another thread.

  10. 10

    “Again, pre-determined patterns.”

    No, the precise pattern, which is the very thing that ultimately gets identified as designed, is not known. What you have answered is that those looking for a pattern analogized to other things that were recognized as designed. But the specific pattern in question was not known — let’s not be vague here and say “well, they kinda knew; they kinda expected.”

    You don’t like a WWII example. Fine, whatever. Pick whichever code you want: cuneiform, Egyptian hieroglyphs, any historian looking at an old, previously unknown text, doesn’t matter. I’m just giving an example for purposes of discussion. The question is simple: is it possible to crack a previously unknown code and recognize that it is designed? Has this ever happened? Of course it has.

    So your answer is yes, it is possible to identify design in these kinds of cases, but you would argue that it is because the investigator is able to analogize to other things he knows to be designed (something “conforming to expectations”), correct?

  11. 11
    DrREC says:

    “Egyptian hieroglyphs”

    Were solved because of a one-to-one mapping with a known language a la the Rosetta Stone. Don’t get your point here. Seems counter to it.

  12. 12

    “And Barry has already granted that post-hoc pattern recognition is excluded.”

    Not sure what you mean by “post-hoc”. Are you saying that Barry thinks it is impossible to identify a pattern from investigating an object/system/information if we don’t sit down and specify the pattern we are looking for beforehand?

  13. 13

    You are right, you don’t get the point. The fact that the code was cracked in some way doesn’t in any way negate the fact that the code wasn’t known initially. Don’t try to avoid the issue. The question is very simple: Is it possible to crack a previously uncracked code and recognize as a result that it was designed? Yes or no.

  14. 14
    Barry Arrington says:

    Okey dokey. We agree on the points I set out in comment 6. Let’s build on that foundation.

    Complex specified information (“CSI”) is information that is both “complex” (i.e., highly improbable) and “specified” (i.e., it conforms to a certain kind of pattern). The prime numbers in “Contact” are CSI. Without saying this in so many words, you conceded the point when you agreed that the “Contact” numbers are highly improbable and that they conformed to a pattern. You insisted that the pattern had to be given beforehand, but we’ll get to that later. The main point is that at least in this example you have conceded the basic insight of ID, that a design inference is warranted when an event exhibits CSI.

    Apparently your only quibble with ID is your insistence that the specification be designated prior to the investigation in order to avoid what you call circular reasoning. I will now demonstrate why your quibble is not justified.

    Let’s go back to Bruce’s example. He defines a “specification” as a type of pattern that can be “described independently of the actual pattern itself.” Note, I did not abandon Dembski’s definition. Bruce’s definition is the same thing stated in a simpler way. Importantly, Bruce continues: “And note, the pattern does not have to be described ahead of time; the requirement is just that it is capable of being described independently of the actual pattern itself.”

    You write: “Or as you call it a ‘specification’ not merely a pattern. Wondering what the criteria for that are.” Well, there you go. The criterion for a specification is that it is capable of being described independently of the actual pattern itself.

    Now, as someone holding a doctorate in biology you are no doubt familiar with the concept of a “rejection region” in statistics, i.e., the set of values for which we reject the null hypothesis. Let’s say we are testing a person who claims to be clairvoyant. We can test this claim by showing him the backs of 24 cards and asking him to say what suit each one is. The null hypothesis is that he is not clairvoyant and can do no better than chance. He would be expected to get 6 answers right on sheer dumb luck. The set of outcomes that would cause us to reject the null hypothesis is called the “rejection region.” So we could say that if he gets 12 or more answers correct we will conclude he is clairvoyant, and the numbers 12 and above would be the rejection region.
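For what it’s worth, the tail probability behind such a rejection region is easy to compute exactly; a minimal sketch using the 24-card, four-suit setup above:

```python
from math import comb

def binom_tail(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 24 cards, four equally likely suits: 6 correct expected by chance
print(binom_tail(24, 0.25, 12))  # probability of 12+ correct by luck: ~0.007
```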

    In statistics we always set the rejection region in advance. And I take it this is the source of your insistence that any “specification” be designated in advance. But a moment’s reflection shows that in design detection (as opposed to a confirmatory statistical data analysis), the pattern need NOT be set beforehand. Dembski gives the following example. Consider the following set of letters:

    nfuijolt ju jt mjlf b xfbtfm

    This appears to be a meaningless sequence of random letters until a Caesar cipher is applied (i.e., each letter is moved one position back in the alphabet). Then it becomes:

    methinks it is like a weasel.

    Here the pattern (the decrypted text) is given after the fact. Dembski writes: “In contrast to statistics, which always identifies its patterns before an experiment is performed, cryptanalysis must discover its patterns after the fact. In both instances, however, the patterns are suitable for inferring design.”
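Dembski’s cipher example is easy to check mechanically; a minimal sketch of the one-position Caesar shift (assuming lowercase input, as in the example):

```python
def shift_back(text: str, k: int = 1) -> str:
    """Decrypt a Caesar cipher by shifting each letter k positions back."""
    return "".join(
        chr((ord(c) - ord("a") - k) % 26 + ord("a")) if c.isalpha() else c
        for c in text
    )

print(shift_back("nfuijolt ju jt mjlf b xfbtfm"))  # -> methinks it is like a weasel
```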

    The same is true in a criminal forensic investigation. It is absurd for the investigator to try to impose a rejection region on the crime scene before his investigation. He takes the crime scene as he finds it, and only then does he look for patterns that indicate design (i.e., a crime).

    So, as promised, I will answer your question. You write: “What biology have you detected that conforms to a pre-specified pattern, not a post-hoc detection?”

    Here’s the answer. Your question is meaningless because it is based on a false premise, the premise that the patterns in biology from which we may detect design must be designated in advance of the investigation. As is the case with cryptanalysis and forensic investigation, we are perfectly justified in taking the patterns as we find them and making design inferences (or not) after the fact. The only issue is whether the information we find is both complex and conforms to a specification.

  15. 15
    DrREC says:

    I’m talking about design detection (distinguishing nature from design) here. I doubt the British receiving encrypted German communications (particularly after Polish intel and the capture of Enigma devices), or the French looking at the Rosetta Stone (which contained a language they knew well), thought they were dealing with naturally occurring objects.

    Odd response. Still not getting the point.

  16. 16
    DrREC says:

    Wow, after chasing me through three new threads (one with comments closed) I didn’t expect you to give up so soon.

    “it is based on a false premise, the premise that the patterns in biology from which we may detect design must be designated in advance of the investigation.”

    Is it? Really?

    “In statistics we always set the rejection region in advance.”

    So “the patterns in biology from which we may detect design” which you say are not designated in advance are nothing at all like a statistical science?

    I kinda suspected that.

    I’m picturing a young SETI investigator (and you guys luuuuvvve SETI (the other folks in the design detection business)) running to his boss and saying he’s discovered a designed signal, based not on the prime numbers, or digits of pi, but on the complex sequence he discovered. Which pulsars also produce. Naturally. Ooops. Cancel the call to CNN.

    So here, you argue biological ID can be detected based on the human patterns (designs) which describe nature. So “the patterns in biology from which we may detect design” can be designated after they are detected in the search for design. How wonderfully and pathetically circular. So four posts back, my original point stands.

    And here what evidence do you bring against it?

    “As is the case with cryptanalysis and forensic investigation, we are perfectly justified in taking the patterns as we find them”

    In cryptanalysis, has the question ever been natural vs. designed, or is it the breaking of a code known to be designed?

    Same for forensics-natural patterns vs. predefined/expected scenarios. Post-hoc rationalizations of seemingly natural scenarios might not pass reasonable doubt. Or logic. Kinda X-files/Torchwood stuff there. How would a jury react to that?

    So does the design detection in fcsi/csi/fiasco/fcsio? rely on post-hoc detection?

    If your answer is no, give me a single example to the contrary.

  17. 17
    Christian-apologetics.org says:

    Wow, great discussion! Please keep going.

  18. 18
    kairosfocus says:

    Dr REC:

    Pardon, but this question-begging strawman has been dealt with before, indeed over a decade ago.

    Let’s go back to Dr Dembski on CSI, in NFL:

    p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

    p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

    That should long since have been clear enough. My only concern is that the relevant UN-representative zones, T in the wider space of possible configs, W, may be quite large though relatively small compared to the config space. That is why in adjusting Dembski’s 2005 expression, I confined the 500 bits case to the solar system of some 10^57 atoms, as the number of Planck Time Quantum states would be of order 10^102.

    In that context, it is easy to show that a sample on the scope of 10^102 is comparable to drawing at blind chance plus equally blind mechanical necessity, one straw-size sample from a cubical hay bale 3 1/2 light days on the side. As a biologist, you are certainly familiar with sampling theory and therefore full well know that such a sample is maximally likely to come from the absolutely dominant bulk of typical possibilities, not the atypical ones, even if we are not just having a few needles in the haystack, but our Solar system out to Pluto. (To take in the observed cosmos as a whole, simply extend to 1,000 bits.)

    The needle in the haystack challenge, unsurprisingly, is proverbial.

    It is also aptly illustrative of what a “specification” is: a needle is not so much specified before the fact, as independent of the situation and is an objective, observable fact.

    That needle in the haystack challenge is the reason why an ATYPICAL zone, T, will be all but certainly unobservable on chance plus necessity in a sufficiently large space, within a given compass of search resources; our solar system being our “practical” universe, where also it takes about 10^30 PTQS’s for the fastest chemical reactions. Which, BTW [GD, kindly note], is foundational to the statistical grounding of the second law of thermodynamics.

    Further, function is a macro-observable, independently specifiable state of affairs. Namely: does it work in some way dependent on having a configuration of parts from a narrow and unrepresentative zone in the field of possible configs? (Your car part must not only be generically right, it must be specifically correct within a zone of tolerance, or it will not work.)

    Alphanumerical characters in posts in this thread could come from a vast field of possibilities. But, when we find them in strings that conform to contextually responsive messages in English, we have excellent reason to infer that that needle was not found in the haystack by chance and blind necessity. AND, THIS IS AN AFTER-THE-FACT OBSERVATION.

    It plainly warrants inference to intelligent author, not chance and/or mechanical necessity.

    Indeed, most contributors in this thread do not know one another face to face, we are inferring to specific authors in the first instance on FSCI, rather than the equivalent of a pile of rocks tumbling down a hillside on the Welsh border and by happenstance of lucky noise plus blind gravity forming the shape of the glyphs that spell out: Welcome to Wales.

    So, your whole argument has collapsed.

    Going further, much of this — in the context of exchanges at UD since March — is about a latterday attempt to pretend by drumbeat repetition of long since already answered talking points, that the log reduced chi_500 metric does not adequately answer to the sock puppet MG’s claim that CSI is improperly defined. So, let’s clip the just linked:

    . . . when a sufficiently small, chance based, blind sample is taken from a set of possibilities, W — a configuration space, the likeliest outcome is that what is typical of the bulk of the possibilities will be chosen, not what is atypical. And, this is the foundation-stone of the statistical form of the second law of thermodynamics.

    Hence, Borel’s remark as summarised by Wikipedia:

    Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

    In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack:

    g: As Abel estimated, there are perhaps 10^57 atoms in our solar system, which

    h: in 10^17 s [a plausible lifetime estimate of order 10 bn years] will undergo ~ 10^102 Planck Time quantum states (this being a lower limit to physical events, the fastest chemical reactions take ~ 10^30 such, and fast nuclear events take ~ 10^20), where

    i: a set of just 500 bits have 3.27* 10^150 possible configurations. So,

    j: the scope of possible blind search to that of possible outcomes is 1:10^48.

    k: This is comparable to taking a one-straw sized sample at random from a cubical haystack 2 1/2 light days on the side.

    l: Even if our solar system out to Pluto were lurking in the stack, by utterly overwhelming likelihood, the sample would be straw not anything else.
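The arithmetic behind steps g–l above can be checked directly; a sketch using the figures quoted in the comment:

```python
from math import log10

configs = 2 ** 500      # distinct 500-bit configurations
samples = 10 ** 102     # the Planck-time event bound quoted above

print(f"configs ~ 10^{log10(configs):.1f}")                     # 10^150.5, i.e. ~3.27e150
print(f"fraction sampled ~ 10^{log10(samples / configs):.1f}")  # ~10^-48.5, the 1:10^48 order
```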

    With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

    m: In 2005, Dembski provided a fairly complex formula, that we can quote and simplify:

    χ = – log2[10^120 · φ_S(T) · P(T|H)]. χ is “chi” and φ is “phi”

    n: To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)

    o: So, since 10^120 ~ 2^398, we may do some algebra as log(p*q*r) = log(p) + log(q ) + log(r) and log(1/p) = – log (p):

    Chi = – log2(2^398 * D2 * p), in bits

    Chi = Ip – (398 + K2), where log2 (D2 ) = K2

    p: But since 398 + K2 tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)] and

    q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

    Chi_500 = Ip*S – 500, in bits beyond a “complex enough” threshold

    (If S = 0, Chi = – 500, and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: A string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive.)

    r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

    s: The metric may be directly applied to biological cases:

    t: Using Durston’s Fits values — functionally specific bits — from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

    RecA: 242 AA, 832 fits, Chi: 332 bits beyond

    SecY: 342 AA, 688 fits, Chi: 188 bits beyond

    Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

    u: And, this raises the controversial question that biological examples such as DNA — which in a living cell is much more complex than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

    v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [[FSCO/I] . . .
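The three “bits beyond” figures in the table above are just the quoted Fits values run through the Chi_500 expression; a minimal sketch:

```python
def chi_500(i_p: float, s: int = 1) -> float:
    """Chi_500 = Ip*S - 500: bits beyond the 500-bit threshold (S is the 0/1 dummy)."""
    return i_p * s - 500

# Durston Fits values quoted above
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: {chi_500(fits)} bits beyond")  # 332, 188, 785
```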

    Let’s ask, just on this example: is biofunction of an AA sequence independently observable — say, as an enzyme — and/or measurable? Obviously, yes.

    Does the requirement of being from a specific zone T — an island of function — in the space of possible AA strings of the same or comparable length constitute a question-begging after-the-fact imposition? Patently not: we know that derangement of function is easy enough to achieve by breaking folding (fold domains being of order 1 in 10^70 or so of the AA space per recent empirical studies) or by removal of key functional groups.

    The “after the fact” objection falls apart, again.

    As has been repeatedly pointed out.

    What about, the imposing of a value 1/0 on S in the equation above, is subjective and question-begging? The answer is obvious, 0 is the DEFAULT and means that chance plus necessity can explain an outcome. As has been repeatedly pointed out — and just as repeatedly ignored in the attempts to use drumbeat talking points to drown it out — it is when we can find an objective, independent credible reason to identify an observed case E as coming from a narrow and atypical, separately describable zone T, that we identify S = 1. Just as we would adjust a macroeconomic model for being in a war by setting a similar dummy variable.

    I should also add that all human intelligent activity pivots on the fact of being a conscious subject who holds a differing view, i.e. the mere existence of a subject in the situation does not allow us to dismiss an inconvenient finding as merely subjective. That, is a thinly veiled ad hominem circumstantial.

    Instead, the challenge is to examine the warrant on the merits of fact, and logic.

    In this case, quite plainly, that challenge has long since — over a decade ago — been met. As cited from Dembski, BEFORE the controversy.

    In short, all of this debate has been a matter of tilting at strawmen.

    Cho, man, do betta dan dat!

    GEM of TKI

  19. 19
    kairosfocus says:

    F/N: And, since when is a string structure not a string structure, just because it is made of D/RNA monomers or AA’s that we observe in a living cell? KF

  20. 20
    kairosfocus says:

    F/N: Onlookers, you may want to work your way through the thought exercise here, to clarify the matter in your minds. I have very little confidence that sufficiently determined objectors to the design inference will be open to ANY argument and evidence, given the a priori controlling force of evolutionary materialism. They will only be silenced by the shifting force of a consensus that makes their view so obviously indefensible that prudence will dictate a change. KF

  21. 21
    kairosfocus says:

    F/N 2: Cf. here on, on a priori evolutionary materialism flying the false colours of science.

  22. 22
    Joe says:

    No, we wouldn’t say the signal from a pulsar was designed because it does NOT meet the criteria of an artificial signal.

    Specification is NOT the only criterion, and the EF makes that very clear.

  23. 23
    Joe says:

    I answered you already- no one can predict what any given designer will design next. Just as no one from your position can predict what mutation will occur next nor which mutation will be kept and spread.

  24. 24
    Joe says:

    Yes, there is a design inference required to break a code even if you know the sender: you do not know the message, and that is what you are trying to detect.

  25. 25
    M. Holcumbrink says:

    Actually, I find the discussion maddening. How is it that an intelligent man with Dr. REC’s background will bend over backwards and twist himself into pretzels to deny the clear design implications found in biological life? He is incorrigible, plain and simple, and it seems silly to me to carry on the conversation. Obviously, nothing can be said to turn him around.

    Hand-waving semiotic definitions are most unimpressive

    That says it all right there. This man would infer intelligence if he finds a single rune scrawled on a cave wall, but when we discover algorithmically compressed, hierarchically nested, multilayer encrypted machine code regulating and driving molecular compound machinery (replete with levers, wheels & axles, ramps, screws, etc.), his worldview forces him to chalk that up to a cosmic fart. He’s been brainwashed.

    Although… it is kind of fun to watch a few ID proponents make the “scientist” look just as dogmatic as a wild haired fundamentalist.

  26. 26
    ScottAndrews2 says:

    A pattern must be pre-specified in order to be truly specified?
    You’re correct in stating that we can’t just call anything specified.
    But what about this post, or yours? Who specified them before they were typed? The specific combination of characters does not match any specification.

    An unspecified arrangement of DNA molecules will result in absolutely nothing, every time, no exceptions.
    That a given arrangement results in the components of cells, the cells themselves, all of their functions, and their assembly into greater units which in turn perform additional functions is evidence that the arrangement is specified.

    You can dispute this and reason that I’m still arbitrarily applying the specification post-hoc. But then it becomes your position that there really is no difference between a sea urchin and a pile of lifeless, functionless proteins. It’s arbitrary, splitting hairs.

    Is that your position? If the specification is post-hoc and arbitrary, then is the difference between the arranged proteins of a living thing and a random collection of functionless proteins also just a post-hoc invention? Are they only different because we choose to see them that way?

  27. 27

    DrREC, don’t get hung up on how a code is broken. Every time a code is broken there is some kind of story about how the investigators finally figured it out. Doesn’t matter if they sat long nights staring at incomprehensible figures, or if they found some other clue, or if someone walked in the room and told them what to look for. The question remains, please don’t avoid it:

    Is it possible to crack a previously unknown code and recognize as a result that it was designed? Yes or no?

  28. 28

    Exactly, ScottAndrews2. We infer design all the time without knowing the exact prior specification beforehand.

  29. 29

    OT: Barry, I think there is some merit in keeping the discussion here, rather than opening up multiple threads, but I think once we’re done here you need to open up the other thread to comments so that DrREC can respond to that specific issue. He has already complained multiple times about the thread being closed and is obviously going to use that as a reason why he wasn’t able to win the day — the moderators took a pot shot and didn’t allow him to respond; the discussion was unfairly stacked. Please let him respond before too long so this moderating point can get off the table.

  30. 30
    Barry Arrington says:

    Irony can be defined as loudly proclaiming that you are being denied the chance to proclaim loudly. No, I wanted to single that comment out in its own OP. As the post says, anyone who wants to comment on it is welcome to do so in the first post.

  31. 31
    krtgdl says:

    Dear DrREC, what about this? A strange manuscript is found. It is doubtless medieval (carbon dating and so on). The author says it is divinely inspired. It contains 100 theorems, all false in Euclidean geometry and all true in some non-Euclidean geometry. It takes decades to interpret and prove all the theorems. Could this be considered specified information?

  32. 32
    DrREC says:

    First a few housekeeping things-

    I can’t reply to all of your comments.

    I don’t think we need new analogies. I think the pulsar example is sufficient. I’m interested in distinguishing design from non-design (nature), not Enigma codes or Rosetta stones (which no one ever suspected of being natural). I’m still waiting for someone to explain how they would detect design without a pre-established or independently known pattern.

    Anyone?

    I don’t believe I’m insane, brainwashed, genuflecting at the altar of darwinism, wicked or an idiot. Thanks for the insults and the kind treatment by the moderators. For the record, I declared the case of getting 10 flushes in a row highly improbable. It is.

    I will now show how KF’s metric assumes design:

  33. 33
    Petrushka says:

    Is the Voynich Manuscript specified? Does it contain information, or is it just gibberish?

  34. 34
    CJYman says:

    … and … Finals are done! I hope to be posting here again more often.
    I’ll start with saying that although I have not yet read through all the comments above, I have noticed that Dr. REC appears to have a problem with specifications vs. pre-specifications. I’m not sure if this has been dealt with yet, but Dr. Dembski has laid out the difference between the two forms of specification (ie: the predictive form of pre-specification vs. the inherent “meaningful/functional” form of specification) in his paper “Specification: the Pattern that Signifies Intelligence.” The “meaningful/functional” form of specification is basically the type of information that Upright Biped has been discussing lately (ie: requiring sender, receiver, protocol, and instantiation in physical medium yet not being defined by the physical properties of that medium).

  35. 35
    Petrushka says:

    An unspecified arrangement of DNA molecules will result in absolutely nothing, every time, no exceptions.

    I assume you mean the percentage of random sequences that are functional is low. Not that there are no exceptions.

    But you have no theory behind the distribution of function. You cannot independently distinguish a sequence that is one base pair from functionality from a purely random sequence.

    For that reason alone you cannot assert that the other words in the sequence are specified. If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    You have no theory of what the minimum functional sequence is, nor any theory that will tell you whether there are possibly synonymous sequences nearby in sequence space.

  36. 36
    DrREC says:

    “Using Durston’s Fits values — functionally specific bits — from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1”

    I) Use of Durston’s, or any related, metric imposes a post-hoc specification: a design in the search for design.

    Look at the tables of Fits. There is an estimate based on the length and the number of sequences. But sequences of what? A post-hoc specified design.

    Let’s do an easy one: insulin. They list 419 sequences. I get over 900 in a BLAST search. The difference is where the boundary is drawn: do we include “insulin-like” proteins that perform the same function? Insulin is a human concept with a homology cutoff based on similarity to human insulin. Insulin-like proteins that perform related functions get lopped out.

    So what has happened here:
    a) From a wide range of sequences, not just a function, but a FORM (a design) has been specified. Post-hoc.
    b) A cutoff based on similarity has been made
    c) From this narrowed range of sequence space, a metric is made that is then used in turn to detect design.

    This ignores:
    I) Evolution has no target, whereas this metric narrowly specifies one
    II) Other insulin and insulin-like sequences that perform the same function have been lopped out
    III) This isn’t a true mapping of functional space; it omits other proteins not yet sequenced, or that don’t even exist in nature, that could substitute

    Take RecA, as another example. They only consider homologues above a certain detection threshold. They only consider RecA, but not other recombinases. This ignores other proteins that can substitute in functional space. Unlike insulin, taking the whole of RecA also ignores that it has functional subparts, which would have lower Fits. So there is a specification in the search for design. You’re querying design with design.

    II) Sequence-based specificity metrics are fundamentally flawed

    A) The method estimates the functional portion of sequence space from known sequences that code for a given protein. This will significantly underestimate the functional space when connecting it to sequence space:

    i) Evolution has not explored all of sequence space
    ii) Humans haven’t sequenced all of biology
    iii) Most sequences are from evolutionarily related organisms
    iv) Most sequences are of unknown function, and therefore there could be many islands of sequence space that perform the same function

    B) Conversely, in the case of a small, novel family, the observed sequence variability is very low, and the method would detect design. Basically anything new (sequence space = 1) is declared design.

    C) It has no consideration of evolutionary paths. The number of fits for some whole domains and enzymes is well below the universal probability bound. If two of these recombine (which is almost a given in some organisms) the number would be pushed over the universal probability bound. So apparently natural+natural undergoing a natural (and highly probable) process yields design detection.

    III) As the previous example shows, what really matters isn’t the state, but the change of FSC over time. The paper states repeatedly, “The measured value ζ of a biosequence S can change over time with mutation events.” Figure 1 is “Changing measure of FSC over time.” So the relevance to ID is finding an event where FSC increased past the universal probability bound at once. Which you simply don’t have. Design has never been detected in biology.
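    DrREC’s boundary-sensitivity point can be made concrete with a toy calculation. The sketch below is a simplified Durston-style functional-bits estimate (per alignment column, log2(20) minus the Shannon entropy of the observed residues) applied to invented four-residue “alignments”; it is not Durston’s actual procedure or data, and only illustrates that widening the family boundary lowers the estimated Fits.

    ```python
    # Toy Durston-style "Fits" estimate: for each alignment column,
    # functional bits = log2(20) - Shannon entropy of observed residues.
    # The alignments are invented four-residue examples, not real proteins.
    from math import log2
    from collections import Counter

    def fits(alignment):
        n = len(alignment)
        total = 0.0
        for col in zip(*alignment):
            counts = Counter(col)
            h = -sum((c / n) * log2(c / n) for c in counts.values())
            total += log2(20) - h
        return total

    # Narrow family boundary: near-identical sequences, high Fits.
    narrow = ["MKVL", "MKVL", "MKVI", "MKVL"]
    # Wider boundary: more divergent members admitted, lower Fits.
    wide = narrow + ["MRAL", "LKVF", "MQVI", "ARVL"]

    print(round(fits(narrow), 2), round(fits(wide), 2))
    ```

    Moving where the family boundary is drawn changes the number the metric reports, which is the point being argued over.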

  37. 37
    krtgdl says:

    And if a ‘genetic alphabet’ existed that allowed us to translate the genome into an unknown poem, would you deem DNA designed?

  38. 38
    Petrushka says:

    That sounds like Bible code to me.

    You can write a function to translate anything into anything else.

  39. 39
    krtgdl says:

    I mean an alphabet of bases (of the type ‘G’, ‘GGA’, ‘TT’, etc.)

  40. 40
    M. Holcumbrink says:

    Is the Voynich Manuscript specified? Does it contain information, or is it just gibberish?

    We don’t know, but we do know that SOMEONE wrote it, with the intent to do so, gibberish or not.

  41. 41
    M. Holcumbrink says:

    Point being, with said manuscript, there is at least an appearance of specification (use of syntax, symbols), whether we know what it specifies or not. With biology, we *know* that symbols and syntax are being utilized, and we know what they specify. And we also know that it is algorithmically compressed, hierarchically nested, uses multilayered encryption, and regulates machinery, just like any modern computer would.

  42. 42
    DrREC says:

    “We also know that it is algorithmically compressed, hierarchically nested, uses multilayered encryption,”

    We do? Could you give an example of each in biology?

  43. 43
    lastyearon says:

    An unspecified arrangement of DNA molecules will result in absolutely nothing, every time, no exceptions.
    That a given arrangement results in the components of cells, the cells themselves, all of their functions, and their assembly into greater units which in turn perform additional functions is evidence that the arrangement is specified.

    You can dispute this and reason that I’m still arbitrarily applying the specification post-hoc. But then it becomes your position that there really is no difference between a sea urchin and a pile of lifeless, functionless proteins. It’s arbitrary, splitting hairs.

    Again, you are assuming that there is some fundamental difference between life and other arrangements of matter. Just because human beings belong to the category of the former, does not mean that anyone or anything intended it. And so it is not warranted to say that the complex functions that result in what we call life are ‘specified’.

  44. 44
    ScottAndrews2 says:

    Gee, when you put it that way drawing a trillion atoms from the deck and getting a sea urchin doesn’t sound so strange.

    I can’t say that DNA code symbolically representing the proteins and regulation that produce an egg and grow it all the way into a reproducing sea urchin is specified because ‘I have no theory behind the distribution of function’?

    You cannot independently determine a sequence that is one base pair from functionality from a purely random sequence.

    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    Wow, what a corner you are painting yourself into. You’re saying that if I can randomly substitute a gene in a sea urchin, not knowing which gene, and still have a sea urchin, then I have no way of knowing whether there was ever any functional information to start with.

    Except, you know, the sea urchin.

    You seem to be reasoning that when determining whether a sequence is specified or not, we should disregard the output of that sequence if we don’t precisely understand the correlation of the sequence to the output. Because.. why? The visible, undeniable evidence that the sequence does correlate to a functional output is inconvenient and your case makes more sense without it?

    Let’s put this another way.

    If you give instructions to a man and he translates them into a language you don’t understand for a second man, who then follows most of them but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.

    Am I misunderstanding? To repeat, you said,

    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    How does your statement not apply in the above example? You’re asserting something that’s ridiculous if you just think about it for a minute.

  45. 45
    kairosfocus says:

    Dr Rec:

    By now, it should be clear that you are imposing the a prioris, not me.

    Let’s look at your:

    I) Use of Durston’s, or any related metric imposes a post-hoc specification-a design in the search for design.

    Look at the tables of Fits. There is an estimate based of the length, and number of sequences. But sequences of what? A post-hoc specified design

    Really, now. A protein family is observed, and its variability while retaining function is used to quantify the info in the AA sequence. We have a macro-observable state that asks only: does it do job X in living systems. That is more than sufficiently independent.

    The redundancy in the strings reduces the bit value from 4.32 per AA residue.

    After the reduction, the number of functional bits is totted up. A comparison to the threshold then tells us what you obviously do not wish to hear: a functional family that is isolated in AA string config space, doing a job that is specific to the string sequence, is not likely to have been come upon by blind processes.

    So we see a selectively hyperskeptical objection.

    The only problem: this is the same challenge as explaining functional text by blind forces.

    Cicero could spot the problem c. 50 BC, and so can we today, provided we are not blinkered by materialist a prioris.

    GEM of TKI
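    For onlookers: the “reduced chi metric” being argued over here has the simple form Chi_500 = I·S − 500, with I the information in functional bits (e.g. Durston-style Fits) and S = 1 when functional specificity is accepted. The sketch below uses invented Fits values purely for illustration; they are not quoted from Durston’s Table 1.

    ```python
    # Sketch of the reduced-chi metric discussed in the thread:
    # Chi_500 = I * S - 500, with design inferred when Chi_500 > 0.
    def chi_500(fits, s=1):
        """fits: information in functional bits; s: 1 if specific, else 0."""
        return fits * s - 500

    # Illustrative (invented) Fits values, not Durston's published numbers.
    for name, i in [("long multi-domain protein", 780), ("short novel protein", 240)]:
        chi = chi_500(i)
        print(name, chi, "-> design inferred" if chi > 0 else "-> below threshold")
    ```

    The whole dispute in this thread is over whether I (the Fits value) is measured independently or is itself a post-hoc specification.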

  46. 46
    Petrushka says:

    we *know* that symbols and syntax are being utilized, and we know what they specify.

    Then you can answer my question about how to distinguish a sequence that is one base pair from being functional, from a collection of randomly generated sequences.

    After all, that’s one of the characteristics of syntax, recognizable words and sequences of words.

  47. 47
    M. Holcumbrink says:

    This ignores:
    I) Evolution has no target, whereas this metric narrowly specifies one
    II) Other insulin and insulin-like sequences that perform the same function have been lopped out
    III) This isn’t a true mapping of functional space; it omits other proteins not yet sequenced, or that don’t even exist in nature, that could substitute

    This is where the work of D. Axe & others becomes important. What exactly is the best estimate of this functional space? Based on their work, best as I can tell, this functional space is *very* narrow.

    With that said, I find this to be analogous to the macro world of mechanical components. These components as a necessity must be defined with tolerances. There are therefore ranges of sizes, profiles, orientations and form for particular features that are functional (i.e. a functional space). So for less critical features, the tolerances are relatively loose (larger functional space), but for the business end of these components, the tolerances need to be relatively tight (smaller functional space). And if these tolerances can accommodate certain components being utilized in other systems, with only very minor adjustments, then it is the engineer’s duty to factor that in as well (which is certainly the case for engineered components at the macro level). And best as I can tell, this is exactly what we see in molecular biological components (e.g. horizontal gene transfer).

    Maybe “brainwashed” and “incorrigible” were a little harsh. But that’s sure what it looks like. You are at least stuck in a rut, it seems to me.

  48. 48
    Petrushka says:

    The problem with the “very narrow” argument is that when it has been tested, as in the work of Thornton, there are bridges between variant versions of sequences in living organisms.

    It doesn’t matter how sparse functional space is if it is bridgeable.

  49. 49
    ScottAndrews2 says:

    I’m still waiting for someone to explain how they would detect design without a pre-established or independently known pattern.

    How about the storage of symbolic information arranged in sequences that, when translated using a specific protocol, produce components that interact with each other as if it were known in advance, when the information was first stored, what the output would be when translated using that protocol?

    I’m pretty sure that’s a pattern. I’m referring, not to the arrangements of DNA, but to the behavior of the entire system. Does anything else do that? Are there any manufacturing processes that start with an abstract representation of components, that, when manufactured, function together?

    It’s quite simple. Detecting design does not involve matching the finished product to a previous specification, or arbitrarily determining that the finished product was specified. It is the presence and application of abstract information itself that is a pattern.

  50. 50
    DrREC says:

    “A protein family is observed”

    And you go out onto the archery field and draw a bull’s-eye around it, as though that were an intended target. Further, you define what belongs in that family, and what doesn’t.

    “its variability while retaining function is used to quantify the info in the AA sequence”

    No response whatsoever to the flaws in that technique.

    “Function” is also defined by you. The only function evolution cares about is an increase in fitness.

    “a functional family”
    What about all other families that could perform that function, and all potential families?

    If you really believe in this method, you should at least be able to answer this from above:

    “The number of fits for some whole domains and enzymes is well below the universal probability bound. If two of these recombine (which is almost a given in some organisms) the number would be pushed over the universal probability bound. So apparently natural+natural undergoing a natural (and highly probable) process yields design detection.”

  51. 51
    ScottAndrews2 says:

    I have a six-foot wooden plank. I’ve demonstrated repeatedly that I can use it to bridge seemingly impassable gaps. I have video.

    I’ve also used it to walk to Hawaii and back. Don’t laugh. I told you it serves as a bridge between points. Don’t ask me to show you, either. Just go with the extrapolation.

  52. 52
    Petrushka says:

    If you give instructions to a man and he translates them into a language you don’t understand for a second man, who then follows most of them but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.

    It’s your problem, not mine.

    My argument is that functional sequences are what they are because they have been built incrementally. I recognize that this requires a sequence space that supports bridges and ridges. That’s why work like Thornton’s is important. If there is no way to improve sequences incrementally, evolution is impossible.

    ID seems to argue that there is a shortcut to evolution, that one can somehow design a long functional sequence without building it incrementally.

    If so, one should be able to distinguish a sequence that has 499 out of 500 words in place, without testing it in a living system.

    It seems to me, however, that ID has no theory of functional sequences, or of how to recognize or design them without doing the chemistry. No theory that specifies the shortest possible functional sequence, and no theory of how to design them without starting with known functional sequences.

    Gpuccio’s calculation involves a step function. A sequence is either functional or not functional. If it’s functional he declares that all the bits are necessary, but he has no theory of why this should be the case. He has no theoretical reason why a sequence could not be the current state of a series of incremental changes.

    To make this kind of assertion, one would need to be able to demonstrate that a sequence X, that is one character short of functional, has X – 1 characters specified.

  53. 53
    DrREC says:

    There is similar work that gives much lower values, like 1 in 10^24 (Taylor et al., PNAS 98, 10596-10601, 2001).

    Big number, but with 10^30 or so bacteria on earth, each with thousands of genes, times many generations…

    Axe’s work doesn’t explore all of sequence space, or take into account other families with similar activities.

    As Petrushka points out, it also doesn’t show the functional sequences are isolated or impossible to reach by evolution.
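    DrREC’s order-of-magnitude point can be checked in a few lines. The generation count below is an assumed round number for illustration (only the orders of magnitude matter); the 1 in 10^24 figure and the 10^30 bacteria are the numbers quoted in the comment above.

    ```python
    from math import log10

    p_functional = 1e-24   # 1 in 10^24, the Taylor et al. figure quoted above
    bacteria = 1e30        # rough global bacterial population, as cited
    genes_per_cell = 1e3   # "thousands of genes"
    generations = 1e4      # assumed round number, for illustration only

    trials = bacteria * genes_per_cell * generations
    hits = trials * p_functional
    print(f"trials ~ 10^{log10(trials):.0f}, expected hits ~ 10^{log10(hits):.0f}")
    ```

    Even with a conservative generation count, the number of sequence trials dwarfs the quoted 1 in 10^24 rarity, which is the thrust of the comment.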

  54. 54
    Petrushka says:

    I have a six-foot wooden plank. I’ve demonstrated repeatedly that I can use it to bridge seemingly impassable gaps. I have video.

    You have no theory to back up your characterization of functional space as unbridgeable. Comparative genomics indicates that sequences of cousin species have differences that form a nested hierarchy.

    When actual gaps have been tested, there are viable intermediate sequences.

  55. 55
    ScottAndrews2 says:

    Petrushka,

    It’s your problem, not mine.

    No, really, it’s your problem, because you said

    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    To test whether you actually believe this or are applying it selectively, I posed a simple scenario:

    If you give instructions to a man and he translates them into a language you don’t understand for a second man, who then follows most of them but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.

    In this scenario, because you do not speak the language, you cannot tell which information is degraded, nor can you distinguish any of it from random gibberish. It seems to test your assertion perfectly: “you cannot distinguish a degraded functional sequence from a random sequence.”

    Your assertion is simple, and so is my illustration, which demonstrates that what you assert is false.

  56. 56
    M. Holcumbrink says:

    We do? Could you give an example of each in biology?

    Alternative splicing is algorithmic compression, e.g. splicing together portions of code from various genes to make a new sequence of code. Regulated stepwise procedures (algorithms) are required to make this happen. And any time you have a gene that makes protein A, then you jump over one base pair to code for protein B, you have multilayered encryption. Do you deny that this happens in the cell?
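    The “jump over one base pair” idea, overlapping reading frames, can be shown with a toy snippet. The DNA string and the truncated codon table below are invented for illustration (real overlapping genes do occur, notably in viruses); whether this counts as “encryption” is exactly what is in dispute in this exchange.

    ```python
    # Toy example: the same invented DNA string read in two different
    # frames yields two different peptides. Codon table truncated to
    # the codons actually used; '*' marks a stop codon.
    CODONS = {"ATG": "M", "CTA": "L", "AAT": "N", "TGC": "C", "TAA": "*"}

    def translate(dna, frame):
        return "".join(CODONS.get(dna[i:i + 3], "?")
                       for i in range(frame, len(dna) - 2, 3))

    dna = "ATGCTAAAT"
    print(translate(dna, 0))  # frame 0 reads ATG CTA AAT
    print(translate(dna, 1))  # frame 1 reads TGC TAA
    ```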

  57. 57
    M. Holcumbrink says:

    Then you can answer my question about how to distinguish a sequence that is one base pair from being functional, from a collection of randomly generated sequences

    The resultant product of feeding the sequence into the processing equipment (which embodies the protocol) is what distinguishes a random string from a functional string. If it’s only one base pair off, I would imagine that the output would still be considered functional, if tolerances allow.

  58. 58
    kairosfocus says:

    Really! Just now, can you report to us on how you composed your rebuttal post by chance keystrokes that got lucky . . .?

    Case over.

  59. 59
    Upright BiPed says:

    Welcome back CJYman

  60. 60
    kairosfocus says:

    P,

    You sound like someone who has never had to design or implement a complex object that had to get mutually fitting parts to work.

    Recall that 3 of the 64 possible codons mean STOP. Then tell us how chance variations of codons can write complex proteins of novel function, and regulate their production and transport to the right site, where, lo and behold, all is in place waiting for them.

    Then, recall also that the proportion of proper fold domains is like 1 in 10^70 of the AA sequence space.

    The algorithmic challenge of getting to functional sequences is real, and tightly constraining. Just as are the challenges of getting to a viable post in English by blind processes.

    These are obvious to all, save those who have long since swallowed the materialist a prioris that tell them what MUST have been so.

    As I noted above, Cicero nailed it c 50 BC.

    Next time you want to promote the equivalent of a perpetual motion machine, my answer is the same: SHOW and then tell.

    GEM of TKI
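    KF’s “3 of 64 codons mean STOP” figure implies a quick back-of-envelope check: in a uniformly random codon stream, the expected run of non-stop codons before the first stop is (1 − 3/64)/(3/64) = 61/3 ≈ 20.3 codons, far shorter than typical proteins. A small simulation (an illustration of that arithmetic, not a model of any real genome) confirms it:

    ```python
    import random
    random.seed(0)

    P_STOP = 3 / 64                   # 3 stop codons out of 64
    expected = (1 - P_STOP) / P_STOP  # geometric mean = 61/3 ~ 20.3 codons

    def orf_length():
        """Count uniformly random codons drawn before the first stop."""
        n = 0
        while random.random() >= P_STOP:
            n += 1
        return n

    trials = 100_000
    mean_len = sum(orf_length() for _ in range(trials)) / trials
    print(round(expected, 1), round(mean_len, 1))
    ```

    This only shows what random codon streams do; it takes no side on whether evolutionary search is well modeled by uniform random draws.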

  61. 61
    kairosfocus says:

    Dr Rec:

    If you want us to accept the equivalent of a perpetual motion machine, SHOW us complex protein origination or similar origin of functionally specific strings by blind processes, then tell us.

    All you are telling me absent such, is that you are under the control of an a priori that blocks you from acknowledging how hard it is to get to functionally specific sequences by blind processes.

    If you saw rocks tumbling down a hillside, would you be surprised to see them, say, falling out in the pattern “Cicero was right”?

    Why or why not?

    KF

  62. 62
    DrREC says:

    “SHOW us complex protein origination or similar origin of functionally specific strings by blind processes, then tell us.”

    “Cross-species analysis revealed interesting evolutionary paths of how this gene had originated from noncoding DNA sequences: insertion of repeat elements especially Alu contributed to the formation of the first coding exon and six standard splice junctions on the branch leading to humans and chimpanzees, and two subsequent substitutions in the human lineage escaped two stop codons and created an open reading frame of 194 amino acids. We experimentally verified FLJ33706’s mRNA and protein expression in the brain.”

    http://www.ploscompbiol.org/ar.....bi.1000734

  63. 63
    DrREC says:

    9.3.1.1.1 kairosfocus

    “Really! Just now, can you report to us on how you composed your rebuttal post by chance keystrokes that got lucky . . .?

    Case over.”

    Is there any doubt I designed my post? I see you aren’t intelligently defending your reduced chi metric.

  64. 64
    Upright BiPed says:

    …what all this takes for granted (assuming the speculation) is the system of coordinated physical protocols that allows the transfer of information in the first place.

    Do you have any examples of the onset of those?

  65. 65
    Petrushka says:

    It’s true that it is possible to translate a statement into gibberish. That is what’s done in encryption.

    But that’s not my scenario. I’m asking if you can distinguish an unencrypted sequence with one character out of place from a random sequence.

    I do not see functional DNA sequences as random. My theory is that they have been built incrementally.

    What’s the alternative? If you cannot build them incrementally, how do you go about finding functional sequences?

  66. 66
    ScottAndrews2 says:

    Petrushka,

    You said, “you cannot distinguish a degraded functional sequence from a random sequence.”

    It is not possible to “translate” anything into gibberish, only into something with the appearance of gibberish. But no one is even talking about translating anything into gibberish.

    Here’s my scenario again to compare against your statement:

    If you give instructions to a man and he translates them into a language you don’t understand for a second man, who then follows most of them but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.

    In this case you cannot distinguish any of it from a random sequence, because you do not understand it.
    Unless, that is, you discern its specification by its functional effect, which is clearly possible even though it is likely a degraded functional sequence.

    Again, your statement is simple. “You cannot distinguish a degraded functional sequence from a random sequence.” My illustration is also simple and shows that your statement is clearly wrong. Without understanding the language at all, you can tell that it is not random because it conveys at least some of your specifications, even if the content is degraded.

    Perhaps what you mean to say is that you can determine the presence of functional content but not measure it. But that doesn’t matter either.

    I could argue that your posts contain no functional content and that to prove otherwise you must rigorously calculate that content. I will inevitably find fault with your calculation and insist that if you cannot measure the bits of information, then it is not specified. I may find a typo and quibble over whether it renders the post non-specified because you didn’t intend to type it.

    Wouldn’t that be a really silly way to determine whether something contains complex, specified information?

  67. 67
    Upright BiPed says:

    lastyear,

    Humans do not specify which products nucleic acid sequences result in; the physical protocols instantiated in the genetic translation machinery do that.

    We just came along later and observed it. But the fact that we observed it does not change the fact that the specification is built into the system. To suggest otherwise could not be a more anthropocentric statement.

  68. 68
    ScottAndrews2 says:

    You have no theory to back up your characterization of functional space as unbridgeable.

    And I need one why? There are an infinite number of implausible ideas I have no theories to disprove. Should they all be taken seriously until I get around to disproving them?

    When actual gaps have been tested, there are viable intermediate sequences.

    I’ve bridged every gap I’ve tested with my six-foot plank. That’s how I know I can walk on it to Hawaii.

  69. 69

    DrREC:

    I’m still waiting for someone to explain how they would detect design without a pre-established or independently known pattern.

    Well, it has been pretty well laid out by Dembski and Meyer, but I have a hunch you may not accept their explanation, so I’m not sure that will convince you.

    “Pre-established” and “independent” are different issues, so perhaps part of the difficulty is that you may be confusing these concepts? Independence is an important factor, in that the specification has some meaning or function beyond the pure description of the physical system. That is why the prime numbers example is recognized as a specification: prime numbers have meaning beyond just the description of the string of digits themselves.

    Pre-established, however, is not a requirement. Since you haven’t answered (or perhaps haven’t seen) the question I posed, I will answer it. Yes, it is possible to recognize a specification and subsequently determine design even if we don’t know the precise specification we should be looking for from the outset. We do it all the time in our regular everyday experience.

    The idea that we have to identify and articulate the specification beforehand leads to outrageous and absurd conclusions. By that logic, we can never know if Stonehenge, the statues on Easter Island or any other never-before-seen thing is designed (we certainly didn’t know the specification beforehand). Or consider the following example, based on that faulty logic:

    Two research colleagues are working to decipher a code that has not been deciphered before. The researchers work independently on separate strings of the same code for days, without success. One afternoon Researcher A bumps into Researcher B at the water cooler. Researcher B excitedly tells Researcher A that as he was looking at the symbols in a certain way he finally figured out the code and tells Researcher A what to look for. Researcher A returns to his office, lays out the symbols and, sure enough, everything falls into place.

    Now, based on DrREC’s logic, we have the following absurd result: Researcher A can rightfully and validly claim that the code was designed because he had a specification to look for when he walked back into his office after the water cooler conversation. However, Researcher B can never conclude that the code was designed, because he discovered the code without having a pre-specification in mind.

    Pre-specification is not a requirement. We infer design all the time without knowing beforehand what the precise specification will be.

  70. 70

    Is there any doubt I designed my post?

    Of course not. Because it contains a specification: a meaning beyond the mere description of the letters (electrons in this case) themselves. And we recognized that pattern after we saw it, not because we knew exactly what you would write.

  71. 71
    John D says:

    Motors were known to be designed BEFORE they were found in cells.

  72. 72
    DrREC says:

    Molecular ‘motors’ are an analogy drawn to human design.

    Now you’ve taken the analogy too far.

  73. 73
    DrREC says:

    “And we recognized that pattern after we saw it,”

    Because it conforms to a preset specification: the English language.

  74. 74
    DrREC says:

    Yeah, I said “pre-established or independently known.”

    Or has meaning.

    All your examples are pre-established: they conform to expectations of human design or codes, and distinguishing nature from design isn’t in question.

    Seriously, do you think some explorer could wander up on Easter Island and say those look natural? Or would the knowledge of statues in human design be sufficient?

  75. 75
    John D says:

    Oh? They look like motors, they function like motors, they have the same types of parts that motors have, BUT…. they aren’t motors because we know motors are designed. GOT IT!

    I see 4 definitions at dictionary.com that molecular motors fit.

  76. 76
    kairosfocus says:

    Dr Rec:

    Pardon, but do you understand the difference between an observation and an assumption?

    Let’s take a microcontroller object program for an example.

    Can you see whether the controlled device with the embedded system works? Whether it works reliably, or whether it works partially? Whether it has bugs — i.e. we can find circumstances under which it behaves unexpectedly in non-functional ways, or fails?

    Can you see that we can here recognise that something is functional, and may even be able to construct some sort of metric of the degree of functionality?

    Now, we observe that the microcontroller depends on certain stored strings of binary digits, and that when some are disturbed by injecting random changes it keeps on working, but beyond a certain threshold, key functions or even overall function break down.

    This identifies empirically that we are in an island of function.
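    A toy version of this bit-flip experiment can be run directly. The sketch below is an editorial illustration under loud assumptions: a short ASCII string stands in for the stored program, and “functionality” is just the fraction of the original vocabulary still recoverable. It shows measured function holding up under light corruption and collapsing as random changes accumulate.

```python
import random

MESSAGE = b"add one to the counter and store the result in memory"
WORDS = set(MESSAGE.decode().split())

def functionality(data: bytes) -> float:
    """Fraction of the original vocabulary still recoverable (a toy metric)."""
    try:
        tokens = set(data.decode("ascii").split())
    except UnicodeDecodeError:
        return 0.0  # undecodable bytes count as catastrophic failure
    return len(tokens & WORDS) / len(WORDS)

def flip_random_bits(data: bytes, n: int, rng: random.Random) -> bytes:
    """Inject n random single-bit changes into a copy of the data."""
    buf = bytearray(data)
    for _ in range(n):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

rng = random.Random(0)
for n in (0, 2, 8, 32, 128):
    # Average measured functionality over 50 trials at each corruption level.
    avg = sum(functionality(flip_random_bits(MESSAGE, n, rng))
              for _ in range(50)) / 50
    print(n, round(avg, 2))
```

    Past a modest number of flips the average collapses toward zero: an in-miniature analogue of falling off an island of function.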

    [As a live case in point, here at UD, last week I had the experience of discovering a “feature” of WP, i.e. if you happen to try square brackets — like I am using here — in a caption for a photo the post display process will fail to complete and posting of the original post, but not comments, will abort. I suspect that’s because square brackets are used for certain functional tasks and I happened to half-trigger some such task, leading to an abort.]

    Do you now appreciate that we can empirically detect FSCI, and in particular, digitally coded FSCI?

    Do you in particular see how the constraints on — in this case — strings of algorithmically functional data elements naturally lead to the islands-of-function effect?

    That, where we see functional constraints in a context of complex function, this is exactly what we should EXPECT?

    For, parts have to fit into a context of a Wicken-type “wiring diagram” for the whole to work, and absent the complex, mutually adapted set of elements wired on that diagram for that case, the system will wholly or partly degrade. That is, we see here the significance of functionally specific, integrated complex organisation. It is a commonplace of the technology of complex, multi-part, functionally integrated, organised systems, that function depends on fairly specific organisation, with a bit of room for tolerance, but not very much relative to the space of configurational possibilities of a set of components.

    And, we may extend this fairly simply to the case where there are no explicit strings, by taking the functional diagram apart on an exploded view, and reducing the information of that 3-D representation and putting it in a data structure based on ordered, linked strings. That is what Autocad etc do. And of course the assembly process is generally based on such an exploded view model.

    (Assembly of a complex functional system based on a great many parts with inevitable tolerances is in itself a complex issue, riddled with the implications of tolerances of the many components. Don’t forget the cases in the 1950s where it was discovered that just putting a bolt in the wrong way on (I think it was) the F-86 could cause fatal crashes. Design for one-off success is much less complex than design for mass production. And, when we add in the issue in biology of SELF-assembly, that problem goes through the roof!)

    In short, we can see how FSCO, FSCI, and irreducible complexity emerge naturally as concepts summarising a world of knowledge about complex multi-part systems.

    These things are not made-up, they are instantly recognisable and understandable to anyone who has had to struggle with designing and building or simply troubleshooting and fixing complex multi-part functional systems.

    BTW, this is why I can only shake my head when I hear talking points over Hoyle’s fallacy, when he posed the challenge of assembling a jumbo jet by passing a tornado through a junkyard.

    Actually — and as I discussed recently here in the ID foundations series (notice the diagram of the instrument), we may take out the rhetorical flourish and focus on the challenge of assembling a D’Arsonval galvanometer movement based instrument in its cockpit. Or even the challenge of screwing together the right nut and bolt in a bowl of mixed parts, by random agitation.

    And, BTW, the just linked shows how Paley long since highlighted the problem with the dismissive “analogy” argument, when in Ch 2 of his work, he pointed out the challenge of building a self-replicating watch:

    Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself – the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose . . . .
    The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which, was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use.
    [Emphases added. (Note: It is easy to rhetorically dismiss this argument because of the context: a work of natural theology. But, since (i) valid science can be — and has been — done by theologians; since (ii) the greatest of all modern scientific books (Newton’s Principia) contains the General Scholium which is an essay in just such natural theology; and since (iii) an argument’s weight depends on its merits, we should not yield to such “label and dismiss” tactics. It is also worth noting Newton’s remarks that “thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy [i.e. what we now call “science”].” )]

    In short, the additionality of self-replication of a functioning system is already a challenge. And Paley was of course too early by over a century to know what von Neumann worked out on his kinematic self-replicator, which uses digitally stored information in a string structure to control self-assembly and self-replication. (Also discussed in the just-linked post, onlookers.)

    On the strength of these and related considerations, I then look at, say, Denton’s description (please watch the vid tour, then read) of the automated multi-part functionality of the living cell:

    To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.

    We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . .

    Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell’s manufacturing capability is entirely self-regulated . . . .

    [[Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331. This work is a classic that is still well worth reading. Emphases added. (NB: The 2009 work by Stephen Meyer of Discovery Institute, Signature in the Cell, brings this classic argument up to date. The main thesis of the book is that: “The universe is comprised of matter, energy, and the information that gives order [[better: functional organisation] to matter and energy, thereby bringing life into being. In the cell, information is carried by DNA, which functions like a software program. The signature in the cell is that of the master programmer of life.” Given the sharp response that has provoked, the onward e-book responses to attempted rebuttals, Signature of Controversy, would also be excellent, but sobering and sometimes saddening, reading.) ]

    We could go on and on, but by now the point should be quite clear to all but the deeply indoctrinated.

    [ . . . ]

  77. 77
    kairosfocus says:

    Namely, we have every reason to see why complex, integrated functionality on many interacting parts naturally leads to islands of functional configurations in much wider spaces of possible but overwhelmingly non-functional configurations. (And, this thought exercise will rivet the point home, in a context that is closely tied to the statistical underpinnings of the second law of thermodynamics.)

    Clearly, it is those who imply or assume that we have instead a vast continent of function, traversable incrementally step by step starting from simple beginnings and credibly getting us to a metabolising, self-replicating organism, who have to empirically show their claims.

    It will come as no surprise to the reasonably informed that the origin of cell-based life is neatly snipped out of the root of the tree of life, precisely because after 150 years or so of speculations on Darwin’s warm little pond full of chemicals and struck by lightning, etc., the field of study is in crisis.

    Similarly, the astute onlooker will know that the general pattern of the fossil record and of today’s life forms is that of sudden appearance, stasis, sudden disappearances and gaps, not at all the smoothly graded overall tree of life as imagined. Evidence of small-scale adaptations within existing body plans has been grossly extrapolated and improperly headlined as proof of what is in fact the product of an imposed philosophical a priori, evolutionary materialism. That is why Philip Johnson’s retort to Lewontin et al was so cuttingly, stingingly apt:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. [–> those who are currently spinning toxic, atmosphere-poisoning, ad hominem-laced talking points about “sermons” and “preaching” and “preachers” need to pay particular heed to this . . . ] The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    So, where does this leave the little equation accused of being question-begging:

    Chi_500 = I*S – 500, bits beyond the solar system threshold

    1 –> The Hartley-Shannon information metric is a standard measure of info-carrying capacity, here being extended to cover a case where we must meet some specifications and pass a threshold of complexity.

    2 –> the 500 bit threshold is sufficient to isolate the full Planck Time Quantum State [PTQS] search capacity of our solar system’s 10^57 atoms, 10^102 states in 10^17 or so seconds, to ~ 1 in 10^48 of the set of possibilities for 500 bits: 3 * 10^150.

    3 –> So, before we get any further, we know that we are looking at so tiny a fractional sample that (on well-established sampling theory) ANYTHING that is not typical of the vast bulk of the distribution is utterly unlikely to be detected by a blind process.

    4 –> To make this familiar: the comparison is drawing, by chance or by chance plus mechanical necessity, a blind sample of one straw from a cubical hay-bale 3 1/2 light days across, which could contain our solar system out to Pluto [about 1/10 of the way across]. With maximal probability, all but certainty, such a sample will pick up straw.

    5 –> The threshold of complexity, in short is reasonable, and if you want to challenge the solar system (our practical universe which is 98% dominated by our Sun, in which no OOL is even possible . . . ) then scale up to the observed cosmos as a whole, 1,000 bits. (The calculation for THAT hay bale would have millions of cosmi comparable to our own lurking within and we would have the same result.)

    6 –> So, the only term really up for challenge is S, the dummy variable that is set to 0 as default, and if we have positive, objective reason to infer functional specificity or more broadly ability to assign observed cases E to a narrow zone T that can be INDEPENDENTLY described (i.e. the selection of T is non-arbitrary, we have a definable collection in the set theory sense and a set builder rule — or at least, a separate objective criterion for inclusion/exclusion) then it can be set to 1.

    7 –> The S = 0 case, the default, is of course the blind chance plus necessity case. The assumption is that phenomena are normally accessible by chance plus necessity acting on matter and energy in space and time.

    8 –> But, in light of the sort of issues discussed above (and over and over again elsewhere over the course of years . . . ), it is recognised that certain phenomena, especially FSCI and in particular dFSCI — like the posts in our thread — are in fact only reasonably accessible by intelligent direction on the gamut of our solar system or observed cosmos.

    9 –> Without loss of general force, we may focus on functional specificity. We can objectively, observationally identify this, and routinely do so.

    10 –> So, what the equation ends up doing is to give us an empirically testable threshold for when something is functionally specific, information-bearing and sufficiently complex that it may be inferred that it is best explained on design, not chance plus necessity.

    11 –> Since this is specific and empirically testable, it cannot be a mere begging of the question, it is inviting refutation by the simple expedient of showing how chance and necessity without intelligent guidance or starting within an island of function already — that is what Genetic Algorithms do, as the infamous well-behaved fitness function so plainly shows — can give rise to FSCI.

    12 –> The truth is that the talking-point storm and assertions about insufficiently rigorous definitions, etc., are all because the expression handily passes empirical tests. The entire Internet is a case in point, if you want empirical tests.

    13 –> So, if this were a world in which science were done by machines programmed to be objective, the debate would long since have been over as soon as this expression and the underlying analysis were put on the table.

    14 –> But, humans are not machines, and so recently the debate talking point storm has been on how this eqn is begging questions or is not sufficiently defined to suit the tastes of those committed to a priori evolutionary materialism, or how GA’s — which start inside islands of function! — show how FSCI can be had without paying for it with the hard cash of intelligence. (I won’t bother with more than mentioning the sort of hostile, hateful attack that was so plainly triggered by our blowing the MG sock-puppet campaign out of the water. Cf link here for the blow by blow on how that campaign failed.)

    15 –> To all this, I simply say, the expression invites empirical test and has billions of confirmatory instances. Kindly show us a clear case that — without starting on an existing island of function — shows how FSCI, especially dFSCI (at least 500 – 1,000 bits), emerges credibly by chance and necessity, within the scope of available empirical resources.

    16 –> For those not familiar with the underlying principle, I am saying that the expression is analytically warranted per a reasonable model and is directly subject to empirical test with a remarkable known degree of success, and so far no good counter-examples. So, we are inductively warranted to trust it absent convincing counter-example.

    17 –> Not as question begging a prioris, but as per the standard practice of science where laws of science and scientific models are provisionally warranted and empirically reliable, not necessarily true beyond all possibility of dispute.

    18 –> Indeed, that is why the laws of thermodynamics can be formulated in terms that perpetual motion machines of the first, second and third kind will not work. So far quite empirically reliable, and on reasonable models, we can see why. But, provide such a perpetual motion machine and thermodynamics would collapse.
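    The arithmetic in points 1, 2, and 6 above can be checked directly. The sketch below is an editorial illustration; the 300-character, 7-bits-per-character post is an assumed example, not a figure from the thread.

```python
def chi_500(info_bits: float, s: int) -> float:
    """Chi_500 = I*S - 500; a positive value is beyond the solar-system threshold."""
    return info_bits * s - 500

# Point 2: the solar system's Planck-time search capacity (~10^102 states)
# as a fraction of the 2^500 configurations available to 500 bits.
states_searched = 10.0 ** 102
configs = 2.0 ** 500                  # about 3.27e150
fraction = states_searched / configs  # about 3e-49, i.e. roughly 1 in 10^48
print(f"configs = {configs:.2e}, fraction searched = {fraction:.1e}")

# Point 6: with S = 0 (the default), Chi_500 is negative regardless of I.
print(chi_500(2100, 0))     # -500
# With S = 1 (judged functionally specific), e.g. a 300-character post
# at roughly 7 bits per character:
print(chi_500(300 * 7, 1))  # 1600 bits beyond the threshold
```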

    ____________

    So, Dr Rec, a fill in the blanks exercise:

    your empirical counter-example per actual observation is CCCCCCC, and your analytical explanation for it is WWWWWWW

    If you cannot directly fill in the blanks, we have every reason to accept the Chi_500 expression on the normal terms for accepting a scientific result, no matter how uncomfortable this is for the a priori materialists.

    GEM of TKI

  78. 78
    kairosfocus says:

    Sorry, out of place above . . .

    Namely, we have every reason to see why complex, integrated functionality on many interacting parts naturally leads to islands of functional configurations in much wider spaces of possible but overwhelmingly non-functional configurations. (And, this thought exercise will rivet the point home, in a context that is closely tied to the statistical underpinnings of the second law of thermodynamics.)

    Clearly, it is those who imply or assume that we have instead a vast continent of function that can be traversed incrementally step by step starting form simple beginnings that credibly get us to a metabolising, self-replicating organism, who have to empirically show their claims.

    It will come as no surprise to the reasonably informed that the original of cell based life bit is neatly snipped out of the root of the tree of life, precisely because after 150 years or so of speculations on Darwin’s warm little pond full of chemicals and struck by lightning, etc, the field of study is in crisis.

    Similarly, the astute onlooker will know that he general pattern of the fossil record and of today’s life forms, is that of sudden appearance, stasis, sudden disappearances and gaps, not at all the smoothly graded overall tree of life as imagined. Evidence of small scale adaptations within existing body plans has been grossly extrapolated and improperly headlined as proof of what is in fact the product of an imposed philosophical a priori, evolutionary materialism. That is why Philip Johnson’s retort to Lewontin et al was so cuttingly, stingingly apt:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. [–> those who are currently spinning toxic, atmposphere poisoning, ad homiem laced talking points about “sermons” and “preaching” and “preachers” need to pay particular heed to this . . . ] The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    So, where does this leave the little equation accused of being question-begging:

    Chi_500 = I*S – 500, bits beyond the solar system threshold

    1 –> The Hartley-Shannon information metric is a standard measure of info carrying capacity, here being extended to cover a case were we must meet some specificaitons, and pass a threshold of complexity.

    2 –> the 500 bit threshold is sufficient to isolate the full Planck Time Quantum State [PTQS] search capacity of our solar system’s 10^57 atoms, 10^102 states in 10^17 or so seconds, to ~ 1 in 10^48 of the set of possibilities for 500 bits: 3 * 10^150.

    3 –> So, before we get any further, we know that we are looking at so tiny a fractional sample that (on well-established sampling theory) ANYTHING that is not typical of the vast bulk of the distribution is utterly unlikely to be detected by a blind process.

    4 –> The comparison to make this familiar is, to draw at chance or at chance plus mechanical necessity, a blind sample of size of one straw from a cubical hay-bale 3 1/2 light days across, which could have our solar system out to Pluto in it [about 1/10 the way across]. With maximal probability — all but certainty, such a sample will pick up straw.

    5 –> The threshold of complexity, in short is reasonable, and if you want to challenge the solar system (our practical universe which is 98% dominated by our Sun, in which no OOL is even possible . . . ) then scale up to the observed cosmos as a whole, 1,000 bits. (The calculation for THAT hay bale would have millions of cosmi comparable to our own lurking within and we would have the same result.)

    6 –> So, the only term really up for challenge is S, the dummy variable that is set to 0 as default, and if we have positive, objective reason to infer functional specificity or more broadly ability to assign observed cases E to a narrow zone T that can be INDEPENDENTLY described (i.e. the selection of T is non-arbitrary, we have a definable collection in the set theory sense and a set builder rule — or at least, a separate objective criterion for inclusion/exclusion) then it can be set to 1.

    7 –> The S = 0 case, the default, is of course the blind chance plus necessity case. The assumption is that phenomena are normally accessible by chance plus necessity acting on matter and energy in space and time.

    8 –> But, in light of the sort of issues discussed above (and over and over again elsewhere over the course of years . . . ), it is recognised that certain phenomena, especially FSCI and in particular dFSCI — like the posts in our thread — are in fact only reasonably accessible by intelligent direction on the gamut of our solar system or observed cosmos.

    9 –> Without loss of general force, we may focus on functional specificity. We can objectively, observationally identify this, and routinely do so.

    10 –> So, what the equation ends up doing is to give us an empirically testable threshold for when something is functionally specific, information-bearing and sufficiently complex that it may be inferred that it is best explained on design, not chance plus necessity.

    11 –> Since this is specific and empirically testable, it cannot be a mere begging of the question, it is inviting refutation by the simple expedient of showing how chance and necessity without intelligent guidance or starting within an island of function already — that is what Genetic Algorithms do, as the infamous well-behaved fitness function so plainly shows — can give rise to FSCI.

    12 –> The truth is that the talking point storm and assertions about not sufficiently rigorous definitions, etc etc etc, are all because the expression handily passes empirical tests. the entire Internet is a case in point, if you want empirical tests.

    13 –> So, if this were a world in which science were done by machines programmed to be objective, the debate would long since have been over as soon as this expression and the underlying analysis were put on the table.

    14 –> But, humans are not machines, and so recently the debate talking point storm has been on how this eqn is begging questions or is not sufficiently defined to suit the tastes of those committed to a priori evolutionary materialism, or how GA’s — which start inside islands of function! — show how FSCI can be had without paying for it with the hard cash of intelligence. (I won’t bother with more than mentioning the sort of hostile, hateful attack that was so plainly triggered by our blowing the MG sock-puppet campaign out of the water. Cf link here for the blow by blow on how that campaign failed.)

    15 –> To all this, I simply say, the expression invites empirical test and has billions of confirmatory instances. Kindly show us a clear case that — without starting on an existing island of function — shows how FSCI, especially dFSCI (at least 500 – 1,000 bits), emerges credibly by chance and necessity, within the scope of available empirical resources.

    16 –> For those not familiar with the underlying principle, I am saying that the expression is analytically warranted per a reasonable model and is directly subject to empirical test with a remarkable known degree of success, and so far no good counter-examples. So, we are inductively warranted to trust it absent convincing counter-example.

    17 –> Not as question begging a prioris, but as per the standard practice of science where laws of science and scientific models are provisionally warranted and empirically reliable, not necessarily true beyond all possibility of dispute.

    18 –> Indeed, that is why the laws of thermodynamics can be formulated in terms of the impossibility of perpetual motion machines of the first, second and third kinds. So far, quite empirically reliable; and on reasonable models, we can see why. But provide such a perpetual motion machine, and thermodynamics would collapse.

    ____________

    So, Dr Rec, a fill in the blanks exercise:

    your empirical counter-example per actual observation is CCCCCCC, and your analytical explanation for it is WWWWWWW

    If you cannot directly fill in the blanks, we have every reason to accept the Chi_500 expression on the normal terms for accepting a scientific result, no matter how uncomfortable this is for the a priori materialists.

    GEM of TKI
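For onlookers: the "Chi_500 expression" invoked throughout this comment is never actually written out in the thread. As kairosfocus states it in other UD posts (reconstructed here editorially; treat the exact notation as an assumption, not a quotation from this thread), the log-reduced form is roughly:

```latex
% Editorial reconstruction of kairosfocus's log-reduced Chi_500 metric,
% as stated in other UD posts (notation is an assumption, not from this thread):
\chi_{500} \;=\; I_P \cdot S \;-\; 500,\ \text{bits beyond the solar-system threshold}
```

where $I_P$ is the information content of the item in bits, $S$ is a dummy variable set to 1 when the item is judged functionally specific and 0 otherwise, and a positive $\chi_{500}$ is taken as the design indicator the comment defends.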

  79.
    kairosfocus says:

    Dr Rec:

    That’s patently below the belt. You know full well that, even before the point you clipped and strawmannised, the case was presented above, and that this case has long been laid out over and over again in detail elsewhere.

    (E.g., onlookers, cf here at UD recently and here on in context.)

    You also know full well that I am pointing out that posts in this thread are illustrative cases on point on the empirical reliability of the point I made in brief.

    So, your strawman is a willful misrepresentation.

    And, a red herring distraction from the evidence that points out that the expression is indeed empirically reliable, and pivots analytically on the issue of the infinite monkeys/needle in the haystack analysis, most recently explored again at UD in the post linked above.

    Onlookers will note that the sort of distraction is itself evidence of want of a sound response on your part.

    But, just in case you want something specific to this thread, I point you here.

    GEM of TKI

  80.
    kairosfocus says:

    OOPS: The discussion on the galvanometer and that on the infinite monkeys analysis are actually to be found here, in no. 11 of the ID foundations series; no. 12 is on Paley-style self-replication as an additional capacity of a functional entity, per the von Neumann kinematic self-replicator. [Do watch the vid!]

  81.
    Joe says:

    DrREC,

    If you don’t like analogies or the design inference, all YOU have to do is actually step up and demonstrate that stochastic processes can account for what we say is designed.

    OR you can continue whining.

    Your choice…

  82.
    M. Holcumbrink says:

    Molecular ‘motors’ are an analogy drawn to human design. Now you’ve taken the analogy too far

    Take ATP synthase or the flagellum: these molecular motors are composed of simple machines, e.g. wheels & axles (a free-turning rotor which is constrained in 5 degrees of freedom by a stator embedded in the membrane), ramps (which transform linear momentum into angular momentum due to a flow of ions), levers (a clutch mechanism to reverse the direction of rotation), and screws (as the filament turns it acts as a propeller). Any machine designed and built in the macro world contains some or all of these simple machines. And please note the purpose of such is to transform one form of energy into another. In the case of ATP synthase and the flagellum, the energy of a proton gradient is converted into torque, which is used to generate chemical energy (ATP) and linear motion, respectively. Motors are a physical mechanism by which a form of potential energy is channeled and converted into a form of *useful* energy. This is exactly what we see in the cell, which means we are not speaking in analogies here. These are actual motors, in every sense of the term.

    Now I would ask you: do you avoid calling these things “motors” in an effort to avoid the clear, purposeful design implications, or because you are fundamentally ignorant of what motors actually are?

  83.
    Petrushka says:

    Without understanding the language at all, you can tell that it is not random because it conveys at least some of your specifications, even if the content is degraded.

    You are simply wrong about this.

    I asked if you could distinguish a coding sequence that has been degraded by one character change, from a population of randomly generated sequences. I make only one stipulation, that the sequence must not match any known functional sequences. It must be unique.

    The answer is that you can’t. When designing a completely new coding sequence, you cannot tell when you are 50 percent done, or 90 percent done, or 99.99 percent done.

    That is not true of syntactical languages. If you are writing a computer program or a sonnet, you can distinguish progress. A partial computer program will do something, even if it is nothing but a goto statement. A sentence fragment is distinguishable from random characters.

    a sntnce wit speling erors an grammer mistackes can be distingushd form gbbirsh.
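Petrushka's closing example (a misspelled sentence is distinguishable from gibberish) can be made concrete with a toy sketch. This is an editorial illustration, not any commenter's code; the tiny dictionary, the function name, and the 0.7/0.5 thresholds are all assumptions chosen for the demo:

```python
# Toy sketch (editorial illustration, not from the thread): score a string's
# "English-likeness" by how closely each token matches a small dictionary.
# A misspelled-but-English sentence scores far higher than random gibberish.
import difflib

# Deliberately tiny dictionary; a real check would use a full wordlist.
DICTIONARY = ["a", "sentence", "with", "spelling", "errors", "and",
              "grammar", "mistakes", "can", "be", "distinguished",
              "from", "gibberish"]

def english_likeness(text):
    """Mean similarity (0..1) of each token to its nearest dictionary word."""
    scores = []
    for tok in text.lower().split():
        best = max(difflib.SequenceMatcher(None, tok, w).ratio()
                   for w in DICTIONARY)
        scores.append(best)
    return sum(scores) / len(scores)

degraded = "a sntnce wit speling erors an grammer mistackes"
gibberish = "qzx vprtk jjwq mfflb zzrq opklm nbvcx"

# The degraded sentence scores much closer to English than the gibberish does.
assert english_likeness(degraded) > english_likeness(gibberish)
```

The point of the sketch matches Petrushka's claim about syntactic languages: partial conformity to known structure is detectable even when the content is degraded.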

  84.

    DrREC:

    Seriously, do you think some explorer could wander up on Easter Island and say those look natural? Or would knowledge of statues in human design be sufficient?

    Excellent. So finally we get nearer to the heart of the matter. DrREC acknowledges that we don’t have to know the exact specification we are looking for. It is enough to have seen some similar systems. In other words, we look at a system of unknown origin and analogize to systems that we do know. This is one important aspect (though not the whole) of the way we draw design inferences. We work from what we know, not from what we don’t know; from our understanding of cause and effect in the world. And with those tools under our belt, we consistently and regularly infer design as the most appropriate explanation, even when we don’t know the exact specification we will find.

    DrREC has no issue with this approach. He thinks it is perfectly reasonable and appropriate. He even suggests above that it is absurd to think otherwise. All correct.

  85.
    Petrushka says:

    The interesting thing is there are so many different versions of flagella, and so many genomes containing bits and pieces of the code, used for so many different purposes.

    There are at least 20 different species of microbes having subunits of the flagellum code.

    I find it interesting that when it seems convenient to ID, the code is digital (and subject to being assembled by incremental accumulation). But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation.

  86.
    ScottAndrews2 says:

    Petrushka,

    You are simply wrong about this.

    I asked if you could distinguish a coding sequence that has been degraded by one character change, from a population of randomly generated sequences. I make only one stipulation, that the sequence must not match any known functional sequences. It must be unique.

    I am right (and simply.)

    If someone speaks in a language you do not understand, you cannot distinguish any of what is said from random noise. Therefore you cannot tell from the sequence itself whether it is entirely random and functionless, functional and perfect, or functional and degraded.

    You stated

    You cannot distinguish a degraded functional sequence from a random sequence
    …and…
    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    This is obviously false because in this scenario, you cannot distinguish a degraded functional sequence from a random sequence, and yet you can determine from the effect alone that the sequence does contain functional information.

    I’m quoting your words repeatedly because they are simple and clear, and I am placing them alongside a simple, clear scenario which demonstrates that they are wrong.

  87.
    Petrushka says:

    and yet you can determine from the effect alone that the sequence does contain functional information.

    What effect?

    Tell us how to distinguish a functionless sequence that is one character from being functional from a population of random sequences.

  88.
    ScottAndrews2 says:

    The effect was described in the illustration.

    If you give instructions to a man, and he translates them into a language you don’t understand for a second man, who then follows most of them but gets a few things wrong, you cannot ascribe any functional information to those translated instructions, because you don’t know which part was degraded.

    I’ve deliberately put this side-by-side with your statements. Your statements are simple, as is the scenario.

    You cannot distinguish a degraded functional sequence from a random sequence
    …and…
    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    In the scenario above, the sequence is clearly functional although apart from the functional effect you cannot distinguish it from a random sequence, a specified sequence, or a degraded specified sequence.

  89.
    ScottAndrews2 says:

    Let’s make it more even by removing your specification and hiding the source of the information. Keep in mind that this is an illustration, not an analogy. Your assertions can be applied directly to it.

    You describe a series of physical symptoms to a man. He then speaks in a language you do not understand to a second man, who specifically treats all but one of your symptoms.

    The treatment was specific. But who specified it? Not you. You didn’t know about that treatment. Was it the man you spoke to, or the man he translated to?

    He spoke another language. You don’t know what he said.
    It could have been a) random, b) irrelevant, c) a translation of your symptoms, d) instructions for treatment, or a degraded version of c or d, or a combination of any of the above. Maybe he described one symptom, gave instructions for treating another symptom, got one wrong, and asked what the other guy wanted for lunch.

    You have no idea what the content of that sequence was or if it was degraded, and yet it evidently contained functional content. You are able to determine it by the output, which you could not have specified because it applied medical knowledge you did not have.

    Compare that to this:

    You cannot distinguish a degraded functional sequence from a random sequence
    …and…
    If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.

    It’s not like I spent a few hours brainstorming these scenarios. They were easy. And they demonstrate clearly that what you are asserting is false.

  90.
    M. Holcumbrink says:

    The interesting thing is there are so many different versions of flagella, and so many genomes containing bits and pieces of the code, used for so many different purposes. There are at least 20 different species of microbes having subunits of the flagellum code

    So what? Do you not realize that in the macro world there are single component parts that are used in a multitude of disparate systems, nuts & bolts being the most obvious example? In fact, a good engineer strives to make the hardware from system to system as standard as possible. The more variation there is in the hardware, the more headaches it causes. Copper wiring is another example, with wiring of the same gauge and shielding used all over the place. Standard circuit cards, standard housings for gear boxes, standard junctions, standard belts, and the list goes on forever. Standard components are just as much a sign of design as anything, friend.

    I find it interesting that when it seems convenient to ID, the code is digital. But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation

    What’s your point here? Are you suggesting that the flagellum is not a motor because its components are constructed of discrete modular building blocks? If so, that’s asinine. And here’s some news for you: motors designed by humans in the macro world are reproduced also. Weird, huh? And what if the flagellum varies over time; does that plasticity make it not a motor? Nope. Still a motor.

  91.
    kairosfocus says:

    P,

    This is ever so inadvertently revealing:

    I find it interesting that when it seems convenient to ID, the code is digital (and subject to being assembled by incremental accumulation). But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation.

    1 –> Codes, generally speaking, use symbolic representations [whereby one thing maps to another and per a convention MEANS that], and are inherently discrete state, i.e. digital.

    2 –> The DNA-> RNA –> Ribosome –> AA chain for protein system uses just such symbols, and goes through transcription, processing that allows reordering, and translation in a translation device that is also a manufacturing unit for proteins.

    3 –> The fact that you find yourself resisting such patent and basic facts is revealing.

    4 –> A motor is a functional, composite entity. It is made up from parts, that fit together in a certain specific way, per an exploded view wiring diagram, and when they fit together they do a job.

    5 –> As has been pointed out for a long time now, that sort of 3-D exploded view can be converted into a cluster of linked strings that express the implied information, as in AutoCAD etc.

    6 –> However, the point of a motor, is that it does a certain job, converting energy into shaft work, often but not always in rotary form. (Linear motors exist and are important.)

    7 –> A lot of ways and means can be used to generate the torque [power = torque * speed], but rotary motors generally have a shaft that carries the load on the output port, and an energy converter on the input port. (Motors are classic two-port devices.)

    8 –> Electrical motors work off the Lorentz force [which in turn is in large part a reflection of relativistic effects of the Coulomb force], hydraulic and pneumatic ones, off fluid flows, some motors work off expanding combustion products, etc etc.

    9 –> Two of the motors in living forms seem to work off ion flows and associated barrier potentials. Quite efficiently and effectively too.

    10 –> Wiki, testifying against known ideological interest:

    An engine or motor is a machine designed to convert energy into useful mechanical motion.[1][2] Heat engines, including internal combustion engines and external combustion engines (such as steam engines) burn a fuel to create heat which is then used to create motion. Electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air and others, such as wind-up toys, use elastic energy. In biological systems, molecular motors like myosins in muscles use chemical energy to create motion.

    11 –> In short, we see here a recognition of what you are so desperate to resist: there are chemically powered, molecular scale motors in the body, here citing a linear motor, that makes muscles work.

    12 –> So, there is no reason why we should suddenly cry “analogy” — shudder — when we see similar, ion powered rotary motors in the living cell.

    ______________

    In short, the strained objections we are seeing are telling us a lot, and not to your benefit.

    GEM of TKI

  92.
    Petrushka says:

    Analogies are of no use in the example I am using, because a key claim I am making is that protein folding is not predictable from a coding sequence.

    I am making this claim because decades of research have revealed no shortcuts and no glimpse of a shortcut. And unlike will-o’-the-wisps such as the Higgs particle, there is no theoretical reason to expect one.

    This is a very narrow and specific claim, that could be proven wrong by a single counterexample.

    It is relevant to the calculation of “information” in the genome because you have no way of determining the information content of a sequence if you can’t account for the necessity of each character in the sequence string.

    I assert you have no theory of sequence formation or sequence utility or syntax that would enable design.

    Regardless of asserted probabilities, the only reasonable way to build a protein or protein domain is to try modifications of existing sequences.

    There are theoretical ways to do it atom by atom, but chemistry is billions of times faster.

    My claim is that chemistry will always be faster than computation, and evolution is the fastest and most efficient way to navigate through functional sequence space.

  93.

    Petrushka: “But at other times the analogy switches to objects like motors . . .”

    No-one is switching analogies because it is convenient. A digital code is an example of complex specified information. An integrated functional system is an example of complex specified information. There are lots of examples. No-one is switching anything.

    BTW, analogies are useful and there is nothing wrong with them as far as they go in helping us think through things.

    But in this case we don’t even have to analogize. The code in DNA is a digital code, it isn’t just like a digital code. Molecular motors in living cells aren’t just like motors, they are motors.

  94.
    ScottAndrews2 says:

    Analogies are of no use in the example I am using, because a key claim I am making is that protein folding is not predictable from a coding sequence.

    I didn’t make an analogy. I used an example against which your claims could be directly tested. You don’t seem to dispute that what you asserted as a rule of logic was false, so I guess we’ve moved on from that.

    “Unpredictable” is synonymous with “random.” Are you saying that coding sequences result in random protein folds? I’m pretty sure every second of our lives depends upon the predictability of protein folds from coding sequences. How can you say that they are unpredictable?

  95.
    DrREC says:

    Funny how everyone keeps coming up with analogies where the design is not in question! It is almost as if they assume design, and proceed from there.

    Oh, right.

    The funny thing is that Eric can’t tell us what the independent specifications for protein design are.

    But at any rate, there are a couple of things that went unanswered.

    Despite the attempts to distract with other analogies, my post at 9.3 clearly demonstrates IN PRACTICE that fsci calculations narrowly and subjectively define a design as part of estimating functional space.

  96.
    DrREC says:

    I can tell I’ve made a salient point when you punt to “solve abiogenesis.”

    I showed you complex protein origination by natural processes.

    What is the fsci of the de novo protein? It isn’t short!

  97.
    Joe says:

    DrREC,

    Those “natural” processes you speak of could very well be artificial processes.

    Also, there isn’t any “punting to abiogenesis” so much as there is cowardice in starting with that which needs explaining in the first place. So perhaps you need to figure out your problem.

  98.
    ScottAndrews2 says:

    As opposed to what, using examples in which design is in question? Or examples of things that don’t appear designed?

    Most designed things don’t have independent specifications that one can produce or refer to. From where have you invented this requirement?

    Your position seems to be that if you have this thing and you don’t know whether it was designed, comparing it to outputs of known design is an inherently invalid approach.

    You also suggest that design is such a shockingly unimaginable phenomenon that it must be seen to be believed. Astoundingly, you do not apply this same skepticism to vague, unformed hypotheses of self-organization.

    Living things, from the molecular level up, follow the very same patterns as any number of design technologies, except that they appear far more advanced. They do not follow any known patterns of self-organization.

    You can lead a horse to water, but you can’t stop it from drinking sand. And you can’t use logic to convince someone to respect logic.

  99.
    DrREC says:

    “As opposed to what, using examples in which design is in question? Or examples of things that don’t appear designed?”

    Yeah. Seriously, the constant insult-laced posts that go “Hey, look at this human-designed object. Only a moron couldn’t tell it was designed.” get old fast.

    It also isn’t really what you guys are trying to do. You’re trying to make a design inference, based on ruling out natural possibilities through the use of improbability+specification.

    At least my pulsar example was an attempt to query this.

    I haven’t seen an answer. Without a prior or independent specification (pi or prime numbers), how could we tell natural pulsars from alien signals?

    If a researcher took a complex pulsar sequence, fit parameters to it, declared that to be the specification, and then measured how well the pulsar fit that specification, would that prove the pulsar is specified (designed)?

    Why is this so obviously wrong in the pulsar case, but so right for you?

    “Most designed things don’t have independent specifications that one can produce or refer to. From where have you invented this requirement?”

    On the contrary, I think all designed things would have specifications. I’m sure my computer, couch, and car were all pre-specified.
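The circularity DrREC alleges in his pulsar example above can be made concrete with a toy sketch. This is an editorial illustration, not any commenter's code; the pulse train, variable names, and scoring are all invented for the demo:

```python
# Toy sketch (editorial illustration, not from the thread) of a post-hoc
# "specification": trace the spec directly from the observed data, then
# score how well the data match that spec. The fit is perfect by
# construction, so it carries no evidence of design.
import random

random.seed(0)
pulses = [random.choice([0, 1]) for _ in range(100)]  # "observed" pulsar train

# Post-hoc "specification": simply a copy traced from the observation itself.
specification = list(pulses)

# Fraction of pulses matching the "specification".
fit = sum(p == s for p, s in zip(pulses, specification)) / len(pulses)
print(fit)  # 1.0 by construction, whatever the data were
```

Both sides of the thread agree that a specification traced from the data in this way is circular; the dispute in the surrounding comments is whether FSCI calculations actually do this.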

  100.
    Upright BiPed says:

    Dr REC, I have not followed this conversation closely, but I gave you an instance where we have nothing whatsoever to do with creating any specification, because that specification is built into the system itself. It’s also irreducibly complex.

    Yet you walked from that conversation.

  101.
    ScottAndrews2 says:

    Was your post pre-specified?

    Do you have the schematics for your computer? Can you tell me right now how every circuit was specified?

    Now you’re saying that you’ll believe it if you see the specification in advance of the implementation. With a wave of the hand you have dismissed the possibility that anything of unobserved origin was designed.

    Your whole pulsar tangent is one giant strawman. No one is going about randomly trying to infer design to anything and everything for no reason. You’re arguing against the design inference by applying it where it obviously doesn’t fit. That’s because you have no valid objection or alternative where it does fit.

    Don’t count out, “Just look at it, it must have been designed.” No insult intended, but that’s the voice of common sense. It’s not always right, but common sense plus abundant evidence always beats a hand-waving explanation of ‘something happened, we don’t know what except that we have ruled out intelligence.’

  102.
    DrREC says:

    “Dr REC, I have not followed this conversation closely, but I gave you an instance where we have nothing whatsoever to do with creating any specification, because that specification is built into the system itself. It’s also irreducibly complex.

    Yet you walked from that conversation.”

    You say you haven’t followed the conversation closely, but then chide me for walking away. By the way, you and Barry Arrington almost NEVER answer my questions. It isn’t a one-sided cross-ex.

    You’ll forgive me; these threads are difficult to follow, terminating every so often, and my posts have been spread over three or four new threads. So what was this example?

  103.
    DrREC says:

    “Was your post pre-specified?”

    Yes, or at least independently: by the rules of English grammar and syntax.

    “Do you have the schematics for your computer? Can you tell me right now how every circuit was specified?”

    No, but I’m sure someone at Apple does. And again with the endless human designs. Bored now.

    “Now you’re saying that you’ll believe it if you see the specification in advance of the implementation.”

    Or independently. Or in any way that doesn’t draw a target around something in nature, infer that to be the specification, declare it is specified, and deduce design.

    “Your whole pulsar tangent is one giant strawman. No one is going about randomly trying to infer design to anything and everything for no reason.”

    SETI or NASA might be interested. I think it is right up your alley. Aren’t you design detectors?

    “Don’t count out, ‘Just look at it, it must have been designed.’”

    That’s really sad for ID.

  104.
    Petrushka says:

    I didn’t make an analogy.

    I apologize. I didn’t realize you had answered my coding sequence question. I’ll look back over it.

    I thought you wrote about language and translation.

  105.
    M. Holcumbrink says:

    DrREC:

    I haven’t seen an answer. Without a prior or independent specification (pi or prime numbers), how could we tell natural pulsars from alien signals?

    SA2:

    Don’t count out, “Just look at it, it must have been designed.”

    DrREC:

    That’s really sad for ID

    Let’s go back to the Voynich Manuscript. How is it that *you* don’t know what it says, or even if it says anything at all, yet *you* know for a fact that someone made it? You can just look at it and know.

  106.

    DrREC:

    Funny how everyone keeps coming up with analogies where the design is not in question! It is almost as if they assume design, and proceed from there.

    Heeeello! We are showing you examples of how design is inferred. These aren’t just unrelated “analogies.” These are live examples of design inference. Set aside for a moment your philosophical bias against examples, analogies, whatever and think through this for a moment.

    Under your logic, the only way we can ever know if something was designed is if we already know that what we are looking for is designed. That is entirely circular and, pardon me, frankly absurd. It would mean that it is impossible to ever discover whether something is designed, because in order to discover that, we must already have known the design we were looking for.

    The fact of the matter is design is inferred all the time. The only reason you are hung up is that it happens to be in life this time, which, apparently, is philosophically unpalatable.

  107.
    Upright BiPed says:

    amazing

  108.

    I should add that it is funny that so much energy is being spent denying that there is specification in living cells. Specification, in terms of functional complex specified information, is really a no-brainer as it relates to many cellular systems. Indeed, most Darwinists and other materialists admit the specification because it is so obviously there. What they then spend their energies on is attempting to show that the specification isn’t really that improbable (inevitability theorists), or that it comes about through some emergent process (emergent theorists), or alternatively that, while wildly improbable, “evolution” can overcome it through the magic of lots of time and errors in replication (Dawkins et al.).

    Don’t get me wrong, I love a good discussion about specification. Just seems funny that it is being so strenuously denied when so many evolutionists admit it as a given.

  109.
    DrREC says:

    What is amazing?

    Can you give me a link to the discussion you’d like me to respond to?

  110.
    DrREC says:

    The Voynich Manuscript?

    Where is the design detection? Are you saying it is natural, or that it needs to be distinguished from nature?

    Some independent specifications:

    1) On vellum (human product)
    2) Iron ink applied with a quill pen (human product)
    3) Conforms to manuscript and illustrations of the period

    Should we continue with this absurdity?

  111.
    DrREC says:

    “the only way we can ever know if something was designed is if we already know that what we are looking for is designed.”

    No, for the umpteenth time: if it is INDEPENDENTLY specified (pi, prime numbers), as in my pulsar example, that is fine.

    I’ve walked through, in explicit detail above, how fsci calculations specify a design post hoc in the detection of design. No one seems to want to deal with that; everyone has resorted to broad chest-thumping rhetoric.

    Don’t you just SEE the design? Lol.

  112.
    DrREC says:

    Could you provide a reference where a “Darwinist” acknowledges a specification by a designer in life? I’m really puzzled as to who the hell these “so many evolutionists” who “admit it as a given” are.

    This is almost a fourth-grade playground lie you tell the kid you want to do something idiotic: everyone’s doing it. Why won’t you? All the other Darwinists are admitting design specifications… come on, just admit it… please…

    “Specification, in terms of functional complex specified information, is really a no brainer”

    So, in a few days, fsci has gone from being a calculation to a “no brainer.” Next I’ll hear “only an idiot would deny it.” Is this science to you? In my next paper, I’ll write “this is a no brainer, proof not required.”

    Wow. Just Wow.

    You’re wildly equivocating on the use of the word specification. At best. This is the oddest “no, your peers disagree with you” bluff I’ve ever read.

  113.
    DrREC says:

    By the way, Eric Anderson, I think you forgot to pay the bill on your website: “http://www.evolutiondebate.info/”

    When I click your name, it goes to “Buy FIORICET pills online without a prescription.”

    Illegal sales of barbiturates might be frowned upon here….

  114.
    DrREC says:

    “Those “natural” processes you speak could very well be artificial processes.”

    Yep. The reactions I saw in my test tube could be mediated by fairies.

    Thanks for reminding us again that design inferences are unfalsifiable.

  115.
    ScottAndrews2 says:

    That’s really sad for ID

    It’s not ID at all. It’s common sense backed up by unmistakable evidence.

    ID is science, not common sense.

    You’re bored? How many times have I pointed out that the very nature of extrapolation and inference requires us to reason beyond what we observe? If inference is invalid unless the subject is identical to that with which it is compared, then it is invalid in every case except when we do not need it. You’ve just invalidated the concept of inference.

    Next you claim that we draw a target around everything in nature. No, we draw a target around anything that appears to have come about by a process of arranging symbolic information that exhibits planning and foresight to arrive at a functional result. I don’t need to say more than that, such as the amazing attributes or behaviors of any living things. That a thing which reproduces and processes energy is generated from symbolic information is enough. The rest is icing on the cake, lots of it. Reducing function to abstract instructions is intelligent behavior. Yeah, humans do it, and humans are the only ones we’ve seen do it. But it doesn’t look like humans were in on this one. If you think that somehow nullifies the obvious expression of a similar pattern, you’re free to make whatever excuses you can to deny whatever evidence you wish. But it’s still there.

    To say that we draw targets around living things is to suggest that the targets weren’t already drawn. A crab is no different from the rock it sits on.

    You’re wrong. Every living thing shares a profound, fundamental difference from every non-living thing. Crabs aren’t funny-shaped rocks that eat and reproduce and run away from bigger things. Every child knows that. That’s the incomprehensible, twisted aberration of reason you are forced to accept when you commit to a conclusion that is diametrically opposed to the evidence.

    UB has it right. Instantiation of semiotic information transfer = intelligence. There is no alternative explanation, real, hypothetical, or imaginary. That’s a lifeline from reality. Grab it or don’t.

  116.

    That’s funny, I hadn’t seen that before! LOL!

    Yeah, I haven’t done anything with my site for years, so this last time around when the bill came due I gave up on it and decided not to renew.

    Oh, well. At least we now all know where we can get some cheap fioricet if we need it! 🙂

  117.

    Whoa, there pardner. Let’s back up and not miss the forest for the trees.

    Do I have a quote from a Darwinist who says that they have analyzed cellular systems from a standpoint of design and believe the cellular systems meet the specification criteria outlined by intelligent design theorists Dembski, Meyer, and others? Of course not. They don’t use that terminology. But they do acknowledge the same point, in different words.

    Starting with Darwin, who marveled at the wondrous “contrivance” of the eye, the primary goal has not been to deny that life contains complex, functionally integrated systems, but to argue that they can come about through natural processes.

    Dawkins went so far as to define biology as the “study of complicated things that give the appearance of having been designed for a purpose.” What did he mean by that? Precisely that when we look at living systems, the appearance of design jumps out at us. Why is that? What is this appearance of design? It is the integrated functional complexity — precisely one of the examples of complex specified information. It is the fact that in our universal and repeated experience, when we see systems like these they turn out to be designed. Dawkins isn’t arguing against the appearance of design, or that such information (the specification) doesn’t exist in biology; rather he argues that the appearance need not point exclusively to design, because evolution can produce it through long periods of time and chance changes.

    If I recall correctly, Michael Shermer is the one who has even taken to arguing in debates that, yes, life is designed, but, he adds, the design comes about without a designer, through natural processes.

    Can complex specified information be calculated? Sure it can (particularly in cases where we are dealing with digital code), and there are interesting cases and good work to be done in identifying and calculating what is contained in life. But is there a large amount of complex specified information in cells? Absolutely; most everyone realizes it is there. The entire OOL enterprise is built upon trying to figure out how the complex specified information — which everyone recognizes is there — could have arisen.
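
    As a purely illustrative aside (my sketch, not Dembski’s actual CSI measure): one crude way to see that a specification can be quantified for a digital string is compressibility. A pattern that is describable independently of itself (like the “50 heads followed by 50 tails” example quoted earlier) compresses well, while a typical random string does not. Here `zlib` compressed length stands in, loosely, for “how short an independent description can be”:

```python
import random
import zlib

def compressed_len(bits: str) -> int:
    """Length in bytes of the zlib-compressed string: a rough
    stand-in for how short an independent description can be."""
    return len(zlib.compress(bits.encode("ascii")))

# A specified pattern: 50 heads followed by 50 tails.
specified = "H" * 50 + "T" * 50

# A typical "random" pattern of 100 tosses (seeded for repeatability).
rng = random.Random(1)
typical = "".join(rng.choice("HT") for _ in range(100))

# The specified pattern admits a far shorter description
# than the typical random one of the same length.
print(compressed_len(specified), compressed_len(typical))
```

    This is only the intuition behind specification, not a calculation of CSI itself; real measures also have to account for the size of the search space and the number of describable patterns.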

    The recognition that life contains digital code, symbolic representations, and complex integrated functional systems is pretty universal. That is complex specified information. So, yes, most Darwinists don’t spend their energy arguing against complex specified information in life. Rather they spend their energies trying to explain how it could have arisen through purely natural causes.

  118.

    Great, so you agree with Dembski that there is an independence requirement. We all agree.

    But why on earth do you think the specification has to be known and set out beforehand?

    I’ve asked this several times. Is it possible to crack a code that was previously unknown and therefore realize it was designed? All I can see from your various responses is one of two possibilities: either you are saying, yes it is possible because (i) we know it was designed in the first place (that is what it sounded like you were saying, thus my comment in 22), or (ii) we’ve seen similar systems (in other words, we analogize to our prior experience).

    So which is it? Do we have to know beforehand the specification, or can we analogize to our prior experience and thus recognize the specification when it is discovered?

    Let me know which of these two options you support, and then we can continue the discussion. If you support (ii) and not (i), then I apologize for having misunderstood you and withdraw my comment 22.

  119.
    krtgdl says:

    Dear DrREC, you don’t like examples of human design because you consider them obvious. I disagree: it took a lot of study, much patience from my supervisor, and above all intelligence before I could program in a decent way, rather than with one great ‘main’ to be changed every time a new issue popped up (does this remind you of something?), resulting in a great mess. Anyway, coming back to pulsars, we could detect design without pre-specification if we found an unknown language, based on an unknown alphabet made of certain bit arrays.

  120.
    gpuccio says:

    DrREC:

    With short time available, I am trying to catch up on this thread, possibly connecting with our previous discussions on others.

    I am not sure what the point of the debate is now. So, just to start, I will restate a couple of important points here, and kindly ask you to update me about the main problems you see in the discussion here:

    a) In functionally specified information, and especially dFSCI, the specification can be (and indeed is) made explicit “post hoc”, from the information observed in the object. The function is defined objectively, and obviously defines a functional subset of the search space (for that specific function). However, the functional target must be computed in some way, because it includes all the sequences that confer the function, as defined.

    b) In the example of a signal “writing” the digits of pi in binary code, the design inference is IMO completely justified (if the number of digits in the signal is great enough). The digits of pi in binary form are a good example of dFSCI, and of “post hoc” specification.

    c) Still, those who infer design in that case have the duty to consider seriously whether the pattern could be generated by some law, that is, by some known necessity-driven system, even with possible chance contributions. In this case, as far as I know, there is no known physical system that could generate the binary code for the digits of such a fundamental mathematical constant. So I reject a necessity explanation, at the present state of knowledge.

    d) Obviously, it remains possible that some physical system, by necessity or necessity + chance, could generate that output. But, as far as I know, there is no logical reason to believe that this is true, and no empirical evidence in favor of it. That’s why that possibility is not at present a valid scientific explanation of the origin of our signal.
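
    To make point (b) concrete, here is a minimal sketch (my illustration, not gpuccio’s own method) of how that “post hoc” specification is nonetheless independent: the binary expansion of pi is computed on its own, with no reference to the received signal, and the signal is then checked against it. Machin’s formula with scaled integer arithmetic keeps the computation self-contained:

```python
def arctan_inv_scaled(x: int, prec: int) -> int:
    """arctan(1/x) * 2**prec via the alternating Taylor series,
    using only integer arithmetic."""
    term = (1 << prec) // x
    total = term
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_bits(nbits: int) -> str:
    """First nbits binary digits of pi ('11001001...'); 16 guard
    bits absorb floor-division error, good for a few thousand bits."""
    prec = nbits + 16
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    scaled = 16 * arctan_inv_scaled(5, prec) - 4 * arctan_inv_scaled(239, prec)
    return bin(scaled)[2:2 + nbits]

def matches_pi(signal: str) -> bool:
    """Does the received bit string match the independently
    computed binary expansion of pi?"""
    return signal == pi_bits(len(signal))
```

    For example, matches_pi("1100100100001111") is True, while flipping the last bit makes it False. The specification here is defined with no reference to the signal itself, which is what makes this kind of pattern non-circular.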

    Your thoughts on that?

  121.
    kairosfocus says:

    F/N 1: I have clipped a key part of the exchanges here.

    F/N 2: As Dr Rec knew from the beginning, pulsars have long since been explained as rotating neutron stars. Wiki summarises:

    A pulsar (portmanteau of pulsating star) is a highly magnetized, rotating neutron star that emits a beam of electromagnetic radiation. The radiation can only be observed when the beam of emission is pointing towards the Earth, much the way a lighthouse can only be seen when the light is pointed in the direction of an observer, and is responsible for the pulsed appearance of emission. Neutron stars are very dense, and have short, regular rotational periods. This produces a very precise interval between pulses that ranges from roughly milliseconds to seconds for an individual pulsar.

    The precise periods of pulsars make them useful tools. Observations of a pulsar in a binary neutron star system were used to confirm the existence of gravitational radiation. The first extrasolar planets were discovered around a pulsar, PSR B1257+12. Certain types of pulsars rival atomic clocks in their accuracy in keeping time.

    In short, the pulses are of low contingency; under given initial conditions they are not candidates for the high contingency that would call for explanation.

    If Dr Rec understood the issues of high/low contingency and organisation vs order, he would know that he has no grounds for the objection. We see why S = 0 for pulsars, on a fairly simple explanation.

    If a pulsar were modulated to emit digits of pi on a code, that would be a strong sign of functionally specific, complex contingency, and would strongly point to design.

    F/N 3: Durston et al have in fact provided a strong empirical basis to infer that protein families are designed.

  122.
    Joe says:

    DrREC,

    Obviously you have serious issues: we were talking about reactions inside living organisms and you spaz out and switch to a test tube.

    You are pathetic.

    Also Newton’s First Rule tells us how to falsify any given design inference. But then again you don’t seem to understand how science operates.

  123.
    kairosfocus says:

    F/N 4: On Voynich, cf here on.

  124.
    kairosfocus says:

    F/N 5: caption for Fig I.1 on Voynich (note contrast to points highlighted by DrREC above): >>
    Fig. I.1 (iii): Page 64 of the mysterious Voynich Manuscript, showing unknown glyphs of unknown meaning (if any) in a string data structure that has statistical patterns reminiscent of natural languages and “word” repetition patterns that may reflect certain East Asian languages. The plant images seem to be by and large composite, but are in effect two-dimensional visual representations and organisation that reflect patterns of plant life. >>

  125.
    M. Holcumbrink says:

    The Voynich Manuscript?

    Where is the design detection? Are you saying it is natural, or that it needs to be distinguished from nature?

    Some independent specifications:

    1) On vellum (human product)
    2) Iron ink with quill and pen (human product)
    3) Conforms to manuscript and illustrations of the period

    Should we continue with this absurdity?

    Two points in response here:

    1) You left out the most important independent specification. If the enigmatic text was written on a cave wall with clay from the floor, minus the illustrations, you would still know that someone did it, without even thinking about it.

    2) Likewise, when I see bona fide motors (human product), and other mechanisms that perform Boolean logic and mathematical computations (human product), all regulated and constructed from algorithmically compressed, hierarchically nested, multilayer encrypted machine code with error-correcting mechanisms (human product), yet there is no way a human could have had any part in building such a system, I feel quite safe in inferring, with very little thought put into it, that someone figured it out before we did. Dembski merely provides a mathematical confirmation of what common sense already tells us.

    Absurd indeed.
