Intelligent Design

FAQ4 is Open for Comment


4. ID does not make scientifically fruitful predictions.

This claim is simply false. To cite just one example, the non-functionality of “junk DNA” was predicted by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions. In contrast, on teleological grounds, Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004) predicted that “junk DNA” would be found to be functional.

The Intelligent Design predictions are being confirmed and the Darwinist predictions are being falsified. For instance, ENCODE’s June 2007 results show substantial functionality across the genome in such “junk DNA” regions, including pseudogenes.

Thus, it is a matter of simple fact that scientists working in the ID paradigm carry out and publish research, and they have made significant and successful ID-based predictions.

A more general and long term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought. Darwin thought the cell was a relatively simple blob of gelatinous carbon. He was wrong. We now know the cell is a high-tech information processing system, with superbly functionally integrated machinery, error-correction-and-repair systems, and much more that surpasses the most sophisticated efforts of the best human mathematicians, mechanical, electrical, chemical, and software engineers. The prediction that living systems will turn out to be vastly more complicated than previously thought (and thus much less likely to have evolved through naturalistic means) will continue to be verified in the years to come.

211 Replies to “FAQ4 is Open for Comment”

  1. 1
    Hoki says:

    In contrast, on teleological grounds, Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004) predicted that “junk DNA” would be found to be functional.

    I’m curious as to why ID would predict junk DNA to be functional rather than non-functional (or simply silent on the issue). Surely, this requires you to say something regarding the designer’s abilities and desires – something that ID does NOT address.

  2. 2
    mauka says:

    Indeed, whenever someone speculates about what a designer would or would not choose to do, the usual response of ID proponents is to jump all over them.

    What gives?

  3. 3
    mauka says:

    If William Dembski predicted that junk DNA would be functional, then why did he write a paper entitled “Intelligent Design is not Optimal Design” which states the following?

    Within biology, intelligent design holds that a designing intelligence is indispensable for explaining the specified complexity of living systems. Nevertheless, taken strictly as a scientific theory, intelligent design refuses to speculate about the nature of this designing intelligence. Whereas optimal design demands a perfectionistic, anal-retentive designer who has to get everything just right, intelligent design fits our ordinary experience of design, which is always conditioned by the needs of a situation and therefore always falls short of some idealized global optimum… To find fault with biological design because it misses an idealized optimum, as Stephen Jay Gould regularly does, is therefore gratuitous. Not knowing the objectives of the designer, Gould is in no position to say whether the designer has come up with a faulty compromise among those objectives.

  4. 4
    kairosfocus says:

    Onlookers:

    Kindly look at the weak argument as presented, and at its corrective. (Resist the temptation from our friends over at Anti Evo to switch subjects.)

    1 –> A basic factual assertion is being made by ID critics: ID does not make scientifically fruitful predictions

    2 –> That claim is part of a constantly drummed out rhetorical theme that ID is not legitimately an exercise in scientific investigation, much less a valid emerging paradigm and research programme. (Never mind the long history on the point, much less the point of Newton’s General Scholium: across time not only founders of modern science but many, many practitioners have worked in the general design paradigm, that expects the world to be intelligible and organised based on principles, as they understand it to be the product of a Mind. Indeed, for many scientists then and now, science is thinking God’s creative and sustaining thoughts after him. That is, the Lewontinian materialism — that imagines that “we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door” — is an unjustifiable, censoring imposition on science by people committed to evolutionary materialistic atheism, not a well warranted frame of thought. Indeed, credibly, such evo mat is just the opposite — it undermines both mind and morals; ending in inescapable self-referential absurdity. [Cf the recent thread on the latter for an illustration. [Oh yes, link on the Hawthorne argument.]])

    3 –> Per certain basic brute facts, the objectors’ claim is factually false. (Thus, the inclusion of the matter under the weak arguments correctives.) And, in fact, the ID view is helping open up a new way of understanding life forms in light of say the complex digital information system in the heart of the cell.

    4 –> Specifically, we have had contrasting predictions on what was termed “junk DNA” by those enamoured of the Darwinian paradigm. (Turns out that, increasingly, on actual investigation, junk DNA ain’t junk. As design thinkers expected and PREDICTED, and as the chance + mechanical thinkers plainly did not.)

    5 –> Moreover, per an early diversional talking point: functionality is not the same as optimality. And indeed, that which is optimal is that which has peak performance in a specific context under particular constraints. Thus, optimality requires a high degree of specialisation that tends to reduce flexibility, adaptability and robustness.

    6 –> On the contrary: all that we need to confidently infer to design per reliable and empirically well warranted sign, is to see that complex, information rich function beyond the UPB threshold is present. (Evo mat objectors and their fellow travellers are consistently unable to show a counter-instance to the general observation that we know just one empirically credible cause for such FSCI: design. Which is massively instantiated. We also know that by the threshold of complexity implicated above, random walk based searches, even if there are hill climbing subroutines, will face a massive probabilistic challenge to get to shores of islands of function within the search resources of the observable universe; before such hill climbing can be engaged.)
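    [The search-space arithmetic invoked above can be illustrated with a toy sketch — purely illustrative numbers and names, not a model of any biological system: blind sampling of n-bit strings hits one fixed target with probability roughly trials/2^n, so a fixed search budget that easily finds a target in a small space finds essentially nothing once the string lengthens.]

```python
import random

def random_search(n_bits: int, target: int, trials: int, seed: int = 0) -> bool:
    """Blind random sampling of n-bit strings; True if the target is ever hit."""
    rng = random.Random(seed)
    return any(rng.getrandbits(n_bits) == target for _ in range(trials))

# With 8 bits (256 states), 5,000 blind guesses miss the target with
# probability (255/256)**5000 ~ 3e-9, so a hit is near-certain.
# With 64 bits (~1.8e19 states), the same budget succeeds with
# probability ~ 5000/2**64 ~ 3e-16: effectively never.
hit_small = random_search(n_bits=8, target=42, trials=5000)
hit_large = random_search(n_bits=64, target=42, trials=5000)
print(hit_small, hit_large)
```

    [Doubling the string length squares the state space, which is the sense in which a fixed search budget "collapses"; whether any real biological search behaves like blind sampling is of course exactly what the two sides dispute.]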

    So, let’s make a prediction, in the hopes that it will be falsified: objectors will major on diversions, and will focus very little on the substantial factual issue as refocussed just above.

    GEM of TKI

  5. 5
    mauka says:

    KF,

    Two questions for you:

    1. Is it possible for a designer to choose to include nonfunctional ‘junk’ elements in his or her design? Please answer ‘yes’ or ‘no’; a treatise is unnecessary.

    2. Assuming that your answer to question #1 is ‘yes’ (the only reasonable answer), then how can ID predict that junk DNA is actually functional without knowing something about the designer’s objectives?

    Again, a treatise is unnecessary. Please just address the question.

    Dembski again:

    Not knowing the objectives of the designer, Gould is in no position to say whether the designer has come up with a faulty compromise among those objectives.

    If Dembski is correct about Gould, then an ID proponent who doesn’t know the designer’s objectives is in no position to predict that junk DNA will turn out to be functional.

  6. 6
    Oramus says:

    Hoki,

    Why would you think predicting all DNA having some type of function requires reference to a designer?

    It’s like saying that before you accept that it is in fact a letter, that it is addressed to you, and that it contains information, I must first tell you whether it was in fact a mailman who sent it — and, if so, why he sent it and how he got to your house in the first place to put it in the mailbox.

    IOW, why do you need to know anything about the mailman or his mail truck in order to know the meaning of the information contained in that mail ?

    Rather, IMO this prediction is based on a logical conclusion that the extraordinary level of biological complexity we observe requires a high level of efficiency. Junk DNA does not fit this profile and therefore is a failed interpretation of empirical observation.

    These folks were looking through the eyes in their head not the eye in their mind.

    Out of sight is not out of mind.

    I’m curious as to why ID would predict junk DNA to be functional rather than non-functional (or simply silent on the issue). Surely, this requires you to say something regarding the designer’s abilities and desires – something that ID does NOT address.

  7. 7
    kairosfocus says:

    Mauka:

    From behaviour on previous threads, you have forfeited the right of discussion. I will only address your talking points by way of illustration, and indirectly. (At least, until I see signs that you have begun to actually discuss through serious give and take; instead of playing talking points games.)

    Onlookers:

    Observe the predicted focus-shift already in action (and the underlying subtext of contempt: a responsible answer to a serious question is more than yes/no; (i) answer yes/no and you will be strawmanned to death, (ii) resort to a serious rebuttal and you will be derided as providing treatises that will be ignored as (iii) the talking points are propagated yet again).

    M et al, unfortunately, on evidence of track record, are simply here to project disruptive rhetorical talking points, not to engage in serious discussion on a responsible scientific or comparative difficulties basis.

    In case the talking points trouble you:

    1 –> The fact is that an assertion has insistently been made, that ID does not carry out fruitful — successful and scientifically provocative — scientific predictions.

    2 –> The further fact is, this is false. Of this, the very name “junk DNA” — shortly doubtless to get into the hole where “real” Darwinists — oops — don’t use that term — is proof. But increasingly “junk DNA” ain’t junk. Fact. (And observe how M et al, sadly, are ever so loath to acknowledge or responsibly interact with any facts or arguments that do not fit the talking points caricatures. WHY: “he who frames the terms and context of a debate wins it.” But, let us never ever forget: debate is that wicked art that makes the worse appear the better case, being therein abetted by the deceptive arts of rhetoric, which are about persuasion, not proof. [Adapted, Jefferson; onward, all the way back to Socrates.] The best antidote to toxic talking points and associated spin games is to insist on a focus on the balance of the material facts, logic and inference to best explanation. [Cf here.])

    3 –> Further fact: the long term understanding of the complexity of life forms is growing and has long been growing. To the point where design is a serious alternative explanation.

    4 –> Designs often contain non-optimal elements [just think Vista . . . ], and may contain items that are non-functional, for many reasons. That does not undercut the point that such entities are designed and may be seen to be designed per their complex, information rich functionality.

    5 –> Moreover, optimality is situation-specific and goal specific [Think: MAX/MIN of {Obj Function} SUBJECT TO {constraints}.] So, the objection that unless we understand purpose and context we cannot judge optimality is valid.
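    [The bracketed formulation — maximise an objective function subject to constraints — can be made concrete with a minimal sketch; the objective and ranges below are invented for illustration. The same performance curve yields different optima as the feasible set changes, which is the sense in which optimality is situation- and goal-specific.]

```python
def argmax(objective, feasible):
    """Return the point in the feasible set that maximizes the objective."""
    return max(feasible, key=objective)

# A single-peaked performance curve, best at x = 7.
objective = lambda x: -(x - 7) ** 2

# Same objective, different constraint sets -> different "optimal" designs.
unconstrained = argmax(objective, range(0, 20))  # global peak is reachable
constrained   = argmax(objective, range(0, 5))   # best available point, not the global peak
print(unconstrained, constrained)
```

    [So a design judged "sub-optimal" against one constraint set may be exactly optimal against another — which is the point being made about context and goals.]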

    6 –> However, designs usually incorporate in the main, functional elements. So when the Darwinists were busily suggesting that most/much DNA is junk (the product of random genetic accidents across the ages), or at least a large fraction of it is, it was not very hard for design thinkers to see that this was likely to be false; e.g. some DNA was likely to be involved in the regulation of the sections that code for proteins, etc.

    7 –> So, a risky prediction was made, per the insights provided by a different paradigm than chance + necessity only. One that is increasingly being vindicated by onward findings over the past decades; with a strong further trend to that side.

    _____________

    But, we must not forget the basic facts: a claim was made to try to discredit design thought, and that claim is being overturned by onward research as we speak. The case of junk DNA that ain’t junk no more is illustrative.

    GEM of TKI

  8. 8
    mauka says:

    Oramus writes:

    Rather, IMO this prediction is based on a logical conclusion that the extraordinary level of biological complexity we observe requires a high level of efficiency. Junk DNA does not fit this profile and therefore is a failed interpretation of empirical observation.

    Oramus,

    Many polyploid species are perfectly viable despite their inefficient use of DNA, so it would be a lost cause for an ID supporter to assert that junk DNA must be functional in order to support complex biology.

    Also, remember that ID proponents like to say that “intelligent design is not optimal design.” There is a reason for that. They cling to this dictum, because without it they would have no defense against charges that no competent designer would ever come up with something as wasteful and inefficient as, for example, the giraffe’s recurrent laryngeal nerve:

    In mammals, for instance, the recurrent laryngeal nerve does not go directly from the cranium to the larynx, the way any competent engineer would have arranged it. Instead, it extends down the neck to the chest, loops around a lung ligament and then runs back up the neck to the larynx. In a giraffe, that means a 20-foot length of nerve where 1 foot would have done. If this is evidence of design, it would seem to be of the unintelligent variety.

    — I’ve always felt this argument quite poor, for reasons already given. And that discussion does not even take into account potential Intelligent Evolution hypotheses. — Patrick

  9. 9
    kairosfocus says:

    Onlookers:

    Let us not forget that a matter of checkable fact is on the table, one where the evo mat objectors to ID are demonstrably in error.

    Let us see if we can find evo mat advocates (or fellow travellers) willing to acknowledge the basic fact before us.

    (And, if such are unable to acknowledge the existence and import of an easily accessible fact in the here and now world in front of us, what does that tell us about their ability to handle the issues of a remote and unobserved, unrepeatable past?)

    GEM of TKI

  10. 10
    mauka says:

    Onlookers (heh — KF, you crack me up),

    I asked KF the following two questions:

    1. Is it possible for a designer to choose to include nonfunctional ‘junk’ elements in his or her design? Please answer ‘yes’ or ‘no’; a treatise is unnecessary.

    2. Assuming that your answer to question #1 is ‘yes’ (the only reasonable answer), then how can ID predict that junk DNA is actually functional without knowing something about the designer’s objectives?

    His answer to #1 was a verbose ‘yes’, as expected.

    Note that he did not answer #2. Think about it and you will see why; he cannot answer #2 without undermining his own position.

  11. 11
  12. 12
    mauka says:

    KF,

    Credit where credit is due.

    Your argument isn’t very effective, but I’m impressed with its brevity.

  13. 13
    kairosfocus says:

    Onlookers

    Again, note that there is in this case a basic question of brute fact.

    Design theory has been accused: “ID does not make scientifically fruitful predictions.”

    In response, facts to the contrary – with specific named prominent scholars/scientists on both sides of the question — have been adduced, cf the original post:

    To cite just one example, the non-functionality of “junk DNA” was predicted by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions. In contrast, on teleological grounds, Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004) predicted that “junk DNA” would be found to be functional.

    The Intelligent Design predictions are being confirmed and the Darwinist predictions are being falsified. For instance, ENCODE’s June 2007 results show substantial functionality across the genome in such “junk DNA” regions, including pseudogenes.

    Thus, it is a matter of simple fact that scientists working in the ID paradigm carry out and publish research, and they have made significant and successful ID-based predictions.

    A more general and long term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought. Darwin thought the cell was a relatively simple blob of gelatinous carbon. He was wrong. We now know the cell is a high-tech information processing system, with superbly functionally integrated machinery, error-correction-and-repair systems, and much more that surpasses the most sophisticated efforts of the best human mathematicians, mechanical, electrical, chemical, and software engineers. The prediction that living systems will turn out to be vastly more complicated than previously thought (and thus much less likely to have evolved through naturalistic means) will continue to be verified in the years to come.

    These facts cannot be controverted, as they are well established and easily followed up. So, we see resort to a distractor that rests on misrepresentations.

    (NB: In a world of entropic forces, that we may currently observe that which is non-functional may simply reflect the passage of time and the ravages of events. Further to this — on our observation and experience of real world design over many centuries and cultures — a robust, widely adaptable design will not in general be optimised for a specific environment, so will normally be sub-optimal on performance in any one environment at any given time. But, overall, it will outperform the overly specialised once we have environments that are reasonably variable across time and space. And, as designers and observers of designs ourselves, we are in every position to know that.)

    So, we must again pose the challenge:

    if such [evo mat advocates, their fellow travellers, and general detractors of ID] are unable to acknowledge the existence and import of an easily accessible fact in the here and now world in front of us, what does that tell us about their ability to handle the issues of a remote and unobserved, unrepeatable past?

    Distractive talking points are simply not good enough.

    GEM of TKI

  14. 14
    Alan Fox says:

    Of this, the very name “junk DNA” — shortly doubtless to get into the hole where “real” Darwinists — oops — don’t use that term — is proof. but increasingly “junk DNA” ain’t junk.

    But who is working at finding out explanations and functions for non-coding DNA? Granted, “junk” was an unfortunate meme that stuck, but is it not mainstream science that is the main engine of research here? Or am I missing some ID research into this area?

  15. 15
    kairosfocus says:

    Mr Fox:

    That is a cross-complaint, which has been answered elsewhere in the WACs, at no 3.

    (FREE ADVICE: If you have a wife, do not try to answer an issue she raises by making a cross-complaint. Worse, if you have a boss or even a colleague or subordinate. Cross-complaining is the death of any positive outcome. All it does is compounds the issue through providing a distraction.)

    It turns out that research is research and scientific knowledge gained through research is a commons, not a property of any one faction. To such research there is a right of fair comment and critique or analysis. (And such analysis is itself often research in its own right; direct empirical investigations are not the only or even in many cases the most important form of research. For science as a cultural programme seeks to describe, explain, predict and influence or control empirical phenomena.)

    It turns out that the ENCODE results are pointing to one side of an issue, and are showing that indeed the ID thinkers listed above were right.

    In short, the answer in WAC 4 is correct on the merits.

    GEM of TKI

  16. 16
    KRiS_Censored says:

    Could someone please give a detailed explanation of exactly how ID predicts that “junk DNA” is actually functional? It doesn’t seem to follow from the theory as I understand it, but perhaps that’s just because I don’t properly understand it.

  17. 17
    Diffaxial says:

    This FAQ should be much stronger.

    Some time ago, on the “A word about our new moderation policy” thread (which had long wandered off topic), a poster named Reciprocating_Bill posed a question directly pertinent to this FAQ, without getting a clear or convincing answer:

    Describe an entailment unique to ID that, were there sufficient funding, could be subject to empirical test. Something that follows from ID such that were we to fail to observe it, ID or a major tenet of ID would be falsified. Your research question should conform to this very simple logical format:

    If P2 (ID) then Q2 (a unique entailment of ID).
    If not Q2 (if the entailment is not observed), then not P2 (ID is false).

    Describe that Q2, and describe how you would research it. Imagine unlimited funding.

    Assertions regarding the entailments of P1 (evolutionary biology) don’t meet this standard, because it doesn’t follow that if not Q1 then P2. That rules out re-interpretations of ongoing biological research.

    I’m truly interested. Surprise me.

    Eventually his question boiled down to this:

    Would you please complete the following?

    If design is true then we should observe ________. If we don’t observe _______ our theory is at risk of disconfirmation.

    There were a lot of responses but no good reply, IMHO. Ironically, on a thread in which Barry had announced and clarified UD’s new moderation policy, Reciprocating_Bill was banned for reasons that explicitly and directly contradicted that newly articulated policy – leaving the inescapable impression that he was really banned for asking an important question for which no one here has a good answer. That is how it looked to me.

    But the question itself was well-taken, and must be addressed squarely if you want to establish ID’s credibility as a science. If you want your FAQ to be truly convincing, it should specify a number of clear and unequivocal responses to this challenge.

  18. 18
    Diffaxial says:

    The reason a prediction regarding the functional status of so called junk DNA is unsatisfactory is that the prediction did not put ID at risk. That is to say, had junk DNA remained junk DNA, it wouldn’t necessarily have followed that ID or a major tenet of ID is false, because it could easily be maintained that the designer implemented a design that included large stretches of junk DNA.

    You need a prediction that, were it to be disconfirmed, compels the conclusion that ID is false.

  19. 19
    Diffaxial says:

    And, preemptively, your prediction must not be in the form, “ID is falsifiable. All someone needs to do is show that RM & NS can generate complex biological systems.” That refers to the success or failure of predictions made by an alternative theory, and therefore fails to really test ID (both theories could be wrong).

  20. 20
    Adel DiBagno says:

    I think that a key word in the FAQ is “fruitful.” Only one ostensible example, that of “junk” DNA, is given, implying that the issue is settled thereby. More examples should be easy to come by, and should be added.

    I think also that the statement:

    A more general and long term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought.

    is not even remotely dispositive, being vague and not unique to any theory of origins.

  21. 21
    mauka says:

    kairosfocus is unable to answer this question:

    How is it possible for ID to predict that junk DNA is functional if nothing is known about the designer’s objectives?

    Any other takers?

  22. 22
    jerry says:

    I do not think Junk DNA is something that ID wants to hang its hat on, at least in the way that most here seem to imply. While I believe that a lot of the so called Junk DNA has function, there is still the possibility that a large amount of it currently doesn’t. And by the way, that doesn’t invalidate ID and it definitely doesn’t support any form of the Darwinian paradigm.

    I would like to ask some questions on the ENCODE project. It is about my impressions and I could definitely be wrong which is why I am asking questions.

    Is the basis that the DNA is not junk because nearly all of it is transcribed? And if so has any function been found for all these transcribed elements? I assume a lot of them will never be translated into proteins but serve some purpose as an RNA polymer. But do all the RNA polymers have a function? We know that some do have highly specific functions.

    Is it possible that many of these transcribed RNA polymers have no current function? And if so then maybe their ultimate function is to be reverse transcribed into the DNA polymer for some future use. Is this one of the ways that the DNA molecule is modified over time to include more information?

    Now this is certainly a naturalistic point of view but could it also be an ID point of view if true? What I am describing is the basis of how many in the evolutionary biology community think genomes get changed over time to produce the new complex novel functional elements. How feasible is this? They (many in the evolutionary biology community) believe so but is it really?

    So what I am asking is if the ENCODE project has really found uses for all this junk DNA or have they just found they are transcribed?

    Remember that there are all sizes of genomes out there for very primitive organisms and maybe we will find out why these genomes are so big. But could it be just natural processes producing these genomic elements by the various engines of variation and most of the genomes have no current functional use?

  23. 23
    jerry says:

    ID is many theories, some of them contradictory. So one has to invalidate each theory and cannot really invalidate all of them with one killer experiment or series of experiments.

    There is front loading of different types, and as John Davison has so persistently argued, no one has invalidated his model, while he has said he has invalidated the Darwinian paradigm.

    There is a continuing intervention model which fits all of the data but which is unacceptable to most since it implies some unknown super intelligence, which makes people uncomfortable. There really isn’t any disproving of such a model because, as we know, we expect that our own intelligence will eventually be able to duplicate the complexity of current life, especially at the cellular level. So the question becomes whether naturalistic forces can produce what we see in life. Which is why the battle is fought on this front. The rear guard actions such as purpose, how, when, why are only meant to distract from the main front.

    If materialists had any coherent theory to explain OOL or macro evolution they would drop it on the table and dare the opposing forces to counter this ultimate weapon. But since they only have small arms caliber weapons they attempt to fight elsewhere. They couldn’t care less about imperfect design or the contradictions of the monotheistic God or the apparent inadequacies of the so called “designer” if they had the goods.

  24. 24
    madsen says:

    jerry,

    I find it very odd to see the following two statements in the same post by the same person:

    ID is many theories, some of them contradictory. So one has to invalidate each theory and cannot really invalidate all of them with one killer experiment or series of experiments.

    ***

    If materialists had any coherent theory to explain OOL or macro evolution they would drop it on the table and dare the opposing forces to counter this ultimate weapon.

    In the second paragraph, you scold materialists for lacking a “coherent” solution to the OOL. Yet in the first paragraph, the incoherence, internal contradictions, and lack of falsifiability of ID theory is perceived as a strength.

  25. 25
    Adel DiBagno says:

    On fruitfulness, evolutionary theory keeps on giving. In the April 10, 2009 issue of http://www.sciencemag.org/ there’s a news item:

    “EVOLUTIONARY MEDICINE: Darwin Applies to Medical School,”

    which notes that courses in evolutionary medicine are increasingly being offered around the globe, and that there has been a sevenfold increase between 1995 and 2004 in direct interactions between evolutionary biology and medicine as shown by citation maps (cross-references in scientific papers).

  26. 26
    gpuccio says:

    Mauka:

    1. Is it possible for a designer to choose to include nonfunctional ‘junk’ elements in his or her design? Please answer ‘yes’ or ‘no’; a treatise is unnecessary.

    2. Assuming that your answer to question #1 is ‘yes’ (the only reasonable answer), then how can ID predict that junk DNA is actually functional without knowing something about the designer’s objectives?

    Your questions make no sense. Why do you ask if it is “possible” for a designer to include nonfunctional elements in his design? It is certainly “possible”, but it makes no sense. A designer includes elements for a reason. That reason can be imperfect, the elements can in the end fail to work as expected, or the reason may not be evident to some observer, but one of the characteristics of design is that the designer does things with a purpose.

    So, it is certainly “possible” that a designer includes completely non-functional elements in his design, but it’s certainly not a good explanation. Again you, like many ill-advised critics, confound logical necessity with empirical credibility. We are looking for best explanations, not for mathematical demonstrations.

    ID predicts that non-coding DNA is functional simply because it is there, it forms 98.5% of our genome, and there is no reason why, if the genome is designed, most of it should be non-functional. So, the best explanation from an ID perspective is that it is functional, but we still don’t understand the function.

    It’s darwinian theorists who have jumped to the conclusion that junk DNA was junk, and therefore evidence of the darwinian evolutionary mechanism. That was, IMO, not even a prediction (indeed, darwinian theory seems capable of predicting nothing or everything), but rather a foolish attempt to use strange and unexpected facts as some kind of support for a wrong theory.

  27. 27
    madsen says:

    jerry,

    Re: my post #24—I think I was probably a bit hasty in reading your statement in #23, and that you likely were not touting the diversity of sometimes contradictory theories as a strength of ID. My apologies if I misstated your position.

    Nevertheless, isn’t it a bit much to demand specifics from materialists regarding OOL when ID still hasn’t made any firm commitments on fundamental issues such as common descent?

  28. 28
    B L Harville says:

    Barry Arrington,
    In order for you to say that ID theory predicts no junk DNA you’re going to have to come up with an actual theory first. Saying the-Intelligent-Designer-did-it is just a tad too vague to qualify as a scientific theory.

  29. 29
    gpuccio says:

    Diffaxial:

    Eventually his question boiled down to this:

    Would you please complete the following?

    If design is true then we should observe ________. If we don’t observe _______ our theory is at risk of disconfirmation.

    There were a lot of responses but no good reply, IMHO.

    Let me try.

    If design is true then we should observe that in the natural emergence of new proteins in different species, as soon as we know more about them and about protein function, we will repeatedly observe “saltations” corresponding to a sudden increase in information in the emergent protein, without any possible selectable intermediate form, and well beyond the probabilistic capacities of random variation. This will be proved to be a constant and repeated observation throughout natural history, clearly observable and inferable through all the instruments that biological research will provide us with. In that case, the design explanation will remain the best explanation for such sudden inputs of functional information.

    If we don’t observe that, and if on the contrary we observe and define gradual paths of transformation perfectly in the range of what RV and NS can do, in most if not all cases for which enough detail is acquired, then our theory is disconfirmed.

  30. 30
    gpuccio says:

    Adel:

    I think also that the statement:

    A more general and long term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought.

    is not even remotely dispositive, being vague and not unique to any theory of origins.

    It is not vague. The main reason why the current theory based on randomness and necessity is not acceptable is exactly that: it cannot explain the levels of complexity we observe. It could not explain the levels of complexity we knew 20 years ago, and it can explain even less those we know today. While the depth and richness of biological complexity increases practically day by day, the explanatory power of the darwinian theory remains always the same: practically nil, except for a few minor cases of microevolution.

    That’s why if the level of known complexity goes on increasing at the present rates, darwinian theory is doomed even in the presence of the stubborn and blind loyalty of its current supporters.

    And, Adel, let’s not play games. There are not many “theories of origins”. If RV and NS are ruled out, what are we left with? I’ll tell you. Design, or no theory.

    Those who prefer, can opt for “no theory”. As for myself, I will stick to design.

  31. 31
    gpuccio says:

    Diffaxial:

    “You need a prediction that, were it to be disconfirmed, compels the conclusion that ID is false.”

    I really can’t understand why everybody is so obsessed with falsifying ID. After all, the general position of darwinists is that ID is not even a scientific theory. So, why all this trouble?

    After all, we in ID are only stating that design is the best explanation for a big part of reality, which the current theory pretends to explain while in reality it doesn’t explain the least part of it. What is so dangerous in such a strange conviction? After all, the only thing you need to “falsify” ID is a good theory which explains that part of reality better than ID.

    I will make a very clear statement here: any other theory which can credibly explain biological information without assuming a designer is better than ID. So, just show one such theory, and I will admit that ID has been falsified and no longer has any role in science.

    So, you see, it’s easy. Do it!

    After all, as Dawkins teaches, darwinian evolution is a theory which shows how things which appear designed are in reality explainable without a designer. So, I am just waiting for darwinian theory to satisfy its premises and its promises. What need will we have of ID, then?

  32. 32
    Alan Fox says:

    If design is true then we should observe that in the natural emergence of new proteins in different species, as soon as we know more about them and about protein function, we will repeatedly observe “saltations” corresponding to a sudden increase in information in the emergent protein, without any possible selectable intermediate form…

    But if the process of RV and NS is true, we will also expect to see sudden increases in information, as a single point mutation can cause a large phenotypic effect (“saltation” if you will) as in achondroplasia.

    …and well beyond the probabilistic capacities of random variation.

    How do you establish what is beyond the capacity of random variation probabilistically?

  33. 33
    Joseph says:

    Alan Fox:

    But if the process of RV and NS is true, we will also expect to see sudden increases in information, as a single point mutation can cause a large phenotypic effect (”saltation” if you will) as in achondroplasia.

    Achondroplasia doesn’t even result in a different species.

    It is nothing more than a variation within a species.

    Also Dr Behe’s “Edge of Evolution” has withstood attacks.

    IOW you don’t have enough time to get two specified mutations to accumulate.

  34. 34
    Joseph says:

    B L Harville,

    What is the theory of evolution other than saying “it evolved”?

    For example, how can we test the premise that the bacterial flagellum evolved from a population that never had one via an accumulation of genetic accidents?

    What is the hypothesis for such a thing?

    Trying to answer that will demonstrate just how vague and useless the “theory” is.

  35. 35
    Joseph says:

    If design were true, as with ALL design-centric venues, we would expect to see signs of design. If we do not observe signs of design then ID is at risk of disconfirmation.

    And as I said both IC and CSI are signs of design and to refute that all one has to do is demonstrate that IC and CSI can arise without agency involvement.

  36. 36
    gpuccio says:

    Alan Fox:

    But who is working at finding out explanations and functions for non-coding DNA? Granted, “junk” was an unfortunate meme that stuck, but is it not mainstream science that is the main engine of research here? Or am I missing some ID research into this area?

    Junk was a stupid and arrogant attempt at finding new support for darwinian evolution. It was supposed to show how the presence of such a big quantity of non functional code was evidence of no design, and of the gradual accumulation of errors throughout natural history. It was a deformation of perspective, fully intentional and purposeful, the bad child of a bad theory and of bad dogma.

    Again, nobody can equate biological research, or what you call “mainstream science”, with darwinian theory. Biological research has and must have one purpose: discovering new facts. Darwinian theory is a wrong theory: new facts will destroy it. Therefore, any good research is ID research, if ID is a better theory than darwinism, as we believe. Or any good research will support darwinism in the other case.

    There is a very simple fact here that a lot of people seem to miss: ID and darwinism are largely alternative theories. They interpret the known facts with completely different models. It is obvious that, as facts accumulate, it will be easier to better compare those contrasting models.

    Is it so difficult to understand that the accumulation of sequenced genomes will give many answers to questions which can well differentiate between the ID and darwinian position? And that the same is true for protein engineering, for the study of transcriptomes, and for many other fields of biology which are in very quick progress today?

    Our only purpose is to understand. If darwinian theory is a good theory, it will easily be proven such as our understanding grows.

    But the opposite is true, too.

  37. 37
    Alan Fox says:

    Hi Joe,

    Where was it you were studying marine biology? I must have missed your answer.

  38. 38
    Alan Fox says:

    Junk was a stupid and arrogant attempt at finding new support for darwinian evolution.

    This doesn’t make sense. Do you mean the coining and rapid uptake of the word “junk”? Sorry, you will have to parse.

  39. 39
    gpuccio says:

    Alan Fox:

    But if the process of RV and NS is true, we will also expect to see sudden increases in information, as a single point mutation can cause a large phenotypic effect (”saltation” if you will) as in achondroplasia.

    No, I will not. A single point mutation is in no way a “saltation” in information content. I think you miss the point completely.

    a) One amino acid mutation is obviously in the range of what RV can do.

    b) Destroying a function by even one single random mutation is certainly possible and easy. Building a new complex function by hundreds of coordinated random mutations is not.

    I think you really don’t understand ID. You ask:

    “How do you establish what is beyond the capacity of random variation probabilistically?”

    Please, go to the thread “Extra Characters to the Biological Code”, where a very detailed discussion about that is taking place between me, Adel DiBagno and Nakashima. We cannot repeat the same things thousands of times, each time somebody wants to evade the actual discussion being made.

  40. 40
    Diffaxial says:

    gpuccio:

    If design is true then we should observe that in the natural emergence of new proteins in different species, as soon as we know more about them and about protein function, we will repeatedly observe “saltations” corresponding to a sudden increase in information in the emergent protein…
    If we don’t observe that, and if on the contrary we observe and define gradual paths of transformation perfectly in the range of what RV and NS can do, in most if not all cases for which enough detail is acquired, then our theory is disconfirmed.

    This strikes me as an ad hoc prediction, rather than (as Reciprocating_Bill would have asserted) a “necessary entailment” of ID theory. If you are advancing a peculiar form of ID that constrains the designer to the utilization of change by saltation only, you should state why you believe that must be so, within the framework of your theory. Otherwise, the failure to observe what you predict could easily be accommodated by stating, “the designer utilized gradualistic change that also happens to be within the range of what mutation and selection can accomplish.”

    Your prediction also has the flaw that it “predicts” something that we already know: that the dynamics of protein folding permit only certain stable protein configurations, “between” which we don’t expect to find gradual transitions of form.
    Lastly, the second portion of your assertion runs afoul of this:

    And, preemptively, your prediction must not be in the form, “ID is falsifiable. All someone needs to do is show that RM & NS can generate complex biological systems.” That refers to the success or failure of predictions made by an alternative theory, and therefore fails to really test ID (both theories could be wrong).

    The failure of nature to conform to the predictions of orthodox evolutionary theory does not amount to support for ID: both theories can be wrong. Therefore predictions arising from “the other” theory are not really tests of ID.

  41. 41
    gpuccio says:

    Alan Fox:

    “This doesn’t make sense. Do you mean the coining and rapid uptake of the word “junk”? Sorry, you will have to parse.”

    No, I mean the silly coining and the rapid uptake of the theory that 98.5% of the human genome was due to the accumulation of non functional material throughout darwinian evolution of the species.

  42. 42
    Alan Fox says:

    Not long after the genetic code was first elucidated and all triplets were found to code via RNA for an amino acid (or stop codon), I can recall wondering where all the other information that must be needed for growing, say, a human being was stored, if the DNA only coded for proteins. Later it was a surprise to hear that genomes contained so much non-coding DNA. I don’t recall the existence of junk DNA being an indication that the theory of evolution was thus more or less likely to be correct. Just that our understanding was far from complete. Work continues.

  43. 43
    Alan Fox says:

    b) Destroying a function by even one single random mutation is certainly possible and easy. Building a new complex function by hundreds of coordinated random mutations is not.

    I am not suggesting hundreds of coordinated mutations. But single sifted cumulative mutations…

  44. 44
    Alan Fox says:

    I think you really don’t understand ID.

    I really don’t. you are absolutely right.

  45. 45
    gpuccio says:

    Diffaxial:

    Otherwise, the failure to observe what you predict could easily be accommodated by stating, “the designer utilized gradualistic change that also happens to be within the range of what mutation and selection can accomplish.”

    No, you are wrong. The problem here is not whether the designer acted more or less gradually. If you knew ID, you would know that the theory infers design only if the observed information is beyond the range of what mutation and selection can accomplish.

    Therefore, your objection makes no sense. If a good theory based on randomness and/or necessity can explain the emergence of biological information, then ID is falsified. Creationists may go on saying that “the designer utilized gradualistic change that also happens to be within the range of what mutation and selection can accomplish.” But ID will be falsified.

    Again, you don’t know, or don’t understand, what ID is.

    You say:

    The failure of nature to conform to the predictions of orthodox evolutionary theory does not amount to support for ID: both theories can be wrong. Therefore predictions arising from “the other” theory are not really tests of ID.

    Yes, it does amount to support for ID. Again, you forget that we are talking empirical science here, and not logical demonstrations. I am really surprised at how often I have to repeat this simple epistemological concept, which should be obvious to anybody who deals with empirical science.

    If what you call “orthodox evolutionary theory” is proved wrong, then ID is at present the best explanation, I would say the only explanation. You say: “both theories can be wrong”. That’s true, and so? If and when a better explanation arises, we will be happy to embrace it. We are not darwinists, after all 🙂

    Until then, ID remains the best explanation. Because you seem to miss the important point: ID does explain biological information, and darwinian theory doesn’t.

  46. 46
    Alan Fox says:

    Please, go to the thread “Extra Characters to the Biological Code”, where a very detailed discussion about that is taking place between me, Adel DiBagno and Nakashima. We cannot repeat the same things thousands of times, each time somebody wants to evade the actual discussion being made.

    I have been reading the thread as time permits.

  47. 47
    Alan Fox says:

    ID does explain biological information, and darwinian theory doesn’t.

    This is obviously the crux of what I don’t understand. In the hope this is not naive, like the small boy watching the emperor parade in his new clothes: how does ID explain biological information? Is it really just that a designer steps in and adds bits as necessary?

  48. 48
    gpuccio says:

    Alan Fox:

    I can recall wondering where all the other information that must be needed for growing, say, a human being was stored, if the DNA only coded for proteins.

    You definitely had the right attitude. That question remains absolutely valid, although many seem to have forgotten it.

    Work continues.

    I agree. And I am happy about that.

    I am not suggesting hundreds of coordinated mutations. But single sifted cumulative mutations…

    Again, go to the discussion on the other thread, please. Mutations can accumulate only if each is selected. Otherwise, their probability is equal to the cumulative probability of the whole change which brings the new function.

    I will accept the “cumulative mutations” model only when someone shows at least one model where those single mutations are fixed and expanded by natural selection. In other words, when you can deconstruct the whole path of transformation towards a new protein into single steps of 1 or 2 amino acid mutations (indeed, for the sake of discussion, I have conceded 4 to Adel), and each step is selectable for a new or increased function bringing a reproductive advantage. IOW, a single credible model of macroevolution.
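
    To put rough numbers on the point about cumulative probability (a toy calculation with assumed round-number rates, not biological data; `p`, `N` and `k` are all hypothetical):

    ```python
    import math

    p = 1e-8   # assumed probability of one specific point mutation per replication
    N = 1e12   # assumed number of replications available to the population
    k = 5      # number of coordinated mutations needed for the new function

    # Chance that one specific mutation appears at least once in N replications:
    one_step = -math.expm1(N * math.log1p(-p))   # = 1 - (1 - p)^N

    # If every intermediate is selectable and fixed, the k steps are searched
    # one at a time; if no intermediate is selectable, all k must co-occur.
    stepwise = one_step ** k
    simultaneous = N * p ** k   # ~ 1 - (1 - p^k)^N, for small p^k

    print(stepwise)       # ~ 1.0
    print(simultaneous)   # ~ 1e-28
    ```

    With these assumed numbers the selectable path is essentially certain while the unselected path is hopeless, which is the sense in which unselected probabilities multiply.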

  49. 49
    gpuccio says:

    Alan Fox:

    If you observe a saltation in information content, and that saltation has both complexity (it is utterly unexplainable as a random effect) and specification (it has function and purpose), and there is no theory based on necessity which can explain that change, then the only observed explanation of that kind of situation in our whole observational experience is the action of a designer.

    So, yes, a designer steps in and adds bits of functional information as necessary. According to a plan. For a purpose. Consciously. Like we are doing when we blog here.

  50. 50
    Joseph says:

    Genomes may contain a lot of non-protein-coding DNA, but that DNA still codes for other RNAs and regulates the use of those protein-coding sections.

    Genomes also contain the information required for proof-reading, error-correction and editing.

    None of which would lead an objective person to the evolutionary position of accumulated genetic accidents.

  51. 51
    Hoki says:

    Oramus:

    Why would you think predicting all DNA having some type of function requires reference to a designer?

    Because in order to do so, you have to make the assumption that the designer COULD and WANTED all DNA to have function. ID doesn’t make that assumption any more than it assumes that the designer was the god of the bible.

    To use Dembski’s own words:
    (http://www.arn.org/docs/dembsk.....gclean.htm)

    To be sure, designers, like natural laws, can behave predictably. Yet unlike natural laws, which are universal and uniform, designers are also innovators. Innovation, the emergence of true novelty, eschews predictability. It follows that design cannot be subsumed under a Humean inductive framework. Designers are inventors. We cannot predict what an inventor would do short of becoming that inventor

  52. 52
    mauka says:

    Alan Fox writes:

    I can recall wondering where all the other information that must be needed for growing, say, a human being was stored, if the DNA only coded for proteins. Later it was a surprise to hear that genomes contained so much non-coding DNA. I don’t recall the existence of junk DNA being an indication that the theory of evolution was thus more or less likely to be correct. Just that our understanding was far from complete. Work continues.

    True. There is a certain desperation on the part of ID supporters in their search for supposed failed predictions of “Darwinism”.

    Their algorithm seems to work like this:

    1. Wait for working scientists to make an interesting new discovery X.

    2. Find a “Darwinist” who is either surprised by X or who actually predicted not-X.

    3. Claim that “Darwinism” therefore predicts not-X and that the prediction has been falsified.

    Step 2 is easy since 99.9% of practicing biologists are “Darwinists” and there is always a range of opinions regarding any hot area of research.

    What ID supporters never do is to explain how Darwinian principles lead to the supposed “prediction”. It’s always just “a Darwinist said it; therefore Darwinism predicts it.”

    In the hope this is not naive like the small boy watching the emperor parade in his new clothes, but how does ID explain biological information? Is it really just that a designer steps in and adds bits as necessary?

    Under the “big tent” of ID, I’m aware of three possible “mechanisms” by which the designer can impart information to a genome:

    1. The designer creates lifeforms fully formed, ab initio. These forms can change over time but only within strict limits enforced by stabilizing selection. What I’m describing is, of course, creationism. To borrow the language of Barry’s other thread, you might call this the macropoof mechanism. One big Poof followed by no active intervention.

    2. The designer inserts information discretely over time, guiding descent with modification as it unfolds. In keeping with the language of the other thread, you might call this the micropoof mechanism. Lots of little poofs along the way, as needed.

    3. The designer imparts the information all at once, as in mechanism #1, but different portions of the information are turned on and off at different times and places as descent with modification unfolds. The expression of previously dormant information is presumably triggered by internal timer mechanisms or by external environmental cues, or both.

    Mechanism #2 is the most plausible, but it is embarrassing to ID supporters because of its reliance on continual micropoofs over time.

    Mechanism #1 has the advantage of requiring only one embarrassing poof, discreetly tucked away in the mists of history. However, it runs afoul of the enormous mass of scientific evidence in favor of descent with modification.

    Mechanism #3 shares the advantage of requiring only one embarrassing macropoof, and it doesn’t butt its head against the evidence for descent with modification. It does, however, have a huge flaw: it depends on the preservation, over billions of years, of unexpressed genetic information. Natural selection can only act to preserve information that is expressed. Unexpressed information quickly decays due to accumulated mutations.
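
    The decay claim can be given a rough scale (assumed round numbers, purely illustrative; `u`, `gens` and `L` are hypothetical values):

    ```python
    import math

    u = 1e-8     # assumed neutral substitution rate per site per generation
    gens = 1e9   # assumed generations elapsed over deep time
    L = 1000     # sites in a hypothetical dormant (unexpressed) element

    subs_per_site = u * gens                    # expected hits per site = 10
    frac_untouched = math.exp(-subs_per_site)   # Poisson chance a site is never hit
    intact_sites = L * frac_untouched           # expected never-mutated sites

    print(subs_per_site, intact_sites)
    ```

    Under these assumptions, fewer than one site in the element escapes mutation, illustrating why unselected sequence is expected to randomize over such timescales.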

    For these reasons, I think that most ID supporters, when they talk about it at all, will admit (reluctantly and with some embarrassment) to being micropoofers. There are definitely lots of macropoofers and front-loaders out there, however.

  53. 53
    jerry says:

    I have a question. Is your comment a micropoof or a macropoof? And does that make you a poofer?

  54. 54
    jerry says:

    I’m sorry if I appeared sexist. You could be a poofette.

  55. 55
    Hoki says:

    Another quote from Dembski:
    (Very similar to the one I gave in #51)
    (http://www.arn.org/docs/dembsk.....stable.htm)

    But what about the predictive power of intelligent design? To require prediction fundamentally misconstrues design. To require prediction of design is to put design in the same boat as natural laws, locating their explanatory power in an extrapolation from past experience. This is to commit a category mistake. To be sure, designers, like natural laws, can behave predictably (designers often institute policies that end up being rigidly obeyed). Yet unlike natural laws, which are universal and uniform, designers are also innovators. Innovation, the emergence of true novelty, eschews predictability. Designers are inventors. We cannot predict what an inventor would do short of becoming that inventor. Intelligent design offers a radically different problematic for science than a mechanistic science wedded solely to undirected natural causes. Yes, intelligent design concedes predictability.

  56. 56
    mauka says:

    Wow. I wonder how Dembski can reconcile that with his junk DNA position.

    Does anyone have a link to Dembski’s original statement regarding junk DNA?

  57. 57
    mauka says:

    Never mind. I found it (or at least one of them) here:

    Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA as much as possible to exhibit function.

    Intelligent Design, p. 150

    Bill, since you seem to be hanging out at UD tonight, could you comment on the discrepancy?

  58. 58
    Hoki says:

    Dembski makes two mistaken claims in the quote in #57:

    1. ID predicts that there will be little junk DNA.

    2. Evolution predicts that there will be lots of junk DNA.

  59. 59
    gpuccio says:

    mauka:

    Mechanism #2 is the most plausible, but it is embarrassing to ID supporters because of its reliance on continual micropoofs over time.

    Why embarrassing? I am not embarrassed at all.

    For these reasons, I think that most ID supporters, when they talk about it at all, will admit (reluctantly and with some embarrassment) to being micropoofers.

    Why reluctantly? I am not reluctant at all. ID allows me to be an intellectually satisfied micropoofer 🙂

  60. 60
    gpuccio says:

    mauka:

    Just to clarify: all designers are micropoofers. When I write a post here, I am not just creating it in a single moment, and then waiting for the words to develop on the screen from my instant poof. I micropoof consistently and gradually when I write, when I speak, when I write computer code (which I do, just a little), and so on. And so, I am certain, do you.

    So, welcome to the category of micropoofers. Why would you want to shut out from our group the designer(s) of biological information?

  61. 61
    mauka says:

    kairosfocus was unable to answer this question:

    How is it possible for ID to predict that junk DNA is functional if nothing is known about the designer’s objectives?

    gpuccio took a stab at it, somewhat obliquely:

    ID predicts that non coding DNA is functional simply because it is there, it forms 98.5% of our genome, and there is no reason why, if the genome is designed, most of it should be non functional. So, the best explanation from an ID perspective is that it is functional, but we still don’t understand the function.

    In other words, gpuccio, you are assuming that you know something about the designer’s objectives, and that putting large amounts of nonfunctional DNA into the genome isn’t among them. My point is that without this assumption, you could not make any predictions regarding the functionality of junk DNA.

    You are willing to assume that the designer would not do something as senseless (to you and me) as filling the genome mostly with junk, but that puts you at odds with those of your fellow ID supporters who insist that ID can make no assumptions about the designer’s motives and objectives (see the Dembski quote in #55, for example).

    It also opens a can of worms for you as an ID supporter. For if you argue that the designer would not fill the genome largely with nonfunctional junk, how can you argue that he would design something as wasteful as the recurrent laryngeal nerve of mammals, including the giraffe?

    In mammals, for instance, the recurrent laryngeal nerve does not go directly from the cranium to the larynx, the way any competent engineer would have arranged it. Instead, it extends down the neck to the chest, loops around a lung ligament and then runs back up the neck to the larynx. In a giraffe, that means a 20-foot length of nerve where 1 foot would have done. If this is evidence of design, it would seem to be of the unintelligent variety.

    And believe me, the recurrent laryngeal nerve is just the tip of the iceberg. There are dozens of other examples of wasteful, illogical and kludgy “designs” for you to confront. Are you prepared to explain why your sensible designer has done all these senseless things?

    — I’ve always felt this argument quite poor, for reasons already given. And that discussion does not even take into account potential Intelligent Evolution hypotheses. — Patrick

  62. 62
    gpuccio says:

    mauka:

    Always to clarify (and believe me, without any reluctance): I am not necessarily a strict micropoofer. According to what we know of natural history, some poofs are really big: OOL, the Cambrian and similar explosions, and so on. But many others seem to be more gradual. So, even in the field of poof analysis, I prefer to stick to empiricism, and not ideology 🙂

  63. 63
    gpuccio says:

    mauka:

    In other words, gpuccio, you are assuming that you know something about the designer’s objectives, and that putting large amounts of nonfunctional DNA into the genome isn’t among them. My point is that without this assumption, you could not make any predictions regarding the functionality of junk DNA.

    Correct.

    You are willing to assume that the designer would not do something as senseless (to you and me) as filling the genome mostly with junk, but that puts you at odds with those of your fellow ID supporters who insist that ID can make no assumptions about the designer’s motives and objectives (see the Dembski quote in #55, for example).

    I can live with that. As far as I know, ID is not a dogmatic club. I have been in partial disagreement with many of my fellow IDists, including Dembski, and still esteem and love them.

    And as far as I know, ID does not state that we cannot try to understand the motives of the designer. I absolutely agree, however, that the theory of ID proper (design detection) in itself does not give us clues about those motives. But once design is inferred, it is perfectly correct to try to ask other questions about the designer, and try to answer them through all possible means. That has always been my position here, and I believe it is shared by many.

  64. 64
    kairosfocus says:

    GP:

    Excellent inputs!

    In particular, the key data point is that 98.5% of the human genome is non-coding for protein, and that Darwinist thinkers latched on to it as “obviously” the result of accumulated mutational accidents = junk in the DNA attic.

    And, in the face of such a “strong” data point [we have to look back at how it looked when it was freshly apparent that a lot of the DNA set did not code for proteins], to predict that much of the 98.5% will turn out to have other functions BECAUSE designers usually incorporate functions in the major features of their designs was equally obviously risky. (Indeed, I recall being derided by an acerbic Darwinist in another forum several years ago, before the ENCODE results were in — “lookit all that junk”; surely a designer would not do that, etc etc.)

    But now, some years later, the evidence has begun to roll in, and — predictably! — there is an attempt to re-write the history and the terms of the exchange, to try to minimise the impact of the success of such a risky prediction.

    That tells us a lot.

    But also, we should note on the point that the WAC discussion was being deliberately brief and simple. In revising it I think a cite or two would help ground the facts further, and reference to the 98.5% “junk” figure would help.

    Onward, I observe the bluff that RV + NS is capable of getting to the sort of increments in genetic information we are discussing: starting at about 500 – 1,000 bits of info, equivalent to 250 – 500 DNA bases, or 170 amino acids. At the upper end of the threshold, we are talking about more configurations than the SQUARE of the 10^150 or so quantum states of the atoms of the observed universe across its estimated lifetime.
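
    The bit-count arithmetic in the claim above is easy to check (this only verifies the base-2 to base-10 conversion; the biological interpretation is of course the contested part):

    ```python
    import math

    # log10 of the number of configurations of a bits-long binary string
    for bits in (500, 1000):
        log10_configs = bits * math.log10(2)
        print(bits, round(log10_configs, 1))
    # 500 bits -> ~10^150.5 configurations; 1000 bits -> ~10^301.0,
    # i.e. slightly more than the square of 10^150.
    # At 2 bits per DNA base, 500 - 1,000 bits correspond to 250 - 500 bases.
    ```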

    Indeed — as you are wont to observe — even the lower end is very overgenerous, as for macroevo to happen in the window of the Cambrian on earth, or subsequently, we are talking of a lot fewer atoms on the Earth's surface (the whole earth weighs about 6 * 10^24 kg, much of that in Fe, Al, Ni, Si etc.: an Ni-Fe core and a crust largely of ceramic "oxides" such as SiO2 and Fe2O3, including carbon-bearing compounds like CaCO3, with the surface zone a thin crust of that), and a much shorter window.
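    The figures in the comment above can be checked directly. A minimal sketch in Python (the ~10^150 benchmark for quantum states of the observed universe is the comment's own estimate, not an independently sourced value):

    ```python
    # Sketch: compare the size of an n-bit configuration space
    # (2^n configurations) against the comment's ~10^150 benchmark.
    from math import log10

    BENCHMARK_LOG10 = 150  # comment's estimate: ~10^150 quantum states

    for bits in (500, 1000):
        configs_log10 = bits * log10(2)  # log10 of 2^bits
        print(f"{bits} bits -> ~10^{configs_log10:.1f} configurations")

    # 500 bits  -> ~10^150.5, on the order of the benchmark itself
    # 1000 bits -> ~10^301.0, exceeding the benchmark squared (10^300)
    ```

    This confirms only the arithmetic of the comment (that 2^1000 exceeds the square of 10^150), not the biological claims built on it.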

    Further to this, the empirical evidence is that just 2 – 3 small mutations is a very serious probabilistic barrier, on the largest scale investigation on such in human history, malaria. And, that fits into the theoretical expectations.

    Worse, we are not only dealing with the origin of individual proteins, but with the origin of novel body plans and associated organs, cell types and underlying proteins as a tightly integrated functional cluster that has to be expressed starting early in embryonic development. All, expressed in a digital, code-bearing, language based information system that works based on specifically co-tuned molecular nanomachines. (Which raises the onward origin of life issue of where did such a computer come from by spontaneous ordering forces, including the linguistic element of codes and the logical one of algorithms; credibly requiring 600,000 bits of information to get to a functioning independent life form.)

    We know where FSCI-rich systems are observed to come from: designers. Inference to best known explanation, anyone? (And a material distinction between empirically anchored inductive inference and a priori-based deductions? See why I occasionally have to refer to selective hyperskepticism, since science is about the former, not the latter.)

    We also know — per search resource exhaustion on the gamut of our observed universe issues — that forces of mechanical necessity and stochastic explorations of configuration spaces are maximally unlikely to arrive at such organisation starting from any reasonable pre-biotic environment; not to mention, to originate such integrated novel body plans in the same way by accident.

    So, why is the inference to design so stoutly resisted? ANS: a priori, worldview level commitments to materialism that have become embedded in key institutions in our civilisation. As Mr Lewontin has so openly and publicly declared.

    So, we have an explanation on the merits of the relevant facts, and in the context of the above, we have a clear point of fact: Darwinists on discovering a large proportion [~ 98.5%] of non protein coding DNA in our genome pounced on that as proof — they thought — of junk in the DNA attic. Design thinkers took the opposite position — designers are more likely to make functional entities, so we just don’t understand the function yet. (How I recall being derided over how the non-protein section did not seem to make any sense relative to the AA code!)

    A few years later, along comes ENCODE, and there is plain evidence of other function, taking up a good slice of the “junk.”

    Risky prediction confirmed, and in a context that brims over with fruitful potential.

    GEM of TKI

    PS: I think we need to focus on OOL then on macroevo in that light, when we come to making a systematic survey of the state of origins science, if we are to have a fair and balanced understanding of what is going on, what is at stake and where the balance on the merits lies.

  65. 65
    mauka says:

    gpuccio wrote:

    Just to clarify: all designers are micropoofers. When I write a post here, I am not just creating it in a single moment, and then waiting for the words to develop on the screen from my instant poof. I micropoof consistently and gradually when I write…

    gpuccio,

    We’re not micropoofers, because nothing supernatural has to happen for us to compose a comment. No poofs required. Your Designer, on the other hand, has to poof his edits into the biosphere — unless you are going to claim that your designer isn’t God, but rather some physical being, a genetic leprechaun who scampers around and alters genomes when we aren’t looking.

    P.S. Now that ‘poof’, ‘micropoof’ and ‘macropoof’ are becoming part of the vocabulary here, do you think Barry regrets bringing up poofs in the first place?

  66. 66
    mauka says:

    gpuccio:

    I absolutely agree, however, that the theory of ID proper (design detection) in itself does not give us clues about those motives.

    Then you presumably agree with the following statement:

    The theory of ID proper does not predict that junk DNA will be functional.

    Right?

  67. 67
    mauka says:

    gpuccio:

    I have been in partial disagreement with many of my fellow IDists, including Dembski, and still esteem and love them.

    That’s good news. It means that Dembski can still love himself, despite disagreeing with himself on this issue.

  68. 68
    TCS says:

    The “junk DNA” prediction of Darwinists is symptomatic of a larger problem with Darwinian theory. Lacking transitional fossils, they pointed to “vestigial” organs as filling the gap. When that failed, they pointed to junk DNA as a way to fill the gap. They also frequently point to “design flaws,” which is also often another major science stopper. Instead of examining the matter further, they simply point at things as being flawed as a result of evolutionary history without examining alternative hypotheses. Intelligent Design theory has an advantage in all of these areas and would promote fruitful scientific discoveries.

  69. 69
    Diffaxial says:

    gpuccio @ 45

    The problem here is not if the designer acted more or less gradually. If you knew ID, you would know that the theory infers design only if the observed information is beyond the range of what mutation and selection can accomplish.

    Unfortunately, you are equivocating on “weak ID” (merely asserting the possibility of design detection) and “strong ID” (positive assertions about design). Your original suggested prediction,

    If design is true … we will repeatedly observe “saltations” corresponding to a sudden increase in information in the emergent protein…

    is clearly an instance of “strong ID,” an assertion about what must be observed if ID is true. Were saltationism actually a necessary posit of your version of strong ID, your suggestion might meet Reciprocating_Bill’s original challenge. But arguing that “ID can only detect design when it rises above a certain threshold” isn’t an entailment of your theory. It is a limitation of your proposed method.

    You say: “both theories can be wrong”. That’s true, and so?

    And so ID needs to make positive predictions that are powerful enough to place it at risk of disconfirmation independently of the fate of predictions arising from an alternative theory.

    The failure of predictions arising from an alternative theory won’t do, because that result provides no information with bearing upon the question of whether ID is also wrong. The success of the alternative theory certainly would render “strong ID” irrelevant, without actually disproving it. Conceivably, both theories could be right in some instances, but ID’s uselessness when it comes to actually guiding research ultimately commands preference for the more productive theoretical framework.

    The fact that research conducted from within an alternative framework, which continues apace with no input whatsoever from the design hypothesis, represents your best case exemplar of research capable of disconfirmation of ID – and the fact that no one can suggest a stronger test such as requested by Reciprocating_Bill – should leave you deeply suspicious that ID has no relevance to empirical research whatsoever. So, no, these aren’t logical exercises. This is where the rubber meets the road.

    An effective rebuttal of the above is to offer responses to Reciprocating_Bill’s original challenge. Your FAQ will be much stronger if you do so. That’s all.

  70. 70
    Adel DiBagno says:

    gpuccio [30]

    I had written

    I think also that the statement:

    A more general and long term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought.

    is not even remotely dispositive, being vague and not unique to any theory of origins.

    To which you responded:

    It is not vague.

    Vague = lacking in detail. Just saying that you expect complexity is not dispositive because it doesn’t uniquely define what you expect.

    You also said:

    There are not many “theories of origins”. If RV and NS are ruled out, what are we left with? I’ll tell you. Design, or no theory.

    This is the logical fallacy of false disjunction:

    Either A or B
    Not A
    Therefore B

    See Diffaxial [19]:

    And, preemptively, your prediction must not be in the form, “ID is falsifiable. All someone needs to do is show that RM & NS can generate complex biological systems.” That refers to the success or failure of predictions made by an alternative theory, and therefore fails to really test ID (both theories could be wrong).

  71. 71
    Adel DiBagno says:

    Further on complexity in relation to design:

    Why must design products be complex?

    In some quarters, the hallmark of good design is simplicity.

  72. 72
    derwood says:

    It seems to me that in order for someone to make a prediction, they cannot or should not already know that what they are predicting has already been discovered.

    That is, in order for ID advocates to take credit for “predicting” function in junk DNA, such function should not already have been discovered; otherwise, they are not really making predictions.

    So I have to wonder what the status of papers like:

    Cell. 1975 Feb;4(2):107-11.

    The general affinity of lac repressor for E. coli DNA: implications for gene regulation in procaryotes and eucaryotes.

    In which a function for junkDNA was predicted, or maybe this one:

    Zuckerkandl (1981). A general function of noncoding polynucleotide sequences.

    wherein it is proposed that junk DNA acts as transcription factor binding sites and such (which it does).

    Or any of the papers cited here.

    What do ALL of these papers have in common?

    1. They were researched and written by run of the mill evolutionists.

    2. They pre-date the supposed ID predictions by decades.

    It is not fair to claim that ID advocates ‘predicted’ something after the fact. Wells’s supposed 2004 ‘prediction’ is especially suspect.

  73. 73
    Joseph says:

    mauka:

    There is a certain desperation on the part of ID supporters in their search for supposed failed predictions of “Darwinism”.

    The theory of evolution does NOT make any predictions based on the proposed mechanisms.

    So it would be very hard to find any “failed predictions”.

    And in the end all evolutionists have are magical mystery mutations.

    Magical because they change one thing into another.

    And mysterious because they have never been observed.

  74. 74
    Joseph says:

    Diffaxial,

    I have answered Reciprocating_Bill’s original challenge.

    That you choose to ignore it says more about you than it does about ID.

    And again YOU talk of predictions, yet the theory of evolution does not have any predictions based on the proposed mechanisms.

    IOW it appears as if you like to be ignorant.

  75. 75
    Joseph says:

    One for the evolutionists:

    If the theory of evolution is true then we should observe ________.

    If we don’t observe _______ our theory is at risk of disconfirmation.

  76. 76
    Diffaxial says:

    Joseph:

    Diffaxial,

    I have answered Reciprocating_Bill’s original challenge.

    Why not repeat your response here, for the purpose of being helpful to the FAQ?

  77. 77
    Hoki says:

    So…
    Is the reason that ID allegedly predicts that there will be little junk DNA that some prominent ID figures have said THEY think that’s the way it should be? That’s the only justification I think I’ve seen (in Barry’s original post).

  78. 78
    Joseph says:

    Diffaxial:

    If the theory of evolution is true then we should observe ________.

    If we don’t observe _______ our theory is at risk of disconfirmation.

  79. 79
    Joseph says:

    And once again:

    If design were true, as with ALL design-centric venues, we would expect to see signs of design. If we do not observe signs of design then ID is at risk of confirmation.

    And as I said both IC and CSI are signs of design and to refute that all one has to do is demonstrate that IC and CSI can arise without agency involvement- ie nature, operating freely.

  80. 80
    Alan Fox says:

    Re Joe’s question to Diffaxial

    Ooh pick me!!!

    “Nested hierarchy”

    BTW Joe, I must have missed your response about where you studied marine biology.

  81. 81
    Joseph says:

    Alan, the theory of evolution does not predict a nested hierarchy- nevermind a nested hierarchy based on the proposed mechanisms.

    IOW all you are doing is proving that you don’t know anything and you are just a blind follower of the unreasonable.

  82. 82
    Joseph says:

    And BTW we do not observe a nested hierarchy throughout living organisms.

    So if we listen to YOU the theory of evolution is disconfirmed.

  83. 83
    Alan Fox says:

    Alan, the theory of evolution does not predict a nested hierarchy- nevermind a nested hierarchy based on the proposed mechanisms.

    I know I only trained as a biochemist and not a marine biologist (where was that, Joe?) but I am pretty sure that if DNA comparisons, fossil evidence etc., showed that the expected pattern of common descent was wrong, the T of E would be in trouble. There is a former poster, Zachriel, who would explain it much more clearly than I could, if perhaps Barry could be persuaded to reactivate his account.

  84. 84
    Joseph says:

    Alan Fox:

    I know I only trained as a biochemist and not a marine biologist (where was that, Joe?) but I am pretty sure that if DNA comparisons, fossil evidence etc., showed that the xpected pattern of common descent was wrong, the T of E would be in trouble.

    But there isn’t any expected pattern based on random mutations and natural selection.

    And Zachriel couldn’t explain anything.

  85. 85
    Joseph says:

    What part of transcription and translation- complete with proof-reading, error-correction and editing, strikes you as being cobbled together via an accumulation of genetic accidents?

    And

    How can we test the premise that a bacterial flagellum, for example, arose from a population that never had one via an accumulation of genetic accidents?

  86. 86
    Joseph says:

    Oops- i forgot the “did”:

    If design were true, as with ALL design-centric venues, we would expect to see signs of design. If we do not observe signs of design then ID is at risk of DISconfirmation.

  87. 87
    Joseph says:

    Oops- “diS”…

  88. 88
    Alan Fox says:

    And Zachriel couldn’t explain anything.

    Well, that could be demonstrated if he were allowed the opportunity to comment.

  89. 89
    Diffaxial says:

    Joseph:

    If the theory of evolution is true then we should observe ________.

    – convergent phylogenetic hierarchies (e.g. paleontological and genetic)
    – chronological fossil series
    – geographic distributions of features
    – transitional forms (e.g. Tiktallik, the cynodont therapsids, hominid evolution, legged fossil whales, etc.)
    – inactivated human genes for the production of vitamin C
    – flightless bird species necessarily unique to the islands upon which they are found
    – incipient/recent speciation in allopatrically separated populations

    etc. etc. etc. etc.

    And as I said both IC and CSI are signs of design and to refute that all one has to do is demonstrate that IC and CSI can arise without agency involvement- ie nature, operating freely.

    Oops – another “test” that revolves around the lack of success of predictions arising from an alternative theory, which asserts that complex structures meeting the definition of ID arise by means of scaffolding, exaptation, etc.

    Whether or not you agree that the above predictions have been confirmed (another question entirely – and we already know you don’t, so don’t bother going there), what would strengthen this FAQ are predictions of analogous specificity that arise uniquely from ID, such that failure to observe puts ID at risk of disconfirmation.

  90. 90
    Diffaxial says:

    That’s “Tiktaalik.”

  91. 91
    derwood says:

    What part of transcription and translation- complete with proof-reading, error-correction and editing, strikes you as being cobbled together via an accumulation of genetic accidents?

    Arguments via personal incredulity are not very effective.

    How can we test the premise that a bacterial flagellum, for example, arose from a population that never had one via an accumulation of genetic accidents?

    It would appear that there is sufficient evidence to indicate the “the” bacterial flagellum arose via the cooption of parts of other systems, but I do not consider myself well versed on the subject.

    How do you propose we test the hypothesis that “the” bacterial flagellum was designed by a non-natural intelligence?
    Via analogy?
    Analogies are not arguments or evidence.

    It is a shame that despite the fact that the subject of this thread is the supposed “prediction” of junk DNA function by ID advocates – ‘predictions’ made in some cases decades after functions had already been identified – that very fact is being ignored.

  92. 92
    gpuccio says:

    Diffaxial (#69):

    I appreciate your interesting contributions, but I have to disagree on many points.

    1) You say:

    But arguing that “ID can only detect design when it rises above a certain threshold” isn’t an entailment of your theory. It is a limitation of your proposed method.

    I don’t understand what you mean. First of all, I don’t understand what you mean by “weak ID”. The ID we practice here is certainly strong. It certainly makes “positive assertions about design”. I am not interested in your “weak ID” (merely asserting the possibility of design detection), and so I can’t see how I could be equivocating on it.

    To be clear: my ID is strong. It makes positive assertions about design. Indeed, it gives quantitative methods to infer it. So is the ID of all IDists I know of.

    That clarified, the fact that “ID can only detect design when it rises above a certain threshold” is not a limit of my proposed method: it is the essence itself of the design inference in ID. To be more clear, ID has never stated that “all” design is detectable. ID states that design is detectable only if it rises above some threshold of complexity. That is ID theory. Anything else is not ID.

    Therefore, I can’t understand your problems with my statement. Saltationism is not “a necessary posit of my version of strong ID”. Saltationism, in the sense that I have clarified, is a necessary posit for ID detection: IOW, we have to observe the emergence of informational content which cannot be explained by randomness and/or necessity, to infer design, and that is possible only if the emergent information is well above a very strict threshold of complexity. That is certainly a saltation in information content. So, I do maintain that my statement satisfies perfectly Reciprocating_Bill’s original challenge.

    2) You say:

    The failure of predictions arising from an alternative theory won’t do, because that result provides no information with bearing upon the question of whether ID is also wrong.

    No. You are wrong. The context here is of the kind of rejection of a null hypothesis in Fisherian hypothesis testing. We reject the hypothesis that random variation can explain the information we observe, even with the help of NS.

    Once we reject that, the field is open to any alternative theory which can explain what we observe. Design is on. Is it better than the others? Well, to answer that I would have to know the others. At present, my opinion is that design is the “only” alternative theory. In any case, it is the best explanation, which is all we need. As I said before, if you have a better alternative theory, please provide it, and we will compare that theory with design.

    But indeed, what you seem not to understand is that once we falsify the darwinian theory, we do need an alternative theory. Why? Because biological information is there, and needs to be explained. Design “is” an alternative theory. It is indeed a very good alternative theory. So good that for centuries living beings have been considered as designed. So good that Dawkins himself admits that biological realities appear as designed.

    So what you are saying is that we need not consider an explanation which is the best explanation, the only explanation, and a natural and very old explanation, only because you don’t like the concept of a designer? I could say just the same that I don’t like the concept of RV and NS… But no, I have shown that RV and NS don’t work, I have not just argued that “other theories could work”, or that “both theories could be wrong”.

    So, please wake up. We are in the real world here. We are in the field of empirical science. Questions require possible answers. Credible answers.

    ID is a good answer, believe it or not. Darwinian theory isn’t. But if you prefer to stick to the “no theory” position out of purely dogmatic reasons, well you are entitled to that.

    3) Finally, you continue to insist on marking research as “conducted from a framework”. I very strongly object to that. As I have said, research (collection of true facts) is not marked by the ideological framework of those who conduct it. That would be very biased research.

    You insist on confounding research (the gathering of facts) with intellectual elaboration of known facts (a theoretical activity). They are two different things. The fact that most research today is conducted by people who accept the framework of darwinian theory has many obvious explanations, which I will not even try to debate here. But that fact does not change the substance of the results. The results are of all and for all.

  93. 93
    Joseph says:

    Diffaxial,

    You are right, as not one of your alleged “predictions” is borne of random variation or natural selection. Not one.

    And if we didn’t observe any of that the ToE would be going strong.

    And as I said both IC and CSI are signs of design and to refute that all one has to do is demonstrate that IC and CSI can arise without agency involvement- ie nature, operating freely.

    Oops – another “test” that revolves around the lack of success of predictions arising from an alternative theory, which asserts that complex structures meeting the definition of ID arise by means of scaffolding, exaptation, etc.

    Actually it is a test that has withstood the tests of time.

    That is EVERY time we have observed CSI and/ or IC and KNEW the cause it has ALWAYS been via agency involvement.

    ALWAYS.

    But anyways there isn’t any chronological order of fossils.

    The VAST majority of fossils are of marine inverts. And in that vast majority, ie >95%, there isn’t any indication of universal common descent.

  94. 94
    Joseph says:

    derwood:

    Arguments via personal incredulity are not very effective.

    I noticed you didn’t answer the question.

    It would appear that there is sufficient evidence to indicate the “the” bacterial flagellum arose via the cooption of parts of other systems, but I do not consider myself well versed on the subject.

    Do you even realize what that would involve?

    Do you also realize that two specified mutations appear to be beyond the reach of your processes?

    How do you propose we test the hypothesis that “the” bacterial flagellum was designed by a non-natural intelligence?
    Via analogy?
    Analogies are not arguments or evidence.

    1- YOU don’t think analogies are not good arguments because YOUR position doesn’t have any.

    2- I would test the premise by figuring out if it was reducible to matter, energy, chance and necessity.

    For example give a bacteria population the genes- genes only- required for a flagellum and see if the rest can come about- the rest being binding sites, regulators, chaperones- all the required meta-information.

  95. 95
    gpuccio says:

    Adel (70 and 71):

    Vague = lacking in detail. Just saying that you expect complexity is not dispositive because it doesn’t uniquely define what you expect.

    I thought you had already clarified for yourself the meaning of complexity in the ID context. We have even been debating the Durston paper… I cannot define the same things each time.

    This is the logical fallacy of false disjunction:

    Either A or B
    Not A
    Therefore B

    See Diffaxial [19]:

    Not again, please… I think we had already clarified that this is not a logical point. And so there is no logical fallacy. I had clarified that even to Diffaxial:

    “Yes, it does amount to support for ID. Again, you forget that we are talking empirical science here, and not logical demonstrations. I am really surprised at how often I have to repeat this simple epistemological concept, which should be obvious to anybody who deals with empirical science.”

    It is not:

    Either A or B
    Not A
    Therefore B

    (logical disjunction)

    but rather:

    We have to explain X.

    At present, we have only two theories, A and B.

    A does not work.

    B works.

    At present, B is the best explanation (always waiting for any possible C)

    This is empirical reasoning.

    Why must design products be complex?

    In some quarters, the hallmark of good design is simplicity.

    Who says that design products must be complex? ID says that only complex design can be inferred. I think that is a completely different statement.

    A simple design can be appreciated as design if we have direct observation of the designer or the process of design. But when we don’t have those things, we have to infer design, and the design inference can be made only for complex design. Is that clear?

  96. 96
    Joseph says:

    Alan, RE Zachriel

    That very thing has been demonstrated on my blog as well as other venues.

    He is well-known and that is why he now relies on various sock-puppets.

    He probably even uses a program to slightly alter his chosen words so that he can escape detection.

  97. 97
    Alan Fox says:

    He is well-known and that is why he now relies on various sock-puppets.

    That’s scurrilous, even by your standards, Joe. Zachriel has never posted except under that handle. He has attempted to post here using his registration details but his account is non-functional. If his ability to represent ideas is so poor, you have nothing to fear.

  98. 98
    gpuccio says:

    mauka (#65):

    We’re not micropoofers, because nothing supernatural has to happen for us to compose a comment. No poofs required. Your Designer, on the other hand, has to poof his edits into the biosphere — unless you are going to claim that your designer isn’t God, but rather some physical being, a genetic leprechaun who scampers around and alters genomes when we aren’t looking.

    I don’t agree. That is just your view of reality, a philosophical take that I can respect, but which I would never share.

    You say that “nothing supernatural has to happen for us to compose a comment”. I don’t like the word “supernatural”, and try never to use it (I have debated that many times in previous threads). So, let’s say that for me the observed phenomenon of how our consciousness daily interacts with our body and with the material world is no more “natural” and no less “mysterious” than the inferred agency of a designer, maybe a god, on the biological world.

    For me, in both cases a consciousness interferes and interacts with a material reality. Poofs in both cases, or no poof at all.

    You probably take as a given that human consciousness is the product of the activity of the human brain. IOW, you probably believe in strong AI. I don’t. We are entitled to our opinions, but I would like to remark that they are opinions, or at best philosophical convictions. So, from my point of view, poofs are everywhere.

    P.S. Now that ‘poof’, ‘micropoof’ and ‘macropoof’ are becoming part of the vocabulary here, do you think Barry regrets bringing up poofs in the first place?

    I don’t think so. It’s such a pretty term! My compliments, Barry.

  99. 99
    gpuccio says:

    Alan Fox:

    More than one time I have asked that Zachriel be allowed to post here. I do it again now. I miss him.

  100. 100
    gpuccio says:

    mauka:

    Then you presumably agree with the following statement:

    The theory of ID proper does not predict that junk DNA will be functional.

    Please, let’s not play word games. We have better things to do.

    My position is simple. ID proper centers on design detection. Design detection, in biological structures, is largely based on functional specification (FSCI). Design detection in itself does not say much on the designer or his purposes. Other simple approaches (for instance, an analysis of the design) can probably do much more in that field.

    However, design detection “is” based on functional specification. That’s why it is perfectly natural to expect function in design, even if in some cases we could not recognize the function because we could not understand the designer’s purposes.

    So, I maintain that even in absence of clues about the general purposes of the designer, in a set of information like the human genome, where 1.5% has been shown to be perfectly functional, if we infer design for the whole set it is perfectly natural to expect that function will be very likely discovered also for much or all of the rest.

  101. 101
    JayM says:

    Diffaxial @17 summed up the problems with this FAQ answer, and his points have yet to be answered.

    This FAQ should be much stronger.
    . . .
    Eventually his question boiled down to this:

    Would you please complete the following?

    If design is true then we should observe ________. If we don’t observe _______ our theory is at risk of disconfirmation.

    There were a lot of responses but no good reply, IMHO. Ironically, on a thread in which Barry had announced and clarified UD’s new moderation policy, Reciprocating_Bill was banned for reasons that explicitly and directly contradicted that newly articulated policy – leaving the inescapable impression that he was really banned for asking an important question for which no one here has a good answer. That is how it looked to me.

    That’s how it appeared to me, as well.

    But the question itself was well-taken, and must be addressed squarely if you want to establish ID’s credibility as a science. If you want your FAQ to be truly convincing, it should specify a number of clear and unequivocal responses to this challenge.

    I concur. In fact, this FAQ answer would be far stronger if it took the form of the statement reposted by Diffaxial.

    Frankly, I don’t think we can yet write such a statement. At its current level of maturity, ID is an exciting hypothesis, not a full scientific theory. Personally, I believe that the work of Dr. Behe and others in finding the limits to evolutionary mechanisms is likely to be the most direct route to a scientific theory of intelligent design, but we’re not there yet.

    And that’s okay! Let’s be open and honest about both our successes and our current limitations. That way, when we do come to the table with a positive, predictive, testable, falsifiable scientific theory, we’ll have the credibility we need to have it fairly considered.

    JJ

  102. 102
    Joseph says:

    Alan,

    If reality is scurrilous then so be it.

    I just call ’em as I see ’em.

    And there isn’t anything to fear, just more wasted energy.

  103. 103
    derwood says:

    derwood:

    Arguments via personal incredulity are not very effective.

    I noticed you didn’t answer the question.

    There was little that required answering. It was essentially a strawman question. You mentioned an ‘accumulation of accidents.’ That is akin to the old ‘when are you going to stop beating your wife’ sort of scenario. Whose cytochrome c gene contains more ‘accidents’ – humans’ or tunas’?


    It would appear that there is sufficient evidence to indicate that “the” bacterial flagellum arose via the cooption of parts of other systems, but I do not consider myself well versed on the subject.

    Do you even realize what that would involve?

    Yes, but perhaps with your background as a research scientist and nearly a marine biologist/zoologist, you might take the time to explain it to me to make sure we are on the same page?

    Do you also realize that two specified mutations appears to be beyond the reach of your processes?

    Yes, I am aware of Seelke’s “experiments,” but I am unaware of any requirement in evolution for two pre-specified mutations to occur such that a desired outcome is produced.
    Can you point to a non-ID source in which this is indicated to be the case?


    How do you propose we test the hypothesis that “the” bacterial flagellum was designed by a non-natural intelligence?
    Via analogy?
    Analogies are not arguments or evidence.

    1- YOU don’t think analogies are good arguments because YOUR position doesn’t have any.

    No, analogies are not good arguments because analogies are not arguments. They are tools employed to make concepts and issues easier for the uninitiated to understand.

    My side does not seem to require the use of analogies AS ARGUMENTS. My side appears to use them in the correct fashion. My side appears able to produce evidence.

    2- I would test the premise by figuring out if it was reducible to matter, energy, chance and necessity.

    And how would you do that?
    What assumptions go into such a ‘test,’ and what are their validity?
    One could assume that mutations occur and can be passed on. This assumption is valid because it can be demonstrated.
    In terms of ID, the ONLY demonstration is analogy to human activity, and unless your position is that humans designed the bacterial flagellum, I really cannot see what the assumptions are and how they are valid.

    For example, give a bacterial population the genes- genes only- required for a flagellum and see if the rest can come about- the rest being binding sites, regulators, chaperones- all the required meta-information.

    So, if we provide a human with steel, concrete, and rivets and we come back 2 weeks later and do not see the bridge we had wanted them to build, we could conclude that humans cannot build bridges?

  104. 104
  105. 105
    derwood says:

    Trying to answer that will demonstrate just how vague and useless the “theory” is.

    So, what is the ID hypothesis for the appearance of the bacterial flagellum?

    What is the ID explanation as to why there is more than one kind of flagellum in bacteria?

    What is the ID explanation as to why not all bacteria have flagella?

  106. 106
    Diffaxial says:

    gpuccio @ at 92:

    I don’t understand what you mean. First of all, I don’t understand what you mean by “weak ID”. The ID we practice here is certainly strong.

    I don’t intend “weak” as a pejorative relative to strong. I model the use of those terms after the “weak” versus “strong” anthropic principles, and similarly “weak” versus “strong” claims of emergence, etc. It refers to a distinction you appeared to draw in one of your earlier posts. Whether you intended it or not, I distinguish the claims as follows: “Weak ID” is the fairly limited claim that design detection is possible. Weak ID refrains from making further assertions about design and certainly the designer. Therefore no further testable assertions arise from that sort of ID. “Strong ID” makes claims about the design and the designer. You appeared to make a claim consistent with “strong” ID when you stated,

    If design is true…we will repeatedly observe “saltations” corresponding to a sudden increase in information in the emergent protein, without any possible selectable intermediate form…

    But then appeared to retreat to weak ID with,

    If you knew ID, you would know that the theory infers design only if the observed information is beyond the range of what mutation and selection can accomplish.

    That strikes me as a “weak” claim. But perhaps I misread what you intended by that.

    That clarified, the fact that “ID can only detect design when it rises above a certain threshold” is not a limit of my proposed method: it is the essence itself of the design inference in ID. To be more clear, ID has never stated that “all” design is detectable. ID states that design is detectable only if it rises above some threshold of complexity. That is ID theory. Anything else is not ID.

    Then it is a limit inherent in the essence of the design inference in ID. Worse, by designating that threshold in light of the putative failures of a competing theory (as you did above), that threshold is shown not to be a necessary entailment of a theory of design, but to arise from other considerations in an ad hoc manner. And, as has been commented upon repeatedly over the years, the probabilities of specific biological structures emerging absent design are not in fact calculable for the purposes of the design inference, and indeed NO calculations have EVER been offered by advocates of ID establishing the probability of specific biological structures arising by natural means, thereby demonstrating an instance of design. To be honest, it rather baffles me that ID advocates continue to advance this argument in light of that embarrassing fact.

    No. You are wrong. The context here is of the kind of rejection of a null hypothesis in Fisherian hypothesis testing. We reject the hypothesis that random variation can explain the information we observe, even with the help of NS.

    I disagree. Because both theories can be wrong, rejection of the alternative doesn’t comprise a test of ID. It would certainly improve its prospects – but it would still fall to ID to make positive predictions that arise from the entailments of a “strong” ID, such that failure to observe those predicted entailments places ID at risk of disconfirmation. Until then, ID remains a conjecture, even in the face of the ardently wished for complete collapse of current evolutionary theory.
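    The statistical point at issue here can be sketched numerically: in a Fisherian test, rejecting the null hypothesis is not the same as confirming one particular alternative. The coin-flip numbers below are made up purely for illustration and have nothing to do with any biological quantity:

```python
# Minimal sketch (hypothetical numbers): rejecting a null hypothesis
# does not, by itself, single out one particular alternative.
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 90  # observe 90 heads in 100 flips

# Fisherian test of H0: p = 0.5 (one-sided p-value: P(X >= 90))
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))
print(f"p-value under H0 (fair coin): {p_value:.3e}")  # tiny, so reject H0

# But many different alternatives are consistent with the same data;
# the rejection alone does not tell us which one is true.
for p_alt in (0.80, 0.85, 0.90, 0.95):
    print(f"likelihood of the data if p = {p_alt}: {binom_pmf(k, n, p_alt):.4f}")
```

    Choosing among the surviving alternatives requires positive predictions from each, which is exactly the "entailments" point being argued above.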

    In any case, it is the best explanation, which is all we need.

    Due to ID’s inherent inability to guide empirical research, it fails to demonstrate scientific superiority over no theory at all, much less over the dominant paradigm, which provides a fertile framework for empirical research.

    Design “is” an alternative theory. It is indeed a very good alternative theory.

    To support this theory somebody, SOMEBODY is going to have to demonstrate it in scientific action. Being the default isn’t enough.

    So what you are saying is that we need not consider an explanation which is the best explanation, the only explanation, and a natural and very old explanation, only because you don’t like the concept of a designer?

    In my opinion it is an explanation that doesn’t explain, as it offers no hooks for the incremental acquisition of empirical knowledge. Absent that, it is really just a conjecture.

    Finally, you continue to insist on marking research as “conducted from a framework”. I very strongly object to that. As I have said, research (collection of true facts) is not marked by the ideological framework of those who conduct it. That would be very biased research.
    You insist on confounding research (the gathering of facts) with intellectual elaboration of known facts (a theoretical activity). They are two different things.

    With all due respect, you have little grasp of the basics of scientific methodology. Facts aren’t lying around to be passively collected and later assessed for significance, as much as ID advocates would like that to be the case (so they can claim that “reinterpreting” others empirical data is “doing science”). Theory and observation are in constant dialog, with theory informing us where to turn our observational spades (and specifying what should be found there), and observation serving as tests of theory.

    Until you get that, you’re not going to get Reciprocating_Bill’s question.

  107. 107
    Joseph says:

    derwood,

    the theory of evolution is based on the accumulation of genetic accidents.

    Ya see according to the theory ALL mutations are mistakes.

    And also according to the theory the accumulation of those mistakes is what formed the diversity of life.

    Your position doesn’t have any analogies and it doesn’t have any evidence that the proposed mechanisms can do what you claim they did.

    I would test the premise by figuring out if it was reducible to matter, energy, chance and necessity.

    I said that because it is YOUR position which should be doing such a thing.

    Ya see if something is so reducible then the design inference is unwarranted.

  108. 108
    Joseph says:

    Diffaxial

    YOU do not understand scientific methodology and that is evinced by your nonsensical “predictions”.

    As if the ToE would be disconfirmed if humans had working copies of the genes for vC.

    And BTW if there are two options and one is proven false, what is left?

  109. 109
    Joseph says:

    derwood,

    If it were so widely “known” that “junk” DNA really wasn’t, then why have evolutionists continued to push that premise?

    I would say that they thought “OK some of this is used, but the rest is real junk.”

    The evolutionary scenario does NOT have ANY explanation for the regulatory sequences.

  110. 110
    gpuccio says:

    Diffaxial et al:

    Guys, it’s very late for me, and I will be away for a couple of days. I hope I can catch up as soon as I am back.

    For the moment, just a few very brief comments to Diffaxial’s detailed #106:

    1) I understood what you meant by “weak” and “strong”, but I have no room for “weak” ID. When I say:

    “If you knew ID, you would know that the theory infers design only if the observed information is beyond the range of what mutation and selection can accomplish.”

    I am just referring to ID methodology, which uses complexity as a tool to avoid false positives due to random effects. That has nothing to do with any “weak” conception of ID.

    2) You say:

    “Then it is a limit inherent in the essence of the design inference in ID”

    That’s correct. But then you say:

    “Worse, by designating that threshold in light of the putative failures of a competing theory (as you did above), that threshold is shown not to be a necessary entailment of a theory of design, but to arise from other considerations in an ad hoc manner. ”

    That’s completely wrong. The threshold is designated to avoid false positives due to randomness, not in light of a competing theory. Indeed, the concepts of design detection were not created by ID, and apply, as you know, to many other fields. Avoiding false positives due to randomness or to necessity mechanisms is an essential requirement of scientific design detection, and is not done “in light of a competing theory”. It is true, however, that as the main (and only) competing theory largely uses randomness as an engine of variation, it is directly falsified by the ID arguments. But that does not mean that the arguments themselves were designed for it. The arguments obey the intrinsic methodology of science.

    3) You say:

    “And, as has been commented upon repeatedly over the years, the probabilities of specific biological structures emerging absent design are not in fact calculable for the purposes of the design inference, and indeed NO calculations have EVER been offered by advocates of ID establishing the probability of specific biological structures arising by natural means, thereby demonstrating an instance of design.”

    That’s simply not true. We have done that many times. I was recently commenting exactly on that in the thread “Extra Characters to the Biological Code”. Therefore, I see no “embarrassing fact”.

    4) You say:

    “I disagree. Because both theories can be wrong, rejection of the alternative doesn’t comprise a test of ID. It would certainly improve its prospects – but it would still fall to ID to make positive predictions that arise from the entailments of a “strong” ID, such that failure to observe those predicted entailments places ID at risk of disconfirmation. Until then, ID remains a conjecture, even in the face of the ardently wished for complete collapse of current evolutionary theory.”

    I don’t understand. One moment you seem to admit that my strong ID makes predictions, the next moment you deny it. I stick to the prediction I made about informational saltations.

    And, in classical hypothesis testing, rejection of the null hypothesis does not comprise a test of the alternative hypothesis, but the alternative hypothesis can be affirmed if it is the “best explanation”.

    And, as for me, the “collapse of current evolutionary theory” is not “ardently wished for”: it is a fact.

    5) You say:

    “Due to ID’s inherent inability to guide empirical research, it fails to demonstrate scientific superiority over no theory at all, much less over the dominant paradigm, which provides a fertile framework for empirical research.”

    That would be too long. I simply believe that ID could very well guide empirical research, if it were accepted, and that its superiority over the dominant paradigm is absolutely obvious.

    6) You say:

    “Being the default isn’t enough.”

    I am rather perplexed by that. Has science become so strange that being the default (and only) explanation to understand what we observe does not count?

    7) You say:

    “In my opinion it is an explanation that doesn’t explain, as it offers no hooks for the incremental acquisition of empirical knowledge. Absent that, it is really just a conjecture.”

    Again, that’s only your opinion. I respect it, and disagree.

    8) You say:

    “With all due respect, you have little grasp of the basics of scientific methodology. Facts aren’t lying around to be passively collected and later assessed for significance, as much as ID advocates would like that to be the case (so they can claim that “reinterpreting” others empirical data is “doing science”). Theory and observation are in constant dialog, with theory informing us where to turn our observational spades (and specifying what should be found there), and observation serving as tests of theory.”

    With all due respect, it’s you that IMO have a rather trivial view of science. It’s true that “theory and observation are in constant dialog”, and ID is trying to take part in that dialog, and many are trying to make that impossible.

    But if you really believe that observations merely serve as tests of existing theories, I am afraid you have a really strange conception of science and knowledge. So, I state again with all my strength and conviction what I have already stated, as you phrased it:

    a) Facts are absolutely lying around to be collected and later assessed for significance. That’s how science has always worked. Observation of facts, “any facts”, is the first step of the scientific method.

    b) Reinterpreting others’ empirical data absolutely is “doing science”. It is not the only way to do science (collecting data and interpreting one’s own data is certainly science too). But reinterpreting others’ data is absolutely science, and a very essential part of it.

    9) For all these reasons, I maintain that I have answered Reciprocating_Bill’s question. But it’s fine that you don’t agree.

    I’ll be back as soon as possible. I would just like to add that I have really appreciated your detailed contributions to this discussion.

  111. 111
    Hoki says:

    gpuccio (#100):

    However, design detection “is” based on functional specification. That’s why it is perfectly natural to expect function in design, even if in some cases we could not recognize the function because we could not understand the designer’s purposes.

    So, I maintain that even in the absence of clues about the general purposes of the designer, in a set of information like the human genome, where 1.5% has been shown to be perfectly functional, if we infer design for the whole set it is perfectly natural to expect that function will very likely be discovered for much or all of the rest as well.

    I might be misunderstanding you here, but are you saying that since some of the DNA is functional, we somehow assume that the rest of it is, and therefore ID predicts that most DNA will have function? Sounds a tad circular to me.

    I agree that ID is based on the detection of things that have functional specification. However, given that there is no known function for all DNA, it is not natural to expect to find function either (not to me – and I am intelligent and I sometimes design things). Unless, of course, you for some reason assume that the designer wanted/could ensure that all DNA had function.

    Perhaps you expect the designer to think like most humans do?

  112. 112
    Joseph says:

    Why does ID predict little or no “junk” DNA?

    Experience.

    That is we have experience with designers that do NOT design junk into their functioning designs- especially designs that require coding.

    How many successful computer programs contain lines of “junk” code?

  113. 113
    derwood says:

    Why does ID predict little or no “junk” DNA?

    Experience.

    Yes – the experience of reading papers published by ‘darwinists’ decades ago, then hoping nobody else will know about them as they make ‘predictions.’

    Who cares about computer programs? Genomes operate somewhat differently. Seems a zoology student might understand that.

  114. 114
    derwood says:

    Joseph:

    derwood,
    If it were so widely “known” that “junk” DNA really wasn’t, then why have evolutionists continued to push that premise?

    Perhaps the same reason that ID advocates and creationists do – to make a name for themselves when they discover a new function?

    Difference being, of course, it is not actually IDcreationists doing any of the discovery.

    Of course, claiming that ‘evolutionists’ push it is a bit overly broad, and it is an unfortunate case of wanting there to be a universal in all this, and there is not.

    It is true that some noncoding DNA has a function. It is also true that some does not, for how could you remove nearly 3 million bps from a mouse genome and have the mouse suffer no ill effects were it all functional and thus, by implication, necessary?
    If all junk DNA is functional, then its function does not seem to depend on the specific sequence nor the amount.

    How many computer programs can run with varying amounts of code and in fact large chunks of code missing?

    Not too many, I suspect. The problem then would be the computer analogy, not a problem with evolutionary genetics.

    I would say that they thought “OK some of this is used, but the rest is real junk.”

    So would I.

    The evolutionary scenario does NOT have ANY explanation for the regulatory sequences.

    Does the ID creationism scenario have an answer as to why they claim that ID creationists predicted function in junk DNA when it was evolutionists that discovered it decades before they made their ‘predictions’?

  115. 115
    derwood says:

    Joseph:

    Ya see according to the theory ALL mutations are mistakes.

    ‘Mistakes’ followed by selection, if adaptive or detrimental.
    So, whose cyt c gene has more mistakes, ours or a tuna’s?

    And also according to the theory the accumulation of those mistakes is what formed the diversity of life.

    Along with selection and some other stuff.

    Your position doesn’t have any analogies and it doesn’t have any evidence that the proposed mechanisms can do what you claim they did.

    Actually, we use the language and the computer analogies, too, when instructing freshman/sophomore students. But we tend to understand the limitations of analogies and use them in their intended way.
    We have lots of evidence that mutations occur and did occur (to include duplications, etc.) and there is some evidence that such mutations confer adaptive traits.
    Not the whole story, of course, but then it seems more substantial than contrived mathematical arguments.

    I would test the premise by figuring out if it was reducible to matter, energy, chance and necessity.

    I said that because it is YOUR position which should be doing such a thing.

    Why should we be testing YOUR side’s claims?

    In science, the one making the claim (the hypothesis, etc.) is typically the one that does such things.

    Odd how IDCs think that their ‘opponents’ should be doing their work for them.

    Ya see if something is so reducible then the design inference is unwarranted.

    And how would one go about such testing?

    Which is what I asked – if you cannot even explain how such tests are to be done, isn’t it a bit odd to expect us to do them?

  116. 116
    derwood says:

    So, what is the ID hypothesis for the appearance of the bacterial flagellum?

    What is the ID explanation as to why there is more than one kind of flagellum in bacteria?

    What is the ID explanation as to why not all bacteria have flagella?

  117. 117
    Joseph says:

    Experience.

    That is we have experience with designers that do NOT design junk into their functioning designs- especially designs that require coding.

    How many successful computer programs contain lines of “junk” code?

    Yes – the experience of reading papers published by ‘darwinists’ decades ago, then hoping nobody else will know about them as they make ‘predictions.’

    No, the experience I was talking about.

    Ya see Darwinists STILL think that most of the genomes are “junk”.

    Who cares about computer programs, genomes operate somewhat differently.

    They are both codes- one for a computer and one genetic.

  118. 118
    JayM says:

    gpuccio @110

    And, as has been commented upon repeatedly over the years, the probabilities of specific biological structures emerging absent design are not in fact calculable for the purposes of the design inference, and indeed NO calculations have EVER been offered by advocates of ID establishing the probability of specific biological structures arising by natural means, thereby demonstrating an instance of design.

    That’s simply not true. We have done that many times.

    I’m afraid you’re overstating the case here. To the best of my knowledge, and I have searched, there are no calculations of CSI or other probabilities for organic constructs that take into account known evolutionary mechanisms. The only calculations I’ve found are equivalent to computing 2 to the power of however many bits are assumed to be required to define the artifact in question. Such calculations ignore so much of modern biology as to be useless.
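    A toy calculation can make this point concrete. The numbers below are entirely hypothetical (a 100-bit “target”), and neither branch is a real biological model; the point is only that a bare “2 to the power of the bits” figure and a model that allows selectable intermediate steps give wildly different answers:

```python
# Toy illustration (hypothetical numbers): the naive "2^bits" calculation
# described above, versus the same target reached through selectable
# intermediate steps. Neither is a model of any real organism.
target_bits = 100

# All-at-once chance: hitting one specific 100-bit string in a single draw.
p_all_at_once = 2.0 ** -target_bits
print(f"single-draw probability: {p_all_at_once:.3e}")  # ~7.9e-31

# Stepwise with selection: fix one bit at a time; each bit is a fair coin,
# so the expected number of trials is about 2 per bit.
expected_trials_stepwise = 2 * target_bits
print(f"expected trials with cumulative selection: {expected_trials_stepwise}")
```

    Which of the two regimes applies to a given structure is precisely what a rigorous CSI calculation would have to establish, and that is the gap being pointed out.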

    Are you aware of any papers that rigorously define CSI and use it to calculate a reproducible value for something like a real world bacterial flagellum?

    JJ

  119. 119
    Joseph says:

    derwood,

    You are clue-less.

    Mutations are allowed to accumulate via selection- many types of selection.

    Next you would be testing YOUR side’s claims by figuring out how far something can be reduced.

    It is YOUR position that says living organisms and their parts can be reduced to matter, energy, chance and necessity.

    That you can’t even understand that simple fact demonstrates you are well beyond reasoning with.

  120. 120
    Joseph says:

    BTW IDcreationism only exists in the minds of the willfully ignorant.

    Was Darwin an “evolutionary creationist”?

    He wrote the following:

    There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.- Charles Darwin in “The Origin of Species by means of Natural Selection” last chapter, last sentence (bold added)

  121. 121
    Joseph says:

    So, what is the ID hypothesis for the appearance of the bacterial flagellum?

    That, if it was designed, it would not be reducible to matter, energy, chance and necessity.

    What does the ToE say?

    What is the ID explanation as to why there is more than one kind of flagellum in bacteria?

    Why does ID have to explain that?

    The ToE doesn’t.

    What is the ID explanation as to why not all bacteria have flagella?

    Not all bacteria require one.

    Your turn:

    How can we test the premise that the bacterial flagellum arose from a population that never had one via an accumulation of genetic accidents?

  122. 122
    kairosfocus says:

    Putting a few facts into play . . .

    1] Mr Dawkins, in the Devil’s Chaplain, 2004:

    Genomes are littered with nonfunctional pseudogenes, faulty duplicates of functional genes that do nothing, while their functional cousins (the word doesn’t even need scarequotes) get on with their business in a different part of the same genome. And there’s lots more DNA that doesn’t even deserve the name pseudogene. It too is derived by duplication, but not duplication of functional genes. It consists of multiple copies of junk, ‘tandem repeats,’ and other nonsense which may be useful for forensic detectives but which doesn’t seem to be used in the body itself. . . . Creationists might spend some earnest time speculating on why the Creator should bother to litter genomes with untranslated pseudogenes and junk tandem repeat DNA. [p. 98]

    2] Donald Voet and Judith G. Voet, Biochemistry, pg. 1020 (John Wiley & Sons, 2006)

    No function has been unequivocally assigned to moderately repetitive DNA, which has therefore been termed selfish or junk DNA. This DNA apparently is a molecular parasite that, over many generations, has disseminated itself throughout the genome through transposition. The theory of natural selection predicts that the increased metabolic burden imposed by the replication of an otherwise harmless selfish DNA would eventually lead to its elimination. Yet for slowly growing eukaryotes, the relative disadvantage of replicating an additional 100 bp of selfish DNA in a 1-billion-bp genome would be so slight that its rate of elimination would be balanced by its rate of propagation. Because unexpressed sequences are subject to little selective pressure, they accumulate mutations at a greater rate than do expressed sequences.

    3] Mr Dembski, in First Things on “Science and Design,” Oct 1, 1998:

    . . . Even if we have a reliable criterion for detecting design, and even if that criterion tells us that biological systems are designed, it seems that determining a biological system to be designed is akin to shrugging our shoulders and saying God did it. The fear is that admitting design as an explanation will stifle scientific inquiry, that scientists will stop investigating difficult problems because they have a sufficient explanation already.

    But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how “non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.” Design encourages scientists to look for function where evolution discourages it.

    –> Contrast: 2004 and 2006 vs 1998, and . . .

    4] BBC, May 12, 2004: “‘Junk’ throws up precious secret”:

    Researchers inspecting the genetic code of rats, mice and humans were surprised to find they shared many identical chunks of apparently “junk” DNA.

    This implies the code is so vital that even 75 million years of evolution in these mammals could not tinker with it . . . .

    Before scientists began laboriously mapping several animal life-codes, they had a rather narrow opinion about which parts of the genome were important.

    According to the traditional viewpoint, the really crucial things were genes, which code for proteins – the “building blocks of life”. A few other sections that regulate gene function were also considered useful.

    The rest was thought to be excess baggage – or “junk” DNA . . . .

    David Haussler of the University of California, Santa Cruz, US, and his team compared the genome sequences of man, mouse and rat. They found – to their astonishment – that several great stretches of DNA were identical across the three species.

    To guard against this happening by coincidence, they looked for sequences that were at least 200 base-pairs (the molecules that make up DNA) in length. Statistically, a sequence of this length would almost never appear in all three by chance.

    Not only did one sequence of this length appear in all three – 480 did . . . .

    The regions largely matched up with chicken, dog and fish sequences, too; but are absent from sea squirt and fruit flies.

    “It absolutely knocked me off my chair,” said Professor Haussler. “It’s extraordinarily exciting to think that there are these ultra-conserved elements that weren’t noticed by the scientific community before.”

    The really interesting thing is that many of these “ultra-conserved” regions do not appear to code for protein. If it was not for the fact that they popped up in so many different species, they might have been dismissed as useless “padding”.

    But whatever their function is, it is clearly of great importance.

    We know this because ever since rodents, humans, chickens and fish shared an ancestor – about 400 million years ago – these sequences have resisted change. This strongly suggests that any alteration would have damaged the animals’ ability to survive.

    “These initial findings tell us quite a lot of the genome was doing something important other than coding for proteins,” Professor Haussler said.

    He thinks the most likely scenario is that they control the activity of indispensable genes and embryo development.

    Nearly a quarter of the sequences overlap with genes and may help slice RNA – the chemical cousin of DNA involved in protein production – into different forms, Professor Haussler believes.

    The conserved elements that do not actually overlap with genes tend to cluster next to genes that play a role in embryonic development.

    “The fact that the conserved elements are hanging around the most important development genes, suggests they have some role in regulating the process of development and differentiation,” said Professor Haussler . . . .

    Despite all the questions that this research has raised, one thing is clear: scientists need to review their ideas about junk DNA.

    Professor Chris Ponting, from the UK Medical Research Council’s Functional Genetics Unit, told BBC News Online: “Amazingly, there were calls from some sections to only map the bits of genome that coded for protein – mapping the rest was thought to be a waste of time.

    “It is very lucky that entire genomes were mapped, as this work is showing.”

    He added: “I think other bits of ‘junk’ DNA will turn out not to be junk. I think this is the tip of the iceberg, and that there will be many more similar findings.”
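
    As an aside on the statistics invoked in the quoted piece (“a sequence of this length would almost never appear in all three by chance”), the order of magnitude is easy to sanity-check. The following is a rough sketch of my own, assuming independent, uniformly distributed bases (real genomes are neither, but the conclusion is robust to that simplification):

    ```python
    import math

    # Chance that one fixed 200-base-pair stretch exactly matches another,
    # assuming each base is independent and uniform over {A, C, G, T}.
    p_match = 0.25 ** 200
    print(f"P(exact 200-bp match) ~ 10^{math.log10(p_match):.0f}")  # about 10^-120

    # Even allowing a match to start at any of ~3 billion genome positions,
    # the expected number of chance matches is still vanishingly small.
    expected = p_match * 3e9
    print(f"Expected chance matches across a genome ~ 10^{math.log10(expected):.0f}")
    ```

    With odds that remote, finding even one ultra-conserved 200-bp element shared across species by chance is effectively impossible, which is why finding 480 of them was treated as so significant.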

    5] Colin Nickerson, “DNA unraveled: A ‘scientific revolution’ is taking place, as researchers explore the genomic jungle,” Boston Globe, September 24, 2007:

    The science of life is undergoing changes so jolting that even its top researchers are feeling something akin to shell-shock. Just four years after scientists finished mapping the human genome – the full sequence of 3 billion DNA “letters” folded within every cell – they find themselves confronted by a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined.

    “Science is just starting to probe the wilderness between genes,” said John M. Greally, molecular biologist at New York’s Albert Einstein School of Medicine. “Already we’re surprised and confounded by a lot of what we’re seeing.”

    A slew of recent but unrelated studies of everything from human disease to the workings of yeast suggest that mysterious swaths of molecules – long dismissed as “junk DNA” – may be more important to health and evolution than genes themselves.

    Meanwhile, a tricky substance called RNA – for decades viewed as the lowly “messenger boy” for genes and proteins – turns out to be a big league player in cell function. It may even represent the cell’s command and control system, according to its more vigorous proponents.

    In any event, lots of basic biological beliefs are going out the window these days as new discoveries come so rapid-fire that the effect is almost more disorienting than illuminating.

    The discoveries have one common theme: Cellular processes long assumed to be “genetic” appear quite often to be the result of highly complex interactions occurring in regions of DNA void of genes. This is roughly akin to Wall Street waking to the realization that money doesn’t make the world go ’round, after all.

    “It’s a radical concept, one that a lot of scientists aren’t very happy with,” said Francis S. Collins, director of the National Human Genome Research Institute. “But the scientific community is going to have to rethink what genes are, what they do and don’t do, and how the genome’s functional elements have evolved.

    “I think we’re all pretty awed by what we’re seeing,” Collins said. “It amounts to a scientific revolution.”

    For half a century, the core concept in biology has been that every cell carries within its nucleus a full set of DNA, including genes. Each gene, in turn, holds coded instructions for assembling a particular protein, the stuff that keeps organisms chugging along.

    As a result, genes were assigned an almost divine role in biological “dogma,” thought to govern not only such physical characteristics as eye color or hair texture, but even much more complicated characteristics, such as behavior or psychology. Genes were assigned blame for illness. Genes were credited for robust health. Genes were said to be the source of the mutations that underlay evolution.

    But the picture now emerging is more complicated, one in which illness, health, and evolutionary change appear to be the work of almost fantastical coordination between genes and swaths of DNA previously written off as junk.

    The idea that genes possess a singular supremacy took a knock when the human genome was fully sequenced in 2003, revealing that only about 1.5 percent of our DNA consists of actual genes coding for protein.

    Another 3.5 percent of DNA is of gene-linked regulatory material whose function isn’t well grasped, but which is recognized as vital because it has been precisely duplicated in living things for hundreds of millions of years. “That’s smoking gun evidence that nature cares about this stuff,” said Eric S. Lander, director of the Broad Institute, a research center affiliated with MIT and Harvard that focuses on applying genomics to medicine.

    As for the remaining 95 percent of the genome? “There’s this weird lunar landscape of stuff we don’t understand,” Lander said. “No one has a handle on what matters and what doesn’t.”

    Until recently, the rest of the genome – the murky regions between individual genes – was viewed as occupied by more or less useless glop. Noncoding DNA is the polite term for junk DNA.

    But the glop is starting to look like gold. And genes, in a sense, are losing some of their glitter.

    “To our shock and consternation, we’re learning how little we know about the parts of the genome that may matter most,” said Dr. David M. Altshuler, associate professor of genetics and medicine at Harvard Medical School and also a top researcher at the Broad Institute.

    “Maybe some of it really is junk. Maybe most of it is junk,” he said. “But one shouldn’t bet against nature. Maybe it all serves some sort of a purpose. We really don’t know.”

    This is how science goes forward, of course. Not in a smooth march to the future, but with stumbles, back-steps, and wrong turns . . . .

    In June, a consortium of 80 research institutions in North America, Asia, Europe, and Australia completed the first comprehensive effort to plumb all the inner workings of the DNA molecule, not just the genetic portions.

    The Encode study shattered the view that genes carry out their labor in relative isolation.

    Instead, genes appear to overlap each other and share stretches of molecular code. Moreover, genes and nongenetic DNA appear to work in close, if mysterious, conjunction and also seem to communicate across relatively vast genomic distances in ways not understood.

    Scientists have long understood that small numbers of RNAs act as “dimmer knobs” that adjust the intensity of genes, thus regulating biological processes.

    But few had predicted the complex orchestration of genes and nongenetic DNA suggested by the Encode research. Even more stunning was the Encode finding that most “junk DNA” is transcribed, or copied, into more RNA molecules than can be accounted for by most prevailing theories.

    ++++++++++++++

    Okay, looks like the point of fact highlighted in Weak Argument Corrective no 4 is fairly well demonstrated then.

    Perhaps the most telling science-stopper point is the report that there were people who did not want to bother with going after the “useless” non-coding regions.

    No prizes for guessing why.

    Almost as hard-hitting is the point that ever so few scientists predicted that non-coding regions of the chromosomes would have function. I wonder why, and I wonder just who those few exceptions were, and why.

    But, bottomline: once the fact cited in rebuttal is “there,” the weak argument is corrected.

    GEM of TKI

  123. 123
    Hoki says:

    Joseph (#117)

    Why does ID predict little or no “junk” DNA?

    Experience.

    That is we have experience with designers that do NOT design junk into their functioning designs- especially designs that require coding.

    How many successful computer programs contain lines “junk” code?

    Experience from humans, by any chance? You are making the assumption that the designer would think and act like a human would. ID doesn’t make this assumption. Neither should you.

  124. 124
    kairosfocus says:

    Hoki:

    Experience and observation of human intelligent designers in action is just that: experience/ observation of intelligent designers in action.

    Where such designers have known characteristic artifacts or effects that are beyond the credible reach of chance + necessity, then we are entitled to infer to ID on seeing such reliable signs of intelligence.

    Are you in a position to argue that humans exhaust the possible set of designers?

    Or, that non-human designers will never manifest such diagnostic signs?

    Or, can you show where signs such as IC and FSCI are known — observed — to be the result of spontaneous, undirected chance + necessity?

    If not, we are entitled to reason from signs of design to designers, even beyond where humans are implicated.

    GEM of TKI

  125. 125
    kairosfocus says:

    PS: Cf the fate of junk DNA as summarised above . . . and who predicted it.

  126. 126
    Hoki says:

    kairosfocus (#124)

    Or, that non-human designers will never manifest such diagnostic signs?

    No…… my point was that ID says nothing about the designer whatsoever, whether it’s human or not. But here we have Joseph and yourself claiming that non-human intelligence would manifest such diagnostic signs.

    Where such designers have known characteristic artifacts or effects that are beyond the credible reach of chance + necessity, then we are entitled to infer to ID on seeing such reliable signs of intelligence.

    And this is as far as ID allegedly goes. Given an observation that, for example, something has lots of CSI and function, the hypothesis is that something intelligent created it. I have a sneaking suspicion that some ID supporters seem to infer from this that as soon as something intelligent does something, it will always create something functional containing CSI.

  127. 127
    JayM says:

    kairosfocus @124

    Experience and observation of human intelligent designers in action is just that: experience/ observation of intelligent designers in action.

    Not exactly. It is observation of designers with the type and level of intelligence that can be expected of humans applying that intelligence to human problems.

    Hoki is raising the point that I raised lo these many months ago in one of Dr. Fuller’s threads: It is impossible to identify design without making some assumptions about the nature of the designer. Further, any investigation into design necessarily increases our understanding of the designer. The attempt to keep the nature of the designer off limits is doomed to failure.

    JJ

  128. 128
    JayM says:

    kairosfocus @124

    Or, can you show where signs such as IC and FSCI are known — observed — to be the result of spontaneous, undirected chance + necessity?

    Can you show where CSI has been calculated for a biological construct such as the bacterial flagellum, taking into account known evolutionary mechanisms? Despite repeated requests for a reference to such a calculation, I have never seen one.

    With regard to irreducible complexity, there are strong suggestions that the blood clotting cascade and the various bacterial flagella are the result, at least in part, of exaptation. I have high hopes that Dr. Behe’s research into the limits of known evolutionary mechanisms will result in observations that positively support ID, but those two initially promising examples of IC are probably not such evidence.

    I would also note that you are still falling into the fallacy of the false dichotomy. It is not sufficient to tear down modern evolutionary theory. For ID to be a scientific theory, it must make positive, testable predictions which could potentially falsify ID theory. That is what the answer to this FAQ needs.

    JJ

  129. 129
    Joseph says:

    Hoki:

    Experience from humans, by any chance? You are making the assumption that the designer would think and act like a human would. ID doesn’t make this assumption. Neither should you.

    Wrong!!!!

    Ya see it is very safe to make assumptions as long as those assumptions have some basis. Scientists do that every day.

    Then we test those assumptions.

  130. 130
    Joseph says:

    Hoki:

    No…… my point was that ID says nothing about the designer what-so-ever, whether it’s human or not.

    Your point is wrong.

    Ya see ID is NOT about the designer but that does NOT mean IDists cannot make assumptions based on our knowledge of designers.

    Also ID is about the detection AND study of said design.

    ID does NOT stop at detection.

  131. 131
    Hoki says:

    Joseph (#127)

    Ya see it is very safe to make assumptions as long as those assumptions have some basis. Scientists do that every day.

    Let’s run with that. Let’s assume that the designer(s) had imperfect techniques at the molecular level for stringing together DNA. Like humans, we’ll assume that they spliced bits and pieces together using various restriction enzymes and ligases. This can be a hit-and-miss affair. They never checked to make sure that they only got the DNA fragments they were interested in. From this we can predict that we should find lots of junk DNA.

    So… different assumptions, different conclusion. ID predicts both.

  132. 132
    Joseph says:

    Let’s assume that the designer(s) had imperfect techniques at the molecular level for stringing together DNA.

    First you have to remember what we are observing today is NOT what was originally designed.

    It is the result of numerous generations with random effects.

    But anyway- again our EXPERIENCE doesn’t offer any/ many successful designers that don’t check what they are doing.

    And you are (wrongly) assuming the sequence is the information.

    It isn’t any more than the disk is the computer information.

  133. 133
    JayM says:

    Joseph @132

    First you have to remember what we are observing today is NOT what was originally designed.

    If ID theory says nothing about the designer, how do you know this?

    JJ

  134. 134
    Joseph says:

    First you have to remember what we are observing today is NOT what was originally designed.

    If ID theory says nothing about the designer, how do you know this?-JayM

    What I said has nothing to do with the designer.

    I have no reason to infer that all living organisms are as they were originally designed.

    Things change- mutations happen.

    So, given that, my statement seems pretty elementary.

  135. 135
    Hoki says:

    Joseph (#134):

    Things change- mutations happen.

    And they couldn’t have created junk DNA? You have just made the ability of ID to predict the existence of junk DNA even worse.

    But anyway- again our EXPERIENCE doesn’t offer any/ many successful designers that don’t check what they are doing.

    I take it you don’t know much about genetic engineering?

    And you are (wrongly) assuming the sequence is the information.

    No I don’t.

  136. 136
    JayM says:

    Joseph @134

    First you have to remember what we are observing today is NOT what was originally designed.

    If ID theory says nothing about the designer, how do you know this?

    What I said has nothing to do with the designer.

    Of course it does. You are saying both that the designer is no longer interfering with natural processes and that the designer didn’t design in such a way as to prevent accidental changes.

    I have no reason to infer that all living organisms are as they were originally designed.

    You have no reason to infer otherwise, either, and yet you are.

    Things change- mutations happen.

    So, given that, my statement seems pretty elementary.

    How do you know that the mutations aren’t part of the designer’s design?

    Do you see now why detection of design requires some assumptions regarding the nature of the designer? It’s pretty elementary.

    JJ

  137. 137
    kairosfocus says:

    Onlookers (all up to 6,000 – 9,000 per day; and of course participants):

    Much of the immediately above is rather tangential to the focus of the thread, but a few notes will be helpful:

    1] JM, 128: Can you show where CSI has been calculated for a biological construct such as the bacterial flagellum, taking into account known evolutionary mechanisms?

    Of course, first, the flagellum is usually presented as an illustration of irreducible complexity, not of complex, specified information. (And, after years of attempted dismissals, it still stands as a clear demonstration of the limits of chance + necessity. Along with many, many other cases, to which I add the case that the DNA- RNA- Ribosome- enzymes- etc system constitutes an algorithmic, flexible program digital information processing device; i.e. a computer. In point of fact the real — and unmet — burden of proof is that evolutionary materialists and their fellow travellers have a challenge to show that such entities are reasonably achievable on their premises. Much in the way of just-so stories, very little in the way of showing that the proposed prebiotic and macroevolutionary mechanisms are capable of what they claim, on the gamut of our observed universe, much less this small planet. Indeed, on this, the latest evidence is that 2 – 3 single point mutations are a formidable threshold, per the largest single empirical test case we have seen, the malaria parasite. As to the blood clotting cascade, in the most public case of a challenge, the objector cleverly selected the portions that were not in the context where irreducible complexity applies, i.e. a strawman fallacy.)

    So, the first issue is that until the information is present, the system will not work and will be selected against by natural selection. And, there has been no empirically anchored presentation of a means by which already functional entities can be co-adapted all at once to get around that. For instance, the TTSS is on the evidence a derivative of the flagellum, just the opposite of the assumption that the flagellum is an extension of it. Indeed, that adds a further layer of functional complexity: a functional subsystem is embedded in the flagellum gene set and assembly instructions/algorithm.

    Also, given that the flagellum embeds several dozen specific proteins and a truly unusual self-assembly algorithm [think about chained proteins injecting themselves up a pre-assembled tube then reassembling in situ, while locking together to grow a filament, and a metric based on chain length that helps control the phases of the process . . .] the flagellum is doubtless functionally specific and complex information-based. Indeed, just taking 40 proteins times a short typical protein length of 150 AA by 3-letter codons by 2 bits per base gives 40 * 150 * 3 * 2 = 36,000. This greatly exceeds the 1,000 bit rule of thumb threshold for functionality that is beyond the reach of random search strategies on the gamut of our observed cosmos.

    Why so? [Simple: 1,000 bits specifies a config space of 10^301, ten times the square of the number of quantum-states of the ~10^80 atoms in the cosmos over its reasonably estimated lifespan. So, for an entity taking up just 1,000 bits of information in its functionality, we would not be able to scan as much as 1 in 10^150 of the available configs. In that context the likelihood of the chance variation part of any spontaneous mechanism — and, per empirical observation, it is either chance variation or purposeful arrangement that one sees as possible sources of information-bearing configs [unless there is yet another bit of materialist magic — “poof” — to be played . . .] — getting to the shoreline of an island of functionality is negligibly different from zero. So, it is utterly unsurprising that all observed entities that use at least that much information in their function, of which we do know the causal story, are artifacts of design.]
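
    For readers who want to check the arithmetic in this comment, here is a minimal sketch. The inputs (40 proteins, 150 amino acids, 3 bases per codon, 2 bits per base, and the 1,000-bit threshold) are the commenter’s own figures, not independently established values:

    ```python
    import math

    # The commenter's bit estimate for the flagellum's protein set:
    # 40 proteins x 150 amino acids x 3 DNA bases per codon x 2 bits per base.
    flagellum_bits = 40 * 150 * 3 * 2
    print(flagellum_bits)  # 36000

    # Size of the configuration space for a 1,000-bit entity: 2**1000 states.
    # Far too large for a float, so work in log10 instead.
    log10_configs = 1000 * math.log10(2)
    print(f"2^1000 ~ 10^{log10_configs:.0f}")  # ~ 10^301
    ```

    So the raw numbers do come out as stated (36,000 bits and roughly 10^301 configurations); whether those numbers license the inference drawn from them is, of course, exactly what the thread is disputing.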

    (Onlookers, notice, not one time that this has ever been put at UD as a challenge, have evo mat advocates and their friends been able to put up a single counter example. Given that we have a whole Internet full of cases in point of the validity of the FSCI threshold, that is no great surprise.)

    So, on the simplest metric, for the subset of CSI we have termed FSCI, we have excellent reason to see that something is amiss with the assertions above. (And, onlookers, similar calculations have been presented in this blog many times, by myself and by others esp GP.)

    On a more formal metric, and as repeatedly pointed out here at UD — it is even in the WACs — there are 35 published protein FSC calculations, embedded in a methodology that leads to a general approach: a direct application of the Dembski general FSCI metric, using the incidence of observed functional families of proteins to estimate target zones in config spaces.

    So, the above was an exercise in improper shifting of burden of proof, and a bluff.

    But all of this is tangential, there is . . .

    2] I would also note that you are still falling into the fallacy of the false dichotomy. It is not sufficient to tear down modern evolutionary theory. For ID to be a scientific theory, it must make positive, testable predictions which could potentially falsify ID theory. That is what the answer to this FAQ needs.

    Again, we are in the regime of inference to best, empirically anchored explanation across known alternatives. That is, the scientific method. (Translated: promissory notes on possible future explanations hold no cogency, so one cannot properly hold up vague logical possibilities as though this specifically discredits the best current explanation. That is selective hyperskepticism playing off improper shifting of a burden of warrant.)

    Second, what is being addressed in general by the design inference is a key issue: design is a known cause of complexity, one that is known to go beyond the capacity of chance variation and mechanical forces of necessity. Thus, to suppress this as a possible explanation on matters where worldview level assumptions of evolutionary paradigms come into play — a la Lewontin — is improper.

    Worse, the progress of design theory is increasingly showing how design may be credibly empirically detected, extending methods well known from statistics, various applied sciences and even statistical thermodynamics and information theory. So, the idea that ID is just tearing down MET is false. A methodology is being corrected in science and evo mat censorship of scientific inference on origins is being exposed, along with the way that resulting just so stories have thus been given a free pass.

    As to positive, testable predictions, you have just dodged one of them: at the simplest level, FSCI will — practically speaking — only be seen in the context of design, per search space reasons. That is, FSCI, FSC and CSI will be shown to be reliable signs of intelligent design.

    There are abundant confirmatory cases and ZERO disconfirmations. So, the test has been passed — just not acknowledged.

    As to the WAC 4, above I have documented where per wider themes in design, it was a prediction of leading ID scientists and thinkers that junk dna so-called would turn out to have serious degrees of function. This cut across the documented expectations of the evolutionary materialism champions. And, it is the design thinkers who have been vindicated by the facts emerging across the past decade or so.

    On the broader theme of the complexity of life, this expectation has been vindicated across the past 50 – 100 years or so.

    3] JM, 136: detection of design requires some assumptions regarding the nature of the designer

    Again, this inverts the empirical ladder of inference, turning it into a metaphysical assertion instead.

    Design detection methods make no claims to be able to infer to all instances of design, just those that are empirically evident relative to the way known — i.e. observed — designers work; in light of tested, empirically reliable [not logically bulletproof] signs of intelligence. (Post-Gödel, not even mathematics is logically bulletproof.)

    We have known designers, observed to create entities that exhibit IC, FSCI, FSC, CSI etc, as reliable signs of intelligence.

    Parallel to that, we have well-warranted knowledge that [1] necessity produces low contingency, [2] chance search is stochastic, [3] purposeful configuration is capable of outperforming chance on finding islands of function. Observed designers do this by insightfully, imaginatively and sometimes expertly harnessing the forces and materials of nature, as well as powers of reasoning etc, to create novel patterns that work towards goals. In so working, they leave traces that reflect the underlying purposefulness and cleverness, traces that are sufficiently often empirically evident to be worth following up.

    So, from reliable signs per OBSERVED designers, we can infer to characteristics of cases of evident design where we do not observe the designers directly. This is of course falsifiable through counter example or analysis that shows that the capacity of chance to generate such configs is greater than is currently estimated.

    Which is simply to say that this is a case of science at work, producing provisional knowledge claims, as per usual. And, as per usual, since we are resting on large bases of empirical evidence, we are reasonably confident of the extensions — so much so that unjustifiable a priori rules imposing materialism a la Lewontin are being inserted by evo mat advocates to try to block such inferences from being made.

    Therefore, just opposite to the assertion, we are inferring FROM the observed signs to the presence and characteristics of designers, from their evident artifacts.

    In this case, we have contrasting expectations that DNA molecules will store a lot of functional information vs that most of it will be junk.

    At first the evidence seemed to favour the junk advocates, but further investigation is vindicating those who inferred, from the FSCI evident in the known aspects of DNA, that there would be further functional aspects embedded in what was being written off as junk.

    And, such successful risky prediction in the teeth of the expectations of the dominant — and in this case rather hostile — school of thought is precisely a strong indicator of the power of an emerging research programme.

    So, the WAC is precisely correct to highlight this as a strong case in point of a successful courageous prediction now increasingly vindicated by the facts.

    GEM of TKI

    PS: BTW, interwoven multi-layer data and regulatory codes in a data storage system — as now observed for DNA — is a multiplication of the degree of complexity beyond the mere enumeration of bit length. I never ever even tried to do that trick in my own designs — tough to do, very, very tough to do. And the odds of that happening by chance are so close to zero as makes no difference.

  138. 138
    kairosfocus says:

    PPS: I probably need to remind us all that natural selection plays no creative role in the evolutionary paradigm. It is a culler that “weeds out” those entities that are non functional or inadequately functional relative to competitors, in reproducing populations in environments where — as Malthus explored — there is competition for scarce resources. So, it is proper to focus our attention on the required engine of variation — one form or another of chance contingency generation. Then, we look at the issue that certain degrees of function are informationally complex [500 – 1,000 bits being a handy rule of thumb threshold], so that islands of function [recall, codes are specific, and algorithms more so, as are coded data and data structures] will be deeply isolated in the space of possible configurations. So isolated that it is maximally unlikely for chance based processes to get to shorelines of function for hill-climbing cumulative selection processes to have any hope of success. Worse, such cumulative processes, per the malaria case, are credibly impotent to create the scale of novel information that would be onward required . . . (Toy scale simulations like Weasel and even its modern Genetic Algorithm descendants routinely duck the scale of complexity challenge.)

  139. 139
    derwood says:

    Wrong!!!!

    Ya se it is very safe to make assumptions as long as those assumptions have some basis. Scientists do that every day.

    Then we test those assumptions.

    This is true.

    So what are the assumptions for this Designer that have a basis (in fact? reality?)?

    I mentioned before that the basic mechanisms for evolutionary change are empirically verifiable and thus assuming that they exist is warranted.

    What of this Designer can we do the same for?

    And NOT rely on mere analogies to human activity?

  140. 140
    derwood says:

    First you have to remeber what we are observing today is NOT what was originally designed.

    It is the result of numerous generations with random effects.

    And this can be differentiated from evolution how?

    And you are (wrongly) assuming the sequence is the information.

    It isn’t any more than the disk is the compuetr information.

    And yet if you wipe a disc of its information, the disc remains intact. Wiping a chromosome of its “information” would require altering the DNA sequence or destroying it.
    So clearly, in a genome, the ‘information’ IS the DNA (or is on it or however you wish to phrase it).
    It is inherent in the sequence of nucleotides.

    Which is why I am unhappy with the use of the term ‘accidents’ when referring to mutations, for ‘accident’ implies a negative outcome.
    And I know ‘evos’ use the term, and I wish they would stop.

  141. 141
    derwood says:

    Experience and observation of human intelligent designers in action is just that: experience/ observation of intelligent designers in action.

    Actually, observing human intelligent designers is observing human intelligent designers. Transferring human activity to an unknown entity or entities is at best unwarranted.

    Are you in a position to argue that humans exhaust the possible set of designers?

    Or, that non-human designers will never manifest such diagnostic signs?

    Are you in a position to argue the contrary?
    If we can identify ‘design’ in nature, can we also identify ‘poor’ design?
    If not why not?

    Or, can you show where signs such as IC and FSCI are known — observed — to be the result of spontaneous, undirected chance + necessity?

    Is there a way, other than via post hoc rationalizations and contrived ‘filters’, to demonstrate that such properties even exist in nature?

    But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through along, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function.

    I may be new here and all, but it would be nice if ID advocates could slow down and read something an opponent writes for a change, instead of churning out knee-jerk, pre-fabricated expositions.

    For had anyone done so, they might have read some facts that contradict these prevailing – and quite false – notions that evolutionists declared all ‘junk DNA’ totally useless, etc.

    You might have seen that, in fact, it was evolutionists who not only speculated on and predicted, but actually FOUND function in some junk DNA decades before ID creationists claimed to have “predicted” it.

    *****************

    It seems to me that in order for someone to make a prediction, what they are predicting must not already have been discovered.

    That is, in order for ID advocates to take credit for “predicting” function in junk DNA, that function should not already have been discovered; otherwise, they are not really making predictions.

    So I have to wonder what the status of papers like:

    Cell. 1975 Feb;4(2):107-11.

    The general affinity of lac repressor for E. coli DNA: implications for gene regulation in procaryotes and eucaryotes.

    In which a function for junk DNA was predicted, or maybe this one:

    Zuckerkandl, 1981. A general function of noncoding polynucleotide sequences.

    wherein it is proposed that junk DNA acts as transcription factor binding sites and such (which it does).

    Or any of the papers cited here.

    What do ALL of these papers have in common?

    1. They were researched and written by run of the mill evolutionists.

    2. They pre-date the supposed ID predictions by decades.

    It is not fair to claim that ID advocates ‘predicted’ something after the fact. Wells’ supposed 2004 ‘prediction’ is especially suspect.

  142.
    derwood says:

    Joseph:

    Ya see Darwinists STILL think that most of the genomes are “junk”.

    And Johnson-Dembskiists still insist that it was designed by an anthropomorphic superbeing.

    The difference is, there is at least evidence that some junk DNA is indeed ‘junk’. I’m not sure what the policy on linking to blogs is, so I will just say search for Sandwalk and JunkDNA.

    Who cares about computer programs, genomes operate somewhat differently.

    They are both codes- one for a computer and one genetic.

    And yet beyond so basic and simplistic an analogy, they are quite different.
    Being conversant with software does not, in fact, give one an upper hand when discussing genetics.

    You are clue-less.

    Mutations are allowed to accumulate via selection- many types of selection.

    Yes, I know, which is essentially what I wrote.
    And clueless does not have a hyphen.

    Next you would be testing YOUR side’s claims by figuring out how far something can be reduced.

    At which point you would just declare that at THAT point ID occurred.
    And so on.

    It is YOUR position that says living organisms and their parts can be reduced to matter, energy, chance and necessity.

    I don’t recall ever reading anyone on ‘my side’ ever mentioning such things.

    That you can’t even understand that simple fact demonstrates you are well beyond reasoning with.

    I suppose so, but being a former research scientist and zoology/marine biology student, it seems that you could explain it all to me. After you divulge where you engaged in this study and what research you have done.


    So, what is the ID hypothesis for the appearance of the bacterial flagellum?

    That if it was designed it would not be reducible to matter, energy, chance and necessity.

    That is not an hypothesis as to why or how it was designed.

    What does the ToE say?

    I don’t know – I already explained that I do not know enough about it specifically to comment.
    But OK – the Designer is constrained to Designing bacterial locomotive parts and some intracellular molecules.
    When it is discovered that such Designer action is not necessary (I have little doubt that such a thing will eventually be discovered – ‘naturalism’ has a pretty good track record in finding things out and thus minimizing the role for magical tinkerers), what then? Will IDists and creationists claim that individual monomers are the product of Design?

    Allow me to answer that – Yes. I have already been told this by an IDist on another forum. There is no way that ID advocates will allow that their position is built on sand.


    What is the ID explanation for as to why there is more than one kind of flagellum in bacteria?

    Why does ID have to explain that?

    Doesn’t ID present itself as a viable alternative to evolution?
    As such, does it not have to be at least AS good as evolution, if not better? If not, what is the impetus for abandoning evolution?

    The ToE doesn’t.

    Perhaps not specifically (as I said, I don’t know much about bacterial flagella), but it would seem to me that since it appears that one type of flagellum arose via modification of related structures, that other types of similar structures could similarly be coopted.
    ReMine wrote in his latest entry that life is set up to look like it was designed by a single designer. Why would a single designer put different flagella in organisms that are essentially the same?


    What is the ID explanation for as to why not all bacteria have flagella?

    Not all bacteria require one.

    How do you know what bacteria require?

    Your turn:

    How can we test the premise that the bacterial flagellum arose from a population that never had one via an accumulation of genetic accidents?

    We can analyze the genomes/proteomes of bacteria in a phylogenetic framework and infer the homologies and polarity of changes occurring through time as indicated by genetic changes.
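    A minimal sketch of the kind of polarity inference described above, using outgroup comparison (all taxon names and character states below are hypothetical, not data from this discussion):

```python
def polarize(character_states, outgroup):
    """Outgroup comparison: the state seen in the outgroup is taken as
    ancestral for the ingroup; any other state is inferred as derived."""
    ancestral = character_states[outgroup]
    return {taxon: ("ancestral" if state == ancestral else "derived")
            for taxon, state in character_states.items()
            if taxon != outgroup}

# Hypothetical presence/absence matrix for one character
# (e.g. a flagellar gene): 0 = absent, 1 = present.
states = {"taxon_A": 0, "taxon_B": 1, "taxon_C": 1, "outgroup_sp": 0}
polarity = polarize(states, "outgroup_sp")
# taxon_A retains the ancestral state; taxon_B and taxon_C are derived.
```

    Real analyses use probabilistic models across whole gene families, but the polarity logic is the same in miniature.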

    This has been done, and the analyses caused your own Mike Gene to all but acknowledge that the flagellum likely arose via evolution:

    “But in a far more important and global sense, it does indeed look like Matzke’s hypothesis is correct and that the TTSS machinery is homologous to the F-ATPase.
    In The Design Matrix, I explore how the concept of IC interfaces with cooption and intelligent design and offer the following as part of my approach:
    […]
    Multiple points of homology between the components of the F-ATPase and flagellum/TTSS would clearly qualify as “various parts of the machine” having “homologs that are in turn part of a system that is more ancient than the machine.””

    Hadn’t you heard?

  143.
    Joseph says:

    Hoki,

    Point taken- Yes those random effects on the original design that led to what we observe today could lead to real “junk” DNA.

    Let me soak on that while I respond to derwood…

  144.
    Joseph says:

    derwood,

    1- Evolution is not being debated

    2- It appears that two specified mutations are about all your scenario can handle- That is from evolutionists trying to refute Dr Behe in a peer-reviewed journal-

    “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution” (Durrett, R & Schmidt, D. 2008. Genetics 180: 1501-1509)

    You can read Behe’s responses here, here, here, here, and here

    3- As for DNA being the information:

    “Yet by the late 1980s it was becoming obvious to most genetic researchers, including myself, since my own main research interest in the ‘80s and ‘90s was human genetics, that the heroic effort to find the information specifying life’s order in the genes had failed. There was no longer the slightest justification for believing that there exists anything in the genome remotely resembling a program capable of specifying in detail all the complex order of the phenotype. The emerging picture made it increasingly difficult to see genes in Weismann’s “unambiguous bearers of information” or to view them as the sole source of the durability and stability of organic form. It is true that genes influence every aspect of development, but influencing something is not the same as determining it. Only a very small fraction of all known genes, such as developmental fate switching genes, can be imputed to have any sort of directing or controlling influence on form generation. From being “isolated directors” of a one-way game of life, genes are now considered to be interactive players in a dynamic two-way dance of almost unfathomable complexity, as described by Keller in The Century of The Gene.”
    Michael John Denton page 172 of Uncommon Dissent

    It just isn’t so.

    4- By observing human and other designers AND also observing what nature, operating freely can do, we can then take that KNOWLEDGE and apply it to cases in which we are investigating and don’t know the cause.

    So if we observe something and we then infer design all one has to do is step up and demonstrate that no designer is required to produce it.

    5- The anti-ID position is that living organisms are reducible to matter, energy, chance and necessity.

    The ToE says that the bacterial flagellum is too.

    Yet to date not one scientist has been able to test that premise.

    6- If all bacteria required a flagellum then there wouldn’t be any bacteria without at least one

    7- Mike Gene thinks that if the genes are similar and they all exist, even if in different bacteria, then it is possible for all the “correct” genes to end up in one.

    However that doesn’t even start to address the issues- assembly being one of them.

    Some derived molecules require chaperones to take them to the correct place to avoid cross-reactions. And most derived products are required in high quantities.

    So with new genes you need new binding sites, promoters, enhancers, repressors and THEN you need to slide all of that in to the existing combinatorial logic.

    There just isn’t enough time unless it was designed to evolve.

    8- What do you think the design hypothesis was for Stonehenge?
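    The “waiting for two mutations” question in point 2 above can be made concrete with a deliberately toy Monte Carlo sketch. This is not Durrett and Schmidt’s actual mathematical model: the neutral Wright-Fisher setup and the unrealistically high parameter values below are assumptions chosen only so the simulation finishes quickly.

```python
import random

def generations_until_double_mutant(pop_size=500, mu=1e-2, seed=0):
    """Toy neutral Wright-Fisher simulation: count generations until some
    individual carries both of two specified point mutations.
    mu is the per-site, per-replication mutation probability."""
    rng = random.Random(seed)
    pop = [(False, False)] * pop_size  # (has_mutation_1, has_mutation_2)
    generation = 0
    while True:
        generation += 1
        next_pop = []
        for _ in range(pop_size):
            m1, m2 = rng.choice(pop)        # uniform parent choice (neutral drift)
            m1 = m1 or (rng.random() < mu)  # independent chance of each mutation
            m2 = m2 or (rng.random() < mu)
            if m1 and m2:
                return generation
            next_pop.append((m1, m2))
        pop = next_pop

t = generations_until_double_mutant()
```

    Realistic per-site mutation rates (on the order of 1e-8) make such waiting times vastly longer; the disagreement in the exchange cited above is over how much longer, and over what that implies.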

  145.
    Joseph says:

    derwood:

    I mentioned before that the basic mechanisms for evolutionary change are empirically verifiable and thus assuming that they exist is warranted.

    The problem is you appear to conflate “evolution” with the blind watchmaker.

    Read “Not By Chance” By Dr Lee Spetner and then you will (maybe) understand the telic position.

    Hint- not all mutations are random.

  146.
    kairosfocus says:

    Onlookers & participants:

    Thanks to the Internet Wayback Machine [which captured a lot of stuff from a lost page from ResearchID], a trove of further facts:

    ________________

    1] origin:

    The earliest use of the term “Junk DNA” in the biological literature appears to be by Dr. Susumu Ohno in his 1972 paper titled, “So much “junk” DNA in our genome.”[1] Ohno focused “mainly on the fossilized genes, called pseudo genes, that are strewn like tombstones throughout our DNA.” Ohno laments that it was difficult to perceive the role repetitive DNA played, and expected much of DNA to be non-functional, due to his evolutionary presuppositions. Ohno’s paper is in the collection “Evolution of Genetic Systems” (1972), part of a symposia anthology of the Brookhaven National Laboratory, Biology Department, Upton, New York.

    2] Brenner’s subtlety:

    When challenged by someone with the argument that God would not have created us with 97 per cent of redundant or useless DNA, Brenner is said to have retorted, “I said it was ‘junk’ DNA, not ‘trash’. Everyone knows that you throw away trash. But junk we keep in the attic until there may be some need for it.”

    3] Dawkins weighs in, others follow:

    In 1976, Richard Dawkins published his gene-centric view of evolution in a book titled The Selfish Gene.[22] In this book, Dawkins presented a view of evolution divergent from that of most other thinkers of the time, and Dawkins’ book became a common reference for papers seeking to explain repetitive DNA as parasitical, including Crick and Orgel (1980). In 1992, Pagel and Johnstone reiterated these evolutionary predictions of junk DNA having no function.

    4] The consequent general triumph of the “junk” thesis:

    “These regions have traditionally been regarded as useless accumulations of material from millions of years of evolution.”

    – Yam, 1995[25]

    “. . . scientists generally accept the notion that most of this [non-coding] DNA is junk.”

    “Although the high content of “junk DNA” was initially surprising when it was discovered, our current understanding of the mechanisms of genome expansion (duplication and insertion) and the apparent lack of significant selective pressure to minimize genome size combine to make the accumulation of useless sequences in our DNA seem inevitable.”

    – Edward Max, 2003[26]

    “They are the remains of nature’s experiments which failed. The earth is strewn with fossil remains of extinct species; is it a wonder that our genome too is filled with the remains of extinct genes?”

    – Susumu Ohno, 1972[27]

    “Most Darwinists erroneously predicted that 98.7% of the DNA was devoid of function (“junk”), while the ID/ET theory correctly predicted some yet to be decoded function of junk DNA.”

    – Andras Pellionisz

    “The simplest way to explain the surplus DNA is to suppose that it is a parasite or at best a harmless but useless passenger, hitching a ride in the survival machines created by the other DNA.”

    – Christian de Duve, 1996[31]

    “The excess DNA in our genomes is junk, and it is there because it is harmless, as well as being useless, and because the molecular processes generating extra DNA outpace those getting rid of it.”

    – Sydney Brenner, 1998

    –> Notice, there is always room for the exceptional persons who say no, but we must recognise that there was a movement that had a strong reason to swim against a very strong tide.

    –> That is why citing the odd quotation to suggest that, oh, there were folks who said much the same thing long ago has very little weight. [On MOST scientific topics of note, there is a pattern of minority or even fringe alternatives and oppositions.]

    –> We are dealing with the main weight of opinion here; what by the 1990’s was the “traditional” view, propped up by the observation that 97 – 98+% of say the human genome was “junk”. [Recall how that was being used rhetorically by Mr Dawkins as late as 2003 – 4 as already cited]

    –> By contrast, the design thinkers said no, this is wrong, and for a reason: how observed designers [the family resemblance standard against which we seek to recognise other cases of design] act, in a purposeful fashion that tends not to have great excess of that which is useless in key functional elements.

    –> For, you may have reserve capacity, safety margins, room for robustness etc, but as a rule, you do not load up that which is intended to function with that which has no function.

    5] Design theorists and friends

    ResearchID summarises:

    >>Michael Denton in 1986 discussed “functional” and “junk” DNA sequences, in Evolution: A Theory in Crisis[36] He found that major classes of animals appear to be equidistant, not the expected evolutionary tree pattern. i.e., he posited that “junk” DNA had function. In Nature’s Destiny, Denton again summarized the two positions:[37]

    “If it is true that a vast amount of DNA in higher organisms is in fact junk, then this would indeed pose a very serious challenge to the idea of directed evolution or any teleological model of evolution. Junk DNA and directed evolution are in the end incompatible concepts. Only if the junk DNA contained information specifying for future evolutionary events, when it would not in a strict sense be junk in any case, could the finding be reconciled with a teleological model of evolution. Indeed, if it were true that the genomes of higher organisms contained vast quantities of junk, then the whole argument of this book would collapse. On any teleological model of evolution, most, perhaps all, the DNA in the genomes of higher organisms should have some functions.”

    In 1994, pro-ID scientist and Discovery Institute fellow Forrest Mims III warned against assuming that ‘junk’ DNA was ‘useless.’ Science [a leading general journal, second to Nature] refused to print Mims’ letter in 1994 and again in 2003.[38] . . . .

    In 1996, Michael Behe, in Darwin’s Black Box, countered Kenneth Miller’s 1994 “junk and scribbles” arguments. e.g., Behe posited possible functionality of pseudogene regions:[39]

    “A couple of potential uses that spring to mind as I sit here at my desk include bonding to active hemoglobin genes during DNA replication in order to stabilize the DNA; guiding DNA recombination events; and aligning protein factors relative to active genes.”

    In 1998, John G. West, associate director of the Discovery Institute predicted:

    “an ID theorist, reckoning that an intelligent designer would not fill animals’ genomes with DNA that had no use, predicted that much of the “junk” DNA in animals’ genomes — long seen as the detritus of evolutionary processes — will someday be found to have a function.” [40]

    William A. Dembski predicted in 1998: [41]

    “But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how “non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.” Design encourages scientists to look for function where evolution discourages it.” [Observe how, years later, the Darwinist mainstream was still standing by the junk estimation and was surprised by the ENCODE results etc. Notice the high street press reports on that as excerpted above in 122. Paradigms are ways of seeing and simultaneously ways of NOT seeing.]

    Subsequent ID theorists repeated this ID prediction that functionality would be found in agenic or “Junk” DNA.

    Roland Hirsch in 2000 observed:

    “2. The complexity of the genome-proteome-phenotype relationship. Research in biochemistry and molecular biology is revealing increasing complexity in this relationship, beyond the capacity of ‘natural selection’ and Darwinian theory to deal with. For example, segments of DNA that are not part of any gene nevertheless appear to have critical functional roles in all living organisms; they are not ‘junk DNA’.”[42]

    Jonathan Wells in 2004:[43]

    “Since non-coding regions do not produce proteins, Darwinian biologists have been dismissing them for decades as random evolutionary noise or “junk DNA.” From an ID perspective, however, it is extremely unlikely that an organism would expend its resources on preserving and transmitting so much “junk.” It is much more likely that noncoding regions have functions that we simply haven’t discovered yet.” . . . .

    Dr. Richard Sternberg in 2002 reviewed extensive evidence of functionality of certain types of junk-DNA. He concluded that:[64]

    “neo-Darwinian ‘narratives’ have been the primary obstacle to elucidating the effects of these enigmatic components of chromosomes.”. . . “the selfish DNA narrative and allied frameworks must join the other ‘icons’ of neo-Darwinian evolutionary theory that, despite their variance with empirical evidence, nevertheless persist in the literature.” >>

    6] The data rolls in . . .

    >> Recent research indicates that active transposable elements (which make up the largest part of the “junk”) actually have a strong mutagenic power. They are preferentially activated under stress (i.e. situations where rapid evolution is required). Thus, one evolutionary function of transposable elements is to provide genomic rearrangements (i.e. genomic turnover) that accelerate genome evolution and provide genomic “raw material” from which new variation can arise.

    Recently, researchers have begun operating on the premise that there is functionality in agenic DNA (the 98% non-coding portion of the genome) and that it is not “Junk DNA”. For example Rosetta Genomics began exploring this region. Rosetta subsequently discovered about half the microRNA genes known.

    Further functionality is being discovered in “Junk DNA.” e.g., in the following research:

    * Y-chromosome control functions
    * Junk DNA diseases
    * Association with Alzheimer’s disease
    * Antifreeze-protein gene has evolved from Junk DNA
    * Two-Step Recruitment of RNA-Directed DNA Methylation to Tandem Repeats
    * An essential role for the DXPas34 tandem repeat and Tsix transcription in the counting process of X chromosome inactivation

    In 2007, the ENCyclopedia Of DNA Elements (ENCODE) project, organized by the National Human Genome Research Institute of the National Institutes of Health (NHGRI, of NIH), reported on its exhaustive, four-year effort studying 1% of the human genome for a parts list of biologically functional elements. The ENCODE consortium discovered that:

    1. the majority of DNA in the human genome is transcribed into functional RNA molecules
    2. these RNA transcripts extensively overlap one another.

    This broad transcription pattern challenges the “junk DNA” perspective that the vast majority of the genome has no biological function, with only a small active set of discrete genes. The consortium concluded:

    ‘However, we have also encountered a remarkable excess of experimentally identified functional elements lacking evolutionary constraint, and these cannot be dismissed for technical reasons . . . >>

    7] Backing away . . .

    >> The term “junk DNA” is slowly disappearing from research findings in the biological literature, and many occurrences of the expression appear as pejorative slurs against the concept.

    “I don’t think people take the term very seriously anymore.”

    – Eric Green, researcher involved with the mapping of chromosome 7 (NHGRI). 1998[70]

    “The fact that we don’t know something has a function doesn’t mean that in reality it does nothing.”

    – Carl Schmid, Ph.D., chemist, UC Davis, 1998[71]

    “I think this will come to be a classic story of orthodoxy derailing objective analysis of the facts, in this case for a quarter of a century. … The failure to recognize the full implications of this–particularly the possibility that the intervening noncoding sequences may be transmitting parallel information in the form of RNA molecules may well go down as one of the biggest mistakes in the history of molecular biology.”

    – John S. Mattick, Director, Institute for Molecular Bioscience, Univ. Queensland, Brisbane, Australia, 2003.[72: The Gems of “Junk” DNA, W. Wayt Gibbs, Scientific American, November, 2003.]

    “…a certain amount of hubris was required for anyone to call any part of the genome ‘junk’.”

    – Francis Collins (2006)[73] >>

    __________________

    GEM of TKI

  147.
    rna says:

    If the view that most of the DNA was really ‘junk’ is so central to ‘Darwinist positions’, how come a number of die-hard Darwinists, the heads of the Human Genome Project, decided already in the late ’80s/early ’90s to sequence the complete genome, including all that ‘junk’, and rejected the alternative proposal to focus only on the protein-coding regions? If anyone had really been convinced that the ‘junk’ was really ‘junk’, and not just a very oversimplified media and textbook catchphrase, they would not have rejected the approach of sequencing only protein-coding regions, which would have saved a lot of resources and was discussed at the time for exactly this economic reason.

    Thus, this bunch of ‘Darwinists’, including Jim Watson and later Francis Collins, must already have been convinced back then that they would find a lot of interesting things in that ‘junk’ before deciding to spend a lot of money on sequencing it. This is really not surprising, since already in the early ’60s Jacob and Monod proposed important regulatory roles for non-coding DNA sequences in their operon model of gene regulation. Research into the function of non-coding DNA sequences has thus been mainstream molecular biology for at least four decades.

  148.
    jerry says:

    No one has answered my question posed in #22 about ENCODE and junk DNA. At least I did not see an answer. I think it is necessary to answer it before making a big deal out of the uses of Junk DNA.

  149.
    Dave Wisker says:

    Hi jerry,

    No one has answered my question posed in #22 about ENCODE and junk DNA. At least I did not see an answer. I think it is necessary to answer it before making a big deal out of the uses of Junk DNA.

    You might be interested in the blog “The RNA Underworld”, maintained by RNA researcher (and occasional Panda’s Thumb contributor) Art Hunt. This particular article, “Junk to the Second Power,” deals with your question:

    http://aghunt.wordpress.com/20.....ond-power/

    From the article:

    The ID blogosphere is much agog, and has been for some time, about recent (and not so recent) results that suggest some sort of functionality in what has long been considered to be nonfunctional (junk) DNA in eukaryotes. The most recent buzz centers on studies (such as ENCODE) that indicate that large swaths of so-called junk DNA are “expressed” by RNA polymerase II. Apparently, the fact that RNA polymerase transcribes alleged junk DNA is a blow to Darwinism, and a feather in the cap of ID. Their excitement in this regard, I suspect, will wane greatly once they learn some of the true implications of these results. For the matter of “expression” in junk DNA is one wherein ID meets, and gets swallowed by, the Garbage Disposal.
    What follows is a discussion of a relatively recent report that rains on the ID parade. As is my habit, I’ll summarize the essay for those with short attention spans – the bottom line is that the so-called “function” that so excites the ID proponents may be little more than manifestations of quality control in gene expression, and that the supposed functional swaths of non-coding junk DNA may be nothing more than parts of the genome that encode, and lead to the production of, “junk” RNA (if I may be so bold as to coin a phrase). In a nutshell, junk piled on top of junk.

  150.
    Adel DiBagno says:

    jerry [148],

    You ask a good question. I’ve been looking for updates, but haven’t found anything yet.

    Note that the pilot project data, which were published in June 2007, involved only 1% of the genome of a single species (human). Since then, the pilot project has been expanded, according to the NIH ENCODE project site, which links to other resources:

    http://www.genome.gov/10005107

    My hunch is that it’s going to take a lot more work – and time – to more fully understand the ENCODE results.

    However, there are other sources of information on expression and function of non-coding DNA. Try a search for “junk DNA function” at PubMed.

  151.
    Nakashima says:

    Mr Jerry,

    I agree with you that junk DNA is not enough. If ID works as a mathematical idea, there should be confirmatory applications outside of historical biology. Or is it now the contention that the only ‘designed’ object is the human genome? Is the lack of junk DNA in another organism’s genome also a prediction of ID? Does ID make a prediction about the fine tuning of the universe?

    Just in terms of categories, a design inference is not a prediction. If various authors in the ID camp have made predictions about junk DNA, it seems to me that they are stating a belief that the designer is conservative, efficient, parsimonious, etc.

  152.
    Polanyi says:

    People where do we draw the line?

    When do we say enough is enough? No more get-out-of-jail-free cards for evolution “theory”?

    Is the mere fact that evolution clouds our understanding not a good enough reason to abandon the sacred cow??

  153.
    kairosfocus says:

    Gentlemen:

    No one said that the issue of junk DNA was enough to establish design theory!

    It serves as a simple, direct example of a successful prediction where the majority paradigm plainly led to decades of going down the wrong road on interpreting non genetic code DNA. (Go back up and see the actual citations . . . )

    And, design thinking pointed in the right direction, swimming strongly against the tide.

    Precisely because thinking in terms of design allowed ID thinkers to expect what seemed to be unreasonable to those whose thought pattern was different. And now, against the odds, that odd man out view is being substantiated.

    [For me this is a matter of direct memory: only several years ago I recall the reasons why Junk DNA is Junk being trotted out and thrown in my face with mocking scorn as though this proved that I was an ignoramus butting in where he did not have any business being. Never mind that my background precisely equips me to think about digital information in algorithmically functional contexts, just what we see DNA and its co-molecules providing. So, when I see what looks a lot like “ho hum, so what else do you have . . .” I think that that too is telling.]

    GEM of TKI

  154.
    Diffaxial says:

    gpuccio at 110

    1) I understood what you meant for “weak” and “strong”, but I have no room for “weak” ID.

    Then mine was a “garden path” error induced by your remark, “If you knew ID,” because most of ID proper, including this website, has most often advanced weak ID.

    I am just referring to ID methodology, which uses complexity as a tool to avoid false positives due to random effects. That has nothing to do with any “weak” conception of ID…That’s completely wrong. The threshold is designed to avoid false positives due to randomness, not in light of a competing theory. Indeed, the concepts of design detection were not created by ID, and apply, as you know, to many other fields. Avoiding false positives due to randomness or to necessity mechanisms is an essential requirement of scientific design detection, and is not done “in light of a competing theory”

    In my opinion you are mistaken. The phenomena that can be explained by means of “randomness or…necessity mechanisms” shift as research within evolutionary biology progresses, leaving for design explanations only those phenomena that are not swept up by the competing research project. Hence the boundary beyond which design can be claimed arises not from necessary positive entailments of ID theory, nor from research motivated by ID theory, but from research that continues without taking notice of or receiving useful input from design theory.

    That’s simply not true. We have done that many times. I was recently commenting exactly on that in the thread “Extra Characters to the Biological Code”. Therefore, I see no “embarrassing fact”.

    Neither does the guy trailing TP from his pants unawares see an “embarrassing fact.” 🙂

    Would you please provide some references that document the application of the explanatory filter to a biological structure of unknown origin (natural vs. necessarily designed), and that include computations of the relevant probabilities?
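    For context on what such a computation usually looks like: the “chance” step of the filter is commonly reduced to comparing a target’s improbability under a uniform null model against Dembski’s universal probability bound of 10^-150 (about 500 bits). A minimal sketch follows; the uniform null model is an assumption, and is exactly the point critics of the filter dispute.

```python
import math

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's proposed bound
THRESHOLD_BITS = -math.log2(UNIVERSAL_PROBABILITY_BOUND)  # roughly 498.3 bits

def uniform_model_bits(length, alphabet_size):
    """Bits of information in one specific sequence, assuming every
    sequence of this length is equally probable (a naive null model)."""
    return length * math.log2(alphabet_size)

# Example: a specific 150-residue protein over the 20 amino acids.
bits = uniform_model_bits(150, 20)     # about 648 bits
exceeds_bound = bits > THRESHOLD_BITS  # crosses the bound under this null
```

    The dispute in this thread is precisely over whether that uniform null is the right chance hypothesis for any real biological structure.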

    I don’t understand. One moment you seem to admit that my strong ID makes predictions; the next moment you deny it. I stick to the prediction I made about informational saltations.

    A moment ago you approached the threshold of making an empirically verifiable prediction arising from a “strong” form of ID (that designed forms must include saltations) but backed away to a “weak” position (the prediction of saltations arises from the logic of design detection, not as a necessary entailment of the design hypothesis.) The latter reminds me of my theory of lost coins: I predict that lost coins are distributed such that they will all be found near streetlights. My theory of coin detection specifies a night time search.

    And, in classical hypothesis testing, rejection of the null hypothesis does not comprise a test of the alternative hypothesis, but the alternative hypothesis can be affirmed if it is the “best explanation”

    But as we’ve seen in this discussion, absent tests of necessary entailments of a given theory, “best explanation” remains entirely subjective, as IMHO design is an explanation that doesn’t explain within biology. How to resolve such impasses? Describe empirically testable entailments that arise from ID theory, such that failure to observe those predicted entailments places the theory at risk of disconfirmation.

    I simply believe that ID could very well guide empirical research, if it were accepted…

    Yet no one seems able to describe the research that would arise from such guidance. Such descriptions are free, and have ample venues on the internet within which they may be proposed. But…?

    But if you really believe that observations merely serve as tests of existing theories, I am afraid you have a really strange conception of science and knowledge.

    It does not follow from the assertion that the epistemological foundation of science lies in theoretical prediction, and empirical tests of such predictions, that the only role for observation in science is to conduct such tests. Of course observations serve other purposes. But observations serving these other purposes, absent the relationship between theory and test I describe, are not alone sufficient to establish an investigation as a scientific investigation, nor to increase confidence in a theory by empirical means.

    But reinterpreting others’ data is absolutely science, and a very essential part of it.

    Similarly, reinterpreting others’ data may have a role in science, but absent, for example, replication studies that test predictions that arise uniquely from your own, competing theory, what you are left with is data that is amenable to two interpretations, with no reason to prefer your reinterpretation over the paradigm that originally generated the data.

    Ultimately, you’ve got to roll up your sleeves in the manner I describe. Absent that, you aren’t doing science.

  155. 155
    Hoki says:

    So, it seems like the argument for why ID predicts that there should be little junk DNA is that, in our experience, human designers make functional things. A favorite analogy seems to be the functionality of computer code. However, if the experience of human intelligence should be used in the first place, we should, perhaps, use human genetic engineering as a model instead. Here, genetic fragments are spliced together in test tubes and the results of such procedures are far from predictable. Multiple insertions of desired fragments (as well as non-intended fragments) occur constantly (and potential deletions and more). The easiest way to check if you got your desired genetic fragment is to check the PHENOTYPE of the organism you are engineering. The phenotype will most likely say very little about the genotype – e.g. if you got junk inserted.

    So, if we were to use the assumption that the designer did something like above, we could reasonably predict that we should find junk DNA. Does anyone think that my assumption is better/worse than the one used by Dembski et al? Can you even remotely put a number on the plausibility of either?

  156. 156
    gpuccio says:

    Diffaxial #151:

    most of ID proper, including this website, has most often advanced weak ID.

    I still don’t understand what you mean by “weak ID”, least of all why you make such a strange statement. In your #69 you state:

    ““weak ID” (merely asserting the possibility of design detection) and “strong ID” (positive assertions about design)”

    To be clear, ID in a strict sense is about how to detect design when it is possible to detect it. That does not mean that it does not make positive assertions about design. The only point is that simple design detection does not necessarily give information about the designer and his purposes or means of implementation. Design detection detects design. But the nature of design and its phenomenology is obviously a premise of ID. So, again, I think that your concept of “weak” or “strong” ID is completely specious. There is no such division. There is the science of design detection, with its quantitative and formal methods, and there is a broader science of design as a phenomenological reality. The two are strictly connected.

    In my opinion you are mistaken. The phenomena that can be explained by means of “randomness or…necessity mechanisms” shifts as research within evolutionary biology progresses, leaving for design explanations only those phenomena that are not swept up by the competing research project.

    IMO you are mistaken. If you apply the correct methods of design detection, not one single phenomenon attributable to design has ever been shown to have other explanations. That’s why we say that the method of design detection has practically no false positives. So, yours is only wishful thinking and mythology.

    Hence the boundary beyond which design can be claimed arises not from necessary positive entailments of ID theory, nor from research motivated by ID theory, but by research that continues without taking notice of or receiving useful input from design theory.

    There is no boundary. The structures which are recognized as designed are designed. See previous point. No false positives.

    A moment ago you approached the threshold of making an empirically verifiable prediction arising from a “strong” form of ID (that designed forms must include saltations) but backed away to a “weak” position (the prediction of saltations arises from the logic of design detection, not as a necessary entailment of the design hypothesis.)

    I must say that you really don’t understand ID. So I shall try to be even more clear.

    If something is designed, there is always a saltation. When the designer inputs information, he always creates a saltation. The result he models would not have occurred by itself. It is marked by specification.

    The only problem is that if the information inputted by the designer is simple, then we can still recognize the specification, but we have no way, unless we have observed the design process or know the designer, to be sure that it is not a “pseudo-specification” arising from some random process. That’s why design detection has many false negatives: simple designs cannot be detected with certainty by the method of design detection used by ID.

    So, what I am saying is not that saltations will be there because my method recognizes saltations, as you seem to believe. I am saying that saltations will be there in all designed things, but that in all cases where design is complex enough, they will be recognizable with certainty. But the presence of saltations is a characteristic of design, and the presence of complex saltations is an intrinsic characteristic of all design whose complexity is beyond a certain threshold.

    Now, we know perfectly well that almost anything we can observe in living beings is highly complex. Proteins are just the first example. So, what I am saying is not that we will find saltations: saltations are everywhere under our eyes, in the biological world. I am saying that, when the progress of biological research allows us to evaluate those saltations quantitatively, with all the necessary knowledge of the relationship between protein structure and function to definitely evaluate the target space of a search, and enough information about natural history to definitely build a model in some detail, it will be obvious that the observed saltations will not be explained by a random variation + NS model. That is a prediction, and it will be verified. For those verified saltations, quantitatively and qualitatively the same as those observed in human design products, design remains the best explanation, indeed the only available explanation.

    But as we’ve seen in this discussion, absent tests of necessary entailments of a given theory, “best explanation” remains entirely subjective, as IMHO design is an explanation that doesn’t explain within biology.

    That’s not true. Best explanations are not subjective. If you have a set of data, either you can explain them or you can’t. If you have an explanatory theory, that’s quite different from not having any theory.

    And design (which is an observed, empirical process) does explain designed things, both in biology and in all other fields.

    Describe empirically testable entailments that arise from ID theory, such that failure to observe those predicted entailments places the theory at risk of discomfirmation.

    I have done that. If, when more is known, the darwinist model can generate a credible quantitative explanation, ID is falsified. We only need the necessary information about protein functional space and natural history of protein emergence, and then we will be able to make detailed calculations.

    Yet no one seems able to describe the research that would arise from such guidance. Such descriptions are free, and have ample venues on the internet within which they may be proposed. But…?

    Some suggestions? Quantitative research about the size of protein function space, aimed at the quantification of the target space. Analysis of as many genomes as possible, to try to assess in detail the natural history of protein emergence. Quantification of the functional information content in as many protein families as possible, by the Durston method or other approaches. Research about the role of non-coding DNA, especially introns and transposons. Research about the transcriptome dynamics in the process of differentiation. Research about the regulation of alternative splicing and of post-translational modifications. And so on. All of that is research which is extremely pertinent to ID theory. And please, don’t answer that most of that research is already in some way being done by conventional biologists. That was exactly my point.

    But observation serving these other purposes absent the relationship between theory and test I describe, are not alone sufficient to establish an investigation as a scientific investigation, nor to increase confidence in a theory by empirical means.

    I deeply disagree with your view about science. I am happy that it is not you who can decide what establishes “an investigation as a scientific investigation”, or other such things. For me, observations and the capacity to give a credible explanation of them remain the foundation of science. And the incapacity to explain observations, as shown so brilliantly by darwinian theory, remains the hallmark of non science.

    Anyway, I have given you also the most important prediction which IMO will be able to discriminate between the two “explanations”.

    Similarly, reinterpreting others’ data may have role in science, but absent, for example, replication studies that test predictions that arise uniquely from your own, competing theory, what you are left with is data that is amenable to two interpretations, with no reason to prefer your reinterpretation over the paradigm that originally generated the data.

    I state again that the progress of biology is bound to allow a clear confrontation between darwinian theory and ID theory. Your phrase about “the paradigm that originally generated the data” is for me a very clear proof of your cognitive bias. Data are never generated by a paradigm. Data are data. You go on thinking that, if data were discovered by people who believe in a paradigm, or in the hope of supporting a paradigm, they are in some way the property of that paradigm. That’s absolutely false, even offensive for any conception of science. If data are interpreted in an incorrect way (which is the rule in the darwinian paradigm) it is the duty and the privilege of anybody else to point to that error, and to suggest a better interpretation. And there is always reason to prefer a theory which explains data to a theory which doesn’t explain them.

  157. 157
    JayM says:

    Attn: Kairosfocus and moderators

    I see that a post I made refuting Kairosfocus’ claim to have computed the CSI of a biological construct, taking into account known evolutionary mechanisms, has been removed. That post was perfectly polite and well within both the documented moderation guidelines and the behavior of others on this forum.

    So much for the new, open UD.

    JJ

  158. 158
    JayM says:

    To Kairosfocus:

    Since I’m being censored here, I invite you to continue the discussion in a neutral venue. If you go to http://groups.google.com/group/talk.origins and search for “Kairosfocus” you’ll see where I’ve started the discussion anew.

    I sincerely hope you’ll join me. I’d like to finish our chat.

    JJ

  159. 159
    gpuccio says:

    Hoki:

    #111 and #152

    I might be misunderstanding you here, but are you saying that since some of the DNA is functional, we somehow assume that the rest of it is and therefore ID predicts that most DNA will have function. Sounds a tad circular to me.

    It’s not circular. By design detection, we infer that the protein coding genome was designed, because we know its function, and can evaluate its complexity.

    From that, we hypothesize, quite naturally, that the whole genome was designed. You may ask why: very simply, if a designer designed the protein coding part, that is the most natural hypothesis. To hypothesize that the protein coding part was designed, and the rest is the product of random variation, seems a little bit bizarre, at least to me.

    So, the most natural hypothesis under the design scenario is: all the genome is designed. We understand the function of 1.5% of it. The function of the rest will be understood in time.

    Your reference to human genetic engineering is interesting. Indeed, humans use a variety of techniques in protein engineering, for instance. Some of them benefit from the (scarce) understanding we have of the protein structure–function relationship. Partial random search is used, too. But above all, intelligent selection is used. And the results, in the end, are intelligent products.

    Your distinction about phenotype and genotype does not mean much. A designer will probably act on the genotype and measure on the phenotype, unless he can directly check the information implemented in the genotype. Anyway, the hypothesis is that the designer has enough control to get 20,000 protein coding genes, perfectly functional, interspersed in a 3-gigabase genome, split into a great number of exons, controlled by procedures we still don’t understand, capable of generating thousands of different ordered transcriptomes for different, ordered states and cell types, of checking for errors, of responding to a lot of different challenges, of generating the macroscopic form of the organism, and so on. That does not seem the kind of designer who accumulates 98.5% of useless code.

    All that seems very much common sense to me. A child would probably see that without any difficulty. But darwinists have probably lost their common sense a long time ago…

  160. 160
    Hoki says:

    Here is what I think that Diffaxial means by weak vs strong ID. (I rarely tend to cut into other people’s conversations, but Diffaxial’s point is, I think, important [plus the fact that I have also brought up the same thing in this thread]).

    In Bayesianism, one differentiates between posterior probabilities and likelihoods. The likelihood of a hypothesis (H) is the probability that H confers on an observation O, while the posterior probability is the probability that O confers on H. Sounds cryptic so far, but bear with me. Say, for example, that you have lots of noises coming from your attic. You hypothesise that the noises emanate from gremlins playing bowling. This is a noisy endeavour, so your hypothesis has a high likelihood (i.e. given your hypothesis, you are very likely to get your observations). I doubt that anyone would argue that the converse is probable, however. I.e. given the observation that there is noise coming from the attic, the probability that this is due to gremlins playing bowling is low (i.e. the posterior probability is low).

    It seems to me like ID is framed as a posterior probability argument. Given an observation of, for example, CSI, ID says that there is a high probability (one, in fact) that this is due to intelligent intervention.

    However, I have argued that given intelligent intervention, the likelihood of any specific observation is very low.

    At least, I think this is what Diffaxial meant…
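    The likelihood/posterior distinction above can be made concrete with a short sketch. The Python snippet below is purely illustrative: the prior and likelihood numbers for the gremlin example are invented for the purpose, and only the structure of Bayes’ theorem is doing the work.

```python
def posterior(prior_h, lik_h, prior_alt, lik_alt):
    """P(H|O) via Bayes' theorem, with a single alternative hypothesis."""
    evidence = prior_h * lik_h + prior_alt * lik_alt
    return prior_h * lik_h / evidence

# Likelihood P(noise | gremlins bowling) is high: bowling is noisy.
lik_gremlins = 0.99
# But the prior P(gremlins bowl in attics) is minuscule (invented number).
prior_gremlins = 1e-9
# A mundane alternative (wind, mice) also predicts noise fairly well.
lik_mundane = 0.8
prior_mundane = 1.0 - prior_gremlins

p = posterior(prior_gremlins, lik_gremlins, prior_mundane, lik_mundane)
print(f"likelihood of gremlin hypothesis: {lik_gremlins}")
print(f"posterior P(gremlins | noise):    {p:.2e}")
# A high likelihood does not yield a high posterior when the prior is tiny.
```

    With these numbers the posterior comes out many orders of magnitude below the likelihood, which is exactly the gap between the two quantities that the gremlin example is meant to expose.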

  161. 161
    gpuccio says:

    Hoki:

    Thank you for your argument. I am not sure I understand it fully (I don’t know why, but when I read about Bayesianism I get a headache after one minute!). And I can’t really see the relationship with Diffaxial’s weak or strong ID. My limit, probably.

    So, I will try to do what I can do: explain myself (and, I hope, ID) more clearly.

    ID does not invent the relationship between CSI (or FSCI) and the process of design. That relationship is constantly observed in human design. Moreover, no CSI is ever observed out of a design process (except for biological CSI, which is the one at issue).

    That’s the empirical basis of the hypothesis: CSI is a product of design, and of design only.

    Please, take notice that such a hypothesis has never been falsified, not even once, which should mean something.

    The second part is: biological information, whose origin is unknown to us, does exhibit CSI. Indeed, a lot of CSI. Indeed, tons of CSI. Therefore, it seems very plausible to interpret it as the product of design. Moreover, all the other explanations given up to now are bogus.

    That’s the simple reasoning. With all respect for Bayesians, whom I sincerely admire (as I admire all those who can understand what gives me a headache). 🙂

  162. 162
    Dave Wisker says:

    jerry,

    I also gave a reply to your ENCODE question, but it was held up in moderation for a long time, in case you missed it. It’s at #149

  163. 163
    Adel DiBagno says:

    On logical fallacies:

    gpuccio [30] wrote:

    There are not many “theories of origins”. If RV and NS are ruled out, what are we left with? I’ll tell you. Design, or no theory.

    To which I responded [70]:

    This the logical fallacy of false disjunction

    gpuccio countered [95]:

    Not again, please… I think we had already clarified that this is not a logical point. And so there is no logical fallacy. I had clarified that even to Diffaxial:

    “Yes, it does amount to support for ID. Again, you forget that we are talking empirical science here, and not logical demonstrations. I am really surprised of how often I have to repeat this simple epistemological concept, which should be obvious to anybody who deals with empirical science.”

    It is not:

    Either A or B
    Not A
    Therefore B

    (logical disjunction)

    but rather:

    We have to explain X.

    At present, we have only two theories, A and B.

    A does not work.

    B works.

    At present, B is the best explanation (always waiting for any possible C)

    Whenever one says, “either A or B,” one has declared a disjunction. A disjunction is a logical statement. gpuccio’s rephrasing maintains the disjunction when he says “we have only two theories.” What makes his disjunction fallacious is the fact that there are more than two theories. Among them are Old Earth Creationism, Young Earth Creationism, Sanford’s Genetic Entropy, Davison’s Semi-Meiosis, ReMine’s Message Theory, etc.

    Moreover, as Allen MacNeill has pointed out, the so-called Modern Synthesis is dead (or dying) and a new theoretical framework in evolutionary biology is emerging. New and better scientific theories are always in the cards, as the history of science has demonstrated.

    Therefore, if current evolutionary theory is ruled out, a cornucopia of alternative theories is waiting in the wings, and ID is not proven thereby. As others have noted on this thread and elsewhere, ID still must prove itself in a positive way.

    To the laboratories!

  164. 164
    Hoki says:

    gpuccio,

    I’m not surprised if you didn’t understand my post. It was a terrible one. I forgot to mention that weak ID was supposed to be a posterior probability argument and strong ID a likelihood one. ID predicting anything regarding junk DNA would be strong ID/likelihood.

    Note to self: read what I write.

  165. 165
    Diffaxial says:

    GP:

    I still don’t understand what you mean by “weak ID”, least of all why you make such a strange statement. In your #69 you state:
    ““weak ID” (merely asserting the possibility of design detection) and “strong ID” (positive assertions about design)”

    In a considerable quantity of ID literature the assertion is repeated that ID is about design detection, only, and that ID makes no claims about the characteristics, powers, or methods employed by the designer. Demands for more are demands to provide the notorious “pathetic level of detail.” Did not Behe state at Dover that the only justified inference about the designer is that it is capable of design?

    Other ID advocates are much more bold, in that in addition to claiming the above, much more explicit assertions are made: the designer front-loaded information into one or a few of the first cells, to unfold for billions of years thereafter. The designer is God or otherwise supernatural. There can be only one designer. The designer acted by effecting “sudden appearances.” And so on.

    Surely you are not denying the above.

    I chose to designate the first “Weak ID,” and the second “Strong ID.” You don’t like my designations. So be it.

    I don’t have any remarks to make regarding your many arguments by assertion: “not one single phenomenon attributable to design has ever been shown to have other explanations,” “Structures which are recognized as designed are designed,” “No false positives,” your “predictions WILL be verified.”

    A refutation with equal merit? “Meh. Sez you.”

    Some suggestions? Quantitative research about the size of protein function space, aimed at the quantification of the target space. Analysis of as many genomes as possible, to try to assess in detail the natural history of protein emergence. Quantification of the functional information content in as many protein families as possible, by the Durston method or other approaches. Research about the role of non-coding DNA, especially introns and transposons. Research about the transcriptome dynamics in the process of differentiation. Research about the regulation of alternative splicing and of post-translational modifications. And so on. All of that is research which is extremely pertinent to ID theory. And please, don’t answer that most of that research is already in some way being done by conventional biologists. That was exactly my point.

    I don’t see any testable predictions arising uniquely from ID in the above. Offer testable predictions in these domains and you will strengthen this FAQ, and the work you describe may become something resembling science.

    That’s not true. Best explanations are not subjective. If you have a set of data, either you can explain them or you can’t. If you have an explanatory theory, that’s quite different from not having any theory.

    You misread my statement. Claims that one possesses “the best explanation,” absent the process I describe (necessary entailments, empirical tests of same such that your theory or elements thereof are placed at risk of disconfirmation), are claims only, and your reports of certainty regarding same are reports of subjective confidence only.

    To firmly yoke your claims to evidence, you need to put them on the line in a testable manner.

  166. 166
    gpuccio says:

    Diffaxial:

    Thank you for clarifying better what you meant by “weak” and “strong”. I can now better specify my position.

    Did not Behe state at Dover that the only justified inference about the designer is that it is capable of design?

    I agree with Behe, at the present state of knowledge. That does not mean that other aspects of the designer or of the designer’s methods are not open to scientific inquiry or to scientific hypotheses. But the present evidence allows us more or less only to infer design.

    Other ID advocates are much more bold, in that in addition to claiming the above, much more explicit assertions are made: the designer front-loaded information one or a few of the first cells, to unfold for billions of years thereafter. The designer is God or otherwise supernatural. There can be only one designer. The designer acted by effecting “sudden appearances,” etc. And so on.

    My positions on those points (many times explicitly stated on this blog):

    a) “the designer front-loaded information”. I respect front-loading as a possible scenario, but personally don’t believe in it. Some evidence points in that direction, and much evidence is against it.

    b) ”The designer is God or otherwise supernatural”. While I personally believe that the designer is God, that belief has nothing to do with ID. From a scientific point of view, I firmly and sincerely believe that with present evidence we cannot say who the designer is. And, as I have said many times, the word “supernatural” means nothing to me, unless more strictly defined.

    c)”There can be only one designer.” There is absolutely no reason for that statement. Indeed, some evidence could point to different conclusions, but the point is absolutely open.

    d) “The designer acted by effecting “sudden appearances.”” The chronological modalities of design implementation are very interesting, and IMO completely and immediately open to inquiry according to the data which constantly come from the study of natural history. As I have recently stated on another thread, my personal idea, according to what we know now, is that design implementation occurred both in more “acute” forms (OOL, the Ediacaran and Cambrian explosions) and in more gradual form (speciation). But that’s only my personal opinion. There’s no reason for ID to prefer one modality or the other, apart from what comes from research.

    So, as you can see, from those points I would be rather a “weak” IDist, according to your standards. But I do maintain that the concept that a designer implements function and purpose in his designs remains true, and is another matter altogether. That concept comes from the concept itself of design, and from what we know of the process of design in human design, the only observable model, which we can observe both subjectively and objectively.

    The process of design is “always” linked to purpose and function. Design is by definition teleological, because it is the motivated output of a conscious representation. We may not recognize and understand the purpose or the function, but it is there just the same.

    So, to those who ask why we should think that the designer of biological information has the same purposes as humans, I would answer: we don’t know that, but we do know that he has purposes. And we, as humans, are potentially capable of understanding purposes, even if different from ours. Whether we are empirically capable of partially or completely achieving that result remains to be seen.

    So, to sum up, in this sense I am a weak IDist who believes that the expectation of function is implicit in the existence of a designer, and is not a further assumption about him. And who believes that further assumptions about the designer are open to scientific approach, although at present they remain at the state of unsupported hypotheses.

    I don’t have any remarks to make regarding your many arguments by assertion: “not one single phenomenon attributable to design has ever been shown to have other explanations,” “Structures which are recognized was designed are designed” “No false positives,” your “predictions WILL be verified.”

    In order:

    a) “not one single phenomenon attributable to design has ever been shown to have other explanations”. That’s simply true. I obviously mean “attributable to design” by the ID inference. Have you any counter-example?

    b) “Structures which are recognized as designed are designed”. That’s only the same point as a).

    c) “No false positives”. Again the same point. Have you any example of false positives? I mean, outside biological information, which remains the controversial issue, can you offer an example where a design detection process with the quantitative threshold given by Dembski gave a false positive?

    d) My complete paragraph was:

    “So, what I am saying is not that we will find saltations: saltations are everywhere under our eyes, in the biological world. I am saying that, when the progress of biological research allows us to evaluate those saltations quantitatively, with all the necessary knowledge of the relationship between protein structure and function to definitely evaluate the target space of a search, and enough information about natural history to definitely build a model in some detail, it will be obvious that the observed saltations will not be explained by a random variation + NS model. That is a prediction, and it will be verified. For those verified saltations, quantitatively and qualitatively the same as those observed in human design products, design remains the best explanation, indeed the only available explanation.”

    Well, that’s a prediction, isn’t it? That was what you were looking for. Obviously, the statement that it will be verified is just my opinion. But the prediction is there.

    I don’t see any testable predictions arising uniquely from ID in the above. Offer testable predictions in these domains and you will strengthen this FAQ, and the work you describe may become something resembling science.

    a) “Quantitative research about the size of protein function space, aimed to the qunatification of the target space.”

    The target space within protein functional space will be shown to be so small that the probability of reaching it, in most or all hypothesized protein transitions independent of NS, is so low that those transitions will be obviously empirically impossible by RV alone. IOW, the “islands of functionality”, of protein superfamilies at least, will be shown to be completely separated by oceans of non-functional search space totally insurmountable by RV, and not available to NS.

    b) “Analysis of as many genomes as possible, to try to assess in detail the natural history of protein emergence.” That’s fundamental for the establishment of the true natural history of protein emergence. The prediction? Once protein natural history is known in enough detail, it will be obvious that new proteins, totally unrelated to others, have constantly emerged in relatively short times, and that will make it possible to apply the quantitative analysis suggested at point a) to specific models.

    c) “Quantification of the functional information content in as many protein families as possible, by the Durston method or other approaches.”

    The prediction? In most independent proteins (at least protein superfamilies), the functional information content will be shown to be well higher than any possible RV engine can ever explain, given all the possible biological resources.

    d) “Research about the transcriptome dynamics in the process of differentiation.” The prediction? That will show, sooner or later, where the regulation information is, and how big it is. Then it will be possible to apply quantitative analysis to that information too, and not only to the protein structure information. And the regulation information (the “procedures”) will be shown to be much richer in functional information content than mere protein structure information, adding heavily to the credibility of the design explanation.

    e) “Research about the regulation of alternative splicing and of post translational modifications.”

    The prediction: that will prove that further written information and procedures guide those processes, and will allow us to understand where that information is, and how big and complex it is. Same point as in d).

    Claims that one possesses “the best explanation,” absent the process I describe (necessary entailments, empirical tests of same such that your theory or elements thereof are placed at risk of disconfirmation), are claims only, and your reports of certainty regarding same are reports of subjective confidence only.

    I have tried to give you as many objective points as possible. My subjective confidence remains untouched, but that’s only my problem. Perhaps I am a Bayesian after all, and like betting. 🙂

  167. 167
    kairosfocus says:

    Onlookers (and participants):

    Most current discussion on this thread on the fourth corrective of weak anti-design arguments seems to now be tangential.

    One can take that as a sign that the basic point has been made.

    Namely, that precisely because the design view is a distinct paradigm, it led people to take a stand on the point that we did not know enough about DNA to infer confidently that by far most of it was essentially non-functional “junk,” the accumulation of accidents across time. And now, as results have rolled in in recent years, there has been a wave of support for what was in certain quarters quite mockingly dismissed only a few short years ago.

    In that context, the basic point in the WAC 4 is supported.

    Now, I do see a few points that I would comment further on:

    1] JM, 158: Calculations of FSCI

    First, a threshold value is a metric. And, it remains true that there are no observed cases where 1,000-bit length functional data strings have been observed to originate by chance or chance + necessity.

    Second, on the flagellum, I presented a simple estimate that puts it beyond that threshold quite comfortably.

    So, on the background that we have a known and frequently observed cause of FSCI, and we also have a good explanation for why the other main observed source of highly contingent outcomes, chance, will not be reasonably likely to give rise to FSCI, we are entitled to draw the conclusion that the observed DNA basis for the flagellum is designed. (And, BTW, mechanical necessity is not the actual source of highly contingent outcomes: we see that under certain conditions, forces act that push processes or phenomena along predictable lines, often to astonishing degrees of precision. Thus, from reliable natural regularity we infer to forces of mechanical necessity; Newtonian mechanics being the classic case. We infer to the known causes of highly contingent outcomes, chance and intent, when we see that high contingency is a dominant factor.)

    So, I provided the simplest type of calculation in the thread above, despite the dismissive objection made. (And, the line that I am providing pointless verbiage has long passed its sell-by date.)

    I find, too, that the comment in question is a mere dismissal: it provides neither [1] an observed case of FSCI beyond 500 – 1,000 bits shown to have originated spontaneously by chance + necessity, nor [2] any evidence that the flagellum is within that threshold, such that we can be confident the search resources of our planet or cosmos as observed are adequate to provide a probabilistically reasonable explanation. [Recall, 1,000 bits in a functional string specifies a contingency space of 2^1,000 ~ 10^301 possible configs, or ten times the square of the number of plausible quantum states of our observed cosmos across its reasonably estimated lifespan. The flagellum is credibly at least an order of magnitude beyond the threshold, simply on the DNA required to code the relevant proteins.]
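    For what it is worth, the bracketed arithmetic can be checked in a few lines of Python (the ~10^150 cosmic search limit is a figure taken from the comment above, not something the code derives):

    ```python
    # Check: a 1,000-bit string has 2^1000 configs; how does that compare
    # with the ~10^150 state search limit asserted in the comment above?
    from math import log10

    config_space_digits = 1000 * log10(2)   # log10(2^1000)
    print(round(config_space_digits))       # ~301, i.e. 2^1000 ~ 10^301

    search_limit_squared_digits = 2 * 150   # log10((10^150)^2) = 300
    print(config_space_digits > search_limit_squared_digits)  # True: ~10x the square
    ```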

    Furthermore, at the more sophisticated level, as pointed out in WAC no 27, Durston et al have published, with details of underlying numerical considerations, since 2007, a list of 35 measured values of FSC [cf the table they published]; this being both a more technical form of what the FSCI calculation is about and a manifestation of CSI [the superset]. In turn, this method traces to the underlying CSI approach, which rests on the principle of finding small targets in large contingent spaces, multiplied by explorations of the space of functionality of various protein families.
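    As a toy illustration of the general idea behind such measures (this is a simplified sketch, not the published Durston et al. procedure, and the aligned sequences below are invented): functional bits per site can be estimated as the drop from the null (uniform) entropy over the 20 amino acids to the entropy actually observed at each column of an alignment of functional sequences.

    ```python
    # Simplified sketch of a functional-sequence-complexity estimate:
    # functional bits = sum over alignment columns of
    #   (null entropy) - (observed column entropy).
    # The alignment below is invented for illustration only.
    from collections import Counter
    from math import log2

    alignment = ["ACDE", "ACDE", "ACDF", "GCDE"]  # hypothetical aligned sequences
    null_entropy = log2(20)                       # uniform over 20 amino acids

    def column_entropy(column):
        counts = Counter(column)
        n = len(column)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    fits = sum(null_entropy - column_entropy(col) for col in zip(*alignment))
    print(round(fits, 2))  # total "functional bits" for this toy alignment
    ```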

    I therefore consider that your challenge has been answered at WAC 27, long since; on the standing record at UD.

    2] “Censor[ship]” accusation

    The comment by JM that was searched out and reproduced in another forum carries the direct import that I have contributed to, or acquiesced in, the silencing of those who merely differ with me.

    This is not so; and such a false, loaded insinuation would easily explain why the comment in question was removed for cause. It also shows me the wisdom of not subjecting myself to the exemplified tone of debate — and notice, I almost never use that term in a positive sense — likely to dominate such a forum.

    Moreover, since there is abundant evidence that discussion on the merits is freely permitted at UD, while invective, privacy violation and general nastiness occur abundantly in other fora [and mostly coming from the evo mat and friends side], I see no reason to transfer discussion to such fora.

    In short, JM: the issue is answered on the merits, and if you have a substantial response [which includes of course the peer review published Durston et al table of calculated FSC values for proteins and the underlying analysis], I have no doubt that it would be entertained here. [After all, in recent weeks a string of threads on the Weasel 86 program ran to over 1,000 comments and several threads. My closing summary in my always linked is in what is now Appendix 7.]

    3] Adel, 163: Whenever one says, “either A or B,” one has declared a disjunction. A disjunction is a logical statement. gpuccio’s rephrasing maintains the disjunction when he says “we have only two theories.” What makes his disjunction fallacious is the fact that there are more than two theories. Among them are New Earth Creationism, Young Earth Creationism, Sanford’s Genetic Entropy, Davison’s Semi-Meiosis, Remine’s Message Theory, etc.

    Pardon an intervention on the issue of epistemological warrant on abduction vs disjunctive reasoning in the logic of deduction.

    For, a disjunction per purported demonstrative “proof” — as GP has said — is very different from alternatives presented in the context of inference to best explanation [IBE] per competing empirically testable hypotheses.

    That is, IBE is a species of induction in which a cluster of puzzling facts F1, F2, . . . Fn is on offer and alternative explanations E1, E2, . . . Em are put.

    Per criteria of relative warrant, e.g. factual adequacy, coherence and explanatory power and elegance, the currently best explanation is preferred.

    In the relevant context, chance and necessity [blind spontaneity] is one major family of explanations, and design is another [and all the cluster of alternatives you suggested are post design, dealing with proposed mechanisms and candidate designers].

    The overarching design explanation is proffered on the criteria that there are certain features of observed designed systems that are strongly correlated with designs and NOT with spontaneous occurrences. For instance, functionally specific complex digital information. For similar instance, function dependent on multiple mutually coupled parts that is broken when for a certain core subset any one part is removed. Similarly, coded algorithms instantiated in physical systems that effect said procedures. And more.

    On abundant evidence, it is held that these are observed to be consistently associated with design. For, in cases where we directly know the causal story, this is observed without significant counter-instance, once a reasonable threshold of complexity is seen. [Per the Dembski Universal Probability Bound, that is at about 500 – 1,000 bits for practical purposes; though in contexts that are much narrower, these are very generous margins.]

    Further to this, we can easily see that 10^150 states is a reasonable upper search limit for the observed cosmos across its reasonable lifespan.

    1,000 bits specifies a config space of 10^301 states; i.e. well over the SQUARE of the reasonable search limit of the observed cosmos. So, as long as function is sufficiently specific that arbitrary configurations will not be functional — i.e. so long as function is even moderately vulnerable to perturbation of the information [think of randomly reshuffling letters in sentences to see what I am getting at] — then it is highly reasonable to see that random search strategies on the scope of the observed cosmos would be maximally unlikely to find the shores of islands of function. [Indeed, odds of 1 in 10^50 are a reasonable threshold of practical — as opposed to logical — impossibility used in statistical thermodynamics; i.e. this is the same basic thinking that underlies the statistical form of the second law of thermodynamics.]
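    The “reshuffling letters” image can be made concrete with a small simulation (a sketch; the sentence chosen is arbitrary): random permutations of even a 19-character string essentially never land back on the original, and the number of orderings grows factorially with length.

    ```python
    # Illustration: what fraction of random permutations of a short
    # sentence reproduce the original? Essentially none, because the
    # number of orderings grows factorially.
    import random
    from math import factorial

    sentence = "the quick brown fox"
    trials = 100_000
    hits = sum(
        "".join(random.sample(sentence, len(sentence))) == sentence
        for _ in range(trials)
    )
    print(hits)                       # almost certainly 0
    print(factorial(len(sentence)))   # 19! ~ 1.2e17 orderings of 19 characters
    ```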

    Further to this, we know from observation and experience of design that designers are able to use their intelligence and imagination to create initial configurations of complex entities that are close enough to functional that, with relatively modest investigation and modification, they are able to achieve function. That is, the active information that cuts down the search space to manageable proportions is known, per empirical investigation, to originate in intelligent action.

    So, we have excellent reason to conclude that we have identified the best explanation, with a causal framework and a reason why we see what we see: intelligence is the cause of the sorts of complex organisation we are discussing.

    This is generally uncontroversial. But, it is now a matter of hot debate in science of origins because it tends to support the idea that intelligence may have had something to do with the origin of life and the observed, life-facilitating, fine-tuned cosmos.

    So, the root issue is a worldviews clash, not really a matter of whether the detection of design per empirical investigation is reasonably feasible. Our courts are abundant demonstration that yes, such is feasible, and that there are reliable signs of intelligence.

    GEM of TKI

  168. 168
    jerry says:

    kairosfocus,

    I think this FAQ should be shelved till further work on the genome is complete. There is no evidence that most, let alone all, of the genome has current-day function. These regions are transcribed, that is all. There is no evidence at present that the RNA transcribed from these nucleotides actually does anything to monitor, regulate, repair, catalyze or anything else within the cell.

    We have to wait on that. So for ID to crow about the supposed functionality of this DNA may turn into crow to eat if that functionality cannot be found. It may turn out that a good bit of these transcribed nucleotides do have function, but ID should wait.

    It also may be that the transcribed elements have a use in a way we cannot now imagine.

  169. 169
    gpuccio says:

    jerry:

    I don’t follow you. This FAQ is about predictions, not about verified predictions.

    There are already many demonstrations that at least part of non-coding DNA has regulatory functions, and other evidence that almost everything is transcribed. I think ID can and must make the prediction that all or almost all non-coding DNA will be found to be functional, and regulatory, even if it will be, as you say, in ways that we cannot now imagine.

    That is a strong and useful prediction, which can be verified or not. Maybe in the end darwinists will show that most non-coding DNA is really evolutionary garbage. That would certainly be a heavy blow for ID.

    But I think we can accept the risk.

  170. 170
    gpuccio says:

    Adel:

    Whenever one says, “either A or B,” one has declared a disjunction. A disjunction is a logical statement. gpuccio’s rephrasing maintains the disjunction when he says “we have only two theories.” What makes his disjunction fallacious is the fact that there are more than two theories.

    This was my original comment:

    “We have to explain X.

    At present, we have only two theories, A and B.

    A does not work.

    B works.

    At present, B is the best explanation (always waiting for any possible C)

    This is empirical reasoning.”

    OK, I had not taken into consideration “New Earth Creationism, Young Earth Creationism, Sanford’s Genetic Entropy, Davison’s Semi-Meiosis, Remine’s Message Theory, etc.”. Some of them are not even scientific theories for me; I refer to “New Earth Creationism, Young Earth Creationism” and all forms of creationism. Indeed, while I can respect creationism and creation science, I don’t consider them scientific theories. Just my view. But whatever they are, creationist theories are certainly design theories.

    Others, like Sanford’s and ReMine’s, do not look like alternatives to ID. I have not read Sanford’s book, but I cite from a review of it: “The conclusion is that we were created perfect, have been headed downhill ever since and the human race cannot be a thousand generation old yet. Solutions are not in better technology but a relationship with God who will take us out of this decaying creation at the proper time.” That does not look like a non-design theory.

    In the same way, although I do not know Davison’s theories in detail, I doubt that they are non-design theories, but I could be wrong.

    Regarding ReMine, again I am not an expert, but again I cite from a review of one of his books: “However Remine’s message theory is unique, in that he adds that life was intentionally designed to look unlike evolution.”

    So, why are you quoting design theories as alternatives to ID? Those are ID theories, although each of them has specific peculiarities.

    But to go back to the “logical fallacy” argument, I think kairosfocus has pointed out very correctly:

    “For, a disjunction per purported demonstrative “proof” — as GP has said — is very different from alternatives presented in the context of inference to best explanation [IBE] per competing empirically testable hypotheses.

    That is, IBE is a species of induction in which a cluster of puzzling facts F1, F2, . . . Fn is on offer and alternative explanations E1, E2, . . . Em are put.

    Per criteria of relative warrant, e.g. factual adequacy, coherence and explanatory power and elegance, the currently best explanation is preferred.”

    I would like only to repeat that:

    a) the choice of the best explanation is not a logical choice, but an empirical one. Therefore, a logical disjunction or any similar logical apparatus is in no way necessary to choose the best empirical explanation.

    b) there is no need that only two theories exist. That is just my perception of the current situation for our problem. But if you want to include other theories (possibly non-design theories; design theories are not in competition with ID), then it’s fine for me. The reasoning becomes:

    We have to explain X.

    At present, we have only five theories, A, B, C, D and E (or any other number you like).

    A, B, C, D do not work.

    E works.

    At present, E is the best explanation (always waiting for any possible F)

    This is empirical reasoning.

    Or can you show me a non design theory which works?

  171. 171
    gpuccio says:

    Hoki:

    thank you for the clarifications. I will try to understand your Bayesian arguments as soon as I feel in the right frame of mind. I do have problems with the Bayesian scenario; that was not an excuse. But again, that’s just my personal problem.

    However, I hope I have made my points clear enough to be understood (or countered) without the help of a bayesian analysis. 🙂

  172. 172
    Alan Fox says:

    That is a strong and useful prediction, which can be verified or not. Maybe in the end darwinists will show that most non-coding DNA is really evolutionary garbage. That would certainly be a heavy blow for ID.

    Is there an upper bound to the proportion of non-coding DNA that turns out to have some function related to the organism in which it is located that would still be favourable to ID?

  173. 173
    jerry says:

    gpuccio,

    I disagree. So far the so-called junk DNA has not been shown to have any function. I don’t think anyone ever said that the only useful DNA was the coding regions and that all of the rest was nothing but junk. Every right-minded scientist thought there was more. Right now we can only point to a small percentage of the genome, less than 10%, as having possible function. What if it were found that 90% or even 50% was junk? After all, there are much larger genomes than the human genome, and very few would point to most of those genomes as having any use. ID would not look very good if this prediction were not borne out.

    So as it stands right now, a large percentage, certainly most of the genome, is not functional in any way we know about. It may or may not be functional, but that is for science to determine in the future. To come along and say that ID predicted it would have use, and then not be able to show any use, is not very smart and, as I said, can make ID look foolish.

    It may turn out that there is use for most of the DNA but until that is shown to be true, do not hold it up as something ID predicted that came true.

  174. 174
    gpuccio says:

    Alan Fox:

    Personally, I believe that most of it (let’s say more than 90%) will be found to have function. But it’s not only a question of quantity. In an ID perspective, the various parts of the genome have to be justified in some way. So, it is certainly not a random outcome (always in an ID perspective) that genes are so fragmented and separated by introns. Both the fragmentation of genes into exons and the introns themselves will be shown to be necessary and functional, almost certainly for the global regulation of the genome.

    In the same way, I don’t believe that repetitive elements are only what they appear: senseless repetitions of code. I am sure that they are a powerful tool for the regulation, and probably the gradual modification, of genomes, according to a very definite plan.

    I can accept that small parts of the genome may be the product of degradation or corruption of information (some pseudogenes or some ERVs could have that meaning). But I am not sure even of that. As far as I can understand, the genome is highly error-checked. Errors can certainly still occur, and do occur, but I do not believe that they can involve 98.5% of the genome. Nor 50%. Nor 70%. And I could go further. Am I definite enough? I am waiting to be falsified.

  175. 175
    gpuccio says:

    jerry:

    You are entitled to your opinions. And I am entitled to mine. You say:

    “What if it was found out that 90% or even 50% was junk. After all there are much large genomes than human genomes and very few would point to most of these genomes as having any use. ID would not look very good if this prediction was not proven out.”

    Well, ID has to take its risks. I am certainly willing to take mine. You can well keep a different stance. If what you suggest happens, I will certainly not look very good, and with me my personal conception of ID. Time will tell.

  176. 176
    Hoki says:

    gpuccio:

    That’s the empirical basis of the hypothesis: CSI is a product of design, and of design only.

    Please, take notice that such a hypothesis has never been falsified, not even once, which should mean something.

    The second part is: biological information, whose origin is unknown to us, does exhibit CSI. Indeed, a lot of CSI. Indeed, tons of CSI. Therefore, it seems very plausible to interpret it as the product of design. Moreover, all the other explanations given up to now are bogus.

    I have not made any arguments for or against what you write above. Let’s just for the sake of argument say that you are right. CSI always comes from something intelligent (this is the weak ID or posterior probabilities argument). What I’m saying is that this is vastly different from saying that something intelligent will invariably cram its designs full of CSI (strong ID/likelihood).
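    To make the distinction concrete, here is a toy Bayes calculation with invented numbers (all three probabilities below are assumptions for illustration only): the posterior P(design | CSI) can be near 1 even when the likelihood P(CSI | design) is modest, because the chance alternative is vastly smaller still.

    ```python
    # Toy numbers (invented for illustration) separating posterior from
    # likelihood: P(design | CSI) high does not require P(CSI | design) high.
    p_design = 0.5             # prior probability of design (assumed)
    p_csi_given_design = 0.1   # a designer that only rarely produces CSI (assumed)
    p_csi_given_chance = 1e-9  # chance essentially never produces CSI (assumed)

    p_csi = p_csi_given_design * p_design + p_csi_given_chance * (1 - p_design)
    posterior = p_csi_given_design * p_design / p_csi  # Bayes' theorem
    print(round(posterior, 6))  # ~1.0: observed CSI still points to design
    print(p_csi_given_design)   # even though this designer rarely "crams" CSI in
    ```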

    In another post you wrote:

    Your distinction about phenotype and genotype does not mean much. A designer will probably act on the genotype and measure on the phenotype, unless he can directly check the information implemented in the genotype.

    (emphasis added) (here you just made a likelihood argument, btw)

    You don’t know ANYTHING about the designer. How can you possibly make such an assumption? I think it’s perfectly reasonable to speculate that the designer only acted on the phenotype. I think it’s perfectly reasonable to speculate that the designer used multiple rounds of intelligent phenotypic selection to create the organisms we see today.

    Note: when I say that I find these things reasonable, I mean that I find them just as reasonable as some other assumptions that have to be used to form junk DNA predictions.

    If you think that the designer would probably have done what you think it did, could you put some (very) rough numbers on that? 90%? 50%? 10%? 0.001%?

  177. 177
    gpuccio says:

    Hoki:

    Let’s just for the sake of argument say that you are right. CSI always comes from something intelligent (this is the weak ID or posterior probabilities argument). What I’m saying is that this is vastly different from saying that something intelligent will invariably cram its designs full of CSI (strong ID/likelihood).

    Well, let’s see. “CSI always comes from something intelligent”. That’s what we observe, in human design. And we have only two sets of CSI: in things designed by humans, and in biological information.

    Now, here we have to discuss briefly what is that makes CSI CSI. As you probably know, there are two components: one is specification, the other is complexity.

    Specification (in the case of FSCI, functional specification) is the true mark of design: design is always specified because it always is the projection of a conscious intent. Sometimes that specification, that intent, is easy to detect. Other times it could remain hidden, even forever.

    Complexity is not required for design, but it is what makes design detectable. It is important to note that in human design complexity is almost the rule, especially in design expressed by codes or language. IOW, simple design certainly exists, but complex design is the rule. It is fascinating how human design crosses the threshold of complexity, according to ID definition, with the greatest facility, while any other causal mechanism (RV, necessity, or any mix of the two) never even approaches that threshold.

    So, what I mean is that if a designer designs 3 Gigabases of code, they will be specified. And if the apparent reason why that code has been designed is to guide vital processes in a living being (and there are many reasons to infer that, without knowing anything about the designer, but just reasoning on the designed object and its context), the simplest inference is that all the code is functional. Exceptions can be represented by parts of degenerated code, random errors, and so on. IOW, processes which do not depend on the activity of the designer.

    Again, I object to words like “invariably”. According to your definitions, I can only say that:

    a) What you call “weak ID” is the observational part of ID (the basis for inference)

    b) What you call “strong ID” is the inferential part of ID, the theory.

    There is only one ID. Both the processes you describe are part of it.

    (here you just made a likelihood argument, btw)

    Yes, and in this case it is really a weak likelihood argument, because it is an argument about the modalities of implementation. I have many times stated that there is not presently a lot of evidence to choose between different possible modalities of implementation.

    You don’t know ANYTHING about the designer. How can you possibly make such an assumption?

    Well, I know that this designer has designed functional proteins, functional cells, functional beings. That’s something, IMO. That is information coming directly from the designed product.

    And I know that if humans want to design a new protein, they have two options:

    either they know how to write a primary sequence which will have a specific function (and they usually don’t), in which case they can directly write the genotype and synthesize the protein,

    or

    They just know the function they want to obtain, and they make guided random searches and test for the obtained function, and go on by a process of intelligent selection.

    Or some mix of the two.

    That information doesn’t come from any specific knowledge of humans, but from our knowledge of the cognitive process and of the design process. Even if we know nothing of a designer, we can well expect that he will solve problems by a cognitive approach, because that’s our model of design, not of a specific designer. We could be wrong, but then we could be wrong in anything we think we know.

    That’s where I think both you and Diffaxial apparently equivocate: my assumptions don’t come from any knowledge, true or imagined, that I have of the designer, but rather from the knowledge we have of the process of design in conscious beings as we are. IOW, even if we don’t know anything of the designer, we certainly know that he is a designer, and we know the things he has designed. That’s the reasonable origin of all our assumptions.

    And again, “to know nothing of the designer” does not mean that we don’t know that the designer is a designer. We know that. And a designer is a conscious intelligent being who acts out of purpose and intent. That we know. That is true of all designers.

    I think it’s perfectly reasonable to speculate that the designer used multiple rounds of intelligent phenotypic selection to create the organisms we see today.

    Phenotypic selection alone will not do. Intelligent selection of the genotype according to expression of function in the phenotype is a perfectly admissible modality, as I have argued. What it reasonably implies is that the designer knows the function he wants to obtain, but does not know directly how to implement it. That’s certainly a possibility. In human design, there are important models of that kind.

    But my point is that, whether the designer writes his code directly or intelligently selects it after targeted random search, the result is functional, and neither modality explains a result of 98.5% useless code with 1.5% highly functional code interspersed. Even human protein designers, with all their ignorance, are much more efficient than that.

    If you think that the designer would probably have done what you think it did, could you put some (very) rough numbers on that? 90%? 50%? 10%? 0.001%?

    As I have said, I would not bet on the modalities of implementation; I am only sure, from the observation of the results, that a highly efficient modality has been used.

    For the same reason, I am ready to bet on a definite function of at least 80% of the genome, and I bet at 90%. Where and when can I pass to take my money?

  178. 178
    jerry says:

    gpuccio,

    you said

    “Well, ID has to take its risks. I am certainly willing to take mine”

    You are certainly welcome to your risks, but I do not see why ID has to take on your risks, which is what this FAQ does. This prediction is being identified with ID, and with little corroborating evidence to support that it is true.

    My faith in ID will not be shaken one iota if most of the genome is not obviously functional. In fact, it will be strengthened. It will mean that the incredible complexity of the human organism and other organisms was accomplished with that small an instruction set, way beyond what any random happenstance could accomplish. And it will mean that the supposedly unused part of the genome may have another potential use of which we are completely unaware.

    Whoever designed everything had to think of a lot of contingencies, and the size of the genome may be due to some of these contingencies, for which we as of now have no ideas.

    I think it is a needless position for ID to take.

  179. 179
    Adel DiBagno says:

    Well said, jerry [178].

    And there is also the C-value paradox:

    http://en.wikipedia.org/wiki/C-value_enigma

    that was not predicted by either evolutionary science or ID (but has been explained by one of those theories).

  180. 180
    Diffaxial says:

    Gpuccio:

    IOW, even if we don’t know anything of the designer, we certainly know that he is a designer, and we know the things he has designed. That’s the reasonable origin of all our assumptions.

    And again, “to know nothing of the designer” does not mean that we don’t know that the designer is a designer. We know that. And a designer is a conscious intelligent being who acts out of purpose and intent. That we know. That is true of all designers.

    As a meta-comment, I find this mode of reasoning completely bizarre. “We certainly know that he is a designer, and we know the things he has designed” are your starting assumptions?

    You can look high or low, but you’ll never find a more stark example of “argument by definition,” or of an argument that explicitly assumes its conclusions.

  181. 181
    gpuccio says:

    Adel:

    Glad to know that the C-value paradox has been solved. The Wiki page you quote does not seem to agree with you:

    “It is unclear why some species have a remarkably higher amount of non-coding sequences than others of the same level of complexity. Non-coding DNA may have many functions yet to be discovered. Though now it is known that only a fraction of the genome consists of genes, the paradox remains unsolved.”

  182. 182
    gpuccio says:

    Diffaxial #180:

    Did you stop a moment before writing that?

    There is nothing bizarre. My phrase obviously means:

    The design inference allows us to infer the existence of a designer. It does not give us information about who the designer is or how he acted, but implicit in the inference (if the inference is correct, obviously) is the existence of a designer, and a designer is by definition a designer, a conscious intelligent agent who acts out of purpose and intent.

    Moreover, the designed products we are observing, and on which we built our inference, do give some information at least about some of the purposes of the designer: for instance, in the case of living beings, we can easily understand that the functions we observe in proteins are clearly tied to the metafunction of keeping cells and beings alive. So, even if we may not understand why the designer wants to have living beings, we do understand that he designs things for that purpose.

    So, what is bizarre in this reasoning? I thought my statement was clear enough in the context, being the end of a very long discussion (on which you seem not to comment at all). For instance, the phrase you quote from my #177 is in the context of a very detailed answer to Hoki, and sums up much reasoning I had already made in my long and detailed answer to you at #166, on which I am not aware of any comment from you.

    Is that your way to discuss? Ignore my answers and arguments, take a single phrase out of context, misunderstand it and just comment on that?

  183. 183
    Hoki says:

    gpuccio:

    Phenotypic selection alone will not do. Intelligent selection of the genotype according to expression of function in the phenotype is a perfectly admissible modality, as I have argued. What it reasonably implies is that the designer knows the function he wants to obtain, but does not know directly how to implement it. That’s certainly a possibility. In human design, there are important models of that kind.

    Phenotypic selection alone will not do??? That’s a preposterous claim.

    But my point is that, whether the designer writes his code directly or intelligently selects it after a targeted random search, the result is functional, and neither modality explains a result of 98.5% useless code with 1.5% highly functional code interspersed. Even human protein designers, with all their ignorance, are much more efficient than that.

    On top of the design, there is also the implementation of the design. Do you also assume that that is perfect?

    How can you, with a straight face, say that neither modality explains the existence of lots of junk DNA (would it even have to be close to 98.5%)?

    If you think that the designer would probably have done what you think it did, could you put some (very) rough numbers on that? 90%? 50%? 10%? 0.001%?

    For the same reason, I am ready to bet on a definite function of at least 80% of the genome, and I bet at 90%. Where and when can I pass to take my money?

    I wasn’t asking for how much junk DNA you thought there might be. I was asking for a rough estimate of the probability of your assumptions being correct.

  184. 184
    kairosfocus says:

    Jerry (and others):

    First and foremost, the point of “predictions” in a scientific theory is that they are testable and risky points where the theory meets up against reality.

    Theories are inferences to best explanations of the observed state of the world; which is open-ended and ever-growing. So, a good theory will make predictions that are testable [and at least in principle fail-able], thus risky.

    A decade ago or more, leading design thinkers stepped up to the plate on the Junk DNA “consensus” of that time, and said that on the rationale behind design theory, it would be proved wrong. Over the past decade, evidence has increasingly rolled in, and it is the design thinkers who have been vindicated as what seemed to be inexplicable “junk” has turned out more and more to credibly be functional — BTW, increasing the observed functional complexity of DNA into the bargain.

    On design thinking, it is reasonable to expect that MOST DNA will turn out to be functional, though of course there will be significant room for SOME accidents of the various types suggested; similar to how Darwinist theories of evolution have a recognised micro-role. But the notion that 97 – 98+% of DNA is non-functional junk is very probably dead; and that in the teeth of the confident — and quite recent — expectations of the majority of representative Darwinist spokesmen.

    That is, the idea that junk DNA in the genomic attic is in effect the smoking gun pointing to massive undirected macroevolution is increasingly plainly failing the empirical test.

    More broadly, design thinkers have underscored that cell based life is and will remain a functionally complex, information-rich entity that depends on SPECIFIC information to function. (That is, we will see islands and archipelagos of function in a much larger sea of non-function.)

    So, while we are at it, let us not forget the major/central empirically testable claims — thus, predictions — of design theory:

    1 –> The three longstanding causal factors, chance, necessity and design will continue to be a valid trichotomy of causal situations.

    2 –> That is, not only will no fourth factor turn up, but distinguishing signs of the three factors at work will permit us to characterise each of the three as it affects aspects of empirical phenomena.

    3 –> First, blind mechanical forces give rise to empirical regularities, i.e. they lead to low contingency outcomes. (Once initial conditions are the same, the same path will play out.) This, we term, necessity.

    4 –> Of course, this leaves room for the case of sensitive dependence on initial conditions; but we should note that the diversity of outcomes here rests on divergent initial conditions and divergence amplification through non-linearities, i.e. the diversity does not come from the mechanical necessity at work.

    5 –> In general, highly contingent aspects of outcomes under similar initial conditions will trace to chance and/or design.

    6 –> In some cases, there will be a stochastic pattern [probability/ statistical distribution], i.e. the contingency of outcomes is credibly undirected. This, we term, “chance.”

    7 –> In other cases, contingency will credibly be directed towards a goal, and will often be functionally constrained to achieve that goal; leading to an otherwise unlikely (i.e. on a random walk hypothesis, maximally unlikely) target zone of configurations that is specific, information rich and functional. This, we term, design.

    8 –> Thus, we will see characteristic empirically detectable signs of necessity, chance and design that may be intelligibly discerned on appropriate aspects of empirical objects and phenomena.

    9 –> For design, these will include: [a] irreducible complexity, [b] complex specified (especially functionally specified) information, [c] algorithmic, code-based functionality, [d] language based functionality (e.g. alphanumeric strings in a language like English or a computer language like C).

    Such claims are testable, are open to test and are tested day by day as we work in a digital, information age. So far, on literally millions of tests, the claims are well-supported by actual observations. So, though they are obviously risky assertions, they are confidently proffered, as well-tested observations will tend to be further supported. [Historically corrections tend to come at points where observations are pushed to new limits, with the older generalisations being retained as a well supported limiting case, e.g. Newtonian dynamics in a relativity and quantum age.]

    They also constitute a manifesto for the correction of the Lewontinian materialism that has distorted late C20 to early C21 science.

    GEM of TKI

  185. 185
    kairosfocus says:

    PS: Having noted as above, we should note that — just factoring in the simple FSCI metric as a yardstick — even in the presence of much “junk” [or, better, “noise”] the observation of 500 – 1,000+ bits worth of functional and specific information will be well beyond the reach of chance explanations on the gamut of our observed cosmos and would still substantiate a design explanation. That is where you, Jerry, have a good point. [And if the majority of the genome turns out to be noise, that would sharply constrain explanations of natural history; not least by implying a duration sufficient to achieve that, and also clusters of mass-mutation events, e.g. virus epidemics that mass-inserted DNA strands etc.]

  186. 186
    gpuccio says:

    Hoki (#183):

    Phenotypic selection alone will not do??? That’s a preposterous claim.

    If you read better, you will see what I mean:

    “Phenotypic selection alone will not do. Intelligent selection of the genotype according to expression of function in the phenotype is a perfectly admissible modality, as I have argued.”

    I think the meaning is clear enough. If only a phenotypic variation is selected, but it does not correspond to a genotypic variation, the variation will not be transmitted. In some way, any phenotypic variation has to originate from the genotype, or be converted to a genotypic difference. What other model do you have in mind? And what is preposterous in my claim?

    On top of the design, there is also the implementation of the design. Do you also assume that that is perfect?

    I have never spoken of perfection. But I do believe that both the design and the implementation are highly efficient, by human standards.

    How can you, with a straight face, say that neither modality explains the existence of lots of junk DNA (would it even have to be close to 98.5%?).

    I say it: neither modality explains lots of junk DNA. And my face is straight. Remember, this is an empirical judgement, not a logical one. It takes into account a lot of factors. And, Bayesianly, I can bet on it (see later).

    I wasn’t asking for how much junk DNA you thought there might be. I was asking for a rough estimate of the probability of your assumptions being correct.

    Again, read better and you will see that I have already given the answer:

    “For the same reason, I am ready to bet on a definite function of at least 80% of the genome, and I bet at 90%. Where and when can I pass to take my money?”

    A function of at least 80% of the genome is the specified assumption. 90% is a rough estimate of the probability of my assumptions being correct. And winning the money is a reasonable hope.

  187. 187
    gpuccio says:

    Hoki:

    I forgot to close the quote tag. From “Again, read better” it’s me again writing.

  188. 188
    Adel DiBagno says:

    gpuccio:

    Glad to know that the C-value paradox has been solved.

    Surely you knew it was designed by an intelligent power.

  189. 189
    Diffaxial says:

    gpuccio:

    Is that your way to discuss? Ignore my answers and arguments, take a single phrase out of context, misunderstand it and just comment on that?

    Unfortunately, a closer look at context doesn’t help you, Gpuccio. What it does is turn up more instances of tautology and circular reasoning.

    For example:

    Now, here we have to discuss briefly what is that makes CSI CSI. As you probably know, there are two components: one is specification, the other is complexity.
    Specification (in the case of FSCI, functional specification) is the true mark of design: design is always specified because it always is the projection of a conscious intent. Sometimes that specification, that intent, is easy to detect. Other times it could remain hidden, even forever.

    If “specification is the true mark of design,” it must not only be true that all design results in specification. It must be true that “all specification results from design.” Ordinarily that might be considered an empirical question: does all specification result from design, from intent? But here you have DEFINED specification as reflecting conscious intent: “Sometimes that specification, that intent, is easy to detect…” Therefore yours has stopped being an assertion of an empirical entailment of ID theory (“design results in specification; all specification results from design”) and become true BY DEFINITION. CSI may as well be called NDCI “Necessarily Designed Complex Information.”

    Having defined CSI or FSCI in this way, to turn to biological structures and argue “the CSI we observe here is evidence of design” becomes an illegal move, and a sleight of hand, because it is circular and tautological rather than empirical. To claim that a structure reflects CSI is simply to claim that it is designed, not to adduce evidence for design. And that claim is what is at issue.

  190. 190
    kairosfocus says:

    Diff:

    Re, 188:

    To claim that a structure reflects CSI is simply to claim that it is designed, not adduce evidence for design. And that claim is what is at issue.

    Pardon, but you seem to have misread both GP and the import of the claim that [functionally] specified complex information is a reliable EMPIRICAL sign of design:

    1 –> Complex, functionally specific information is an observed fact, e.g. posts in this thread, DNA coding for proteins, computer programs and many other cases.

    2 –> Once such information is originally digital or can be digitised, we can specify a configuration space and in prinsipul assess the presenxe of islands of funktion, where variation across the island may change the degree of relevant function but does not destroy it. [For instance there are some deliberate typos here — incorrect spelling but communication is preserved. But beyond a point, the limit of redundancy is passed and the message is garbled.]

    3 –> Once the config space is large enough, and the islands of function are sufficiently isolated, then a random walk from an arbitrary initial point loses the prospect of being likely enough to find such an island on available search resources — lost and sunk in the sea of non-function.

    4 –> But, since we observe as well intelligent agents who design, we can see that such agents, well within available resources, routinely generate FSCI.

    5 –> So, we have millions of cases of FSCI of known provenance, and in every instance beyond a reasonable threshold — e.g. as a rule of thumb 500 – 1,000 bits of storage capacity — where we know the causal story, FSCI originates in an intelligent agent. (Lucky noise is strictly possible but so improbable as to be empirically unobserved. [BTW, this is the foundation of the statistical form of the 2nd law of thermodynamics.])

    6 –> So, as an empirically anchored matter of summary of well-established patterns of the empirical world, FSCI is a reliable sign of intelligent design.

    7 –> One that is backed up by the issue of the search space challenge for the other known cause of highly contingent outcomes, stochastic, undirected contingency.

    8 –> So, no tautologies are being put, no questions are being begged and the issue is not addressed by definition but by observation and inference to best current explanation tied to those observations. [“Current” is used to underscore that this is an open-ended exercise: provide a counter-instance to the design observations, summary explanations and conclusions, and they would be overthrown; as is true of all defeatable, empirically anchored reasoning.]
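    As an aside on the numbers in point 5 above: the “500 – 1,000 bits” rule of thumb can be checked against the universal probability bound with a few lines of arithmetic. This is a minimal sketch, assuming the ~10^150 figure usually quoted for Dembski’s bound; the helper name is illustrative, not part of any published formalism:

    ```python
    import math

    # Dembski's universal probability bound, usually quoted as ~1 in 10^150
    # (an assumption of this sketch, taken from the figure the thread cites).
    UPB = 10 ** 150

    def config_space_size(n_bits: int) -> int:
        """Number of distinct configurations of an n-bit string."""
        return 2 ** n_bits

    # log2(10^150) ~= 498.3, so the crossover sits between 498 and 499 bits:
    # a 500-bit configuration space already exceeds the bound, and a
    # 1,000-bit space exceeds it by roughly another factor of 10^150.
    crossover = math.log2(UPB)                     # ~= 498.29
    exceeds_at_500 = config_space_size(500) > UPB  # True
    ```

    On these figures, the 500-bit floor is simply the first round number past the crossover, and the 1,000-bit figure builds in a large safety margin.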

    GEM of TKI

  191. 191
    gpuccio says:

    Diffaxial #188:

    You go on not answering my points, and inventing new false points. Maybe I am a little tired of that.

    Anyway, for this time:

    Specification. We may define it as the input of a conscious meaning into the organization of information. In that case, it obviously derives from design by definition. Let’s call that, for clarity, original intentional specification, or if you want “true specification”.

    Then, there is what we can call “recognizable specification”: what appears to us as specified. That is the object of design detection.

    The two things are not the same, and you are equivocating on what is obviously a very brief summary of a much bigger issue. Again, it seems that you want to evade the more detailed points and stick to what allows you to equivocate.

    What appears as specified is not necessarily the product of “true specification”. It could be the product of a random system, if it is simple enough to be in the range of what a random system can realistically generate, or it could be the product of known laws of necessity: let’s call that “pseudo-specification”.

    And it is not true that all “true specification” is easily recognizable. The recognition depends on the resources of the observer. So, let’s call all “true specification” which we don’t recognize “hidden specification”.

    With those linguistic clarifications, let’s see the ID argument as I have given it, and let’s see if it is circular or not.

    “True specification” is the true mark of design: design is always specified because it always is the projection of a conscious intent.

    Sometimes that specification, that intent, is easy to detect. Other times it could remain hidden, even forever. In those cases of “hidden specification”, design detection is simply not possible. Those cases will remain false negatives, unless specification is at some point recognized.

    For all those cases, instead, where we recognize specification, the question arises:

    a) is that “recognized specification” an expression of “true specification”, and therefore of design?

    or

    b) is it the product of a random mechanism, or of known laws of necessity, and therefore only an instance of “pseudo-specification”?

    That’s where complexity is necessary: it excludes the random hypothesis. If complexity is not enough, “recognized specification” cannot be satisfactorily interpreted: it could be “true specification” or “pseudo-specification”, but we cannot be sure. So the case is classified as negative: it could be a true negative (if it was indeed “pseudo-specification”) or a false negative (if it was indeed “true specification”). We simply can’t say.

    A careful assessment of possible mechanisms based on necessity is also necessary.

    But, if there is “recognizable specification”, and complexity is enough, and no known law of necessity can explain what is observed, then design can be inferred, and it is the best explanation.

    IOW, that’s the EF, which you should be familiar with.

    Nothing is circular in this reasoning. “True specification” is a mark of the process of design, by definition. The design inference uses specific quantitative methods to assess empirically whether what we observe in a candidate product of design (recognized specification, which is an empirical fact) is truly an expression of “true specification” or not. Under those conditions, design is inferred.
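    The three-step filter just described (necessity, then chance, then design) can be sketched as a decision procedure. This is a minimal toy version: the 500-bit threshold stands in for Dembski’s quantitative bound, and all function and parameter names are illustrative assumptions, not part of any published formalism:

    ```python
    # Toy sketch of the explanatory-filter logic described above.
    # The 500-bit threshold stands in for Dembski's universal probability
    # bound; every name here is an illustrative assumption.

    UPB_BITS = 500

    def explanatory_filter(explained_by_known_law: bool,
                           specification_recognized: bool,
                           complexity_bits: float) -> str:
        """Classify an observed pattern as necessity, design, or undetermined.

        complexity_bits: -log2 of the pattern's probability under the
        relevant chance hypothesis.
        """
        if explained_by_known_law:
            return "necessity"       # a law of necessity accounts for it
        if specification_recognized and complexity_bits > UPB_BITS:
            return "design"          # chance is rejected at the bound
        return "undetermined"        # possible false negative, as noted above

    # A recognizably specified pattern at 600 bits passes the filter:
    verdict = explanatory_filter(False, True, 600)  # -> "design"
    ```

    Note how the “undetermined” branch encodes the false-negative tolerance described above: failing the complexity test never licenses a positive conclusion either way.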

    So, before accusing others of demonstrating things “by definition” or circularly, please try to understand what others are saying. If I sometimes introduce new concepts too briefly, you can ask for clarifications. I usually assume that my interlocutors, especially those who, like you, are so ready to criticize, have some basic understanding of ID. You are repeatedly demonstrating that that’s not the case.

  192. 192
    Diffaxial says:

    Gpuccio:

    I now have the time to respond to your earlier post at 166, as well as 188.

    My positions on those points (many times explicitly stated on this blog)…

    Of course, I listed the “strong” hypotheses that you addressed in your response, and you may not wish to defend them. Yet I cannot be too far off the mark, seems to me, given that you say that you have articulated your position on each of these assertions many times on this blog. But perhaps you have others to offer (saltations necessarily follow from design, for example).

    What emerges from your responses on these particular issues is that none are “necessary entailments” of ID theory, such that an empirical test of those entailments places ID, or a tenet of ID, at risk of disconfirmation. Front-loading is a “possible scenario,” although obviously not a necessary one, given that you don’t believe in it. ID survives either finding. ID has nothing to say about whether God is the designer. ID survives any finding. ID entails no assertions regarding the number of designers. ID survives any finding. ID includes no necessary entailments concerning the tempo of design. ID survives any finding regarding tempo. In short, in these domains, ID theory – that is, your theory regarding the characteristics or process of design – offers no testable entailments.

    I then characterized several of your statements as bare assertions, only. Your response, for the most part, was to repeat your bare assertions:

    a) “not one single phenomenon attributable to design has ever been shown to have other explanations”. That’s simply true. I obviously mean “attributable to design” by the ID inference. Have you any counter-example?

    “That is simply true…by the ID inference” is an argument by bare assertion. As stated above (perhaps you didn’t intend this), it is also true by definition. By definition, a phenomenon attributable to design does not have other explanations. But propositions that are true by definition do not necessarily pick out phenomena in the world, as their “truth” is tautological.

    A slightly modified but actually very different assertion would be “not one single phenomenon attributed to design has ever been shown to have other explanations.” But that is the question currently in dispute. For example, the BacFlag has been attributed to design, yet others assert on the basis of empirical findings that it has been shown to have other, natural explanations. Of course you can assert that those other explanations are wrong – but we are then back to a contest of bare assertion. The only exit from such contests is to articulate your theory in such a way that it yields testable assertions, such that the theory is placed at risk of disconfirmation, and get going with the real science.

    b) “Structures which are recognized as designed are designed”. That’s only the same point as a).

    Again, bare assertion. Same response as to a).

    c) “No false positives”. Again the same point. Have you any example of false positives? I mean, outside biological information, which remains the controversial issue, can you offer some example where a design detection process with the quantitative threshold given by Dembski gave a false positive?

    This is peculiar. Biological complexity is indeed the issue, and virtually the entire biological community argues that all of your “positives” in that domain are false positives. Disagree? Get to work with those entailments and those tests. Outside of biology (e.g. in forensics) many false positives have occurred. Your qualifier that no false positives have been observed using Dembski’s numbers renders the assertion meaningless, as no one applies Dr. Dembski’s standards of probability in such settings. Hence neither “true” nor “false” positives by that standard are observed in these other domains.

    “So, what I am saying is…it will be obvious that the observed saltations will not be explained by a random variation + NS model. That is a prediction, and it will be verified”…That’s a prediction, isn’t it?

    “It will be verified” is, of course, more bare assertion. But it IS a prediction. So is my prediction that the Cubs will never win a World Series. Mine isn’t a scientific prediction, however; it is a prediction about future historical events, but not a prediction that arises in any necessary way from a theory about future events. Similarly, yours is not a scientific prediction, as it reflects no positive entailments that arise uniquely from ID. Rather, it is a prediction about the future success or failure of entailments of another theory, and therefore a prediction regarding future history. It is a prediction about science, but it is not a scientific prediction. All of the work required to test that prediction is of necessity conducted from within the framework of that alternative theory, and is assisted not one iota by the design hypothesis. You should be deeply suspicious when a putative primary empirical “test” of your theory derives no guidance whatsoever from that theory.

    Similar objections may be raised to your other predictions, which all include assertions that turn crucially upon the success of a rival theory.

    On to your next post.
    Your disquisition about the various forms of specification culminates in the following:

    But, if there is “recognizable specification”, and complexity is enough, and no known law of necessity can explain what is observed, then design can be inferred, and it is the best explanation.
    IOW, that’s the EF, which you should be familiar with.
    Nothing is circular in this reasoning.

    This is entirely circular (probably also a Möbius strip), and again hides an argument by bare assertion. The circle is closed with the phrase “and no known law of necessity can explain what is observed…” Once again, THAT is the point of contention. Contemporary evolutionary theory argues that all of the complexity observed in biology is explained within a natural framework, absent design. Therefore, upon pointing the EF at a biological phenomenon and declaring it thereby as “designed,” you are simply taking exception to that assertion of contemporary biology, and simply making the bare contrary assertion, “We don’t believe that there is, or can be, a natural explanation for such complexity. Therefore it is designed.” Hanging numbers on the process (such as the UPB) doesn’t remove the circularity, nor render this other than argument by bare assertion.

    The way out of this circularity? Specify necessary entailments of ID, such that empirical test of those entailments puts ID, or a major tenet of ID, at risk.

  193. 193
    jerry says:

    “Contemporary evolutionary theory argues that all of the complexity observed in biology is explained within a natural framework, absent design. ”

    This is an absurd statement. There is no evidence for it. The whole controversy is about the lack of evidence for such a statement. So until such a statement can be supported all biology textbooks and courses should reflect that lack of empirical support for this very specious proposition.

  194. 194
    kairosfocus says:

    Diff:

    The current discussion on WAC 4 is now far afield, on tangential matters. (That strongly suggests that the critics’ case on the merits is not particularly strong.)

    However, I cannot but observe your:

    ID has nothing to say about whether God is the designer. ID survives any finding. ID entails no assertions regarding the number of designers. ID survives any finding. ID includes no necessary entailments concerning the tempo of design. ID survives any finding regarding tempo. In short, in these domains, ID theory – that is, your theory regarding the characteristics or process of design – offers no testable entailments.

    Now, since the crucial focus of the WAC above is that there are empirical points of test for the design inference, and that it is in fact making successful risky predictions, it is worth taking up this assertion.

    For, abundant and easily accessible pointers [even in this thread, including from GP] to the actual empirical entailments and associated points of test for the design inference are on record. So, you have unfortunately set up a strawman, which you then set out to knock over.

    1: ID has nothing to say about whether God is the designer — ID at core is the scientific study of empirical signs of design.

    –> This may be seen from a classic definition by Dembski:

    intelligent design is . . . a scientific investigation into how patterns exhibited by finite arrangements of matter can signify intelligence.

    –> inference on empirical evidence of reliable signs of design is very different from inference to who may constitute the relevant set of candidate designers.

    –> As forensics long since tells us, specific circumstantial and testimonial evidence on motive, means and opportunity can lead us from inference that a crime has been committed, to whodunit.

    –> but, it must first be shown that tweredun.

    2: ID survives any finding — in fact, there are any number of possible findings that would either directly overthrow or so fatally weaken the design inference that it would break down.

    –> For instance, the design inference is predicated on the point that there are identifiable, reliable empirical signs of design.

    –> So, to disestablish ID (and as has been said openly from the outset) all one has to do is to produce a credible case where, e.g. functionally specific complex information — especially algorithmic or data structure information — beyond the Dembski UPB has spontaneously arisen by unguided forces of chance and necessity. (And similarly for irreducible complexity.)

    –> There are millions of cases of CSI and of IC entities of known origin coming to be by intelligent direction. It seems that there is as yet no credible case of the opposite. (Otherwise, it would be trumpeted far and wide across the Internet.)

    3: ID entails no assertions regarding the number of designers.

    –> Again, that tweredun is antecedent to whodunit, and whether by lone assassin or conspiracy.

    4: ID includes no necessary entailments concerning the tempo of design.

    –> How tweredun — and how much time it may take — is again after that tweredun.

    –> First: are there empirically credible signs of design, per sufficient known cases and an explanatory framework that distinguishes art from chance + necessity? YES.

    –> Does that framework show why chance + necessity will be practically — as opposed to logically — hampered from creating FSCI etc? YES.

    –> Is it tested? YES, millions of cases of known origin, no counter-instances. [So we have a well anchored inductive generalisation.]

    –> What of Miller’s TTSS? In a nutshell, the TTSS uses a subset of the genetic provision of the flagellum to create a second device, a toxin injector, so that prokaryote cells may prey on eukaryote cells. That is, it is credibly later and derivative.

    5: in these domains, ID theory – that is, your theory regarding the characteristics or process of design – offers no testable entailments

    –> You have here (despite repeated warnings to the contrary) diverted from the openly stated, empirically testable claims made by core design theory, and have sought to dismiss core ID based on what it does not attempt to address.

    –> That’s like criticising cricket for not being baseball. (But it is not, and was never set out to be . . . )

    –> Thus, you have set up and have sought to knock over a strawman of your making, not the real core ID theory.

    6: The actual core of design theory and its empirical predictions.

    –> In 184 above, I have summarised:

    let us not forget the major/central empirically testable claims — thus, predictions — of design theory:

    1 –> The three longstanding causal factors, chance, necessity and design will continue to be a valid trichotomy of causal situations.

    2 –> That is, not only will no fourth factor turn up, but distinguishing signs of the three factors at work will permit us to characterise each of the three as it affects aspects of empirical phenomena.

    3 –> First, blind mechanical forces give rise to empirical regularities, i.e. they lead to low contingency outcomes. (Once initial conditions are the same, the same path will play out.) This, we term, necessity.

    4 –> Of course, this leaves room for the case of sensitive dependence on initial conditions; but we should note that the diversity of outcomes here rests on divergent initial conditions and divergence amplification through non-linearities, i.e. the diversity does not come from the mechanical necessity at work.

    5 –> In general, highly contingent aspects of outcomes under similar initial conditions will trace to chance and/or design.

    6 –> In some cases, there will be a stochastic pattern [probability/ statistical distribution], i.e. the contingency of outcomes is credibly undirected. This, we term, “chance.”

    7 –> In other cases, contingency will credibly be directed towards a goal, and will often be functionally constrained to achieve that goal; leading to an otherwise unlikely (i.e. on a random walk hypothesis, maximally unlikely) target zone of configurations that is specific, information rich and functional. This, we term, design.

    8 –> Thus, we will see characteristic empirically detectable signs of necessity, chance and design that may be intelligibly discerned on appropriate aspects of empirical objects and phenomena.

    9 –> For design, these will include: [a] irreducible complexity, [b] complex specified (especially functionally specified) information, [c] algorithmic, code-based functionality, [d] language based functionality (e.g. alphanumeric strings in a language like English or a computer language like C).

    As I concluded previously: “Such claims are testable, are open to test and are tested day by day as we work in a digital, information age. So far, on literally millions of tests, the claims are well-supported by actual observations. So, though they are obviously risky assertions, they are confidently proffered, as well-tested observations will tend to be further supported. [Historically corrections tend to come at points where observations are pushed to new limits, with the older generalisations being retained as a well supported limiting case, e.g. Newtonian dynamics in a relativity and quantum age.]”

    _____________

    Diff, we are on a cricket pitch, not a baseball field.

    GEM of TKI

  195. 195
    Adel DiBagno says:

    kairosfocus [167]:

    In the relevant context, chance and necessity [blind spontaneity] is one major family of explanations, and design is another [and all the cluster of alternatives you suggested are post design, dealing with proposed mechanisms and candidate designers].

    gpuccio [170]:

    So, why are you quoting design theories as alternatives to ID? Those are ID theories, although each of them has specific peculiarities.

    Congratulations, gentlemen. You have nicely shown that ID and creationism are, in your minds, the same argument.

  196. 196
    jerry says:

    “Congratulations, gentlemen. You have nicely shown that ID and creationism are, in your minds, the same argument.”

    I suggest you read a comment by Allen MacNeill an hour or two ago on another thread to see how a sophisticated anti ID person sees it.

    http://www.uncommondescent.com.....ent-316121

  197. 197
    kairosfocus says:

    Adel:

    Not at all, please read the weak argument correctives above.

    1 –> [Biblical] Creationism at root is about the concept that, first, people have a trustworthy inscripturated account (subject to onward debates on interpretation) of the true state of the world in the past, and so should build that into our reasoning in science.

    2 –> That sense of trustworthiness in turn rests on being a part of a 2,000 year tradition of encounter with God through the risen Christ.

    3 –> So, Creationists believe in a designer, but they have onward frames of thought that are approaching the question from accepted traditions of revelation to experience, not from in-common empirical data in the world to signs of design to inferring designers in cases where we see the signs but have not directly observed the designer.

    4 –> So, it is an error of reasoning to equate [a] making an empirically based design inference to [b] inferring from trusted scriptures to an account of the claimed true state of the world in the remote past.

    5 –> And, while Creationists can reason along the lines of [a], that — assertions to the contrary notwithstanding — is independent of their thinking along the lines of [b], which indeed the making of such an inference opens up to empirical test.

    6 –> Further to the above, we should observe that design thinkers come from a wide variety of views and traditions, i.e. ID is showing itself to be an in-common empirically anchored approach. (And, one that is inherently open to empirical challenge: if it can be shown that there are no reliable signs of design, the design inference and filter would collapse.)

    7 –> So, you have unfortunately affirmed the consequent: the true claim “if Creationist, then believing in design of life etc.” does not sustain the claim “if believing in design, then Creationist.” For, implication is not equivalence.

    GEM of TKI

    (PS: I seem to be being dogged by what looks like a Firefox bug that kills a comment on moving to another tab then returning. Or, is that a virus that has got through to me?)

  198. 198
    gpuccio says:

    Adel #195:

    Congratulations, gentlemen. You have nicely shown that ID and creationism are, in your minds, the same argument.

    Adel, Adel… I hope that’s only a joke.

    OK, maybe it would have been more precise to say: “Those are design theories” instead of “Those are ID theories”. Maybe I will have to remember that I have to call my lawyer before blogging at UD 🙂

    But my paragraph started with

    “So, why are you quoting design theories as alternatives to ID?”

    so, I believe that any intelligent person would have understood what I meant. And I am sure that you are an intelligent person.

    Have you any doubt that any form of creationism is a design theory? After all, creation is a form of design. The opposite is not necessarily true, obviously.

    But my point is that if things were created, they were designed. I was not implying, however, that the strict scientific methodology of ID must necessarily be accepted by all those who believe in a Creator. TEs are a good example of religious people who do not accept, usually, ID.

    So, in a sense you are right: we can compare the methodology of ID and the methodology of a pure creationist from a scientific point of view. Let’s say creationism is our B and ID our E. I state that E is still the best explanation.

    But if you believe that deriving our scientific conclusions directly from a sacred text is a better scientific methodology than detecting design by the ID methods, well, I am open to discussion…

    But my question to you remains: So, why are you quoting design theories as alternatives to ID?

    Have you no other alternatives left?

  199. 199
    gpuccio says:

    Diffaxial:

    1) That informational saltations necessarily follow from design is not my position: it is an essential concept of ID. That the saltation can be detected if it is complex enough is the essence of the EF.

    2) My #166 included three different “bulleted lists”, all of the a)… d) type (perhaps I should use more formatting imagination). Those you quote at the beginning of your post were in the first list, introduced by a very clear statement:

    “My positions on those points (many times explicitly stated on this blog):”

    So, they were never intended as scientific predictions, but as very simple clarifications of my personal positions. Why then do you spend so much time criticizing them for their scarce value as predictions? They are not. They never were. I understand that English is not my native language, but is my expression so misleading?

    So, I don’t understand your series of statements of the kind:

    “ID entails no assertions regarding the number of designers. ID survives any finding.”

    And so? Are we obliged to guess how many designers there were, so that when it will be proved that there were three instead of two darwinists can rejoice? I must have missed something…

    Anyway, the predictions were later, as I think you yourself may have realized.

    3) Another statement of mine was:

    a) “not one single phenomenon attributable to design has ever been shown to have other explanations”. That’s simply true. I obviously mean “attributable to design” by the ID inference. Have you any counter-example?”

    You comment:

    “That is simply true…by the ID inference” is an argument by bare assertion. As stated above (perhaps you didn’t intend this), it is also true by definition.

    I must really call my lawyer. Well, the whole phrase, with the specification I added, and with further specifications required by your misunderstandings, would sound like this:

    “Not one single phenomenon attributable to design by the methodology of design inference offered by the ID theory has ever been shown to have other explanations, excluding the cases of biological information which are obviously the object of our controversy. Have you any counter-example? If not, you should agree that my statement is simply true.”

    That is not an argument by assertion. It is an assertion with an invitation to simply falsify it. And certainly there is nothing true by definition in it.

    Given your desire to find me culpable of demonstrating things by definition or by circular arguments, I must give you a disappointment: I never do that. And yes, this is an assertion, you need not point to that in your next post.

    4) Immediately after, you show that indeed you had understood what I meant:

    A slightly modified but actually very different assertion would be “not one single phenomenon attributed to design has ever been shown to have other explanations.”

    Why very different? I had said:

    “not one single phenomenon attributable to design (by the ID inference) has ever been shown to have other explanations”.

    Where is the difference? I beg your pardon, but it is your formulation which is vague and imprecise. I cannot answer for any design attribution made by anybody. I have to specify what kind of design attribution I mean, and by which methodology. And I did exactly that. The result: gratuitous accusations of arguments by definition and similar…

    Then you go on quoting the flagellum as a counter example. But you must understand that the whole biological information is the problem at stake here. So neither I nor you must use examples from biological information at this level.

    So, please show me a single phenomenon, outside biological information, which can be attributed to design by the ID procedure, and which has been proven to have another explanation.

    5) To my repeated assertion of no false positive, you again search for help in biological information. And you really make a very peculiar statement:

    This is peculiar. Biological complexity is indeed the issue, and virtually the entire biological community argues that all of your “positives” in that domain are false positives. Disagree? Get to work with those entailments and those tests.

    IOWs, biological information is the issue of our controversy (then you do understand that!), and as the entire biological community agrees with you, I have to accept that as a counter example? Where is your logic? Please, give me a false positive of the ID procedure outside of biological information, or stop that.

    6) Later, again you prove that you had understood better than you wanted to show:

    Outside of biology (e.g. in forensics) many false positives have occurred. Your qualifier that no false positives have been observed using Dembski’s numbers renders the assertion meaningless, as no one applies Dr. Dembski’s standards of probability in such settings. Hence neither “true” nor “false” positives by that standard are observed in these other domains.

    Please, take notice that it is not so difficult to apply “Dr. Dembski’s standards of probability” to human artifacts. You can easily apply them to one of these posts, for example. Kairosfocus has stated a lot of times that any English post with definite meaning, and longer than, say, 100 or 200 characters, goes well beyond Dembski’s requirements. So, show me that another explanation is available for one of these posts, or for an equivalent piece of software, and you will have made your point. So, in all products of language or of programming we do have a lot of positives, and none of them is false. Can you counter this statement?
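    The arithmetic behind this claim is easy to check. A minimal sketch, assuming a 27-symbol alphabet (26 letters plus space) and treating each character as an independent equiprobable choice, compares a string’s configuration-space size against Dembski’s 500-bit universal probability bound (roughly 1 in 10^150); the alphabet size and the uniform-probability model are simplifying assumptions:

```python
import math

ALPHABET_SIZE = 27   # 26 letters + space; a simplifying assumption
UPB_BITS = 500       # Dembski's universal probability bound, ~10^150 ~ 2^498

def config_bits(n_chars: int, alphabet_size: int = ALPHABET_SIZE) -> float:
    """Bits needed to single out one string among alphabet_size**n_chars
    equiprobable strings of length n_chars."""
    return n_chars * math.log2(alphabet_size)

for n in (100, 200):
    bits = config_bits(n)
    verdict = "exceeds" if bits > UPB_BITS else "falls short of"
    print(f"{n} characters ~ {bits:.0f} bits, which {verdict} the 500-bit bound")
```

    On this crude model the crossover sits near 106 characters (500 / log2(27)), so a 100-character post falls just short of the bound while a 200-character post clearly exceeds it; real English text carries fewer bits per character than the uniform model assumes, which would push the crossover higher still.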

    7) Your “arguments” about my prediction of informational saltations are so out of any pertinence that I have no hope to discuss them briefly. So I just restate my prediction:

    “When we have all the necessary data, which will happen in some time, we can evaluate if informational saltations have appeared suddenly in natural history. The prediction of ID is that we will observe that those saltations have appeared, and that they will be exactly of the kind anticipated by ID: the sudden appearance of functional, complex information in a range of time and in a physical system where no random scenario, no kind of possible selection for function, and no known physical law can explain the appearance of that functional information.” That for me is a prediction, and it is a scientific prediction. If you don’t agree, please let’s stop it here.

    8) Your final discourse, again about supposed circularity, is really amazing! I will address it for the last time, and then again, please, let’s remain with our opinions. Others will judge for themselves.

    The circle is closed with the phrase “and no known law of necessity can explain what is observed…” Once again, THAT is the point of contention.

    No, it isn’t. My requirement (which is obviously from the EF) is very simple, and is not a point of contention at all, except in your imagination. Either known laws of necessity can explain one thing, or they can’t. For known laws of necessity I obviously mean things like the laws of physics, and any detailed and quantitative physical explanation based on them.

    Contemporary evolutionary theory argues that all of the complexity observed in biology is explained within a natural framework, absent design.

    Thank you for informing me of that. Maybe I had missed it. You know, THAT is the point of contention. And “contemporary evolutionary theory” is not, you know, a law of necessity. It is a very speculative and unsupported model which we, in ID, believe false.

    I have a sad suspicion: in the end, all your arguments, or what remains of them, are arguments from authority. A lot of people think that way, how dare you think differently? Again, how sad.

    Therefore, upon pointing the EF at a biological phenomenon and declaring it thereby as “designed,” you are simply taking exception to that assertion of contemporary biology, and simply making the bare contrary assertion, “We don’t believe that there is, or can be, a natural explanation for such complexity.”

    Wrong. I am taking exception to nothing. I am saying that there is not a credible explanation, either of the random kind or of the necessity kind, or of the mixed kind, unless the design concept is used. That is not a belief, it’s a fact. And strangely, instead of falsifying very simply my statement by providing the desired credible explanation, you are trying all other possible ways…

    Hanging numbers on the process (such as the UPB) doesn’t remove the circularity,

    I am afraid that Dembski’s powerful concept of a universal probability threshold can do many interesting things, but certainly not remove a circularity which is not there.

  200. 200
    Adel DiBagno says:

    jerry [196]:

    I suggest you read a comment by Allen MacNeill an hour or two ago on another thread to see how a sophisticated anti ID person sees it.

    In the link that you gave, Allen said:

    I have already agreed with my good friend and colleague (and intellectual opponent), Hannah Maxson, that such a conflation is both inaccurate and inflammatory.

    I suggest you read comments 167 and 170 to see how your colleagues have, on the contrary, conflated ID and creationism.

    In those posts, they minimized differences between ID and alternative theories of origins to escape my assertion of improper disjunction in their earlier arguments. Earlier, they had insisted that there are only two alternatives – evolution or ID. When I pointed out that other theories are out there (and I only mentioned a few), they knocked themselves out to deny that any alternative is significant.

    They’ll have to live with their words. Those words are on the record.

  201. 201
    jerry says:

    Adel,

    There are only two broad alternatives in the evolution debate. They are right. One espouses a naturalistic macro evolution and the other espouses some form of design for macro evolution. These are very broad classifications and any theory one proposes will fit into one or the other. So those who argue for special creation of each species 6,000 years ago are part of a design paradigm. The designer is God. But that does not mean the ID supporters agree with them on how it happened. I suggest you read another comment a few comments above the other one I sent you and follow the link in the comment.

    http://www.uncommondescent.com.....ent-316060

    YEC is design by definition since God designed and made each species, which is why a lot of YECs glom onto ID and roam this site. Part of ID is discrediting naturalistic evolution, and YECs are also interested in this and in fact pioneered a lot of the ideas that undermine Darwinism.

    If one believes an ancient spirit arose from the middle of the earth and made man, then that person is also a design believer. Their ideas would probably not fit in here. Whatever group you find in the world that thinks life was made by some god or some alien will be by definition a design supporter but they may not agree with what has become ID. And that does not mean those here or most other places who are espousing ID agree with them or want any part of their ideas.

  202. 202
    kairosfocus says:

    Adel:

    It seems that the main focus for this thread is over.

    But a further note on the tangential issue you have raised is relevant.

    Kindly note that your incorrect inference to an equation of design and [Biblical] creationist thought has been corrected by three persons: GP, the undersigned and Jerry [not to mention Mr MacNeill as linked].

    In sum:

    1 –> Per definition, all creationists are design thinkers [cf. Newton’s General Scholium to his Principia as a case in point].

    2 –> However, Creationists are practicing a form of what is called by Plantinga Augustinian science. (That is, there is a particular set of writings that are held to report the actual state of the world in the past, from a trustworthy source; that trustworthiness being based on the tradition of knowing God in the face of Christ, multiplied by a particular view on the reading of the relevant scriptural tradition.)

    3 –> Design theory is not Augustinian, appealing instead to generally accessible and accepted empirical data and to an otherwise uncontroversial principle that per such evidence we may infer accurately and even reliably to intelligent vs unintelligent causal factors.

    4 –> As a result, Creationists [of various flavours], theistic evolutionists [broad sense], members of other faith traditions than the Christian or even the Judaeo-Christian one, deists, agnostics and even atheists may — and do — practice design science.

    5 –> Further to this, various models of how “tweredun” are inherently compatible with the question of a method that allows us to credibly test if tweredun. (No sense putting a suspect on trial if there is no good reason to think that there was arson . . . )

    6 –> This last being logically prior, we can see how Creationist, Frontloading etc models of how tweredun are all committed to the principle that design happened.

    7 –> So, a method that makes design expectations specific, and makes in effect testable predictions [as summarised above] — i.e. through the “aspects form” explanatory filter — exposes all such onward models to a key point of empirical test. (And BTW, that is part of the objection to ID made by that school of theistic evolutionists who hold that design is real but undetectable by empirical methods.)

    GEM of TKI

    (PS: Looks like the prob is with FF 3.0.10 [Safari has no such probs . . . ], am now living off 3.5 beta, which has interesting “betazoid” effects!)

  203. 203
    Adel DiBagno says:

    kf:

    On FireFox – Have you tried Google Chrome?

    It’s FAST and sleek.

  204. 204
    Adel DiBagno says:

    kf, gpuccio, jerry,

    Pax vobiscum

  205. 205
    jerry says:

    et cum spiritu tuo

  206. 206
    Diffaxial says:

    gpuccio:

    1) That informational saltations necessarily follow from design is not my position: it is an essential concept of ID. That the saltation can be detected if it is complex enough is the essence of the EF.

    One can imagine a scenario in which a designer simply nudged certain mutations in a particular direction for particular adaptive purposes, in a manner that in each case was indistinguishable from mutations that are random with respect to the adaptive significance of the mutations. An examination of the phylogenetic history of the resulting organisms would reveal no shifts of either phenotype or subparts (proteins, for example) larger than those that were possible by means of random mutations and selection – yet the outcome was preconceived and actualized by that means. No saltations would be present, yet the outcome was designed.

    You appear to be saying that it is not possible that a designer operated in this way. Why? Why is it a necessary entailment of ID that designers are constrained to work only by effecting “informational saltations?”

    So, I don’t understand your series of statements of the kind:
    “ID entails no assertions regarding the number of designers. ID survives any finding.”

    What I am asking for are entailments of ID theory, and the empirical tests of such predictions such that failure to observe the predicted outcome is something that ID may not “survive.”

    “not one single phenomenon attributable to design has ever been shown to have other explanations”.
    “not one single phenomenon attributed to design has ever been shown to have other explanations.”

    The difference between these two statements is huge. It is analogous to the difference between:

    “Not one single geometric circle has ever been shown to have a shape other than that of a circle” (true by definition), and

    “Not one single geometric form thought to have been a circle has ever been shown to have a shape other than that of a circle.”

    The first is certainly true, by definition. The second is not.

    That’s the difference.

    So, please show me a single phenomenon, outside biological information, which can be attributed to design by the ID procedure, and which has been proven to have another explanation.

    I’m not aware of anyone actually “applying” the EF either “outside” or “inside” biological information. Moreover, the filter could be (for the sake of argument) 100% accurate in discerning human artifacts from natural objects, yet return 100% false positives when pointed at biological objects (because biological objects are not designed).

    Either known laws of necessity can explain one thing, or they can’t. For known laws of necessity I obviously mean things like the laws of physics, and any detailed and quantitative physical explanation based on them.

    “Laws of necessity” don’t explain things – people do. In the relevant instances, scientists do. Nor are phenomena “either explained or not explained.” Virtually all physical phenomena for which we now have explanations once were in need of explanation, and moved from the category of “unexplained by known laws of necessity” into the category of “explained by the known laws of necessity” by dint of human effort. Moreover, there remain many phenomena for which we have no firm explanation in the context of mathematical physics and chemistry (say, the particular compositions of the moons of Jupiter, or the nature of dark energy), yet for which we have no reason to believe that natural explanations are not attainable. It follows that there may be many phenomena for which natural explanations cannot be offered in terms of the (imbecilic, repetitive ID shibboleth) “chance or necessity,” yet for which we have no reason to conclude that they were thereby designed.

    This is why the filter doesn’t work. The status, “can’t be explained in terms of chance or necessity” is contingent: explanation in terms of chance or necessity doesn’t simply lie there, self-evident. Rather, the class of objects that are explained thereby is constantly changing, and constantly expanding, as explanations are proffered and tested.

    Over approximately the last 150 years, biological phenomena have moved progressively from the “unexplained” category into the explained. Of course, you don’t find those explanations “credible,” the escape hatch you have built into your definitions that enables you to force them to be “true.” But no matter: that migration will continue apace, whether you like it (or even acknowledge it) or not.

    I have a sad suspicion: in the end, all your arguments, or what remains of them, are arguments from authority. A lot of people think that way, how dare you think differently? Again, how sad.

    Actually, what I have repeatedly, even monotonously stated is that contests of bare assertion (either yours or mine) get us nowhere, and that the way out of such contests is to specify necessary entailments of your theory that are subject to empirical test, such that your theory is placed at risk of disconfirmation. That is the furthest thing possible from an argument from authority.

  207. 207
    Joseph says:

    Diffaxial,

    The EF works. The ONLY problems the EF has are the data used and the people using it.

    The EF demands TWO criteria be met:

    1- the ruling out of chance and necessity

    PLUS

    2- Specification

    If those two are met then it is safe to infer design.
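    The decision logic described here can be rendered as a toy sketch. Everything hard — deciding whether a law explains the phenomenon, estimating the chance probability, judging specification — is assumed to be supplied as inputs (these judgments are not computed by the code); the function only encodes the order of the filter’s decision nodes:

```python
UPB = 1e-150  # Dembski's universal probability bound, an assumed threshold

def explanatory_filter(explained_by_law: bool,
                       chance_probability: float,
                       is_specified: bool) -> str:
    """Toy rendering of the Explanatory Filter's decision order.

    The inputs are stipulated judgments, not computed ones; producing
    them is where all the real (and contested) work lies.
    """
    if explained_by_law:
        return "necessity"   # node 1: a regularity explains the phenomenon
    if chance_probability > UPB:
        return "chance"      # node 2: not improbable enough to rule out chance
    if is_specified:
        return "design"      # node 3: sufficiently improbable AND specified
    return "chance"          # improbable but unspecified: defaults to chance
```

    Note that “design” is only ever reached by eliminating the other nodes plus a specification judgment, which is exactly what Diffaxial contests above: the verdict is only as reliable as the explained-by-law and probability inputs fed in.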

    And yes as with ALL scientific inferences that can either be confirmed or refuted with future research.

    It is true that biological phenomena have been explained, but not by nor because of the theory of evolution.

    So to refute the design inference for the bacterial flagellum all one has to do is demonstrate that a designing agency is not required.

    To do that just take some populations of flagella-less bacteria and see if a flagellum develops.

    Right now your position doesn’t have any empirical evidence to support it…

  208. 208
    Joseph says:

    The entailments of ID, as with archaeology and forensic science (SETI also), are that designing agencies leave traces of their involvement behind.

    Therefore, if we did not observe any traces, or the traces we thought we observed turned out to be caused by nature, operating freely, the design inference would fall.

    I have stated that several times already and not one of you can comprehend it.

    Why is that?

  209. 209
    kairosfocus says:

    Adel:

    And to you.

    GEM of TKI

    PS: Tried Chrome — nope. Safari may tempt me, or Opera. But it’s maybe 6 weeks out on the fix-up to release candidate . . .

  210. 210
    Hoki says:

    First of all, let me apologize for not writing in a few days. I was away and had no internet access. Anyhow:

    gpuccio (186):

    I think the meaning is clear enough. If only a phenotypic variation is selected, but it does not correspond to a genotypic variation, the variation will not be transmitted. In some way, any phenotypic variation has to originate from the genotype, or to be converted to a genotypic difference. What other model do you have in mind?

    I think I see where your problem lies here. You seem to be assuming that there should be selection FOR junk. I have never claimed such a thing. The designer would select for whatever phenotype desired and the junk would simply go along for the ride. Even non-selected for DNA can be transmitted to subsequent generations.

  211. 211
    Diffaxial says:

    Looks like gpuccio has beaten a retreat.
