Intelligent Design

Ken Miller’s Strawman No Threat to ID


Earlier today the News desk posted a video of Brown University biochemist Ken Miller’s takedown of ID. This is a fascinating video and it is worthwhile to post a transcript for those readers who do not have time to stream it. The video is excerpted from a BBC documentary called, with scintillating journalistic objectivity, The War on Science.

BBC Commentator: In two days of testimony [at the Dover trial] Miller attempted to knock down the arguments for intelligent design one by one. Also on his [i.e., Miller’s] hit list: Dembski’s criticism of evolution, that it was simply too improbable.

Miller: One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

BBC Commentator: For Miller, Dembski’s math did not add up. The chances of life evolving just like the chance of getting a particular hand of cards could not be calculated backwards. By doing so the odds were unfairly stacked. Played that way, cards and life would always appear impossible.

Now, to be fair to Miller, in a letter to Panda’s Thumb, he denies that his card comment was a response to Dembski’s work. He says poor BBC editing only made it appear that he was responding to Dembski, when really, “all I was addressing was a general argument one hears from many ID supporters in which one takes something like a particular amino acid sequence, and then calculates the probability of the exact same sequence arising again through mere chance.”

The problem with Miller’s response is that even if one takes it at face value he still appears mendacious, because no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.” Consider the example advanced by Miller, a sequence of 52 cards dealt from a shuffled deck. Miller’s point is that extremely improbable non-designed events occur all the time and therefore it is wrong to say extremely improbable events must be designed. Miller blatantly misrepresents ID theory, because, as I noted above, no ID proponent says that mere improbability denotes design.

 
Suppose, however, your friend appeared to shuffle the cards thoroughly and dealt out the following sequence: all hearts in order from 2 to Ace; all spades in order from 2 to Ace; all diamonds in order from 2 to Ace; and then all clubs in order from 2 to Ace.  As a matter of strict mathematical probability analysis, this particular sequence of 52 cards has the exact same probability as any other sequence of 52 cards. But of course you would never attribute that sequence to chance. You would naturally conclude that your friend has performed a card trick where the cards only appeared to be randomized when they were shuffled. In other words, you would make a perfectly reasonable design inference.

What is the difference between Miller’s example and my example? In Miller’s example the sequence of cards was only highly improbable. In my example the sequence of cards is not only highly improbable, but also it conforms to a specification. ID proponents do not argue that mere improbability denotes design. They argue that design is the best explanation where there is a highly improbable event AND that event conforms to an independently designated specification.
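The arithmetic behind the two deals is easy to check. Here is a small Python sketch (the tuple encoding of cards is my own, purely for illustration): it computes the probability of any particular 52-card sequence, and compares one shuffled deal against the pre-stated “four suits in order” specification.

```python
import math
import random

# Probability of dealing any *particular* 52-card sequence: 1 / 52!.
p_any_sequence = 1 / math.factorial(52)
print(f"{p_any_sequence:.2e}")  # ~1.24e-68

# The "all four suits in order" deal has exactly this same probability,
# but it also matches a pattern we could write down in advance, i.e. an
# independent specification. Ranks run 2-14, with 14 standing for the Ace.
specified = [(rank, suit) for suit in "HSDC" for rank in range(2, 15)]

# Shuffle a copy of the deck and compare it to the specification.
deal = specified[:]
random.shuffle(deal)
print(deal == specified)  # for all practical purposes, always False
```

Both sequences have probability 1/52!; the difference is only that one of them matches a pattern stated in advance.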

Here’s the interesting part. Ken Miller has been debating design proponents all over the country for many years. He knows ID theory very well. Yet instead of choosing to take ID’s arguments head-on, he constructs a strawman of ID theory and knocks it down.

I am not a scientist or a mathematician. I am a lawyer, but perhaps my legal training has given me an invaluable tool in the Darwin-ID debate, the tool Phil Johnson calls a “baloney detector.” And my baloney detector tells me that Ken Miller is full of baloney. Miller knows that no reputable ID proponent equates mere “improbability” with “design.” Yet there he is declaring to all the world that it is a “general argument” of “many ID supporters.”

I have to wonder. If, as the Darwinists say, ID theory is so weak, why don’t they take it on squarely? Why do they feel compelled to attack a strawman caricature instead of the real deal? Indeed, Darwinists’ apparent fear of taking on ID on its own terms is one of the things that gives me great confidence in the theory, and that confidence will be shaken only if Darwinists ever begin to knock down the real ID instead of their ridiculous caricatures of the theory.

73 Replies to “Ken Miller’s Strawman No Threat to ID”

  1. 1
    T. lise says:

    Not exactly this argument, but a similar kind of argument was also used by Denis Alexander. Take, for example, his book “Creation or Evolution”; under the chapter “Intelligent design and Creation’s order”, a passage goes:

    “Many people impressed……of the huge improbabilities involved in biochemical systems coming into being ‘by chance’. But what the reader might miss easily is that the calculations are based on the whole system self-assembling all in one go……But this is tilting at windmills. No scientist believes that this is the way evolution works.”

    What would be your response to that?

  2. 2
    material.infantacy says:

    Evolution functions, such as it does, by heritable variation. Such variation requires a self-replicating system to be in existence, one capable of said heritable variation. One cannot invoke evolution to explain the origin of a system that is required to be in existence before evolution can happen — it is begging the question.

    “How does evolution work? By heritable variation, which is an artifact of a self-replicating system composed of functionally integrated complexity. How did that system come about? By evolution.”

    It does not follow. We’re told that evolution can innovate novel proteins by determining, via NDE mechanisms, the sequences that code for them. However those proteins, along with the DNA that specifies them, must be together and in place — in a functionally sound organism — before the system can function at all. Evolution cannot build an integrated system if it requires that same integrated system in order to innovate in the first place.

    If evolution works at all the way it’s purported to, then the DNA-based replicator must already be in existence. This means that the problem of the system’s origin is in an entirely different category from what the system can do once it is operational. Evolution cannot explain the origin of systems which are required for evolution to occur.

  3. 3
    material.infantacy says:

    Additionally, one cannot construct a functional DNA-based replicator in steps (since at each step it would still need to be a functional replicator capable of evolutionary innovation) — it is an irreducibly complex system. Not only are there no demonstrable, intermediate, functional steps between blind chemistry and a self-replicating organism, there is no evolution in effect until the entire system comes online.

  4. 4
    RalphDavidWestfall says:

    Check out my analysis of the accuracy of something else that Ken Miller said: http://www.uncommondescent.com.....mousetrap/

  5. 5
    gpuccio says:

    The “deck of cards” argument is certainly the most infamous, shameful, absolutely stupid argument ever used by darwinists. It is an offense to reason and to human cognition. I have read it in different contexts, always presented as a magical demonstration of how stupid IDists are.

    All those who have ever used this argument, in whatever form, should be deeply ashamed of themselves. Miller should be deeply ashamed of himself. Even considering this argument for a couple of seconds makes me feel stupid!

  6. 6
    Jon Garvey says:

    No, what evolutionists believe is that gene frequencies change over time… no, that is to say that near-neutral mutations accumulate without natural selection, with purifying selection eliminating the monsters … no, well, you see in complex organisms the neutral mutations swamp the purifying selection, so that’s how these irreducibly complex things develop… or actually, they can occur quite quickly from whole genome duplication or symbiosis, which is to say…

    Oh let’s put it so everyone can understand. Evolutionists believe that evolution works by a whole series of hypothetical changes occurring, each step conferring unspecified infinitesimal advantages on the organism that are all big enough to be selected phenotypically. Once that can be imagined, it becomes true. It’s too beautiful to be doubted.

  7. 7
    gpuccio says:

    Jon:

    Very well said. They do exactly that! 🙂

  8. 8
    Collin says:

    What I don’t fully understand is the specification in biology. Is DNA specified with the proteins that it creates? I mean, why is DNA not like a shuffle of cards that, over time, is fixed into a system that causes proteins to be created that bring about a beneficial function?

  9. 9
    material.infantacy says:

    Don’t hold anything back GP, tell us how you really feel. xp

    Since you’re not fond of the cards analogy, “Suppose I have a bag containing 52 sequentially numbered beads, and I draw them out one by one, keeping a careful record of the sequence in which they were drawn….”

    ;-)

  10. 10
    gpuccio says:

    Collin:

    A protein coding gene is certainly functionally specified. The specification is:

    “A sequence of nucleotides in a DNA molecule that, if read as a digital string according to a well known code (the genetic code), contains the information to build a protein that has this biochemical function”.

    So, a protein coding gene is like a shuffle of cards that corresponds to a very specific symbolic and functional meaning.

    The problem is, that specified result has an extremely low probability of being found by chance.

    Instead, generic meaningless sequences are the rule. It is true that each specific random meaningless sequence has the same low probability to be found, but the category of random meaningless sequences is a result so likely that it happens practically always, with large search spaces.

    The silliness of the argument is that Miller and those like him don’t understand at all (or pretend they don’t understand) how probability works, not even at very simple levels.

    If I toss a die, each of the six simple results has probability 1/6. But if I define two events as follows:

    1) The result is 1

    2) The result is any number different from 1

    Then the probabilities are 1/6 and 5/6. If I had a die with 10^150 faces, the probability of getting 1 would be about 1:10^150, and the probability of not getting 1 would be almost 1.

    That’s more or less the situation with protein coding genes. The probability of functional sequences is so low, that they cannot be found in a purely random system.

    Therefore, the “deck of cards” argument is wrong, stupid, infamous, offensive.

    Please note that in all this discussion we are not considering the supposed effects of NS. NS is a necessity mechanism. It has no part in the discussion of probabilities, and requires a separate treatment.
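The die example above can be sketched in a few lines of Python; exact fractions make the distinction between a single outcome and a category of outcomes explicit.

```python
from fractions import Fraction

# A fair six-sided die: each simple outcome has probability 1/6,
# but the category "any number other than 1" has probability 5/6.
p_one = Fraction(1, 6)
p_not_one = 1 - p_one
print(p_one, p_not_one)  # 1/6 5/6

# Scale the same distinction up: on a hypothetical die with 10**150
# faces, "not 1" is a near-certainty and "1" is effectively unreachable.
p_hit = Fraction(1, 10 ** 150)
p_miss = 1 - p_hit
print(float(p_miss))  # 1.0, to floating-point precision
```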

  11. 11
    gpuccio says:

    material.infantacy:

    I don’t know why, but I can probably tolerate the beads better :).

    Ah, the mysteries of human nature…

  12. 12
    material.infantacy says:

    Don’t be so down on the cards analogy. The argument is now an iconic example of either the misunderstanding or the mischaracterization of specified complexity by ID opponents, depending on who’s using it. I’m sure it will provide us with much amusement as time goes on. Personally, I’m sort of warmed by it, because it’s so simple to demonstrate the misrepresentation using the same example. =D

    m.i.

  13. 13
    gpuccio says:

    material.infantacy:

    I would share your happiness if I believed that the general public (including most darwinists 🙂 ) understands the basics of probability theory. But apparently, that is not the case…

  14. 14
    gpuccio says:

    RalphDavidWestfall:

    Very good indeed!

    And I still believed that Miller just used mousetraps as tie clips! The guy is really creative, I must say…

  15. 15
    Collin says:

    thanks gpuccio, that helps me understand it a little better. I understand how language is information because it is an arbitrary symbolic system. What I mean by that is that the word “tree” has nothing to do with an actual tree. We could use the word “barf.” A rose by any other name would smell as sweet.

    But does DNA “mean” proteins in the same arbitrary way? There is just something more machine-like about DNA that makes it seem less like information than language is. Perhaps it would be better if I knew more about computer science, because the same problem seems to arise (in my mind) with computer languages. After all, the lowest level of computer languages are called “machine” code or “assembly” code. It seems a lot more like the instructions “fit” the computer like a machine and are less like the abstract symbols of higher level languages. So it seems like the languages are merely special gears that fit the machine in a specific way and cause a specific effect. So no other “code” could replace the “word.” In other words, for DNA or machine code, (if I understand correctly) there can be only one word for tree (I’m speaking by analogy). There cannot be another symbol for a certain protein because that symbol would create a different protein. Maybe someone who knows more about DNA and/or computers can help me understand.

  16. 16
    Upright BiPed says:

    Collin, you ask the question of a thinking person.

    Have you read this? It might be interesting to you.

  17. 17
    gpuccio says:

    Collin:

    I see that UB has beaten me to the task 🙂

    Please read his contribution: it is very good and complete.

    I may just point out some simple thoughts, to give a first answer to some of your questions:

    1) the information in a protein coding gene corresponds to the sequence of amino acids in the protein. It is, by all means, what Abel calls “prescriptive information”. The word “tree” is what Abel calls “descriptive information”. Both are subsets of semiotic information. DI outputs a meaning. PI outputs a function.

    2) In a sense, the PI in a protein coding gene must be what it is: the sequence of amino acids must be correctly defined, otherwise the protein will not be the right one. That is not symbolic: it is the real message that has to be transferred, and it has to be that way. But in another sense, the information is wholly symbolic: each amino acid is described by a codon of 3 nucleotides, and the association between codons and amino acids (that is, the genetic code) is completely arbitrary, and is not due to biochemical reasons. The genetic code is symbolic, redundant, and optimized (as shown by many studies). But in principle, any codon could describe any amino acid. The important thing is that the same code is used in the writing of the information (the gene) and in the translation apparatus (the 20 aminoacyl-tRNA synthetases, the tRNAs, the ribosome). So, the connection between the codon and the amino acid is purely symbolic.

    A difference between computer language and human language is also that computer language is usually context independent, while human language is vastly context dependent. That means that human language is more ambiguous and flexible. In computer language, any instruction means only one thing (unless the language is designed to be partially context dependent, but the final instructions at machine level are nonetheless unambiguous). In that sense, at least for the protein coding part, DNA behaves like a computer.
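The “symbolic lookup table” point can be illustrated with a toy translator. This is only a sketch: the dictionary below contains just a handful of the 64 codons (my own selection, not a complete table), and real translation is of course biochemical, not string processing.

```python
# A few entries of the standard genetic code, treated as a lookup table.
# The point of the sketch: the codon -> amino-acid mapping works like an
# arbitrary association; any consistent table would do, as long as the
# "writer" (the gene) and the "reader" (the translation apparatus) agree.
GENETIC_CODE = {
    "AUG": "Met", "UUU": "Phe", "CUA": "Leu", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = GENETIC_CODE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("AUGUUUCUAGGCUAA"))  # Met-Phe-Leu-Gly
```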

  18. 18
    melvinvines says:

    Ken Miller gets debunked in this video…

    Irreducible Complexity – Why It’s Real
    http://www.youtube.com/watch?v=S4tpYozY2CM

  19. 19
    Collin says:

    Thanks guys. This is definitely what I’ve wanted to know for a long time: “So, the connection between the codon and the amino acid is purely symbolic.”

  20. 20
    Upright BiPed says:

    Yes Collin, the spatial arrangement of cytosine-thymine-adenine in DNA has absolutely nothing whatsoever to do with leucine.

    And the aminoacyl synthetase that makes the relationship possible has nothing to do with either, either.

    🙂

  21. 21
    lastyearon says:

    Upright,
    If protein coding were shown to be a result of purely biomechanical processes, would it no longer be symbolic to you?

  22. 22
    DrREC says:

    1) “The genetic code is …. optimized (as shown by many studies).”

    a) Which genetic code is optimized? Here are 24 alternative genetic codes that have evolved. Could you tell me which one is optimal?
    b) This is debatable. “Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization.” http://www.biomedcentral.com/1471-2105/12/56

    2) “the spatial arrangement of cytosine-thymine-adenine in DNA has absolutely nothing whatsoever to do with leucine.”

    Except that it physically imprints the leucine codon on the mRNA, which is recognized by physical base pairing with the tRNA, which in turn is charged by the leucyl-tRNA synthetase, which directly interacts with the anticodon on the tRNA in recognizing it for charging.

    3) “And the aminoacyl synthetase that makes the relationship possible has nothing to do with either, either.”

    Except for direct physical contact with the anticodon (or in a few cases another unique part of the tRNA) that is required for amino acid charging-
    http://www.nature.com/emboj/jo.....3059a.html

    4) “But in principle, any codon could describe any amino acid.”
    This contradicts your claim of optimality. Since the last base of many codons is less important (wobble), trying to use, say, CCC for Asn would be quite difficult, since Pro is coded for by CCx. There is a lot of contingency built into the system due to the biochemical mechanisms, which limit the “arbitrariness” of the code.

  23. 23
    DrREC says:

    “because no ID supporter has ever, as far as I know, argued ‘X is improbable; therefore X was designed.’”

    “The question remaining is how improbable does a specified thing have to be before we know it was designed? … That means we set the bar very high, meaning the thing in question will have to be extremely improbable to pass our design test.”

    Intelligent Design Uncensored: An Easy-To-Understand Guide to the Controversy By William A. Dembski, Jonathan Witt p. 67

    Am I missing something?

  24. 24
    DrREC says:

    Except specification, which is a tautology, because you define it to be so, and there is almost no attempt to deal with it in probability calculations.

  25. 25
    Collin says:

    Specification is a tautology, why? And if it is a definition, are all definitions unhelpful in science? Can’t we make definitions? Can you see the specification of a straight flush?

  26. 26
    DrREC says:

    Because in determining if something is designed, you start from the assumption that it is designed.

    A straight flush is an interesting example: out of 2.6 million poker hands, there are 40 straight flushes. Which is the specification: getting one of them, or any of them? Or any hand better than your opponent’s?
    Choosing the specification inserts a design assumption: that one of the flushes, or all of them, are what was “specified.”

    In nature, this is even clearer. A single protein of 100 amino acids is one member of a space of 20^100 sequences. But what is the specification? Having that exact sequence: a 1-in-20^100 chance. Having the same function? Some untestably more probable number. Having any function useful to the organism? Maybe something not so improbable at all.

    So choosing the specification is making assumptions about what you think the “design” must be.
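The straight-flush numbers are easy to verify by brute force. A short Python enumeration of all C(52, 5) = 2,598,960 five-card hands (my own card encoding, for illustration only) recovers the 40 straight flushes, royal flushes included:

```python
from itertools import combinations

RANKS = range(1, 14)  # 1 = Ace; 11, 12, 13 = J, Q, K
SUITS = "SHDC"
DECK = [(r, s) for r in RANKS for s in SUITS]

def is_straight_flush(hand):
    """True if all five cards share a suit and form a run (Ace high or low)."""
    if len({s for _, s in hand}) != 1:
        return False
    ranks = sorted(r for r, _ in hand)
    if ranks == [1, 10, 11, 12, 13]:  # ace-high: 10-J-Q-K-A
        return True
    return all(b - a == 1 for a, b in zip(ranks, ranks[1:]))

total = 0
straight_flushes = 0
for hand in combinations(DECK, 5):
    total += 1
    straight_flushes += is_straight_flush(hand)
print(total, straight_flushes)  # 2598960 40
```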

  27. 27
    Joe says:

    Because in determining if something is designed, you start from the assumption that it is designed.

    Wrong again. We have been over this- Newton’s First Rule.

    And specification = function wrt biology. Also not all amino acid chains (polypeptides) will form a functioning protein. If they could then there wouldn’t be any specification.

    So choosing the specification is making assumptions abou what you think the “design” must be.

    Dude, specification doesn’t automatically = design. But it does have to be accounted for. And so far a specification of high probability has always led to a designer.

    So we start with X- the thing being investigated and then try to determine its cause.

    Specification could very well be due to law/ regularity.

  28. 28
    Joe says:

    Does a blind watchmaker use symbols?

  29. 29
    DrREC says:

    Newton’s first rule again Joe? What an odd choice to add to your repertoire of one-line answers. No clue how you’ve decided it answers this question.

    Let’s put this in easy terms. Specification requires a specifier (you)! You choose the function. Is a bacterial flagellum specified? Why? Is it required for life? Does it need to look the way it does? Are there other things in life that perform the same function? Are there other things that could have evolved to perform the same function?

    “And so far a specification of high probability has always led to a designer.”

    I think you mean a specified (design?) of low probability. But see what you’re saying: “X is improbable; therefore X was designed.” And since you specified the specification, it is circular.

  30. 30
    gpuccio says:

    DrRec:

    Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes. The target space is defined as the sum of all sequences that exhibit that function, at least at the threshold level.

    Once that function is defined, we can compute the functional complexity for it.

    You are right that it is the probability of getting that particular function, and not any possible function.

    But that does not help much. In an already organized biological environment, the already existing complexity drastically limits the possible new selectable functions. Remember that the function must be selectable; that is, it must, as it is, give a reproductive advantage. That restricts the functional target a lot.

    Moreover, even if there are n possible selectable functions of reasonable complexity (let’s say not more than 500 bits) that could be found in a specific biological environment, please remember that their probabilities can only be summed to get to the final probability of “any possible selectable function”. That will not help much, with search spaces of that dimension. So, your statement “Maybe something not so improbable at all” looks incredibly optimistic.

    Darwinists continue to hide behind the fairy tale of “any possible function”, only because they believe that it cannot be quantitatively computed. Those “possible functions” must be selectable, and integrated in already existing complexity. In many cases, maybe in most, a selectable function will be irreducibly complex, because it needs the coexistence of many new proteins.

    The thresholds we consider in ID, be it Dembski’s 500 bits or my 150 bits, are extreme, and they easily take into account the possibility that more than one new function may be selectable. The empirical threshold for success in the wild or in the lab is still in the order of 2-6 AAs.

    So, be reassured: it is and remains extremely improbable, all considered.

    And the initial statement remains perfectly true:

    “no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.”

    Complexity and specification are always connected in ID, whatever you may think.

  31. 31
    lastyearon says:

    A specification is defined by a designer to produce something that performs a specific function resulting in a specific desired effect. You can’t use that term in your argument that proteins were designed because that’s circular reasoning.

  32. 32
    Joe says:

    Yes Newton’s First Rule and I see you are still choking on it.

    And no, specification does not require a specifier. I take it you did not read my response, or perhaps you are too dim to grasp it.

    Dude, specification doesn’t automatically = design. But it does have to be accounted for. And so far a specification of high probability has always led to a designer.

    So we start with X- the thing being investigated and then try to determine its cause.

    Specification could very well be due to law/ regularity.

    Specification PLUS improbability- and yes we determine specification via functionality.

    So to sum up: DrREC doesn’t understand how scientists distinguish design from non-design, and he thinks his ignorance is a refutation.

    Cause and effect relationships “doc”- so all YOU have to do to refute any given design inference is actually step up and demonstrate the power of your position, which as of now seems to be a lot of bloviating.

  33. 33
    Joe says:

    Look, just admit that you are clueless and move on.

    Functionality is the specification. And if any amino acid sequence could produce the same function then there isn’t any specification.

  34. 34
    lastyearon says:

    Everything has a function, in the sense that everything interacts with the stuff around it to produce an effect. Sometimes the function is simple, sometimes complex. A function doesn’t imply a designer.

  35. 35
    Joe says:

    Complex systems that produce a useful function always arise from a designer- at least that is what all of our experiences and observations say.

    But then again all you care about is obfuscation

  36. 36
    lastyearon says:

    Useful to whom? You’re still assuming a conscious entity intended life in your starting premise.

  37. 37
    Joe says:

    You’re still assuming a conscious entity intended life in your starting premise.

    No, I am not. And YOU don’t get to tell me what I am assuming.

    My inference is based on KNOWLEDGE of cause and effect relationships whereas your position relies solely on the battle-cry “anything but design at all costs!”

  38. 38
    Heinrich says:

    Hmm, there’s a subtle problem about specification in here. To calculate the probabilities of a specified pattern, we have to have the specification before we see the data (otherwise we’re drawing our target around the arrow).

    In Miller’s example, the pattern is specified afterwards. In Barry’s example, there’s no explicit a priori specification of the pattern. However, we have some intuitive sense that the observed pattern is interesting. So, we can think that there is a vague a priori specification.

    The problem is, though, that to make any calculations relevant, we need to specify “interestingness”, i.e. we need to be able to list all “interesting” patterns. I think we can get away with doing this after we see the data, as long as we can be very clear about the range of “interesting” patterns. But then that just raises the question of whether it’s been done in ID.

    Anyone?

  39. 39
    gpuccio says:

    Heinrich:

    I quote from my answer to DrREC in this thread:

    “Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes. The target space is defined as the sum of all sequences that exhibit that function, at least at the threshold level.”

    Just to start the discussion.

  40. 40
    gpuccio says:

    lastyearon:

    So, you conflate “function” with “interaction”? On what basis?

  41. 41
    Petrushka says:

    Just for fun, don’t forget the experiment in which the viability of a bacterium was recovered by a synthetic protein having no known function.

    I know we had some fun with that, but my current question would be how does a designer anticipate this kind of result, assuming he is using directed evolution?

  42. 42
    gpuccio says:

    Petrushka:

    Do you mean the rugged landscape paper with the viral model?

  43. 43
    lastyearon says:

    gpuccio,
    In order to justify that a protein was designed for a specific function, you need some evidence that a conscious designer had an intended purpose for the protein. You haven’t done that. All you’ve done is note an observed current function and fit a specification around it. It’s no different than saying that a specification of gold is to provide people with a good investment.

  44. 44
    gpuccio says:

    lastyearon:

    I really don’t follow your reasoning.

    I don’t need to “justify” that a protein was designed for a specific function.

    What I do is:

    a) I observe that a protein has a specific function. I define it and provide a way to measure it and a minimal threshold for it.

    b) By some approximation, I measure the search space for that kind of protein and the target space (the number of functional sequences that ensure that function as I have defined it).

    c) The ratio of the target space to the search space is the dFSCI of that functional protein (expressed as -log2, in functional bits).

    d) If the dFSCI is higher than a conventional threshold (I usually propose 150 bits for a realistic biological system, and I believe I am still too generous), I infer design as the best explanation, on the basis that such levels of dFSCI have been observed only in designed things.

    I am afraid you don’t really understand the process.

    And yes, “anything that provides people a good investment in a well defined context” is certainly a possible specification for something. But what has that to do with biology?
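Steps (b) through (d) above amount to a single formula: -log2 of the target/search ratio, compared against a threshold. A minimal sketch, with made-up numbers purely for illustration (neither the 100-residue length nor the 10**40 target size comes from any real protein):

```python
import math

def dfsci_bits(search_space, target_space):
    """dFSCI as described above: -log2 of the target/search ratio, in functional bits."""
    return -math.log2(target_space / search_space)

# Hypothetical example: a 100-residue protein, search space of 20**100
# sequences, with an assumed target of 10**40 functional sequences
# (an invented number, used only to show the arithmetic).
bits = dfsci_bits(20 ** 100, 10 ** 40)
print(round(bits, 1))  # ~299.3, well past a 150-bit threshold
```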

  45. 45
    lastyearon says:

    a) I observe that a protein has a specific function. I define it and provide a way to measure it and a minimal threshold for it.

    As I understand what you’re saying, you observe a particular function, and define what the minimal qualifications are to accomplish that function.

    By some approximation, I measure the search space for that kind of protein and the target space (the number of functional sequences that ensure that function as I have defined it).

    I think you’re saying that you try to identify how improbable that protein’s function is by identifying the range of possible functions (and non-functions) for it.

    c) The ratio of the target space to the search space is the dFSCI of that functional protein (expressed as -log2, in functional bits).

    d) If the dFSCI is higher than a conventional threshold (I usually propose 150 bits for a realistic biological system, and I believe I am still too generous), I infer design as the best explanation, on the basis that such levels of dFSCI has ever been observed only in designed things.

    So a function that is extremely improbable, based on a large search space and a small target space, has high dFSCI.

    Missing in your process is some way of independently assessing whether the function has hit a specific target that a designer intended. Without that, all you’re doing is observing that protein X is very complicated, and it’s very unlikely that anything else could accomplish the things it does. In no way does that imply that anyone or anything intended it to do that.

  46. 46
    gpuccio says:

    lastyearon:

    You have understood well, except for the last step.

    The design inference is, as the word says, an inference.

    The reasoning goes (briefly) as follows:

    a) I define a formal property, objectively verifiable in an object.

b) I check that property in objects that are certainly designed (human artifacts, where we can directly ascertain the design process), and find that it is often present in that category.

c) I check that property for objects that are not designed (any natural object where we can exclude, empirically and reasonably, any intervention of a conscious designer in the determination of the specific form we observe), and find that it is never observed.

d) Biological objects, being the controversial category, are obviously excluded from this phase.

e) I can also check my property in a blind way against human-designed objects and non-designed objects: I find that, if used as an empirical marker of design, it gives no false positives and many false negatives (all objects with dFSCI are designed, but not all designed objects exhibit dFSCI).

f) From the previous empirical passages, I derive the reasonable expectation that dFSCI is a good marker of designed objects: very specific (no false positives), but not sensitive (many false negatives).

g) Analyzing the controversial set of objects, biological objects, I find that many of them exhibit very high levels of dFSCI.

    h) On that basis, I infer a design origin for those objects as the best scientific explanation available.

    i) That implies that other proposed explanations must be shown to be wrong, or flawed, and that is part of ID theory too.
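Steps (b) through (f) amount to validating a binary marker on objects of known origin before applying it to the disputed set. A toy sketch of the claimed "specific but not sensitive" behavior (the objects and labels below are entirely invented for illustration):

```python
# Each pair is (marker_fires, actually_designed); data invented purely
# to illustrate the no-false-positives / many-false-negatives pattern.
validation_set = [
    (True,  True),   # e.g. a written paragraph: marker fires, designed
    (True,  True),   # e.g. a computer program: marker fires, designed
    (False, True),   # e.g. a plain carved stick: designed, marker silent
    (False, False),  # e.g. a pebble: not designed, marker silent
    (False, False),  # e.g. a snowflake: not designed, marker silent
]

false_positives = sum(1 for fired, designed in validation_set
                      if fired and not designed)
false_negatives = sum(1 for fired, designed in validation_set
                      if designed and not fired)
# A usable marker, on this account, needs false_positives == 0;
# false negatives are tolerated (step f).
```

This only illustrates the logic of the validation step; whether any real marker behaves this way is exactly what the two sides are disputing.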

  47. 47
    DrREC says:

    8.1.1.1.2 gpuccio

“Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes.”

So in the determination of design, you have “Specified X is improbable; therefore X was designed.” And specified=”defined by a conscious designer.” So you choose design in determining design. Specification requires a specifier, and that is you.

Let’s think about what this actually means. In determining specificity, you assume design: that, say, an enzyme is necessary. You assume it needs to be of that form and function. You assume no ancestral promiscuous function covered for it. You assume the design of the system in a way that ignores the evolutionary hypothesis. You specify the specificity.

Then you handwave about the complexity, never actually calculating the number of forms that could cover the same function.

  48. 48
    gpuccio says:

    DrREC:

    Wrong.

    The specification is a possible function recognized and defined explicitly by a conscious observer.

Let’s take the example of an enzyme. It accelerates a specific biochemical reaction. That is a function that can be defined objectively, and measured in the lab. I am not making it up. My only role is to recognize it as a function, because functions have a meaning only for conscious and purposeful agents.

    You assume it needs to be of that form and function.

What do you mean? I am assuming nothing. My specification, at this point, is: “any molecule that can accelerate reaction X by at least Y, in the lab”. There is no assumption at all.

    You assume no ancestral promiscuous function covered for it.

That is completely gratuitous. If you have followed at least some of my posts here, you should know that I have many times stated that dFSCI gives us the probability of getting to a certain type of functional sequence in a purely random system, either through a random search or a random walk from an unrelated state. I have said that clearly many times.

    Possible functional intermediates can certainly be taken into consideration, but they must be shown to exist, and to be naturally selectable in a specific context. They cannot only be “imagined” or declared “possible”. That is not science, but fairy tales.

If a selectable intermediate is known, dFSCI will be computed for that intermediate, and then for the transition to the final result.

IOWs, if B is the final protein, and no selectable intermediate is known, I will compute dFSCI for B. If A is shown to be a functional selectable intermediate for B, I will compute dFSCI for A, and dFSCI for the transition from A to B.

IOWs, I compute dFSCI only for the parts of the algorithm that are attributed to RV. NS is a necessity mechanism, and it is treated separately. But it must be explicit, demonstrated NS.

    For basic protein domains, no path based on selectable intermediate is known. Therefore, with our present knowledge, their dFSCI corresponds to the whole functional information of the molecule.
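The bookkeeping described here (charge to random variation only what selection has not been shown to explain) can be sketched numerically. All target- and search-space sizes below are invented for illustration:

```python
import math

def bits(target: float, search: float) -> float:
    """Functional bits: -log2(target space / search space)."""
    return -math.log2(target / search)

THRESHOLD = 150.0  # the biological threshold proposed in the thread

# No known selectable intermediate: the whole 80-AA protein B is
# charged to random variation in one step.
whole_B = bits(1e30, 20.0 ** 80)      # ~246 bits

# A demonstrated selectable intermediate A (40 AAs) splits the account:
# dFSCI for A, then dFSCI for the A -> B transition; the selection and
# expansion of A between the steps is treated as a necessity mechanism.
step_A = bits(1e20, 20.0 ** 40)       # ~106 bits
step_A_to_B = bits(1e10, 20.0 ** 40)  # ~140 bits
```

With these invented numbers the undivided protein would clear the 150-bit threshold while each separately evaluated step would not, which is exactly why the two sides argue over whether selectable intermediates have been demonstrated.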

    Now, please, don’t come and say that we cannot exclude that functional intermediates will be found some day. Have some respect for my intelligence (and patience).

    You assume the design of the system in a way that ignores the evolutionary hypothesis.

Not true. I simply ignore generic hypotheses about possible paths that have never been proposed and never shown, and that, as any sensible observer can understand, are in no way a scientific alternative, being based on mere imagination and faith.

    You specify the specificity.

    That’s really tough! Is it meant as an offense? 🙂

    Anyway, whatever it means, it’s simply not true. What I do is: I define the specification.

Then you handwave about the complexity, never actually calculating the number of forms that could cover the same function.

    Completely false. If you read what I have written, you will see that the computation of the functional space is a fundamental step in the computation of dFSCI. Where is your problem?

  49. 49
    DrREC says:

    I find this reply a bit unusual.

First, you keep acting as though FSCI is actually calculable in a meaningful manner. I’ve never seen you do it. Perhaps take human aldose reductase, and walk me through the process. It might clarify things. What is its specificity? What is its complexity?

Secondly, there seems to be an awful lot of knowledge that is dispensable to you:

    “Possible functional intermediates can certainly be taken into consideration, but they must be shown to exist, and to be naturally selectable in a specific context.”

    Wouldn’t it be necessary to rule them out to make a design inference?

and again: “If a selectable intermediate is known, dFSCI will be computed for that intermediate, and then for the transition to the final result.

IOWs, if B is the final protein, and no selectable intermediate is known, I will compute dFSCI for B. If A is shown to be a functional selectable intermediate for B, I will compute dFSCI for A, and dFSCI for the transition from A to B.

IOWs, I compute dFSCI only for the parts of the algorithm that are attributed to RV. NS is a necessity mechanism, and it is treated separately. But it must be explicit, demonstrated NS.”

    So until evolution is demonstrated for something, to your satisfaction, you assume design? Why not do the actual work, as design scientists, and determine the specificity and complexity?

    “Now, please, don’t come and say that we cannot exclude that functional intermediates will be found some day.”

    That seems inherently reasonable, given the work of those reconstituting ancestral proteins and determining those intermediates.
    http://scholar.google.com/scho.....i=scholart

    “For basic protein domains, no path based on selectable intermediate is known. ”

That is actually false. Small peptides can symmetrically fold to make functional domains. This may be a reason so many protein domains have internal symmetry or are built of repeats.
    http://www.pnas.org/content/108/1/126.full

Last, we’re straying from my original point. You expressed it best yourself: “Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes.”

In detecting design, you assign the design. It really is simply that stupid.

  50. 50
    PaV says:

    DrREC:

So in the determination of design, you have “Specified X is improbable; therefore X was designed.” And specified=”defined by a conscious designer.” So you choose design in determining design. Specification requires a specifier, and that is you.

    Why do I get the feeling that you, like MathGrrl before you, are not interested in learning about ID, but only in trying to find fault with it?

    Your statement, “Specification requires a specifier” completely misunderstands the technical meaning of a “specification”. Why don’t you read a book about ID? Why don’t you read NFL, for example?

I’ve read R.A. Fisher’s book and Origins. Why not spend the time learning about this stuff before you come over to the website?

  51. 51
    DrREC says:

    PaV,

    This seems a common tactic of yours. If you’re so well read, why don’t you dismiss my stupid questions with a line or two?

I’m quite aware of Dembski’s argument. I’m also aware of the many, many permutations it seems to have spawned on this site. CSI, dFSCI, FIASCO or whatever KF’s pet version is called.

    Do you disagree with gpuccio’s (someone else who claims to be well read on the matter) statement that: “Functional specification…is defined by a conscious designer.” Since functional specification is what is used to determine design, you are determining design with a determined design.

    In light of the counterhypothesis that evolution can produce results that appear designed, this is most unsatisfying.

  52. 52
    Petrushka says:

    Why do I get the feeling that you, like MathGrrl before you, are not interested in learning about ID, but only in trying to find fault with it?

    Both sides could gain by making the assumption that the other side is arguing in good faith.

    From my side I wonder why ID continues to avoid proposing a theory of design. It would seem to me that before arguing that living things are designed rather than evolved, one should be able to demonstrate that this is even possible.

  53. 53
    DrREC says:

    Let’s go back to this post of mine, a simple example:

    “Because in determining if something is designed, you start from the assumption that it is designed.

A straight flush is an interesting example: out of 2.6 million poker hands, there are 40 straight flushes. Which is the specification: getting one of them, or any of them? Or any hand better than your opponent’s? Choosing the specification inserts a design assumption: that one of the flushes, or all of them, are what was “specified.”

In nature, this is even clearer. A single protein of 100 amino acids is one member of a space of 20^100 sequences. But what is the specification? Having that exact sequence (1 in 20^100)? Having the same function? Having any function useful to the organism?

So choosing the specification is making assumptions about what you think the “design” must be.”
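DrREC’s point that the measured improbability depends on the chosen specification is easy to make concrete with the standard poker counts (the bit values just restate the probabilities on a log scale):

```python
from math import comb, log2

hands = comb(52, 5)        # 2,598,960 possible five-card hands
straight_flushes = 40      # 10 high-card ranks x 4 suits

# One and the same dealt hand, under three different specifications:
bits_exact = -log2(1 / hands)                  # "this exact hand": ~21.3 bits
bits_any_sf = -log2(straight_flushes / hands)  # "any straight flush": ~16.0 bits
bits_any_hand = -log2(hands / hands)           # "some hand or other": 0 bits
```

The deal is identical in all three cases; only the specification chosen after the fact moves the number between roughly 21 bits and zero.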

  54. 54
    gpuccio says:

    DrREC:

    I find this reply a bit unusual.

    I take that as a compliment.

First, you keep acting as though FSCI is actually calculable in a meaningful manner. I’ve never seen you do it. Perhaps take human aldose reductase, and walk me through the process. It might clarify things. What is its specificity? What is its complexity?

    Durston has done it for me (and for you). Look here:

    http://www.tbiomed.com/content.....2-4-47.pdf

    In Table 1, you find the computation of functional complexity in Fits (functional bits) for 35 different protein families.

    Let’s take one as an example: Ribosomal S7.

    Length: 149 AAs;

    Number of sequences examined for the computation: 535

    Null state (search space): 644 bits

    Functional complexity: 359 Fits

As you can see, it is not I who keeps acting as though FSCI is calculable in a meaningful manner. It simply is.
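The “null state” column in Durston’s Table 1 is simply the log2 size of the raw sequence space, which the S7 figures quoted above let us check directly:

```python
import math

length_aa = 149                          # Ribosomal S7, as quoted from Table 1
null_state = length_aa * math.log2(20)   # 20 amino acids per site

# 149 x log2(20) = 643.97, i.e. the ~644 bits reported as the null
# state. The 359 Fits are then the measured reduction from that null
# over the 535 aligned sequences, not something reproducible here.
```

Only the null state is recomputed; the 359 Fits require the full alignment data from the paper.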

    Wouldn’t it be necessary to rule them out to make a design inference?

Absolutely not. Darwinists have a theory that depends on the existence of those functional intermediates. It’s not my theory. Darwinists have the obligation to show that those intermediates exist. We have no obligation to “rule them out”, any more than we have an obligation to rule out the existence of unicorns. Simple epistemology.

    So until evolution is demonstrated for something, to your satisfaction, you assume design?

    Yes, I infer (not “assume”; epistemology, again!) design as the best explanation, because it explains well, and because there is no other explanation.

    Why not do the actual work, as design scientists, and determine the specificity and complexity?

    As shown, it has been done.

The first paper you quote is not pertinent at all (and extremely speculative). The reason is simple, and you find it in the methodology section:

    “We start with a set of homologous proteins connected by a known evolutionary tree T, and the amino acids found at a given location in the previously aligned sequences of the homologous proteins”

Well, either you have not read the paper, or you have not understood my argument. Here, they are debating the possible ancestral protein of a specific protein family, as can be seen from the words “homologous proteins connected by a known evolutionary tree”.

My argument, instead, is clearly (if you have read my words) about the generation of basic protein domains, which have no homology with one another. Is that clearer this time?

The second paper is even more speculative, and in no way shows an evolutionary path with selectable intermediates, except at a complete fairy-tale level. If that is the best evidence you can gather, I am really happy to be in the opposite camp.

However, the paper is an interesting piece of top-down protein engineering, which should interest Petrushka 🙂

    “To address this question, a unique “top-down symmetric deconstruction” strategy was utilized to successfully identify a simple peptide motif capable of recapitulating, via gene duplication and fusion processes, a symmetric protein architecture”

    Top down protein engineering, Petrushka! Can you hear me?

    Finally, you say:

In detecting design, you assign the design. It really is simply that stupid.

    There is definitely something stupid in that remark, but out of courtesy I will not say what.

The simple truth is this: “To detect design, I recognize a function, define it, and compute the complexity necessary for the implementation of that function. If the complexity is high enough, I infer design as the best explanation.”

Does that sound like the same thing? If to you it does, then there is no hope…

  55. 55
    DrREC says:

“To detect design, I recognize a function, define it”

    You choose the design.

  56. 56
    DrREC says:

    Go with this one:

A straight flush is an interesting example: out of 2.6 million poker hands, there are 40 straight flushes. Which is the specification: getting one of them, or any of them? Or any hand better than your opponent’s? Choosing the specification inserts a design assumption: that one of the flushes, or all of them, are what was “specified.”

    And answer it.

  57. 57
    DrREC says:

Is FSC equal to FCSI? Seems like the calculations are pretty different. By the way, the method estimates the functional portion of sequence space by known sequences. Since there are many sequences that have no function (and thus may share the same function), and since evolution has likely not explored all of sequence space, and since most sequences are from evolutionarily related organisms, this is a pretty weak technique.

    Hilariously, the number of fits for some whole domains and enzymes is well below the universal probability bound. Oops.

The first paper demonstrates finding functional intermediates, which you dispute. The second is a proof of principle that small peptides can assemble into domains. Yes, it uses engineering and design. What do you expect science to do? Wait around and observe for millions of years? This could be the dumbest of all ID arguments: that experiments are designed!

  58. 58
    Upright BiPed says:

    What function does tRNA serve in protein synthesis?

    When biologists described it as an “adapter molecule” were they choosing its design?

  59. 59
    DrREC says:

Sorry, I guess that is dFSCI.

    Somehow the calculation in the paper and your description above seem quite at odds….

  60. 60
    DrREC says:

    No, they are describing a role in a process.

Determining that that role is “specified”, i.e. a target, is inserting a design assumption into your design detector.

Can you provide me a metric of specification that doesn’t make this assumption?

    Try it with the poker analogy.

  61. 61
    DrREC says:

Or is it FSCIO? I forget……

    And why not use your own units instead of Fits, if this is established and easily calculable?

  62. 62
    Joe says:

    Dude,

    If you are playing poker then the specification is set by the rules of the game.

    That said there isn’t a design inference if someone gets one royal flush dealt to them. But if someone gets dealt ten pat hands in a row only a moron wouldn’t suspect something is wrong.

  63. 63
    CJYman says:

That is incorrect. The fact that a “target” (specified event) may exist does not on its own indicate design. Up to this point in design detection there is no assumption of design. It may be designed or it may not be designed. The next step is to calculate the probability of that preliminarily identified target against all other possible patterns (calculate the specificity) and then compare it against the UPB. At this point, depending on the calculation, and if the pattern is not defined by the physical properties of the medium in which it exists, intelligent design can be determined the most likely explanation.

  64. 64
    DrREC says:

OK, we’ve got an event, say a pulsar transmission. Let’s assume it is complex.

    “The next step is to calculate both the probability of that preliminarily identified target against all other possible patterns (calculate specificity) and then compare against the UPB.”

    All other patterns of what? Just all other patterns?

  65. 65
    Joe says:

    Consider pulsars – stellar objects that flash light and radio waves into space with impressive regularity. Pulsars were briefly tagged with the moniker LGM (Little Green Men) upon their discovery in 1967. Of course, these little men didn’t have much to say. Regular pulses don’t convey any information–no more than the ticking of a clock. But the real kicker is something else: inefficiency. Pulsars flash over the entire spectrum. No matter where you tune your radio telescope, the pulsar can be heard. That’s bad design, because if the pulses were intended to convey some sort of message, it would be enormously more efficient (in terms of energy costs) to confine the signal to a very narrow band. Even the most efficient natural radio emitters, interstellar clouds of gas known as masers, are profligate. Their steady signals splash over hundreds of times more radio band than the type of transmissions sought by SETI.- Seth Shostak

  66. 66
    gpuccio says:

    DrREC:

    I have been away one day, and you are now famous! 🙂

Anyway, I have no time now to read “your” threads, so for the moment I will just answer a couple of points here.

    fsc = functionally specified complexity

    fcsi = functionally complex specified information

dFSCI (the term I usually use) = digital functionally specified complex information

    CSI = complex specified information.

    The concept is the same. The letters may vary. The only meaningful differences are, IMO:

    a) CSI is the widest concept: any information that is complex and specified. That is, usually, Dembski’s concept. Various kinds of specifications can apply.

b) FSCI is a subset, where the specification is exclusively functional: the recognition of a function implemented by the information. That is the subset most appropriate for biological information.

    c) dFSCI (my term) is still a subset, where the information is digital. It applies well to biological information in the genome and proteome, and it can be treated more simply.

    I hope that clarifies.

The unit of complexity is always the same: bits, expressed as -log2 of the ratio of the target space to the search space. As they express the bits connected to the function, Durston calls them Fits (functional bits). No difference here.

    You say:

    Seems like the calculations are pretty different

No. There are two ways to approximate the functional space of proteins. One is to study the structure-function relationship in specific cases and to reason on the available data (that has been pursued mainly by Axe). The other, the Durston method, is to use the existing proteome and compute the reduction in Shannon uncertainty at each AA site.
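The Durston-style computation described here (per-site reduction in Shannon uncertainty, summed over the alignment) can be sketched on a toy alignment. The six three-residue “sequences” are invented; a real calculation uses hundreds of full-length sequences:

```python
import math
from collections import Counter

def site_entropy(column: str) -> float:
    """Shannon uncertainty (bits) of one alignment column."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

H_NULL = math.log2(20)   # ~4.32 bits when all 20 amino acids are equiprobable

alignment = ["MKV", "MKL", "MKV", "MRV", "MKL", "MKV"]   # invented toy data
columns = ["".join(seq[i] for seq in alignment) for i in range(3)]

# Fits: summed per-site reductions from the null-state uncertainty.
fits = sum(H_NULL - site_entropy(col) for col in columns)
```

A fully conserved column contributes the maximum reduction (about 4.32 bits), a variable column less, which is how conservation across a family translates into functional bits in this scheme.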

    By the way, the method estimates the functional portion of sequence space by known sequences.

Correct. Sequences that are the result of billions of years of evolution.

    Since there are many sequences that have no function

    And so? Would that be an objection?

    and thus may share the same function

    It’s not clear what you mean.

    and since evolution has likely not explored all of sequence space

That is probably true, at least in part. That’s why the Durston method is an approximation of the measure, not an exact measure. The best approximation available. There are some assumptions, all of them very reasonable. One of them is that the functional sequence space for the analyzed functions has been reasonably explored. And the application of Shannon’s method gives a very good approximation anyway, if we assume (reasonably) that existing proteins with that function are a representative sample of all possible proteins with that function.

    and since most sequences are from evolutionarily related organisms

And so? That’s exactly what we are looking for: the exploration of a functional space through neutral evolution, which preserves the function. And many of those families are very old. Many of them are LUCA families. At that level, all organisms are evolutionarily related, if we accept common descent (as I do).

    this is a pretty weak technique

I completely disagree, for all the reasons I have given. It is a brilliant technique, and it really measures what it tries to measure: the functional complexity of protein families. The great differences in mean complexity per site are extremely interesting, pointing to the important fact that not all proteins have the same level of functional complexity in relation to their raw sequence length.

The Durston method is brilliant, simple and powerful. It is rejected by Darwinists for merely ideological reasons.

    You started with a simple question:

First, you keep acting as though FSCI is actually calculable in a meaningful manner. I’ve never seen you do it. Perhaps take human aldose reductase, and walk me through the process. It might clarify things. What is its specificity? What is its complexity?

I have given a simple answer, but it seems that it is not comfortable for you. At least, please, admit that FSCI is actually calculable, and has been calculated, even if in your opinion the method is weak. That would be a more correct position.

  67. 67
    gpuccio says:

    DrREC:

    Let’s go to the papers:

    The first paper demonstrates finding functional intermediates, which you dispute.

No. Read what I have written. The first paper is about finding evolutionary continuity in protein families, exactly the point on which the Durston method is based.

I do believe in neutral evolution of protein families, and maybe in limited functional micro-evolution at the level of a few AAs, especially at the active site. That can be discussed, as Axe has done in a recent paper.

    What I do dispute is that any functional naturally selectable path has been presented for the origin of basic protein domains (those considered by Durston). I am sure you can appreciate the difference.

    The second is a proof of principle, that small peptides can assemble into domains.

And so? It is certainly not a path that shows that those small peptides were functional precursors, naturally selectable and naturally selected in natural history.
As you say, it is a “proof of principle” that intelligent engineering can build more complex structures from simpler ones. Thank you for the news.

    What do you expect science to do?

Well, let me think a moment… Maybe find in the proteome the precursors that are believed to exist, define their function, explain why they are naturally selectable and why in the past they gave a reproductive advantage to some population, show how they could have expanded, compute the probability that they could then assemble into bigger structures by RV… Am I expecting really too much? Can science do something to prove its theories, or must they remain forever fairy tales, accepted only in the name of academic authority?

    Hilariously, the number of fits for some whole domains and enzymes is well below the universal probability bound. Oops.

What’s hilarious in that? It is perfectly natural that some proteins, especially the shorter ones, have functional complexity below the UPB. And so? Why are you so amused by something that is perfectly expectable?

    Moreover, the UPB is not an appropriate threshold for a realistic biological system. I have discussed that here:

    http://www.uncommondescent.com.....ent-410355

    proposing a biological threshold of 150 bits for biological systems.

Even so, however, it is perfectly expectable that some smaller proteins are under that threshold too. But, just to sum it up, of the 35 protein families analyzed by Durston:

    6 (17%) are above the 1000 bits limit

    11 (31%) are above the 500 bits limit (Dembski’s UPB)

    28 (80%) are above my proposed 150 bits limit

    I find all that extremely interesting and significant, and not hilarious at all. Different sensibilities, maybe.
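The percentages quoted for Durston’s 35 families check out arithmetically, a trivial verification using only the counts given above:

```python
families = 35
above_1000, above_500, above_150 = 6, 11, 28   # counts as stated in the comment

percentages = [round(100 * n / families)
               for n in (above_1000, above_500, above_150)]
# 6/35, 11/35 and 28/35 round to 17%, 31% and 80%, as stated.
```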

  68. 68
    gpuccio says:

    DrREC (et al.):

    I apologize for the lack of order in my posting: for a better understanding, post 21.2 should be read before post 21.1.2.

  69. 69
    Joe says:

    Is there a theory of archaeology? How about a theory of forensic science?

    Also ID is NOT anti-evolution and is perfectly OK with organisms evolving by design.

  70. 70
    Joe says:

    from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):

    [N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon’s classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.

    Here is a formal way of measuring functional information:

    Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, “Functional information and the emergence of biocomplexity,” Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007).

    See also:

    Jack W. Szostak, “Molecular messages,” Nature, Vol. 423:689 (June 12, 2003).

  71. 71
    DrREC says:

“What’s hilarious in that? It is perfectly natural that some proteins, especially the shorter ones, have functional complexity below the UPB. And so? Why are you so amused by something that is perfectly expectable?”

Sorry, I stopped short. What is hilarious is that if two domains, say from the 20% below even a 150-bit limit, recombine, they are suddenly above it.

So two ‘natural’ proteins, undergoing a highly probable natural process, yield a product with the appearance of design.

    This doesn’t make you hesitate?

  72. 72
    gpuccio says:

    DrREC:

    This doesn’t make you hesitate?

No. Again, you seem not to see the difference between computing a probability in a purely random system and computing what happens if NS can be shown to intervene.

That’s why my argument is always about “basic protein domains”, those for which no explicit intervention of NS in precursors is known.

Now, let’s say that a protein AB contains two different functional domains: A and B.

Now, if no single domain (A or B) can be shown to be individually functional and naturally selectable, we can still treat the whole protein as one functional object and compute its total functional complexity, which will be the sum of the bits of functional complexity in A and B, because the protein is an irreducibly complex object.

But if A and B are individually functional and naturally selectable, the scenario is different. Each of the two shorter components must be evaluated for its functional complexity, and we can no longer just add the bits of one to those of the other if they recombine to make a new, different functional protein. The system becomes different, because the expansion of A and B, due to the reproductive advantage that each of them confers, redefines the probabilistic resources, and we should consider separately:

    – The probability of getting A in a random system

    – The probability of getting B in a random system

    – The probability of a functional recombination of A and B, if both A and B are selected functionally and expanded in the population.

There is no doubt that the functional natural selection of A and B would represent a valid path to AB: not necessarily a credible path, but one with higher probabilities of success.
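The accounting difference described above (bits add for an irreducibly complex whole, but each step is judged on its own once the parts are selectable) in a minimal sketch, with invented bit values:

```python
THRESHOLD = 150.0                 # proposed biological threshold
bits_A, bits_B = 120.0, 110.0     # invented domain complexities

# Irreducibly complex AB, neither part selectable alone: bits add,
# and the whole is charged to random variation in one step.
bits_AB_irreducible = bits_A + bits_B      # 230 bits, above threshold

# A and B each functional and selected: each step is judged on its own
# (the recombination step would get its own, separate probability).
steps_cross_threshold = [b > THRESHOLD for b in (bits_A, bits_B)]
```

On these made-up numbers the irreducible whole exceeds the threshold while neither selectable step does, which is the scenario both commenters are arguing over.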

    Moreover, you are not right in equating a protein with less than, say, 150 bits of FI with a “natural” protein, and one with more than that with “the appearance of design”.

It would be more correct to say that a protein with less than 150 bits is a case where we cannot explicitly infer design, while the second case is one where design can be inferred.

And again, the correct model for inferring design is that of basic protein domains. At present, the vast majority of them exhibit much more than 150 bits of FI, and for none of them has a gradual, naturally selectable path been shown.

    Multi domain proteins can certainly be studied too, but as said the analysis becomes more complex.

  73. 73
    DrREC says:

“But if A and B are individually functional and naturally selectable, the scenario is different. Each of the two shorter components must be evaluated for its functional complexity, and we can no longer just add the bits of one to those of the other if they recombine to make a new, different functional protein. The system becomes different, because the expansion of A and B, due to the reproductive advantage that each of them confers, redefines the probabilistic resources, and we should consider separately:

    – The probability of getting A in a random system

    – The probability of getting B in a random system

    – The probability of a functional recombination of A and B, if both A and B are selected functionally and expanded in the population.”

    Excellent. You’re starting to get it.

    ID can’t simply go from big protein to big numbers. The calculation of fsci must be for a protein that cannot be deconstructed into simpler, functional components.

Could you name me one? Note the databases are FULL of domain fusions, where proteins that work together in one organism are fused into a larger protein in another.
    http://www.biomedcentral.com/1471-2105/5/161

    Why do you think repeat proteins abound in nature? Why are proteins built of simpler domains, which in turn are built of simpler motifs?

    So to make the design inference, you need a protein that has been rigorously demonstrated to be UNABLE to have been evolved from simpler components.

    Considering the traceability of examples to the contrary, and even de novo genes, I’d love an example and some calculations.
