
The Relationship Between ID and Common Descent


Since this has popped up a lot in the last few weeks, I wanted to repost an old post of mine describing the relationship between ID and Common Descent. I think it is pretty much as relevant now as when I originally posted it almost 6 years ago.

Many, many people seem to misunderstand the relationship between Intelligent Design and Common Descent. Some view ID as equivalent to Progressive Creationism (sometimes called Old-Earth Creationism); others see it as equivalent to Young-Earth Creationism. I have argued before that the core of ID is not about a specific theory of origins. In fact, ID’ers hold a variety of views, including Progressive Creationism and Young-Earth Creationism.

But another category that is often overlooked is those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show that it is, in fact, a consistent proposition.

I should start by noting that I do not myself hold to the Common Descent proposition. Nonetheless, I think the relationship of ID to Common Descent has been misunderstood often enough to warrant some defense.

The issue is that most people understand common descent entirely from a Darwinian perspective. That is, they assume that the notions of natural selection and gradualism follow closely from the notion of common descent. However, there is nothing that logically ties these together, especially if you allow for design.

In Darwinism, each feature is a selected accident. Therefore, Darwinian phylogenetics often uses parsimony as a guide, meaning that it tries to construct trees so that complex features don’t have to evolve more than once.

The ID version of common descent, however, doesn’t have to play by these rules. The ID version of common descent includes a concept known as frontloading – where the designer designed the original organism so that it would have sufficient information for its later evolution. If one allows for design, there is no reason to assume that the original organism must have been simple. It may in fact have been more complex than any existing organism. There are maximalist versions of this hypothesis, in which the original organism had a superhuge genome, and minimalist versions (such as Mike Gene’s), in which only the basic outlines of common patterns of pathways were present. Some have objected to the idea of a superhuge genome on the basis that it isn’t biologically tenable. However, the amoeba has 100x the number of base pairs that a human has, so the carrying capacity of genetic information in a single-celled organism is quite large. I’m going to focus on views that tend toward the maximalist.
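As a back-of-envelope check on carrying capacity: assuming the textbook maximum of 2 bits per base pair (four possible bases) and round figures (a human genome of about 3.2 billion base pairs, and the 100x amoeba figure cited above), the raw capacities are easy to compute:

```python
# Back-of-envelope information capacity of a genome, assuming the
# textbook figure of 2 bits per base pair (4 equiprobable bases).
# The genome sizes are round illustrative numbers, not measurements.

def genome_capacity_bits(base_pairs):
    """Upper bound on raw information capacity: log2(4) = 2 bits per bp."""
    return 2 * base_pairs

HUMAN_BP = 3.2e9            # roughly 3.2 billion base pairs
AMOEBA_BP = 100 * HUMAN_BP  # the "100x the human genome" figure cited above

print(f"human : {genome_capacity_bits(HUMAN_BP):.2e} bits")
print(f"amoeba: {genome_capacity_bits(AMOEBA_BP):.2e} bits")
```

This says nothing about how much of that capacity is used, only that a single-celled genome two orders of magnitude larger than ours is not a physical impossibility.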

Therefore, because of this initial deposit, it makes sense that phylogenetic change would be sudden instead of gradual. If the genetic information already existed, or at least largely existed, in the original organism, then time wouldn’t be the barrier to its coming about. It also means that multiple lineages could lead to the same result. There is no reason to think that there was one lineage that led to tetrapods, for instance. If there were multiple lineages all carrying basically the same information, there is no reason why there couldn’t have been multiple tetrapod lineages. It also explains why we find chimeras much more often than we find organs in transition. If the information was already in the genome, then the organ could come into existence all at once. It didn’t need to evolve, except to switch on.

Take the flagellum, for instance. Many people criticize Behe for thinking that the flagellum just popped into existence sometime in history, based on irreducible complexity. That is not the argument Behe is making. Behe’s point is that the flagellum, whenever it arose, didn’t arise through a Darwinian mechanism. Instead, it arose through a non-Darwinian mechanism. Perhaps all the components were there, waiting to be turned on. Perhaps there is a meta-language that guided the piecing together of complex parts in the cell. There are numerous non-Darwinian evolutionary mechanisms which are possible, several of which have been experimentally demonstrated. [[NOTE – I would define a mechanism as non-Darwinian when the mechanism of mutation biases the mutational probability towards mutations which are potentially useful to the organism.]]

Behe’s actual view, as I understand it, pushes the origin of the information back further. Behe believes that the information came from the original arrangement of matter in the Big Bang. Interestingly, that seems to comport well with the original conception of the Big Bang by Lemaître, who described the universe’s original configuration as a “cosmic egg”. We think of eggs in terms of ontogeny – a child grows in a systematic fashion (guided by information) to become an adult. The IDists who hold to Common Descent often view the universe that way – it grew, through the original input of information, into an adult form. John A. Davison wrote a few papers on this possibility.

Thus the common ID claims of “sudden appearance” and “fully-formed features” are entirely consistent with both common-descent (even fully materialistic) and non-common-descent versions of the theory, because the evolution is guided by information.

There are also interesting mixes of these theories, such as Scherer’s Basic Type Biology. Here, a limited form of common descent is assumed, along with the idea that information is available to guide the further diversification of the basic type along specific lines (somewhat akin to Vavilov’s Law). Interestingly, there can also be a common descent interpretation of Basic Type Biology, but I’ll leave that alone for now.

Now, you might be saying that the ID form of common descent only involves the origin of life, and therefore has nothing to do with evolution. As I have argued before, abiogenesis actually has a lot to do with the implicit assumptions guiding evolutionary thought. And, as hopefully has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism). And, if you take common descent to be true, I would argue that ID makes much better sense of what we see (the transitions seem to happen with some information about where they should go next).

Now, you might wonder why I disagree with the notion of common descent. There are several reasons, but I’ll leave you with one I have been contemplating recently. I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in their complete absence. In addition, I think that there are different forms of agency in operation throughout the spectrum of life (I am undecided about whether the lower forms of life, such as plants and bacteria, have anything which could be considered agency, but I think that, say, most land animals do). In any case, humans seem to engage in a kind of agency that is distinct from that of other creatures. Therefore, we are left with the question of the origin of such agency. While common descent in combination with ID can sufficiently answer the origin of information, I don’t think it can sufficiently answer the origin of the different kinds of agency.

(NOTE – original post and discussion here)

49 Replies to “The Relationship Between ID and Common Descent”

  1. 1
    bornagain says:

    I recently wrote on some of the monumental hurdles facing the hypothesis of common descent.
    http://www.uncommondescent.com.....ent-585272

  2. 2
    Virgil Cain says:

    Intelligent Design is not an argument for or against Common Descent. As a matter of fact the only way Common Descent could be true is via Intelligent Design.

    That said, there still needs to be a way to objectively test the premise to the exclusion of alternate hypotheses. For example, there needs to be a way to go into the lab, apply a series of cumulative targeted mutageneses to fish embryos, and see whether a fish-a-pod develops. That would be step one.

  3. 3
    Vy says:

    So ID’ – Intelligent Darwinism?

  4. 4
    Virgil Cain says:

    Here Vy, read the following: Intelligent Design is NOT Anti-Evolution

  5. 5
    johnnyb says:

    Note that since my original posting of this 6 years ago, there has been a rise in the usage of Active Information to measure the amount of information that something contributes to its own evolution. While it has generally been applied to digital evolution, there is no reason to think it can’t also be used to measure cellular evolution. So now there is not only the idea that evolution is guided; there is at least a potential tool to measure the amount of guidance that evolution has.
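For readers unfamiliar with the measure: active information, as Dembski and Marks define it, is the log-ratio of an assisted search’s success probability to a blind search’s. A minimal sketch (the two probabilities below are made-up toy values, not measurements of any real system):

```python
from math import log2

def active_information(p_blind, p_assisted):
    """Active information I+ = log2(q/p): bits of advantage an
    assisted search has over blind search (Dembski & Marks's measure)."""
    return log2(p_assisted / p_blind)

# Toy numbers: blind search hits the target 1 time in 2**20 tries;
# the "guided" process hits it 1 time in 2**4 tries.
p = 1 / 2**20
q = 1 / 2**4
print(active_information(p, q))  # 16.0 bits of active information
```

The measure is zero when the assistance provides no advantage, and negative when the "guidance" actually hurts.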

  6. 6
    ppolish says:

    Intelligent Darwinism (ID) is oxymoronic. Darwinism is just moronic.

    Blind Watch Maker is Intelligent Darwinism
    Blind Village Idiot is Darwinism

  7. 7
    Mung says:

    Even a blind watchmaker is intelligent.

  8. 8
    bFast says:

    Interesting, johnnyb. I admit I read your post quite lightly, but I did not see my view of a common-descent-based ID. While I see some possible merit to front-loading, I do not see a compelling reason to believe that front-loading is the universal answer to the puzzle.

    I point to my favorite gene, the HAR1F. It took on 18 point mutations between chimp and human. This may not be surprising except that there are only two point mutations between chimp and chicken, and those two point mutations do not affect the 3D shape of the resultant RNA. (Did I mention that this is an RNA gene, not a protein-coding gene?) In the human there is a fundamental shift in the 3D configuration of the RNA. This 3D shift may be accomplishable with as few as 6 mutations, but not less. Therefore a stack of 6 non-contiguous point mutations must have happened approximately simultaneously to produce this mutational event.

    I don’t find front-loading to be any help in this transformation. Neo-Darwinism is hooped, as far as “Edge of Evolution” theory would say. The “common design” hypothesis could obviously produce this effect. However, the “active agent” model, the view that an active agent has frequently twiddled with DNA, is the best explanation that I can find that does not destroy “common descent”. I.e., back around Cro-Magnon, an active agent induced the mutation. Now, even beneficial mutations often die out for reasons independent of the mutation (e.g., the lucky mutation carrier falls off a cliff). Therefore either the active agent then husbanded the mutation into popularity, or the active agent kept re-introducing the mutation until it stuck. In any case, an 18-point mutational event, or even the minimalist 6-point mutational event, would be best explained by an active agent. This could, however, be achieved without destroying common ancestry.
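To put rough numbers on “approximately simultaneously”: a sketch of the naive all-at-once calculation, assuming a per-site per-generation point-mutation rate of about 1e-8 (the rate is an assumed round figure of a commonly cited order of magnitude; the count of six specific sites comes from the argument above):

```python
# Rough odds of six specific point mutations arising together in one
# individual in one generation, assuming a per-site per-generation
# rate of ~1e-8 (an assumed, commonly cited order of magnitude).
MU = 1e-8   # per-site per-generation point-mutation rate (assumption)
K = 6       # simultaneous specific mutations posited above

p_one_individual = MU ** K   # about 1e-48
print(f"per individual per generation: {p_one_individual:.0e}")

# Even a generous 1e9 individuals over 1e6 generations gives only
# ~1e15 "trials", leaving roughly 1e-33 expected occurrences.
expected = p_one_individual * 1e9 * 1e6
print(f"expected occurrences: {expected:.0e}")
```

This is only the strict simultaneous-hit model; sequential fixation or any biased mechanism changes the arithmetic entirely, which is exactly the point under discussion.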

  9. 9
    Andre says:

    Like I said, I have no problem with the idea of CD being a guided process. We see very strict control mechanisms inside living systems, and that, to me, is the giveaway. I simply cannot see how unguided processes created these guided systems to prevent unguided processes from happening; not in any universe could it ever be possible.

  10. 10
    gpuccio says:

    bFast (and others):

    Just a few thoughts:

    1) It is, IMO, absolutely obvious that new complex functional information has been added to living beings throughout natural history. Each new protein superfamily, each new body plan, and so on, is the result of a huge addition of new complex functional information, and only conscious, intelligent, purposeful agents can do that.

    2) Therefore, I absolutely refute, as good explanations, both frontloading (in the sense of an intelligent agent acting only at the beginning of life) and theistic evolution (in the sense of an intelligent agent acting only at the beginning of the universe). None of them can explain even one single complex protein, not any more than neo darwinism can.

    3) Design happens in time and space, on our planet, at definite times.

    4) Design needs not be gradual. But it needs not be sudden. It can be either, or both, and only facts can help us understand how it happened in natural history. IMO, we have strong evidence of at least some very sudden designs (OOL, the appearance of eukaryotes, The Cambrian explosion, and so on), but there are also hints of gradual design (as explained very well by bFast in his post).

    5) CD is a good explanation of many facts, but it is not necessary to Intelligent Design. At the same time, it is not in any way incompatible with Intelligent Design, as I have tried to explain.

    6) ID just means that all new complex functional information is introduced by some conscious intelligent purposeful being, and cannot originate by any non conscious process.

    7) CD just means that many empirical observations are best explained by some continuity in the “hardware”. Think of that as a designer continuing to implement new functions working on what he has already designed previously. He does not start from scratch each time, but he uses the existing physical supports of information to add new functional information to them, either suddenly or gradually.

    8) For me the only difference is that ID is absolutely supported by all known facts, while CD is supported by many facts, but there are some which are not well explained, and could even be against the theory. Let’s remember also that CD need not be universal. After all, some designs could be from scratch.

    9) However, my firm conviction is the following: none of this can be settled by religious or philosophical preconceptions, or antireligious preconceptions, of any kind. These are scientific problems, and they must be addressed empirically, with a correct scientific epistemology, even in the aspects (and there are many) which have deep philosophical or religious implications.

  11. 11
    Vy says:

    Likewise Virgil, read the following: Creation/YEC is NOT anti-evolution.

    See the problem?

  12. 12
    bornagain says:

    Nice post gpuccio.

    One point. For you, CD is not a sacred cow that must never be questioned (which is refreshing to hear you clearly state), but for Theistic Evolutionists such as BioLogos, and for hard-core neo-Darwinists in particular, CD is a sacred cow that can never, ever, be questioned (and one which they never support with any real-time evidence as to how it could remotely be possible to morph one creature into another).

    Case in point: Dawkins almost has a cow when Venter, based on the empirical evidence he sees, denies that common descent is true

    Dr. Craig Venter Denies Common Descent in front of Richard Dawkins!
    https://www.youtube.com/watch?v=MXrYhINutuI

  13. 13
    gpuccio says:

    bornagain:

    I am happy that you understand my point of view. I consider you an old friend, and I hope that only goodwill may be between us. 🙂

    It’s perfectly true. CD is in no way a sacred cow for me! Indeed, I don’t consider even ID as a “sacred cow”, but I am happy to admit that, at a scientific level, it is for me as near to that as it is possible. 🙂

    Obviously, neo-Darwinists and Theistic Evolutionists need CD: their theories cannot work without it, while ID can work without CD. You are perfectly right about that.

    ID is, definitely, the strong point. Complex functional information cannot arise, and never has, except from a conscious intelligent designer. And complex functional information does arise throughout natural history in the living world.

    That is the simple truth.

  14. 14
    Mapou says:

    If strict common descent (nested hierarchy) is true, then horizontal gene transfers are false. You can’t have both. Claiming that the majority of observed species can be classified using a nested hierarchy is not good enough, IMO. If even .1% of species break the nested hierarchy rule, Darwinism is falsified. If the HGT happens at the higher levels of the hierarchy, Darwinism is falsified. In other words, if you find pure bat DNA sequences in a whale or a fish, then Darwinism is falsified.

    It’s CD or no CD. No fence straddling allowed, sorry. Intelligent design over time, by contrast, confidently predicts a mostly nested hierarchy sprinkled with many instances of multiple inheritance (HGT).

    The biggest enemy of Darwinism is computational genomics.
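The structural claim at issue here can at least be stated precisely: a strictly nested hierarchy is an ancestry graph in which no node has more than one parent, while HGT adds a second parent and turns the tree into a DAG. A toy check (the species names and edges are invented purely for illustration):

```python
def is_strictly_nested(edges):
    """True iff no node has more than one parent, i.e. the ancestry
    graph is a tree/forest rather than a DAG with multiple inheritance."""
    parents = {}
    for parent, child in edges:
        parents.setdefault(child, set()).add(parent)
    return all(len(ps) <= 1 for ps in parents.values())

# Invented toy lineage: a pure tree...
tree = [("LUCA", "A"), ("LUCA", "B"), ("A", "C"), ("A", "D")]
# ...and the same tree with one horizontal gene transfer edge added.
with_hgt = tree + [("B", "C")]

print(is_strictly_nested(tree))      # True
print(is_strictly_nested(with_hgt))  # False
```

Whether a violation of strict nesting falsifies common descent as such, or only the strict-tree version of it, is the question the following comments argue about.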

  15. 15
    bFast says:

    Thanks, gpuccio, for your thoughtful response.

    In general we are on exactly the same page.

    May I emphasize you here, “These are scientific problems, and they must be addressed empirically…” Yes! Soooo Yes! “The Third Way” so well illustrates the problem, “One way is Creationism that depends upon intervention by a divine Creator. That is clearly unscientific because it brings an arbitrary supernatural force into the evolution process.” Id is being philosophically dismissed by a group that provides fantastic evidence that the naturalistic option is untenable.

    An intelligent, strategic force is required for life as we know it, and for the big bang. In a scientific context I would prefer to use very broad terms like this, though one can hardly ignore the g word when you have a single intelligent strategic force that pulls off the amazingly fine tuned universe. For “intelligent, strategic force” to be dismissed on purely philosophical grounds when such provides the only feasible explanation for the data is, well, just silly. If that’s what it takes to be scientific, then science is just silly.

  16. 16
    bFast says:

    Mapou, “If strict common descent (nested hierarchy) is true, then horizontal gene transfers are false. You can’t have both.”

    You may be right with the word “strict” stuck in there. However, if “common descent” simply means that every organism has a parent (kinda hard to call it that in asexual reproduction, but you know what I mean), then common descent is not broken by HGT. (It does become harder to discern with DNA analysis, however.) Suppose an HGT event happened in you: you were implanted with a gene that makes you glow (it’s been done in rabbits and mice). You would still be the child of your mother and father. “You” would still have common ancestry, even though that gene wouldn’t.

    UCA, therefore, is not invalidated by HGT even if the word “strict” might be.

  17. 17
    Vy says:

    One way is Creationism that depends upon intervention by a divine Creator. That is clearly unscientific because it brings an arbitrary supernatural force into the [crea]tion process.

    And yet this worked amazingly well for the pioneers of modern science who were almost(?) entirely YECs.

    IMO, whoever believes that an idea isn’t scientific simply because it involves God is indirectly claiming the pioneers of modern science weren’t “scientific” because they added God to the equation.

  18. 18
    Mapou says:

    bFast,

    You are mistaken. Common descent does not mean merely having a parent, but having a single parent (or, at most, a single parental pair) belonging to the same species. It does not allow for HGT, since HGT requires another parent from a species on a distant branch of the hierarchy. Since mating occurs only within one’s own species, common descent is necessarily nested. HGT becomes an aberration if common descent is the accepted model. It is obvious that HGT falsifies both common descent and Darwinism.

  19. 19
    Virgil Cain says:

    I don’t think HGT falsifies Common Descent. It just makes it more complicated. Darwinism is safe from HGT as Darwin didn’t know the mechanism of heredity and he most likely would have adopted HGT as a form of his pangenesis.

    To falsify Common Descent you would need to know what makes an organism what it is:

    “The scientist enjoys a privilege denied the theologian. To any question, even one central to his theories, he may reply “I’m sorry but I do not know.” This is the only honest answer to the question posed by the title of this chapter. We are fully aware of what makes a flower red rather than white, what it is that prevents a dwarf from growing taller, or what goes wrong in a paraplegic or a thalassemic. But the mystery of species eludes us, and we have made no progress beyond what we already have long known, namely, that a kitty is born because its mother was a she-cat that mated with a tom, and that a fly emerges as a fly larva from a fly egg.”- geneticist Giuseppe Sermonti

    Until that mystery of species is solved Common Descent is untestable. Could be true but we can’t say, scientifically.

    Then, throw in the Voles- A lot of micro but no macro

    The study focuses on 60 species within the vole genus Microtus, which has evolved in the last 500,000 to 2 million years. This means voles are evolving 60-100 times faster than the average vertebrate in terms of creating different species. Within the genus (the level of taxonomic classification above species), the number of chromosomes in voles ranges from 17-64. DeWoody said that this is an unusual finding, since species within a single genus often have the same chromosome number.  

    Among the vole’s other bizarre genetic traits:  

    •In one species, the X chromosome, one of the two sex-determining chromosomes (the other being the Y), contains about 20 percent of the entire genome. Sex chromosomes normally contain much less genetic information.
    •In another species, females possess large portions of the Y (male) chromosome.
    •In yet another species, males and females have different chromosome numbers, which is uncommon in animals. 

    A final “counterintuitive oddity” is that despite genetic variation, all voles look alike, said DeWoody’s former graduate student and study co-author Deb Triant. 

    “All voles look very similar, and many species are completely indistinguishable,” DeWoody said.  

    In one particular instance, DeWoody was unable to differentiate between two species even after close examination and analysis of their cranial structure; only genetic tests could reveal the difference.  

    Nevertheless, voles are perfectly adept at recognizing those of their own species.

    And after all this “evolution” a vole is still a vole. This study alone should cast a huge shadow over macroevolution and Common Descent.

  20. 20
    bFast says:

    Mapou, we are getting down to definitions here. By your definition, common descent cannot support HGT.

    Mapou, “It is obvious that HGT falsifies … Darwinism.”

    I don’t quite get that. If a natural mechanism is found that allows genes to be transferred, and I think a few have at least been proposed, then Darwinism is not toast. If you contend that Darwinism requires UCD, and that UCD holds to your definition, then I guess Darwinism is toast. However, this data alone does not defeat naturalism (RM+NS).

  21. 21
    bFast says:

    Virgil Cain,
    Very interesting info about voles. I hadn’t been particularly aware of voles until the last few years. ‘Seems that my “mouse problems” were actually vole problems.

    In all of the interesting findings, the one I find most interesting is, “Nevertheless, voles are perfectly adept at recognizing those of their own species.” This is certainly not universal of creatures, as I had a peacock mistake me for a peahen once. (Peacocks in heat are unbelievable desperadoes that try to mate with anything that moves.)

  22. 22
    Mapou says:

    bFast,

    The only mechanisms for HGT I know about are convergence and gene transfers via viral infections, both of which are astronomically unlikely. That is to say, the combinatorial explosion excludes both mechanisms. Even worse, neither hypothesis can be falsified. One must accept them on faith alone. Pseudoscience at its best.

  23. 23
    bFast says:

    Mapou, “… convergence and gene transfers via viral infections”

    Convergence is not HGT. Viral transmission of genes is a purported mechanism. I think there are other identified mechanisms. However there may be unidentified mechanisms. In the bacteria world HGT is apparently an abundant phenomenon. As such it is unlikely that there isn’t some sort of mechanism floating around.

  24. 24
    computerist says:

    2) Therefore, I absolutely refute, as good explanations, both frontloading (in the sense of an intelligent agent acting only at the beginning of life) and theistic evolution (in the sense of an intelligent agent acting only at the beginning of the universe). None of them can explain even one single complex protein, not any more than neo darwinism can.

    This is one of the reasons I have come up with the “self-repairing/constructive universe hypothesis” (SCUH).

    Here is the general breakdown:

    I propose the self-constructive/repairing universe hypothesis (SCUH). SCUH basically says that the universe is constantly in the process of being created/repaired (creation never ceased) in order to achieve a definitive state of goodness. Laws of the universe, as well as entities within this universe (including, of course, biological intelligent entities), are helpers or extensions of the primary designer (who is acting externally and/or internally to this process) in achieving this eventual state of goodness. The evidence for this is overwhelming, IMHO. We find that the universe, out of all the chaos/disorder/evil, has a tendency to achieve a state of order/organization/goodness. This organization-bias principle is the driver of SCUH. Progression will not be entirely smooth and/or deterministic, but it is inevitable given the primary intent.

    I believe this hypothesis is completely in line with ID as well as with the evidence.

  25. 25

    In highlighting the rich diversity of intelligent design theories, I want to focus attention once more on the idea that the DNA system is like a little universe in its own right, in a similar way that a 3D computer simulation or the human imagination is a world in its own right.

    Such a 3D DNA world can then copy objects from the universe proper, and in this DNA world, complete designs of an adult organism can be chosen as a whole. The 3D DNA adult then guides development of the physical organism to adulthood.

    Non-coding DNA? It could just as well be representations of the moon and earth cycles in the DNA world. There can just as well be a representation of a snake in human DNA, like in the garden of Eden, in order to recognize snakes as possibly lethal creatures.

    That is a common-sense intelligent design theory which has a lot going for it, and which I don’t hear enough about.

    It’s been established by Peter Rowlands and Vanessa Hill that the mathematical ordering of the DNA system is the same as that of the physical universe, and the DNA-world idea is speculation based on that fact. (There are 4 parameters in physics: mass, time, space, and charge; there are 4 bases, C, A, T, G, in biology. There are 64 elements of the Dirac algebra in physics, and there are 64 triplet codons in biology. And so on; the mathematical ordering is the same.)
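The numerical half of that parallel is easy to check: four symbols taken three at a time give 4^3 = 64 combinations, whatever the symbols stand for (the claimed shared mathematical ordering is the speculative part; the counting itself is just arithmetic):

```python
from itertools import product

# Enumerate all triplet codons over the four DNA bases.
BASES = "CATG"
codons = ["".join(t) for t in product(BASES, repeat=3)]

print(len(codons))   # 64 = 4**3
print(codons[:4])    # first few codons in enumeration order
```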

  26. 26
    EugeneS says:

    What I personally find a bit confusing is using the word ‘evolution’ in a context that assumes guidance. By definition evolution is unguided. As soon as there is guidance, it is consequently not evolution.

    Also, the mentioned idea of a ‘cosmic egg’ seems weird to me. It follows that there must have been a cosmic hen or a cosmic turtle 😉

    bFast,

    “then science is just silly.”

    This is it! There is no other sensible answer to how it all came to be, except through God. In God all infinite regress disappears while He Himself always stays out of the ‘equation’ of science. He sustains the universe and miraculously intervenes but His providence is out of the scope of science.

    In my understanding, wise science, as opposed to silly science, knows its own limits. To certain questions, there cannot be scientific answers. I know it may seem like placing artificial bounds on science. But these limits do exist. It may not be satisfying to a scientist’s mind, but humility is greater than science 🙂

    I think scientifically we can go as far as asserting the evident failures of naturalism in its attempts to provide an explanation of the origins of the universe and of life. ID is helpful in this regard. But that’s about it, really.

  27. 27
  28. 28
    Zachriel says:

    bFast: I point to my favorite gene, the HAR1F… In the human there is a fundamental shift in the 3D configuration of the RNA. This 3D shift may be accomplishable with as few as 6 mutations, but not less.

    Just curious. What is the scientific support for this statement?

  29. 29
    Virgil Cain says:

    Just curious, what is the scientific support for Common Descent?

  30. 30
    johnnyb says:

    bfast –

    I don’t find front-loading to be any help in this transformation. Neo-Darwinism is hooped, as far as “Edge of Evolution” theory would say. The “common design” hypothesis could obviously produce this effect. However, the “active agent” model, the view that an active agent has frequently twiddled with DNA, is the best explanation that I can find that does not destroy “common descent”.

    I think that front-loading does it quite well. Yockey noted, a long time ago, that as long as there has been a fixed length of time since the origin of life, the information problem goes away if one is willing to posit an information-rich origin of life. That is, the channel capacity theorem allows us to determine, for a given amount of transmitted data, a given amount of noise in the transmission channel, and a given number of transmissions, how much information is needed in the original ancestor. That amount may be a practical problem, but it is not a theoretical problem.

    Specifically to your question, there could have been two genes which were identical except for this one change, and one copy was lost on each side. Or, more likely, there could have been a mechanism that creates the change. That is, there is some genetic functor that applies the mutation you are looking for, or at least did at some point in evolutionary history. Provided that you are willing to concede that the amount of information in the past was greater than the amount of information in the present, this is very possible.

    Now, that isn’t to say that this is a better model than the mutating-actor theory, but it is certainly within the realm of conceptual plausibility, and Yockey/Shannon/Dembski have shown us how to apply it (Dembski’s “Searching Large Spaces” contains what he calls the “No Free Lunch Regress”, which basically applies Yockey’s ideas to evolutionary search instead of static transmission).
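The channel-capacity arithmetic can be sketched concretely. Treating each generation of copying as a binary symmetric channel with per-bit error rate p, n generations in series behave like one such channel whose effective error rate, and hence capacity, can be computed; all the numbers below are illustrative placeholders, not biological measurements:

```python
from math import log2

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def cascaded_error(p, n):
    """Effective crossover probability of n binary symmetric channels
    in series: p_n = (1 - (1 - 2p)**n) / 2."""
    return (1 - (1 - 2 * p) ** n) / 2

def capacity(p):
    """Shannon capacity of a BSC: C = 1 - H2(p) bits per transmitted bit."""
    return 1 - h2(p)

# Illustrative numbers only: the per-generation per-bit error rate and
# generation counts are placeholders.
p = 1e-4
for n in (1, 10_000, 1_000_000):
    c = capacity(cascaded_error(p, n))
    print(f"n={n:>9}: capacity {c:.4f} bits/bit")
```

Capacity shrinks toward zero as generations accumulate, so delivering a given amount of information through a long noisy chain demands correspondingly more information (or error correction) up front, which is the sense in which an information-rich start trades against noisy transmission.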

  31. 31
    johnnyb says:

    gpuccio –

    Lots of good thoughts. However, I want to contend one major point:

    Therefore, I absolutely refute, as good explanations, both frontloading (in the sense of an intelligent agent acting only at the beginning of life) and theistic evolution (in the sense of an intelligent agent acting only at the beginning of the universe). None of them can explain even one single complex protein, not any more than neo darwinism can.

    I disagree here, for the reasons that I gave to bFast. If the origin of life or of the universe was information-rich rather than information-poor, then there is no problem. The reason neo-Darwinism has problems is that its view of the origin of life is information-poor, and it has no mechanism to re-insert the information anywhere. Frontloading, on the other hand, simply inserts the information at the beginning.

    Now, this also ties into your first point, which I also have issue with:

    It is, IMO, absolutely obvious that new complex functional information has been added to living beings throughout natural history. Each new protein superfamily, each new body plan, and so on, is the result of a huge addition of new complex functional information, and only conscious, intelligent, purposeful agents can do that.

    I don’t think that this is as obvious as you may think. It is perhaps more obvious that there has been a differentiation of complex functional information, but I challenge you to show me how to differentiate between the possibility that information was (a) newly added or (b) newly expressed. If these situations look identical from the standpoint of the present, then I see no reason why one *must* prefer (a) over (b). And, if one chooses (b), then there is no logical reason why the information could not have been there from the beginning.

    Now, I agree with you that full frontloading is probably not the case, and there are good arguments against it. However, as far as information goes, there is nothing that logically precludes it (though there may be things that practically preclude it).

    As for your other points, I largely agree, especially in this one:

    For me the only difference is that ID is absolutely supported by all known facts, while CD is supported by many facts, but there are some which are not well explained, and could even be against the theory. Let’s remember also that CD needs not be universal. After all, some designs could be from scratch.

  32. 32
    johnnyb says:

    Virgil –

    The voles story is very interesting, and it goes back to a point I made on the 98% thread – genes are not necessarily what evolution needs to work on. This post is entirely about the genetic information problem, but there are other aspects of evolution that must be considered as well. As I mentioned in that thread, it may be possible for humans and chimps to have 100% similar DNA and still maintain their differences, simply because genes are not equivalent to organisms. Thus, radical changes in DNA may not be equivalent to differing organisms. So, even after solving the information problem for common descent, there are other problems as well, which get little mention (which is why I mentioned my own in the thread).

  33. 33
    johnnyb says:

    So, a little addendum.

    First of all, my goal in this thread was not to give a specific view as being better than others, but to help clarify what it would mean for someone to hold a mechanistic view of common descent that is compatible with ID. And, as I’ve mentioned in the thread, the findings of Shannon/Yockey/Dembski show that, if one is willing to presuppose an information-rich origin of life, then there is no theoretical problem with genetic information having been transmitted through common descent, either intact or as an intelligent search algorithm.

    There is more to biology and evolution than the information problem (which is where my problems with common descent come in) and there are practical problems with the mechanistic common descent hypothesis, but as a theoretical construct, one can hold to a mechanistic common descent and ID.

    This is different from Darwinism because Darwinism presupposes an information-poor origin of life, and supposes that the intricacies of biology have all been built-up from essentially haphazard changes. An information-rich origin of life can allow for all of the intricacies of biology to be latently present until they are revealed, or for a sufficient search mechanism to find them when they are needed/wanted, or any combination of these.

  34. 34
    johnnyb says:

    The analogy I give my students to help them understand is the Microsoft Windows installer. If I have a blank computer, I can install Microsoft Windows onto it. Now, Windows is not a single application. It is actually an entire ecosystem of applications. It has Notepad, Wordpad, Terminal, Control Panel, etc.

    However, the Windows Installer *is* a single application. It is an application that *contains* *all* of the data needed for all of the applications in the ecosystem. As the installer works, each individual application receives a subset of the information in the installer, based on the installer’s knowledge of how that application should be set up.

    The final applications are all there by design. None of them arose by natural selection. But they all owe their physical origin to the single installer program which put them all in their correct place, with the appropriate information. However, such a process only works *if* the installer had sufficient information to create the other programs. It does *not* work if it had to generate those programs by trial-and-error, in absence of sufficient information.
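    As a toy sketch of the analogy (the names and “payload” strings below are purely illustrative, not real Windows data), the frontloading idea can be expressed as:

```python
# Frontloading as an "installer": one information-rich payload already
# contains everything each later application needs. Names are illustrative.
PAYLOAD = {
    "notepad":  "program data for notepad",
    "wordpad":  "program data for wordpad",
    "terminal": "program data for terminal",
}

def install(payload):
    """Each application receives a subset of the payload's information;
    nothing new is generated by trial and error."""
    return {name: data for name, data in payload.items()}

apps = install(PAYLOAD)
# Every installed application traces back entirely to the payload:
print(all(apps[name] == PAYLOAD[name] for name in PAYLOAD))  # True
```

    The point of the sketch is only that the process works *because* the payload is information-rich; an empty payload would have nothing to express.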

  35. 35
    Upright BiPed says:

    9) However, my firm conviction is the following: none of that can be solved by religious or philosophical preconvictions, or antireligious preconvictions, of any kind. These are scientific problems, and they must be addressed empirically, with a correct scientific epistemology, even in the aspects (and there are many) which have deep philosophical, or religious, implications.

    Beautiful

  36. 36
    Axel says:

    ‘Even a blind watchmaker is intelligent.’

    But being visually unintelligent, Mung, is a handicap – especially when he tries to make a watch.

  37. 37
    bornagain says:

    “Evolution, in the sense of common descent, is not a theory of similarity. Linnaeus, Cuvier, and Agassiz knew all about similarity, yet they denied common descent. Evolution is a theory of transformation.”,,,
    Paul Nelson – What Evolution Is, and What It’s Not – October 30, 2015
    http://www.evolutionnews.org/2.....00501.html

  38. 38
    gpuccio says:

    johnnyb:

    The two points you disagree with are essentially the same point: I cannot accept frontloading because new complex functional information has been added throughout natural history.

    That is not a logical point: it is an empirical observation.

    If you have followed my posts here in the recent years, you may know that I firmly believe that each new protein superfamily is an example of new complex functional information, completely beyond any possible non design explanation.

    Now, the point is: new protein superfamilies have been appearing throughout natural history, at all times.

    For example, I think that LUCA was probably the first living being, or pool of beings, on our planet. IOWs, I believe that life started on our planet more or less in a “prokaryotic” form.

    That is remarkable, because it means that OOL was a process, however sudden or gradual we can imagine it, that generated probably the greatest amount of new functional information in a relatively short time, at least at the level of protein sequences. Indeed, about half of basic protein superfamilies seem to have been already present in LUCA. So, in a sense, at OOL there was a “frontloading” of a huge quantity of information.

    But was that information sufficient to generate the following developments of life?

    Certainly not.

    Let’s take just the following big step, for example: the appearance of eukaryotes.

    Now, let’s avoid for the moment the problem of descent: whether eukaryotes derived in some way from prokaryotes, or were some creation from scratch. That is not important, for the moment.

    What is really important is the following:

    1) Eukaryotes still use a lot of the protein information in prokaryotes.

    2) Eukaryotes have a lot of new protein information, which did not exist in prokaryotes.

    Both those statements are unequivocally true, as far as what we can observe in proteomes is concerned.

    Now, how can anyone explain all the new proteins and protein functions in eukaryotes by some previous frontloading? I can see exactly nothing in the known facts which supports such a view.

    The appearance of eukaryotes is, therefore, as much an obvious act of new design intervention as OOL.

    And the same can be said for metazoa, and the Cambrian explosion.

    And these are just the most amazing examples.

    The simple truth is: the designer, whoever he is, acts repeatedly in natural history. Maybe continuously, maybe at definite times. But many times, many identifiable times.

    In time and space.

  39. 39
    Zachriel says:

    johnnyb: That is, the channel capacity theorem allows us to determine, for a given amount of transmitted data, and a given amount of noise in the transmission channel, and the number of transmissions, how much original information is needed in the original ancestor.

    How much and what kind of information is that?

    johnnyb: That amount may be a practical problem, but it is not a theoretical problem.

    In science, practical problems often are theoretical problems.

  40. 40
    computerist says:

    In time and space.

    I would also add the possibility of “through time and space”.

  41. 41
    gpuccio says:

    computerist:

    I am not sure I understand what you mean; could you please explain better?

    What I mean is that the result of design happens in a definite time and space. That remains true, either if the designer acts continuously, or only at certain times.

    For example, eukaryotes appear at a certain time (even if we don’t know exactly when). There is a window of time where they are not there, and a window of time where they are there. If the appearance is gradual (which is possible), there is a window of time in the middle where they appear.

    In all cases, the result of design can be observed in time. And space, obviously (it happens somewhere).

    Another aspect could be: is the designer in time and space? I am not considering that aspect, because we have at present no scientific clues to answer it. If aliens are the designer, they are probably in time and space. If a god is the designer, he could well be beyond time and space (according to how we conceive him), but then his actions have a result in time and space, which is my point. And I suppose there may be different answers.

    However, from a scientific point of view, and with the data available, I think we can only observe the results of biological design in time and space.

  42. 42
    gpuccio says:

    Zachriel:

    “In science, practical problems often are theoretical problems.”

    For once, I fully agree! 🙂

  43. 43
    johnnyb says:

    Zachriel –

    How much and what kind of information is that?

    I am specifically talking about genetic information in this post.

    The most important result is not the specific number, but that it is of finite size. However, just to show an example of how one might calculate it, here is a (very) naive approach to the estimation for known animals and plants (i.e. excluding prokaryotes, fungi, etc.).

    There are about 1 million (10^6) catalogued animal and plant species.
    An average genome is about 1 billion (10^9) base pairs.
    This gives us a total of 10^15 base pairs that need to be transmitted to the present day.
    Average animal life span – 5 years (this number is just pulled out of the air).
    Number of generations to the Cambrian explosion ~ 10^8
    Noise in the channel – 10 mutations per generation, giving a signal degradation of 10^-8, which is a fidelity of (1 – 10^-8) per generation

    Now, the basic channel capacity theorem is simply a relationship, like I made earlier, that it is a finite number (the main usage of it is the Shannon-Hartley theorem, which is focused on digital information over analog communication channels).
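
    For reference, the Shannon–Hartley theorem mentioned here gives the capacity C (in bits per second) of an analog channel of bandwidth B with signal-to-noise ratio S/N as:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```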

    In our case, the signal survives at a rate of (1 – 10^-8) per generation, for 10^8 generations. That gives us a total fidelity of about 0.37. Therefore, the original genome size needed to preserve 10^15 base pairs of information would be about 2.7 * 10^15 base pairs.

    Again, this is a pretty naive approach. The estimate would be increased based on how important different parts of the genome are to proper function, but also largely decreased (i.e., by several orders of magnitude) if the original organism encoded any sort of compression (which is obviously possible, given both the amount of genetic overlap between creatures and the number of repetitive sequences in any given organism). Other considerations we didn’t include are the effect of having multiple copies (i.e., individuals) carrying the information, the effect of sexual recombination, and the inclusion of variability.

    However, hopefully it is easy to see that, as the channel capacity theorem states, there is *some* amount of redundant coding which will preserve your message intact across noisy channels.
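
    The back-of-the-envelope numbers above can be computed directly (all figures are the rough assumptions stated in the list, not measured data):

```python
# Naive fidelity estimate from the assumptions listed above (rough guesses).
species = 10**6              # catalogued animal and plant species
genome_bp = 10**9            # average genome size, in base pairs
total_bp = species * genome_bp           # 10^15 bp to carry to the present
generations = 10**8          # generations back to the Cambrian explosion
per_gen_fidelity = 1 - 10**-8            # ~10 mutations per 10^9 bp per generation

# Fidelity compounds across generations: (1 - 10^-8)^(10^8) is roughly e^-1.
total_fidelity = per_gen_fidelity ** generations
# Required ancestral genome so that 10^15 bp of information survive intact:
required_bp = total_bp / total_fidelity

print(f"{total_fidelity:.2f}")   # 0.37
print(f"{required_bp:.1e}")      # 2.7e+15
```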

  44. 44
    gpuccio says:

    johnnyb:

    Just let me understand. You are suggesting that at OOL there were living beings with a genome of, say, 2.7 * 10^15 base pairs which contained the genomic information of all future species, and that such information has been gradually segregated in the course of natural history? Is that your point? Bizarre, indeed.

    Can you point to even one fact that would support such a theory?

  45. 45
    Zachriel says:

    johnnyb: There are about 1 million (10^6 )catalogued animal and plant species.

    That probably only represents a portion of the total species. Then you forgot to include trilobites and archaeopteryx. Altogether you are off by several orders of magnitude. On the other hand, many species are close analogues of other species, so there should be considerable data compression possible. After all, humans are just modified deuterostomes.

  46. 46
    Virgil Cain says:

    After all, humans are just modified deuterostomes.

    Modified by Intelligent Design. That is what a common design is all about- modifications on a similar theme.

  47. 47
    johnnyb says:

    gpuccio –

    I think you misunderstand my goals and my point. First of all, I do not myself hold to common descent. My point, if you missed it, is that there is no logical contradiction in someone saying that (a) ID is true, (b) universal common descent is true, and (c) universal common descent happened entirely by a mechanical process, at least in the case of genetics.

    The channel capacity theorem lets us know that, for any fixed-length period of time, we can plan for and enact a coding mechanism which will safeguard that much information through time. Therefore, there is no logical contradiction in the idea of frontloading. I gave an example calculation to show how one might go about estimating the amount of information needed.

    Now, if someone held to this view, would that require them to think that the information is contained in the original organism? No. The theorem gives an amount of information, but information can be transmitted across different media. Therefore, the information could be held in some other way, or it could even be external to the organism. However, in those cases, we would want to convert from base pairs to bits (basically multiplying the result by 2, since each base pair carries two bits).

    Now, this interests me for several reasons. First of all, it is certainly a more reasonable and quantifiable theory of evolution than, say, Darwinism. But, even more interesting, it opens up a whole host of questions we might not otherwise think to ask. This is often the case with big theories – it isn’t so much that the big theory is true in an absolute sense, but that once you understand the theoretical underpinnings, it helps you ask questions you might not otherwise think to ask.

    For example, one might ask, “can an original ancestor actually hold that much information?” That can lead to experiments to determine just how much information an original ancestor can hold. It can lead to experiments to find out how compressible the genome is. These things then might lead us to determine how big a genome the original ancestor could in fact have had. If we decide that the number of required bits could not fit in the original genome, we might have an estimate of the total number of original genomes that are needed.

    Another consideration, which you mentioned, is “such information has been gradually segregated in the course of natural history”. In fact, this has been repeatedly identified in the more well-established cases of evolution. Considering information in the way we just did helps us think more theoretically about such issues.

    I like questions that give me new ways to think about problems. And that’s why I enjoy considering the possibility of mechanistic universal common descent. I also like exploring the opposite side of the spectrum with Louis Agassiz’s forgotten classic “Essay on Classification”.

    Great ideas don’t hand you the truth on a platter. Great ideas give you new dimensions of consideration that you haven’t thought of before. Taking them seriously and taking them to their limits teaches you how they work, where they are useful, and where they aren’t. That’s what exercises like this are good for – they get you thinking in brand new ways.

    Zachriel –

    That probably only represents a portion of the total species.

    Yes, but my point was simply to get a number to work from. If you have better numbers feel free to plug them in!

    On the other hand, many species are close analogues of other species, so there should be considerable data compression possible.

    Exactly. It would be an interesting and worthwhile endeavor to try to analyze the compressibility of genomes, and the compressibility of genomes with respect to each other. In fact, the compressibility of two genomes with respect to each other might be a more theory-neutral method of measuring similarity of genomes.
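
    One simple way to sketch such a compression-based similarity measure is the normalized compression distance, which uses an off-the-shelf compressor as a stand-in for true (uncomputable) Kolmogorov complexity. The sequences below are illustrative toy strings, not real genomic data:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs,
    approaching 1 for unrelated ones (a rough Kolmogorov-complexity proxy)."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy stand-ins for genomes (illustrative strings, not real sequence data)
a = b"ATGGCGTACGATCGATTACA" * 50          # a repetitive "genome"
b2 = a[:-20] + b"ATGGCGTACGTTCGATTACA"    # same, with one altered motif
c = b"GGCATTACCAGTTGACCAAT" * 50          # a different motif entirely

# Similar sequences compress well against each other, so their NCD is lower:
print(ncd(a, b2) < ncd(a, c))  # True
```

    A measure like this needs no phylogenetic assumptions at all, which is what makes it attractive as a theory-neutral comparison.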

  48. 48
    gpuccio says:

    johnnyb:

    Thank you for explaining your ideas better.

    I don’t know if I misunderstand your goals. Maybe. However, the point remains that I am not interested in theories which are in no way supported by known facts, because empirical science is empirical science, and not mathematics, and the only purpose of theories is to explain observed facts.

    You insist that “there is no logical contradiction”. OK, and so? A theory must explain observed facts, if possible without obvious logical contradictions. If a theory explains nothing of what we observe, the simple fact that it bears no logical contradictions is not enough to make it an interesting theory.

    However, I fully respect your thoughts, as I hope you may respect mine. 🙂

  49. 49
    Mung says:

    gpuccio:

    I cannot accept frontloading because new complex functional information has been added throughout natural history.

    And new codes.
