Uncommon Descent Serving The Intelligent Design Community

ID and Common Descent


Many, many people seem to misunderstand the relationship between Intelligent Design and Common Descent. Some view ID as equivalent to Progressive Creationism (sometimes called Old-Earth Creationism); others see it as equivalent to Young-Earth Creationism. I have argued before that the core of ID is not about a specific theory of origins. In fact, many ID'ers hold a variety of views, including Progressive Creationism and Young-Earth Creationism.

But another category that is often overlooked is those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show that it is a consistent proposition.

I should start by noting that I do not myself hold to the Common Descent proposition. Nonetheless, I think the relationship of ID to Common Descent has been misunderstood badly enough to warrant some defense.

The issue is that most people understand common descent entirely from a Darwinian perspective. That is, they assume that the notions of natural selection and gradualism follow closely from the notion of common descent. However, nothing logically ties these together, especially if you allow for design.

In Darwinism, each feature is a selected accident. Therefore, Darwinian phylogenetic trees often use parsimony as a guide, meaning that the tree is constructed so that complex features don't have to evolve more than once.
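That parsimony criterion can be made concrete with Fitch's small-parsimony algorithm, which counts the minimum number of times a trait must change on a given tree. The tree and the binary "complex feature" trait below are hypothetical, a sketch rather than anything from the post:

```python
# Fitch's small-parsimony algorithm on a tiny hand-built tree.
# A tree is either a leaf's state set, e.g. {1}, or a (left, right) pair.

def fitch(tree):
    """Return (possible ancestral states, minimum number of changes)."""
    if isinstance(tree, set):            # leaf node
        return tree, 0
    left, right = tree
    ls, lc = fitch(left)
    rs, rc = fitch(right)
    if ls & rs:                          # children can agree: no change here
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1          # disagreement costs one change

# Trait "complex feature present" (1) in taxa A, B, D; absent (0) in C,
# on the tree ((A, B), (C, D)):
states, changes = fitch((({1}, {1}), ({0}, {1})))
print(changes)  # → 1: this tree lets the feature evolve just once
```

Trees that scatter the trait across distant clades score higher, which is why tree-building under parsimony tends to avoid placements that force a complex feature to arise twice.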

The ID version of common descent, however, doesn't have to play by these rules. The ID version of common descent includes a concept known as frontloading – where the designer designed the original organism so that it would have sufficient information for its later evolution. If one allows for design, there is no reason to assume that the original organism must have been simple. It may in fact have been more complex than any existing organism. There are maximalist versions of this hypothesis, where the original organism had a superhuge genome, and minimalist versions (such as Mike Gene's) where only the basic outlines of common patterns of pathways were present. Some have objected to the idea of a superhuge genome on the basis that it isn't biologically tenable. However, the amoeba has 100x the number of base pairs that a human has, so the carrying capacity of genetic information in a single-celled organism is quite large. I'm going to focus on views that tend towards the maximalist.
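The carrying-capacity point lends itself to a back-of-envelope check. Assuming roughly 3.2e9 base pairs for the human genome and using the post's own 100x figure for the amoeba (both approximations, not measurements from this post), and at most 2 bits per base pair:

```python
import math

def genome_capacity_bits(base_pairs):
    # Each base pair takes one of 4 values: at most log2(4) = 2 bits.
    return base_pairs * math.log2(4)

human_bp = 3.2e9              # approximate human genome size (assumption)
amoeba_bp = 100 * human_bp    # the post's "100x" figure

print(genome_capacity_bits(amoeba_bp) / 8 / 1e9)  # → 80.0 (gigabytes)
```

Even a single cell can, in principle, carry tens of gigabytes of sequence, which is the sense in which a "superhuge" frontloaded genome is not obviously untenable.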

Therefore, because of this initial deposit, it makes sense that phylogenetic change would be sudden instead of gradual. If the genetic information already existed, or at least largely existed, in the original organism, then time wouldn't be the barrier to its coming about. It also means that multiple lineages could lead to the same result. There is no reason to think that there was a single lineage that led to tetrapods, for instance. If there were multiple lineages all carrying basically the same information, there is no reason why there weren't multiple tetrapod lineages. It also explains why we find chimeras much more often than we find organs in transition. If the information was already in the genome, then the organ could come into existence all at once. It didn't need to evolve, except to switch on.

Take the flagellum, for instance. Many people criticize Behe for thinking that the flagellum just popped into existence sometime in history, based on irreducible complexity. That is not the argument Behe is making. Behe's point is that the flagellum, whenever it arose, didn't arise through a Darwinian mechanism. Instead, it arose through a non-Darwinian mechanism. Perhaps all the components were there, waiting to be turned on. Perhaps there is a meta-language that guided the piecing together of complex parts in the cell. There are numerous possible non-Darwinian evolutionary mechanisms, several of which have been experimentally demonstrated. [NOTE: I would define a mechanism as non-Darwinian when the mechanism of mutation biases the mutational probability towards mutations which are potentially useful to the organism.]
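The bracketed definition can be expressed as a sampling rule: a Darwinian mutator draws uniformly over possible mutations, while a non-Darwinian one (in the note's sense) weights the draw toward potentially useful mutations. The categories and weights below are invented purely for illustration:

```python
import random

def mutate(options, weights=None, rng=random):
    # weights=None models a Darwinian mutator: every option equally likely.
    # A weight vector models the note's non-Darwinian case: the draw is
    # biased toward options that are potentially useful to the organism.
    if weights is None:
        return rng.choice(options)
    return rng.choices(options, weights=weights, k=1)[0]

options = ["neutral", "deleterious", "useful"]
rng = random.Random(0)
draws = [mutate(options, weights=[1, 1, 8], rng=rng) for _ in range(1000)]
print(draws.count("useful") / 1000)  # close to 0.8 under this bias
```

Under the uniform (Darwinian) model each category would appear about a third of the time; the bias is what makes the mechanism non-Darwinian by the note's definition.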

Behe's actual view, as I understand it, pushes the origin of information back further. Behe believes that the information came from the original arrangement of matter in the Big Bang. Interestingly, that comports well with the original conception of the Big Bang by Lemaître, who described the universe's original configuration as a "cosmic egg". We think of eggs in terms of ontogeny – a child grows in a systematic fashion (guided by information) to become an adult. The IDists who hold to Common Descent often view the universe that way – it grew, through the original input of information, into an adult form. John A. Davison wrote a few papers on this possibility.

Thus the common ID claims of "sudden appearance" and "fully-formed features" are entirely consistent both with common descent (even fully materialistic) and with non-common-descent versions of the theory, because the evolution is guided by information.

There are also interesting mixes of these theories, such as Scherer's Basic Type Biology. Here, a limited form of common descent is adopted, along with the idea that information is available to guide the further diversification of the basic type along specific lines (somewhat akin to Vavilov's Law). Interestingly, there can also be a common descent interpretation of Basic Type Biology, but I'll leave that alone for now.

Now, you might be saying that the ID form of common descent only involves the origin of life, and therefore has nothing to do with evolution. As I have argued before, abiogenesis actually has a lot to do with the implicit assumptions guiding evolutionary thought. And, as I hope has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism). And, if you take common descent to be true, I would argue that ID makes much better sense of what we see (the transitions seem to happen with some information about where they should go next).

Now, you might wonder why I disagree with the notion of common descent. There are several reasons, but I'll leave you with one I have been contemplating recently. I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in their complete absence. In addition, I think that there are different forms of agency in operation throughout the spectrum of life (I am undecided about whether lower forms of life such as plants and bacteria have anything that could be considered agency, but I think that, say, most land animals do). In any case, humans seem to engage in a kind of agency that is distinct from that of other creatures. Therefore, we are left with the question of the origin of such agency. While common descent in combination with ID can sufficiently answer the origin of information, I don't think it can sufficiently answer the origin of the different kinds of agency.

Comments
Mustela Nivalis:
Unfortunately, I saw nothing that would demonstrate how to objectively identify design in a real biological system.
And what is there to objectively identify illusory design in a real biological system? But I digress. The criterion for inferring design in biology is, as Michael J. Behe, Professor of Biochemistry at Lehigh University, puts it in his book Darwin's Black Box:
"Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components."
He goes on to say:
"Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers."
Joseph
January 14, 2010, 01:37 PM PDT
As far as the list of "evolutionary mechanisms" Allen MacNeill has provided: "evolutionary mechanisms" are a conflation, meaning they are not necessarily blind and undirected. Dr. Spetner goes over this in "Not By Chance," which came out in 1997.

Joseph
January 14, 2010, 01:33 PM PDT
See sections 2.5, 2.6, and 3.1. See also its application to an objectively designed system in 3.4.

johnnyb
January 14, 2010, 01:28 PM PDT
Mustela, Indeed, I don't have them. Nor do I know how they were arrived at. It could be totally fallacious for all I know.

Collin
January 14, 2010, 01:18 PM PDT
Collin at 64, The assertion is that the chance of a chromosome being assembled by undirected, natural processes is 1 in 10^150. That is my understanding of tribune7's assertion as well. I would like to see the actual calculations that support that assertion.

Mustela Nivalis
January 14, 2010, 01:06 PM PDT
johnnyb at 63, "So… are you going to read the paper?" I glanced through it. Unfortunately, I saw nothing that would demonstrate how to objectively identify design in a real biological system. If I missed that information, please summarize it or cite the specific section where it is documented.

Mustela Nivalis
January 14, 2010, 01:04 PM PDT
Aleta: "Chance events always happen in the larger context of natural laws." What are the natural laws that cause a chromosome to form?

tribune7
January 14, 2010, 01:03 PM PDT
Aleta said, "So even though it may be correct to say that the probability of a chromosome 'occurring by chance is greater than 1 in 10^150,' that doesn't mean that the probability of a chromosome occurring through a chain of natural events is greater than 1 in 10^150." I certainly don't understand this reasoning. That would make the first assertion totally meaningless. Do you think that the person who gave us the 1 in 10^150 assertion was assuming that no natural laws existed? Of course not. The assertion is that the chance of a chromosome being assembled by undirected, natural processes is 1 in 10^150.

Collin
January 14, 2010, 01:02 PM PDT
Mustela Nivalis - So... are you going to read the paper?

johnnyb
January 14, 2010, 01:00 PM PDT
johnnyb at 59, "Why must it be quantitative to be objective? Qualitative analysis in chemistry is objective, but it isn't quantitative." In chemistry, the term "qualitative" in "qualitative analysis" has a specific meaning. The testing that follows qualitative analysis is quantitative. I'm looking for the equivalent of that test for ID. What measurement can be made to decide whether or not a particular biological artifact was designed?

Mustela Nivalis
January 14, 2010, 12:26 PM PDT
Aleta at 57, "So even though it may be correct to say that the probability of a chromosome 'occurring by chance is greater than 1 in 10^150,' that doesn't mean that the probability of a chromosome occurring through a chain of natural events is greater than 1 in 10^150." Very well put.

Mustela Nivalis
January 14, 2010, 12:22 PM PDT
tribune7 at 54, "So it seems that chromosome stands as example of a biological entity exhibiting CSI since it has complex specificity and the probability of it occurring by chance is greater than 1 in 10^150." Could you please show your work? What is the objective specification? How, exactly, is CSI calculated? How are you relating a measure of information (CSI) to a probability?

Mustela Nivalis
January 14, 2010, 12:21 PM PDT
Mustela Nivalis - Why must it be quantitative to be objective? Qualitative analysis in chemistry is objective, but it isn't quantitative. Why don't you read the paper and then criticize the method specifically, rather than complaining about it generally?

jasondulle - I know nothing about the contents of the amoeba genome. My only point about the amoeba was simply that the maintenance of a large genome for a common ancestor is not a theoretical problem. However, I will take issue with this statement: "If it did, we would expect for the amoeba to be more complex than humans since it has so much more code." This does not follow. Take, for instance, the Windows installer program. It has _more_ code than Windows, but it is less complex, because its job is to move the Windows operating system to the right location (your hard drive). Taking a biological example, a zygote is less complex than a human, but they both contain the same genetic information. Therefore, if ontogeny is a model of phylogeny (as, for instance, Davison believes), then it makes sense that there would be a lot of unexpressed code waiting for the right time to activate.

johnnyb
January 14, 2010, 11:45 AM PDT
R0b: The reason I said my estimates were low is that they are based on a single base pair needing to change. I don't know the numbers, but my guess is that it is more than that (actually, even more because a nucleotide has four possible values, and I calculated it for 2). The pair that it searches for is the one that increases binding strength to the antigen. The way I determined that this was the target is that the threat is the antigen.

johnnyb
January 14, 2010, 11:39 AM PDT
I think I am repeating what someone else said, but nobody believes that the first chromosome (or its precursors) happened strictly "by chance". Chance events always happen in the larger context of natural laws: chance provides the variation, but the main structures of what happens are provided by the interplay of natural laws. So even though it may be correct to say that the probability of a chromosome "occurring by chance is greater than 1 in 10^150," that doesn't mean that the probability of a chromosome occurring through a chain of natural events is greater than 1 in 10^150.

Aleta
January 14, 2010, 10:57 AM PDT
tribune7, Good point. I never understand it when Darwinists say, "Natural selection is the engine of evolution." That is totally irrational! Natural selection only destroys. It can never create.

Collin
January 14, 2010, 10:55 AM PDT
Mustela, 54 was for you.

tribune7
January 14, 2010, 10:43 AM PDT
"By the way, no biologist would suggest that a modern chromosome came together by chance . . . Allen MacNeill supplied a list of some awhile back."

Two points:
1. Those mechanisms are ultimately effects of chance according to Darwinism, but that doesn't really matter because . . .
2. All those mechanisms depend on the pre-existence of chromosomes.

So it seems that the chromosome stands as an example of a biological entity exhibiting CSI, since it has complex specificity and the probability of it occurring by chance is greater than 1 in 10^150.

tribune7
January 14, 2010, 10:42 AM PDT
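For context, the 1-in-10^150 figure traded back and forth here matches Dembski's universal probability bound. Under a uniform pure-chance model (each of four bases equally likely at each position, which is exactly the modeling assumption Aleta questions elsewhere in the thread), the sequence length at which a specific sequence crosses that bound is a one-line calculation:

```python
import math

# A specific n-base sequence has uniform-chance probability (1/4)**n.
# That drops below 1 in 10^150 once 4**n > 10**150, i.e. n > 150/log10(4).
n_min = math.ceil(150 / math.log10(4))
print(n_min)  # → 250 bases
```

So even a 250-base stretch clears the bound under the uniform model; the dispute in this thread is whether that model describes anything biology actually does.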
Selection is random, but its randomness is once removed. The environment that favors a mutation came into existence either by random processes or by intent of someone/something. It is therefore random. To say otherwise is magical thinking.

Collin
January 14, 2010, 10:41 AM PDT
"Mutations are random, selection is not." One could imagine how selection might favor a new feature, such as an eye or a wing. How does selection favor a single mutation, the first step in a series of mutations that will eventually lead to some tangible benefit? Apparently selection not only isn't random, but it exercises foresight and planning. It sees and protects unexpressed potential until it forms something useful.

ScottAndrews
January 14, 2010, 10:29 AM PDT
johnnyb, I'm afraid I still don't understand your example. What base pair does the process search for, and how did you determine that this base pair is the target?

R0b
January 14, 2010, 10:03 AM PDT
tribune7 at 46, "Mustela – 'Any valid calculation of CSI would need to take into account known evolutionary mechanisms.' And, specifically, what are they?" Allen MacNeill supplied a list of some awhile back. Do you have an example of CSI calculated for a real biological artifact?

Mustela Nivalis
January 14, 2010, 10:02 AM PDT
Collin at 44, "Mustela said, 'By the way, no biologist would suggest that a modern chromosome came together by chance. Any valid calculation of CSI would need to take into account known evolutionary mechanisms.' Evolutionary mechanisms are chance. It's merely chance one step removed. Chance means that there was no intentionality at one end. Putting a pattern between chance and outcome does not make it not chance." Mutations are random, selection is not.

Mustela Nivalis
January 14, 2010, 09:59 AM PDT
Mustela: "Otherwise I won't know if any has evolved." When you compile, you know you are having success in your evolution.

"However, John Koza, among many others, has been doing some interesting work similar to what you describe." Unless you can identify naturally occurring algorithms, you may not use them in your evolution project if you want to be faithful to nature.

tribune7
January 14, 2010, 09:59 AM PDT
johnnyb at 43, "I have my own 'metric' (it's actually currently qualitative, not quantitative, but I still think it is quite usable)." Any metric that purportedly can be used to identify design must be quantitative and objective. If such a metric exists, any observer must be able to measure it and come to the same conclusion (within the bounds of experimental error). CSI has been presented as such a metric. I've read all the literature I can find, but have been unable to turn up a worked example for a real biological system. Do you have any plans to make your qualitative approach more mathematically rigorous?

Mustela Nivalis
January 14, 2010, 09:58 AM PDT
Mustela: "Any valid calculation of CSI would need to take into account known evolutionary mechanisms." And, specifically, what are they?

tribune7
January 14, 2010, 09:53 AM PDT
johnnyb, Thanks for the confirmation on amoebas. While ID theorists believe that "junk DNA" has function, surely not all of it can have function. If it did, we would expect for the amoeba to be more complex than humans since it has so much more code, and yet that's not true. Would you say that the genome of the amoeba got so large due to random mutations (and lots of them)?

jasondulle
January 14, 2010, 09:41 AM PDT
Mustela said, "By the way, no biologist would suggest that a modern chromosome came together by chance. Any valid calculation of CSI would need to take into account known evolutionary mechanisms." Evolutionary mechanisms are chance. It's merely chance one step removed. Chance means that there was no intentionality at one end. Putting a pattern between chance and outcome does not make it not chance.

Collin
January 14, 2010, 09:31 AM PDT
R0b: "then that would indicate that the active information in question reduces the search space from the whole genome to 600 base pairs. Am I on the right track?" Precisely.

Mustela Nivalis: "CSI seems to be the recommended metric for most in the ID community. Do you have another?" I gave an example of active information applied for R0b above. I have my own "metric" (it's actually currently qualitative, not quantitative, but I still think it is quite usable), which I apply to a subsystem of the bacterial flagellum in this paper. The application is in section 3.1. The nice thing about my metric is that it lends itself to applications beyond just determining what is designed and what is not.

johnnyb
January 14, 2010, 09:30 AM PDT
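The search-space bookkeeping in this exchange can be sketched numerically, assuming Dembski and Marks's definition of active information as the log-ratio of assisted to blind success probability. The genome size below is an illustrative assumption, not a value from the thread:

```python
import math

def active_information_bits(blind_space, assisted_space):
    # I_active = log2(p_assisted / p_blind); with uniform searches this
    # equals log2(blind_space / assisted_space).
    return math.log2(blind_space / assisted_space)

genome_bp = 3.0e9   # hypothetical genome size for the blind search
region_bp = 600     # the ~600 bp region from the thread's example

print(round(active_information_bits(genome_bp, region_bp), 1))  # → 22.3
```

On this way of counting, confining mutation to a 600 bp region of a 3-gigabase genome contributes about 22 bits toward finding the target.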
tribune7 at 34, "'My goal in asking is to understand CSI well enough to implement CSI measurement in software, to see if known evolutionary mechanisms are, in fact, unable to generate it.' Start with the symbols used by the programming language of your choice, generate them randomly, when you get something that compiles, latch it and continue. Let us know how you do." I need a quantifiable, implementable definition of CSI first. Otherwise I won't know if any has evolved. However, John Koza, among many others, has been doing some interesting work similar to what you describe.

Mustela Nivalis
January 14, 2010, 09:01 AM PDT
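tribune7's "latch it and continue" scheme can be sketched as a partitioned random search. The toy target string below stands in for "something that compiles"; defining that acceptance test without building the answer into it is the hard part Mustela Nivalis is pressing on, so this only illustrates the latching mechanism itself:

```python
import random

def latching_search(target, alphabet, rng):
    """Guess each position independently; once a position passes the
    acceptance test (here: equality with target), latch it forever."""
    current = [None] * len(target)
    steps = 0
    while None in current:
        steps += 1
        for i, latched in enumerate(current):
            if latched is None:
                guess = rng.choice(alphabet)
                if guess == target[i]:
                    current[i] = guess   # latch and never revisit
    return "".join(current), steps

rng = random.Random(0)
result, steps = latching_search("print(1)",
                                list("abcdefghijklmnopqrstuvwxyz()1"), rng)
print(result)  # → print(1)
```

Because latched positions are never disturbed, the expected number of passes grows roughly with the alphabet size times the log of the target length, rather than exponentially as it would for an all-at-once blind search; that difference is exactly what the latching, and the information carried by the acceptance test, buys.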
