Uncommon Descent Serving The Intelligent Design Community

ID and Common Descent


Many people misunderstand the relationship between Intelligent Design and Common Descent. Some view ID as equivalent to Progressive Creationism (sometimes called Old-Earth Creationism), while others see it as equivalent to Young-Earth Creationism. I have argued before that the core of ID is not about a specific theory of origins. In fact, ID’ers hold a variety of views, including Progressive Creationism and Young-Earth Creationism.

But another category that is often overlooked is those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show that it is in fact a consistent proposition.

I should start by noting that I do not myself hold to the Common Descent proposition. Nonetheless, I think that the relationship of ID to Common Descent has been misunderstood often enough to warrant some defense.

The issue is that most people understand common descent entirely from a Darwinian perspective. That is, they assume that the notions of natural selection and gradualism follow closely from the notion of common descent. However, nothing logically ties these together, especially if you allow for design.

In Darwinism, each feature is a selected accident. Therefore, Darwinian phylogenetic trees often use parsimony as a guide, constructing the tree so that complex features don’t have to evolve more than once.

The ID version of common descent, however, doesn’t have to play by these rules. The ID version of common descent includes a concept known as frontloading – where the designer designed the original organism so that it would have sufficient information for its later evolution. If one allows for design, there is no reason to assume that the original organism must have been simple. It may in fact have been more complex than any existing organism. There are maximalist versions of this hypothesis, where the original organism had a superhuge genome, and minimalist versions (such as Mike Gene’s) where only the basic outlines of common patterns of pathways were present. Some have objected to the idea of a superhuge genome on the basis that it isn’t biologically tenable. However, the amoeba has 100 times as many base pairs as a human, so the carrying capacity of genetic information for a single-celled organism is quite large. I’m going to focus on views that tend towards the maximalist.

Therefore, because of this initial deposit, it makes sense that phylogenetic change would be sudden instead of gradual. If the genetic information already existed, or at least largely existed, in the original organism, then time wouldn’t be the barrier to its coming about. It also means that multiple lineages could lead to the same result. There is no reason to think that there was only one lineage that led to tetrapods, for instance. If there were multiple lineages all carrying basically the same information, there is no reason why there weren’t multiple tetrapod lineages. It also explains why we find chimeras much more often than we find organs in transition. If the information was already in the genome, then the organ could come into existence all at once. It didn’t need to evolve, except to switch on.

Take the flagellum, for instance. Many people criticize Behe for thinking that the flagellum just popped into existence sometime in history, based on irreducible complexity. That is not the argument Behe is making. Behe’s point is that the flagellum, whenever it arose, didn’t arise through a Darwinian mechanism. Instead, it arose through a non-Darwinian mechanism. Perhaps all the components were there, waiting to be turned on. Perhaps there is a meta-language that guided the piecing together of complex parts in the cell. There are numerous non-Darwinian evolutionary mechanisms which are possible, several of which have been experimentally demonstrated. [[NOTE – (I would define a mechanism as non-Darwinian when the mechanism of mutation biases the mutational probability towards mutations which are potentially useful to the organism)]]

Behe’s actual view, as I understand it, pushes the origin of information back further still. Behe believes that the information came from the original arrangement of matter in the Big Bang. Interestingly, that seems to comport well with the original conception of the Big Bang by Lemaître, who described the universe’s original configuration as a “cosmic egg”. We think of eggs in terms of ontogeny – a child grows in a systematic fashion (guided by information) to become an adult. The IDists who hold to Common Descent often view the universe that way – it grew, through the original input of information, into an adult form. John A. Davison wrote a few papers on this possibility.

Thus the common ID claims of “sudden appearance” and “fully-formed features” are entirely consistent with both common descent (even fully materialistic) and non-common-descent versions of the theory, because the evolution is guided by information.

There are also interesting mixes of these theories, such as Scherer’s Basic Type Biology. Here, a limited form of common descent is taken, along with the idea that information is available to guide the further diversification of the basic type along specific lines (somewhat akin to Vavilov’s Law). Interestingly, there can also be a common descent interpretation of Basic Type Biology, but I’ll leave that alone for now.

Now, you might be saying that the ID form of common descent only involves the origin of life, and therefore has nothing to do with evolution. As I have argued before, abiogenesis actually has a lot to do with the implicit assumptions guiding evolutionary thought. And, as hopefully has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism). And, if you take common descent to be true, I would argue that ID makes much better sense of what we see (the transitions seem to happen with some information about where they should go next).

Now, you might wonder why I disagree with the notion of common descent. There are several reasons, but I’ll leave you with one I have been contemplating recently. I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in the complete absence of those two. In addition, I think that there are different forms of agency in operation throughout the spectrum of life (I am undecided about whether the lower forms of life such as plants and bacteria have anything which could be considered agency, but I think that, say, most land animals do). In any case, humans seem to engage in a kind of agency that is distinct from that of other creatures. Therefore, we are left with the question of the origin of such agency. While common descent in combination with ID can sufficiently answer the origin of information, I don’t think it can sufficiently answer the origin of the different kinds of agency.

Comments
OK, I retire, but with some parting remarks. Response 1 — You are asking for certainty and I’m not going to give it to you. For the life of me I see nothing in what I've said as asking for certainty. Response 4 — If real world events are much more complicated than simple pure chance, they are designed. As I've said, this remark makes no sense to me. Going back to earlier posts, this means lightning is designed, for instance - it means virtually everything in the universe is designed, in fact. That use of the word design is far broader than the definition commonly used by ID advocates. Response 3 — Your dismissal of ID is based on you being able to imagine situations in which it fails ... I have clearly stated that I am not rejecting design. I am rejecting your argument for design based on pure chance calculations. But obviously, the point is not getting across.Aleta
January 16, 2010 at 01:56 PM
Aleta -- The heart of my argument, which I don’t believe you have addressed, is that real world events are much more complicated than the simple pure chance Response 1 -- You are asking for certainty and I'm not going to give it to you. Response 2 -- Probability calculations, even simple ones, are a legitimate part of the search for design and you use them whether you care to admit it or not. If valuables are missing after every visit from Joe you will assume design based on probability calculations and respond accordingly. Response 3 -- Your dismissal of ID is based on you being able to imagine situations in which it fails --- which, btw, means it is potentially falsifiable and legitimate science by one common definition. This, no offense, is not much different than a Younger Earther rejecting an old Earth because science cannot conclusively show that radioactive decay was a constant a billion years ago. Last, but not least: Response 4 -- If real world events are much more complicated than simple pure chance, they are designed.tribune7
January 16, 2010 at 01:48 PM
And in response to 186, I'm not interested in taking this conversation to metaphysical discussions about the origin of the universe. We are talking about how we in this universe go about studying the universe. In fact, as I have alluded to, you have no idea whether I think the universe is designed or not, because I'm not interested in having that be part of the discussion. I may be a theist, a deist, a Buddhist, a materialist, or even a front-loading IDist as mentioned in the opening post (remember the opening post?) My focus is narrow: the calculations being offered as support for the design inference are faulty. I have even stated what I think ID advocates should be doing to rectify this situation if they want the probability argument to have traction. So I have not been arguing against design: I have been arguing against a specific argument that purports to support design. There's a significant difference between those two things.Aleta
January 16, 2010 at 01:38 PM
Hmm. I'm thinking this conversation might be winding down because it's getting fairly repetitious. Tribune says, "But otherwise I just see you ignoring the claim I'm making — namely design is a quite reasonable, maybe even the most reasonable, claim for how life came about." I'm not ignoring your claim. I am arguing against your claim with quite a bit of specificity, giving reasons and examples. That is not ignoring. Your claim is that "design is a quite reasonable, maybe even the most reasonable, claim for how life came about." My point is that the reasons you give for this claim are not convincing, and appear, for reasons I have gone to some length to explain, to be faulty. The heart of my argument, which I don't believe you have addressed, is that real world events are much more complicated than the simple pure chance calculations that are offered by ID advocates. This is an argument against your claims - since I am clearly not ignoring your claim, would you like to respond to my point?Aleta
January 16, 2010 at 01:27 PM
Aleta, 185 is for you too, I guess.tribune7
January 16, 2010 at 01:23 PM
R0b-- If the phenomenon in question is not statistically random, why would anyone *not* reject the hypothesis of uniform chance? Because an undesigned universe would come from chance, since the physics can't precede the universe, and this of course would make the physical laws subordinate to chance, which would mean we couldn't ultimately trust the physical laws.tribune7
January 16, 2010 at 01:21 PM
Tribune says, "How can someone who believes in an undesigned universe reject uniform chance?" This remark takes me back to the comment Collin made many posts ago that got me interested in this discussion. The statement makes no sense to me, and I would genuienely like to understand what understandings Tribune has to say such a thing. First, I don't think the topic here has been whether the universe is designed or not (that is a much larger question) - the much narrower question (to which I answer "no") is whether the kind of calculations being offered can tells us anything about whether a particular thing is designed. Furthermore, surely Tribune doesn't mean he thinks that everything happens by uniform chance? Apples fall down all the time, and there is no chance involved in that. Irrespective of whether the universe as a whole has been designed to have the nature it has, within this universe virtually all events happen in part because of lawful natural processes. So I am not rejecting uniform chance as an hypothesis that can apply to some situations - it certainly applies to throwing 10 dice, but I reject the hypothesis that that is relevant to the development of most events in the world. P.S. And in No Free Lunch, despite what Dembski says about chance and necessity working together, the example he uses is just another calculation based on a pure chance arrangement of components.Aleta
January 16, 2010 at 01:19 PM
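Aleta's ten-dice case is easy to make concrete. As an illustrative sketch (not a calculation from the thread itself), the uniform-chance probability of any one specific outcome of ten fair dice is (1/6)^10:

```python
from fractions import Fraction

# Uniform-chance probability of one specific outcome of ten fair dice.
p_specific = Fraction(1, 6) ** 10
print(p_specific)         # 1/60466176
print(float(p_specific))  # about 1.65e-08
```

The point both sides appear to accept is that this hypothesis is only well-defined for cases like dice, where the outcome space and the uniform distribution over it are given in advance.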
Note to Clive Hayden- Do not ban any other evo. Every time you do two or more others just jump in and muddy the waters even more. Banning evos is like fighting mythical monsters...Joseph
January 16, 2010 at 01:16 PM
Aleta -- Do you see anything reasonable in this point I am making? If I were insisting on making a claim of dogma, yes. But otherwise I just see you ignoring the claim I'm making -- namely design is a quite reasonable, maybe even the most reasonable, claim for how life came about.tribune7
January 16, 2010 at 01:14 PM
Aleta -- If you take into account the history of how a thing came into existence as opposed to just a simple look at its present configuration, how do you calculate the number of states required to compute the probability? You would attempt to determine the number of possible operations in our observed universe to serve as a reasonable upper limit on the number of search operations. And if someone should challenge you, say by claiming that there were circumstances that occurred that drastically cut the odds, you would ask what they were, be told and calculate accordingly. And if the person should be unable to tell you what those circumstances were but insist they occurred, you'd just have to shrug and respect that person's faith that accidents reign supreme. But what can be measured, i.e. science, would be on your side.tribune7
January 16, 2010 at 01:11 PM
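For what tribune7 describes here -- counting possible operations in the observed universe as an upper limit on search operations -- Dembski's universal probability bound is the standard worked example. A quick sketch of that arithmetic (the three factors are Dembski's published figures, not mine):

```python
# Dembski's universal probability bound: an upper limit on the number of
# physical events (and hence search operations) in the observable universe.
particles = 10 ** 80            # elementary particles in the observable universe
transitions_per_sec = 10 ** 45  # Planck-time state changes per second
seconds = 10 ** 25              # upper bound on the duration considered

upper_bound = particles * transitions_per_sec * seconds
print(upper_bound == 10 ** 150)  # True
```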
tribune7:
How can someone who believes in an undesigned universe reject uniform chance?
If the phenomenon in question is not statistically random, why would anyone *not* reject the hypothesis of uniform chance? Everybody rejects this hypothesis as an explanation for biological structures.R0b
January 16, 2010 at 01:09 PM
R0b:
Thank you to Mustela, Aleta, and h.pesoj for pointing out that all reported CSI calculations are based on a single null hypothesis, namely uniform chance.
Yet that is false. Ya see in "No Free Lunch"- the book that starts the talk about CSI and calculating- it states that chance and necessity are considered together.Joseph
January 16, 2010 at 01:04 PM
Note: In the above two quotes by Dr. Dembski, "chance hypotheses" include all natural hypotheses, even those that are deterministic. Says Dembski, "Chance as I characterize it thus includes necessity, chance (as it is ordinarily used), and their combination."R0b
January 16, 2010 at 01:03 PM
R0b -- But on the negative side, the best that a uniform-chance-based calculation can accomplish is indicate that the hypothesis of uniform chance should be rejected. How can someone who believes in an undesigned universe reject uniform chance?tribune7
January 16, 2010 at 01:02 PM
Tribune says the glossary tells how to compute the probability upon which CSI is based when it says "a given target zone in a search space, on a relevant chance hypothesis." Yes, but how do you do that? If you take into account the history of how a thing came into existence as opposed to just a simple look at its present configuration, how do you calculate the number of states required to compute the probability? I'll also note that the quoted sentence refers to the "relevant chance hypothesis", which makes me think that the only thing the glossary has in mind is exactly the kind of "pure chance" calculations that I am saying are not relevant because they do not model the real world. So I'm wondering why you or others are not responding to my point that if you want to have a realistic model with which to try and calculate probabilities, you have to take into account both the passage of time and the existence of natural processes by which states slowly change as they progress from a beginning state to an end state, such as I tried to explain with my dice example? Do you see anything reasonable in this point I am making?Aleta
January 16, 2010 at 12:57 PM
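Aleta's dice example is not reproduced in this thread, but the contrast being argued over can be sketched. The following is an illustrative simulation of my own construction, not anyone's actual model: a process that changes state step by step and retains partial results reaches a target configuration in a number of trials nowhere near the pure-chance figure.

```python
import random

random.seed(1)
NUM_DICE = 6 ** 0 * 10  # ten dice; target: all showing six

def stepwise_rolls():
    """Roll one die at a time, keeping each six once it appears."""
    rolls = 0
    for _ in range(NUM_DICE):
        while True:
            rolls += 1
            if random.randint(1, 6) == 6:
                break
    return rolls

# Re-rolling all ten dice together until every one shows six would take
# about 6**10 trials on average, so we report that expectation instead.
print("pure-chance expected trials:", 6 ** NUM_DICE)  # 60466176
print("stepwise rolls in one run:", stepwise_rolls())  # typically around 60
```

Whether such a stepwise process is a fair model of any biological history is exactly what the thread is disputing; the sketch only shows why the two styles of calculation give wildly different numbers.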
Thank you to Mustela, Aleta, and h.pesoj for pointing out that all reported CSI calculations are based on a single null hypothesis, namely uniform chance. On the plus side, this makes CSI calculations tractable in at least two ways: 1. P(E|H) is straightforward to calculate, and 2. It's easy to determine that the CINDE condition is met -- that is, the specification is independent of the event. (Under a hypothesis of random chance, the event is independent of everything.) But on the negative side, the best that a uniform-chance-based calculation can accomplish is indicate that the hypothesis of uniform chance should be rejected. In the case of biological structures, nobody has proposed pure random chance as a hypothesis, so such calculations accomplish nothing. Dr. Dembski warns that in order for a design inference to get going, we must "know enough to determine all the relevant chance hypotheses" and "we must have a good grasp of what chance hypotheses would have been operating to produce the observed event". Have we met these conditions when it comes to calculating the CSI in biological/chemical structures?R0b
January 16, 2010 at 12:55 PM
Mustela -- Design has unique characteristics. . .That’s not a principle, it’s an assertion about an observation. It's an assertion of a principle. Do you accept or reject the claim?tribune7
January 16, 2010 at 12:48 PM
BTW you want to know who should be using the EF? The very people who try to argue against ID. The explanatory filter (EF) is a process that can be used to reach an informed inference about an object or event in question. The EF mandates a rigorous investigation be conducted in an attempt to figure out how the object/ structure/ event in question came to be (see Science Asks Three Basic Questions, question 3). So who would use such a process? Mainly anyone and everyone attempting to debunk a design inference. This would also apply to anyone checking/ verifying a design inference. As I said in another opening post, Ghost Hunters use the EF. The EF is just a standard operating procedure used when conducting an investigation in which the cause is in doubt or needs to be verified.Joseph
January 16, 2010 at 12:45 PM
h.pesoj -- Tell me, given that no biologist would suggest that a modern chromosome came together by chance -- If they reject design it's either chance or handwaving. What are the forces that biologists say cause the chromosome, modern or otherwise, to come together? Why do they reject the possibility of design?tribune7
January 16, 2010 at 12:45 PM
backwards me wrongly states:
It’s amazing that for all the claims that are made for the EF and how it proves the “intelligent designer” exists
The EF is not about "proof". The EF is about an inference- an inference based on the evidence- all the evidence and the context- PLUS our current understanding on cause and effect. And as with ALL scientific inferences it can be either confirmed or falsified with future research/ knowledge. That possibility sure didn't stop Einstein.
The fact that you can’t provide something as simple as the value for the CSI/FSCI in *anything at all* kind of undermines your argument.
Unfortunately for you I have done exactly that. I have also taken the time to tell others how to do it. I have done that in this very thread. And others have done so in various pro-ID writings. But anyway how do you think that archaeologists determine an artifact from a rock? Do you think they flip a coin? How about forensic scientists- do they flip a coin to determine whether or not a crime has been committed? Wise up trolls. Humans have tried and true design detection techniques- used daily. We have a good amount of experience/ observations of cause and effect. All your position has is the refusal to accept the design inference because you just refuse to understand ID. As I said in the end it doesn't matter about CSI. All the trolls have to do is to actually start supporting their position and ID would fade away.Joseph
January 16, 2010 at 12:42 PM
Aleta, It tells how to compute CSI given that you know a probability, but it does not tell how to compute the probability upon which the CSI is based. No, it tells you specifically how to compute the probability i.e. a given target zone in a search space, on a relevant chance hypothesis. CJYman summed it up well in Post 126.tribune7
January 16, 2010 at 12:41 PM
tribune7 at 158, Mustela, sorry I missed 138. No problem at all -- this thread is getting busy. Regarding Tenet 1: I do believe design has unique characteristics, that this is obvious and it should be considered axiomatic . . ."You’ve just attempted to assume away the part of your claim that is most difficult to support. What, exactly, are these unique characteristics? How, exactly, can they be measured?" It isn’t assuming away anything. It’s stating a starting principle i.e. Design has unique characteristics. That's not a principle, it's an assertion about an observation. Once we agree upon that we can start discussing what those characteristics might be. There's no reason to assert it in the first place if you haven't observed any characteristics that uniquely identify design. What, exactly, are the characteristics that you believe uniquely identify design?Mustela Nivalis
January 16, 2010 at 12:31 PM
P.S I note that in both the glossary article on CSI and on the EF, the only examples are of the same kind as I have been objecting to, so there is nothing new for this discussion in those two articles.Aleta
January 16, 2010 at 10:51 AM
Tribune at 156: "Aleta, did you read the description of CSI in the glossary?" Yes. It tells how to compute CSI given that you know a probability, but it does not tell how to compute the probability upon which the CSI is based.Aleta
January 16, 2010 at 10:47 AM
I guess, but it would be easier for you to scroll to the top of the page, click on the link and read it yourself.
I see no such link to an example of the EF in action. And I take it then that you cannot provide a list of items that have passed/been rejected by the EF. Tell me, given that no biologist would suggest that a modern chromosome came together by chance why do you continue to claim that the probability of it happening by chance is relevant to anything at all?
The smallest chromosome is that of the Candidatus Carsonella ruddii, a bacterium, at 159,662 base pairs. For the four bases of DNA to arrange themselves by chance into a specific sequence of 159,662 base pairs is far, far beyond the UPB. I’d give you a number but my calculator doesn’t go that high.
Tell me, for this to be relevant don't you have to know what is actually happening when the bases "arrange themselves by chance"? Is it a machine trying each combination one by one? When you talk about chromosomes coming together by chance, what process do you imagine is happening? A big mixing vat with all the parts sloshing around? A line of chemicals on a belt, moving step by step? What is happening?h.pesoj
January 16, 2010 at 10:19 AM
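As an aside, the number tribune7's calculator couldn't reach is easy to pin down on a log scale. A quick sketch, taking the 159,662 bp figure quoted above at face value (this is only the size of the sequence space, which, as h.pesoj's question makes clear, settles nothing about what process is actually operating):

```python
import math

BASE_PAIRS = 159_662         # figure quoted in the comment above
sequences = 4 ** BASE_PAIRS  # exact; Python ints are arbitrary-precision

# Order of magnitude via logarithms rather than printing ~96,000 digits.
digits = math.floor(BASE_PAIRS * math.log10(4)) + 1
print(f"4^{BASE_PAIRS} has {digits} decimal digits")  # 96127 digits

UPB_EXPONENT = 150  # Dembski's universal probability bound is 1 in 10^150
print(digits > UPB_EXPONENT)  # True
```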
h.pesoj -- Would it be possible for you to run us through an example of the usage of the EF? I guess, but it would be easier for you to scroll to the top of the page, click on the link and read it yourself.tribune7
January 16, 2010 at 10:08 AM
Joseph
To support my last comment I offer:
Offer all the quotes you want. The fact that you can't provide something as simple as the value for the CSI/FSCI in *anything at all* kind of undermines your argument. Or a list of biological items ordered by complexity. And quoting books is fine, but don't you have any papers or articles from peer reviewed sources to quote?h.pesoj
January 16, 2010 at 10:07 AM
Tribune7
Rigor is subjective but for an application to a biological system or component look under EF Explanatory Filter.
Would it be possible for you to run us through an example of the usage of the EF? If not, what artifacts has the EF determined are designed? And not designed? Is there a list? There must be more than one example available. Or will I have to "buy the book" (just tell me the page number as it goes)? Is there a difference in how you apply the EF to a "biological system" and a "component"? What is it? I've looked at the entry in the FAQ. I saw no list.h.pesoj
January 16, 2010 at 10:00 AM
Mustela -- The glossary does not define CSI with sufficient rigor to apply it to a real biological system or component, Rigor is subjective but for an application to a biological system or component look under EF Explanatory Filter.tribune7
January 16, 2010 at 09:55 AM
To support my last comment I offer:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
In the preceding and proceeding paragraphs William Dembski makes it clear that biological specification is CSI- complex specified information. In the paper "The origin of biological information and the higher taxonomic categories", Stephen C. Meyer wrote:
Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information--that is, specified complexity from mere complexity. This review will use this term as well.
In order to be a candidate for natural selection a system must have minimal function: the ability to accomplish a task in physically realistic circumstances.- M. Behe page 45 of “Darwin’s Black Box”
He goes on to say:
Irreducibly complex systems are nasty roadblocks for Darwinian evolution; the need for minimal function greatly exacerbates the dilemma. – page 46
Joseph
January 16, 2010 at 09:49 AM