
Can we all agree on specified complexity?

Amid the fog of recent controversies, I can discern a hopeful sign: the key figures in the ongoing debate over specified complexity on Uncommon Descent are actually converging in their opinions. Allow me to explain why.

Winston Ewert’s helpful clarifications on CSI

In a recent post, ID proponent Winston Ewert agreed that Elizabeth Liddle had a valid point in her criticisms of the design inference, but then went on to say that she had misunderstood what the design inference was intended to do (emphases mine):

She has objected that specified complexity and the design inference do not give a method for calculating probabilities. She is correct, but the design inference was never intended to do that. It is not about how we calculate probabilities, but about the consequences of those probabilities. Liddle is complaining that the design inference isn’t something that it was never intended to be.

He also added:

…[T]he design inference is a conditional. It argues that we can infer design from the improbability of Darwinian mechanisms. It offers no argument that Darwinian mechanisms are in fact improbable. When proving a conditional, we are not concerned with whether or not the antecedent is true. We are interested in whether the consequent follows from the antecedent.

In another post, Winston Ewert summarized his thoughts on specified complexity:

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

Winston Ewert concluded that “the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable.”

To which I would respond: hear, hear! I completely agree.

What about Ewert’s claim that “CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable”? He is correct, if by “CSI and Specified complexity” he simply means the concepts denoted by those terms. If, however, we are talking about the computed probability of the bacterial flagellum having emerged via unguided processes, then of course this number can be used to support a design inference: if the probability in question is low enough, then the inference to an Intelligent Designer becomes a rational one. Ewert obviously agrees with me on this point, for he writes that “Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.”

In a recent post, I wrote that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).” Immediately afterwards, I added that in order to calculate the specified complexity of an object, we first require “the probability of producing the object in question via ‘Darwinian and other material mechanisms.'” I then added that “we compute that probability.” The word “compute” makes it quite clear that without that probability, we will be unable to infer that a given object was in fact designed. I concluded: “To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process” (italics added).

Imagine my surprise, then, when I discovered that some readers had been interpreting my claim that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold)” as if I were arguing for a design inference on the basis of some pre-specified numerical value for CSI! Nothing could be further from the truth. To be quite clear: I maintain that the inference that biological organisms (or structures, such as proteins) were designed is a retrospective one. We are justified in making this inference only after we have computed, on the basis of the best information available to us, that the probability of these organisms (or structures) having emerged via unguided processes – in which I include both random changes and the non-random winnowing effect of natural selection – falls below a certain critical threshold of 1 in 2^500 (or roughly, 1 in 10^150). There. I cannot be clearer than that.
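For concreteness, here is what that threshold amounts to numerically. The short Python sketch below is purely illustrative (the numbers, not the code, are what matter); it simply confirms that the 500-bit cutoff and the 1 in 10^150 figure are the same bound expressed in different notation:

    from math import log10, log2

    # Dembski's universal probability bound: 1 in 2^500, roughly 1 in 10^150.
    threshold_bits = 500
    print(f"2^{threshold_bits} = 10^{threshold_bits * log10(2):.1f}")  # 10^150.5

    # Equivalently: a design inference requires a computed probability p
    # of unguided origin satisfying -log2(p) >= 500, i.e. p <= 2^-500.
    p = 2.0 ** -500
    print(f"p = {p:.3e}")                      # ~3.055e-151
    print(f"-log2(p) = {-log2(p):.0f} bits")   # 500 bits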

So I was heartened to read, in a recent post by Barry Arrington, that Keith S had recently endorsed a form of design inference, when he wrote:

To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don’t exclaim “Design!” after every 500 coin flips. The missing ingredient is the specification of the target T.

Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly.

That certainly sounds like a design inference to me.

In a follow-up comment on Barry Arrington’s post, Keith S went on to point out:

…[I]n that example, I am not calculating CSI and then using it to determine that something fishy is going on. Rather, I have to determine that something fishy is going on first (that is, that P(T|H) is extremely low under the chance hypothesis) in order to attribute CSI to it.

To which I would respond: you’re quite right, Keith S. That’s what I’ve been saying and what Winston Ewert has been saying. It seems we all agree. We do have to calculate the probability of a system emerging via random and/or non-random unguided processes, before we impute a high level of CSI to the system and conclude that it was designed.
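To make that order of operations concrete, here is Keith S’s coin example worked through in a few lines of Python. This is my own illustrative sketch: I use the bare measure -log2 P(T|H) and omit Dembski’s correction factor for specificational resources, so the numbers are deliberately simplified:

    from fractions import Fraction
    from math import log2

    # Chance hypothesis H: 500 independent flips of a fair coin.
    # Specification T: 250 heads followed by 250 tails -- a single
    # sequence, so its probability under H is exactly 2^-500.
    n_flips = 500
    p_T_given_H = Fraction(1, 2) ** n_flips      # exact: 1/2^500

    # Step 1 (done first): compute P(T|H) under the chance hypothesis.
    # Step 2 (done second): attribute specified complexity, in bits.
    sc_bits = -log2(float(p_T_given_H))          # = 500.0

    print(f"P(T|H) = {float(p_T_given_H):.3e}")  # ~3.055e-151
    print(f"specified complexity = {sc_bits:.0f} bits")

The ordering is the whole point: the 500-bit figure is an output of the probability calculation, not an input to it.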

CSI vs. irreducible complexity: what’s the difference?

In a subsequent comment, Keith S wrote:

I think it’s instructive to compare irreducible complexity to CSI in this respect.

To argue that something is designed because it exhibits CSI is circular, because you have to know that it is designed before you can attribute CSI to it.

To argue that something is designed because it is irreducibly complex is not circular, because you can determine that it is IC (according to Behe’s definition) without first determining that it is designed.

The problem with the argument from IC is not that it’s circular — it’s that IC is not a barrier to evolution.

For the record: the following article by Casey Luskin over at Evolution News and Views sets forth Professor Mike Behe’s views on exaptation, which are that while it cannot be absolutely ruled out, its occurrence is extremely improbable, even for modestly complex biological features. Professor Behe admits, however, that he cannot rigorously quantify his assertions, which are based on his professional experience as a biochemist. Fair enough.

The big difference between CSI and irreducible complexity, then, is not that the former is circular while the latter is not, but that CSI is quantifiable (for those systems where we can actually calculate the probability of their having emerged via unguided random and/or non-random processes) whereas irreducible complexity is not. That is what makes CSI so useful, when arguing for design.
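For readers who want the quantification spelled out: in Dembski’s 2005 paper, Specification: The Pattern That Signifies Intelligence, the specified complexity of a pattern T under a chance hypothesis H is given (in my transcription) by:

    χ = –log2 [ 10^120 × φ_S(T) × P(T|H) ]

where φ_S(T) counts the specificational resources (the number of patterns at least as easy to describe as T) and 10^120 caps the probabilistic resources of the observable universe; χ > 1 is the condition for inferring design. Notice that every term inside the brackets presupposes P(T|H), which is exactly Ewert’s point.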

Does Dr. Dembski contradict himself? I think not

Keith S claims to have uncovered a contradiction between the following statement by leading Intelligent Design advocate Dr. William Dembski:

Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.

and this statement of his:

It is CSI that Michael Behe has uncovered with his irreducibly complex biochemical machines. It is CSI that for cosmologists underlies the fine-tuning of the universe and that the various anthropic principles attempt to understand.

I don’t see any contradiction at all here. In the first quote, Dr. Dembski is cautiously pointing out that the inference that the bacterial flagellum was designed hinges on probability calculations, which we do not know for certain to be correct. In the second quote, he is expressing his belief, based on his reading of the evidence currently available, that these calculations are in fact correct, and that Nature does in fact exhibit design.

Dembski and the Law of Conservation of Information

Keith S professes to be deeply puzzled by Dr. Dembski’s Law of Conservation of Information (LCI), which he finds “murky.” He is especially mystified by the statement that neither chance nor law can increase information.

I’d like to explain LCI to Keith S in a single sentence. As I see it, its central insight is very simple: when all factors are taken into consideration, the probability of an event’s occurrence does not change over the course of time, until it actually occurs. In other words, if the emergence of life in our universe was a fantastically improbable event at the time of the Big Bang, then it was also a fantastically improbable event 3.8 billion years ago, immediately prior to its emergence on Earth. And if it turns out that the emergence of life on Earth 3.8 billion years ago was a highly probable event, then we should say that the subsequent emergence of life in our universe was highly probable at the time of the Big Bang, too. Chance doesn’t change probabilities over the course of time; neither does law. Chance and law simply provide opportunities for the probabilities to be played out.

Someone might argue that we can think of events in human history which seemed highly improbable at time t, but which would have seemed much more probable at a later time t + 1. (Hitler’s rise to power in Germany would have seemed very unlikely in January 1923, but very likely in January 1933.) But this objection misses the point. Leaving aside the point that humans are free agents, a defender of LCI could reply that when all factors are taken into consideration, events that might seem improbable at an earlier time can in fact be demonstrated to have a high probability of occurring subsequently.
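One way to make this reading precise (my gloss, not Dembski’s formal statement) is via the law of total probability. If E is the event of interest and S_1, …, S_n are the possible states of the world at some intermediate time t, then:

    P(E) = P(E|S_1)·P(S_1) + P(E|S_2)·P(S_2) + … + P(E|S_n)·P(S_n)

Learning which S_i actually obtains at t (Germany in January 1933, say) can raise or lower the conditional probability P(E|S_i), but the unconditional P(E), the probability “when all factors are taken into consideration,” already averages over every path. Chance and law merely select which S_i obtains; they do not alter the average.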

Making inferences based on what you currently know: what’s the problem with that?

Certain critics of Intelligent Design are apt to fault ID proponents for making design inferences based on what scientists currently know. But I see no problem with that, as long as ID proponents declare that they would be prepared to cheerfully revise their opinions, should new evidence come to light which overturns currently accepted beliefs.

I have long argued that Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, whose argument I summarized in my recent post, Barriers to macroevolution: what the proteins say, demonstrates beyond reasonable doubt that unguided mechanisms could not have given rise to the protein folds that we find in living creatures’ body proteins, in the space of just four billion years. I have also pointed out that Dr. Eugene Koonin’s peer-reviewed article, The Cosmological Model of Eternal Inflation and the Transition from Chance to Biological Evolution in the History of Life (Biology Direct 2 (2007): 15, doi:10.1186/1745-6150-2-15), makes a very strong case that the probability of a living thing capable of undergoing Darwinian evolution – or what Dr. Koonin refers to as a coupled translation-replication system – emerging in our observable universe during the course of its history is astronomically low: 1 in 10^1,018 is Dr. Koonin’s estimate, using a “toy model” that makes deliberately optimistic assumptions. Finally, I have argued that Dr. Robin Collins’ essay, The Teleological Argument, rules out the infinite multiverse hypothesis which Dr. Koonin proposes in order to explain the unlikely emergence of life in our universe: as Dr. Collins argues, a multiverse would need to be specially fine-tuned in order to produce even one universe like our own. If Dr. Axe’s and Dr. Koonin’s estimates are correct, and if we cannot fall back on the hypothesis of a multiverse in order to shorten the odds against life emerging, then the only rational inference we can make, based on what we currently know, is that the first living thing was designed, and that the protein folds we find in living creatures were also designed.
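A probability of 1 in 10^1,018 is so small that it cannot even be represented as an ordinary floating-point number, so any comparison with the 1 in 10^150 bound has to be done with exponents. A brief Python illustration, using the figures quoted above (my own sketch):

    # 10^-1018 underflows an IEEE double entirely:
    print(10.0 ** -1018)                  # 0.0

    # So compare exponents in log10 space instead.
    log10_koonin = -1018    # Koonin's toy-model estimate: 1 in 10^1,018
    log10_bound = -150      # universal probability bound: ~1 in 10^150
    print(log10_koonin < log10_bound)     # True: far below the bound
    print(log10_bound - log10_koonin)     # short by 868 orders of magnitude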

Now, Keith S might object that these estimates could be wrong – and indeed, they could. For that matter, the currently accepted age of the universe (13.798 billion years) could be totally wrong too, but I don’t lose any sleep over that fact. In everyday life, we make decisions based on what we currently know. If Keith S wants to argue that one can reasonably doubt the inference that living things were designed, then he needs to explain why the estimates I’ve cited above could be mistaken – and by a very large margin, at that.

Recently, Keith S has mentioned a new book by Dr. Andreas Wagner, titled The Arrival of the Fittest: Solving Evolution’s Greatest Puzzle. I haven’t read the book yet, but let me say this: if the book makes a scientifically plausible case, using quantitative estimates, that life in all its diversity could have emerged on Earth over the space of just 3.8 billion years, then I will cheerfully change my mind and admit I was wrong in maintaining that it had to have been designed. As John Maynard Keynes famously remarked, “When the facts change, I change my mind. What do you do, sir?”

For that matter, I try to keep an open mind about the recent discovery of soft tissue in dinosaur bones (see here and here). Personally, I think it’s a very odd finding, which is hard to square with the scientifically accepted view that these bones are millions of years old, but at the present time, I think the preponderance of geological and astronomical arguments in favor of an old Earth is so strong that this anomaly, taken alone, would be insufficient to overthrow my belief in an old cosmos. Still, I could be wrong. Science does not offer absolute certitude, and it has never claimed to.

Conclusion

To sum up: statements about the CSI of a system are retrospective, and should be made only after we have independently calculated the probability of a system emerging via unguided (random or non-random) processes, based on what we currently know. After these calculations have been performed, one may legitimately infer that the system was designed – even while admitting that should subsequent evidence come to light that would force a drastic revision of the probability calculations, one would have to revise one’s views on whether that system was designed.
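For anyone who prefers to see that procedure stated as an algorithm, here it is reduced to a sketch. This is my paraphrase, not an official ID calculation, and the function name is my own; note that nothing in it computes the hard part, which is the probability estimate itself:

    from math import log2

    def design_inference(p_unguided, threshold_bits=500):
        """p_unguided: best current estimate of the probability that the
        system arose via unguided (random or non-random) processes.
        The estimate -- and hence the verdict -- is revisable."""
        sc_bits = -log2(p_unguided)    # specified complexity, in bits
        if sc_bits > threshold_bits:
            return "design inferred (pending better probability estimates)"
        return "no design inference warranted"

    print(design_inference(2.0 ** -520))  # past the 500-bit bound -> design
    print(design_inference(1.0e-30))      # only ~100 bits -> no inference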

Are we all on the same page now?

Comments
Actually, with all due fairness, I can see where there could be confusion here (given charitable interpretations). Adapa has said that it is considered a scientific fact that unguided processes can produce CSI biodiversity; he isn't claiming that it is a fact that it did, only that it can. What he doesn't seem to realize is that this is all anyone is asking him to support. I'm not asking him (or any Darwinist) to prove ID wasn't involved; I'm just asking him to support what he has claimed - that it is a conclusive, scientifically-demonstrated fact that unguided processes can generate CSI biodiversity. Proving that ID wasn't involved in creating something is, of course, impossible. It's hard to believe that someone could construe anything said here as having that meaning, but, principle of charity requires it.William J Murray
November 22, 2014 at 1:38 PM PDT
William J Murray: “It’s easy to get them to contradict themselves or blunder into making the most ridiculous strings of statements.”
You mean like when you made this idiotic illogical claim?
"Even if we posit arguendo that there is plenty of positive evidence that natural processes “can” produce that CSI, it is not known that it did. It is a historical, abductive, theoretical inference at best, an assumption at worst."
Or when you told this blatant lie?
"Adapa: Science has conclusive demonstrated that evolution is unguided"
Keep showing us how little honesty means to you WJM.Adapa
November 22, 2014 at 1:28 PM PDT
Cantor @324: It's when they are immune to logic that you can get them to say the darnedest things! It's like they're programmed to respond in certain ways to certain key phrases, without regard to any kind of consistency or logical coherence. It's easy to get them to contradict themselves or blunder into making the most ridiculous strings of statements. You caught a really good one there! I'm going to spread it around a little bit.William J Murray
November 22, 2014 at 1:21 PM PDT
StephenB:
I am defining CSI or specified complexity as the presence of functional parts (complexity) combined into a unified whole (Specificity), such that all the parts are necessary for the function. The only known cause of that phenomenon is intelligent agency. In this way, we can draw a direct inference to design without passing through RV+NS.
Stephen, Don't call it CSI. That acronym is already taken. Come up with your own acronym, like gpuccio and KF. It's a UD tradition.keith s
November 22, 2014 at 12:56 PM PDT
StephenB: “Here is the order of events.”
When are you going to tell WJM his logic is all wrong? Why do you guys keep contradicting each other then running away when the contradictions are pointed out?Adapa
November 22, 2014 at 12:06 PM PDT
Here is the order of events. Adapa
there is plenty of positive evidence natural processes can produce the complexity we see in living forms. It’s known well enough to be considered fact by the scientists who actually study and understand the processes.
WJM
Still waiting for Adapa to direct us to where this “fact” has been established. Link and a pertinent quote, please, Adapa? Shouldn’t be hard for you to support a known scientific fact. Right?
Adapa
Sorry William J Murray, my bad. I forgot you’re just an armchair philosopher and are too technically incompetent and/or lazy to research any science for yourself.
This is the way Adapa always responds to an intellectual challenge. Evade and insult, evade and insult, evade and insult. That is the way he responded to me @254. He never engages on the merits of the argument---never. Adapa is simply incapable of rational thought.StephenB
November 22, 2014 at 12:03 PM PDT
192 Adapa November 20, 2014 at 10:01 am: “there is plenty of positive evidence natural processes can produce the complexity we see in living forms.”
204 Adapa November 20, 2014 at 11:37 am: “It’s known well enough to be considered fact by the scientists who actually study and understand the processes.”
321 Adapa November 22, 2014 at 7:49 am: “Nothing in there about unguided is a fact.”
. @WJM: Adapa is immune to logic. It is not possible to have a rational argument with him. The only way his comment makes any sense is if he is admitting that materialism is an a priori axiom and not a fact. .cantor
November 22, 2014 at 8:38 AM PDT
321 Adapa November 22, 2014 at 7:49 am You told a big fat lie
I knew it! Adapa is a five-year-old child! .cantor
November 22, 2014 at 8:06 AM PDT
306 keith s November 21, 2014 at 5:58 pm we’re both ID critics, so we must be the same person?
There's "mountains of evidence". .cantor
November 22, 2014 at 8:04 AM PDT
William J Murray: “Adapa’s link goes to a google search ‘evidence for evolution’. That’s not what I asked for; I asked for evidence supporting his claim that evolution is unguided and that the unguided nature of evolution is considered a scientific fact.”
Shame on you WJM. Not only are you still stupid enough to demand someone prove a negative, now you've resorted to lying about what was actually said. Here's my words:
Adapa: "there is plenty of positive evidence natural processes can produce the complexity we see in living forms. It’s known well enough to be considered fact by the scientists who actually study and understand the processes.
Nothing in there about unguided is a fact. You told a big fat lie to try and save face. Didn't work. Seems it always comes to that with you when you get shown up.Adapa
November 22, 2014 at 5:49 AM PDT
Adapa said:
Sorry William J Murray, my bad. I forgot you’re just an armchair philosopher and are too technically incompetent and/or lazy to research any science for yourself. Here, this should help.
Adapa's link goes to a google search "evidence for evolution". That's not what I asked for; I asked for evidence supporting his claim that evolution is unguided and that the unguided nature of evolution is considered a scientific fact. I also asked that he configure his response with links to actual scientific research AND pertinent quotes from such research (to avoid link and literature bluffing). Care to try again, Adapa?William J Murray
November 22, 2014 at 4:49 AM PDT
AR: This from 309 caught my eye:
Durston, Axe (and KF) seem to assume that because a protein has not turned up yet in any organism, it must be non-functional. This is the cardinal error in probability assumptions about the richness of the domain of as-yet-unknown proteins.
I note that to function as a rule a protein must fold and fit. Folding stability is the first criterion being applied. And, the objection you raise is therefore not a matter of assumptions. Folding or not is empirically tested and shown across the world of life. Remember prions and sickle cells as indicators. Next, the functionality challenge on origin of new body plans involves proteins that must assemble and fold, activate and agglomerate chains to build nanomachines, or tissues, then organs, then integrated systems. Folding is a lower complexity, least unlikely threshold criterion. And already, it highlights isolated islands of function facing sparse search on atomic and temporal resources. I hate to say it but you are beginning to sound like the Marxists with wheels on wheels till the main wheels fell off and the system collapsed at the end of the ’80s. KFkairosfocus
November 22, 2014 at 3:18 AM PDT
MT, seems to be cross-threaded, I copy back here, the proper home. In any case, the random search on average result is about searching in W for isolated zones T with maximally sparse search. On the whole, given the blind search for [a golden] search challenge . . . which has to come from the set of subsets of W i.e. from the power set of cardinality 2^W . . . we have no right to expect to find a golden search blindly that drastically outperforms the overwhelmingly likely fail on a blind reasonably random sparse search of W. Of course if one may impose successive "halves" with elimination of target in one half, the search tries to success drops impressively. Unfortunately, the sparseness imposed by W: atomic and temporal resources of sol system or the observed cosmos preclude half-break partitioned successive search. Neither the random dust nor a random walk nor a combination offer any advantages.) KF PS: I am too busy RW to try to further engage here, I leave that to those already present.kairosfocus
November 22, 2014 at 3:02 AM PDT
William J Murray: “Still waiting for Adapa to direct us to where this ‘fact’ has been established. Link and a pertinent quote, please, Adapa? Shouldn’t be hard for you to support a known scientific fact. Right?”
Sorry William J Murray, my bad. I forgot you're just an armchair philosopher and are too technically incompetent and/or lazy to research any science for yourself. Here, this should help: Evidence for evolutionAdapa
November 21, 2014 at 9:30 PM PDT
rhampton7
Your definition is too broad, as it would cover the creation of heavy elements by way of nuclear fusion in stars.
No, it would not. What observable and specific function do the heavy elements serve? How do those parts interact? What is the relationship of the part to the whole? What would happen if one of the parts was missing?StephenB
November 21, 2014 at 8:37 PM PDT
Moose Dr:
Actually it is my opinion that we cannot agree on CSI because the ID community (of which I am a member) has taken too large a leap when defining CSI. The result is that the Dembskian definition is circular. The question of how CSI is made must be separated from the definition of CSI. The step of proving that RM+NS must be a subsequent step, a subsequent case. ..... I have given a clear method of establishing the CSI of a thing — especially a digitizable specification such as is found in DNA. To establish the complexity, you begin by adding up the number of bits of data — simple enough. You then subtract all bits of data that are superfluous — if the status of the bit doesn’t functionally diminish the specification, its bit can be subtracted. We now have a digital value for complexity. Simple enough?
The most puzzling question is (still) how information is specified by DNA. Much depends on which chromosome territory a given gene is located, which in transposition can change location even to another chromosome (most easily after chromosomes uncoil then adjacents intermingle to reestablish necessary connections from one to another). More information: https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/#comment-530765 The ID community can be thankful that what it has are none the less pieces to a puzzle that antiquates the model the Darwinian community is stuck with. Besides, William Dembski (and faithful Salvador) got the UD ball rolling by stirring things up real good with even a Charles Darwin doll with their head in a vise and a fart noise Dover cartoon parody of Judge Jones and others. Oh I so well remember the early days of UD. But making UD a place mainstream academia had to keep an eye on is very valuable to have when over the years the level of discussion slowly leads to scientifically revolutionary theory. That's sure one for the science history books. And its obligatory weird story is only a plus! The wacky history of cell theory - Lauren Royal-Woods https://www.youtube.com/watch?v=4OpBylwH9DU From what I can see KF and others all have useful ideas in one area or another of the overall problem that ends up wiring together a circuit to rival the intelligence of the human brain. Only difference is one has billions of years behind and likely ahead of it and works slowly, while the brain in our skull has to work faster for moment to moment actions that are made during one lifetime.Gary S. Gaulin
November 21, 2014 at 7:56 PM PDT
Me_Think: Given that ID is based on ‘improbabilities’ of evolutionary process, isn’t it ironic that IDers are on a side which is far more improbable than Evolutionary + NS process?
Mung: No
Me_Think: No? Is it because yours is an ideological stand or because you don’t agree with either Dembski or the binomial calculation @ 270 which shows you would need thousands of ID agents for ID to function?
Mung: No, because it's not ironic.
Mung: No, because no one knows what the "improbabilities" of evolutionary processes are.
Mung: No, because your claim that IDers are on a side which is far more improbable than Evolutionary + NS process is not something you can substantiate.
Need more?Mung
November 21, 2014 at 6:40 PM PDT
Mung @ 308
Me_Think: Given that ID is based on ‘improbabilities’ of evolutionary process, isn’t it ironic that IDers are on a side which is far more improbable than Evolutionary + NS process? Mung: No.
No? Is it because yours is an ideological stand or because you don't agree with either Dembski or the binomial calculation @ 270 which shows you would need thousands of ID agents for ID to function?Me_Think
November 21, 2014 at 6:08 PM PDT
StephenB, Your definition is too broad, as it would cover the creation of heavy elements by way of nuclear fusion in stars (no intelligent intervention required).rhampton7
November 21, 2014 at 5:48 PM PDT
SB: In all known cases where CSI is produced in an object, an intelligent agent was the cause. This object contains CSI. Therefore, this object was probably designed.” Moose Dr.
Stephen B, the above case strongly resembles the case that should be made.
Yes.
I would think, however, that one should specifically reference the candidate alternative source of CSI — RM+NS.
In this context, it isn't necessary to even consider RM + NS. If, in the past, it was always the case that CSI was the product of an intelligent agent, then it follows that any future object that contains CSI is probably designed. It's a clear, straightforward, non-circular argument. The conclusion about "this" object is not contained in the premise.
However, to prove that RM+NS cannot produce CSI, you cannot define CSI as “information that could not have been produced by RM+NS”.
I don't need to prove that RM+NS cannot produce CSI. Remember, I began the argument with a scientific observation about intelligent causes, not an assumption or calculation involving natural causes. We have all observed humans building machines with CSI, languages with CSI, and computer programs with CSI.
Rather, the definition of CSI must reside outside of its cause so that the validity of the cause can be considered.
I am defining CSI or specified complexity as the presence of functional parts (complexity) combined into a unified whole (Specificity), such that all the parts are necessary for the function. The only known cause of that phenomenon is intelligent agency. In this way, we can draw a direct inference to design without passing through RV+NS.StephenB
November 21, 2014 at 5:41 PM PDT
CSI does not and cannot exclude Darwinian and other non-natural processes. Indeed. A most pointless concept! :)Alicia Renard
November 21, 2014 at 4:29 PM PDT
Moose Dr (November 21, 2014 at 11:27 am)
Alicia Renard (277) “the only obvious point to emerge is that there is no clear concept of what CSI might be”
Actually it is my opinion that we cannot agree on CSI because the ID community (of which I am a member) has taken too large a leap when defining CSI. The result is that the Dembskian definition is circular. The question of how CSI is made must be separated from the definition of CSI. The step of proving that RM+NS must be a subsequent step, a subsequent case.
Indeed. It is rather a problem.
“because we have been given no method on how to establish the “CSI” of anything”
I have given a clear method of establishing the CSI of a thing — especially a digitizable specification such as is found in DNA. To establish the complexity, you begin by adding up the number of bits of data — simple enough. You then subtract all bits of data that are superfluous — if the status of the bit doesn’t functionally diminish the specification, its bit can be subtracted. We now have a digital value for complexity. Simple enough?
Yes, but what counting nucleotides doesn't tell you is whether there is any useful information in a sequence. Any sequence of nucleotides will produce a protein sequence on transcription, translation and synthesis. A truly random sequence will have a stop codon appear on average every 21 triplet codons so you might include that in your calculation to detect non-functional DNA. But what if we skew the random generation to reduce the frequency of stop codes? I just don't see how you can tell whether a DNA sequence codes for a biologically active and useful protein without comparing to a known sequence or investigating the properties of a novel protein after having synthesized it. Predicting functionality of a theoretical sequence is still out of reach.
As to specification — this is a qualitative analysis. An object that has been specified was built from the specification. Protein, for instance, is built from the DNA. The specification has a component of “precision”; it must be “just right” to work. There is often flexibility in what “just right” means. Sometimes it's a carpenter's “just right” (+- 1/16") and other times it is a machinist's “just right” (1/10,000"). However, for something to be specified it must have a quality of at some point being “not right”.
Not sure your analogies are apt but there is no way of predicting this currently. Durston, Axe (and KF) seem to assume that because a protein has not turned up yet in any organism, it must be non-functional. This is the cardinal error in probability assumptions about the richness of the domain of as-yet-unknown proteins.
By this definition a gene is CSI. The gene has measurable amount of digital data, and a gene produces a protein which could be produced “not right”. Right? The question of whether RM+NS can produce new genes that create proteins is a question that must be addressed subsequent to defining CSI. I believe that the answer, however, is that it cannot.
The evidence suggests otherwise. Mutations (and other sources of genetic variability) happen. Selection happens.Alicia Renard
November 21, 2014 at 4:26 PM PDT
Me_Think:
Given that ID is based on ‘improbabilities’ of evolutionary process, isn’t it ironic that IDers are on a side which is far more improbable than Evolutionary + NS process?
No.Mung
November 21, 2014 at 4:25 PM PDT
wd400:
What part of CSI specifically excludes Darwinian and natural processes?
CSI does not and cannot exclude Darwinian and other non-natural processes.Mung
November 21, 2014 at 4:22 PM PDT
cantor, You're really getting desperate. Your evidence for that assertion? That we're both ID critics, so we must be the same person?keith s
November 21, 2014 at 3:58 PM PDT
268 keith s November 21, 2014 at 2:51 am You apparently didn’t notice that WJM was addressing Adapa, not me.
What I did notice is that Adapa appears to be your sock puppet. .cantor
November 21, 2014 at 3:47 PM PDT
Alicia Renard (277): "the only obvious point to emerge is that there is no clear concept of what CSI might be"
Actually it is my opinion that we cannot agree on CSI because the ID community (of which I am a member) has taken too large a leap when defining CSI. The result is that the Dembskian definition is circular. The question of how CSI is made must be separated from the definition of CSI. The step of proving that RM+NS must be a subsequent step, a subsequent case.
"because we have been given no method on how to establish the 'CSI' of anything"
I have given a clear method of establishing the CSI of a thing -- especially a digitizable specification such as is found in DNA. To establish the complexity, you begin by adding up the number of bits of data -- simple enough. You then subtract all bits of data that are superfluous -- if the status of the bit doesn't functionally diminish the specification, its bit can be subtracted. We now have a digital value for complexity. Simple enough?
As to specification -- this is a qualitative analysis. An object that has been specified was built from the specification. Protein, for instance, is built from the DNA. The specification has a component of "precision"; it must be "just right" to work. There is often flexibility in what "just right" means. Sometimes it's a carpenter's "just right" (+- 1/16") and other times it is a machinist's "just right" (1/10,000"). However, for something to be specified it must have a quality of at some point being "not right".
By this definition a gene is CSI. The gene has a measurable amount of digital data, and a gene produces a protein which could be produced "not right". Right? The question of whether RM+NS can produce new genes that create proteins is a question that must be addressed subsequent to defining CSI. I believe that the answer, however, is that it cannot.Moose Dr
November 21, 2014 at 9:27 AM PDT
268 keith s November 21, 2014 at 2:51 am After your expertise bluff...
. Too funny! A citation bluff to his own post. Yes, I heartily recommend open-minded readers to follow that link, especially posts 32, 40, 41, 70, and especially 162. The offer still stands. .cantor
November 21, 2014 at 7:59 AM PDT
Joe, that's a neat escape. In any case, that post is for VJT. I wouldn't expect you to understand.Me_Think
November 21, 2014 at 7:38 AM PDT
Me Think- You are trying to make the argument, not me. Make your case as opposed to just repeating yourself. That is if you can.Joe
November 21, 2014 at 7:32 AM PDT