Uncommon Descent Serving The Intelligent Design Community

Can we all agree on specified complexity?

Categories: Intelligent Design

Amid the fog of recent controversies, I can discern a hopeful sign: the key figures in the ongoing debate over specified complexity on Uncommon Descent are actually converging in their opinions. Allow me to explain why.

Winston Ewert’s helpful clarifications on CSI

In a recent post, ID proponent Winston Ewert agreed that Elizabeth Liddle had a valid point in her criticisms of the design inference, but then went on to say that she had misunderstood what the design inference was intended to do (emphases mine):

She has objected that specified complexity and the design inference do not give a method for calculating probabilities. She is correct, but the design inference was never intended to do that. It is not about how we calculate probabilities, but about the consequences of those probabilities. Liddle is complaining that the design inference isn’t something that it was never intended to be.

He also added:

…[T]he design inference is a conditional. It argues that we can infer design from the improbability of Darwinian mechanisms. It offers no argument that Darwinian mechanisms are in fact improbable. When proving a conditional, we are not concerned with whether or not the antecedent is true. We are interested in whether the consequent follows from the antecedent.

In another post, Winston Ewert summarized his thoughts on specified complexity:

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

Winston Ewert concluded that “the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable.”

To which I would respond: hear, hear! I completely agree.

What about Ewert’s claim that “CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable”? He is correct, if by “CSI and Specified complexity” he simply means the concepts denoted by those terms. If, however, we are talking about the computed probability of the bacterial flagellum having evolved via unguided processes, then of course this number can be used to support a design inference: if the probability in question is low enough, then the inference to an Intelligent Designer becomes a rational one. Ewert obviously agrees with me on this point, for he writes that “Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.”

In a recent post, I wrote that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).” Immediately afterwards, I added that in order to calculate the specified complexity of an object, we first require “the probability of producing the object in question via ‘Darwinian and other material mechanisms.'” I then added that “we compute that probability.” The word “compute” makes it quite clear that without that probability, we will be unable to infer that a given object was in fact designed. I concluded: “To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process” (italics added).

Imagine my surprise, then, when I discovered that some readers had been interpreting my claim that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold)” as if I were arguing for a design inference on the basis of some pre-specified numerical value for CSI! Nothing could be further from the truth. To be quite clear: I maintain that the inference that biological organisms (or structures, such as proteins) were designed is a retrospective one. We are justified in making this inference only after we have computed, on the basis of the best information available to us, that the probability of these organisms (or structures) emerging via unguided processes – in which I include both random changes and the non-random winnowing effect of natural selection – falls below a certain critical threshold of 1 in 2^500 (or roughly, 1 in 10^150). There. I cannot be clearer than that.
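For readers who want to check the arithmetic behind that threshold, here is a minimal sketch in Python (the probability plugged in at the end is a made-up illustrative number, not a calculation for any real biological system):

```python
from math import log10, log2

# The universal probability bound: 2^500 is roughly 10^150.5, which is why
# "1 in 2^500" and "roughly 1 in 10^150" are used interchangeably above.
threshold = 2 ** 500
print(f"2^500 is about 10^{log10(threshold):.1f}")

# Once a probability has been computed (by whatever means), the comparison
# itself is trivial: convert it to bits of improbability and check the bound.
p_unguided = 1e-200                 # hypothetical computed probability, for illustration only
bits = -log2(p_unguided)
print(f"{bits:.0f} bits of improbability; exceeds the 500-bit bound: {bits > 500}")
```

The hard work, as the paragraph above stresses, is in computing the probability in the first place; the comparison against the bound is the easy part.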

So I was heartened to read in a recent post by Barry Arrington that Keith S had recently endorsed a form of design inference, when he wrote:

To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don’t exclaim “Design!” after every 500 coin flips. The missing ingredient is the specification of the target T.

Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly.

That certainly sounds like a design inference to me.
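To make the quoted coin-flip example concrete, here is a small sketch (the sequence and the fair-coin hypothesis are taken straight from Keith S’s description; nothing else is assumed):

```python
from math import log2

n_flips = 500
target = "H" * 250 + "T" * 250            # the specification T: 250 heads then 250 tails

# Under the chance hypothesis H (a fair coin, fairly flipped), every particular
# 500-flip sequence has the same probability, about 3.1e-151.
p_any_sequence = 0.5 ** n_flips
bits = -log2(p_any_sequence)               # 500 bits of improbability

# Low probability alone is not enough; the inference also needs a match to the
# independently specified target T.
observed = "H" * 250 + "T" * 250           # the "fishy" outcome described in the quote
print(f"P(T|H) = 2^-{bits:.0f}; observed sequence matches T: {observed == target}")
```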

In a follow-up comment on Barry Arrington’s post, Keith S went on to point out:

…[I]n that example, I am not calculating CSI and then using it to determine that something fishy is going on. Rather, I have to determine that something fishy is going on first (that is, that P(T|H) is extremely low under the chance hypothesis) in order to attribute CSI to it.

To which I would respond: you’re quite right, Keith S. That’s what I’ve been saying and what Winston Ewert has been saying. It seems we all agree. We do have to calculate the probability of a system emerging via random and/or non-random unguided processes, before we impute a high level of CSI to the system and conclude that it was designed.

CSI vs. irreducible complexity: what’s the difference?

In a subsequent comment, Keith S wrote:

I think it’s instructive to compare irreducible complexity to CSI in this respect.

To argue that something is designed because it exhibits CSI is circular, because you have to know that it is designed before you can attribute CSI to it.

To argue that something is designed because it is irreducibly complex is not circular, because you can determine that it is IC (according to Behe’s definition) without first determining that it is designed.

The problem with the argument from IC is not that it’s circular — it’s that IC is not a barrier to evolution.

For the record: the following article by Casey Luskin over at Evolution News and Views sets forth Professor Mike Behe’s views on exaptation: while it cannot be absolutely ruled out, its occurrence is extremely improbable, even for modestly complex biological features. Professor Behe admits, however, that he cannot rigorously quantify his assertions, which are based on his professional experience as a biochemist. Fair enough.

The big difference between CSI and irreducible complexity, then, is not that the former is circular while the latter is not, but that CSI is quantifiable (for those systems where we can actually calculate the probability of their having emerged via unguided random and/or non-random processes), whereas irreducible complexity is not. That is what makes CSI so useful when arguing for design.
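For what it is worth, the quantification Dembski himself proposed in his 2005 paper “Specification: The Pattern That Signifies Intelligence” is, roughly, chi = -log2[10^120 · phi_S(T) · P(T|H)], with design inferred when chi exceeds 1. Here is a sketch of that comparison; the input numbers are illustrative only, not measurements of any actual system:

```python
from math import log2

def specified_complexity(p_t_given_h: float, phi_s: float) -> float:
    """Rough version of Dembski's 2005 measure: chi = -log2(10^120 * phi_S(T) * P(T|H)).

    10^120 is Seth Lloyd's bound on the number of bit operations in the
    observable universe's history; phi_S(T) counts the specificational
    resources (how many patterns at least as simple as T could be described)."""
    return -log2(1e120 * phi_s * p_t_given_h)

# Illustrative numbers only: a very improbable, simply specified outcome...
print(specified_complexity(p_t_given_h=1e-200, phi_s=1e10))   # about 233 bits: design inferred
# ...versus one that the chance hypothesis can comfortably cover.
print(specified_complexity(p_t_given_h=1e-20, phi_s=1e10))    # negative: chance not excluded
```

Either way, the P(T|H) term has to be supplied from outside; the formula only tells you what follows once you have it.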

Does Dr. Dembski contradict himself? I think not

Keith S claims to have uncovered a contradiction between the following statement by leading Intelligent Design advocate Dr. William Dembski:

Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.

and this statement of his:

It is CSI that Michael Behe has uncovered with his irreducibly complex biochemical machines. It is CSI that for cosmologists underlies the fine-tuning of the universe and that the various anthropic principles attempt to understand.

I don’t see any contradiction at all here. In the first quote, Dr. Dembski is cautiously pointing out that the inference that the bacterial flagellum was designed hinges on probability calculations, which we do not know for certain to be correct. In the second quote, he is expressing his belief, based on his reading of the evidence currently available, that these calculations are in fact correct, and that Nature does in fact exhibit design.

Dembski and the Law of Conservation of Information

Keith S professes to be deeply puzzled by Dr. Dembski’s Law of Conservation of Information (LCI), which he finds “murky.” He is especially mystified by the statement that neither chance nor law can increase information.

I’d like to explain LCI to Keith S in a single sentence. As I see it, its central insight is very simple: that when all factors are taken into consideration, the probability of an event’s occurrence does not change over the course of time, until it actually occurs. In other words, if the emergence of life in our universe was a fantastically improbable event at the time of the Big Bang, then it was also a fantastically improbable event 3.8 billion years ago, immediately prior to its emergence on Earth. And if it turns out that the emergence of life on Earth 3.8 billion years ago was a highly probable event, then we should say that, at the time of the Big Bang, the subsequent emergence of life in our universe was already highly probable, too. Chance doesn’t change probabilities over the course of time; neither does law. Chance and law simply provide opportunities for the probabilities to be played out.
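One way to make that intuition concrete is the law of total probability: if a process unfolds through intermediate states according to fixed chances, the probability assigned to a final outcome at the outset is just the weighted average of the probabilities assigned to it after the intermediate step. A toy sketch (my own illustration, not Dembski’s formalism; all numbers are invented):

```python
# Probabilities of the intermediate states at an earlier time.
intermediate = {"state_A": 0.7, "state_B": 0.3}

# Probability of the final outcome given each intermediate state.
p_outcome_given = {"state_A": 0.001, "state_B": 0.01}

# The probability assessed at the start equals the expectation over what the
# later assessments will be: chance and law redistribute probability along
# the way, but they do not manufacture it.
p_at_start = sum(intermediate[s] * p_outcome_given[s] for s in intermediate)
print(p_at_start)   # 0.0037, the same number whether computed now or later
```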

Someone might argue that we can think of events in human history which seemed highly improbable at time t, but which would have seemed much more probable at a later time t + 1. (Hitler’s rise to power in Germany would have seemed very unlikely in January 1923, but very likely in January 1933.) But this objection misses the point. Leaving aside the point that humans are free agents, a defender of LCI could reply that when all factors are taken into consideration, events that might seem improbable at an earlier time can in fact be demonstrated to have a high probability of occurring subsequently.

Making inferences based on what you currently know: what’s the problem with that?

Certain critics of Intelligent Design are apt to fault ID proponents for making design inferences based on what scientists currently know. But I see no problem with that, as long as ID proponents declare that they would be prepared to cheerfully revise their opinions, should new evidence come to light which overturns currently accepted beliefs.

I have long argued that Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, whose argument I summarized in my recent post, Barriers to macroevolution: what the proteins say, demonstrates beyond reasonable doubt that unguided mechanisms could not have given rise to the protein folds that we find in living creatures’ body proteins, in the space of just four billion years. I have also pointed out that Dr. Eugene Koonin’s peer-reviewed article, The Cosmological Model of Eternal Inflation and the Transition from Chance to Biological Evolution in the History of Life (Biology Direct 2 (2007): 15, doi:10.1186/1745-6150-2-15) makes a very strong case that the probability of a living thing capable of undergoing Darwinian evolution – or what Dr. Koonin refers to as a coupled translation-replication system – emerging in our observable universe during the course of its history is astronomically low: 1 in 10^1,018 is Dr. Koonin’s estimate, using a “toy model” that makes deliberately optimistic assumptions. Finally, I have argued that Dr. Robin Collins’ essay, The Teleological Argument, rules out the infinite multiverse hypothesis which Dr. Koonin proposes in order to explain the unlikely emergence of life in our universe: as Dr. Collins argues, a multiverse would need to be specially fine-tuned in order to produce even one universe like our own. If Dr. Axe’s and Dr. Koonin’s estimates are correct, and if we cannot fall back on the hypothesis of a multiverse in order to shorten the odds against life emerging, then the only rational inference that we can make, based on what we currently know, is that the first living thing was designed, and that the protein folds we find in living creatures were also designed.

Now, Keith S might object that these estimates could be wrong – and indeed, they could. For that matter, the currently accepted age of the universe (13.798 billion years) could be totally wrong too, but I don’t lose any sleep over that fact. In everyday life, we make decisions based on what we currently know. If Keith S wants to argue that one can reasonably doubt the inference that living things were designed, then he needs to explain why the estimates I’ve cited above could be mistaken – and by a very large margin, at that.

Recently, Keith S has mentioned a new book by Dr. Andreas Wagner, titled Arrival of the Fittest: Solving Evolution’s Greatest Puzzle. I haven’t read the book yet, but let me say this: if the book makes a scientifically plausible case, using quantitative estimates, that life in all its diversity could have emerged on Earth over the space of just 3.8 billion years, then I will cheerfully change my mind and admit I was wrong in maintaining that it had to have been designed. As John Maynard Keynes famously remarked, “When the facts change, I change my mind. What do you do, sir?”

For that matter, I try to keep an open mind about the recent discovery of soft tissue in dinosaur bones (see here and here). Personally, I think it’s a very odd finding, which is hard to square with the scientifically accepted view that these bones are millions of years old, but at the present time, I think the preponderance of geological and astronomical arguments in favor of an old Earth is so strong that this anomaly, taken alone, would be insufficient to overthrow my belief in an old cosmos. Still, I could be wrong. Science does not offer absolute certitude, and it has never claimed to.

Conclusion

To sum up: statements about the CSI of a system are retrospective, and should be made only after we have independently calculated the probability of a system emerging via unguided (random or non-random) processes, based on what we currently know. After these calculations have been performed, one may legitimately infer that the system was designed – even while admitting that should subsequent evidence come to light that would force a drastic revision of the probability calculations, one would have to revise one’s views on whether that system was designed.

Are we all on the same page now?

Comments
Moose Dr. "It is, therefore, not a requirement that ID show development micro-step by micro-step as is required by Darwinism. In any case, we have a very well documented macro-step by macro-step log of the intelligent development of human designs. " But that is the crux of the issue. The ID proposition is that the intelligent agent can't be limited by human imagination of intelligence. But all of the analogies and comparisons it makes are to human intelligence and technological advance. Is the intelligent agent limited to human type intelligence? At least this hypothesis would be testable. But I have never heard an ID theory about this. And I don't think I will. This is why hypothesizing about the nature and limitations of the designer is critical. It doesn't mean that you are locked into that hypothesis. You can modify the hypothesis as evidence dictates. But ID is extremely hesitant, for whatever reason, to step out on the limb. But it is only by stepping out on the limb that it will be taken seriously as a science.centrestream
November 18, 2014 at 05:25 PM PDT
Tamara Knight (November 18, 2014 at 4:12 am): Your argument boils down to a rephrasing of “We can’t see how it works, therefore ID did it”.
... and your argument boils down to a rephrasing of "I can’t see how it works, but I believe the blind watchmaker did it anyway".cantor
November 18, 2014 at 05:12 PM PDT
centrestream (82)"I have never seen any ID proponent describe a step by step process that has produced (not, can produce) the same level of complexity." Centrestream, you seem unclear on the concept. The neo-Darwinian model is a "step-by-step" model. It is reasonable, therefore, to show a "step-by-step" pathway to significant CSI. ID is not a step-by-step model. It is, therefore, not a requirement that ID show development micro-step by micro-step as is required by Darwinism. In any case, we have a very well documented macro-step by macro-step log of the intelligent development of human designs. It is called the USPTO. The patents show, with high resolution, the process of a working intelligent design system. If Darwinists could get anywhere close to that resolution in their analysis, they would have made HUGE leaps forward.Moose Dr
November 18, 2014 at 05:01 PM PDT
This gets back to the biggest weakness of the ID proposition (which, for some reason, ID proponents think is a strength), the absolute refusal to propose the nature of the intelligent agent (OK, why don’t we just call it god) and the mechanism by which it functions. Answer me this: who designed Stonehenge?OldArmy94
November 18, 2014 at 03:50 PM PDT
Silver Asiatic, Do you understand what P(T|H) is and how it fits into Dembski's CSI equation?keith s
November 18, 2014 at 03:45 PM PDT
"can't understand this"Silver Asiatic
November 18, 2014 at 03:39 PM PDT
I suppose we could agree on Specified Complexity but that's not what ID arguments use. We're missing something ... Ahh, there it is. Notice the agreement is not on CS, but rather on CSI. A minor omission?
Winston Ewert concluded that “the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable.” To which I would respond: hear, hear! I completely agree.
I can' understand this and I disagree. We're talking about CSI and the most important term that (rendering the others almost redundant) is the "I". So, to try the statement again in a slightly different manner: “the only way to establish that [something] exhibits [Information] is to first show that it was improbable.” No, obviously not. Information has certain characteristics that can be observed without knowing anything about the probability of its origins. Information communicates. We observe this Function of information. A communication or informational network has observable characteristics. Symbol, Coding, Sender, Medium, Translation, Receiver. Again, this has nothing to do with probability at all. We can observe information and we see it in a high-degree active in the bacterial flagellum (information communicated and organized through a variety of parts exhibiting a complex function). Probability studies are secondary.Silver Asiatic
November 18, 2014 at 03:37 PM PDT
MD: Please read my remarks at 7 above. You will notice that I have always emphasised functionally specific complex organisation and associated information, FSCO/I, which is what is directly relevant to the world of life, and is pretty directly observable, starting with text and technology. When objectors can bring themselves to acknowledge that observable phenomenon ant the linked constraint on possible configurations imposed by interactions required to produce functionality, then we can begin to analyse soundly. Orgel actually spoke in the direct context of biofunction, and Wicken used the term, as well as identifying that wiring diagram organisation is informational. Specified Complexity, in this context, is informational, and Dembski's model is a superset based on abstracting specification to a generalised, independently and "simply" describable zone in a relevant configuration space, W. Unfortunately, such is fairly abstract and mathematical, in an era where the abstract is often misunderstood, twisted into pretzels, despised, dismissed. That is probably why GP has focussed down on a subset of FSCO/I, digitally coded, functionally specific coded information, dFSCI, such as we find in text, computer programs and D/RNA. But even this is stoutly resisted. I draw the conclusion that the problem is not CSI, or FSCO/I or dFSCI, but with where they point which is where many will not go. On observation, some objectors have been willing to burn down first principles of right reason and first, self-evident truths. Inductive conclusions and empirically grounded discussions will have no effect on such, until and unless -- and here I have the Marxists in mind -- their system crashes and burns. As to the the definition of CSI begs the question assertion, I say, fallacious. We have concrete cases, showing what CSI is about. The abstracted superset is reasonable in that context. As for, it cannot be empirically tested, that is false. Take the design inference process in hand -- notice, how many objectors simply will not deal with design thought as it is, but persistently erect strawman caricatures -- and examine an aspect of an object or phenomenon. If it shows low contingency regularity under similar initial conditions, then the reasonable explanation is mechanical necessity acting by law. Where there is high contingency under similar initial conditions, there are two known alternatives. As default, chance acting through stochastic contingency. As is well known from statistical studies, reasonable samples from a population of possibilities tend to reflect its bulk [the legitimate point behind the layman's "law of averages"], but is unlikely to capture rare clusters such as the far-tails of a classic bell distribution. As samples scale up likelihood of picking up such rises. Indeed, this is the basic point behind Fisher's statistical testing and the 5% or 1% far tails. Likewise, statistical process control and manufacturing quality management. (And yes, I recall astonishing ding-dong rhetorical battles with objectors who found every hyperskeptical device to try to dismiss this commonplace. Sad.) The design inference is linked to that point, as well as to the stat thermo-d concept of macroscopically identifiable clusters of microstates that are termed macrostates (and recall, that was the road I came to design theory from). The relative statistical weight of states tends to drive observability under reasonable chance driven contingency hyps. 
Indeed, that is the statistical underpinning of the second law of thermodynamics. But, again, I recall the ding-dong rhetorical battles when Professor Granville Sewell said the otherwise obvious thing that we do not expect to observe the stochastically utterly implausible when a system is opened up, save if something is crossing the boundary that makes it not implausible. It seems to me there is a policy of zero concessions to IDiots, that reminds me uncomfortably of Plato's warning on the implications of radical relativism and nihilism that so often flow from evolutionary materialist ideology, then and now: "hence arise factions." Sorry, but fair comment. Now, there are things such as FSCO/I, which are highly contingent but are stochastically implausible. Moreover, on the evidence of trillions of actually observed cases, such consistently results from intelligently directed configuration. Where this post is adding another case in point. So, on induction we are entitled to infer the best current explanation to be design. Not by begging questions or imposing circular definitions or the like, but by inductive reasoning. Where, "current" implues alternatives are considered, are classified and are addressed on the merits. And, that should future evidence say otherwise, the matter will be changed to reflect that. As in, similar to Newton in Query 31 in Opticks, which is probably the root source on the Grade School "Scientific Method" summary we are often taught:
As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis [--> inductive empirical analysis], ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For [--> speculative] Hypotheses are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover'd, and establish'd as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations
Yes, Sci Methods, 101. Therefore, in looking at discussions as to how CSI is defined and the like, or how design inferences are made, that should be borne in mind. And, particularly, should FSCO/I be observed to reasonably reliably or just observably come from blind chance and mechanical necessity, then that would decisively undermine the design inference on FSCO/I. Not, that that is plausibly likely to happen. We are talking here of sparse blind search for needles in very large haystacks. For 1,000 bits, the search potential of 10^80 atoms for 10^17 s at 10^14 searches of configs for 1,000 coins per second each, stands as one straw picked to a cubical haystack that would swallow up the 90 bn LY across observed cosmos. Under these circumstances, we have reason only to expect to catch the bulk. Now, perhaps the best thing is to start from Dembski's actual definition of CSI in NFL, as I have cited, including in the infographic I have been repeatedly posting for months which obviously is being studiously ignored by too many objectors:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ”
That should be quite plain enough, and has been highlighted and brought to attention enough times across years that there is no excuse for twisting it into strawman caricatures. He says, on the method of inferring design:
We know from experience that intelligent agents build intricate machines that need all their parts to function [[--> i.e. he is specifically discussing "irreducibly complex" objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function; which is a subset of FSCO/I], things like mousetraps and motors. And we know how they do it -- by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence . . . . When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question. [[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]
Meyer, who was denigrated by an objector above, in replying to a critic of his Signature in the Cell, noted:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form) [--> notice the terms he uses here]. Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . . The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) or even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details . . . ]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . . For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . . 
[[W]e now have a wealth of experience showing that what I call specified or functional information(especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . [[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to "natural[[istic] causes"] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]
Such have long been on record and have repeatedly been brought to the attention of objectors. I find little to show those who have insisted on setting up and knocking over strawman caricatures in any positive light. Not, after so much time and so many corrections and sources brushed aside to push a strawman tactic agenda. It is time for a fresh start on more reasonable grounds. Now, I better get back to other matters. KFkairosfocus
November 18, 2014 at 03:03 PM PDT
There is all this talk about whether or not natural processes can produce 500 bits of information. According to ID proponents, if science cannot demonstrate a natural step by step process that produced (not, can produce) this level of complexity, it fails and an intelligent agent (AKA the god of choice) is the most likely explanation. Even if we ignore the fallacious nature of this argument, I have never seen any ID proponent describe a step by step process that has produced (not, can produce) the same level of complexity. Therefore, are we to conclude that an intelligent agent (name your god here) is also not a viable explanation? This gets back to the biggest weakness of the ID proposition (which, for some reason, ID proponents think is a strength), the absolute refusal to propose the nature of the intelligent agent (OK, why don’t we just call it god) and the mechanism by which it functions.centrestream
November 18, 2014 at 02:57 PM PDT
keith s, that's right! As long as there are people who have a rebellious heart towards God there will always be those who prefer the *ABG hypothesis of Darwinism to anything that smacks of the slightest hint of Design. *Anything But Godbornagain77
November 18, 2014 at 01:50 PM PDT
OldArmy94,
After years and years of crafting and spinning so many fables, they [Darwinists] realize that the mound of fabrication is collapsing upon itself.
The Ever-Imminent Collapse of Evolution The Imminent Demise of Evolution: The Longest Running Falsehood in Creationismkeith s
November 18, 2014 at 01:38 PM PDT
keiths , And why does your opinion that 'The EN&V review of Arrival of the Fittest is remarkably ineffective' not surprise me in the least? I hate to break this to you keith s, but you are far from the most impartial judge in any matter dealing with ID. Shoot, Anthony Flew himself was far more impartial than any of you 'new' atheists are! "I now believe that the universe was brought into existence by an infinite intelligence. I believe that the universe's intricate laws manifest what scientists have called the Mind of God. I believe that life and reproduction originate in a divine Source. Why do I believe this, given that I expounded and defended atheism for more than a half century? The short answer is this: this is the world picture, as I see it, that has emerged from modern science." Anthony Flew - world's leading intellectual atheist for most of his adult life until a few years shortly before his death The Case for a Creator - Lee Strobel (Nov. 25, 2012) - video http://www.saddleback.com/mc/m/ee32d/bornagain77
November 18, 2014 at 01:26 PM PDT
Multiverse. That single word tells you all you need to know about the frustration that Darwinists feel regarding their inability to build their body of work on a firm foundation. After years and years of crafting and spinning so many fables, they realize that the mound of fabrication is collapsing upon itself. Thus, whilst peppering us with machine gun rhetoric in an effort to divert our attention, they are quietly retreating to the promised land of infinite universes. Glory be to Darwin, we shall be free at last.OldArmy94
November 18, 2014 at 01:21 PM PDT
"The point being the current “CSI” of the hands tells you nothing about whether they were designed or not. You have to take into account the history and rules of the game." Yes! The amount of CSI says nothing about how the CSI got there. The Universal Probability Bound allows you to realistically rule out the chance hypothesis. However, CSI must be defined for what it is, not how it got there. As such, no hypothesis (except true random via the UPB) is rejected a priori. Now, we IDers contend (hypothesize) that the mechanism of RM+NS is not capable of producing CSI, let alone the buckets of CSI that is at the core of all life forms. This is our contention, but it cannot be embedded in the definition of CSI.Moose Dr
November 18, 2014 at 01:06 PM PDT
"You’re contradicting yourself. You just told us that it [complexity] needs to be quantified:" Sorry, you are correct. I provided a qualitative definition of specificity, not complexity. Complexity, however, is very easy to quantify. If we have a sentence, the CAD of an auto part, or DNA that defines the amino sequences of a protein, we have digital data. Quantifying the complexity is simply a matter of adding up the number of bits that make up this definition. This gets a bit more complex if we want to analyse whether any particular bit is relevant to the specification. (We note, for instance, that there are more than 1 DNA sequence designating a specific amino acid.) This, however, is merely an exercise in measurement. We can also conclude that for many particular aminos in a protein can be substituted with a subset of other aminos without loosing function -- though the range of available substitution varied greatly, and in many cases there seems to be only one pattern that can work. (We see this exemplified, for instance, in ultra-conserved sequences.) We can get lost in nit-picking about the exact count of the number of bits that can be dismissed in any particular sequence. However, it becomes obvious at some point that many of the bits are very much required to produce the specification. As the number of bits required very quickly blows the doors off of the Universal Probability Bound, this argument is rather mute, don't you think?Moose Dr
November 18, 2014 at 12:59 PM PDT
Moose Dr “Sure, if you can find some way of quantifying the complexity. That’s a big if.” The complexity does not need to be quantified, it needs to be qualified. According to ID it needs to be quantified. The whole purpose of "CSI" was that you could roll up on some unknown object, objectively measure its "CSI", then conclude the object was consciously designed if the CSI exceeded 500 bits. The huge problem with this is that it doesn't take into account the history or processes that may have formed the object. To use Barry's favorite poker example, say you walked into a room and you see four players each holding a royal straight flush. If you assume those hands were all dealt exactly that way you'd assume conscious manipulation due to a tiny probability. But if the folks were playing draw poker the probabilities would be different and be slightly higher. If they were playing a version of draw poker that allowed unlimited redraws the tiny probabilities would turn into almost certainties. The point being the current "CSI" of the hands tells you nothing about whether they were designed or not. You have to take into account the history and rules of the game. Dembski realized he made a big blunder with his one and only attempt at calculating the CSI for a protein so he stopped using the argument. The rest of ID didn't get the message however.Adapa
November 18, 2014 at 12:49 PM PDT
Moose Dr,
The complexity does not need to be quantified, it needs to be qualified.
You're contradicting yourself. You just told us that it needs to be quantified:
Complex: Quantitatively defined as the number of bits of information.
keith s
November 18, 2014 at 12:33 PM PDT
bornagain77 #32, The EN&V review of Arrival of the Fittest is remarkably ineffective. There's no byline. I wonder who wrote it.keith s
November 18, 2014 at 12:31 PM PDT
"Sure, if you can find some way of quantifying the complexity. That’s a big if." The complexity does not need to be quantified, it needs to be qualified. I propose that it has the quality of complexity if: 1 - it performs a function. > In an automobile, parts perform frequently critical functions. These parts are specified in the CAD program that defines them. > In human language, the words produce (relatively consistent) mental constructs in the mind of the hearer. > In biology, proteins perform specific functions, such as acting as a "stator" in a flagellum. 2 - at many points, small changes would cause the information to cease to perform its function (ie, a mod in the DNA would cause a gene to produce a dysfunctional protein. > Note on "at many points": Not necessarily all points, though for each of these points the total "complexity" of the information may be reducible.Moose Dr
November 18, 2014 at 12:30 PM PDT
Moose Dr,
If CSI is defined as “not able to be produced by natural means” then your contention is absolutely correct. The definition is circular.
Yes. And Dembski's CSI is in fact defined that way. It contains a P(T|H) term which represents the probability that the target T arose by "Darwinian and other material mechanisms", denoted by H.
However, if we look at the term “Complex Specified Information” and seek a natural definition for it (not the overloaded definition that has been used here) we see a definition that has nothing to do with whether RM+NS is capable of producing it. Information: That ethereal sequence thing that has no dimension nor location, but that can be “stored” in a variety of mediums in a lossless way. Complex: Quantitatively defined as the number of bits of information.
How will you determine the "number of bits of information" for something like the flagellum?
If we, the ID community, can get to this lighter definition of CSI, then people like you may be able to participate in a serious discussion of whether RM+NS is capable of producing it. Would you not agree?
Sure, if you can find some way of quantifying the complexity. That's a big if.keith s
November 18, 2014 at 12:19 PM PDT
Keith S, I have been watching your claim that the definition of CSI is circular. It took me a while to truly understand you. When I did, I found it very clear that you are correct. If CSI is defined as "not able to be produced by natural means" then your contention is absolutely correct. The definition is circular. However, if we look at the term "Complex Specified Information" and seek a natural definition for it (not the overloaded definition that has been used here) we see a definition that has nothing to do with whether RM+NS is capable of producing it. Information: That ethereal sequence thing that has no dimension nor location, but that can be "stored" in a variety of mediums in a lossless way. Complex: Quantitatively defined as the number of bits of information. Specified: It defines something, whether it be a sequence of words that has meaning to the human hearer, or whether it defines a sequence of amino acids that produces a significantly functional protein. (The definition of "significant", in this case is a little loose, but we have certainly seen the kind of functionality that proteins and groups of interacting proteins can produce.) Complex (as above), Specified (as above) information (as above) is a definition that is not contingent on how the information came to be. If we, the ID community, can get to this lighter definition of CSI, then people like you may be able to participate in a serious discussion of whether RM+NS is capable of producing it. Would you not agree?Moose Dr
November 18, 2014 at 12:08 PM PDT
keith s, one molecular machine, as far as Behe and Dembski are concerned, would falsify ID, but theistic evolutionists would still have a beef with atheistic evolutionists,,, It's Easier to Falsify Intelligent Design than Darwinian Evolution - Michael Behe, PhD https://www.youtube.com/watch?v=_T1v_VLueGk&list=UUV4Zy3ry9DrDCdxwyAxXs0gbornagain77
November 18, 2014 at 12:02 PM PDT
KF #59, you claim that there is a history of long-term abusive trolling at UD, presumably by ID opponents. What I have repeatedly seen is yourself or Barry declare an ID opponent's behaviour trollish simply because they are persistent in their arguments. And then the person is banned. Persistent isn't the same as trollish. Yet when you see someone like Joe, who's abusive trollish behaviour is there for all to see, and who's abusive behaviour is self evident and beyond defence, he is permitted to continue commenting. The reason for this is obvious. He supports ID. If your cause is so thin on supporters that it must tolerate this type of behaviour, then your cause is already lost. The reason that I have been persistent on this subject is that nobody has been able to provide a rational explanation as to why Joe's abusiveness is tolerated by UD, while ID opponents who remain civil are banned. Nobody is arguing that Barry and yourself don't have the right to ban anyone you want, for any reason you want. It is your blog. But claiming that you encourage open and civil debate while having a different set of rules for those who disagree with you is simply dishonest. Do better.centrestream
November 18, 2014 at 12:00 PM PDT
Moose Dr #39,
Question, if it could be demonstrated that natural processes are capable of producing CSI, would it change the definition of CSI?
Natural processes are incapable of producing CSI by the very definition of CSI.
I think it would not. Complexity remains to be measured by number of bits of data.
That is Dembski’s bait-and-switch. He doesn’t actually measure complexity. He measures improbability, but calls it complexity.
I contend that CSI cannot reasonably be generated by random means.
By definition.
Please, people, does CSI cease to exist if it can be demonstrated that RM+NS is capable of making it? If so, why?
More properly, anything that RM+NS can produce never exhibited CSI to begin with. By definition.keith s
November 18, 2014 at 11:57 AM PDT
Adapa: "If you want to see vulgar just read some of Joe’s posts at ATBC or even at Joe’s own blog."
Joe: "Joe Felsenstein, Proud to be an Ignorant Fatass"
Joe: "RichardTHughes, Proud to be an Ignorant Asshole"
Intelligent Reasoning is now my favourite blog! :-DJWTruthInLove
November 18, 2014 at 11:42 AM PDT
Kairosfocus (59), "I am well aware that this can become a pull off track on a red herring chase led away to strawmen soaked in ad hominems..." Slow down a minute KF. I am finding this red herring laden discussion to be informative. Please read and respond to my comment (39 above). I believe that we are getting nowhere because we have bitten off more than we should when defining CSI. If accepting the definition of CSI forces an a priori rejection of the mechanism we believe explains it, then CSI gets rejected. This lack of clarity on our (the IDers) part as to what CSI is produces red herring laden ... Again, does CSI cease to exist if a natural mechanism for generating it is discovered? Does a gene with 500 bits of information which produces a truly functional protein cease to have CSI if NDE were proven to be able to produce it?Moose Dr
November 18, 2014 at 11:41 AM PDT
Berlinski did not claim to have a relationship with God, he merely observed that "these are precisely the claims that theologians have always made as well". Thus, as far as I know, he is still agnostic, although maybe not in 'common usage' way of being agnostic..bornagain77
November 18, 2014 at 11:38 AM PDT
Adapa (57), "If you want to see vulgar just read some of Joe’s posts at ..." 'Only goes to prove that in the context of UD, Joe is being restrained. The fact that he is capable of vulgarity should not have him banned, rather the fact that he is capable of restraint should keep him on.Moose Dr
November 18, 2014 at 11:35 AM PDT
Berlinski (from BA77) "human beings are capable by an exercise of their devotional abilities to come to some understanding of the deity; and the deity, although beyond space and time, is capable of interacting with material objects." I thought Berlinski was agnostic?Moose Dr
November 18, 2014 at 11:32 AM PDT
PS: Before I go back to the crises of the day, let me re-emphasise by clipping:
if there really were the sort of solid response on accounting per observed evidence for the tree of life from OOl to the utmost twigs by blind chance and necessity, there is a two years standing open invitation to host it, and it is patent that such would devastate the design inference based perspective on the world of life, and indeed BA has openly said that the production of CSI by such in a solid case would lead to his shutting down UD. So the challenge is on the table, show the cards to back your bets pardnuh. Nothing else really counts — save for patent continued inability to back your bets.
I can be contacted via my always linked, and have been waiting for nigh on two years two months so far . . . KFkairosfocus
November 18, 2014 at 11:30 AM PDT