Intelligent Design

Has Specified Complexity Changed?

When I wrote my recent post on the alleged circularity of specified complexity, I did so in a deliberately provocative way. I wrote it emphasizing my agreement with keith s, in an attempt to shake things up and make people think. I felt that both sides of the debate here on Uncommon Descent were talking past each other, and I wanted people to re-evaluate their understanding of specified complexity. It seems many people found the result confusing. I’m sorry about that.

My attempt was prompted by this portion from a recent post:

We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

This section seems to be saying that we can establish astronomically low probability under chance hypotheses by appealing to specified complexity. As I attempted to elaborate in my last post, that would be circular reasoning. Now, I think VJTorley is clearly intelligent enough not to make such a blatantly circular argument. But I can also see why ID critics would see this as circular.

I took this provocative and odd approach because I have made this point repeatedly, and none of my past attempts appear to have actually worked. When Liddle objected to the design inference, I made exactly the same point. As I wrote there:

She has objected that specified complexity and the design inference do not give a method for calculating probabilities. She is correct, but the design inference was never intended to do that. It is not about how we calculate probabilities, but about the consequences of those probabilities. Liddle is complaining that the design inference isn’t something that it was never intended to be.

I wrote the same idea in previous posts on this blog:

Dembski’s development of specified complexity depends on having established that the probability of structures like the bacterial flagellum is absurdly low under Darwinian mechanisms. Specified complexity provides the justification for rejecting Darwinian evolution on the basis of the absurdly low probability. It does nothing to help establish the low probability. Anyone arguing that Darwinian evolution has a low probability of success because of CSI has put the cart before the horse. You have to show that the probability of the bacterial flagellum is low before applying CSI to show that Darwinism is a bad explanation.

Going further back I wrote:

Remember, CSI is always computed in the context of a mechanism. Specified Complexity is nothing more than a more rigorous form of the familiar probability arguments. If you try to measure the specified complexity of arbitrary artefacts you will run into trouble because you are trying to use specified complexity for something it was not designed to be. Specified complexity was only intended to provide rigour to probability arguments. Anything beyond that is not specified complexity. It might be useful in its own right, but it is not specified complexity.
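To make the dependence explicit: in Dembski’s 2005 formulation, specified complexity is computed from a probability P(T|H) supplied by a chance hypothesis H, so the probability is an input to the calculation, never an output of it. Here is a minimal sketch; the function name is mine, and the specification resources φ_S(T) and replicational-resources bound (10^120) are simplified for illustration:

```python
import math

def specified_complexity(p_chance: float, phi_s: float = 1.0,
                         replicational_resources: float = 1e120) -> float:
    """Dembski-style specified complexity:
    chi = -log2(replicational_resources * phi_s * P(T|H)).
    The chance-hypothesis probability p_chance is an *input*:
    it must be established before chi can be computed at all."""
    return -math.log2(replicational_resources * phi_s * p_chance)

# A probability must already be in hand before CSI says anything:
chi = specified_complexity(p_chance=1e-150)
print(chi > 0)  # positive (CSI present) only because p_chance was low
```

Note that there is no way to run this calculation in reverse to discover P(T|H); that is precisely why inferring the low probability from the CSI value would be circular.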

So despite the portrayal in some corners, this isn’t a new admission or concession on my part. I’ve been trying to make this point repeatedly. The fact is, I’ve grown weary of trying to convey it.

Some are inclined to suggest that I’m incorrect, and that what I’ve stated isn’t Dembski’s argument. Some suggest that Dembski has attempted to silently change what he argued because his original version did not actually work. None of that is true. As far back as 1999, Dembski wrote:

Does nature exhibit actual specified complexity? This is the million dollar question. Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out. (Metanexus: Explaining Specified Complexity)

Dembski doesn’t say that life is improbable because it exhibits specified complexity. Rather, he says that life is improbable because of Behe’s argument from irreducible complexity, and that it exhibits specified complexity as a consequence of that improbability. He doesn’t regard specified complexity as an established fact. Rather, he explicitly states that the “jury is still out.” The specified complexity argument is explicitly contingent on the irreducible complexity argument. In other words, this has been the intended interpretation of Dembski’s work as far back as 1999.

I hope that some light has been shed on the issue of specified complexity.

25 Replies to “Has Specified Complexity Changed?”

  1. 1
    phoodoo says:

    Winston,

    Where is the circle in this statement?:

    “We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).”

    This is simply saying, if an object has a high degree of CSI, then it has a low probability of being caused by something unintelligent.

    Since we have no verified examples of unintelligence creating objects with high specified complexity, there is no circle in this point at all. I think your argument just collapses from here.

  2. 2
    keith s says:

    Winston Ewert:

    Some are inclined to suggest that I’m incorrect, and that what I’ve stated isn’t Dembski’s argument. Some suggest that Dembski has attempted to silently change what he argued because his original version did not actually work. None of that is true. As far back as 1999, Dembski wrote:

    Does nature exhibit actual specified complexity? This is the million dollar question. Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out. (Metanexus: Explaining Specified Complexity)

    Winston,

    Don’t forget that Dembski is notorious for contradicting himself. His writings resemble the Bible in that respect. 🙂

    You found a passage that supports your claim, but there are many others that contradict it.

    For example, Dembski contradicts himself by writing:

    CSI is what all the fuss over information has been about in recent years, not just in biology but in science generally.

    It is CSI that for Manfred Eigen constitutes the great mystery of life’s origin, and one he hopes eventually to unravel in terms of algorithms and natural laws. It is CSI that Michael Behe has uncovered with his irreducibly complex biochemical machines. It is CSI that for cosmologists underlies the fine-tuning of the universe and that the various anthropic principles attempt to understand. It is CSI that David Bohm’s quantum potentials are extracting when they scour the microworld for what Bohm calls “active information.” It is CSI that enables Maxwell’s demon to outsmart a thermodynamic system tending toward thermal equilibrium… How CSI gets from an organism’s environment into an organism’s genome is one of the long-standing questions addressed by the Santa Fe Institute.

    Intelligent Design, p. 159

  3. 3
    Mung says:

    Thanks Winston!

  4. 4
    ppolish says:

    Winston, Dr Dembski refers to “specified complexity” once in his new book – in a note on page 60 citing a paper written by you, him, and Marks in the book “Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft”.

    Heck, I’m interested to learn if YOU think Specified Complexity has changed:)

  5. 5
    Mung says:

    keiths:

    Don’t forget that Dembski is notorious for contradicting himself. His writings resemble the Bible in that respect.

    troll

  6. 6
    bornagain77 says:

    Alleged Contradictions in the Gospels – Dr. Timothy McGrew – video
    https://vimeo.com/59940602

  7. 7
    Mung says:

    don’t feed the trolls

  8. 8
    bornagain77 says:

    as to contradictions, Neo-Darwinism is riddled through and through with them

    Who would have thought that it would be biologists that came up with the first Theory of Everything?
    Biological divergence? Evolution. Biological convergence? Evolution. Gradual variation? Evolution. Sudden variation? Evolution. Stasis? Evolution. Junk DNA? Evolution. No Junk DNA? Evolution. Tree of life? Evolution. No tree of life? Evolution. Common genes? Evolution. ORFan genes? Evolution. Cell with little more than a jelly-like protoplasm? Evolution. Cell filled with countless, highly-specified nano-machines directed by a software code? Evolution. More hardy, more procreative organisms? Evolution. Less hardy, less procreative organisms? Evolution.
    – Evolution explains everything. –
    William J Murray

  9. 9
    Andre says:

    Keith S

    You seem to like taking cheap shots, so when will you show us how unguided processes created a guided process to prevent unguided processes from happening, Keith? I’m still waiting for you, and the OP on TSZ just did not resolve the matter.

  10. 10
    Box says:

    Winston, who is confused about what?

    Winston Ewert: My attempt was prompted by this portion from a recent post:
    “We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold)”.

    Now this is VJTorley quoting Keith S; see the OP in this thread.

    Winston Ewert: This section seems to be saying that we can establish astronomically low probability under chance hypotheses by appealing to specified complexity.

    Don’t be surprised, Winston; misrepresenting ID arguments is Keith S’s thing.

    Winston Ewert: As I attempted to elaborate in my last post, that would be circular reasoning.

    Keith S likes it that way. He wants ID and CSI to look bad. It is his intent to make it look circular.

    Winston Ewert: Now, I think VJTorley is clearly intelligent enough not to make such a blatantly circular argument.

    You did not quote VJTorley. Like I said, this was VJTorley quoting Keith S, from a comment at TSZ. These are not VJTorley’s words.

    Winston Ewert: But I can also see why ID critics would see this as circular.

    In the front line of critics we find Keith S. Of course he would see it as circular. In fact, he is sure about it. He is the designer of the circularity.

    To make matters worse, Winston, you have told Keith S that ‘he is right’. We will have to hear about that one for a very long time.

  11. 11
    bornagain77 says:

    This may be of interest to some. Contrary to materialistic thought, information is measurable.,,

    Maxwell’s demon demonstration turns information into energy – November 2010
    Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive,,,, they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
    http://www.physorg.com/news/20.....nergy.html

    Demonic device converts information to energy – 2010
    Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
    http://www.scientificamerican......rts-inform

    Moreover, it is now possible, by using quantum information/entanglement, to show that classical information can be erased without using energy,,,

    Quantum knowledge cools computers: New understanding of entropy – June 2011
    Excerpt: No heat, even a cooling effect;
    In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.
    Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
    http://www.sciencedaily.com/re.....134300.htm

    Scientists show how to erase information without using energy – January 2011
    Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.”, Vaccaro explained.
    http://www.physorg.com/news/20.....nergy.html

    In other words, contrary to materialistic thought, it is now shown that, “Information is information, not matter or energy.”,,,

    “Information is information, not matter or energy.”
    Norbert Wiener – MIT Mathematician – (Cybernetics, 2nd edition, p. 132). Norbert Wiener created the modern field of control and communication systems.

    Of supplemental note:

    “Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same,,,”
    Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

    ,,,The total information content of the bacterial cell, when calculated from the thermodynamic perspective, is far larger than just what is encoded on the DNA,,

    Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
    Excerpt: Linschitz gave the figure 9.3 x 10^-12 cal/deg or 9.3 x 10^-12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. https://docs.google.com/document/d/18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE/edit
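    The quoted arithmetic is easy to check directly. Note that the per-cell entropy figure must be on the order of 9.3 × 10^-12 cal/deg (a negative exponent, sometimes mis-transcribed as 10^12) for the stated 4 × 10^12 bits to follow; the sketch below simply plugs that figure into H = S/(k ln 2):

    ```python
    import math

    # Verify the Setlow-Pollard figure H = S / (k ln 2) quoted above.
    # Per-cell entropy (Linschitz): ~9.3e-12 cal/deg; the negative
    # exponent is required for the quoted 4e12-bit result to follow.
    S_cal_per_deg = 9.3e-12
    S = S_cal_per_deg * 4.184         # convert to joules per kelvin
    k = 1.380649e-23                  # Boltzmann constant, J/K
    H_bits = S / (k * math.log(2))    # information content in bits
    print(f"H = {H_bits:.2e} bits")   # on the order of 4e12, as quoted
    ```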

    “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
    http://books.google.com/books?.....;lpg=PA112

    ‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    Also of note, Professor Harold Morowitz shows the Origin of Life ‘problem’, when working from the thermodynamic perspective, escalates dramatically over the 1 in 10^40,000 figure used by many as a ballpark figure for the improbability for the ‘spontaneous’ origin of the proteins necessary for ‘simple’ biological life:

    “The probability for the chance of formation of the smallest, simplest form of living organism known is 1 in 10^340,000,000. This number is 10 to the 340 millionth power! The size of this figure is truly staggering since there is only supposed to be approximately 10^80 (10 to the 80th power) electrons in the whole universe!”
    (Professor Harold Morowitz, Energy Flow In Biology pg. 99, Biophysicist of George Mason University)
    http://books.google.com/books?.....38;f=false

  12. 12
    Winston Ewert says:

    This is simply saying, if an object has a high degree of CSI, then it has a low probability of being caused by something unintelligent.

    To say that something has high CSI is to say that it is both specified and highly improbable. That is the definition, and how we establish CSI. Thus to say that the object has CSI and therefore has low probability is to say that the object is improbable because it is both improbable and specified. True, but circular.

    Now this is VJTorley quoting Keith S! ; see OP in this thread.

    Look again. I quoted VJTorley’s corrected version of Keith S’s argument.

    Heck, I’m interested to learn if YOU think Specified Complexity has changed:)

    I just spent a whole post arguing that it hadn’t changed.

  13. 13
    Box says:

    Winston Ewert: Look again. I quoted VJTorley’s corrected version of Keith S’s, argument.

    You are absolutely right. I’m absolutely wrong. My apologies.

  14. 14
    keith s says:

    Winston,

    I would be interested in your response to my comment #2, which shows Dembski contradicting the passage you quoted in your OP.

  15. 15
    ppolish says:

    Winston, I should have read your post before asking. Sorry.

    And when Dembski says the “jury is still out” on the “million dollar” question of CSI, he still means it. Although the jury is starting to tip in the direction of CSI? As Keith points out, many others are agreeing with Dembski on CSI. Reminds me of the movie “12 Angry Men” with Dembski in the role played by Henry Fonda:)

  16. 16
    Learned Hand says:

    Rather, he says that life is improbable because of Behe’s argument of irreducible complexity.

    Would it be fair to say that he argues that irreducible complexity satisfies the “complexity” requirement in specified complexity?

    (I realize that he and Behe are using the same word in two different ways, which makes my question sound inane. Please tell me if it actually is!)

  17. 17
    HeKS says:

    Winston,

    Can you please review this comment when you have a chance and tell me whether or not you agree? As you may know, I wrote the comment after our relatively extensive discussion on CSI not too long ago, but a number of ID opponents, and most particularly R0bb, have insisted I must have it all wrong and that few if any ID proponents would agree with my description of how CSI fits into the Design Inference.

  18. 18
    Eric Anderson says:

    Winston, thanks for your efforts to clarify.

    However, a few small items of potential confusion might remain. Let me mention a couple of points to perhaps address them:

    1. Irreducible complexity is a subset of specified complexity. It is a particular example of specified complexity as instantiated in a physical, three-dimensional, functioning machine.

    I’m not sure this sentence makes sense: “Rather, he says that life is improbable because of Behe’s argument of irreducible complexity.”

    Life is not improbable because of Behe’s argument. Behe’s argument is that because certain systems are improbable under purely natural and material processes, then it is likely those systems did not come about through such processes. That fact, coupled with a known cause (agency) that can produce such systems, allows us to infer that agency is the better causal explanation.

    Maybe that is what you meant by that sentence. In either case, irreducible complexity is a subset of the larger concept of specified complexity.

    2. “We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI . . .”

    While this seems potentially circular as stated, it seems the author of that sentence was not arguing whether the naked probability of a system existing could be determined by first determining CSI. Rather, by determining CSI exists we can draw an inference that purely natural processes are not the most likely cause.

    3. Surely Dembski does not still think that the “jury is still out” as to whether living systems have CSI. That was 15 years ago. Are you suggesting that Dembski is still unsure whether any living systems, such as what we find in DNA, contain CSI? That seems highly unlikely.

    4. On a related note, this seems confusing: “The specified complexity argument is explicitly contingent on the irreducible complexity argument.”

    No. Irreducible complexity is a subset (or a specific manifestation, if you will) of specified complexity. At least in the sense of the examples Behe has used (mammalian eye, blood clotting cascade, bacterial flagellum, etc.).

    Perhaps if we take an expansive definition of “irreducible complexity” then we might end up with a definition that is essentially equivalent to specified complexity. But in any event, I don’t think it makes sense to say that specified complexity is explicitly contingent on irreducible complexity.

  19. 19
    HeKS says:

    @Eric Anderson #18

    While I haven’t confirmed this with Behe, I suspect that the “complex” in “Irreducible Complexity” has a different meaning than in “Complex Specified Information”.

    In the latter, “complex” means “improbable”, while in “Irreducible Complexity” I believe it is a reference to “many well-matched parts”. Of course, the arising of a system consisting of many well-matched parts is, indeed, improbable on known naturalistic hypotheses, but I nonetheless believe that the word “complex” is being used differently in these two concepts.

  20. 20
    Eric Anderson says:

    HeKS @19:

    Thanks for the good comment.

    There isn’t any difference in substance.

    Behe’s definition of IC is:

    By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.

    “Several well-matched, interacting parts” is just another way of describing (i) the improbable arrangement; add to that (ii) the specification of those parts working together to perform a function, and you have irreducible complexity.

    In other words, CSI consists of (i) an improbable arrangement, plus (ii) a specification. Irreducible complexity consists of (i) an improbable arrangement, plus (ii) a specification. They are both referring to the same thing.

    Irreducible complexity is just a subset of CSI — a specific application of CSI in a functional, physical machine in three-dimensional space.

  21. 21
    Mung says:

    My IC/CSI meter just broke down. =P

  22. 22
    Joe says:

    Can someone please explain to me how the quotes in keith s’s comment 2 contradict each other? We all know keith s isn’t up to that task, as all he can do is make false accusations.

  23. 23
    ppolish says:

    1999 Dembski – Jury is still out on the million dollar question of CSI

    2014 Dembski – Jury is still out on the million dollar question of CSI although evidence is piling up.

    No contradiction if one ignores inflation (monetary).

  24. 24
    Eric Anderson says:

    The jury has been back from deliberations for a long time now and the verdict is clear. There is definitely CSI in biological systems.

  25. 25
    StephenB says:

    Eric

    Irreducible complexity is just a subset of CSI — a specific application of CSI in a functional, physical machine in three-dimensional space.

    Precisely. Thank you.
