Uncommon Descent Serving The Intelligent Design Community

Back to Basics


The materialists have been doing a little end zone dance over at The Circularity of the Design Inference post. They seem to think that Winston Ewert has conceded that Dembski’s CSI argument is circular. Their celebrations are misplaced. Ewert did nothing of the sort. He did NOT say that Dembski’s CSI argument is circular. He said (admittedly in a rather confusing and inelegant way) that some people’s interpretation of the CSI argument is circular.

Ewert is making a very simple point. To make a design inference based on mere probability alone is fallacious. I don’t know what all of the fuss is about. But just in case this is not clear by now, let’s go back to basics. The design inference requires two things: A huge ocean of probability and a very tiny island of specification. If you don’t have both, it does not work.

Perhaps a poker example will illuminate the issue. There are 2,598,960 five-card poker combinations. Only one of those combinations corresponds to the specification “royal flush in spades.” The probability of a royal flush in spades on any given hand is therefore about 0.000000385. Now let us suppose the “search space” (i.e., the ocean of probability) is “four consecutive hands of poker.” The probability of a series of independent events is the product of the probabilities of the individual events. The probability of receiving a royal flush in spades in four consecutive hands is therefore 0.000000385^4, or about 2.197X10^-26.
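The arithmetic can be checked in a few lines of Python (a sketch; the hand count comes from standard combinatorics, and the variable names are mine):

```python
from math import comb

# Number of distinct 5-card hands from a standard 52-card deck
total_hands = comb(52, 5)            # 2,598,960

# Exactly one of those hands matches "royal flush in spades"
p_royal_spades = 1 / total_hands     # ~3.85e-07

# Four consecutive independent deals: multiply the probabilities
p_four_in_a_row = p_royal_spades ** 4

print(total_hands)                   # 2598960
print(p_four_in_a_row)               # ~2.2e-26
```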

Here’s the interesting point. The probability of ANY given series of four poker hands is exactly the same, i.e., 2.197X10^-26. So why would every one of us look askance at the series “four royal flushes in spades in a row” even though it has the exact same low probability as every other sequence of four hands?

The answer to this is, of course, the idea behind CSI. Low probability by itself does not establish CSI. The fact that in the enormous probabilistic ocean of four consecutive poker hands the deal landed on a tiny little island of specification (“four royal flushes in spades”) is what causes us to suspect design (i.e., cheating).
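The two-part test described above can be sketched as a toy function. The threshold and names here are mine, chosen only to illustrate the poker case (Dembski's own universal probability bound is far stricter):

```python
# Illustrative threshold: far beyond what the number of poker hands
# ever actually dealt could plausibly overcome (NOT Dembski's bound).
THRESHOLD = 1e-20

def suspect_cheating(probability, matches_specification):
    """Flag an outcome only if it is BOTH vastly improbable AND specified."""
    return probability < THRESHOLD and matches_specification

# Any particular sequence of four hands has probability ~2.2e-26 ...
p_four_hands = (1 / 2598960) ** 4

# ... but only the specified sequence triggers the inference:
print(suspect_cheating(p_four_hands, False))  # False: improbable, unspecified
print(suspect_cheating(p_four_hands, True))   # True: improbable AND specified
```

The point of the sketch is that the probability input is identical in both calls; only the specification flag distinguishes an unremarkable deal from suspected cheating.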

Ewert writes:

The fact that an event or object is improbable is insufficient to establish that it formed by natural means. That’s why Dembski developed the notion of specified complexity, arguing that in order to reject chance events they must both be complex and specified.

Poker analogy: The fact that a series of four poker hands has a very low probability (i.e., 2.197X10^-26) is insufficient to establish that it was caused by pure chance. That’s why we need a specification as well.

Ewert:

Hence, it’s not the same thing to say that the evolution of the bacterial flagellum is improbable and that it didn’t happen. If the bacterial flagellum were not specified, it would be perfectly possible to evolve it even though it is vastly improbable.

Poker analogy: It is not the same thing to say that a series of four hands of poker is improbable and therefore it did not happen by chance. If the four hands were not specified, it would be perfectly possible to deal them by pure chance even though any particular such sequence is vastly improbable.

Ewert:

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

Poker analogy: Dembski’s work on specified complexity and design inference works to show us why that improbability (i.e., 2.197X10^-26) gives us reason to reject chance and accept design (i.e., cheating).

In conclusion, it seems to me that after all the dust settles we will see that Ewert was merely saying that Miller’s Mendacity (see the UD Glossary) misconstrues the CSI argument. But we already knew that.

Comments
AR: I notice what is rapidly becoming a stock selectively hyperskeptical dismissive argument, which is a strawman tactic that should be withdrawn. Doubtless, you learned it from those you look to, so you are not blameworthy. However, it needs correction for the record. Taking neg logs does not increase our ALGEBRAIC knowledge, but it may open our eyes to recognise what we deal with: info beyond a threshold. And by doing an in-effect dual, we are in a zone that accesses empirical metrics of information that allow us to ground ourselves in observational reality: studying info storage, evaluating vulnerability of functional configs to random perturbation, and observing redundancies, variable frequencies and the like. That is a very important and commonplace move in modelling, engineering, systems, physics etc. Last week, I pointed out that Laplace and Z transforms, and graphs tied to such, do that service for systems. (For years I lived more in complex frequency than time domains.) That is why the objection is beside the point and even strawmannish and selectively hyperskeptical. KF

kairosfocus
November 17, 2014 03:56 AM PDT
Thank you, Andre - well said.

Joe
November 17, 2014 03:50 AM PDT
It is impossible to stay courteous and civil when people are deliberate in their lies... How do others do it?

Andre
November 17, 2014 03:47 AM PDT
centrestream:
Surely an agent that can manipulate nature to deal with new or difficult situations could just as easily manipulate nature to prevent new or difficult situations.
So accidents never happen? Really?

Joe
November 17, 2014 03:23 AM PDT
Joe at 5: "Due to an agency that can utilize knowledge to manipulate nature for a purpose or to deal with new or difficult situations." Surely an agent that can manipulate nature to deal with new or difficult situations could just as easily manipulate nature to prevent new or difficult situations. Keiths #12: "But it’s also worth noting that in a follow-up thread, Barry scoffed when I told him..." Is this the same moderator who said that scoffing is a poor form of argumentation?

centrestream
November 17, 2014 03:19 AM PDT
Adapa:
Notice that specification is defined as an a priori description of a system, not a post hoc one. Orgel’s “specification” is a post hoc description of the formation of polypeptides from DNA.
LoL! ID uses Orgel's: we see functionality and say there is a specification.

Joe
November 17, 2014 03:16 AM PDT
RDFish - YOU don't get to erect a strawman and make IDists follow it.

Joe
November 17, 2014 03:14 AM PDT
Just demonstrate the protein can arise via unguided processes and be done with it. Adapa:
Already been done.
No, it hasn't.
See Joe Thornton’s work on ancestral protein reconstruction.
Seen it, and it has nothing to do with unguided evolution producing proteins. Obviously you are just gullible and will believe anything that you think supports unguided evolution.

Joe
November 17, 2014 03:11 AM PDT
It’s obvious from reading about Orgel’s collaborations with Stanley Miller among others that Orgel’s “specified complexity” has nothing in common with Dembski’s usurping of the term.
And another lie. It's obvious that the NCSE is just a bunch of desperate fools.

Joe
November 17, 2014 03:08 AM PDT
keith s:
I don’t mind them, and they are as ineffective as insults from Mung or Joe.
LoL! keith s is an insult to humanity. All I do is point that out.

Joe
November 17, 2014 03:06 AM PDT
So how can you support the assertion about “islands of function”?
Observation. That is, we observe protein function isolation. How can someone test the claim that unguided evolution can produce functional proteins?

Joe
November 17, 2014 03:05 AM PDT
"To make a design inference based on mere probability alone is fallacious." Agreed. But that is the only argument that you have. Even the IC argument is based on probability. That was the entire argument behind Behe's fallacious extrapolation from the rarity of chloroquine resistance.

centrestream
November 17, 2014 03:04 AM PDT
keith s:
Your error is this: You fail to recognize that in order to establish that something exhibits 500 bits of CSI, you have to calculate P(T|H), the probability that it came about by “Darwinian and other material mechanisms”, as Dembski put it. P(T|H) is right there in Dembski’s equation.
Wrong. That equation has to do with SPECIFICATION, not calculating CSI. And it is still up to you to provide H. You can't, so you lose, as usual.
Again: You cannot establish that something exhibits 500 bits of CSI unless you consider the relevant ‘chance hypotheses’.
Again, that is total nonsense and doesn't follow from anything Dembski has written.

Joe
November 17, 2014 02:53 AM PDT
Kairosfocus: In short, islands of function based on highly specific complex organisation of interacting parts, in wider config spaces that are overwhelmingly non functional, are a fact. You claim a "fact", something that can only be established by producing novel, say, protein sequences and testing for some function. Nobody disputes there are theoretically limitless possibilities of linear amino-acid sequences. For example, one essential requirement for establishing enzymatic activity for a protein is solubility in water of appropriate pH and osmotic concentration. How many as-yet theoretical sequences are soluble? Nobody knows. Can we say anything about the possible properties of sequences undiscovered in nature? Not with any reliability. So how can you support the assertion about "islands of function"?

Alicia Renard
November 17, 2014 02:50 AM PDT
Kairosfocus: And of course the WmAD metric boils down to a functionally specific info beyond a relevant threshold metric. That is why it can be log reduced and set in terms that are informational and amenable to information measuring approaches. But if I produce a graph of log x, I end up with a curved line that I can use as a converter between x and log x. The results are unique and reversible. Taking logs adds nothing to our knowledge of x. Please, can you explain what you think you are achieving by calculating a logarithm? "Not rocket science"? No, it's basic math!

Alicia Renard
November 17, 2014 02:25 AM PDT
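The probability-to-bits conversion being argued over in this exchange is just the standard self-information formula; a minimal sketch (the function name is mine):

```python
import math

def bits_from_probability(p):
    """Shannon self-information: I = -log2(p), measured in bits."""
    return -math.log2(p)

# A fair coin flip carries exactly 1 bit
print(bits_from_probability(0.5))    # 1.0

# The four-royal-flush probability from the poker example in the post
p = (1 / 2598960) ** 4
print(bits_from_probability(p))      # ~85.2 bits

# The mapping is reversible (2 ** -I recovers p), so taking logs
# changes the scale of the quantity, not its content.
```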
The pettifogging and absolute silliness pouring from RDF and Learned Hand is dizzying.

TSErik
November 17, 2014 01:39 AM PDT
KS, I think there is a reasonable goal of civil, on-topic discussion. Why not focus on the merits of fact, reasoning and alternative frameworks of thought? Where, too, two years ago you refused to take up the root-and-branch warrant for the evolutionary materialist tree of life. Had you successfully done so, this site and the underpinnings of design theory would have collapsed two years ago, if the blind watchmaker thesis in some form had actually been warranted on the merits. The offer to host the 6,000 word essay is still open. Where, the dog that will not bark may be telling us something. KF

kairosfocus
November 17, 2014 01:29 AM PDT
RDF, you made a clever rhetorical quip but have not dealt with the issue of your fallacy of the complex, loaded question. I repeat: design is a process, intelligently directed configuration. As just described, that process often leaves empirical markers, FSCO/I being most relevant. KF

kairosfocus
November 17, 2014 01:22 AM PDT
Folks, I suggest a pause to address the core issue instead of reading in by polarisation that ends up erecting strawmen. Orgel and Wicken across the 70's recognised that a common engineering phenomenon was present in the living cell and onward in life. Namely, functionally specific complex organisation that achieves functionality by interaction of correct parts assembled and coupled together per a wiring diagram, as Wicken termed it. This, leading to something that was not merely random, but was not merely repetitive order either. Orgel contrasted crystals and random mixes of mineral crystals in granite or the like to the specified complexity he was seeing. Wicken spoke of wiring diagrams, and pointed out that such is informational. Let us notice:
ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.] WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
The source of my descriptive summary term, functionally specific complex organisation and/or associated information (FSCO/I) should be obvious. Save, to the excessively polarised and selectively hyperskeptical. If something is informational, according to a generalised wiring diagram, then it should be amenable to net listing and representation, similar to circuit diagrams, wiring diagrams, instrumentation and piping diagrams, exploded view assembly diagrams, process-flow integrated charts and the like. Stuff that AutoCAD etc routinely reduce to files, in the end on the principle of structured chains of y/n q's specifying configuration from the field of possibilities. That already yields an info content in bits that is a valid info metric. Beyond, studies of redundancies and statistical patterns may compress somewhat but the aperiodic (non-crystalline repetition) arrangement and coupling in wiring diagrams will resist high levels of compression. As Trevors and Abel pointed out long ago. What WmAD worked on was metrics of quantification that would allow us to distinguish based on observable characteristics and induction on a body of experience, cases where there was good reason to infer design as cause. Right after the passage BA cited in the linked paper, Dembski highlighted five key points:
Orgel and Davies used specified complexity loosely. [--> I suggest, qualitatively would have been beter phrasing] I’ve formalized it as a statistical criterion for identifying the effects of intelligence. Specified complexity, as I develop it, is a subtle notion that incorporates five main ingredients: (1) a probabilistic version of complexity applicable to events; (2) conditionally independent patterns; (3) probabilistic resources, which come in two forms, replicational and specificational; (4) a specificational version of complexity applicable to patterns; and (5) a universal probability bound.
Boiling down in light of the 2005 metric model, specification is generalised on the original focal matter, functionality. Dembski notes that in the biological world, it is cased out as functionality . . . obviously, that based on interactive parts working in a wiring diagram pattern. That's a touchstone test that is also the main field in view for scientific issues. Complexity is best first viewed in light of that structured y/n q list that gives the state of the config observed or planned in the context of other possible clumped or scattered configs. That gets us into config spaces, with lists of parts, standard axes for parts, co-ord axes, xyx the most convenient with ox the polar axis and a reference origin. Parts then may be located at xyz co-ords and their local axes aligned relative to ox per yaw pitch roll. Couplings can be identified. And so forth. Tedious but that is how there is a place for everything and everything in its place. You do not want to "let the smoke out." Compression techniques may reduce, but not to the extent that one says, construct unit crystal, replicate till materials are used up. Or actually, allow crystallisation forces to do so. Wiring diagrams are not at random, if function is to emerge. Nor are they generally merely orderly. Organisation that is functionally specific and complex, thence informational is an apt description. From this one can assess likelihood of hitting on such by the equivalent of putting 6500 C3 parts in a bag and shaking up. That can work for some fairly simple things. But as complexity rises, less and less likely. 500 - 1,000 bits is a threshold set off sol system or observable cosmos atomic resources. That is where probability enters, but that is also connected to information, which is measured on log of inverse probabilities as well as by direct y/n q counts and statistical studies. All of which can be connected. 
And of course the WmAD metric boils down to a functionally specific info beyond a relevant threshold metric. That is why it can be log reduced and set in terms that are informational and amenable to information measuring approaches. Not rocket science. But, if you are determined to find objections, read to object, exert selective hyperskepticism, ignore the wider context of common engineering phenomena, and generally hold "IDiots" in hostile contempt and suspicion, you will predictably refuse to acknowledge the general reasonableness of such an approach. That is why I have now taken to holding up the exploded diagram of a 6500 C3 reel. Not a 747 jumbo jet. Not an instrument on its panel. Not a clock on its panel. Not even a watch. A very simple product made by a company that started out making watches and taxi meters. Look at the wiring diagram. Ask yourself whether any or nearly any clumped config of parts would work as a reel. Ask whether shaking parts up in a bag or the like would be likely to discover working configs. Or, scattering parts. In short, islands of function based on highly specific complex organisation of interacting parts, in wider config spaces that are overwhelmingly non functional, are a fact. Yes, tolerances and variations exist so there is no one point, and the island is a range of possibilities T in the wider -- much wider -- space W. Which is a multidimensional space where configs may have many close neighbourhood points etc etc etc. None of that changes the basic fact. And, as Paley put on the table in 1804 for watches, we can in principle have a self replicating reel with internalised blue print. Down that road lies the von Neumann Self Replicator, vNSR. That additionality would INCREASE the FSCO/I and would be further reason to infer design of the reel. Then, we can extend such phenomena to molecular scale and observe the cell. 
FSCO/I a-plenty, wiring networks everywhere, vNSR implemented, it uses digitally stored algorithmic codes to assemble proteins and more. That is why I keep pointing to OOL as the first context to be examined. It is foundational. It is easy to see that atomic resources of sol system and of observed cosmos are not enough to get a reasonable likelihood of blind watchmaker processes getting us to a cell. The FSCO/I strongly points to design. That is transformational as we see here the collapse of the blind watchmaker designer mimic project. Beyond the OOL, we see a tree of life. Built in niche adaptability and robustness are not at issue, body plan origin is. FSCO/I again, where when mutation patterns, the need for 10 - 100+ mn bases of fresh dna to address cell types, tissues, organs, regulatory expression from embryonic stages on, and integrated body plan systems all come together we have no good observational evidence founded reason to exclude the obvious empirically warranted source of such. Intelligently directed configuration, aka design. KFkairosfocus
November 17, 2014 01:19 AM PDT
Keith S, let's please stop with the nonsense of Darwinian mechanisms. What are Darwinian mechanisms? Natural selection? Are you aware that Darwin based this on what he observed about artificial selection? He reckoned that if we can do it in a guided way then nature can also do so! It is false, and the collapse of the Galapagos finches' evolution proves this; these creatures adapt to their environment and then always revert back when the pressures no longer apply. http://www.jstor.org/stable/10.1086/674899 Random mutations? There is no such thing as random mutations. http://www.ncbi.nlm.nih.gov/pubmed/22522932 How about that? We see design even in your supposed Darwinian framework. Risk management strategy - chew on it, Keith S.

Andre
November 16, 2014 11:30 PM PDT
Ugh, no, it was right the first time. I'd better go to bed; jet lag only makes you feel awake.

Learned Hand
November 16, 2014 10:36 PM PDT
If he’s ever specifically said that he thinks crystals are complex, I don’t recall it.
Sorry, I meant aren't complex.

Learned Hand
November 16, 2014 10:35 PM PDT
Barry,
Also, you are a guest on this website. You should make an effort at being polite to your host. Unprovoked insults are bad manners.
As all long-term (and even most short-term) readers know, you routinely insult your guests. Please spare us the hypocritical double standard. Just to be clear: I'm not complaining about your insults. I don't mind them, and they are as ineffective as insults from Mung or Joe. What I do object to is the double standard. If you're going to complain about insults, then ban yourself. If you're unwilling to ban yourself, then don't complain about others who behave better than you do.

keith s
November 16, 2014 10:35 PM PDT
I am familiar with Dembski stating that crystals do NOT exhibit CSI. Understanding Intelligent Design: Everything You Need to Know in Plain Language, pp. 105-106 (Harvest House, 2008).)
I'm not familiar with that work, and haven't read it. I have read No Free Lunch, but I'm not going to look for my copy right now so I'll point to Shallit and Elsberry instead, since they cite what I'm thinking of. They point out in more detail why Dembski considers crystals not to be designed. It's because there's a "physical necessity" cause, not because they aren't "complex." So I don't think this establishes that Dembski doesn't consider crystals "complex." (Having written that, I think that your citation comes from the paragraph Luskin excerpted, and which can helpfully be found in the top few results when googling "Dembski crystals complex". He seems to be saying the same thing there that Shallit and Elsberry were attributing to him.) If he's ever specifically said that he thinks crystals are complex, I don't recall it. I infer that from his position that highly-ordered things can be "complex," like the Kubrick Monolith or five thousand coins landing heads-up in a row or the Caputo sequence. That could be wrong; it's possible that he would say crystals aren't complex because nothing originating from such a physical necessity can be complex. Is that your take? Please do, if you have the time, share why you think Orgel and Dembski use the term "complexity" in exactly the same way. Is it because Orgel defines complexity somewhere in a way that comports with Dembski's use? Where? I've asked several times why you think this; you seem extremely reluctant to explain.Learned Hand
November 16, 2014 10:31 PM PDT
Barry:
keiths: “And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis” You don’t seem to understand the comment to which you link. Read it again, and if you still don’t understand it I will explain it.
Pay attention, Barry. That comment was from R0bb, not me. However, looking at the linked thread, I can see your error, and it is probably the same one that R0bb spotted. Your error is this: You fail to recognize that in order to establish that something exhibits 500 bits of CSI, you have to calculate P(T|H), the probability that it came about by "Darwinian and other material mechanisms", as Dembski put it. P(T|H) is right there in Dembski's equation. Are you familiar with the equation, and do you understand it? In order to calculate P(T|H), you have to consider all relevant 'Darwinian and material mechanisms' leading to the phenomenon in question. In other words, all relevant "chance hypotheses", to use Dembski's inapt terminology. Again: You cannot establish that something exhibits 500 bits of CSI unless you consider the relevant 'chance hypotheses'. Dembski's (and KF's, and gpuccio's) eternal problem is that they cannot calculate P(T|H) for biological phenomena. Since they can't do that, they can't demonstrate design.keith s
November 16, 2014 10:28 PM PDT
I still don't understand how materialists can think or believe that examples like stability control mechanisms, feedback loops, networks, two-way information flow, and redundancy can ever be the result of chance or necessity. The mind truly boggles at how they ignore these problems for their worldview. Things don't just appear to be designed; they are designed, and we recognise the engineering principles at play here.

Andre
November 16, 2014 10:26 PM PDT
Here is a definition of "specified complexity" based on Dembski's work that was posted on UD:
Specified complexity consists of two important components, both of which are essential for making reliable design inferences. The first component is the criterion of complexity or improbability. In order for an event to meet the standards of Dembski’s theoretical notion of specified complexity, the probability of its happening must be lower than the Universal Probability Bound which Dembski sets at one chance in 10^150 possibilities. The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex), it must also conform to an independently given, detachable pattern. Specification is like drawing a target on a wall and then shooting the arrow. Without the specification criterion, we’d be shooting the arrow and then drawing the target around it after the fact.
ID Foundations 4 Notice that specification is defined as an a priori description of a system, not a post hoc one. Orgel's "specification" is a post hoc description of the formation of polypeptides from DNA. The way Orgel and Dembski use the term is fundamentally different.

Adapa
November 16, 2014 10:25 PM PDT
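For reference, the universal probability bound quoted in the definition above (1 chance in 10^150) corresponds to roughly 498 bits of self-information, which is presumably why a 500-bit threshold is so often cited in these threads; a quick check:

```python
import math

# Dembski's universal probability bound, as quoted in the definition above
UPB = 1e-150

# Expressed as self-information in bits
bits = -math.log2(UPB)
print(bits)   # ~498.3, commonly rounded up to a 500-bit threshold
```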
Learned Hand:
I don’t know. I only have the wiki entry on specified complexity to go on...
Yet even wikipedia got it right, and you didn't. http://en.wikipedia.org/wiki/Specified_complexity#Definition

Mung
November 16, 2014 10:18 PM PDT
Adapa reminds me of keiths. If you don't have an argument, post a link to something else. Then pretend like you have an argument.
Leslie Orgel (1973) coined the now famous term “specified complexity” to distinguish between crystals, which are organized but not complex, and life, which is both organized and complex.
Which paper was that published in?

Mung
November 16, 2014 10:14 PM PDT
Adapa @ 35: I made a substantive criticism of the NCSE paper. Do you care to take a stab at answering it?

Barry Arrington
November 16, 2014 10:12 PM PDT