
The Circularity of the Design Inference


Keith S is right. Sort of.

As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument. As Keith describes the argument:

In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

In its most basic form, a specified complexity argument runs something like this:

  • Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
  • Premise 2) The bacterial flagellum is highly specified.
  • Conclusion) The bacterial flagellum did not evolve.

Keith's point is that in order to show that the bacterial flagellum did not evolve, we first have to show that the evolution of the bacterial flagellum is astronomically improbable, which is almost the same thing. Specified complexity moves the argument from claiming that evolution is improbable to claiming that evolution didn't happen. The difficult part is showing that evolution is improbable; once we've established that evolution is vastly improbable, it is a minor and obvious further step to conclude that it did not occur.

In some cases, people have misunderstood Dembski's argument, propounding or attacking some variation of:

  1. The evolution of the bacterial flagellum is highly improbable.
  2. Therefore the bacterial flagellum exhibits high CSI.
  3. Therefore the evolution of the bacterial flagellum is highly improbable.
  4. Therefore the bacterial flagellum did not evolve.

This is indeed a very silly argument, and people need to stop propounding or attacking it. CSI and specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is first to show that its evolution was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.

If specified complexity doesn't help establish the improbability of evolution, what good is it? What's the point of the specified complexity argument? Consider the following argument:

  1. Each snowflake pattern is astronomically improbable.
  2. Therefore it doesn’t snow.

Obviously, it does snow, so the argument must be fallacious. The fact that an event or object is improbable is insufficient to establish that it did not form by natural means. That is why Dembski developed the notion of specified complexity, arguing that in order to reject chance as an explanation, an event must be both complex and specified. Hence, it's not the same thing to say that the evolution of the bacterial flagellum is improbable and to say that it didn't happen. If the bacterial flagellum were not specified, it would be perfectly possible for it to evolve even though its evolution is vastly improbable.
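To make the point concrete, here is a minimal sketch in Python (the alphabet, string length, and variable names are illustrative only, not Dembski's own formalism): under a uniform chance hypothesis, a string of gibberish and a conspicuously patterned string are exactly equally improbable, and only the presence of an independent specification separates them.

from math import log2

# Under a uniform chance hypothesis over 100-character strings drawn from a
# 26-letter alphabet, every particular string has the same probability.
ALPHABET_SIZE = 26
LENGTH = 100

p_one_string = (1 / ALPHABET_SIZE) ** LENGTH   # probability of any single string
improbability_bits = -log2(p_one_string)       # about 470 bits, for gibberish and for "AAAA..." alike

# Improbability alone therefore cannot separate the two cases, just as every
# snowflake pattern is equally improbable. The design inference turns on the
# further question of specification: does the outcome match a short,
# independently given description ("all one letter", "a functional flagellum")?
candidates = {
    "random gibberish": False,   # complex (improbable) but not specified
    "A" * LENGTH: True,          # equally improbable, and also specified
}

print(f"Improbability of any one string: {improbability_bits:.0f} bits")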

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe's irreducible complexity, Axe's work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski's work on specified complexity and the design inference seeks to show why that improbability gives us reason to reject Darwinian evolution and accept design.

So Keith is right: arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn't designed to demonstrate the improbability of evolution. When used in its proper role, specified complexity is a valid, though limited, argument.

 

Comments
keiths:
As I said, Mung is a real asset to the ID movement.
Can I quote you on that? Mung
keith s:
But you haven’t addressed the points I made to Winston, namely that 1) Dembski presents a lengthy argument in support of his claim that “chance” and “law” cannot produce CSI, and 2) he then dubs this the “Law of Conservation of Information”. You don’t need lengthy arguments to demonstrate something that is true by definition, and something that is true merely by definition doesn’t merit the designation “Law”.
I'm having problems following your logic. If we want to do Euclidean geometry, we "define" a point, a line, a plane, etc. Then we use these terms to "demonstrate something is true" sometimes using "lengthy arguments." Now, if you were to "prove" that a "point" is exactly what you termed a "point" to be, then something will have gone haywire. Definitions are definitions; they can't pass into the realm of "theorems" or "laws." But this is not what Dembski has done. He has marshalled a "lengthy argument" to logically "demonstrate" that one cannot use an algorithm to increase the information of what is being inputted into the algorithm itself. He then sees this as something that will always hold, and, so, constituting an inviolable 'law,' which he then terms the "Law of the Conservation of Information." If Dembski had said: "I define that Law of Conservation of Information as meaning that the use of an algorithm cannot increase the information of what is inputted into the algorithm," and then went on to say: "I will now PROVE this definition true by invoking this Law," you'd have something to get upset about. But he didn't do that. It is because the Law of the Conservation of Information is 'generalizable' that it constitutes a "Law;" not just because he says so by "fiat." You're welcome to undermine his argument. ( As far as the power of EA, I'm sure Winston Ewert can address any argument you'd want to make) PaV
HeKS, thank you very much for your clear and well-written explanation of CSI. IMHO this should be an OP. Box
@R0bb #62
His understanding is that, in order to calculate CSI, we must choose as a chance hypothesis the actual cause of the event. When I asked how that works when the actual cause is design, he said:
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent.
R0bb, your characterization of what I said is misleading. The context in which our discussion took place was related to a challenge from Barry to show natural processes producing 500+ bits of CSI. You answered his challenge by pointing to an article by Ewert where he provides a high CSI calculation for a pattern with respect to a couple of natural processes known NOT to have produced the pattern and you then declared Ewert was saying that natural processes had actually produced huge amounts of CSI. In response, I pointed out to you that in order to claim that natural processes had ACTUALLY produced a specific amount of CSI, you would need to calculate the CSI on the basis of the natural process actually known to have produced the effect in question. This is different from trying to determine whether or not some object, pattern, event, etc. displays CSI when the cause of the object, pattern, event, etc. is not already known. In that case, you would calculate the CSI with respect to all known, relevant chance hypotheses and if all of them resulted in a high CSI calculation then you would say the object, pattern, whatever demonstrates high CSI (or perhaps just CSI period, depending on what metric is being used). It remains true that you do not calculate this type of CSI on the hypothesis of design. In other words, you do not ask: How highly improbable is the existence of this effect matching an independent specification given the assumption that it was intentionally brought about as the product of design. You can't do this, because improbability is calculated with respect to random/chance processes, not the intentional actions of conscious agents. When people talk about CSI with respect to designed objects/systems, they are often using a different meaning of the word "complex", such that it means "consisting of many well-matched parts". And by "specified" they typically mean that the particular relation of those parts have been specified for the purpose of bringing about a specific function. Where they mean CSI in the same sense as that calculated on the basis of chance hypotheses, they are not giving an actual specific calculation of CSI on the hypothesis of design, but merely saying that the thing in question matches an independent specification (usually a functional one) and is highly improbable on all known, relevant natural hypotheses. If Ewert disagrees with any of this, he's more than welcome to say so and I would appreciate the correction. HeKS
And I love this part: "....has thus disproven 150 years of evolutionary theory." wow Upright BiPed
@R0bb #45 Why didn't you point Ewert to my extensive post on the issue of CSI later in that thread, which happened to flow directly out of my conversations with him? Here it is: https://uncommondesc.wpengine.com/atheism/heks-strikes-gold-again-or-why-strong-evidence-of-design-is-so-often-stoutly-resisted-or-dismissed/#comment-518656 If he sees anything wrong there, I'm happy to be corrected. HeKS HeKS
After all the posturing, they look rather silly letting textbook material reduce them to insults. Upright BiPed
Adapa: You seem to argue a lot from authority (with a good deal of sneering mixed in). Why is that? Do you not feel comfortable with your own ability to understand and debate the issues? Are you not comfortable admitting this? Phinehas
Alicia Renard Having found TSZ due to numerous links and references here, I searched “Upright Biped” in their archive and discovered the semiotic theory. Refining my search, I find this that should supply adapa with the background. Thanks and wow. Upright Biped is just another internet boy wonder who invented his own custom version of reality and has thus disproven 150 years of evolutionary theory. That explains a lot of the inane claims he makes actually. Adapa
Adapa: Why resort to a sock puppet? This makes whatever point you are trying to make seem more cowardly than cute. Phinehas
R0bb:
You make a good point. When Dembski says, “Natural causes are incapable of generating CSI”, it sounds like a flat-out tautology.
I'm actually making the opposite point. :-) The fact that Dembski made that statement, backed up with a lengthy argument, indicates to me that he didn't see it as tautological. If he had been going by his later definition of CSI, then his statement would have been true by definition, and it wouldn't have required an argument at all -- just a restatement of his definition of CSI.
In my opinion, the LCI is so murky and problematic that there’s very little hope of discussing it productively.
That's for sure, and the problems begin with the inaccurate name itself. keith s
No one in science understands the UB magical mystery process which apparently doesn’t involve either chemistry or physics.
Wow Adapa, full-throttle defense mechanism. Immediate. No waiting.

Adapa, here's what you do: go find a biophysicist. Let me tell you what to ask him:

>> Mr. Biophysicist, is the nucleic triplet CTA arranged in that order because of the thermodynamic properties of the DNA molecule?

After he tells you "no", ask him this:

>> Mr. Biophysicist, does that mean that the arrangement is independent of what is called the "minimum total potential energy principle"?

After he tells you "yes", then ask him this:

>> Mr. Biophysicist, if CTA is not determined by the thermodynamic properties of the DNA molecule, then is it arranged that way because of the thermodynamic properties of leucine?

After he tells you "no", then ask him this:

>> Mr. Biophysicist, if the arrangement CTA is not based on the thermodynamic properties of leucine, does this mean that the arrangement CTA is physically and chemically "discontinuous" with leucine?

After he tells you "yes", then ask him this:

>> Well, Mr. Biophysicist, if the arrangement CTA is physically and chemically discontinuous with leucine, then how does the arrangement CTA result in leucine being added to a polypeptide?

After he tells you "it has to be translated", then come back here and tell me again that no one understands what I'm telling you. We can then talk about why the organization of the system preserves the necessary discontinuity during translation. You might even figure out why it's necessary. Imagine that. Upright BiPed
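For readers following the exchange, the relationship being described is the one recorded in the standard genetic code: the pairing of a codon with its amino acid is read off a lookup realized in the cell's translation machinery, not computed from the codon's own chemistry. A minimal sketch in Python, using a small fragment of the standard table (the function name and table selection are illustrative; this is not Upright BiPed's own formalism):

# A small fragment of the standard genetic code, written as a plain lookup
# table. The pairing of codon and amino acid is read off a table realized in
# the cell's translation machinery (tRNAs and aminoacyl-tRNA synthetases); it
# is not derivable from the chemistry of the codon sequence by itself.
CODON_TABLE = {
    "CUA": "Leu",   # mRNA codon corresponding to the DNA coding-strand triplet CTA
    "CUG": "Leu",
    "AUG": "Met",
    "UGG": "Trp",
    "UAA": "STOP",
}

def translate(mrna):
    """Translate an mRNA string codon by codon using the lookup table."""
    return [CODON_TABLE.get(mrna[i:i + 3], "?") for i in range(0, len(mrna) - 2, 3)]

print(translate("AUGCUAUGG"))   # ['Met', 'Leu', 'Trp']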
Upright Biped writes: Adapa, are you familiar with the concept that information requires a medium in order to exist? Having found TSZ due to numerous links and references here, I searched "Upright Biped" in their archive and discovered the semiotic theory. Refining my search, I find this that should supply adapa with the background. Alicia Renard
BTW do I have a thank you note coming for the Shannon inspired explanation for the specification/expression discontinuity
Not sure if you'll get that note from Adapa but I'll send one myself. That was a very helpful explanation. Thanks! Silver Asiatic
UBP:
Adapa, you do not understand the process of translation, and it is apparent that you wish to remain that way.
I had to arrive at that conclusion also, unfortunately. Silver Asiatic
You can say it but you’ll look even sillier and less scientifically informed than you already do. Sure you want to ride that train? I don't get on the same train as you Darwinists because your worldview 'trains' you to fling personal insults every time you get in the corner. You guys do this every time and you DONT EVEN KNOW WHY. And apparently you don't know how it makes you look when you do it, which relates to the inability to look at oneself honestly. Now I asked you specifically for the information storage location for the area morphology where the rim of my nostril joins the face, and I gave you a stripped down way to quantify some of that information as in a polynomial, crude as it may be. Now suppose that a person's face is made up of a billion such arc segments, you really think those trillions of bits of CSI are specified as follows: It’s called the PAX3 gene. Oh really? One gene completely specifies billions of unique faces throughout history, each with a billion arc segments? That little gene surely can store an astronomical amount of information now! A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Doesn't look like the researcher discovered where all of the CSI is stored, specifying the complete morphology of the face, or they wouldn't be using words like "potentially represent" and "associated". BTW do I have a thank you note coming for the Shannon inspired explanation for the specification/expression discontinuity or is this something you already mulled over in school? groovamos
keith s:
But you haven’t addressed the points I made to Winston, namely that 1) Dembski presents a lengthy argument in support of his claim that “chance” and “law” cannot produce CSI, and 2) he then dubs this the “Law of Conservation of Information”.
You make a good point. When Dembski says, "Natural causes are incapable of generating CSI", it sounds like a flat-out tautology. But one could also make the following argument: The Law of Conservation of Information doesn't merely say that natural processes are extremely unlikely to yield high-CSI events (which would be purely tautological). It also deals with the question of what a natural process can do in combination with a previous process. For example, can one process yield a complex unspecified string, and then a deterministic process convert it to a specified string? I don't know if that's a good argument. In my opinion, the LCI is so murky and problematic that there's very little hope of discussing it productively. R0bb
Upright BiPed you do not understand the process of translation, and it is apparent that you wish to remain that way. No one in science understands the UB magical mystery process which apparently doesn't involve either chemistry or physics. But you know what? That's not science's problem. Let us know when you finally decide to publish your amazing original research, K? Adapa
SB #108 I hope you found my contribution useful for your research. I thought it was something to do with the OP! markf
Adapa, you do not understand the process of translation, and it is apparent that you wish to remain that way. It is also vividly apparent that you think you can avoid any corrections of your misunderstandings by lobbing juvenile taunts at me. You are entirely correct about that; you can choose any avoidance mechanism you wish. Upright BiPed
Heh. Here is what Mung claims is not a quote mine. I wrote:
Mung quote mines Tamara Knight:
Mung: Because you asked about measuring information. Tamara: No, I asked about measuring information… No? You asked about measuring information but you did not ask about measuring information? How does that work?
Here’s what Tamara actually wrote:
No, I asked about measuring information in the Dembski sense.
You’re a real asset to the ID movement, Mung. Glad you’re not on my side.
Mung's "defense"? That he quoted the omitted words later in his comment. Apparently, a quote mine isn't a quote mine if you quote the omitted words later. As I said, Mung is a real asset to the ID movement. keith s
Mung Is that when you’ll understand the question in #111? Don't feed the troll. Adapa
Upright BiPed: Adapa, did you not understand the question in #111?
Adapa: I'm sorry Upright Biped, remind us again when and in what scientific journal your amazing original research will be published?
Mung: Is that when you'll understand the question in #111?
Adapa (#125): Troll.
Is that a no? Mung
keiths:
The meaning was completely changed by your selective editing. In other words, you quote mined a fellow IDer, which I think is hilarious.
What's hilarious is your constant re-definition of what it means to quote mine. keiths recently accused me of quote mining. Want to know what was hilarious about that? The text he claimed was missing was right there in the post in which he claimed I quote mined. keiths: not the best source for what it means to quote mine, nor the best source of actual examples of quote mining. Mung
Mung Is that when you’ll understand the question in #111?? Troll. Adapa
Adapa:
I’m sorry Upright Biped, remind us again when and in what scientific journal your amazing original research will be published?
Is that when you'll understand the question in #111? Mung
I'm sorry Upright Biped, remind us again when and in what scientific journal your amazing original research will be published? Adapa
Adapa, did you not understand the question in #111 ? Upright BiPed
StephenB,
Yes, it was minor.
No, it was major. You presented it as a quote, but you redacted some key words. There was no ellipsis to indicate that you had done so. The meaning was completely changed by your selective editing. In other words, you quote mined a fellow IDer, which I think is hilarious. keith s
groovamos Well interesting that you didn’t answer my challenge question to you, asking where is my nose (or part of such) specified. A drawing is a specification for a part. Something specifies the shape of that part of my nose. It's called the PAX3 gene. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans Since my nose is very symmetrical, I can say that there is a specification somewhere with a tolerance for the appearance of my nose. You can say it but you'll look even sillier and less scientifically informed than you already do. Sure you want to ride that train? Adapa
Adapa: Biological entities aren't fabricated from mechanical drawings with tolerances. Science FAIL.

Well, interesting that you didn't answer my challenge question to you, asking where my nose (or part of such) is specified. A drawing is a specification for a part. Something specifies the shape of that part of my nose. Since my nose is very symmetrical, I can say that there is a specification somewhere with a tolerance for the appearance of my nose. We could delineate corresponding chords on each nostril rim, put numbers to them, fit a polynomial of a certain degree to each, look at the coefficients, compare, and from that take a rough estimate of the tolerances for the coefficients. So there you have it: each coefficient is a dimension with a tolerance, and we have a rough guess for Dembski's CSI for the short chord where the rim of my nostril joins the face. My girlfriend has a slightly asymmetrical nose which adds to her appeal, so we would have to assign wider tolerances for the coefficients for her. This would mean a lower quantity of CSI for her nose compared to mine, UNLESS a designer intentionally wanted that kind of visual appeal for some people.

Now how all this applies can be illustrated by persons having defective heart valves. If what I put here is at all useful, then we should be able to say that there are specifications for defective heart valves which somehow have been deficient. If those specifications have dimensions that are deficient, or if they have tolerances which are too relaxed, problems can arise with heart valves. Now if you can, please tell me where I go wrong here, or at least tell me where the specifications for heart valves are stored as CSI. Or maybe in your world, heart valve morphology/function represents zero information. Science FAIL.

I spent a good bit of effort indicating to you the nature of the discontinuity between specifications and expression, based on Shannon information, and the categorization of uncertainty which can be applied to each. Now if my ability to edify on this has somehow failed, please indicate how. Or maybe you already knew all that Shannon stuff, no? And I was wasting your time? groovamos
Box @112, One problem I see in VJT's formulation is that #2 says "We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI", but #6 says the reverse of that. And assuming that the phrase "low probability of having been produced by unintelligent natural causes" refers to low P(H|T), there is no need to pair that requirement with specificity as VJT does in #1 and #6, since the design inference follows immediately from low P(H|T) alone. R0bb
Adapa:
Biological entities aren’t fabricated from mechanical drawings with tolerances.
The original biological entities could have been. Joe
R0bb:
I recommend the paper that Dembski calls his “most up-to-date treatment of CSI”. Note the formulas that contain the factor P(T|H).
The updated part refers to the specification component of CSI. Joe
groovamos. There is a discontinuity between a mechanical drawing and a part fabricated to such. Biological entities aren't fabricated from mechanical drawings with tolerances. Science FAIL. Adapa
Adapa:
I’ll be looking forward to reading your original research that details this discontinuity in the DNA translation process.
Pick up a biology textbook and start reading- the research has been completed years ago and it is widely written about. Ignoring the evidence or refusing to discuss it won't make it go away. Joe
Adapa: I see. You can't provide any evidence of this claimed discontinuity, just keep tossing out your usual buzzword salad.

There is a discontinuity between a mechanical drawing and a part fabricated to such. If the part is made correctly, its dimensions are within the tolerances specified. The dimensions and their tolerances represent information which in themselves allow for uncertainty. The tighter the tolerances, the less uncertainty is allowed in the actual dimension referenced by such on the part. The greater the reduction of uncertainty, the greater the informational content of the dimension on the drawing, which is in line with Shannon. However, the part itself has an actual dimension which can be measured. It does not "have a tolerance"; it has an actual dimension which can be measured. The measurement has an associated accuracy. The higher the measurement accuracy, the greater the uncertainty of the actual dimension, which again is in line with Shannon. Also, the higher the accuracy, the more digits are required to record the measurement, in line with Shannon.

So now you should understand the discontinuity which you somehow think we cannot define. The difference between the drawing and the part is a discontinuity, and the discontinuity can be said to underlie the different way uncertainty is applicable to each, the drawing and the part.

Now I have a question. The outer rim of my nostril joins my face with a particular curve. Can you or anyone please tell me where this curve is specified, analogous to how a dimension on a part is specified on a mechanical drawing? groovamos
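One plausible way to cash out the Shannon point groovamos is making: if a dimension could a priori lie anywhere in a range R and the drawing pins it to a tolerance window of width w, the specification conveys roughly log2(R/w) bits, so tighter tolerances mean more bits. A small Python sketch with purely illustrative numbers (the 100 mm range and the two tolerance windows are assumptions made up for the example):

from math import log2

def specification_bits(prior_range_mm, tolerance_window_mm):
    """Bits conveyed by pinning a dimension, a priori anywhere in
    prior_range_mm, down to a window of width tolerance_window_mm
    (a uniform-distribution reading of the Shannon measure)."""
    return log2(prior_range_mm / tolerance_window_mm)

# Illustrative numbers only: a dimension that could plausibly fall anywhere in
# a 100 mm range, specified either as +/- 0.01 mm (a 0.02 mm window) or as the
# looser +/- 1 mm (a 2 mm window).
print(f"Tight tolerance: {specification_bits(100.0, 0.02):.1f} bits")   # ~12.3 bits
print(f"Loose tolerance: {specification_bits(100.0, 2.0):.1f} bits")    # ~5.6 bits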
I'm quoting VJTorley's version of Dembski’s argument from here. Winston Ewert must hold that it contains a mistake; otherwise he would have no reason to write his OP. I do hope Winston Ewert will show where VJTorley errs. VJTorley:
I'm sorry to say that KeithS has badly misconstrued Dembski's argument: he assumes that the "could not" in premise 1 refers to absolute impossibility, whereas in fact it simply refers to astronomical improbability. Here is Dr. Dembski's argument, restated without circularity:
1. To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes.
2. We can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).
3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via "Darwinian and other material mechanisms."
4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in "bits of SC". The smaller P(T|H) is, the higher the SC value.
5. If the SC value exceeds the threshold, we conclude that it is certain beyond reasonable doubt that unintelligent processes did not produce the object. We deem it to have CSI and we conclude that it was designed.
6. To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed – that is, that it is certain beyond all reasonable doubt that it was not produced by unguided evolution or any other unintelligent process.
Box
Adapa, are you familiar with the concept that information requires a medium in order to exist? Upright BiPed
@ Upright Biped I see. You can't provide any evidence of this claimed discontinuity, just keep tossing out your usual buzzword salad. Guess we'll never see your amazing original research published anywhere. Have you tried the DI's phony science magazine Bio-Complexity? They're really hurting for submissions I hear. Adapa
My #02 again: A paragraph has I. A DNA molecule has I. A sand castle has no I, and therefore no CSI. Upright BiPed
Mark
Now perhaps you can explain why you ask?
First, I want to do a comparative/contrast analysis among various forms of CSI, i.e., a sand castle, a written paragraph, a DNA molecule, etc. Second, I want to find out if ID critics reject CSI because they don't want it to be workable and search for ways to make it unworkable, or because it is unworkable. Third, I want to relate those scientific calculations (and the different approaches among them) to the informal cognitive process by which design is recognized intuitively. Fourth, I want to analyze the anti-ID bias against the informal cognitive process by which design is detected and the ways in which it is transferred to the criticism of ID science. StephenB
Adapa, No need to wait. The physicochemical discontinuity (between the arrangement of nucleic acids in a codon and the amino acids those arrangements represent to the system) has been well-known since the 1960s. When Nirenberg set out to crack the genetic code, his methodology required him to demonstrate that the relationships exist, because they cannot be derived from the arrangement of the nucleic input - even in principle. Not even the arrangement of the nucleotides themselves can be derived from physical law - i.e. they exist independent of the minimum total potential energy principle. (Like I said, try to educate yourself to the issues at hand, that way you can target your insults more effectively, and hopefully you'll stop making statements that disagree with physical reality). Upright BiPed
KeithS
It isn’t a “minor fact”.
Yes, it was minor. It was a fair representation of his unclarified position and it preceded a question that asked for further clarification. I recognized that I could be misreading him. Indeed, he acknowledges that he should have been more clear about that point. Once he did, I made the appropriate adjustment. I realize and acknowledge that I had misunderstood him. On the other hand, your breach was major. Even after Winston asked you to acknowledge that Dembski's argument was not circular and explained why it is not circular, you continue to promote the same mistake. Indeed, you even tried to implicate him in your error. KeithS:
With the circularity issue out of the way, I’d like to draw attention to the other flaws of Dembski’s CSI.
Thus, you imply that his post had confirmed your claim that Dembski's argument was circular when it did nothing of the kind. Yet you have still not confessed your error or amended your claims. You simply double down. Remarkable! StephenB
Upright BiPed Also, work on your reading comprehension as well. I'll be looking forward to reading your original research that details this discontinuity in the DNA translation process. When and in which scientific journal will it be published? Adapa
SB #94
I am not insisting on an accurate measurement. I am asking Robb (or you, or Winston) for a step-by-step description of the process by which the CSI would be calculated.
Well, that's pretty tedious and I don't see the point, as it only illustrates the problems with the whole CSI concept – but here goes. Dembski gives the formula on page 21 of the oft-cited paper:

-log2[ M*N * Theta_S(T) * P(T|H) ]

To get a handle on it you might only consider piles of sand with the same number of grains. So you could start by estimating the number of grains and the number of stable configurations of that number of grains. You could then heroically assume that with natural causes all stable configurations are equally likely. I guess you could do better than that if you knew more about the natural processes that create piles of sand. Then:

Theta_S(T) is the specificity of the sandcastle. That would be the number of possible configurations which could be described as simply or more simply than "sandcastle".

M*N is the number of opportunities to create such a pile of sand. I guess you could limit it to the number of opportunities for you to observe such a pile, which would be roughly the total area of beach you had seen in your life divided by the average size of a pile of sand!

P(T|H) is the probability of natural processes creating the pile of sand. Given our absurd assumptions this would be 1/(the number of stable configurations).

As you can see it is all utterly absurd and meaningless, but don't blame me – that is the formula. Now perhaps you can explain why you ask? markf
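For concreteness, here is a short Python sketch of the calculation markf describes, using his formula and deliberately made-up placeholder values for the three factors (the assumptions are as heroic as he says; the function name and all numbers are illustrative only):

from math import log2

def specified_complexity_bits(replicational_resources, specificational_resources, p_t_given_h):
    """Specified complexity in bits, per the formula markf cites:
    -log2( M*N * Theta_S(T) * P(T|H) ).
    replicational_resources   -- M*N, opportunities for the event to occur or be observed
    specificational_resources -- Theta_S(T), patterns describable at least as simply as T
    p_t_given_h               -- P(T|H), probability of T under the chance hypothesis
    """
    return -log2(replicational_resources * specificational_resources * p_t_given_h)

# Placeholder numbers, purely to illustrate the arithmetic: suppose 10^20
# equally likely stable configurations of the grains (so P(T|H) = 1e-20 for any
# one of them), 10^4 configurations describable as simply as "sandcastle", and
# 10^9 observed opportunities (piles of sand seen in a lifetime).
sc = specified_complexity_bits(1e9, 1e4, 1e-20)
print(f"{sc:.1f} bits")   # ~23 bits; the method compares this value to a chosen threshold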
Adapa, Also, work on your reading comprehension as well. Upright BiPed
Keith #98, You must be pinching yourself. You have been right - well 'sort of' - for the very first time. Only a few days ago things were less comfy when you tried to defend your 'damp squib' with the absurd proposal to promote assumptions to facts. If you have already apologized I must have missed it. BTW I'm still not sure what it is you are exactly right about. Box
Upright BiPed The product of translation cannot be reduced to physical law because of the natural (and necessary) discontinuity that exists between the arrangement of the medium and its post-translation effect. It is the organization of the system that establishes the effect, not physical law. Pity that such a disconnect only exists in your imagination. If you had any evidence for it you'd be a Nobel Prize winner by now. But alas, you don't. Adapa
It's Easier to Falsify Intelligent Design than Darwinian Evolution - Michael Behe, PhD https://www.youtube.com/watch?v=_T1v_VLueGk The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 Excerpt of conclusion pg. 42: "To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis." http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2662469/ Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag bornagain77
Winston,
Do a search for “Laws of Math” you’ll find that many mathematical results are stated as laws.
Dembski's "Law of Conservation of Information" isn't merely a mathematical law. It's an empirical claim about physical reality:
Natural causes are incapable of generating CSI. I call this result the law of conservation of information, or LCI for short…
Dembski thought this was a non-obvious result, not a mere truth by definition. For example, he writes:
Nevertheless the sense that laws can sift chance and thereby generate CSI is deep-seated in the scientific community.
And also:
It is CSI that for Manfred Eigen constitutes the great mystery of life’s origin, and one he hopes eventually to unravel in terms of algorithms and natural laws.
Those statements make no sense if CSI, by definition, cannot be produced by "chance" and "law". Now, given that the circularity problem has been pointed out for at least eight years, I'm sure that by now Dembski is aware of it and has amended his position. However, the quotes I've provided make it clear that he was still in thrall to his mistake when he wrote Intelligent Design. keith s
StephenB, It isn't a "minor fact". The words you omitted completely change Winston's meaning, as you full well know, or should know (hi, KF!). Here's your "quote" of Winston, in which you "unintentionally forgot" to insert an ellipsis:
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
Here's what Winston actually wrote:
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
I must say, it's interesting to see an IDer quote mine a fellow IDer for a change. keith s
KeithS
As KF would say: Please do better, Stephen.
LOL, Keiths. By all means, let's obsess over the minor fact that I unintentionally forgot to insert elliptical dots in a quote, which has been resolved, and ignore your ongoing misrepresentation of Dembski's argument, which you still will not acknowledge even after being corrected by the very person you appealed to for solace. Yes, indeed. Let's do better. StephenB
Hi R0bb, You wrote:
I agree with Winston here. To restate his claim: For H=”natural selection” and T=”success”: The argument of specified complexity was never intended to show that P(T|H) << 1. It was intended to show that if P(T|H) << 1, then P(H|T) << 1. P.S. To be even more accurate, the above should be: For H=”natural selection” and T=”success”: The argument of specified complexity was never intended to show that P(T|H) <<<< 1. It was intended to show that if P(T|H) <<<< 1, then P(H|T) < 1/2.
But you haven't addressed the points I made to Winston, namely that 1) Dembski presents a lengthy argument in support of his claim that "chance" and "law" cannot produce CSI, and 2) he then dubs this the "Law of Conservation of Information". You don't need lengthy arguments to demonstrate something that is true by definition, and something that is true merely by definition doesn't merit the designation "Law". keith s
StephenB, Despite your elaborate dance, the truth remains. You blatantly misquoted Winston:
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
...when Winston actually wrote this:
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
Box mistakenly trusted you instead of checking the OP. As KF would say: Please do better, Stephen. keith s
SB: Let’s say that I am evaluating the probability that wind, air, and erosion produced a well-formed sand castle on the beach. Assume that the object contains two million grains of sand, the time value of your choice, or any other quantitative facts you need to make the determination. Take me through the process. Pay special attention to the chronology involved in determining the probability that unguided forces were responsible for the object and the amount of complex specified information that it contains. Mark
The fact that it may be extremely difficult to estimate P(T|H) in practice is a problem with the CSI method, not proof that CSI doesn't need such an estimate.
I am not insisting on an accurate measurement. I am asking Robb (or you, or Winston) for a step-by-step description of the process by which the CSI would be calculated. StephenB
...you [won't] say things that ignore physical reality. Upright BiPed
Adapa, The product of translation cannot be reduced to physical law because of the natural (and necessary) discontinuity that exists between the arrangement of the medium and its post-translation effect. It is the organization of the system that establishes the effect, not physical law. Try to educate yourself. Your insults will become better, and you say things that ignore physical reality. Upright BiPed
Upright BiPed Perhaps the reason you are such a vociferous ID critic is because you don’t understand the issues. Perhaps the reason ID makes zero headway in the scientific community is because what IDers keep dreaming up as problematic evolutionary issues aren't issues at all. Adapa
Silver Asiatic You have a massively difficult task in showing the evolutionary origin of multi-layered cellular information processing, cell-repair, feedback control loops, protein folds, epigenetics … etc Yeah, we know the ID argument by heart. "This is sooooo complex I can't imagine it evolved, therefore the Christian God, er the Intelligent Designer did it". You actually have the massively difficult task in showing the Designed origin of multi-layered cellular information processing, cell-repair, feedback control loops, protein folds, epigenetics. Science has a perfectly good empirically observed mechanism for explaining the evolution of complexity in biological life. What was that ID mechanism for the creation and manufacture of these "designs" again? Let us know when you come up with some positive evidence for your Intelligent Designer. Adapa
Adapa #86
The processes of transcription and translation that take DNA to mRNA to polypeptide chain are 100% deterministic physical processes.
The fact that the system follows natural law has never been in question, by anyone. Perhaps the reason you are such a vociferous ID critic is because you don't understand the issues. Upright BiPed
Thanks for admitting the ID argument is busted.
It's clear that you're not interested in understanding the argument. You think you've won some kind of victory merely by declaring it to yourself. You've done nothing against the ID inference and have actually supported the non-circularity argument I proposed (by saying nothing about it). You have a massively difficult task in showing the evolutionary origin of multi-layered cellular information processing, cell-repair, feedback control loops, protein folds, epigenetics ... etc. It's a question of origins. You might want to inform Jim Shapiro that there's really no problem here. It's all 100% deterministic on known "laws of chemistry and physics": "One of the great scientific ironies of the last century is the fact that molecular biology, which its pioneers expected to provide a firm chemical and physical basis for understanding life, instead uncovered powerful sensor and communication networks essential to all vital processes , such as metabolism, growth, the cell cycle, cellular differentiation, and multicellular morphogenesis. ... [T]he life sciences have converged with other disciplines to focus on questions of acquiring, processing, and transmitting information to ensure the correct operation of complex vital systems." (p. 4) -- Evolution: A View from the 21st Century Silver Asiatic
Dr. Stephen Meyer: Chemistry/RNA World/crystal formation can't explain genetic information - video Excerpt 5:00 minute mark: "If there is no chemical interaction here (in the DNA molecule) you can't invoke chemistry to explain sequencing" http://www.youtube.com/watch?v=yLeWh8Df3k8 bornagain77
Silver Asiatic We don't consider deterministic physical processes (water running downhill) as the output of a communication/information process. The processes of transcription and translation that take DNA to mRNA to polypeptide chain are 100% deterministic physical processes. These are chemical molecules undergoing reactions by following the laws of chemistry and physics. Thanks for admitting the ID argument is busted. Adapa
Upright BiPed #34
I think it would be a great advance to ID, if ID proponents would get it clear in their heads what the I in CSI is.
Agreed. Our opponents (and I think this OP itself?) misunderstand this. KF offered the following:
Further, the relevant configuration-sensitive separately and “simply” describable specification is often that of carrying out a relevant function on correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to source or better object code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3) to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. Where, too, function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010 goes beyond mere Lego brick stacking of interchangeable modules. That is there are interactions all along the folded string of AA residues.
Restated: "... the [information] specification ... often [carries] out a relevant function [of] correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to ... code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3) to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. ... function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010[,] goes beyond mere Lego brick stacking of interchangeable modules. That is there are interactions all along the folded string of AA residues."

Restated again: Information is communicative. What it often communicates is a function. Thus, we have FCSI. We observe information (as examples given), and recognize its related function. Specified information goes beyond mere patterning. As you [Upright Biped] have argued many times, there is a necessary "discontinuity" between sender and receiver of information. There is a separation or 'freedom' involved because otherwise the information would be linked to the function in a deterministic mechanism -- and therefore would not be information, and would be unnecessary. We don't consider deterministic physical processes (water running downhill) as the output of a communication/information process. Gravity is not communicating information to the water and the water is not receiving the message from gravity or the hill (as far as we can tell). It is much different for genetic information which communicates to cell-processes. It's different also for information communicated and received by plants and animals -- and especially humans, in diverse ways.

The ID argument does not begin by declaring that something could not have originated via natural processes. It begins by observing Information which is Specified (usually to an observable Function) and is Complex beyond a threshold. I don't think the ID inference concludes "this could not have evolved", but rather, "it's most probable that this is the product of Design by Intelligence, since it correlates to FCSI observances that we know were produced by intelligence." The ID inference can be falsified in those cases by showing that the FCSI was produced by natural forces. There is nothing circular about this.

1. Observe FCSI. [Note]: What is FCSI? It has nothing to do with whether the thing has evolved or not. It has to do with Information which communicates - sender to receiver. We often observe a function. We observe that.
2. Explore natural forces/processes to see if the observation was caused by them.
3. Explore known Intelligent forces for correlation or causation.
4. Determine the most probable source of FCSI -- if #2 is Null and #3 is highly correlated, the inference to the most probable cause is Intelligent Design.

It's not a circular argument. SETI research is not based on a tautology. Forensics does not begin with the conclusion and then define results based on that.

1. We know it is murder if there is MSI (murder specific information).
2. MSI is defined as having ruled out natural cause.
3. We can't find natural cause.
4. Therefore there is MSI and a murder must have occurred.

If you think forensic science works like that, then I can understand why you think CSI is a circular argument. Silver Asiatic
Of supplemental note: classical 'digital' information, is found to be a subset of ‘non-local', (i.e. beyond space and time), quantum entanglement/information by the following method:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm
,,,And here is the evidence that quantum information is in fact ‘conserved’;,,,
Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html Quantum no-deleting theorem Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. http://en.wikipedia.org/wiki/Quantum_no-deleting_theorem#Consequence
Besides providing direct empirical falsification to the materialistic claims of neo-Darwinists, the implications of finding 'non-local', beyond space and time, and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068 Quantum Entangled Consciousness - Life After Death - Stuart Hameroff - video https://vimeo.com/39982578
Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men. ROYAL TAILOR – HOLD ME TOGETHER – music video http://www.youtube.com/watch?v=vbpJ2FeeJgw
bornagain77
Thus not only is information not emergent from a energy-matter basis, as is presupposed in atheistic materialism, but energy, and even matter, ultimately reduce to an information basis as is presupposed in Christian Theism (John1:1). How all this plays out in molecular biology is that this non-local, beyond space and time, quantum entanglement, by which energy/matter was reduced to information and teleported, is now found in molecular biology on a massive scale. Here are a few of my notes along that line:
Quantum entanglement holds together life’s blueprint – 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA - short video https://vimeo.com/92405752 Coherent Intrachain energy migration at room temperature – Elisabetta Collini and Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
That quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc..), should be found in molecular biology on such a massive scale is a direct empirical falsification of Darwinian claims, for how can the ‘non-local’ quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? Appealing to the probability of various 'random' configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply!
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php Closing the last Bell-test loophole for photons - Jun 11, 2013 Excerpt:– requiring no assumptions or correction of count rates – that confirmed quantum entanglement to nearly 70 standard deviations.,,, http://phys.org/news/2013-06-bell-test-loophole-photons.html etc.. etc..
In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! And although Naturalists have proposed various, far fetched, naturalistic scenarios to try to get around the Theistic implications of quantum non-locality, none of the ‘far fetched’ naturalistic solutions, in themselves, are compatible with the reductive materialism that undergirds neo-Darwinian thought.
"[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, ...materialism is not." Eugene Wigner Quantum Physics Debunks Materialism - video playlist https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism. http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952939
Thus, as far as empirical science itself is concerned, Neo-Darwinism is directly falsified by the experimental evidence, in its claim that information in life is merely ‘emergent’ from a materialistic basis. bornagain77
Moreover, to drive the nail in the coffin, as it were, of the materialistic belief that information is merely emergent from a material basis, it has now been shown, by using quantum entanglement as a 'quantum information channel', that material reduces to information. (Of note: energy is completely reduced to quantum information, whereas matter is semi-completely reduced, with the caveat being that matter can be reduced to energy via E=mc^2.) Here are a few of my notes along that line:
Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original,,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap – 2009 Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,, “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts Scientists Report Finding Reliable Way to Teleport Data By JOHN MARKOFF - MAY 29, 2014 Excerpt: They report that they have achieved perfectly accurate teleportation of quantum information over short distances. They are now seeking to repeat their experiment over the distance of more than a kilometer. If they are able to repeatedly show that entanglement works at this distance, it will be a definitive demonstration of the entanglement phenomenon and quantum mechanical theory. http://www.nytimes.com/2014/05/30/science/scientists-report-finding-reliable-way-to-teleport-data.html?_r=2 First Teleportation Of Multiple Quantum Properties Of A Single Photon - Oct 7, 2014 To truly teleport an object, you have to include all its quantum properties. Excerpt: ,,,It is these properties— the spin angular momentum and the orbital angular momentum?(of a photon)—?that Xi-Lin and co have teleported together for the first time.,,, https://medium.com/the-physics-arxiv-blog/first-teleportation-of-multiple-quantum-properties-of-a-single-photon-7c1e61598565 Researchers Succeed in Quantum Teleportation of Light Waves - April 2011 Excerpt: In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one place and instantly resurrected in another, “alive” again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be lost. http://www.popsci.com/technology/article/2011-04/quantum-teleportation-breakthrough-could-lead-instantanous-computing How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. — As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/science-vs-myth/everyday-myths/teleportation1.htm Quantum Teleportation – IBM Research Page Excerpt: “it would destroy the original (photon) in the process,,” http://researcher.ibm.com/view_project.php?id=2862
bornagain77
Winston Ewert, for me, not being trained in higher mathematics, the easiest way to avoid this 'sort of' circularity, i.e. the adding of information as sort of a 'gloss' after the improbability of a certain configuration of a material substrate is established,,
"Probability comes before, not after the establishment of CSI." Winston Ewert
,,,the way to avoid adding information as somewhat of a 'gloss' after the probability calculation is to go directly to the empirical evidence itself and establish that information is a 'real' entity that is separate from, and even primary to, material.,,, First, a little background as to the situation we are dealing with when it comes to materialists. Believe it or not, not too many years ago I remember that materialists, as obvious as it is that sophisticated programming/information is in DNA (Bill Gates, etc.), would constantly argue that information was not really even in life. In fact, in the past, more than once, the following citations have been used to counteract the materialistic belief that information is not really in life.
Every Bit Digital: DNA’s Programming Really Bugs Some ID Critics - Casey Luskin - 2010 Excerpt: "There’s a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins." http://www.salvomag.com/new/articles/salvo12/12luskin2.php Harvard cracks DNA storage, crams 700 terabytes of data into a single gram - Sebastian Anthony - August 17, 2012 Excerpt: A bioengineer and geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.,,, Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored. http://www.extremetech.com/extreme/134672-harvard-cracks-dna-storage-crams-700-terabytes-of-data-into-a-single-gram Information Storage in DNA by Wyss Institute - video https://vimeo.com/47615970 Quote from preceding video: "The theoretical (information) density of DNA is you could store the total world information, which is 1.8 zetabytes, at least in 2011, in about 4 grams of DNA." Sriram Kosuri PhD. - Wyss Institute DNA: The Ultimate Hard Drive - Science Magazine, August-16-2012 Excerpt: "When it comes to storing information, hard drives don't hold a candle to DNA. Our genetic code packs billions of gigabytes into a single gram. A mere milligram of the molecule could encode the complete text of every book in the Library of Congress and have plenty of room to spare." http://news.sciencemag.org/sciencenow/2012/08/written-in-dna-code.html
In my honest opinion, the main reason that atheists/materialists would even argue the absurd position that information is not really in life in the first place, despite the fact that highly sophisticated functional information is obviously in life, is that, in the materialistic worldview, information is held to ultimately be 'emergent' from, or even 'illusory' to, a material basis. That is to say, in the materialistic worldview, information (as well as consciousness itself) is held to merely emerge from what they consider to be the primary, i.e. the 'real', material basis of reality. And while probabilistic arguments are certainly very good, for most reasonable people, for determining whether a functional sequence was designed or not, probabilistic arguments fail to be conclusive for the committed atheist/materialist. The reason for this failure of conclusiveness, as far as I can tell from my dealings with committed atheists/materialists on UD, is that they can always imagine that some undiscovered hidden law, or some type of undiscovered hidden process, will someday be discovered that will 'come to the rescue' for them: some undiscovered process or law that will make these sequences of functional information not as improbable as they are turning out to be. I call it the Dumb and Dumber 'there's a chance' hypothesis:
Dumb and Dumber 'There's a Chance' https://www.youtube.com/watch?v=KX5jNnDMfxA
(For a prime example of this misbegotten hope, see keith s's current excitement, as ill-founded as that excitement is, with Wagner's, as far as I can gather, evidence-free conjecture in his book 'Arrival Of The Fittest', a book in which Wagner tries to reduce the mathematical probabilistic barriers for blind search.) (Of note: Winston, given your skill in mathematics in this particular area, perhaps you can examine Wagner's argument more closely and do a post on that?) Thus, due to that 'misbegotten hope' of the committed materialist, it is very important, as Dr. Dembski has recently done in his new book, to show that information is the foundational 'stuff' of the universe, and to show that material is 'emergent', even 'illusory', from what is the 'real' information basis of the universe.
“The thesis of my book ‘Being as Communion’ is that the fundamental stuff of the world is information. That things are real because they exchange information one with another.” Conversations with William Dembski–The Thesis of Being as Communion – video https://www.youtube.com/watch?v=cYAsaU9IvnI Conversations with William Dembski–Information All the Way Down - video https://www.youtube.com/watch?v=BnVss3QseCw
Dr. Dembski's contention that information, not material, is the 'fundamental stuff' of reality has some very impressive, even conclusive, empirical clout behind it. Both Wheeler and Zeilinger have, in their dealings with quantum mechanics, come to the conclusion that information, not material, is the 'fundamental stuff' of the universe.
"it from bit” Every “it”— every particle, every field of force, even the space-time continuum itself derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has a bottom—a very deep bottom, in most instances, an immaterial source and explanation, that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment—evoked responses, in short all matter and all things physical are information-theoretic in origin and this is a participatory universe." – Princeton University physicist John Wheeler (1911–2008) (Wheeler, John A. (1990), “Information, physics, quantum: The search for links”, in W. Zurek, Complexity, Entropy, and the Physics of Information (Redwood City, California: Addison-Wesley)) "Now I am in the grip of a new vision, that Everything Is Information. The more I have pondered the mystery of the quantum and our strange ability to comprehend this world in which we live, the more I see possible fundamental roles for logic and information as the bedrock of physical theory." – J. A. Wheeler, K. Ford, Geons, Black Hole, & Quantum Foam: A Life in Physics New York W.W. Norton & Co, 1998, pp 64. Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum mechanics: http://www.metanexus.net/archive/ultimate_reality/zeilinger.pdf
bornagain77
WJM #73
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of its probability of being generated by unguided forces?
I see R0bb has already answered this but maybe it is worth being even more explicit. First, I assume you agree that a chance hypothesis is an example of an unguided force? Given that, see page 21 of the paper that R0bb quoted. Dembski gives the formula for the context-dependent specified complexity of T given H, where H is a chance hypothesis (note he gives no other way of measuring specified complexity, such as one that is context independent or without a given H). It is not possible to write the formula here because I don’t have the character set, but it contains the expression P(T|H). Would you like to give an example of how to measure CSI that does not include a chance hypothesis? SB #78: The fact that it may be extremely difficult to estimate P(T|H) in practice is a problem with the CSI method, not proof that CSI doesn't need such an estimate. markf
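(For readers without the paper to hand, the expression markf describes can be rendered roughly as follows. This is my reconstruction from Dembski's 2005 "Specification" paper and should be checked against the original, so take it as a sketch rather than an exact transcription:)

\chi = -\log_2 \left[ \, 10^{120} \cdot \varphi_S(T) \cdot P(T \mid H) \, \right]

Here \varphi_S(T) counts the specificational resources (the number of patterns at least as simple as the target T), P(T \mid H) is the probability of T under the chance hypothesis H, and 10^{120} is Dembski's bound on available replicational resources. Whatever one makes of the details, no value of \chi can be computed without supplying some chance hypothesis H.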
Absolutely. My intended but poorly stated point was that everybody already rejects any hypothetical explanations for biology that require astronomical luck. We don’t need to learn about specified complexity to be convinced of this position, although Dembski might argue that specified complexity explains why we take this position.
I get your point. I'll simply note that some people do try to claim that even if evolutionary explanations required astronomical luck, it doesn't mean it didn't happen.
I reiterate that if Dembski had thought that CSI excluded “chance” and “law” by definition, then it would have been pointless to make a lengthy argument to show this by other means. Something that’s true by definition does not need to be demonstrated by argument — you can simply restate the definition again!
All right triangles follow the Pythagorean theorem by definition. That is, the theorem follows somewhat non-obviously from the definition. You couldn't prove it by just restating the definition. In the same way, CSI follows Dembski's laws by definition, but not in a way that is established by repeating the definition.
The fact that Dembski calls this a law indicates that he thinks it is an empirical truth, not a definitional tautology.
Do a search for "Laws of Math" and you'll find that many mathematical results are stated as laws.
His understanding is that, in order to calculate CSI, we must choose as a chance hypothesis the actual cause of the event. When I asked how that works when the actual cause is design, he said:
Allow me to put it this way. Specified Complexity is defined based on both an object of interest and a chance hypothesis that produced it. You cannot compute any value of CSI independently of a chance hypothesis. When we say that an object exhibits specified complexity outright, that's shorthand for saying that the object exhibits specified complexity under all relevant chance hypotheses. (In my opinion, this was an unnecessary overloading of the terminology, but it's too late to change that now.) When we say that an intelligent agent produces CSI, we mean that they produce objects which were improbable under all the relevant chance hypotheses. Note that we do not attempt to calculate the probability under the agent hypothesis, because it's not a chance hypothesis. (A toy sketch of this decision rule appears after this comment.)
but nowhere in the post does he say what, in my judgment, needed to be said—namely, that Dembski’s design inference is NOT circular and that KeithS was wrong to claim otherwise. Indeed, he continued to emphasize the extent to which he agreed with KeithS
Clearly, I should have been more explicit. However, my last paragraph was attempting to state what you are saying there:
So Keith is right, arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn’t designed for the purpose of demonstrating the improbability of evolution. When used for its proper role, specified complexity is a valid, though limited argument.
Unfortunately, Dr. Dembski doesn’t post here anymore. Thus, I would appreciate it if Winston could comment on Kairosfocus’s FSCO/I and dFSCI. Do these terms figure in the work currently done at the Evolutionary Informatics Lab? Have they ever been used or at least discussed?
Since I've graduated, I'm no longer in as frequent contact with the EIL, so I can't really speak to their status to the same degree. I'll simply note that the acronyms have too many letters. A good acronym has at most three letters. More seriously, I haven't looked closely at these measures, so I won't comment on them.
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of its probability of being generated by unguided forces?
See my article http://www.evolutionnews.org/2013/04/information_pas071201.html, where I point to pages throughout Dembski's work in which he says that CSI requires the evaluation of probability under chance hypotheses. Winston Ewert
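To make the shorthand Ewert describes above concrete, here is a toy Python sketch. It is illustrative only: the hypotheses, probabilities, and threshold are hypothetical placeholders, not a real biological calculation. The point is simply that "exhibits CSI" is read as "specified, and improbable under every relevant chance hypothesis", while the design hypothesis itself is never assigned a probability:

import math

THRESHOLD_BITS = 500  # a commonly cited universal-probability-bound style threshold, in bits

def improbability_bits(p_T_given_H):
    # -log2 P(T|H): how improbable the target T is under chance hypothesis H
    return -math.log2(p_T_given_H)

def exhibits_csi(is_specified, p_under_each_chance_hyp):
    # Shorthand check: specified, AND beyond the threshold under ALL relevant chance hypotheses.
    # No probability is computed for the design (agent) hypothesis itself.
    if not is_specified:
        return False
    return all(improbability_bits(p) > THRESHOLD_BITS
               for p in p_under_each_chance_hyp.values())

# Hypothetical example: two chance hypotheses proposed for some configuration T.
chance_hyps = {"uniform chance": 1e-200, "proposed evolutionary pathway": 1e-30}
print(exhibits_csi(True, chance_hyps))  # False: not improbable enough under the second hypothesis

On this reading, establishing the improbability under each relevant chance hypothesis is the hard empirical work; the CSI bookkeeping only records the result.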
Robb
I recommend the paper that Dembski calls his “most up-to-date treatment of CSI”. Note the formulas that contain the factor P(T|H).
Let’s say that I am evaluating the probability that wind, air, and erosion produced a well-formed sand castle on the beach. Assume that the object contains two million grains of sand, the time value of your choice, or any other quantitative facts you need to make the determination. Take me through the process. Pay special attention to the chronology involved in determining the probability that unguided forces were responsible for the object and the amount of complex specified information that it contains. StephenB
WJM:
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of its probability of being generated by unguided forces?
I recommend the paper that Dembski calls his "most up-to-date treatment of CSI". Note the formulas that contain the factor P(T|H). R0bb
I note that in info theory, info is measured as the loss of uncertainty, on receiving a message, regarding the state of the source. KF kairosfocus
WJM, that is why it is important to log-reduce and see that you have an info-beyond-a-threshold metric. The threshold is where an evaluation comes in. The degree of specified, complex info comes before that. And I repeat, we can use empirical studies of the 4-state-per-node DNA chain or the 20-state (more or less; there are oddball cases) per-node AA chain in proteins. KF kairosfocus
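(As a rough illustration of the per-node capacity figures KF mentions, here is a toy Python calculation. The sequence lengths are hypothetical, and this computes raw carrying capacity only, not measured functional information, which would require the usage statistics KF refers to:)

import math

BITS_PER_BASE = math.log2(4)   # 2 bits per nucleotide (4-state node)
BITS_PER_AA = math.log2(20)    # about 4.32 bits per amino-acid position (20-state node)

THRESHOLD = 500  # functionally specific bits, the threshold figure discussed in these comments

for label, length, per_symbol in [("300-base gene (hypothetical)", 300, BITS_PER_BASE),
                                  ("100-residue protein (hypothetical)", 100, BITS_PER_AA)]:
    bits = length * per_symbol
    print(f"{label}: {bits:.0f} bits capacity; beyond {THRESHOLD}-bit threshold: {bits > THRESHOLD}")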
Box, probability and information metrics are related; they are in effect dual, through the negative log probability metric. As there has been so much back and forth, I will use a fairly technical case from a quite different field to make the point, via how Harry S Robertson in Statistical ThermoPhysics grounds the informational view of thermodynamics. Here I clip my always-linked note, from Robertson:
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
In short, information is linked to probability and vice versa. In the case that has been in prolonged debate because it has been a handy rhetorical club for ID objectors, it turns out that the information content of DNA or protein is fairly easy to obtain from state possibilities or from statistical studies of how such states are actually used, which will reflect the variability across life and time and thus also the underlying probabilities. We already know the result: it is immaterial which metric we use, life forms have FSCO/I well beyond the threshold where blind watchmaker mechanisms are remotely plausible. Therein lieth the rub, as that is an ideologically unacceptable result in many quarters. KF kairosfocus
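(The duality KF appeals to here is the standard Shannon relationship between probability and information. In the usual textbook notation:)

H(\{p_i\}) = -\sum_i p_i \log_2 p_i, \qquad I(T) = -\log_2 P(T \mid H)

so a lower probability of an outcome under a given chance hypothesis corresponds directly to a higher information (surprisal) value in bits, and vice versa.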
WE said:
No. In Dembski’s work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI.
Measuring the probability-distribution complexity capacity of a thing, along with how specified (and/or functional) it is, is not the same thing as determining whether or not a particular distributional pattern is plausibly achievable via unguided forces. You measure it first. You evaluate nature's capacity to produce it afterwards. Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of its probability of being generated by unguided forces? William J Murray
StephenB, For me the shocker is in post #32 and #51.
WJM: No. CSI isn’t an argument; it’s a measurement.
WE #32: No, it's both. The measurement is based on how specified and improbable an object is.
WJM: No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
WJM presents the common understanding of CSI at UD. It is in full accord with not being circular at all. We can observe an artifact and measure its CSI independent from probabilities. Kairosfocus wrote about CSI in a recent article:
KF: 'But you cannot quantify it, so there, it’s all pointless!' Another blatant denial of well known facts. Go open your PC up, and open up a folder. Set to view details. Notice the listed file sizes? Those are in bytes, eight bit clusters, of functionally specific, digitally coded complex informational files.
Is there a calculation of the improbability of evolution in the listed file sizes in my PC folders? Of course not. We have always understood that the measurement of the number of bits of CSI is distinct from a calculation of the improbability of evolution. Winston Ewert is very definitive however:
Winston Ewert: No. In Dembski’s work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI.
Now, I'm here waiting and hoping to see some arguments from both sides. Box
PPPS: I suggest that complexity of a relevant entity can be indicated by the length of the y/n string of structured q's to specify its state in the face of the possible clumped or scattered possibilities, i.e. the rough equivalent of what AutoCAD does in portraying a 3-d config of a system. That is, functionally specific bits. Configs that are only a few bits long can fairly easily be recaptured by chance. Those that are of 500 or 1000 or more, not so, per sparse search imposed by available atomic and temporal resources. Complexity of course here points to high contingency, and the next aspect is that specified complexity would bespeak organisation of components that fit an independent or detachable "simple" description, or observable component interaction dependent performance -- functionality. Text elements in this post can be in many many possible configs, but only a few will make for contextually responsive remarks in English. For simple and familiar instance. Likewise AA strings can be in many states, but relatively few will carry out the functions of given enzymes required for even a simple living cell, to relevant degrees of effectiveness. And so forth. kairosfocus
PPS: Where also it can be argued that the observed patterns of proteins and genes etc show what patterns of variation have been successfully explored across time in the world of life. That pattern, for proteins, the point of contact with the real world of organisms that have to live and reproduce, is that we have a dust of clusters of proteins in the space of possible AA string configs, such that we do have thousands of islands of function that are not reasonably bridgeable by either incremental stepping stones or Lego-brick like modularity that can build a whole from standard components. (Fold-function interactions involve the whole AA chain so that is unsurprising.) kairosfocus
PS: What are relevant chancy hyps? For OOL, they are that standard physical and chemical materials and forces with linked statistical thermodynamics are acting in Darwin's warm pond of salts or the like prebiotic environments. These point strongly to a preponderance of forces of breakdown being prevalent, and to forces that feed diffusion and the sort of random walk dispersions we can observe as Brownian motion. Where clay templates or the like will tend to be crystalline and/or random, not fortuitously functionally specific. With of course the point that self replication on in effect a von Neumann kinematic, code using self replicator is part of what needs to be adequately causally explained. Language, of course is normally indicative of intelligence and purposeful thought. For existing unicellular life, we are looking at engines of non-foresighted hereditable variation such as various triggers of various mutations. Again, overwhelmingly adverse. Where also, while the tendency is to speak of "natural selection" in fact differential reproductive success and resulting culling out of less fit or fortunate varieties SUBTRACTS hereditable information. It is to the sources of chance variation we must turn to find the much discussed incremental creation of novel body plans etc. But, actual empirical evidence of the creative and inventive power of such chancy processes to account for 10 - 100+ mn new bases in the genome dozens of times over remains strangely absent, whilst it is abundantly easy to see that functionally constrained specified complexity is just that -- highly constrained to what Dembski once termed islands of function, in the space of possible configs W. We are back to T in W or rather clusters of Ts in W that are isolated and face sparse blind search. And blind searches for golden searches on W that land us in convenient reach of T's, face the fact that a search or sample is a subset of W. So the search for a golden blind search is looking at the higher order space 2^W. for 500 bits W = 3.27*10^150, 2^W is calculator smoking territory. kairosfocus
WE: Perhaps, more significant is to look at the information dual to the probability metric, given that the actual context is log(p(T|H)). That is, low probabilities on blind chance [and linked mechanical necessities] are correlated with high informational complexity. With, the other two terms so transformed to informational form by the log operator and summation/subtraction, marking a threshold of relevant informational complexity where something beyond that threshold and which is in a zone T that is deeply isolated in the space of possibilities is not plausible to be attained on reasonable chance hyps that are said to be relevant. Where, zone T is independently describable or specifiable; i.e. it marks a case of specified complexity. Thus, Chi is a beyond threshold of specified complex information metric. Further, the relevant configuration-sensitive separately and "simply" describable specification is often that of carrying out a relevant function on correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to source or better object code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3) to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. Where, too, function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010 goes beyond mere Lego brick stacking of interchangeable modules. That is there are interactions all along the folded string of AA residues. Where further, practical thresholds for Sol system resources and/or the observed cosmos as a whole, on sparse sampling to space of possibilities rations, run to 500 - 1,000 functionally specific bits. The latter being the square of the former and in effect forcing the full set of possible Planck-time states of 10^80 atoms across a reasonable cosmological thermodynamic lifespan of 10^25 s, to be 1 in 10^150 of the possibilities W. This addresses a point in NFL where Dembski says in effect 1 in 10^150 of possibilities. So, a heuristic metric linked to Dembski's 2005 expression would be to seek functionally specific complex info in cells and other biologically relevant context that credibly traces to the deep past of origins and has in it at least 500 - 1,000 bits of such specified complexity. As a simple example even if one sets 100 AAs as a reasonable length for early simple organisms, with only 100 proteins and takes overly generous to abiogenesis estimates of 1 bit of info per AA, we have 10,000 bits of functionally specific complex information in the suite of proteins to make such an over simplified first cell work its metabolism etc, which is an order of magnitude of bits beyond the threshold. On needle in haystack blind search and on the vera causa principle (observing that cases of functional specified complexity routinely and on observation are only known to come from intelligently directed configuration), we are epistemically, inductively warranted to infer that the best explanation of the living cell is design. Which, puts design at the root of the tree of life, raising the point that the root grounds and supports the shoots, branches and twigs. So, we have excellent reason to hold that design pervades the world of life. KF kairosfocus
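(A quick Python check on the round figures used in the comment above; all values are the comment's order-of-magnitude estimates, not measurements:)

atoms = 10 ** 80                    # rough count of atoms in the observable cosmos
planck_ticks_per_second = 10 ** 43  # order-of-magnitude inverse Planck time
seconds = 10 ** 25                  # the thermodynamic-lifespan figure used above
events = atoms * planck_ticks_per_second * seconds  # ~1e148 atomic-scale states available

for bits in (500, 1000):
    W = 2 ** bits  # size of the configuration space for this many bits
    print(f"{bits} bits: W = {float(W):.2e}, fraction samplable <= {events / W:.1e}")
# 500 bits:  W = 3.27e+150, fraction samplable <= 3.1e-03
# 1000 bits: W = 1.07e+301, fraction samplable <= 9.3e-154

That is, at 1,000 bits the available events amount to at most about 1 part in 10^153 of the possibilities, comfortably past the 1-in-10^150 figure cited above.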
Andre @61:
Mapou @4 You are on the money! Why do you think Keith S is ignoring me? Because I demanded that from him. Well spotted.
I'm glad we think alike. In my opinion, the gene repair argument fully refutes Darwinian evolution. The whole theory is nothing but a joke, a pathetic attempt by an unscrupulous elite who are hellbent on imposing a stupid state religion on the people. Mapou
P.S. To be even more accurate, the above should be: For H=”natural selection” and T=”success”: The argument of specified complexity was never intended to show that P(T|H) <<<< 1. It was intended to show that if P(T|H) <<<< 1, then P(H|T) < 1/2. R0bb
keith s:
In your linked article, you said:
The argument of specified complexity was never intended to show that natural selection has a low probability of success. It was intended to show that if natural selection has a low probability of success, then it cannot be the explanation for life as we know it.
That’s not correct, and the quote I gave above shows this:
I agree with Winston here. To restate his claim: For H="natural selection" and T="success": The argument of specified complexity was never intended to show that P(T|H) << 1. It was intended to show that if P(T|H) << 1, then P(H|T) << 1. R0bb
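(For readers wondering why the step from a small P(T|H) to a small P(H|T) is not automatic, and why Dembski's attempt to make that step without Bayesian machinery is debated in this thread, the standard Bayesian relationship is:)

P(H \mid T) = \frac{P(T \mid H) \, P(H)}{P(T \mid H) \, P(H) + P(T \mid \lnot H) \, P(\lnot H)}

A tiny likelihood P(T|H) pushes P(H|T) down only in proportion to how well the alternative hypotheses account for T and how the priors are set, which is exactly the gap the specified-complexity argument is meant to bridge without going through Bayes.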
Unfortunately, Dr. Dembski doesn't post here anymore. Thus, I would appreciate it if Winston could comment on Kairosfocus's FSCO/I and dFSCI. Do these terms figure in the work currently done at the Evolutionary Informatics Lab? Have they ever been used or at least discussed? sparc
KeithS writes,
Box, You made the mistake of trusting StephenB.
Box, unlike KeithS, I wouldn't dare insinuate that you don't have the ability to read the OP for yourself and that you need to trust me to get the facts. What an insult. I have found that you are far more thoughtful than that. As everyone knows, KeithS has been arguing for weeks that Dembski's design inference is "hopelessly circular," and that CSI is mere "window dressing." So when Winston Ewert broached the subject and began his essay with these words,
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
I interpreted that to mean that he did, sort of, agree with the substance of KeithS' perennial rant. To be sure, Winston qualified his remarks by making some important clarifications about his thoughts on why Dembski developed the notion of specified complexity, but nowhere in the post does he say what, in my judgment, needed to be said: namely, that Dembski's design inference is NOT circular and that KeithS was wrong to claim otherwise. Indeed, he continued to emphasize the extent to which he agreed with KeithS. That is certainly the way KeithS and all the anti-ID partisans interpreted it, as is clear from their celebration. Predictably, KeithS began his premature victory dance by saying,
With the circularity issue out of the way, I’d like to draw attention to the other flaws of Dembski’s CSI.
Clearly, this is a continued attack on Dembski's argument, as opposed to the way others interpret Dembski's argument, and clearly KeithS is implicating Winston in his mistake. So Winston makes the following request,
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
As expected, KeithS threw it back in his face:
Winston, The problem is that Dembski does (or at least did) take the presence of CSI as a non-tautological indication that something could not have evolved or been produced by “chance” or “necessity”.
There you go. So, Box, I will let you decide who is trying to pull whose leg. StephenB
Winston:
I don’t see anything in the linked post that suggests a belief that designed CSI is incoherent.
Sorry for the confusion -- the linked comment is where HeKS says that you agree with him. His understanding is that, in order to calculate CSI, we must choose as a chance hypothesis the actual cause of the event. When I asked how that works when the actual cause is design, he said:
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent.
R0bb
Mapou @4 You are on the money! Why do you think Keith S is ignoring me? Because I demanded that from him. Well spotted. Andre
"everybody already rejects any hypothetical explanations for biology that require astronomical luck." MMM, no they don't,,, HISTORY OF EVOLUTIONARY THEORY - WISTAR DESTROYS EVOLUTION Excerpt: A number of mathematicians, familiar with the biological problems, spoke at that 1966 Wistar Institute,, For example, Murray Eden showed that it would be impossible for even a single ordered pair of genes to be produced by DNA mutations in the bacteria, E. coli,—with 5 billion years in which to produce it! His estimate was based on 5 trillion tons of the bacteria covering the planet to a depth of nearly an inch during that 5 billion years. He then explained that the genes of E. coli contain over a trillion (10^12) bits of data. That is the number 10 followed by 12 zeros. *Eden then showed the mathematical impossibility of protein forming by chance. http://www.pathlights.com/ce_encyclopedia/Encyclopedia/20hist12.htm A review of The Edge of Evolution: The Search for the Limits of Darwinism The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-michael-behe-edge-of-evolution Waiting Longer for Two Mutations – Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461 Don't Mess With ID by Paul Giem (Durrett and Schmidt paper)- video https://www.youtube.com/watch?v=5JeYJ29-I7o "The immediate, most important implication is that complexes with more than two different binding sites-ones that require three or more proteins-are beyond the edge of evolution, past what is biologically reasonable to expect Darwinian evolution to have accomplished in all of life in all of the billion-year history of the world. The reasoning is straightforward. The odds of getting two independent things right are the multiple of the odds of getting each right by itself. So, other things being equal, the likelihood of developing two binding sites in a protein complex would be the square of the probability for getting one: a double CCC, 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the world in the last 4 billion years, so the odds are against a single event of this variety in the history of life. It is biologically unreasonable." 
- Michael Behe - The Edge of Evolution - page 146 Swine Flu, Viruses, and the Edge of Evolution - Casey Luskin - 2009 Excerpt: “Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell–both ones we’ve discovered so far and ones we haven’t–at best extremely limited benefit, since no such process was able to do much of anything. It’s critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing–neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered–was of much use.” Michael Behe, The Edge of Evolution, pg. 162 http://www.evolutionnews.org/2009/05/swine_flu_viruses_and_the_edge020071.html An Open Letter to Kenneth Miller and PZ Myers - Michael Behe July 21, 2014 Dear Professors Miller and Myers, Talk is cheap. Let's see your numbers. In your recent post on and earlier reviews of my book The Edge of Evolution you toss out a lot of words, but no calculations. You downplay FRS Nicholas White's straightforward estimate that -- considering the number of cells per malaria patient (a trillion), times the number of ill people over the years (billions), divided by the number of independent events (fewer than ten) -- the development of chloroquine-resistance in malaria is an event of probability about 1 in 10^20 malaria-cell replications. Okay, if you don't like that, what's your estimate? Let's see your numbers.,,, ,,, If you folks think that direct, parsimonious, rather obvious route to 1 in 10^20 isn't reasonable, go ahead, calculate a different one, then tell us how much it matters, quantitatively. Posit whatever favorable or neutral mutations you want. Just make sure they're consistent with the evidence in the literature (especially the rarity of resistance, the total number of cells available, and the demonstration by Summers et al. that a minimum of two specific mutations in PfCRT is needed for chloroquine transport). Tell us about the effects of other genes, or population structures, if you think they matter much, or let us know if you disagree for some reason with a reported literature result. Or, Ken, tell us how that ARMD phenotype you like to mention affects the math. Just make sure it all works out to around 1 in 10^20, or let us know why not. Everyone is looking forward to seeing your calculations. Please keep the rhetoric to a minimum. With all best wishes (especially to Professor Myers for a speedy recovery), Mike Behe http://www.evolutionnews.org/2014/07/show_me_the_num088041.html podcast - Michael Behe: Vindication for 'The Edge of Evolution,' Pt. 2 http://intelligentdesign.podomatic.com/entry/2014-08-06T15_26_19-07_00 "The Edge of Evolution" Strikes Again 8-2-2014 by Paul Giem - video https://www.youtube.com/watch?v=HnO-xa3nBE4 When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion, trillion, years or more—to accomplish the seemingly subtle change in enzyme function that we studied. per biologic institute Is There Enough Time For Humans to have Evolved from Apes? Dr. Ann Gauger Answers - video http://www.youtube.com/watch?v=KN7NwKYUXOs bornagain77
Winston, I reiterate that if Dembski had thought that CSI excluded "chance" and "law" by definition, then it would have been pointless to make a lengthy argument to show this by other means. Something that's true by definition does not need to be demonstrated by argument -- you can simply restate the definition again! Further, the Law of Conservation of Information makes no sense if CSI excludes "chance" and "law" by definition:
Natural causes are incapable of generating CSI. I call this result the law of conservation of information, or LCI for short... Intelligent Design, p. 170
The fact that Dembski calls this a law indicates that he thinks it is an empirical truth, not a definitional tautology. keith s
Box, You made the mistake of trusting StephenB. Stephen misquoted Winston as saying:
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
What Winston actually wrote was this:
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
Winston doesn't think that Dembski's argument, as Dembski intended it, is circular, but he does think that other people have misconstrued Dembski's argument, and that the altered argument is circular. keith s
Winston:
But to reject modern evolution theory we need P(H|T).
Absolutely. My intended but poorly stated point was that everybody already rejects any hypothetical explanations for biology that require astronomical luck. We don't need to learn about specified complexity to be convinced of this position, although Dembski might argue that specified complexity explains why we take this position. R0bb
Winston Ewert: Dembski’s design argument is a circular argument.
Is this meant to be a general statement? IOW does the 'circularity' also apply to e.g. Dembski's 'conservation of information'? Or does circularity only apply to Dembski's (second version ?) specified complexity argument? Please clarify. Box
Winston, As a protégé of Dembski, I'm surprised that you're not more familiar with his writings. In your linked article, you said:
The argument of specified complexity was never intended to show that natural selection has a low probability of success. It was intended to show that if natural selection has a low probability of success, then it cannot be the explanation for life as we know it.
That's not correct, and the quote I gave above shows this:
We can summarize our findings to this point: (1) Chance generates contingency, but not complex specified information. (2) Laws (i.e., Eigen’s algorithms and natural laws, or what in section 6.2 we called functions) generate neither contingency nor information, much less complex specified information. (3) Laws at best transmit already present information or else lose it. Given these findings, it seems intuitively obvious that no chance-law combination is going to generate information either. After all, laws can only transmit the CSI they are given, and whatever chance gives to a law is not CSI. Ergo, chance and laws working in tandem cannot generate information. This intuition is exactly right, and I will provide a theoretical justification for it shortly. Nevertheless the sense that laws can sift chance and thereby generate CSI is deep-seated in the scientific community. Intelligent Design, p. 167
keith s
Winston Ewert
Keith S is right. Sort of. Dembski’s design argument is a circular argument.
And later
Dembski’s original argument isn’t circular.
Is KeithS right to say that Dembski’s argument is circular, or is KeithS wrong to say that Dembski’s argument is circular? You appear to have quietly changed your mind. Does the word “original” mean something in this context? Winston to KeithS
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
Have you forgotten that KeithS was not referring to others' interpretations of specified complexity? He laid the charge of circularity on Dembski and Dembski's argument directly--and you agreed? If you are now saying that KeithS is wrong and Dembski’s argument is not circular, you are saying it too quietly. If you want to be heard now, you will have to roar. From a reader: “Can you show the original (Dembski’s) version as well? I cannot find.”
I’m not sure what you are looking for here. This is the argument as originally developed by Dembski.
I think what the poor reader is asking is why you introduced the word “original.” Come to think of it, so am I. StephenB
Winston Ewert:
The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.
As I see it, the reason the Darwinian account of "evolution" is vastly improbable is a logical construct made of generalizations that hide everything pertaining to intelligence (and other things) in what can be described as a black box of natural selection. Instead of answers to the question of how intelligence works throughout biology, words like "altruism" are invented that really only look smart while not answering anything. The way around the very serious and misleading weaknesses of Darwinian theory is through theory premised on "intelligent cause", which does not even need a crutch word like "evolved" to explain the origin/development of intelligent living things. It is then possible to make reliable predictions in regards to all that is or is not "intelligent". Going on theory for something else is only a good way to get wrong answers that look right to those who nonetheless find the conclusions useful, for reasons that do not fully pertain to science. Gary S. Gaulin
Good work, Winston Ewert :) You demonstrated that ID proponents at UD (significantly BA and KF, not to mention the lesser lights) do not understand ID. This is most welcome, as it sets some of the record straight, even though it profoundly undermines the ID cause. Brave move :) E.Seigner
No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
No. In Dembski's work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI. Winston Ewert
Can you show the original (Dembski’s) version as well? I cannot find.
I'm not sure what you are looking for here. This is the argument as originally developed by Dembski.
For example, in Chapter 6 of his book Intelligent Design, Dembski devoted pages to arguing that “law” and “chance” cannot produce CSI.
See my previous post: http://www.evolutionnews.org/2013/04/information_pas071201.html. Dembski has always required the calculation of probabilities according to relevant chance hypotheses. What Dembski is arguing in that chapter is that the impossibility of chance or necessity producing CSI derives directly from the definition of CSI. That is, natural processes cannot produce CSI by definition. That's true now and it was true then.
Dembski’s specified improbability concept, as laid out in The Design Inference, was an attempt to justifiably infer low P(H|T) from low P(T|H) without going through Bayes. I would argue that this can’t be done (which is another discussion), but I would also argue that Dembski’s attempt is not circular.
That's very true and insightful. I agree, except for the part about Bayes.
I think that this renders specified complexity somewhat superfluous in terms of providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. There would be no need to invoke specified complexity in order to get everyone on board.
If we could show that there are no gradual pathways to biological structures, we have shown that P(T|H) is very small. But to reject modern evolution theory we need P(H|T). As you noted, CSI seeks to get P(H|T) from P(T|H). So we do need to invoke specified complexity, or something playing a similar role, to complete the ID argument.
And speaking of the CSI mess, HeKS is under the impression that you agree with his interpretation of CSI. One entailment of this interpretation is that designers cannot create CSI as Dembski defines it, as the very idea of designed CSI is incoherent. You may want to clear that up.
I don't see anything in the linked post that suggests a belief that designed CSI is incoherent. Winston Ewert
WJM: Precisely, the explanatory filter process is an observationally anchored evaluatory process, where specified complexity, in cases of relevance, is functionality-based, and that is observable. Steps to create metric models build on that observability, but they do not erase it such that CSI becomes utterly unrelated to the FSCO/I that we observe. Instead the relationship is that FSCO/I has dFSCI as a subset (one that is often not irreducibly complex). Likewise, irreducibly complex things do manifest FSCO/I. And by abstracting out the type of specification you create a superset, CSI. CSI has been useful for some forms of analysis, but precisely because of its abstractions it is open to all sorts of debate points in a situation where selective hyperskepticism is common. By anchoring down to empirical observables imposed by requisites of functional specificity based on interactive parts that collectively achieve functionality, many of those debate points are readily seen as hollow, flawed, question-begging or even strawman tactics. KF kairosfocus
5th: Biosystems do not need to be irreducibly complex for FSCO/I to be a relevant criterion. There are many cases of multipart systems with a degree of redundancy that have no core set of parts that the removal of any one of these destroys relevant function; irreducible complexity is a rather strict claim and criterion. At crude, simplistic level that's why we have two lungs and two kidneys as well as two arms. Similar things happen in cells. In the old Apollo rocket systems, there were typically five ways for vital functions to be carried out by design. As I pointed out to WE and as he accepted, in tech systems such as the dFSCI in error correcting codes there are no kill-points where single point failures are catastrophic. But at the same time the systems in question -- bio or technological -- exhibit such a degree of complex, functionally specific interactive organisation to achieve function that such is not plausibly a result of blind chance and mechanical necessity, but of intelligently directed configuration. KF kairosfocus
keith said:
Those statements make no sense in light of today’s version of CSI, which rules out natural and algorithmic explanations by definition.
Natural and algorithmic explanations are not ruled out by definition. They are ruled out by evaluation. William J Murray
WE: The analytical metric model WmAD proposed is not itself observable, being an analytical construct. Similarly, in generalising from functionally linked informational organisation to a more abstract general specification, that moves away from observables to an analytical term. However so soon as one deals with a description detachable from but designating a zone T in W for a real world case, one is back at criteria of choice that are at minimum in principle observable so that one may decide in/out with reasonable reliability. And, in much the same NFL context of pp 144 and 148, WmAD pointed out that the form of specification relevant to life is functional; which is a highly observable phenomenon -- does the AA string fold, agglomerate and function in that enzyme, or not and does the enzyme have x-level of activity? Functional specificity pivoting on information rich organisation and linked interacting parts in living forms becomes highly relevant. And that, is highly observable and amenable to metric models that are linked to what WmAD did. Orgel and Wicken spoke to the context of the world of life and to recognisable phenomena, though they did not at that time essay on quantification. Quantification, analysis and abstractions have their use but that use does not imply a barrier that erects a middle wall of partition such that the models and quantities are essentially unrelated to the context of the world of life. Where, you should be aware that some objectors have tried to deny or dismiss that there is such a thing as real specified complexity, or that relevant specification can be functional, or that requisites of multi-part interactive function based on correctly arranged and coupled parts has the effect of requiring that configs and parts be specific to zones Z of a much wider configuration space of clumped or scattered parts that will not function. Which in turn puts us in the situation of sparse blind chance and necessity driven needle in haystack search that is predictably fruitless on available atomic and temporal resources. Where by contrast, intelligently directed configuration aka design, routinely creates such functionally specific, complex organisation and associated information, through knowledge, purpose and creative skill as is quite easily seen in a technological world, or even just reflecting on what is going on on text strings in posts in this thread. It is in that context, that on observing high contingency on similar initial conditions, we may infer that a particular aspect of a phenomenon or object is not reasonably explained by mechanical, lawlike necessity rooted in the forces and materials of nature. Though other aspects obviously must reflect such physical or chemical etc necessities. The objects we deal with are composed of atomic matter. High contingency under similar initial conditions has two main known causes: chance factors, and intelligently directed configuration. Where, chance in some form is default; absent the manifestation of functionally specific complex organisation and associated information that bring to bear the sort of sparse needle in haystack search challenge to blind chance and necessity search or sampling that makes such an explanation maximally implausible. Where, such FSCO/I is routinely created by design so that per vera causa it is an empirically reliable sign of it. 
That is, we see how FSCO/I as an empirically evident manifestation of specified complexity, is a sign of design and plays a role in the design inference explanatory process. Going beyond, the search for search challenge and the concept of active injected information allow us to see as well how such information can reduce the odds against finding relevant zones exhibiting FSCO/I or the like forms of specified complexity as Marks and Dembski have explored in some fairly recent work. (That is, active information in effect steers results to zones of interest in various ways overcoming the sparse search challenge. Such active information is a manifestation of intelligently directed configuration.) KF kairosfocus
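To make KF's reference concrete: as I understand the Marks and Dembski definition, active information is the log-ratio of an assisted search's probability of success to that of blind (uniform) search. A minimal Python sketch, with invented numbers purely for illustration:

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    """Active information in bits: log2(q/p), the advantage an assisted
    search has over blind uniform search at hitting the target zone."""
    return math.log2(q_assisted / p_blind)

# Invented figures: a target occupying 1 in 2^60 of the space, found by an
# assisted search about 1 time in 2^10.
p = 2.0 ** -60
q = 2.0 ** -10
print(active_information(p, q))  # 50.0 bits contributed by the assisting information
```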
Winston, And speaking of the CSI mess, HeKS is under the impression that you agree with his interpretation of CSI. One entailment of this interpretation is that designers cannot create CSI as Dembski defines it, as the very idea of designed CSI is incoherent. You may want to clear that up. R0bb
Winston:
You can certainly have a notion of specified complexity that is observable, like Orgel and Wicken did. But care must be taken not to conflate it with Dembski’s conception.
Thank you. Although you should be warned about how the moderator of this board has responded to someone else who pointed out that Orgel and Dembski are talking about two different concepts. Barry:
Mathgrrl, I will tell you what is ridiculous: Your attempt to convince people that Orgel and Dembski are talking about two different concepts, when that is plainly false. Like the Wizard of Oz you can tell people “don’t look behind that curtain” until you are blue in the face. But I’ve looked behind your curtain, and there is nothing there but a blustering old man. I will not retract an obviously true statement no matter how much you huff. You’ve been found out. Deal with it.
But it's also worth noting that in a follow-up thread, Barry scoffed when I told him that Dembski's examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and narrowband signals. And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis. So the board moderator, who has been "studying the origins issue for 22 years", doesn't understand what Dembski means by CSI. Which means that if you want to clean up the CSI mess, you have an uphill battle ahead of you. R0bb
Robb said, I think that this renders specified complexity somewhat superfluous in terms of providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. I say, I agree. However, I think that the concept of CSI comes in handy as a general marker of a set of noncomputable functions, including IC but also things like cosmological fine-tuning. Hope that makes sense. peace fifthmonarchyman
Winston:
But it's a combination of irreducible complexity and specified complexity to produce a whole argument for intelligent design.
Indeed that seems to be Dembski's argument, although I don't know of anyplace that he has said so as straightforwardly as you have. I think that this renders specified complexity somewhat superfluous in terms of providing traction for ID. If it could be demonstrated that biological structures are in fact irreducibly complex, in the sense that there are no gradual evolutionary pathways to them, then everyone would concede that modern evolutionary theory is flat-out wrong. There would be no need to invoke specified complexity in order to get everyone on board. R0bb
Thank you, Dr. Ewert. I think the confusion here arises from the conflation of P(T|H) with P(H|T). keith_s summarizes thusly:
To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed -- that is, that it could not have been produced by unguided evolution or any other unintelligent process.
But the phrase "could not have been produced by unguided evolution or any other unintelligent process" is ambiguous. It may mean "Given unintelligent processes, the event is very unlikely to occur", i.e. P(T|H) << 1. Or it may be interpreted as "Given that the event occurred, it's very unlikely that an unintelligent process was the cause", i.e. P(H|T) << 1. This happens all the time when talking about probabilities -- it's one of the hazards of using informal language. Dembski's specified improbability concept, as laid out in The Design Inference, was an attempt to justifiably infer low P(H|T) from low P(T|H) without going through Bayes. I would argue that this can't be done (which is another discussion), but I would also argue that Dembski's attempt is not circular. R0bb
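R0bb's P(T|H) versus P(H|T) point is easy to illustrate numerically. A minimal Python sketch of Bayes' theorem, with invented numbers, showing that a tiny P(T|H) need not make P(H|T) tiny, since the posterior also depends on the prior and on how likely T is under the alternatives:

```python
# Invented numbers, only to separate P(T|H) from P(H|T).
p_T_given_H = 1e-9       # T is very improbable under hypothesis H...
p_T_given_notH = 1e-12   # ...but even more improbable under the alternative
prior_H = 0.5            # prior probability of H

p_T = p_T_given_H * prior_H + p_T_given_notH * (1 - prior_H)
p_H_given_T = p_T_given_H * prior_H / p_T   # Bayes' theorem
print(p_H_given_T)  # ~0.999: H remains the likeliest explanation even though P(T|H) << 1
```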
Even though this discussion is dialed in on CSI, improbabilities, bacterial flagella, etc., I can't help but pause to think about things in a broader context. After all, life with its "development" has a long history of step-by-step, and apparently coordinated multiple-step, progressions originally emerging from the menu of chemicals available, and within the context of the environmental conditions that were and have been present. Scientific experiments that try to duplicate what might have led to certain phases of life's development at its earliest stages seem to underscore the need for intelligent manipulation to overcome "probability" barriers (frustration because of normal chemical responses). Chemical laws at that stage do not seem to be sufficient to produce the reactions required for sympathetic results. In fact, chemical laws there, left to themselves, have demonstrated the propensity to destroy any meaningful progression. Science does have the ability to observe in real time how chemicals respond under certain conditions. It seems to me this might be an area in the study of the progression towards "living" chemistry where probability calculations might have particular objective significance. "Living" chemical systems had to go through all phases of progression and development to arrive at where they are today. Sorry, no short-cuts. I don't want to divert the discussion, but just to express a thought. Maybe for another discussion. bpragmatic
Against my better judgement: KeithS said, "Those statements make no sense in light of today's version of CSI, which rules out natural and algorithmic explanations by definition." I say, of course they do, if we see them as part of an extended effort to define CSI. For example, I might have to spend a lot of ink explaining that "The temperature in Cleveland is 21 degrees" is not a self-evident truth as part of my explanation of what self-evident truths are. That is how explanations work. Peace fifthmonarchyman
WOW! Someone from UD openly admitting that an opponent is right (sort of)? Never thought I'd see the day. It's a small step for Winston, a huge leap for UD. Congrats ladies! AVS
keiths:
With the circularity issue out of the way, I’d like to draw attention to the other flaws of Dembski’s CSI.
Winston Ewert:
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
Winston, The problem is that Dembski does (or at least did) take the presence of CSI as a non-tautological indication that something could not have evolved or been produced by "chance" or "necessity". For example, in Chapter 6 of his book Intelligent Design, Dembski devoted pages to arguing that "law" and "chance" cannot produce CSI. Here is an excerpt:
We can summarize our findings to this point: (1) Chance generates contingency, but not complex specified information. (2) Laws (i.e., Eigen's algorithms and natural laws, or what in section 6.2 we called functions) generate neither contingency nor information, much less complex specified information. (3) Laws at best transmit already present information or else lose it. Given these findings, it seems intuitively obvious that no chance-law combination is going to generate information either. After all, laws can only transmit the CSI they are given, and whatever chance gives to a law is not CSI. Ergo, chance and laws working in tandem cannot generate information. This intuition is exactly right, and I will provide a theoretical justification for it shortly. Nevertheless the sense that laws can sift chance and thereby generate CSI is deep-seated in the scientific community.
And also:
It is CSI that for Manfred Eigen constitutes the great mystery of life's origin, and one he hopes eventually to unravel in terms of algorithms and natural laws.
Those statements make no sense in light of today's version of CSI, which rules out natural and algorithmic explanations by definition. Dembski clearly thought, back then at least, that CSI could be an indicator that something had not evolved. keith s
Winston said, What do you mean by "highly complex"? I say, I would very tentatively say infinite Kolmogorov complexity or zero entropy, which I believe are equivalent values. fifthmonarchyman
No, it's both. The measurement is based on how specified and improbable an object is.
No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
I mean that the argument of specified complexity doesn’t establish that evolution is improbable, instead it assumes that we have established that in some other way.
No, it doesn't. The reason you measure the CSI is because you suspect that design is necessary. We do not know that the flagellum was designed; we suspect design was necessary, so we measure the CSI and we look for known, natural explanations that otherwise acquire the target.
You even call it an argument at the end of your post.
Pardon my mistake. It's not an argument. William J Murray
My $.02 I think it would be a great advance to ID, if ID proponents would get it clear in their heads what the I in CSI is. Upright BiPed
Winston, You have provided your version on the CSI argument.
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
Can you show the original (Dembski's) version as well? I cannot find it. Box
No. CSI isn’t an argument; it’s a measurement.
No, it's both. The measurement is based on how specified and improbable an object is. You even call it an argument at the end of your post. The argument shows that high CSI events don't happen naturally.
CSI doesn’t assume evolution is highly improbable; it makes no assumption about evolution whatsoever.
When I say assumption, I don't mean that we assume that evolution is highly improbable without proof. I mean that the argument of specified complexity doesn't establish that evolution is improbable, instead it assumes that we have established that in some other way.
The CSI argument determines that life as a result of natural forces is highly improbable.
How does it do that? The CSI argument asks you to calculate probabilities, it doesn't tell you how to calculate those probabilities. Winston Ewert
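For readers who want to see what "calculate" amounts to here: in Dembski's 2005 paper "Specification: The Pattern That Signifies Intelligence" the quantity is, roughly, chi = -log2[10^120 * phi_S(T) * P(T|H)], with chi > 1 taken as the design threshold. A minimal Python sketch of that bookkeeping, worked in log space; the inputs below are placeholders, not actual biological estimates:

```python
import math

def specified_complexity_bits(log2_p_T_given_H: float, phi_S: float) -> float:
    """chi = -log2(10^120 * phi_S(T) * P(T|H)), computed in log space so that
    very small P(T|H) values do not underflow. log2_p_T_given_H is log2 of P(T|H)."""
    return -(120 * math.log2(10) + math.log2(phi_S) + log2_p_T_given_H)

# Placeholder inputs: P(T|H) = 2^-500 and phi_S(T) = 10^20.
print(specified_complexity_bits(-500.0, 1e20))  # roughly 35 bits, above the chi > 1 threshold
```

Note that the formula consumes P(T|H); it does not tell you how to obtain it, which is Winston's point.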
Winston said What do you mean by “highly complex.”? I say, Check out the discussion in gpuccio's thread. Measuring complexity and establishing an objective standard is where the action is IMHO Peace fifthmonarchyman
Premise 1) the bacterial flagellum is specified and highly complex.
What do you mean by "highly complex"?
WE: Pardon, a note. The point on multipart interaction to achieve a specific function is not necessarily an appeal to irreducible complexity.
Fair enough.
PS: Specified complexity is first an observable phenomenon (as noticed by Orgel and Wicken etc) that becomes puzzling as it seems intuitively unlikely to result from blind watchmaker type mechanisms.
The version of specified complexity developed by Dembski isn't an observable phenomenon. See http://www.metanexus.net/essay/explaining-specified-complexity. You can certainly have a notion of specified complexity that is observable, like Orgel and Wicken did. But care must be taken not to conflate it with Dembski's conception. Winston Ewert
I characterize the argument of specified complexity as starting by assuming that evolution is highly improbable.
No. CSI isn't an argument; it's a measurement. CSI doesn't assume evolution is highly improbable; it makes no assumption about evolution whatsoever. CSI is posited as a measurement of an objective value. The value of CSI found in an artifact can, in theory, be determined to either be within the range of known natural forces to plausibly generate, or outside of that range.
Specified complexity assumes that we already know that life is highly improbable.
No, it doesn't. The CSI measurement doesn't assume that life is highly improbable. The CSI argument determines that life as a result of natural forces is highly improbable. William J Murray
I really must stop asking Kairosfocus to explain things on a level that relates to people with a modest capacity of understanding such as myself. Let post #13 be the last request of many. Box
PS: Specified complexity is first an observable phenomenon (as noticed by Orgel and Wicken etc) that becomes puzzling as it seems intuitively unlikely to result from blind watchmaker type mechanisms. On comparing alternative explanations, we see that BW mechanisms face a sparse needle in haystack search that easily swamps solar system or observable cosmos resources. Intelligently directed configuration faces no such limiting challenge, and knowledgeable designers routinely generate things with FSCO/I. Invention may be a challenge but that is a different issue. FSCO/I is then a reasonable and reliable sign of design as cause per trillions of cases in point. kairosfocus
Winston said,
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
I say, I would phrase the argument like this:
Premise 1) The bacterial flagellum is specified and highly complex.
Premise 2) Algorithms (cannot / have not been demonstrated to) produce highly specified, complex things.
Conclusion) The bacterial flagellum did not arise through an algorithmic process like evolution.
peace fifthmonarchyman
WE: Pardon, a note. The point on multipart interaction to achieve a specific function is not necessarily an appeal to irreducible complexity. That is possible in some cases where there is a core set of parts that are all so necessary to function that loss of any one destroys performance. But in other contexts, with redundancies, that is not so. A good example is an error-correcting code; let us use triple repetition for concreteness and simplicity: message as sent = [m1 m2 m3 . . . mn] x 3. By a voting algorithm, received bit values are ascertained. Under that algorithm, an error in any one bit, say mi, on any one of its three appearances cannot corrupt the overall message, but errors in two of the appearances can, if they change the vote. Just as a simple example. KF kairosfocus
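A minimal Python sketch of the triple-repetition voting scheme KF describes (illustrative only; real error-correcting codes are more elaborate): each bit is sent three times and the receiver takes a majority vote, so a single corrupted copy of a bit is recovered, while two corrupted copies flip the vote.

```python
def encode(bits):
    """Triple-repetition encode: transmit each message bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each group of three received copies."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
sent = encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[3] = 1                 # corrupt one copy of the second bit
print(decode(sent) == msg)  # True: one bad copy is outvoted
sent[4] = 1                 # corrupt a second copy of the same bit
print(decode(sent) == msg)  # False: two bad copies change the vote
```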
With the circularity issue out of the way, I’d like to draw attention to the other flaws of Dembski’s CSI.
Before I'd even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn't a flaw in specified complexity, but only in some people's mistaken interpretation of it. Dembski's original argument isn't circular.
Ewert characterizes the ID position as having assumed the very thing it is attempting to demonstrate. Of course his characterization is circular.
No, I don't. I characterize the argument of specified complexity as starting by assuming that evolution is highly improbable. Specified complexity assumes that we already know that life is highly improbable. Intelligent design as a whole combines specified complexity with other arguments to show that evolution did not happen. I was discussing only the issue of specified complexity, not the whole of the intelligent design argument.
1) First, the premise is that the bacterial flagellum is highly specified, and also contingent upon having many parts working together in concert to be of any use whatsoever. 2) We only know of things being specific, and requiring many units working together in concert, arising from intelligent sources. 3) Since evolution does not utilize intelligence, or methods which intentionally construct things which work in integrated concert, the bacterial flagellum is highly unlikely to be constructed by evolution.
That's a fine argument. But it's a combination of irreducible complexity and specified complexity to produce a whole argument for intelligent design. That's exactly how it's supposed to work. Specified complexity is supposed to be combined with other arguments to form a complete argument for ID. The problem is that people like Keith attempt to critique specified complexity as though it were a complete argument by itself. Winston Ewert
Winston:
In its most basic form, a specified complexity argument takes a form something like:
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
More like:
Premise 1) No one knows the step-by-step process that constructed the bac flag.
Premise 2) No one can even model such a thing.
Premise 3) Therefore we need to use probabilities to flesh out how the bac flag came to be.
Premise 4) Bac flags are both specified and complex.
Premise 5) Given the above, the unguided evolution of bac flags is highly improbable.
Premise 6) Bac flags fit the criteria of intelligent design.
Premise 7) To refute the design inference for bac flags, all one has to do is demonstrate that unguided evolution can produce one.
That the inference can be refuted is evidence against circularity. Joe
WJM, Good points. I happen to find the argument that evolutionists always pull out, of snowflakes showing complexity, to be a giant trickery that is not pertinent to the argument at all. Snowflakes don't show complexity. They show repeated patterns, which is not even close to complexity. The fact that snowflakes just so happen to look like something someone would draw that is complex and artistic does not mean they are complex in the sense of complex functionality. It just looks cool. I don't think any IDists are arguing that something that looks cool must be designed, so that is not the basis for the argument. The design people see is not about looking cool; it's about something performing or relating to functionality. It has nothing to do with snowflakes or ripples on water, or doggie-shaped clouds, so I don't think that argument even needs refuting. phoodoo
Oops, sorry, wasn't paying attention Joe
Joe, language tone please. KF kairosfocus
PS: Also, 17 just above. I adjust Ph:
1) First, the premise is that the bacterial flagellum is highly specified, and also contingent upon having many correctly arranged parts working together in concert to be of any use whatsoever. (This is a case of CSI, specifically FSCO/I.)
2) We only know by observation of things being specific, and requiring many units working together in concert, arising from intelligently directed configuration. (This is vera causa.)
3) On analysis of sparse blind search of large configuration spaces and the implications of FSCO/I, namely that we have islands of function that will be deeply isolated, blind watchmaker thesis searches by dust or random walk with drift or both etc. are maximally likely to fail to hit on islands of function. (The needle in haystack, blind sparse search challenge.)
4) Since blind watchmaker thesis evolution does not utilize intelligence, or methods which intentionally construct things which work in integrated concert, the bacterial flagellum is highly unlikely to be constructed by blind watchmaker thesis evolution. (Negative conclusion regarding a candidate.)
5) By contrast, the relevant FSCO/I is known to be something produced by design, and it is reasonable to explain this feature as due to intelligently directed configuration. But this is central to the flagellum, so it is credibly in material part the result of design. (Positive inference.)
6) Where, we note that Irreducible Complexity is a subset of FSCO/I, cf. 12 above.
kairosfocus
Box, cf 11 above, it is inductive. KF kairosfocus
Winston, If you want to put the argument in simple terms, I don't think you have done the correct job of that at all. It's not:
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
It's:
1) First, the premise is that the bacterial flagellum is highly specified, and also contingent upon having many parts working together in concert to be of any use whatsoever.
2) We only know of things being specific, and requiring many units working together in concert, arising from intelligent sources.
3) Since evolution does not utilize intelligence, or methods which intentionally construct things which work in integrated concert, the bacterial flagellum is highly unlikely to be constructed by evolution.
Not circular in any way, shape or fashion. Finally, don't expect Keith to have a good handle on when an argument is circular and when it is not; this is not his forte. phoodoo
BTW ID uses probabilities because there isn't any evidence for unguided evolution producing CSI so all we have left are probabilities. And it is the evolutionists who have to provide them yet they try to blame us for not providing them. Talk about cowardice. Joe
It’s ironic that ID proponents are always demanding mutation-by-mutation accounts of how this or that biological feature evolved,
Lol! We ask for such a thing because your position says it has such a thing.
because that is the level of detail they must provide in order to justify the values they assign to P(T|H).
Wrong! That is the level of detail YOU need to provide as YOURS is the position that says it has a step-by-step process capable of producing the diversity of life as well as the diversity of the systems and subsystems. Our opponents are so clueless and apparently proud of it. Joe
markf:
If you want to define CSI to include IC then that changes the argument. However, that is not how Dembski defines it.
Yes, he does. Read "No Free Lunch": Dembski states that IC is a special case of specified complexity (page 289), and he provides a formula for IC, P(dco) = P(orig) x P(local) x P(config), where dco is a discrete combinatorial object, orig is the origin of the parts, local is getting them in the proper spot, and config is getting the proper configuration.
You cannot observe CSI without observing or deducing that any known natural process would be so unlikely to meet the specification that it is effectively impossible that it did so.
That is incorrect. We can observe CSI without knowing its origins. The point of CSI is that no one has ever observed non-telic processes producing it and every time we have observed CSI and knew the cause it has always been via some intelligent agency. Joe
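For what it is worth, the No Free Lunch formula Joe cites is a straight product of three probabilities, so a minimal sketch is trivial; the numbers below are placeholders, not values Dembski computes:

```python
# P(dco) = P(orig) * P(local) * P(config), the form of the formula Joe quotes.
# Placeholder values, chosen only to show how the factors compound.
p_orig = 1e-20    # originating the required parts
p_local = 1e-15   # localizing them in one place
p_config = 1e-30  # arranging them in a functional configuration
print(p_orig * p_local * p_config)  # 1e-65: the joint probability collapses multiplicatively
```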
Kairosfocus, Can you provide the premisses and conclusion of the specified complexity argument? IOW can you improve on Winston Ewert's version:
In its most basic form, a specified complexity argument takes a form something like:
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
Can you point out where Winston goes wrong? Box
MF: Pardon a point of clarification. The just above helps you see how IC entities are linked to FSCO/I, as in that case the interactive organised complex functionality includes a core of parts that are each necessary for the core functionality. IC is thus a subset of FSCO/I, which is the relevant form of CSI. By contrast dFSCI is another subset of FSCO/I, but in many cases due to redundancies [error correcting codes come to mind], there will be no set of core parts in a data string such that if any one of such is removed function ceases. CSI is a superset that abstracts specification away from being strictly functional. KF kairosfocus
WJM, well said.
1: FSCO/I -- the operationally relevant thing -- is observable as a phenomenon in and of itself. It depends on multiple, correctly arranged and coupled, interacting components to achieve said functionality.
2: That tight coupling and organisation with interaction sharply constrains the clusters of possible configs consistent with the functionality. Where,
3: There are a great many more clumped configs in the possibilities space that are non-functional. (An assembled Abu 6500 C3 reel will work; you can shake up a bag of parts as long as you like, generating all sorts of clumped configs, which predictably will not.)
4: The number of ways to scatter the parts is even hugely more, and again, non-functional.
5: The wiring diagram for the reel is highly informational, and the difference between scattered or clumped at random in a bag and properly assembled is manifest. That is, qualitatively observable.
6: The wiring diagram can be specified in a string of structured y/n questions defining the functional cluster of states (there are tolerances, it is not a single point). That allows us to quantify the info in bits, functionally specific info.
7: Now, let us define a world as a 1 m^3 cubic vat in which parts are floating around based on some version of Brownian motion, with maybe drifts, governed by, let's just use, Newtonian dynamics. Blind chance and mechanical necessity.
8: It is maximally unlikely that under these circumstances a successful 6500 C3 will be assembled.
9: By contrast, feed in some programmed assembly robots that find and clump parts, then arrange them into a complete reel per the diagram . . . quite feasible. And such would, with high likelihood, succeed.
10: So, we see that blind chance and mechanical necessity will predictably not find the island of function (it is highly improbable on such a mechanism) but it is quite readily achieved on intelligently directed configuration.
11: Now, observe, sitting there on your desk, a 6500 C3 reel. It is not known to you how it came to be. But it exhibits FSCO/I . . . just the gear train alone is decisive on that, never mind the carbontex slipping clutch drag and other features such as the spool on bearings etc.
12: On your knowledge of config spaces, islands of function and the different capabilities of the relevant mechanisms, you would be fully entitled to hold that FSCO/I is a reliable sign of design, and -- having done a back of envelope calc on the possibility space of configs and the search limitations of the sol system (sparse, needle in haystack search) -- to hold that it is maximally implausible that a blind dynamic-stochastic mechanism as described or similar could reasonably account for the reel.
13: Thus, the reasoning that infers design on FSCO/I is not circular, but is empirically and analytically grounded.
14: It extends to the micro world also. For, say, the protein synthesis mechanism in the ribosome and associated things is a case of an assembly work cell with tape-based numerical control. There is no good reason to infer that such a system with so much FSCO/I came about by blind chance and mechanical necessity on the gamut of the observable cosmos. But assembly according to a plan makes sense.
15: Some will object by inserting self-replication and an imagined deep past. That simply inadvertently highlights that OOL is pivotal, as the ribosome system is key to the cell and proteins.
16: Where, the origin of the additional capacity of self-replication becomes important, and brings to bear Paley's thought exercise of the time-keeping, self-replicating watch in Ch. II of his 1804 Nat Theol. (Which, for coming on 160 years, seems to have been shunted to one side in the haste to dismiss his watch vs stone in the field argument. And BTW, Abu started as a watch-making, then taxi-meter manufacturing company, then turned to the SIMPLER machine, fishing reels, when WW II cut off markets. A desperation move that launched a legend.)
17: So, FSCO/I remains a pivotal issue, once we start from the root of the TOL. And it allows us to see how it is that design is a better explanation for specified, functional complexity than blind chance and mechanical necessity. (Never mind side tracks on nested hierarchies and the like.)
KF kairosfocus
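Since KF appeals to a back-of-envelope calculation pitting configuration spaces against solar-system search resources, here is a minimal Python sketch of that style of estimate. The resource figures (about 10^57 atoms, 10^17 seconds, 10^14 tries per atom per second) are the rough assumptions commonly used in these threads, not measurements:

```python
import math

configs = 2 ** 500            # size of a 500-bit configuration space
atoms = 10 ** 57              # rough count of atoms in the solar system
seconds = 10 ** 17            # rough age of the universe in seconds
rate = 10 ** 14               # assumed sampling attempts per atom per second

tries = atoms * seconds * rate   # total samples available, ~10^88
fraction = tries / configs       # share of the space that could be sampled
print(math.log10(fraction))      # about -62.5: a vanishingly small fraction
```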
SB
Irreducible complexity is a form of CSI.
If you want to define CSI to include IC then that changes the argument. However, that is not how Dembski defines it. In fact, I am not aware of anywhere that it is defined that way. Remember, CSI is usually presented as something you calculate. I have never known anyone to calculate IC, or even show how to do it.
Also, CSI is not restricted to living things. A sand castle contains CSI.
True. That is why I talked about observing CSI in living things. For non-living things it has to be phrased more generally. You cannot observe CSI without observing or deducing that any known natural process would be so unlikely to meet the specification that it is effectively impossible that it did so. Therefore, you cannot use the presence of CSI to deduce that something was not the result of any known natural process. markf
Ewert said:
In its most basic form, a specified complexity argument takes a form something like:
Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
Premise 2) The bacterial flagellum is highly specified.
Conclusion) The bacterial flagellum did not evolve.
Ewert characterizes the ID position as having assumed the very thing it is attempting to demonstrate. Of course his characterization is circular. Premise 1 is incorrect. ID doesn't premise that the evolution of the bacterial flagellum is astronomically improbable. Premise 2 is incorrect. ID doesn't hold as a premise that the bacterial flagellum is highly specified. We do not "begin" with the improbability of the evolution of something. What we begin with is the prima facie appearance of design/artificiality. Just because something has the appearance of design/artificiality doesn't mean it has been demonstrated (1) to be beyond the plausible reach of natural forces and (2) that design is a good alternative explanation.

On first blush, snowflakes and rainbows appear to be designed. The paths of the planets around the sun appear to have been designed. This appearance of design (meaning, something that looks like it was designed) is cause for further investigation, not for assuming that the thing in question is beyond the reach of natural forces. As with the cases of snowflakes, rainbows and solar systems, we look for natural causes - some combination of the causal categories natural law and chance - to account for the effect/phenomena in question.

In the course of this investigation, we find that many biological artifacts hold very high levels of CSI in the form of highly precise, organized, functional 3D mechanisms and a corresponding systems operation code. We research how much CSI can be attributed to any known natural forces combined with generous statistical leeway and find that the CSI found in living organisms cannot be accounted for. Looking around, we find that the only known source of CSI beyond the range of nature to produce is design.

Ewert begins his argument at the conclusion by inserting as premise the very thing ID attempts to demonstrate - that natural forces are an implausible explanation, and that design is a plausible explanation. William J Murray
CSI may not be synonymous with evolution being improbable but you cannot observe CSI in a living thing without first observing or deducing that evolution is improbable.
I don't think so. Irreducible complexity is a form of CSI. Also, CSI is not restricted to living things. A sand castle contains CSI. StephenB
CSI is related to, but not synonymous with the process of rendering evolution improbable. It isn’t simply a negative argument against evolution. It is a positive affirmation based on our observational experience of how designers operate and make choices among several possibilities.
CSI may not be synonymous with evolution being improbable but you cannot observe CSI in a living thing without first observing or deducing that evolution is improbable. Dembski is quite clear about this. As many people have pointed out, you can even point to the term in the mathematical definition that corresponds to "evolution is improbable". Therefore it is circular to use the presence of CSI to deduce that evolution is improbable in a living thing. markf
Sorry Winston, but I don't think your analysis is quite on the mark. CSI is related to, but not synonymous with, the process of rendering evolution improbable. It isn't simply a negative argument against evolution. It is a positive affirmation based on our observational experience of how designers operate and make choices among several possibilities. Yes, it is related to the chance hypothesis, but it is not exactly the same thing. CSI is a confirmation, not a copy. Dembski's argument is circular only if CSI is mere window dressing. It isn't. Otherwise, he would not have given it equal billing. As he puts it, "To safely conclude that an object is designed, we need to establish that it exhibits specificity, and that it has an astronomically low probability of having been produced by unintelligent natural causes." Also, KeithS, as I understand him, is not, as you suggest, simply saying that it is circular to argue for the improbability of evolution on the basis of specified complexity. He is attributing this argument to Dembski. Frankly, I think you need to go back to the drawing board and rework your post. StephenB
Winston - you explained that very well. I wonder if VJ would like to comment? markf
The improbability argument seems weak on the surface, but if one looks deep enough, one can see that it quickly turns into a rock-solid impossibility argument. I am surprised that nobody in the ID camp ever seems to notice this. IMO, Darwinian evolution is problematic, not because it is improbable but because it is logically impossible before it even starts. Why? It is because genes cannot survive, let alone evolve, unless there is a gene repair mechanism in place that repairs mutations. As any programmer knows, almost all changes in a program are deleterious. A viable repair mechanism cannot exist because it would need to know in advance which mutations to fix and which ones to allow. Darwinian evolution eats its own tail. Mapou
And some more: keiths on June 16, 2013 at 7:37 pm said:
Well, essentially, this is what Dembski is getting at with his concept of “Specification”.
“Specification” is Dembski’s attempt at dealing with the fact that vastly improbable things happen all the time. Problem is, specifications are usually too specific. For example, Dembski knows that he would be committing the lottery winner fallacy if he claimed that the bacterial flagellum, exactly as it appears today, was evolution’s “target”. Instead, he broadens the specification to include any “bidirectional rotary motor-driven propeller.” But this is still far too specific. Even “propulsion system” is too specific, because evolution didn’t set out to produce a propulsion system. Evolution’s only “target” is differential reproductive advantage, and even then the word “target” is too strong.
keiths on June 17, 2013 at 12:57 am said:
In short, Dembski is hosed. 1) His concept of specification is too narrow, but even if it weren’t, 2) P(T|H) can’t be computed for realistic biological cases, but even if it could, 3) you have to answer the relevant question — “Could this have evolved?” — without the use of CSI, when you calculate P(T|H), 4) so the concept of CSI adds nothing, and if you invoke it the entire argument becomes circular: X couldn’t have evolved, so it must have CSI; X has CSI, therefore it couldn’t have evolved. Dembski’s “solution” to these problems: A. Retreat. Renounce the explanatory filter but affirm the value of CSI, as if these were separable concepts. B. Give up arguing that evolution cannot produce adaptive complexity, without actually admitting that it can. C. Argue instead that any adaptive complexity produced by evolution was already implicit in the environment, and that a Designer must have placed it there. Not much of an improvement, but at least he’s fighting a different set of battles.
keith s
With the circularity issue out of the way, I'd like to draw attention to the other flaws of Dembski's CSI. Here are some relevant comments from TSZ: keiths on June 14, 2013 at 12:59 am said:
Lizzie,
However, let’s suppose that he does manage to compute the probability distribution under some fairly comprehensive null that includes “Darwinian and other material mechanisms”.
It’s ironic that ID proponents are always demanding mutation-by-mutation accounts of how this or that biological feature evolved, because that is the level of detail they must provide in order to justify the values they assign to P(T|H). It’s even worse for them, in fact, because P(T|H) must encompass all possible evolutionary pathways to a given endpoint. P.S. Winston’s last name is “Ewert”, with two E’s.
keiths on June 14, 2013 at 8:56 am said:
Dembski is notorious for scoffing that
ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories.
His statement was mocked for obvious reasons, but it was also unintentionally prophetic. He’s right that ID’s job isn’t to match evolution’s “pathetic level of detail” — ID has to exceed that level of detail in order to establish the value of P(T|H). Without a value for P(T|H), or at least a defensible upper bound on its value, the presence of CSI can never be demonstrated — by Dembski’s own rules. Think of what that would involve in the case of biology. You’d not only have to identify all possible mutational sequences leading to the feature in question — you’d also have to know the applicable fitness landscapes at each stage, which would mean knowing things like the local climatic patterns and the precise evolutionary histories of the other organisms in the shared ecosystem. If he didn’t realize it then, Dembski must certainly see by now that it’s a quixotic and hopeless task. That may be why he’s moved on to “the search for a search”.
keiths on June 16, 2013 at 8:37 am said:
timothya,
That is to say, any investigator wanting to eliminate “chance and necessity” or any other non-design cause, would need to work through all possible mutational sequences to prove they couldn’t have done it?
It depends on what you mean by “work through”. Dembski’s approach depends on being able to eliminate all non-design explanations, so every possible non-design cause must at least be considered. However, it may be possible to reject some of them without doing a detailed analysis. For example, the probability of the vertebrate eye evolving in a single generation is vanishingly small. It’s not impossible, but the associated probability is so small as to be negligible. It will have almost no effect on the overall P(T|H) and can therefore be neglected. The problem for Dembski et al is that even without considering these vastly improbable outliers, the difficulty in calculating P(T|H) for a complicated biological structure is overwhelming. The required information is simply not available. That’s why IDers haven’t done it, and that’s why no one expects them to.
keith s
Winston, Thank you for that straightforward acknowledgement. keith s
