
The Tragedy of Two CSIs


CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation.

CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts.

CSI, as used by others, is something more along the lines of the appearance of design. It’s typically along the same lines as the notion of complicated developed by Richard Dawkins in The Blind Watchmaker:

complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.

This is similar to Dembski’s formulation, but where Dawkins merely requires that the quality be unlikely to have been acquired by random chance, Dembski’s formula requires that the quality be unlikely to have been acquired by random chance and any other process, such as natural selection. The requirements of Dembski’s CSI are much more stringent than those of Dawkins’s complicated or the non-Dembski CSI.

Under Dembski’s formulation, we do not know whether or not biology contains specified complexity. As he said:

Does nature exhibit actual specified complexity? The jury is still out. – http://www.leaderu.com/offices/dembski/docs/bd-specified.html

The debate for Dembski is over whether or not nature exhibits specified complexity. But for the notion of complicated or non-Dembski CSI, biology is clearly complicated, and the debate is over whether or not Darwinian evolution can explain that complexity.

For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact. For non-Dembski formulations of specified complexity, the law of the conservation of information is a controversial claim.

These are two related but distinct concepts. We must not conflate them. I think that non-Dembski CSI is a useful concept. However, it is not the same thing as Dembski’s CSI. They differ on critical points. As such, I think it is incorrect to refer to both of these ideas as CSI or specified complexity. I think that only Dembski’s formulation, or variations thereof, should be termed CSI.

Perhaps the toothpaste is already out of the tube, and this confusion of the notion of specified complexity cannot be undone. But as it stands, we’ve got a situation where CSI is used to refer to two distinct concepts which should not be conflated. And that’s the tragedy.

Comments
This is undoubtedly far too late to the party, but I can't help but be fascinated by the perpetual wrestling over the validity and application of the CSI concept, and, consequently, want to add a few ideas to the pot. My primary exposure to the concept of CSI is via Meyer's Signature, so I can't comment with any force on Dembski's development or application of his conception of CSI. Meyer discusses CSI (though I'm not sure that he explicitly uses that acronym, he certainly gives a very lucid exposition of the component concepts of CSI) in chapter 4 of SITC under the subheading "Shannon Information or Shannon Plus?" (pg. 105-110). He begins with the hypothetical anecdote concerning the attempts of Misters Jones and Smith to reach each other via phone - contrasting Mr. Jones random 10-digit sequence with Smith's specifically arranged sequence (i.e. Jones' phone number). Meyer then makes this comment concerning the distinction between Jones' sequence and Smith's sequence:
Both sequences... have information-carrying capacity, or Shannon information, and both have an equal amount of it as measured by Shannon's theory. Clearly, however, there is an important difference between the two sequences. Smith's number is arranged in a particular way so as to produce a specific effect, namely, ringing Jones's cell phone, whereas Jones's number is not. Thus, Smith's number contains specified information or functional information, whereas Jones's does not; Smith's number has information content, whereas Jones's number has only information-carrying capacity (or Shannon information). [Emphases from original]
Note how Meyer uses the terms specified information and functional information in parallel. It becomes quite apparent, then, that the term specified in CSI is to be identified with function. We could just as properly use the term CFI if we so chose. Meyer next tackles the 'C' - complexity:
Both Smith's and Jones's sequences are also complex. Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm.... Complex sequences... cannot be compressed to, or expressed by, a shorter sequence or set of coding instructions. (Or rather, to be more precise, the complexity of a sequence reflects the extent to which it cannot be compressed.) [Emphasis from original]
Here it is shown that complexity corresponds to the compressibility of a sequence (or, rather, the lack thereof). It should also be noted that complexity in this sense is not an "either-or" type of quality. Some sequences may resist even the smallest degree of compression, yet others may be wholly compressible, and many likely fall somewhere in the middle. But how are we to understand information? One thing that jumps out in Meyer's discussion of the subject is that he specifically confines his usage of the term 'information' to linear digital sequences. He uses the term in reference to numerical sequences, alphabetic sequences, amino acid sequences, and nucleotide sequences. All of those can properly be described as linear digital sequences (where "digital" is understood as referring to discrete values - as opposed to continuous values). Putting all of this together yields this definition of CSI: linear digital sequences that are algorithmically incompressible (in some measure) and possess functional significance. Meyer makes the applicability of this concept to biology clear in paragraph 2 of page 109:
Molecular biologists beginning with Francis Crick have equated biological information not only with improbability (or complexity), but also with "specificity," where "specificity" or "specified" has meant "necessary to function." Thus, in addition to a quantifiable amount of Shannon information (or complexity), DNA also contains information in the sense of Webster's second definition: it contains "alternative sequences or arrangements of something that produce a specific effect.".... DNA displays a property - functional specificity - that transcends the merely mathematical formalism of Shannon's theory. [Emphases from original]
We can thus confidently say that DNA (and nucleotide sequences) possesses CSI in the aforementioned sense, namely it contains sequences that actually do something (to put it as simply as possible). One especially salient point that Meyer makes is that functional specificity (or specificity) cannot be reduced to mere numerical status - it transcends it. So while it is perfectly possible to calculate the 'C' part of CSI (its complexity or Shannon information), the 'S' part cannot be calculated (as I believe Eric earlier pointed out). Having said all of that, here are a few questions for CSI skeptics:
1. Is there such a thing as Shannon information (sequence complexity)?
2. Do you believe that 'function' is a useful descriptor?
3. Can a sequence possessing some amount of Shannon information simultaneously have functional significance that is sequence-dependent?
a) If yes, would you agree that CSI (as defined in this comment) has at least limited applicability?
b) If no, how would you differentiate between functional and non-functional sequences?
For fellow ID proponents:
1. Meyer states that the Shannon information of Jones's random 10-digit number is 33.2 bits. If a 10-digit base 10 number (non-redundant, I assume) contains 33.2 bits of Shannon information, could we say that a functional sequence of the same type contains 33.2 bits of CSI?
a) If not, why not?
I'm curious to hear the responses, so please chime in if you have the time. Thx:) Optimus
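As a quick check on the 33.2-bit figure Optimus asks about, here is a minimal Python sketch of the Shannon carrying capacity of a uniformly random 10-digit, base-10 number. It measures only the 'C' part (carrying capacity), which is exactly why the 'S' question he poses remains open.

```python
import math

# Shannon information capacity of a 10-digit, base-10 sequence, assuming each
# digit is drawn independently and uniformly (p = 1/10 per digit).
digits = 10
bits_per_digit = math.log2(10)        # ~3.32 bits per digit
total_bits = digits * bits_per_digit  # ~33.2 bits

print(f"{bits_per_digit:.2f} bits/digit, {total_bits:.1f} bits total")
```

The same 33.2 bits come out whether the number is Jones's random string or Smith's functional one, which is the point of the question above.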
I agreed with Elsberry and Shallit that the LCI doesn’t work in the case that the natural process is unknown to the specifying agent, a point also made by Tellgren and conceded by Dembski.
I have never tried to understand the technical aspects of Dembski's ideas, so this is not a comment on that. But reading through the lines above, it says that all natural processes currently known to man cannot add information. And does this mean that natural selection is such a process that cannot add information? If this is true and agreed to by Elsberry, Shallit and Tellgren, then shouldn't that fact, in layman's language, become part of the science curriculum? jerry
Thanks, Alan. I wish I had something to say that hasn't already been said in papers and blog posts, but I don't. R0bb
Winston, yes I'm aware of Dembski's account of the LCI in NFL. As for problems with it, in my last comment I agreed with Elsberry and Shallit that the LCI doesn't work in the case that the natural process is unknown to the specifying agent, a point also made by Tellgren and conceded by Dembski. This problem by itself is enough to disqualify the LCI, as defined in NFL, from being a mathematical fact. I completely respect your choice to not delve into these issues in this thread. I appreciate your attempts to clear up the confusion surrounding the topic of CSI, and agree with much of what you say. I especially appreciate the fact that you're willing to contradict other IDists. For example, you point out that whether two copies of something have more CSI than one copy depends on the assumed mechanism, contra Dembski who says that they have the same amount of information, and that any formal account of information had better agree. More power to you, Winston. R0bb
I suggest that you write a post and get it up on The Skeptical Zone...
R0bb has author status at The Skeptical Zone, should he decide that TSZ is a suitable venue and feels inclined to publish a post there. He would be most welcome. Alan Fox
I'm assuming that you are well aware of where Dembski has offered definitions of Specified Complexity, and that you find fault with those definitions. I assume you are also aware of his proof of that law in No Free Lunch, and that you find fault with it. But you don't spell out what your problem is with the proof or the definition, offering merely vague statements about them not being defined well enough. I wrote my response to Elsberry and Shallit's criticism as part of a larger response to someone else who had referenced E&S. Thus I addressed the particular issue that he brought up, although I did go back and read E&S. Perhaps you are looking at something slightly different from their criticisms. On either issue, answering your questions here is more effort than I'm willing to put into a blog comment. If you'd like to offer a critique of my response there, I suggest that you write a post and get it up on The Skeptical Zone, Panda's Thumb, or similar. If you do that, I'll look into responding. Winston Ewert
Winston:
Elsberry and Shallit’s criticisms show consistent misunderstanding of Dembski’s work. I’ve previously discussed their confused objections to the LCI.
WRT their confused objections: 1) I'm curious -- where in their paper do they appear to believe that K in Dembski's definition refers to the entire background knowledge of a subject? At the beginning of section 8 they define K as "a set of items of background knowledge" that "'explicitly and univocally' identifies a rejection function", and they seem to stick with this definition throughout the paper. 2) I'll take your second response a sentence at a time:
Second, Elsberry and Shallit object that the natural process under consideration might not be in the background knowledge of the subject.
To be exact, their objection is that g∘f is not necessarily explicitly and univocally identifiable from K, where K is the background knowledge that explicitly and univocally identifies g.
However, Dembski has never claimed that every subject will be able to identify specified complexity in every case.
You seem to be implying that there might be specified complexity, but the subject might lack the background knowledge to recognize it as such. But specified complexity is defined in terms of K, the background knowledge that identifies the pattern. If there's no K, then there's no specified complexity.
The design inference is argued to be resilient against false positives, but not false negatives.
But we're talking about the LCI, not the design inference.
Furthermore, after investigation, the subject will learn about the natural process and thus it will enter the background knowledge of the subject.
Even if we could guarantee that an investigation will always take place and that the investigation will always yield knowledge about the natural process (which we can't), that would not change the fact that prior to the investigation, the LCI is being violated. 3) I think your response to "the question of whether knowledge gained about the process might invalidate the conditional independence requirement" has some problems, but I'm not even sure if this is a question posed by Elsberry & Shallit. Is it? I still stand by my assertion that the LCI hasn't been defined well enough to allow for mathematical proof. If you think that it has, can you point me to the definition? Hopefully I can respond to the rest of your comment later. R0bb
F/N: A bit late to the party, see that the matter has been quite well handled in general. I note from 2 above an inadvertent illustration by AF of the all too typical fundamental misunderstandings and dismissiveness of objectors to the concept of functionally specific, complex organisation and/or associated information:
How has dFCSI demonstrated itself as useful? Where can I find a demonstration of usefulness? All I see is GEM counting amino acid residues and claiming he has done something useful without achieving anything useful at all.
Let's see:
1 --> Amino acid sequences of relevant length [say 100 up] give us a huge space of possible configs, even leaving off chirality and the geometrical/functional fail to fold correctly implications of incorrect handedness, possibilities of different bonding patterns, the much broader set of possible amino acids vs the 20 or so in life, interfering cross-reactions, implications of endothermic reactions, etc.
2 --> Of these, given what we know about fold domains, singletons, key-lock fitting and particular requisites of function, we know that functional sequences are a very tiny fraction of the space of possibilities.
3 --> In addition, they come in isolated clusters with non-functional gaps intervening in the Hamming-distance space.
4 --> That is, it is an appropriate metaphor to speak of deeply isolated islands of function in wide seas of non-function.
5 --> Where, given search resources of a solar system or an observed cosmos, we can use these facts and the typical lengths of relevant proteins to see -- per needle in a haystack issues -- that it is maximally unlikely that blind chance and mechanical necessity in a pre-biotic soup could come up with a cluster of relevant molecules to get life started.
6 --> And similarly, the intervening seas of non-function multiplied by search challenges and observed patterns pointing to upper limits on plausible numbers of simultaneous changes [as in 7 or so per Axe and Gauger, Behe etc] point to a similar maximal lack of likelihood of forming new body plans by chance variation of various types and differential reproductive success leading to new population patterns thence descent with an adequate degree of modification.
7 --> So, it is no surprise that there is a lack of empirical observation of origin of novel body plans by such mechanisms. The Darwinian theory of body plan level macro evolution lacks an observed causally adequate mechanism. The same, for variants.
8 --> All of this has been repeatedly pointed out to AF and explained in adequate detail. On fair comment, he has persistently refused to yield to adequate warrant.
9 --> On further fair comment, the dismissive remarks as cited are little more than a strawman fallacy.
10 --> AF et al would do better to carefully examine the point that functional specificity is as close as what happens when a car part is sufficiently out of spec. As a simple biological case in point, reflect on sickle cell anemia. (The fear of what radiation does to cells is a similar case.)
11 --> And likewise, they would do well to ponder the protein synthesis mechanism and its use of codes -- digital four state codes, and step by step processes aka algorithms. (But then, this is an ilk that is highly resistant to the demonstrated reality of self evident truth. No inductive argument -- thus nothing of consequence in science -- can rise to that level of warrant. This is ideologically driven selective hyperskepticism that we are dealing with.)
KF
PS: EW, it is in the context that WmAD highlights in NFL, that in the biological arena specificity is cashed out as function, that I have focussed on that. A simplification of the 2005 Dembski expression then gives: Chi_500 = I*S - 500, bits beyond the Solar System threshold. I being a reasonable measure of info content and S a dummy variable defaulting to 0 and set to 1 where on objective grounds functional specificity is positively identified. Digital code such as in D/RNA is an obvious example.
The Durston metric can be used for I and it yields that some relevant protein families are credibly designed. kairosfocus
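A minimal sketch of the simplified expression kairosfocus quotes, Chi_500 = I*S - 500. The 359-Fit input is the Durston value for Ribosomal S12 discussed later in this thread; the 1000-Fit input is a made-up figure for contrast, not a claim about any real protein family.

```python
def chi_500(info_bits, functionally_specific):
    """Simplified expression quoted above: Chi_500 = I*S - 500, where I is a
    measure of information content and S is 1 only when functional
    specificity is positively identified on objective grounds."""
    s = 1 if functionally_specific else 0
    return info_bits * s - 500

# Illustrative uses; the 1000-Fit value is hypothetical.
print(chi_500(359, True))    # -141: below the 500-bit solar-system threshold
print(chi_500(1000, True))   #  500: beyond the threshold
print(chi_500(1000, False))  # -500: S = 0, so no design inference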
Naturally I disagree. This would imply that the law is defined rigorously enough to allow for mathematical proof, which it certainly is not. See the ambiguities and problems pointed out by Tellgren and Elsberry & Shallit.
Elsberry and Shallit's criticisms show consistent misunderstanding of Dembski's work. I've previously discussed their confused objections to the LCI.
I’m afraid I still don’t understand. You yourself have done CSI calculations based on hypothesized processes without knowing whether those processes were actually in operation. Dembski has done the same. He based his CSI analysis of the Nicholas Caputo incident on a hypothesis of a random draw, even though there apparently was no actual random draw in operation.
Specified complexity allows the testing of a particular hypothesis for a given outcome. We can test a hypothesis whether or not it was actually in operation. So we can test the fair coin hypothesis for Caputo, or any of the various hypotheses I tested for the mystery image. However, if I want to argue that an artifact was not produced by anything internal to a system, I need to ensure that it was not produced by any natural laws in that system. That's the case where I need to look at all the mechanisms/natural laws that operate in the system. So we have no basis for claiming that Caputo's actions were driven by intelligence. We can conclude that he almost certainly didn't use a fair coin. But he could very well have used a biased coin; all the specified complexity tells us is that we can reject the fair coin hypothesis. That alone does not tell us whether his actions were driven by intelligence. Winston Ewert
gpuccio:
That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state.
But how do we define the target state, and under what hypothesis do we calculate the improbability? Does it qualify as CSI if we choose any target state and any hypothesis we like? Dembski's current CSI measure is an upper bound on the probability of E, or any event more simply describable than E, occurring anywhere at any time. To calculate this upper bound, you have to factor in the replicational and specificational resources relevant to E's occurrence, which Durston does not do in his FSC measure. If you think that definitional details like this are unimportant, consider the amount of disagreement over CSI just among IDists. You say that CSI is found in biology -- Ewert says we don't know if there's CSI in biology or not. jerry says that "specified" has no agreed-upon meaning -- others obviously disagree. Eric Anderson says that a sequence of 1000 coin flips that's all heads has no complexity -- Sal disagrees. I submit that disputes like these are resolved by cranking up the rigor, which is what needs to happen in CSI discussions. R0bb
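For readers unfamiliar with the measure R0bb is describing, here is a rough sketch of the 2005-style bound, chi = -log2[10^120 * phi_S(T) * P(T|H)], where 10^120 stands in for replicational resources and phi_S(T) for specificational resources. The probability and phi_S values plugged in below are hypothetical placeholders, not figures from Dembski or Durston.

```python
import math

def chi_2005(p_T_given_H, phi_S, replicational_resources=1e120):
    """Rough sketch of the 2005-style measure described above:
    chi = -log2( R * phi_S(T) * P(T|H) ), with R standing in for
    replicational resources and phi_S(T) for specificational resources.
    The numbers used below are illustrative placeholders only."""
    return -math.log2(replicational_resources * phi_S * p_T_given_H)

print(chi_2005(1e-150, 1e5))  # ~83 bits: positive, so "specified complexity"
print(chi_2005(1e-9, 1e5))    # large and negative: no design inference
```

The point of the sketch is simply that the bound depends on the chosen chance hypothesis H and on phi_S(T), which is why R0bb presses for those definitional details.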
Winston, from your OP:
For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact.
Naturally I disagree. This would imply that the law is defined rigorously enough to allow for mathematical proof, which it certainly is not. See the ambiguities and problems pointed out by Tellgren and Elsberry & Shallit. #38
By “mechanisms in operation”, I was referring to the natural laws that operate in a system. I’m not referring to the actual operations that produced the object.
I'm afraid I still don't understand. You yourself have done CSI calculations based on hypothesized processes without knowing whether those processes were actually in operation. Dembski has done the same. He based his CSI analysis of the Nicholas Caputo incident on a hypothesis of a random draw, even though there apparently was no actual random draw in operation. R0bb
Please, check my long discussion with Elizabeth here: https://uncommondesc.wpengine.com.....selection/ Posts 186 on.
Lizzie starts posting here and the whole exchange between you and Lizzie is informative but ultimately unsatisfactory. The essential point that Lizzie makes is that you (along with Axe, Abel, Trevors, Durston) are making assumptions about the rarity of unknown protein sequences and drawing an unjustified extrapolation. That is also my view, and it is so far unchanged by what I have read of what you have written. Alan Fox
No hope we can get along one with the other.
Surely you don't mean that, gpuccio? I don't think you are politically motivated and I hope you subscribe to "live and let live" too. Disagreeing on matters metaphysical does not prevent peaceful coexistence. Alan Fox
wd400:
Got it. You calculate CSI based on an assumption no one believes to be true, and ignore the mechanism that is actually proposed to explain protein evolution.
You got it perfectly right. I am, certainly, a minority guy. And you are, definitely, a willing conformist. No hope we can get along one with the other. Good luck. gpuccio
My accusation is that Sal modified the content of some of my posts to make it appear as if I had written something which I had not in fact written.
Not true, I modified them trying to make it as evident as possible that it wasn't you that said those words. I thought people would figure it out after the post that thanked me for editorial improvements; in addition to your complaints, I thought it was common knowledge that your modified posts were simply taken as countermeasures for your bad behavior. I was wrong. There was a post that said, "I'm not responsible for what appears in this post. I'm bipolar and schizophrenic." I thought people would know you wouldn't possibly say that. The problem is they totally found it plausible you were bipolar and schizophrenic. I wonder why? Now that stuff about you drinking and people thinking you are an alcoholic? That is your doing; those are your words. You're the one that insinuated about your own self that you drink till you feel like everything is spinning. If you have a drinking problem, all the more reason I want you out of my discussions. Even if you don't, please stay away -- you're wasting my time and yours. And from now on, stop turning discussions at UD into your forum to complain about me. It's extremely rude to the other authors that you are spamming their threads with your personal vendetta. Even if I'm guilty as you say, you have no business impinging on the other UD authors by turning their threads into your private litany against me. So please, stop bringing it up on their threads. Set up your own website and whine all you like, but stop spamming other UD authors' threads. scordova
I repent of modifying Mung's posts. As penance, in the future, I'll just erase or delete them. I might leave an explanation like "banned for trolling". scordova
Salvador:
I modified Mung’s posts.
True. On multiple occasions. You deleted my posts. You deleted the content of my posts. You changed the content of my posts. Salvador:
I thought it would be obvious to all that they were modified...
False. So now Salvador knows at least one reason why I think he is a liar. But this is progress, imo. Salvador has finally admitted to modifying the content of my posts, not just deleting the content. My accusation is that Sal modified the content of some of my posts to make it appear as if I had written something which I had not in fact written. Sal now admits the truth of this fact. His excuse?
I thought it would be obvious to all that they were modified…
Really? Does that somehow justify what was done? Admission of wrongdoing does not constitute repentance. Do you repent, Sal? Mung
The topic was the tragedy of two CSIs. Let me state where I believe all or most IDists agree. In the coin+robot system there was no net increase in algorithmic information after the robot ordered the coins from a random state to all heads. This is analogous to bacteria evolving from one bacterium to a colony -- there is no net increase in algorithmic information. The only way a bacterium can gain algorithmic information is via genetic or other kinds of information exchange (like redesign by a designer like Craig Venter or God). The bacterial colony can augment its database of information by measuring the environment, but substantial increase in its capabilities must come from an outside source. Most ID proponents, myself included, do not believe there is any empirical or theoretical evidence that an information-poor environment that only says "live or die" can provide much input in increasing the algorithmic information in bacteria. Algorithmic information can include (but is not limited to):
1. blueprints for new proteins
2. blueprints for regulation of proteins
3. blueprints for using the proteins
I'm using the phrase "algorithmic information" because it is used in industry. It is generally well understood what it signifies. scordova
how did you manage to infer that I do not know how to program or do not understand compilers?
Never said that Mung, you're making stuff up again. What I did point out is you made stuff up about me and my understanding of compilers. You did so by misrepresenting what I said. Gee, Mung, now the conversation has gotten way off topic. You post garbage about me, then I have to try to set the record straight. See the pattern? You're a waste of time. scordova
Salvador:
You accused me of not knowing how to program, not understanding compilers etc. Then you confess that you don’t even have a computer science degree (I do).
Great. You have a computer science degree. I never said you didn't, right? I guess from the fact that you have a CS degree we're supposed to infer that you know how to program and that you understand compilers. But given that I do not have a computer science degree how did you manage to infer that I do not know how to program or do not understand compilers? Does the fact that I do not have a computer science degree mean that I cannot call BS on things you post with regard to compilers and programming? I honestly believe that I have written more programs in actual use than you have. Want to bet? How many programs have you written that you've managed to sell? IOW, you have a degree, I have actual practice and experience. It's my actual experience in the real world that allows me to call BS on your claims. But in threads you author, no one would be the wiser. Mung
I modified Mung's posts. I thought it would be obvious to all that they were modified when I wrote something to the effect (in CAPS):
I'D LIKE TO THANK SALVADOR FOR ALL HIS EDITORIAL IMPROVEMENTS TO MY POSTS. I SAY REALLY STUPID THINGS AND MAKE STUFF UP ABOUT SAL BECAUSE I HATE SAL SO MUCH. I WANT TO THANK HIM FOR CLEANING UP MY TROLL POSTS.....
Apparently some did not get the memo. Since that time, I've just deleted what you wrote and left the post empty. You are permanently banned from my discussions, and future uninvited visits will be dealt with by erasure of what you write. As amends, any such post that had an editorial improvement has been removed if I find it (except maybe a note pointing out you are trolling). So now you can't say that if you said something at UD, it was because I modified your posts. All the stuff you've said that remains is yours, not something I put in your posts. From the comment policy:
moderators are editors and it’s their job to make people’s words disappear before anyone else sees them. The second thing to remember is that we don’t have the time or inclination to get into debates over our editing decisions. Nagging us about a comment that didn’t get approved is only going to make us even less likely to approve your future comments.
All Mung's comments are subject to deletion on my discussions on the grounds it wastes time and detracts from more interesting matters than his vendetta to get me to beg for his forgiveness. PS Apologies to Winston that his thread has to be derailed by a confrontation between Mung and I. Mung should take it up elsewhere, instead of throwing his off-topic protests against me every chance he gets. scordova
Mung:
The reason you “toss me” is because I expose you for what you are.
Salvador:
In some of the posts I’ve deleted you’ve called me liar, hypocrite, and other names.
If the shoe fits... "The sting of any rebuke is the truth." - Benjamin Franklin But sure, better to delete the accusations than deal with them. Better to delete the evidence than admit it exists. You've modified the content of some of my posts to make it appear that I wrote something which I did not in fact write? True or false? Mung
The reason you “toss me” is because I expose you for what you are.
Baloney. In some of the posts I've deleted you've called me liar, hypocrite, and other names. In https://uncommondesc.wpengine.com/philosophy/the-shallowness-of-bad-design-arguments/ you accused me of not knowing how to program, not understanding compilers etc. Then you confess that you don't even have a computer science degree (I do). Then you question my background in thermodynamics, on what basis? You have no background in physics either; you, even by your own admission, can't comprehend the math. Then you come along to one of my discussions and ask me to write a tutorial on math. What's the matter, is simple algebra over your head? I end up wasting more time responding to your trolling and your false accusations than actually discussing the topic at hand. As to your waste of time comment:
If the coins were not flipped, whether the coins are “fair” or not is irrelevant.
The reason the coins are stated as fair is that it determines the a priori probability, which is important in scoring the CSI content of the coin configuration. Also, I make that statement lest anyone say the coins might not be fair; I provide that they are fair as part of the hypothesis under consideration so as to clarify the points. But you're so bent on disagreeing with everything I say, you'll dredge up stupid arguments just to troll my discussions. Here you are in the discussion that made me decide it is best to dispense with your posts:
Beg my forgiveness... https://uncommondesc.wpengine.com/philosophy/the-shallowness-of-bad-design-arguments/
Beg your forgiveness, Mung, as if you are God? You ought to thank me for deleting and editing your posts lest the readers conclude you are getting loony or alcoholic. scordova
Salvador:
Example of why Mung is a waste of time, and why I toss him from my discussions.
The reason you "toss me" is because I expose you for what you are. Even here you reveal your true character through your selective quoting of what I wrote. My point in your original thread was completely on topic and relevant. But that's not what matters to you. Let me quote you:
He’ll occasionally try to say something useful just to sneak in and participate, knowing I’ll remove his comment or edit it.
I have a question for you Sal, a simple one. What do you think motivated Winston's recent posts here at UD? Do you think it was something I wrote, and if so, why? CSI Confusion 1 CSI Confusion 2 The Tragedy of Two CSIs As near as I can tell you're the only one (other than Winston) here at UD authoring posts about CSI recently. His posts seem to be offered as correctives. Correctives to what? Mung
Got it. You calculate CSI based on an assumption no one believes to be true, and ignore the mechanism that is actually proposed to explain protein evolution. I think I've heard enough... wd400
wd400:
a) I don't include NS in calculations of CSI because NS is a necessity mechanism. If someone can show that such a mechanism acted in some specific path to a basic protein domain, I am ready to include that in my calculations, and I have shown how to do that. Please, check my long discussion with Elizabeth here: https://uncommondesc.wpengine.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ Posts 186 on.
b) You say: "they needn't be selected for, btw, just tolerated". That is simply wrong. Only positive selection increases the probabilistic resources that are in favour of the ultimate result. Any other mechanisms, including the famous genetic drift, do not increase the probabilities of any specific outcome, and therefore are irrelevant to the computation of dFSCI. That should be very obvious to anyone who understands probabilities, and yet darwinists find it so difficult to understand! So, you are wrong. The intermediate must be positively selected and expanded in the population, otherwise we are still in a purely random search (all unrelated outcomes are equiprobable).
c) You say: "So while you ignore natural selection you rely utterly on the idea there are no viable intermediates. That's what you need to prove." Absolutely not. It's you who must prove that they exist. I can simply say that none has ever been found. That's enough to make your hypothesis a myth, unsupported by facts. You must find the facts to support your hypothesis. Moreover, I have added that not only have no such intermediates ever been found, but there is no reasonable argument to expect that they exist at all. IOW, your hypothesis is both logically unfounded and empirically unsupported. gpuccio
We collectively hold our breath. Assertion or engagement. What will it be? Upright BiPed
Alan Fox: My compliments! Your post #63 is a true masterpiece of non sequitur and divagation. I have stated that Durston's FSC and Dembski's CSI, and my dFSCI, are the same thing. And I am going to show that it is that way. You know, I usually give support to my arguments in my discussions. So, let's take Durston's numbers and see what they mean. I will refer, again, to Table 1 in his paper. Let's take just one example out of 35: Ribosomal S12, a component of the 30S subunit of the ribosome. The length of the sequence is 121 AAs (not too long, indeed, and it is only a part of a much more complex structure). The analysis has been performed on 603 different sequences of the family. The null state has a complexity of 523 bits. That is the complexity of a random sequence of that length. IOWs, a complexity of 2^523 (which is approximately the same as 20^121). That is the complexity, and the improbability (as 1:2^523), of each specific sequence in the search space. In the following column, we can see that Durston's calculation, applying Shannon's principles to the comparison of the sequences in the set, gets a functional sequence complexity (FSC) for that family of 359 Fits (which means, functional bits). I will not discuss now if the calculation is right, or precise, or how he gets that number. I will just discuss what it means. Durston explains it very clearly, if only you take the time to read and understand it:
The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space.
(Please, note that in this paragraph there is a typo, the Fits for ribosomal S12 are 359, and not 379, as can be checked in Table 1, and as is obvious by the computation. I have used the correct value of 359 in the following discussion) IOWs, the target space, the number of sequences of that length that are functional, is calculated here to be about 10^49 sequences. IOWs 2^164. That is to say that the functional space (the target space) is made of approximately 2^164 (or 10^49) sequences. Therefore, the ratio of the target space to the search space is 2^164 : 2^523, that is 2^-359 (or 10^-108). (That is the same as 10^-106 percent of the search space). IOWs, the probability of finding a functional S12 sequence in the search space, by random search, is 1:10^108 (in one attempt). That is exactly the p(T|H) in Dembski's definition. (wd400, where are you?) So, they are the same thing. QED. gpuccio
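A quick numerical check of the arithmetic in the comment above, using only the figures gpuccio quotes from Durston's Table 1 (121 residues, a 523-bit null state, 359 Fits):

```python
import math

length = 121                               # residues in Ribosomal S12
null_bits = length * math.log2(20)         # ~523 bits: the null (random) state
fits = 359                                 # Durston's FSC for the family
target_bits = null_bits - fits             # ~164 bits

search_space = 20.0 ** length              # ~2^523 possible 121-residue sequences
target_space = 2.0 ** target_bits          # ~10^49 functional sequences
ratio = target_space / search_space        # ~10^-108, the quoted p(T|H)

print(f"null state: {null_bits:.0f} bits")
print(f"target space: ~10^{math.log10(target_space):.0f} sequences")
print(f"target/search ratio: ~10^{math.log10(ratio):.0f}")
```

Whether this ratio is the p(T|H) Dembski's definition calls for (i.e., whether a uniform random draw is the relevant H) is exactly what the rest of the thread disputes.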
why these concepts are not the same. Durston at least calculates his “functional sequence complexity”.
as predicted Upright BiPed
gpuccio Your English is fine, it's what's between the lines that I find interesting. For anyone who can read, my “major claim” is that there are no “selectable intermediates”, and that CSI is of fundamental relevance, both to evaluate the improbability of the whole sequence for RV, or (if and when selectable intermediates will be shown) to evaluate the improbability of the role of RV before and after the expansion of the selectable ... So, you don't include natural selection in your CSI calculations. You say this is because there are no "selectable" intermediates (they needn't be selected for, btw, just tolerated). So while you ignore natural selection you rely utterly on the idea that there are no viable intermediates. That's what you need to prove. But then, that's just "what use is half an eye" for proteins, dressed up in some math. If you had good evidence that there were no tolerable intermediates between protein families you wouldn't need CSI. So why bother? wd400
why these concepts are not the same.
Durston at least calculates his "functional sequence complexity". Complex specified information remains undefined. BTW How's the paper on semiotics coming? Alan Fox
Jerry:
We are not advocating replacing it with a design hypothesis but with a statement that the best current science knows no known method to account for life’s changes.
Well, I'm not sure this statement of yours is entirely accurate, Jerry. Michael Behe doesn't dispute common descent. Since life first got going on Earth, there have been huge changes. The environment, the continents, the climate have all changed dramatically over the last three billion years. There have been several mass extinctions, such as the KT event, and who is to say there are not more catastrophic changes in the pipeline. Evolutionary theory may be only a partial explanation for the way lifeforms were moulded by these events, but there is no other current theory that approaches it in explanatory power. ID explains nothing. There is no ID theory. Alan Fox
GP: It’s absolutely the same thing. Alan Fox: Sorry, gpuccio, I don’t agree. Kirk Durston has indeed made some calculations that stand up to scrutiny, even though they are far from immune from criticism. My own view is that Durston’s calculations tell us nothing we don’t already know.
With the absolute certainty of any physical law we have ever known, we can rest assured that Alan Fox will not engage/debate/explain/argue with GP as to why these concepts are not the same. He will simply make his assertions and hide behind them. Nothing, and no-thing, will ever change that. Upright BiPed
except where they are in conflict with reality, in which case I reserve the right to point out the disparity.
Reality says that your definition of the TOE never produced anything but trivial changes. Now this does not say that changes in life forms do not have a naturalistic origin but until a process comes along that can explain the changes in information in organisms, intelligence will have to remain a viable option and in the meantime Darwinian processes have been eliminated as a viable theory. And you should welcome the effort to get this theory eliminated from the curriculum. We are not advocating replacing it with a design hypothesis but with a statement that the best current science knows no known method to account for life's changes. I assume you will support that effort based on your comment about pointing out disparity when things are in conflict with reality. jerry
It’s absolutely the same thing.
Sorry, gpuccio, I don't agree. Kirk Durston has indeed made some calculations that stand up to scrutiny, even though they are far from immune from criticism. My own view is that Durston's calculations tell us nothing we don't already know.
One can use any term, but the concept is the same:
I'm afraid I think that is sloppy. It wouldn't do in medical diagnosis, it doesn't do in science in general.
...the complexity linked to the function, the improbability of the target space given the random hypothesis. That is very clear in Durston’s paper. If you understand the concepts, it is the same thing.
But those who are persuaded by evolutionary theory reject the premise that it is a purely random process. Design by the environment is assuredly not random.
It is really strange that you, who are so ready to believe that CSI cannot be really calculated, don’t even try to realize that Durston has abundantly calculated it, and that your only argument is that he uses another term (very similar, however) for it.
Let's leave belief out of it. I am simply unconvinced that CSI is a quantifiable quantity. Should I turn out to be wrong on that, the next hurdle is whether a calculation can demonstrate that a system, process, or entity arose by "Design" while that too remains a vague, undefined concept. It just isn't Science. I don't want to interfere in people's beliefs or religion - except where they are in conflict with reality, in which case I reserve the right to point out the disparity. Alan Fox
Alan Fox:
The term he uses, which originated with Leslie Orgel and developed by Robert Hazen is “Functional Sequence Complexity”.
It's absolutely the same thing. One can use any term, but the concept is the same: the complexity linked to the function, the improbability of the target space given the random hypothesis. That is very clear in Durston's paper. If you understand the concepts, it is the same thing. It is really strange that you, who are so ready to believe that CSI cannot be really calculated, don't even try to realize that Durston has abundantly calculated it, and that your only argument is that he uses another term (very similar, however) for it. gpuccio
don’t raise a mention of CSI
You are confusing a concept with a measure of some aspect of that concept.
Functional Sequence Complexity
This is a measure of the information in something (a protein sequence or the analogous gene sequence) that is complex and specifies the function of something else. Sounds like a measure of FCSI to me. jerry
A thousand coins get another distribution.
With respect to the proportion of heads vs. tails it is the binomial distribution. I provided details here along with expectation and standard deviations: https://uncommondesc.wpengine.com/mathematics/ssdd-a-22-sigma-event-is-consistent-with-the-physics-of-fair-coins/ Strictly speaking we could even use the binomial for 1 coin. scordova
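A minimal sketch of the binomial picture Sal refers to, for the 500-coin case in the linked post; the "22 sigma" figure in that post's title follows directly from the mean np and standard deviation sqrt(np(1-p)):

```python
import math

n, p = 500, 0.5                      # 500 fair coins, as in the linked post
mean = n * p                         # expected heads: 250
sd = math.sqrt(n * p * (1 - p))      # ~11.18
sigmas = (n - mean) / sd             # all-heads sits ~22.4 sigma from the mean
p_all_heads = 0.5 ** n               # ~3e-151

print(f"mean={mean:.0f}, sd={sd:.2f}, all heads is {sigmas:.1f} sigma out")
print(f"P(all heads) = {p_all_heads:.3g}")
```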
Coin flips are a uniform distribution where there are two equi-probable events. A die is a uniform distribution where there are six equi-probable events. Two dice are the combination of two equi-probable events, and that gets another distribution. Two coins get a different distribution. A thousand coins get another distribution. So I suggest we use "uniform distribution" instead, or the proper name for the particular distribution, and explain why this distribution is relevant. And if it comes down to one of just two outcomes, explain why it is relevant, especially as it relates to natural selection or other natural processes. Coin flips just obscure what is going on. After 8 years of them, no one seems to understand their relevance. Nor can they define CSI in any way that people understand. I think there is a correlation. jerry
Why does homochirality need a solution? Half of a racemic mixture is available as a substrate.
Same reason as if you found a collection of coins all heads in some locality. There is a pool of millions of "racemic" coins out there. It is a problem; otherwise we wouldn't have teams of OOL researchers trying to find mindless solutions to the problem. Not to mention, even if they did find initial conditions to create homochirality, thermal and quantum noise will dissipate homochirality over time, just like shaking a table of coins that initially start out all heads. It is a serious problem for the Blind Watchmaker hypothesis. And now another quotation from Design Inference page 50:
If however multiple chance hypotheses could be responsible for E...
We can consider multiple chance hypotheses. For the sake of completeness, 1 of the 20 common amino acids in life is not chiral (neither left nor right). I seem to recall there might be one amino acid that may not naturally have a 50/50 chance of L and D forms. The point is, like coins, we can empirically and theoretically determine an approximate probability. We can even be generous and say the ratios are 60% favorable on average to the L state. Even then, the binomial distribution will reject homochirality as the result of chance from a pre-biotic soup. The only explanation for homochirality in the present day is the robots we call cells, but then that raises the question, who made the robot? scordova
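A small sketch of the "generous" 60/40 case Sal mentions; the 121-residue length is borrowed from the Ribosomal S12 example elsewhere in this thread, purely for illustration:

```python
# Even granting a 60% bias toward the L-form, an all-L chain of modest length
# stays vanishingly improbable under the chance hypothesis.
p_L = 0.6
length = 121
p_all_L = p_L ** length
print(f"P(all L-form, {length} residues, p={p_L}) = {p_all_L:.3g}")  # ~1.4e-27
```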
Why does homochirality need a solution? Half of a racemic mixture is available as a substrate. Alan Fox
Related comment – And please no one use coin flips or dice rolls to illustrate CSI. I have never seen the relevance
It relates well to the homochirality argument and any time we see duplicates of things in biology (a bacterial colony evidences high duplication of the ancestor bacterium). Coin analogies have similar if not identical distributions to these questions. Coins are textbook examples of how to illustrate relevant distributions. The robot is analogous to the copy machines in living cells. I chose the coins and robots to clarify the issues at hand. If we can't solve the paradoxes for coins and robots, we aren't going to solve them for homochirality and self-replicating cells, since the same statistics are in play. scordova
Recent Exchanges with Kirk Durston don't raise a mention of CSI. One hit in a comment of mine and the full phrase by someone else. Durston doesn't use it at all. Alan Fox
Oops missed blockquotes. First paragragh is gpuccio. Alan Fox
Yes, I do want to go through it again. If you have arguments, please detail them, and show why Durston has not measured CSI in 35 protein families. Or just admit you were wrong. If you like. Well, my main issue with your claim is that nowhere does Durston claim to calculate CSI of anything. The term he uses, which originated with Leslie Orgel and developed by Robert Hazen is "Functional Sequence Complexity". Alan Fox
The word "definition" has finally appeared on this thread. Does anyone not see the irony of this thread as we try to understand CSI? There have probably been more than 10,000 comments on probably over a hundred previous threads trying to understand this concept. There is no layman's definition, mainly because there is no definition of the word "specified." It seems there are long discussions on this blog about concepts where the people discussing them do not agree on a common definition for the concept being discussed. I believe "complex" can be adequately defined for the lay person and so can "information." But "specified", for such a common word, seems to be left out of the common understanding. I am aware that the word "information" can have nearly hundreds of definitions, but the average lay person will not have a hard time understanding how it is being used in a biological framework. Those who disagree should contact those who do bioinformatics. http://en.wikipedia.org/wiki/Bioinformatics Related comment - And please no one use coin flips or dice rolls to illustrate CSI. I have never seen the relevance. Probability distributions are fair game. And as far as probability is concerned, can there be a probability distribution for something that has never happened, at least according to our current knowledge? Natural selection has never produced any useful biological information in terms of the evolution debate. What would such a probability distribution look like that considers it as a cause of something of consequence when there are zero instances of such an event given there were a gazillion potential events where natural selection could operate? jerry
Alan Fox @10: Ah, so Dawkins was using Weasel to demonstrate something un-Darwinian? Of course not; let’s not pretend that he wasn’t attempting to show how Darwinian processes operate. That was the whole point of his program. Yet his program has that little thing that is wholly un-Darwinian – that target phrase, that careful forcing of climbing up Mount Improbable. Subsequent evolutionary algorithms that generate anything of consequence, whether Avida or NASA’s antenna, utilize the same approach: a guiding target phrase, a goal, a purpose-driven, ends-oriented process. Thoroughly un-Darwinian. I'm glad you acknowledge, though, that his program doesn't demonstrate how Darwinian evolution works. :)
Laughably untrue. Dawkins is on record as saying he didn’t even bother to keep his code because it wasn’t important.
And yet, here you are, defending Weasel. :)
It showed the power of selection against random draw.
Almost. But you are painting it in a generic light in order to avoid the context of his claim. His purpose was to show how a Darwinian process could produce something that random draw couldn't. So, yes, it showed the power of selection. But it is the power of selection when: (i) it is carefully coaxed toward a target phrase through intelligent design (thoroughly un-Darwinian), and (ii) there is a sequence of slight, successive intermediate steps leading from A to Z (thoroughly unproven in the case of biological structures).
The environment designs.
Yeah, sure. Let’s confuse the conversation by saying that nature “designs.” Great evolutionist talking point. What plan or purpose or thought or intention does nature have in mind when it designs? Look, just FYI, every time I use “design” in the context of the evolution/design debate it means “design” in the ordinary, dictionary definition of the word, not a twisted, forced definition (again, attempting to bring design in through the backdoor of natural processes) that can support materialism. Eric Anderson
Sal @30:
We don’t ask, “what is the probability a Designer will make a protein from a pre-biotic soup” we ask, “what is the probability a protein will emerge from a random prebiotic soup”. For physical artifacts, the CSI score is based on the rejected mechanism (chance hypothesis, Shannon degrees of freedom) not the actual mechanism that created the object.
Exactly. ----- Winston @36:
That’s where specification comes in. Improbable events which are specified are rare. Improbable events themselves are not rare. In order to deem an event to[o] rare to plausibly happen [by chance] we need to show that it is specified and complex. That’s the use of specified complexity.
Exactly. Eric Anderson
wd400:
So your major claim seems to be that there are “selectable intermediates”, and CSI is of little relevance as no one in their right mind thinks proteins arose at random?
Is that addressed to me??? For anyone who can read, my "major claim" is that there are no “selectable intermediates”, and that CSI is of fundamental relevance, both to evaluate the improbability of the whole sequence for RV, or (if and when selectable intermediates will be shown) to evaluate the improbability of the role of RV before and after the expansion of the selectable intermediate. For your convenience, I paste the relevant phrases from my previous posts: "The NS part must be supported by facts, like all necessity mechanisms. It is not." "unfortunately, no selectable intermediates are known for basic protein domains, for the simple fact that there is no reason they should exist and because none was ever found." "In an old post I showed how selectable intermediates, if they existed, could help the process and how CSI allows us to quantify the RV part even in the presence of NS events." My English may not be so good, but I thought that was clear enough. gpuccio
wd400: "no one in their right mind thinks proteins arose at random?" You are right. coldcoffee
So your major claim seems to be that there are "selectable intermediates", and CSI is of little relevance as no one in their right mind thinks proteins arose at random? wd400
wd400:
So, how much CSI selection can create?
The role of NS in the neodarwinian model is not to "create CSI", but only to expand positive selections and fix them. That does not create CSI (the functional information, however complex, must already be in the sequence generated by RV). But it can certainly modify the probabilistic resources of the system, because CSI must be computed separately for each transition made by RV, and selectable intermediates, if they exist, can "fragment" the process into smaller sub-processes, as I have shown, with computations, in an old post. Again, the problem is that complex information is not deconstructable into simpler selectable intermediates. That's why the neo darwinian model is based on a myth, and from a scientific point of view the whole transition to a new basic protein domain can only be explained by RV (which is ruled out by CSI) or by a design inference. Is that clear? gpuccio
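A hypothetical sketch of the "fragmentation" point gpuccio makes here: the bit values are invented, and the only thing illustrated is that a selected, expanded intermediate replaces one large random-variation target with smaller ones, which is why he says the computation must be done per transition.

```python
# Hypothetical illustration: dFSCI is computed separately for each
# random-variation transition, so a selectable, expanded intermediate splits
# one large target into two smaller ones. Bit values are made up.
whole_jump_bits = 100
p_whole_jump = 2.0 ** -whole_jump_bits   # one 100-bit transition by RV alone

step_bits = 50                           # each step, if an intermediate is
p_each_step = 2.0 ** -step_bits          # selected and expanded in between

print(f"single transition: 2^-{whole_jump_bits} = {p_whole_jump:.3g}")
print(f"each of two steps: 2^-{step_bits} = {p_each_step:.3g}")
```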
Alan Fox: I can go through anything you like. You challenged us to "Calculate the CSI of something" (#12). I answered that "Durston has calculated the CSI of 35 protein families" (#20). You objected "Not true. It's a different metric" (#23). I answered: "No, it isn't" (#24). wd400 then asked: "How did Durston calculate p(T|H)?" (#25) I answered: "Durston calculated the functional complexity of each of the 35 protein families by comparing the reduction in uncertainty given by the functional specification (being a member of the family), versus the random state. That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state. The target state is defined, in the functional subset of CSI, by the function. In this case, the function is very simply the function of the protein family. CSI is simply the complexity linked to the function. It's just as simple as that. The confusion is only created by the dogmatism of neo darwinists who cannot accept the truth." (#34) wd400 has not answered that. Neither have you. Yes, I do want to go through it again. If you have arguments, please detail them, and show why Durston has not measured CSI in 35 protein families. Or just admit you were wrong. If you like. gpuccio
(cos, at the moment it sounds like you think neo-darwinism is wrong, so you aren't including it in your calculations. In which case, you wouldn't need CSI!) wd400
So, how much CSI selection can create? wd400
wd400: Must we really go back to the basics? The "explanation" we talk of is, obviously, neo darwinism. It is based on two sequential processes: RV and NS. The RV part is essential to the model. It has to be quantified. CSI allows us to quantify it. The NS part must be supported by facts, like all necessity mechanisms. It is not. In an old post I showed how selectable intermediates, if they existed, could help the process and how CSI allows us to quantify the RV part even in the presence of NS events. But, unfortunately, no selectable intermediates are known for basic protein domains, for the simple fact that there is no reason they should exist and because none was ever found. Therefore, the neo darwinian model is neither reasonable nor supported by any facts. On the contrary, the design inference is strongly and positively supported by the easily observed connection between CSI and conscious design. These are really the basics; I supposed you knew them. gpuccio
Hmm, probably shouldn't comment from my phone. Comment 37 should say: Your two comments here seem contradictory. First you say CSI compares observed data with a random expectation, then you say CSI tests explanations for observed data. How does that work when the mechanism isn't random? How much CSI (as you defined it) can natural selection create? How do we know that? wd400
That is exactly the improbability of getting a functional sequence by a random search.
Which is exactly irrelevant to evolutionary processes. Alan Fox
@ gpuccio Yes, the pantomime season is upon us. Oh, no it isn't! Do you really want to go through this again? Alan Fox
Sal, By "mechanisms in operation", I was referring the natural laws that operate in a system. I'm not referring the actual operations that produced the object. I'm in total agreement with what you've written there. Winston Ewert
Your two comments here seem .First you say compares observed data with a random expectation, then you say csi tests explanations for observed data. How does that work? How much (as you defined it) can natural selection create? How do we know that? wd400
I’m sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that’s the case, what use is it?
But improbable outcomes happen all the time. Each snowflake is highly improbable. But it still snows. Every possible poker hand is highly improbable, yet we can still deal cards. In certain situations, some highly improbable outcome or other is highly probable. This can happen because there can be a large number of possible outcomes, each individually improbable, but when combined highly probable. That's where specification comes in. Improbable events which are specified are rare. Improbable events themselves are not rare. In order to deem an event too rare to plausibly happen we need to show that it is specified and complex. That's the use of specified complexity. Winston Ewert
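A quick way to see that point with ordinary playing cards (a minimal sketch in Python; the numbers are just the standard poker combinatorics, nothing specific to biology):

```python
from math import comb, log2

# All distinct 5-card hands from a standard 52-card deck.
total_hands = comb(52, 5)                      # 2,598,960

# Any *particular* hand is highly improbable ...
p_one_hand = 1 / total_hands
print(f"P(one specific hand)  = {p_one_hand:.2e} (~{-log2(p_one_hand):.1f} bits)")

# ... yet dealing *some* improbable hand is certain, because the
# individually improbable outcomes together exhaust the possibilities.
print(f"P(some hand is dealt) = {total_hands * p_one_hand:.1f}")

# A specified class of hands (royal flush, one per suit) stays rare:
# the specification picks out only 4 of the 2,598,960 outcomes.
p_royal_flush = 4 / total_hands
print(f"P(royal flush)        = {p_royal_flush:.2e} (~{-log2(p_royal_flush):.1f} bits)")
```

The improbability alone does not distinguish a royal flush from any other deal; the independently given specification is what does.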
wd400:
I’m sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that’s the case, what use is it?
The use, obviously, is in giving a metric to evaluate how improbable an explanation is. That's what science does. gpuccio
wd400: Durston calculated the functional complexity of each of the 35 protein families by comparing the reduction in uncertainty given by the functional specification (being a member of the family), versus the random state. That is exactly the improbability of getting a functional sequence by a random search. It is exactly CSI. The simple truth is that CSI, or any of its subsets, like dFSCI, measures the improbability of the target state. The target state is defined, in the functional subset of CSI, by the function. In this case, the function is very simply the function of the protein family. CSI is simply the complexity linked to the function. It's just as simple as that. The confusion is only created by the dogmatism of neo darwinists who cannot accept the truth. gpuccio
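For readers who want to see the shape of that "reduction in uncertainty" calculation, here is a minimal sketch in Python using a made-up toy alignment of four short sequences. The published figures for the 35 protein families come from large multiple sequence alignments, not from anything this small, so treat the numbers as illustrative only:

```python
from collections import Counter
from math import log2

# Toy "multiple sequence alignment" of a hypothetical protein family.
# Real estimates of this kind use thousands of aligned sequences.
alignment = [
    "MKVLA",
    "MKVLG",
    "MRVLA",
    "MKILA",
]

H_ground = log2(20)   # null (random) state: 20 amino acids, equiprobable

def site_entropy(column):
    """Shannon uncertainty (bits) of the amino-acid frequencies at one site."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Functional complexity = summed reduction in uncertainty from the random
# state to the functional (family) state, site by site.
fits = sum(H_ground - site_entropy(col) for col in zip(*alignment))
print(f"Estimated functional complexity: {fits:.1f} bits (fits)")
```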
If the mechanisms that created your distribution of chirality are not random, then a CSI calculation based on that assumption is useless.
Agreed. The assumed chance hypothesis can be falsified. scordova
When we calculate the CSI in the homochirality of a protein, we presume the CSI score from the natural binomial distribution in evidence from chemistry for L and D amino-acids (just like fair coins obey a binomial distribution) Which is precisely the problem, surely. If the mechanisms that created your distribution of chirality are not random, then a CSI calculation based on that assumption is useless. wd400
It argues that if evolution is an improbable account of life, we are justified in dismissing it. I'm sorry, this reads like you are saying CSI argues that improbable explanations are improbable. If that's the case, what use is it? wd400
CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation.
In the case of artifacts, we may not have access to the mechanism in operation; the mechanism is unknown. The EF was meant to adjudicate whether an artifact was designed independent of the mechanism that facilitated its creation. In the case of human designers, it doesn't make sense to say what probability there is that a human designer will make Mt. Rushmore. The a priori probability of the true mechanism actually doing a task (considering its abilities or "willingness" or programming) should not figure into the CSI score of a physical artifact. We don't ask, "what is the probability a Designer will make a protein from a pre-biotic soup", we ask, "what is the probability a protein will emerge from a random prebiotic soup". For physical artifacts, the CSI score is based on the rejected mechanism (chance hypothesis, Shannon degrees of freedom), not the actual mechanism that created the object. When we calculate the CSI in the homochirality of a protein, we presume the CSI score from the natural binomial distribution in evidence from chemistry for L and D amino-acids (just like fair coins obey a binomial distribution). We don't base the CSI score on the probability that the intelligent designer (the true mechanism) created life. scordova
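The arithmetic behind that kind of score is simple enough to sketch in Python. The 50/50 L/D assumption and the sequence lengths below are illustrative assumptions, not measurements:

```python
from math import log2

def chance_rejection_bits(n_sites, p_per_site=0.5):
    """Improbability, in bits, of hitting one specified outcome at every
    site under an assumed independent per-site chance hypothesis
    (e.g. a 50/50 L/D chirality choice, or a fair coin landing heads).
    Computed as -log2(p_per_site ** n_sites) without underflow."""
    return -n_sites * log2(p_per_site)

# A 300-residue protein that is entirely L-form, under the assumed
# binomial (fair-coin-like) chance hypothesis:
print(chance_rejection_bits(300))    # 300.0 bits

# The 2000-coins-all-heads example discussed elsewhere in this thread
# works out the same way:
print(chance_rejection_bits(2000))   # 2000.0 bits
```

The score stands or falls with the assumed chance hypothesis, which, as the exchange above notes, can itself be challenged or falsified.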
Alan Fox, You've asked for a CSI calculation. Dembski's CSI is not about determining the probabilities, but about the consequence of those probabilities. It argues that if evolution is an improbable account of life, we are justified in dismissing it. It provides absolutely nothing to attempt to establish that life is, in fact, improbable under Darwinian mechanisms. However, almost every single argument put forward by intelligent design, whether irreducible complexity, protein folding, no free lunch, etc., seeks to establish that the probability is very low. Those are the arguments we will point to in order to establish that the probability of life is low. We will argue that those arguments show that the probability of life is far too low to accept Darwinism as an account for it. Winston Ewert
Mung wrote: whether the coins are “fair” or not is irrelevant.
Example of why Mung is a waste of time, and why I toss him from my discussions. scordova
My post in that thread before Sal modified its content:
Salvador:
The coins were fair, they just happen to all show heads. The coins weren’t flipped, they are found that way in some box or on the floor.
If the coins were not flipped, whether the coins are "fair" or not is irrelevant. So what does this have to do with CSI, if anything? For those who manage to read this before Salvador turns it into something I did not write, Shannon information is based upon probabilities. Is CSI any different? If a coin is not flipped, how do you calculate the probabilities? What's "fair" got to do with it?
Winston Ewert:
CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation.
This is the same objection I made to Sal's nonsense about CSI. But according to Salvador my posts are "off topic" and I am a "troll." Mung
CSI has come to refer to two distinct and incompatible concepts. This has lead to no end of confusion and flawed argumentation.
Exhibit A Mung
gpuccio, How did Durston calculate p(T|H)? wd400
Alan Fox: No, it isn't. gpuccio
@ gpuccio Not true. It's a different metric. Alan Fox
I think pastor Joe Boot, although he is talking about the universe as a whole in the following quote, illustrates very well the insurmountable problem that 'context dependency' places on reductive materialism:
"If you have no God, then you have no design plan for the universe. You have no prexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is flux. It's just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There's no design plan. It's like my kids do 'join the dots' puzzles. It's just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheists is without that (final picture). There is no preestablished pattern (to connect the facts given atheism)." Pastor Joe Boot - 13:20 minute mark of the following video Defending the Christian Faith – Pastor Joe Boot – video http://www.youtube.com/watch?v=wqE5_ZOAnKo
Supplemental quote:
‘Now one more problem as far as the generation of information. It turns out that you don’t only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; Higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not insure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn’t matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.’ - Stephen Meyer - (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate - 2009) Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681
bornagain77
Mr. Fox, despite your, and other Darwinists', stubborn reluctance to admit to the abject failure inherent in the "Weasel" project for providing any support whatsoever for Darwinian claims, I am grateful for what Dawkins' "Weasel" project has personally taught to novices like me. Because of the simplicity of the program and the rather modest result, "Methinks it is like a weasel", that the program was trying to achieve, it taught me in fairly short order, in an easy to understand way, that,,
"Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information." - William Dembski
In fact so effective was Dawkins' "Weasel" project at teaching me this basic, 'brick wall', limitation for material processes to create even trivial levels of functional information, that I highly recommend Wiker & Witt's book "A Meaningful World" in which they show, using the "Methinks it is like a weasel" phrase that Dawkins used from Shakespeare's Hamlet, that the problem is much worse for Darwinists than just finding the "Methinks it is like a weasel" phrase by a blind search, since the "Methinks it is like a weasel" phrase makes no sense at all unless the entire play of Hamlet is taken into consideration so as to give the "Weasel" phrase context. Moreover, the context from which the phrase derives its meaning comes from several different levels of the play. i.e. The ENTIRE play provides meaning for the individual "Weasel" phrase.
A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature - Book Review Excerpt: They focus instead on what "Methinks it is like a weasel" really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the "it" refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. http://www.thinkingchristian.net/C228303755/E20060821202417/
In fact it is interesting to note what the overall context is for "Methinks it is like a weasel" that is used in the Hamlet play. The context in which the phrase is used is to illustrate the spineless nature of one of the characters of the play. To illustrate how easily the spineless character can be led to say anything that Hamlet wants him to say:
Ham. Do you see yonder cloud that ’s almost in shape of a camel? Pol. By the mass, and ’t is like a camel, indeed. Ham. Methinks it is like a weasel. Pol. It is backed like a weasel. Ham. Or like a whale? Pol. Very like a whale. http://www.bartleby.com/100/138.32.147.html
After realizing what the context of 'Methinks it is like a weasel' actually was, I remember thinking to myself that it was perhaps the worst possible phrase Dawkins could have chosen to try to illustrate his point, since the phrase, when taken in context, actually illustrates that the person saying it was easily deceived and manipulated into saying the phrase by another person. Which I am sure is hardly the idea, i.e. deception and manipulation by a person to get a desired phrase, that Dawkins was trying to convey with his 'Weasel' example. But is this context dependency that is found in literature also found in life? Yes! Starting at the amino acids of proteins we find context dependency:
Fred Sanger, Protein Sequences and Evolution Versus Science - Are Proteins Random? Cornelius Hunter - November 2013 Excerpt: Standard tests of randomness show that English text, and protein sequences, are not random.,,, http://darwins-god.blogspot.com/2013/11/fred-sanger-protein-sequences-and.html (A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics – 2012 Excerpt (Page 4): The Probabilities Get Worse This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf
Moreover, context dependency is found on at least three different levels of the protein structure:
"Why Proteins Aren't Easily Recombined, Part 2" - Ann Gauger - May 2012 Excerpt: "So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required." http://www.biologicinstitute.org/post/23170843182/why-proteins-arent-easily-recombined-part-2
Moreover, it is interesting to note that many (most?) proteins are now found to be multifunctional depending on the overall context (i.e. position in cell, cell type, tissue type, etc.) that the protein happens to be involved in. Thus, the sheer brick wall that Darwinian processes face in finding ANY novel functional protein to perform any specific single task in a cell in the first place (Axe; Sauer) is only exponentially exacerbated by the fact that many proteins are multifunctional and, serendipitously, perform several different 'context dependent' functions within the cell:
Human Genes: Alternative Splicing (For Proteins) Far More Common Than Thought: Excerpt: two different forms of the same protein, known as isoforms, can have different, even completely opposite functions. For example, one protein may activate cell death pathways while its close relative promotes cell survival. http://www.sciencedaily.com/releases/2008/11/081102134623.htm Genes Code For Many Layers of Information - They May Have Just Discovered Another - Cornelius Hunter - January 21, 2013 Excerpt: “protein multifunctionality is more the rule than the exception.” In fact, “Perhaps all proteins perform many different functions by employing as many different mechanisms." http://www.fasebj.org/content/23/7/2022.full
Context dependency, and the problem it presents for 'bottom up' Darwinian evolution is perhaps most dramatically illustrated by the following examples in which 'form' dictates how the parts are used:
An Electric Face: A Rendering Worth a Thousand Falsifications - Cornelius Hunter - September 2011 Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peak at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.” https://www.youtube.com/watch?v=wi1Qn306IUU What Do Organisms Mean? Stephen L. Talbott – Winter 2011 Excerpt: Harvard biologist Richard Lewontin once described how you can excise the developing limb bud from an amphibian embryo, shake the cells loose from each other, allow them to reaggregate into a random lump, and then replace the lump in the embryo. A normal leg develops. Somehow the form of the limb as a whole is the ruling factor, redefining the parts according to the larger pattern. Lewontin went on to remark: “Unlike a machine whose totality is created by the juxtaposition of bits and pieces with different functions and properties, the bits and pieces of a developing organism seem to come into existence as a consequence of their spatial position at critical moments in the embryo’s development. Such an object is less like a machine than it is like a language whose elements … take unique meaning from their context.[3]“,,, http://www.thenewatlantis.com/publications/what-do-organisms-mean
bornagain77
Alan Fox: Durston has calculated the CSI of 35 protein families. Is that cogent enough? gpuccio
Oops: status of minor god. Alan Fox
By the way it is possible to calculate the complexity of a rock and a strand of DNA and a protein.
A demonstration would elevate you to the status on minor god. Go for it. Show me how to calculate the complexity of a rock. Do I get to pick which one? Alan Fox
Yes, on the fact that information can be created and it has always been trivial. If I am not correct on both please provide a counter example.
You are assuredly correct, Jerry. Man, you are on a roll. (PM Ras) Alan Fox
Jerry is again correct.
Yes, on the fact that information can be created and it has always been trivial. If I am not correct on both please provide a counter example. By the way it is possible to calculate the complexity of a rock and a strand of DNA and a protein. jerry
There are various aspects of the design issues:
1. the probability an algorithm or information processing mechanism can generate new concepts
2. the probability designed physical artifacts can be synthesized by random processes
3. situations where both probabilities are in play
I don't think there is much disagreement in the ID community about #1. An evolutionary computation, a biological "computation", is fundamentally limited in the class of new concepts (platonic forms, ideas, etc.) that it can generate that match what we humans subjectively perceive as designed. This is where No-Free-Lunch is blatantly obvious. When we speak of an algorithm being unable to spontaneously increase its algorithmic information, I don't think there is much dispute about that. All the algorithm can do, at best, is make a variety of representations of what is already inside it. For example, suppose we have an algorithm to make a variety of rectangles by stating the Euclidean X,Y coordinates of the corners. Here are some example outputs: (0,0) and (4,4); (-1,-1) and (3.14, 3.14); etc. This can be done with a random number generator: we simply take two random numbers and duplicate them. The random input is constrained by the program; it will never generate more algorithmic information than the concept of rectangles (or pairs of identical numbers). It will not describe space shuttles. We might try to mutate the computer code randomly, and all you'll get is a mess. There is no real increase in conceptual information; the only variety proceeds from the random inputs, but this does not add new insights, it does not add new platonic concepts. There is no free lunch. (A sketch of this toy generator appears below.)
I don't think there is much disagreement about the limits of evolutionary computation to create fundamentally new conceptions (specifications) beyond what was front loaded into it either explicitly or implicitly (implicit is usually the case). It's when we get out of the realm of conceptual information increase to the transfer of conceptual information into a physical representation that we run into issues. In the case of the robot, let us say all it knows how to do is make coins show heads. It can never self-evolve new concepts. Randomly mutating the robot will likely result in robot malfunction and failure, not an increase in new conceptual abilities. The NFL theorems clearly work well in the case of the Robot's algorithmic information. If we can agree at least about the Robot's inability to create new specifications beyond those it was front loaded with, then we have at least one thing we can agree on. scordova
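A minimal Python sketch of that toy rectangle generator (illustrative only; the coordinate range is an arbitrary assumption):

```python
import random

def make_rectangle():
    """The front-loaded 'concept': a figure described by two opposite
    corners, each corner built by duplicating one random number,
    as in the examples above: (0,0) and (4,4), (-1,-1) and (3.14, 3.14)."""
    a = random.uniform(-10, 10)   # arbitrary illustrative range
    b = random.uniform(-10, 10)
    return (a, a), (b, b)

# However many times it runs, the output never wanders outside the class
# the program was written to produce; the random inputs supply variety,
# not new concepts.
for _ in range(3):
    print(make_rectangle())
```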
I believe the "C" only has reference to the "I." So it is the "I" that is being assessed as far as complexity. The "S" is what differentiates one event from another event. Without the "S" the concept would have no meaning. A rock sitting in the middle of the river bed contains information, namely the arrangement of the molecules that compose it but no one would say it specifies or is specified but it surely can be very complex. Specifies is necessary as there must be two independent entities and one specifies the other or is specified by the other. The "F" is added to limit the events to those where the specified entity has a recognizable function. FCSI is a subset which is easily understood because of the specified function. jerry
PS @ Eric Anderson You may infer from my previous comment n° 11 that CSI is unquantifiable, meaningless, useless. Is that cogent enough? Alan Fox
CSI is an incredibly simple concept. I have yet to hear any cogent criticism of the concept.
OK then. Calculate the CSI of something. Please show your working. Alan Fox
To say that natural forces do not create information is a dead end argument. Of course natural forces create information once original information is available.
Jerry is again correct. The environment is the designing element in evolutionary processes. Of course, Creationists should direct their fire to Origin-of-Life theories where the science is far from settled. But nobody listens to me. ;) Alan Fox
“Cumulative selection” as you call it, is after all, precisely what Darwinian evolution is supposed to provide. It is quite clear that Dawkins was trying to demonstrate the “power of cumulative selection [read Darwinian evolution].”
Eric, have a look at Wikipedia as you appear to have only absorbed Creationist propaganda on the subject.
Look, it shouldn't be that hard for people to say, “Sorry, bad example.” Instead, Dawkins lovers continue to defend Weasel tooth and nail.
Laughably untrue. Dawkins is on record as saying he didn't even bother to keep his code because it wasn't important. Creationists got their teeth into Weasel 30 years or more after it appeared in "Blind Watchmaker". I note they have been much less critical of later more sophisticated programs such as bio-morphs and those that generated sea shells and spider webs.
It was wrong. It didn’t demonstrate what he thought it did.* He was called on it, and rightly so. Let’s stop trying to defend the indefensible or rewrite history.
It did all it was ever meant to do. It showed the power of selection against random draw.
Ironically, instead it showed how you can sneak design in through the back door, as evolutionists are so often wont to do and as virtually every subsequent “evolutionary algorithm” that performs anything interesting does.
Who is disputing that design happens? The environment designs. Breeders design. Alan Fox
EA @ 6,8 Bravo :) Optimus
I fear this discussion may be generating confusion, rather than light. In order to help remedy the situation, I want to lay out, if I may, the crux of the matter.
Known Mechanism?
The whole point of CSI, as Dembski proposed, was to identify the likely provenance of an artifact in those cases in which the actual origin, meaning the actual mechanism that produced the artifact, is unknown. Further, if the particular mechanism that produced an artifact is known, then we never invoke the concept of CSI, because we already know the provenance. The entire concept is useful precisely in those instances in which the actual, historical, source or mechanism is unknown. So it is certainly not the case that we calculate CSI only in those cases in which the mechanism that produced the artifact is known. Quite the opposite is true. The only time we use the concept to try and infer the best explanation for the origin of the artifact is when the actual origin is unknown.
Pro-forma Mechanism
Now, we could say that we calculate CSI with respect to various competing "pro-forma mechanisms" or "potential mechanisms" or "proposed mechanisms," etc. That is perfectly fine. And in those cases the mechanisms are broad in nature: chance, necessity, design. And of those three the only one with respect to which it makes sense to do any calculation is chance, because necessity already carries a probability of 1 and design is not amenable to a probability calculation (or, per another viewpoint, could also be viewed as 1). So as a result, we always calculate 'C' with respect to chance, and typically that is adequately accomplished through the simplest known parameters: nucleotides interacting naturally to form a chain of DNA, amino acids interacting to form a protein chain, etc. (a sketch of this kind of calculation appears below). Thus, in virtually all cases, we are calculating the 'C' of CSI with respect to a hypothetical or a pro-forma chance scenario. And we do so irrespective of whether we know that chance is the actual mechanism or not.
Problems with CSI?
CSI is an incredibly simple concept. I have yet to hear any cogent criticism of the concept. Are there interesting corner cases, like Sal's self-reproducing cells? Sure. But in essentially all those cases we are dealing with semantics and can easily resolve the imagined problems with CSI by stating the particular case with more clarity. Problems do arise when we start to think that we can calculate CSI with some kind of mathematical precision that will be the final definitive demonstration of CSI's existence or non-existence in a particular case. We don't and we can't calculate CSI, per se. We calculate 'C'. The 'S' and the 'I' are not amenable to simple mathematical reduction. They are concepts that depend on experience, logic, meaning, context, understanding, etc. Are those concepts challenging in their own right at times? Certainly. But our inability to precisely calculate them in no way invalidates or diminishes the importance of CSI as a tool for helping us arrive at an inference to the best explanation precisely in those cases in which the origin of a particular artifact is unknown. That is the whole point of CSI, and it is remarkably effective at carrying the weight of that burden, if we keep our eye on the ball. Eric Anderson
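To make the "calculate 'C' against a pro-forma chance scenario" step concrete, here is a minimal sketch in Python. The chain lengths are arbitrary, and the uniform-random assembly assumption is exactly the pro-forma chance hypothesis described above, not a claim about actual chemistry:

```python
from math import log2, log10

def chain_complexity_bits(length, alphabet_size=20):
    """'C' under the simplest chance hypothesis: each position in the chain
    drawn independently and uniformly from the alphabet (20 amino acids
    here; use 4 for nucleotides)."""
    return length * log2(alphabet_size)

# Dembski's universal probability bound of 1 in 10^150, expressed in bits
# (the bound is cited later in this thread).
universal_bound_bits = 150 / log10(2)    # about 498 bits

for n in (50, 150, 300):
    c = chain_complexity_bits(n)
    verdict = "exceeds" if c > universal_bound_bits else "falls below"
    print(f"length {n}: C = {c:.0f} bits ({verdict} the ~498-bit bound)")
```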
To say that natural forces do not create information is a dead end argument. Of course natural forces create information once original information is available. However, every example is trivial but real. There may be a new fur color or a minor change to the gene sequence that produces a new protein. But the examples are rare and trivial. To deny that there are any beneficial mutations or changes in information is a losing argument. To accept such minor changes and demand the other person admit there is nothing more than trivial changes is the winning argument. Of course the Darwinists are never going to do this. So the meme is that information can change but it is always of low consequence, and it would take a trillion universes to get to just one substantive new protein. We should advocate this because it buries Darwinian evolution as nothing more than trivial. Which of course it is, as we get daily reminders by the Darwinist silence on anything meaningful. jerry
Alan Fox:
The “Weasel” program was only meant to show the power of cumulative selection over random draw.
Don't be silly. "Cumulative selection" as you call it, is after all, precisely what Darwinian evolution is supposed to provide. It is quite clear that Dawkins was trying to demonstrate the "power of cumulative selection [read Darwinian evolution]." Look, it shouldn't be that hard for people to say, "Sorry, bad example." Instead, Dawkins lovers continue to defend Weasel tooth and nail. It was wrong. It didn't demonstrate what he thought it did.* He was called on it, and rightly so. Let's stop trying to defend the indefensible or rewrite history. ----- * Ironically, instead it showed how you can sneak design in through the back door, as evolutionists are so often wont to do and as virtually every subsequent "evolutionary algorithm" that performs anything interesting does. Eric Anderson
In the case of blind search, such as illustrated in: https://uncommondesc.wpengine.com/computer-science/dawkins-weasel-vs-blind-search-simplified-illustration-of-no-free-lunch-theorems/ The details of the evolutionary mechanism which Dave Thomas might employ are completely unknown to me; however, I knew he could not solve the password unless he had specialized knowledge. Whichever mechanism he chose would fail. He could not reduce the uncertainty (or increase his information about the password). There is little objection if we state the problem in terms of the information inside the robotic or evolutionary agent relative to the sort of things it can construct. I don't see that we disagree there. I said I'm enthusiastic to support NFL in that context. It is clear, it is blatantly true. This will apply to Avida, or other evolutionary algorithms. There will be limits to what they can construct. I presume we are in agreement there, as both of us have worked to varying degrees to refute the claims of the Avida proponents. My challenge to Dave Thomas was to illustrate the limits of evolutionary computation: no matter what computation was employed, it cannot reduce the uncertainty (or increase the information) about what specifications were in my mind (the password). By way of extension, evolutionary algorithms cannot create new algorithmic information that coincides with human specifications for design beyond what the evolutionary algorithm was front-loaded with. The Dave Thomas challenge was meant to illustrate this. I think, I hope, we are in agreement there....
However, it's a different story when we start calculating CSI for physical objects when we have no a priori access to the information base of the agent that constructed it. If we say an object (like Stonehenge) evidences CSI (when we may not even have access to the designer), then we run into the current dispute. We can only calculate CSI in such cases based on the object; we don't factor the possible mechanism into the EF. My complaint is that it is important to distinguish when we are estimating information inside a supposed mechanism (its level of know-how, its level of front-loaded algorithmic information), versus the Shannon information in evidence in physical objects where we have limited or no access to the mechanism of its creation. In the case of the 2000 coins, this is analogous to many physical artifacts where the details of the mechanism are inaccessible to us. Thus in the case of a house of cards or a set of coins we happen upon in a room, if we go by the artifacts alone, the CSI from the random to ordered state clearly increases in the artifact. It is fair to say that the knowledge (algorithmic information) of the Robot to do such tasks was front loaded, and it did not increase its knowledge base in the process. We could say the Robot's knowledge of such design patterns did not self-increase. The reason the Robot could order coins is it had specialized knowledge (algorithmic information) as to what humans consider designed. The designers of the Robot essentially gave the Robot a password. In contrast I did not give Dave Thomas a password. No algorithm he could possibly write would reduce his uncertainty about the specifications I had in my mind, hence he could not resolve my password. I have no problem saying an evolutionary algorithm cannot spontaneously reduce uncertainty (or increase algorithmic information) about subjectively perceived specifications in human minds.
I do have a problem in saying the Shannon information in evidence in artifacts is bounded in the same way. We have 4 issues:
1. algorithmic information inside the mechanism of creation (evolutionary algorithm, robot, bacteria) cannot increase relative to specifications that identify design. Hence biological organisms cannot spontaneously create more algorithmic information that matches human-perceived specifications of design (i.e. the challenge to Dave Thomas)...
2. Shannon information evidenced by the artifact which the mechanism creates
3. Shannon vs. algorithmic information
4. how the boundaries of the EF are drawn or redrawn -- the information levels when drawing the boundary just around 2000 coins, and then the information levels when the boundary is drawn around both the 2000 coins and the robot
I'm on the same page with you on #1. We appear to be having disagreements over the other 3 issues. Thank you again for your willingness to get aggravated over this discussion, but I think it is a topic that needs to be discussed. scordova
My first lesson in all this was Dawkins' infamous Weasel program, in which he had a computer match the target Weasel phrase. He claimed this demonstration proved once and for all that Darwinian processes could do the miraculous.
You have this completely wrong. The "Weasel" program was only meant to show the power of cumulative selection over random draw. Dawkins never made any claims for it. It was meant as a pedagogical illustration only. Dawkins' later "bio-morphs" are a much better analogue for the power of artificial and environmental selection. Alan Fox
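Since the thread keeps returning to it, here is a minimal sketch of a Weasel-style cumulative-selection run in Python. As noted elsewhere in the thread, Dawkins's original code was not preserved, so this is a commonly used reconstruction under assumed parameters (100 copies per generation, 5% per-letter mutation rate), not his program:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def random_phrase():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def score(phrase):
    """Number of characters already matching the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

# Cumulative selection: each generation, keep the best of the parent and
# 100 mutated copies of it.
parent = random_phrase()
generations = 0
while parent != TARGET:
    generations += 1
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)

print(f"Cumulative selection hit the target in {generations} generations.")
# Random draw ("single-step selection") would need on the order of 27**28
# attempts to hit the same 28-character target, which is the contrast the
# illustration was built around.
```

The comparison only demonstrates selection toward a fixed, pre-specified target; whether that maps onto anything in biology is precisely what the commenters here dispute.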
Winston Ewert, Seeing as how I have been sort of a nose bleed seat observer to this whole information controversy, I would like to put in my unsolicited 2 cents. My first lesson in all this was Dawkins' infamous Weasel program, in which he had a computer match the target Weasel phrase. He claimed this demonstration proved once and for all that Darwinian processes could do the miraculous, i.e. create sophisticated functional information. More discerning minds were not so impressed by his demonstration and pointed out that Dawkins had obviously smuggled information into the final solution. In fact, so obvious was Dawkins' attempt at smuggling information that, when he was asked to show the code for his program, he somehow 'lost the program' (I don't know if the dog ever un-ate his homework). His attempt at hoodwinking people could easily be considered as one of the worst smuggling attempts ever.
Busted! The worst drug smuggling attempts ever - June 21, 2010 http://dailycaller.com/2010/06/21/busted-the-worst-drug-smuggling-attempts-ever/
From my perspective, again way up in the nose bleed section, I smelled the strong odor of a dead rat in the Weasel program. And the stench has not subsided as the sophistication of evolutionary algorithms (smuggling information) has increased over these past few years. Drs. Dembski and Marks, along with you Mr.(?) Ewert, have done an excellent job in busting these information smuggling Cartels. The 'information' Cartels that have had their smuggling operations shut down by you guys include, but are not limited to, Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev:
LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13 Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/
These efforts at shutting down information smuggling have also grown in sophistication as the efforts to smuggle it have increased:
Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013 Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). ,,, ,,, Here are the two seminal papers on conservation of information that I've written with Robert Marks: "The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486 "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061 For other papers that Marks, his students, and I have done to extend the results in these papers, visit the publications page at www.evoinfo.org http://www.evolutionnews.org/2013/04/before_theyve_e070821.html
I for one applaud you guys' valiant efforts, Mr. (Dr.?) Ewert, as the odor in my nose bleed section has taken on a much more pleasant character than it once had with the Weasel program. bornagain77
Well one of these concepts [complex specified information and its descendants] seems to be useless while the other, the non Dembski version, seems to be very useful.
How, Jerry? How has dFCSI demonstrated itself as useful? Where can I find a demonstration of usefulness? All I see is GEM counting amino acid residues and claiming he has done something useful without achieving anything useful at all. Alan Fox
Well one of these concepts seems to be useless while the other, the non Dembski version, seems to be very useful. That is what I am getting out of this. Which is why we long ago abandoned CSI as a useful concept and took up FCSI. Mainly, because even a 10 year old can understand FCSI but maybe 4 people in the universe could understand the usefulness of the other version of CSI. It seems it is mainly good for coin flips. One of the problems is the word "specified." It seems to have no agreed upon meaning. I am being a little facetious but I think I am close to the truth. jerry
