
The Tragedy of Two CSIs


CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation.

CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I’ve been defending in my recent posts.

CSI, as used by others, is something more along the lines of the appearance of design. It is typically along the same lines as the notion of "complicated" developed by Richard Dawkins in The Blind Watchmaker:

complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone.

This is similar to Dembski’s formulation, but where Dawkins merely requires that the quality be unlikely to have been acquired by random chance, Dembski’s formula requires that the quality be unlikely to have been acquired by random chance or by any other process, such as natural selection. The requirements of Dembski’s CSI are much more stringent than those of Dawkins's "complicated" or the non-Dembski CSI.
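
For concreteness, Dembski's measure can be sketched roughly as follows (a paraphrase of his 2005 paper "Specification: The Pattern That Signifies Intelligence", not an exact quotation):

χ = –log2[ 10^120 · φ_S(T) · P(T|H) ]

Here T is the observed pattern, H ranges over the relevant chance hypotheses (Darwinian and other material mechanisms included, not just uniform chance), P(T|H) is the probability of the pattern under those hypotheses, φ_S(T) counts the patterns at least as easy to describe as T, and 10^120 bounds the available probabilistic resources. Specified complexity is attributed only when χ exceeds 1, which is why the calculation cannot even get started without an estimate of P(T|H) under the mechanisms actually in operation.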

Under Dembski’s formulation, we do not know whether or not biology contains specified complexity. As he said:

Does nature exhibit actual specified complexity? The jury is still out. – http://www.leaderu.com/offices/dembski/docs/bd-specified.html

The debate for Dembski is over whether or not nature exhibits specified complexity. But for the notion of complicated or non-Dembski CSI, biology is clearly complicated, and the debate is over whether or not Darwinian evolution can explain that complexity.

For Dembski’s formulation of specified complexity, the law of the conservation of information is a mathematical fact. For non-Dembski formulations of specified complexity, the law of the conservation of information is a controversial claim.
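
In the Dembski-Marks framework, that fact can be stated compactly (again a rough sketch rather than a quotation): if blind search finds a target with probability p and an assisted search finds it with probability q, the assistance contributes active information

I_+ = log2(q/p) = I_Ω – I_S, where I_Ω = –log2(p) and I_S = –log2(q),

and the conservation-of-information theorems say that locating a search that good costs at least I_+ bits of information somewhere upstream. On this formulation the law is a theorem; on the looser, appearance-of-design formulation there is no comparable theorem, only an empirical dispute.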

These are two related but distinct concepts. We must not conflate them. I think that non-Dembski CSI is a useful concept. However, it is not the same thing as Dembski’s CSI. They differ on critical points. As such, I think it is incorrect to refer to both of these ideas as CSI or specified complexity. Only Dembski’s formulation, or variations thereof, should be termed CSI.

Perhaps the toothpaste is already out of the tube, and this confusion over the notion of specified complexity cannot be undone. But as it stands, we’ve got a situation where CSI is used to refer to two distinct concepts which should not be conflated. And that’s the tragedy.

Comments
To say that natural forces do not create information is a dead-end argument. Of course natural forces create information once original information is available. However, every example is trivial but real. There may be a new fur color or a minor change to the gene sequence that produces a new protein. But the examples are rare and trivial. To deny that there are beneficial mutations or changes in information is a losing argument. To accept such minor changes and demand that the other person admit there is nothing more than trivial change is the winning argument. Of course the Darwinists are never going to do this. So the meme is that information can change, but it is always of low consequence, and it would take a trillion universes to get to just one substantive new protein. We should advocate this because it buries Darwinian evolution as nothing more than trivial. Which of course it is, as we get daily reminders from the Darwinist silence on anything meaningful.
jerry
November 26, 2013 at 11:27 AM PST
Alan Fox:
The “Weasel” program was only meant to show the power of cumulative selection over random draw.
Don't be silly. "Cumulative selection," as you call it, is, after all, precisely what Darwinian evolution is supposed to provide. It is quite clear that Dawkins was trying to demonstrate the "power of cumulative selection [read: Darwinian evolution]." Look, it shouldn't be that hard for people to say, "Sorry, bad example." Instead, Dawkins lovers continue to defend Weasel tooth and nail. It was wrong. It didn't demonstrate what he thought it did.* He was called on it, and rightly so. Let's stop trying to defend the indefensible or rewrite history.
-----
* Ironically, it instead showed how you can sneak design in through the back door, as evolutionists are so often wont to do and as virtually every subsequent "evolutionary algorithm" that performs anything interesting does.
Eric Anderson
November 26, 2013 at 10:29 AM PST
In the case of blind search, such as illustrated in: https://uncommondescent.com/computer-science/dawkins-weasel-vs-blind-search-simplified-illustration-of-no-free-lunch-theorems/

The details of the evolutionary mechanism which Dave Thomas might employ are completely unknown to me; however, I knew he could not solve the password unless he had specialized knowledge. Whichever mechanism he chose would fail. He could not reduce the uncertainty (or increase his information) about the password.

There is little objection if we state the problem in terms of the information inside the robotic or evolutionary agent relative to the sort of things it can construct. I don't see that we disagree there. I said I'm enthusiastic to support NFL in that context. It is clear, it is blatantly true. This will apply to Avida or other evolutionary algorithms. There will be limits to what they can construct. I presume we are in agreement there, as both of us have worked to varying degrees to refute the claims of the Avida proponents.

My challenge to Dave Thomas was to illustrate the limits of evolutionary computation: no matter what computation was employed, it could not reduce the uncertainty (or increase the information) about what specifications were in my mind (the password). By way of extension, evolutionary algorithms cannot create new algorithmic information that coincides with human specifications for design beyond what the evolutionary algorithm was front-loaded with. The Dave Thomas challenge was meant to illustrate this. I think, I hope, we are in agreement there....

However, it's a different story when we start calculating CSI for physical objects when we have no a priori access to the information base of the agent that constructed them. If we say an object (like Stonehenge) evidences CSI (when we may not even have access to the designer), then we run into the current dispute. We can only calculate CSI in such cases based on the object; we don't factor the possible mechanism into the EF. My complaint is that it is important to distinguish between estimating the information inside a supposed mechanism (its level of know-how, its level of front-loaded algorithmic information) and the Shannon information in evidence in physical objects where we have limited or no access to the mechanism of their creation.

In the case of the 2000 coins, this is analogous to many physical artifacts where the details of the mechanism are inaccessible to us. Thus, in the case of a house of cards or a set of coins we happen upon in a room, if we go by the artifacts alone, the CSI from the random to the ordered state clearly increases in the artifact. It is fair to say that the knowledge (algorithmic information) of the robot to do such tasks was front-loaded, and it did not increase its knowledge base in the process. We could say the robot's knowledge of such design patterns did not self-increase. The reason the robot could order coins is that it had specialized knowledge (algorithmic information) as to what humans consider designed. The designers of the robot essentially gave the robot a password. In contrast, I did not give Dave Thomas a password. No algorithm he could possibly write would reduce his uncertainty about the specifications I had in my mind; hence he could not resolve my password.

I have no problem saying an evolutionary algorithm cannot spontaneously reduce uncertainty (or increase algorithmic information) about subjectively perceived specifications in human minds. I do have a problem saying the Shannon information in evidence in artifacts is bounded in the same way. We have 4 issues:

1. Algorithmic information inside the mechanism of creation (evolutionary algorithm, robot, bacteria) cannot increase relative to specifications that identify design. Hence biological organisms cannot spontaneously create more algorithmic information that matches human-perceived specifications of design (i.e., the challenge to Dave Thomas)...

2. Shannon information evidenced by the artifact which the mechanism creates.

3. Shannon vs. algorithmic information.

4. How the boundaries of the EF are drawn or redrawn -- the information levels when drawing the boundary just around the 2000 coins, and then the information levels when the boundary is drawn around both the 2000 coins and the robot.

I'm on the same page with you on #1. We appear to be having disagreements over the other 3 issues. Thank you again for your willingness to get aggravated over this discussion, but I think it is a topic that needs to be discussed.
scordova
November 26, 2013 at 10:22 AM PST
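
A minimal sketch of the password point made in the comment above, assuming a hypothetical 6-character lowercase password and a yes/no oracle as the only feedback (this is not code from the actual Dave Thomas exchange; the alphabet, length, mutation rate, and query budget are all illustrative assumptions):

import random
import string

# Hypothetical setup: a 6-character lowercase password chosen privately.
# The only feedback available to a searcher is the yes/no answer of check().
ALPHABET = string.ascii_lowercase
LENGTH = 6
PASSWORD = "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def check(guess):
    # exact-match oracle: no partial-credit score is leaked
    return guess == PASSWORD

def random_guess():
    return "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def blind_search(max_queries):
    # independent random guesses, one query each
    for i in range(1, max_queries + 1):
        if check(random_guess()):
            return i
    return None

def mutation_search(max_queries, rate=0.2):
    # an "evolutionary" searcher that mutates its current guess each step;
    # with only yes/no feedback there is no gradient to climb
    current = random_guess()
    for i in range(1, max_queries + 1):
        if check(current):
            return i
        current = "".join(random.choice(ALPHABET) if random.random() < rate else c
                          for c in current)
    return None

if __name__ == "__main__":
    budget = 100_000  # far fewer than the 26**6 (about 3.1e8) possible passwords
    print("blind search solved it at query:", blind_search(budget))
    print("mutation search solved it at query:", mutation_search(budget))

With no partial-match scores to exploit, the mutation-based searcher has no more information about the password than the blind guesser, so both will almost certainly exhaust the 100,000-query budget against a space of 26^6 (roughly 3.1 x 10^8) possibilities.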
My first lesson in all this was Dawkins' infamous Weasel program, in which he had a computer match the target Weasel phrase. He claimed this demonstration proved once and for all that Darwinian processes could do the miraculous.
You have this completely wrong. The "Weasel" program was only meant to show the power of cumulative selection over random draw. Dawkins never made any claims for it. It was meant as a pedagogical illustration only. Dawkins' later "biomorphs" are a much better analogue for the power of artificial and environmental selection.
Alan Fox
November 26, 2013 at 10:09 AM PST
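
For readers wondering what "cumulative selection over random draw" amounts to in practice, here is a minimal sketch in the spirit of the Weasel program; Dawkins' original code was never published, so the population size, mutation rate, and selection scheme below are illustrative assumptions, not his actual parameters:

import random
import string

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"  # the phrase used in The Blind Watchmaker

def random_phrase(length):
    # blind draw: every character chosen uniformly at random
    return "".join(random.choice(ALPHABET) for _ in range(length))

def score(phrase):
    # number of positions that already match the target
    return sum(a == b for a, b in zip(phrase, TARGET))

def cumulative_selection(pop_size=100, mutation_rate=0.05):
    # each generation: copy the current best with random per-character
    # mutations, then keep the best of the copies (and the parent itself)
    parent = random_phrase(len(TARGET))
    generations = 0
    while parent != TARGET:
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(offspring + [parent], key=score)
        generations += 1
    return generations

def single_step_draws(tries=100_000):
    # the comparison case: independent random phrases, no carry-over
    return sum(random_phrase(len(TARGET)) == TARGET for _ in range(tries))

if __name__ == "__main__":
    print("cumulative selection generations:", cumulative_selection())
    print("exact hits from 100,000 blind draws:", single_step_draws())

Cumulative selection typically reaches the target in well under a hundred generations, while each independent blind draw succeeds with probability 27^-28. That is the contrast Alan Fox describes, and the "smuggled information" the other commenters object to, since the target phrase and the matching score are supplied to the search in advance.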
Winston Ewert, seeing as how I have been sort of a nosebleed-seat observer to this whole information controversy, I would like to put in my unsolicited 2 cents. My first lesson in all this was Dawkins' infamous Weasel program, in which he had a computer match the target Weasel phrase. He claimed this demonstration proved once and for all that Darwinian processes could do the miraculous, i.e. create sophisticated functional information. More discerning minds were not so impressed by his demonstration and pointed out that Dawkins had obviously smuggled information into the final solution. In fact, so obvious was Dawkins' attempt at smuggling information that, when he was asked to see the coding for his program, he somehow 'lost the program' (I don't know if the dog ever un-ate his homework). His attempt at hoodwinking people could easily be considered one of the worst smuggling attempts ever.
Busted! The worst drug smuggling attempts ever - June 21, 2010 http://dailycaller.com/2010/06/21/busted-the-worst-drug-smuggling-attempts-ever/
From my perspective, again way up in the nosebleed section, I smelled the strong odor of a dead rat in the Weasel program. And the stench has not subsided as the sophistication of evolutionary algorithms (smuggling information) has increased over these past few years. Drs. Dembski and Marks, along with you, Mr.(?) Ewert, have done an excellent job in busting these information-smuggling cartels. The 'information' cartels that have had their smuggling operations shut down by you guys include, but are not limited to, Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev:
LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13 Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/
These efforts at shutting down information smuggling have also grown in sophistication as the efforts to smuggle it have increased:
Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013 Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). ,,, ,,, Here are the two seminal papers on conservation of information that I've written with Robert Marks: "The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486 "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061 For other papers that Marks, his students, and I have done to extend the results in these papers, visit the publications page at www.evoinfo.org http://www.evolutionnews.org/2013/04/before_theyve_e070821.html
I for one applaud you guys' valiant efforts, Mr. (Dr.?) Ewert, as the odor in my nosebleed section has taken on a much more pleasant character than it once had with the Weasel program.
bornagain77
November 26, 2013 at 04:23 AM PST
Well one of these concepts [complex specified information and its descendants] seems to be useless while the other, the non-Dembski version, seems to be very useful.
How, Jerry? How has dFCSI demonstrated itself as useful? Where can I find a demonstration of usefulness? All I see is GEM counting amino acid residues and claiming he has done something useful without achieving anything useful at all.
Alan Fox
November 26, 2013 at 01:15 AM PST
Well one of these concepts seems to be useless while the other, the non-Dembski version, seems to be very useful. That is what I am getting out of this. Which is why we long ago abandoned CSI as a useful concept and took up FCSI. Mainly because even a 10-year-old can understand FCSI, but maybe 4 people in the universe could understand the usefulness of the other version of CSI. It seems it is mainly good for coin flips. One of the problems is the word "specified." It seems to have no agreed-upon meaning. I am being a little facetious, but I think I am close to the truth.
jerry
November 25, 2013 at 07:57 PM PST