Uncommon Descent Serving The Intelligent Design Community

“Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information”


Here’s our newest paper: “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” by William A. Dembski and Robert J. Marks II, forthcoming chapter in Bruce L. Gordon and William A. Dembski, eds., The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, Del.: ISI Books, 2009).

Click here for a PDF of the paper.

1 The Creation of Information
2 Biology’s Information Problem
3 The Darwinian Solution
4 Computational vs. Biological Evolution
5 Active Information
6 Three Conservation of Information Theorems
7 The Law of Conservation of Information
8 Applying LCI to Biology
9 Conclusion: “A Plan for Experimental Verification”

ABSTRACT: Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law. LCI characterizes the information costs that searches incur in outperforming blind search. Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function—information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success. More formally, LCI states that raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). LCI shows that information is a commodity that, like money, obeys strict accounting principles. This paper proves three conservation of information theorems: a function-theoretic, a measure-theoretic, and a fitness-theoretic version. These are representative of conservation of information theorems in general. Such theorems provide the theoretical underpinnings for the Law of Conservation of Information. Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
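The abstract's accounting can be made concrete in a few lines of Python. This is a minimal sketch of the log(q/p) quantity as stated here, not code from the paper, and the example probabilities are invented for illustration.

```python
import math

def active_information(p, q, base=2):
    """Active information I+ = log(q/p): the bits by which an assisted
    search's success probability q exceeds the blind-search baseline p."""
    if not (0 < p <= q <= 1):
        raise ValueError("need 0 < p <= q <= 1")
    return math.log(q / p, base)

# Blind search for one target among 2^20 elements: p = 2^-20.
# An assisted search that succeeds half the time: q = 0.5.
p, q = 2**-20, 0.5
print(active_information(p, q))  # 19.0 bits (up to float rounding)
```

On the paper's view, those 19 bits are not free: whatever process produced the assisted search had to pay at least that much information to form it.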

Comments
R0b, Thank you for your comment. I think your point is now clearer. I think this is where Dembski's point about LCI being applied to individual situations is important. Once a person defines their search set-up and how it has improved performance over blind search, the applicability of LCI can be shown, as Dembski did with Dawkins' Weasel. In the footnote you mentioned, Dembski showed that even limiting himself to just the proximity reward functions, you still have as many different proximity reward functions as you have elements in your original search space. If we take into account the other possible fitness functions for our evolutionary search, the informational cost is even greater.

So your question becomes, in essence, "What if we define our higher-order search as just the elements that are 'good' fitness functions?" In this case we are still looking at a subset of the possible functions, so the cost is there. But why doesn't this apply to choosing an evolutionary search (with all fitness functions) out of all the other search algorithms? Shouldn't we also take into account the informational cost of the algorithm reduction? The answer is no, because simply choosing one search strategy over another does not improve search performance (as shown by the NFL theorems). So it becomes irrelevant to us, since we're only interested in explaining the improvement in our original search and showing that the improvement gain in the original search comes at a cost equal to or greater than the active information.

If we take into account all the other informational costs that do not contribute to search improvement, then the informational cost can get much larger than the active information. But if we limit ourselves to only the reductions that lead directly to search performance improvement, this cost cannot be less than the active information, per Dembski and Marks' paper.

Atom
May 7, 2009, 01:28 PM PDT
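Atom's footnote point above, that restricting to proximity reward functions still leaves as many functions as the original space has elements, can be checked on a toy space. The two-letter alphabet and length-3 strings below are illustrative assumptions, not the paper's parameters: each candidate target induces one Hamming-proximity fitness function, and all of them turn out to be distinct.

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

alphabet, length = "AB", 3
space = ["".join(s) for s in product(alphabet, repeat=length)]

# One proximity ("Hamming distance to target") fitness function per target,
# represented as a tuple of fitness values over the whole space:
fitness_tables = {
    target: tuple(-hamming(s, target) for s in space) for target in space
}

# Distinct proximity functions == elements of the search space.
print(len(space), len(set(fitness_tables.values())))  # 8 8
```

Each table has its maximum (zero) at a different string, so no two coincide, which is why the count of proximity functions matches the count of elements.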
R0b, I mentioned that I enjoyed reading the opening philosophical sections of the chapter. But there is a hugely important philosophical issue begging to be addressed, and Dembski and Marks have ignored it: What is a probability? It seems that you are touching on this omission. There are people far better qualified than I to debate Dembski on probability interpretation. But I am certain that it is a crucial issue in the present context.

To treat the negative logarithms of probabilities as physical information, Dembski and Marks must stick consistently to physical probabilities. The consensus on physical probability is that it must be related to a repeatable physical experiment. You mentioned the absence of "constraint" on probabilities, and I think I have just identified what is required. Dembski and Marks never discuss how their probability measures model physical searches. They go from pure math regarding a regress of probability measures to a claim that they have characterized physical reality, doing nothing to establish the probabilities as physical. In short, they have reified (hypostatized) abstract mathematical entities. At first blush, I would say that virtually none of their searches corresponds to an experiment repeatable in the observed universe (i.e., the relative volume of the subset of realizable searches is vanishingly small).

T M English
May 7, 2009, 01:24 PM PDT
"Biological reproduction generally increases, and natural selection generally reduces, the entropy of a population — this is a common and uncontroversial observation."

No problem here. At least I think not, but it will depend upon what is meant by entropy in this situation. Natural selection can lead to survival in a single-step process (introduction of a new environment) or to extinction in a multi-step process (several changes in environment). In the multi-step process, key genetic elements may have been weeded out by adapting to a previous environment, making the population less likely to survive a new environment.

"Novel information arises when biota work to reproduce, and something extrinsic (the “environment”) impinges on the process."

Here we are getting into a sort of "No Man's Land," and by that I mean whether it is true depends on exactly what you mean. Novel information has to be defined. In the concept of a gene pool there are multiple versions of many genetic elements, e.g., alleles. There may not be any member of the pool that has every possible combination; sexual reproduction generates a unique individual, but no unique genetic element has arisen. A second scenario is that recombination during sexual reproduction produces a unique genetic element that was not in the original gene pool; the gene pool is now larger, and there is truly a unique new element. But usually these genetic elements are of no major consequence in life's journey from microbe to man and just provide some new variant. A third possibility is that some of the gametes were mutated by one of the many known processes, and during sexual reproduction this new genetic element was incorporated into the gene pool. Thus, the second and third scenarios add new genetic elements.

Natural selection or environmental pressures may determine whether a new genetic element survives, or what percentage of the population gets the new genetic element, possibly taking thousands of generations or more to reach a stable level. From what I have read, the supposed cause for major evolutionary change is this third process. Now I have no idea how this plays out in the information scenario that is being analyzed in the subject paper.

"Parents do not create the novel information in their offspring. To tie this to Brillouin, we may interpret natural selection as a value-increasing transformation in which information is reduced. What constitutes value in biotic information is more subtle than most people make it out to be, and I’m going to leave the concept fuzzy for now."

Yes, it is fuzzy with or without any reference to the subject paper. The term "value increasing" is fuzzy. What is value-increasing in one environment could be detrimental in another, but I am not sure if that is what you mean. As you said, and I agree, fuzzy. Natural selection tends to cull the gene pool over time, and this process leads to a gene pool that may not be able to survive some environment and thus go extinct. This is certainly a loss of information, and it does not seem to be the same information that is referred to in LCI. Yes, fuzzy!!!

jerry
May 7, 2009, 01:03 PM PDT
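The scenarios jerry distinguishes above can be separated in a toy model of a gene pool as a set of alleles per locus. Everything here (locus names, alleles, the two helper functions) is an illustrative assumption sketched in Python, not anything from the thread:

```python
import random

random.seed(0)

# A gene pool as a set of alleles per locus (toy model; names illustrative).
pool = {"locus1": {"A", "a"}, "locus2": {"B", "b"}}

def shuffle_existing(pool):
    """Scenario 1: sexual reproduction samples existing alleles; the
    individual is unique, but the pool of genetic elements is unchanged."""
    return {locus: random.choice(sorted(alleles)) for locus, alleles in pool.items()}

def mutate(pool, locus, new_allele):
    """Scenario 3: mutation introduces an allele absent from the pool,
    so the set of genetic elements genuinely grows."""
    grown = {l: set(a) for l, a in pool.items()}
    grown[locus].add(new_allele)
    return grown

offspring = shuffle_existing(pool)     # unique individual, no new element
bigger = mutate(pool, "locus1", "A*")  # the pool now holds a new element
print(sorted(bigger["locus1"]))  # ['A', 'A*', 'a']
```

The distinction the model makes explicit is the one jerry draws: only the second and third scenarios enlarge the set of genetic elements; the first merely recombines it.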
R0b mentions the Brillouin quotation Dembski and Marks often use:
The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.
I have highlighted the part that dovetails nicely with my post on logical depth (162). Bennett addresses the value of information, and Dembski and Marks do not.

In fact, most computations of classical computers are irreversible. You can't tell your laptop computer to run in reverse to recover its earlier state. Information is lost (entropy increases, and your lap gets hot) as your laptop works to make irreversible transitions from one state to another. Biological reproduction generally increases, and natural selection generally reduces, the entropy of a population -- this is a common and uncontroversial observation. Novel information arises when biota work to reproduce, and something extrinsic (the "environment") impinges on the process. Parents do not create the novel information in their offspring.

To tie this to Brillouin, we may interpret natural selection as a value-increasing transformation in which information is reduced. What constitutes value in biotic information is more subtle than most people make it out to be, and I'm going to leave the concept fuzzy for now.

T M English
May 7, 2009, 11:50 AM PDT
Atom, thanks for your comment. If there's any ID proponent here who can hash out these issues, it's you. Here's my ridiculously long-winded response. From the paper:
The Law of Conservation of Information (LCI). Any search that proportionately raises the probability of locating a target by q/p with respect to blind search requires in its formation an amount of information not less than the active information I+ = log(q/p).
To assess the information cost of an efficient search, we essentially invent a story of how the search came to be. The LCI doesn't tell us what this story should be, other than that it should involve selecting something from some space, and we calculate the information cost from the fraction of good somethings in that space. Given an evolutionary search that efficiently finds "I AM AN ENGLISH SENTENCE", you say that it was the fitness function that was selected. It may just as well have been the algorithm, or any other aspect of the search, or the search itself, as in Marks and Dembski's measure-theoretic CoI theorem.

And what space was it selected from? You say it was from the space of all fitness functions that map all 24-letter sequences (from an alphabet of capital letters and spaces) to n numerical values. But why can't we say, like Marks and Dembski did in their WEASEL analysis, that it came from the space of all fitness functions that indicate proximity to a target? Interestingly, Marks and Dembski say in endnote 49 that they should have defined the space to be more expansive, as you did. They say, "In limiting ourselves to fitness functions based on the Hamming distance from a target sequence, we’ve already incurred a heavy informational cost." But -- and this is my key point -- we have to limit ourselves somehow or we have a completely undefined space. For instance, how does your higher-order search know the domain and codomain of the fitness function that it's searching for? Shouldn't the space include all fitness functions, not just those with that particular domain and codomain? And how does it know to search for a fitness function, as opposed to an algorithm or something else? Shouldn't the space include everything?

As I said, we have to limit ourselves somehow or we have an undefined space. But the LCI doesn't tell us what limitations to impose on the higher-order space. So it seems that we're free to limit it as we please. We can limit it to only "good" fitness functions if we want, in which case finding a good fitness function incurs no information cost, and the LCI is rendered false. Marks and Dembski could specify in the LCI how a higher-order space of fitness functions should be defined, but higher-order spaces can also be defined in an infinite number of ways that don't involve fitness functions. As the paper says, "the specific forms by which null and alternative searches can be instantiated is so endlessly varied that no single mathematical theorem can cover all contingencies." Thus my admittedly poor remedy that the LCI include a condition that the higher-order space must be "reasonably" defined.

Interestingly, this problem was introduced by this paper. Previously, the only CoI theorem was a more detailed version of their measure-theoretic CoI theorem. This theorem specifies a definition for the higher-order search space that works for any search. Under that definition, the LCI certainly holds. If Marks and Dembski had stuck with that single definition, this problem wouldn't exist. But perhaps I'm taking the LCI too literally. Here's one of its characterizations from the paper: "Thus, instead of LCI constituting a theorem, it characterizes situations in which we may legitimately expect to prove a conservation of information theorem. LCI might therefore be viewed as a family of theorems sharing certain common features." If the LCI is merely a family of theorems, and there are situations in which we would not legitimately expect CoI to hold, is it right to call it a law?

R0b
May 7, 2009, 10:39 AM PDT
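R0b's central objection above is that the computed cost depends entirely on how the higher-order space is delimited. A sketch with invented numbers makes this concrete: the same ten "good" fitness functions incur a cost of zero bits or dozens of bits depending only on the assumed size of the space they are drawn from.

```python
import math

def information_cost(n_good, n_space):
    """Cost of selecting a 'good' item from a higher-order space:
    -log2 of the fraction of the space that qualifies."""
    return -math.log2(n_good / n_space)

n_good = 10  # suppose exactly 10 fitness functions drive the search to the target

# The same 10 good functions, under three different choices of higher-order space:
for n_space in (10, 1_000, 2**40):
    print(f"|space| = {n_space}: cost = {information_cost(n_good, n_space):.1f} bits")
```

When the space is limited to the good functions themselves, the cost is 0.0 bits, which is exactly the loophole R0b says renders the LCI, as stated, false without some constraint on how the space may be defined.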
jerry, What you said about your course in analysis reminded me of the concept of logical depth, developed by Charles Bennett. The Central Limit Theorem can be derived by a fairly small program containing the requisite axioms, but the program evidently has to run a long time to obtain the result. Loosely speaking, the program length is the algorithmic information of the theorem, and the running time (computational cost) is the logical depth of the theorem.

I just had a look at a classic paper by Bennett, Logical Depth and Physical Complexity (1988). Most of what he says in the first 5+ pages is not only easy to follow, but also utterly brilliant. I found the intro highly relevant to the present discussion, and I recommend it to everyone. For Bennett, the value of an object is the time cost of obtaining it from a compact program. That is, an object low in algorithmic information may be high in value because it takes a long time to compute. Dembski and Marks focus on information cost, ignoring time. I believe there's a lot to be learned in considering the difference.

T M English
May 7, 2009, 10:14 AM PDT
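Bennett's distinction (short description, long computation) can be illustrated loosely in Python. This is only a caricature of logical depth, not his formal definition: the "object" below has a tiny, fixed description, while the step count needed to produce it grows without bound.

```python
def deep_object(n):
    """A short program whose output takes many steps to produce: iterate a
    simple map n times. The program length is small and fixed (low algorithmic
    information); the running time, a crude proxy for logical depth, grows
    with n."""
    x, steps = 1, 0
    for _ in range(n):
        # One step of a linear congruential generator (constants from
        # Knuth's MMIX; any mixing map would do for the illustration).
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
        steps += 1
    return x, steps

shallow, s1 = deep_object(1)
deep, s2 = deep_object(10**6)
# Both outputs come from the same few lines of code (similar algorithmic
# information), but one costs a million times more computation.
print(s1, s2)  # 1 1000000
```

This is the gap T M English points to: active information prices the description, while Bennett's value prices the computation.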
jerry wrote:
Bill tends to get quoted often by people looking for something negative to say about him and anything said here would not be used in any positive way
Ain't that the truth.

Atom
May 7, 2009, 08:47 AM PDT
I was going to say this earlier, but Tom English just said it. Bill Dembski uses this site for ideas, and he has been open about this for as long as I have been contributing. I have no idea if anyone has found a flaw, let alone a fatal flaw, in the current paper. But my guess is that, as Tom English has said, this was not written in jello, and Dr. Dembski was looking for some fine tuning. So all you anti-ID people out there, take pride in your contributions to ID. They have been many.

So maybe, as Tom English asks, Bill will put up LCI Two. And maybe Tom will explain to us what it is all about in a kinder and gentler way than his earlier comments. I know I have lots of questions, but they are of a basic level and a clarification level, and maybe the topic is just too beyond me.

By the way, if I were Bill Dembski I would not answer too many questions in a forum like this. Bill tends to get quoted often by people looking for something negative to say about him, and anything said here would not be used in any positive way. It is just how the anti-ID people work. They are such a nice, considerate lot.

jerry
May 7, 2009, 07:46 AM PDT
Folks, While I've said that the Law of Conservation of Information is not what Dembski promised in his book No Free Lunch, I will say also that the chapter is not "written in jello." There are some clear and important points to be debated. Perhaps we could get a fresh thread from the management, now that we've read and contemplated the chapter.

Ironically, Dembski and Marks get a free lunch from discussion like this. They don't have to mingle with the riff-raff, but they can exploit anything useful that comes along in revising the chapter. It doesn't look like a final draft to me.

T M English
May 6, 2009, 09:43 PM PDT
jerry, There's a backstory I'm not going into here. You might consider that Dembski has posted at DesignInference.com,
Jeffrey Shallit I and Jeffrey Shallit II. [8Jul05] My end of a sharp exchange with Jeff Shallit. Jeff was a teacher of mine at the University of Chicago in the 1980s. I took away some useful insights from his course on computational number theory. I’ve valued him as a critic even though my public denunciations of him have been a bit over the top. Perhaps some day we will be able to put our differences on the table dispassionately.
T M English
May 6, 2009, 09:05 PM PDT
R0b wrote:
The LCI, as stated in the paper, is false for the simple reason that no constraints are specified for defining the higher-order search space. Given the freedom to define it as we please, we can always define it such that the probability of finding a good search is higher than the performance gain of a good search over the null search.
Since no one is responding to your point, I'll attempt to address it, as discussions are no fun when people ignore you.

The beauty of Dembski's CoI theorems, if the formal mathematics underlying them holds, is that they show the relationship of information transfer from higher-level to lower-level searches. Namely, they bound it by the relation that the amount of information used at the higher level to select a "good" fitness function from the space of possible fitness functions (that reduction of possibilities is information in the sense of reducing uncertainty and constraining possibilities) will always be greater than or equal to the amount of information gain in the lower-level search, measured in terms of improved search performance. (There are other ways to measure the information gain in the lower-level search rather than by a posteriori measurements of search performance, but Dembski's measure is fine and works.) In other words, the gain in search performance at the lower level is proportional to the information input at the higher level, via the reduction of "fitness function space" to a single fitness function (or set of fitness functions) that gives us at least this increase in performance.

This is all very abstract, so let's try to make it slightly more concrete. Let's begin with a search for the string "I AM AN ENGLISH SENTENCE" in the search space of all 24-letter strings over a 27-letter alphabet (26 uppercase letters plus one space character). Our original search space is 27^24 elements big, roughly 10^34. So we have a base probability (intrinsic difficulty) of 1 in 10^34. (This much should not be controversial so far, if I did my arithmetic right.) Now let's say we want to use an evolutionary search to improve our chances of finding the target using an appropriate fitness function. (As Weasel Ware 2.0 shows empirically, not just any fitness function will work.) We have to reduce the space of possible fitness functions to the space of ones that "work," or allow us to find the target in a reasonable number of queries. But how many possible fitness functions are there for that search space? If we limit the number of possible "fitness values" to n, we get n^(10^34) possible fitness functions. So the search space for fitness functions is exponentially larger than our original search space, except for the trivial case where we only have one possible fitness value (we'd only have one function in that case). Depending on how many "good" fitness functions there are, this search may be harder than the original search. (It is Dembski's point that it is.)

So to get to your concern: the higher-order "search space" is simply all the ways to assign values between 0 and n to each of the elements of the original space. So it is already implicitly defined. Now if we want to reduce this higher-order space to only a subset of the possibilities, this reduction incurs an informational cost. That cost is greater than or equal to the active information gained (by the original search) by such a reduction. So choosing a good fitness function costs information, and allows the original search to perform better than blind search by that same (or less) amount of information, using Dembski's metric of active information (q being the improved probability of success, -log_2(q) being the information associated with that new probability). It is either the same or less, but never more. If we want to assume some fitness functions are more "likely" than others, this change of probability distributions also incurs an informational cost. Dembski shows this using the probability-distribution version of his theorem, which again shows that the higher-order reduction incurs a cost at least equal to the active information.

So unless you can put your objection another way, I don't think the freedom to reduce the space of possible fitness functions violates CoI in any way; it would seem to underscore it. So your claim that Dembski's CoI is "false" is probably presumptuous and incorrect. I'm open to further clarification of your concern, as I may have missed a vital part of your argument.

Atom
May 6, 2009, 07:42 PM PDT
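Atom's arithmetic above can be reproduced directly. The fitness-value count n = 2 and the assisted success probability q = 0.5 below are illustrative choices, not figures from the paper:

```python
import math

# Original search space: 24-character strings over a 27-symbol alphabet.
space_size = 27 ** 24
print(f"27^24 ~ 10^{math.log10(space_size):.1f}")  # 27^24 ~ 10^34.4

# Higher-order space: with n possible fitness values, every assignment of a
# value to each string is a distinct fitness function, giving n^(27^24).
n = 2
log10_fitness_space = space_size * math.log10(n)
print(f"log10 of fitness-function count ~ {log10_fitness_space:.3g}")

# Active information of an assisted search with success probability q:
p = 1 / space_size
q = 0.5
print(f"I+ = log2(q/p) = {math.log2(q / p):.1f} bits")
```

The second print shows why the higher-order search is so much harder: even the logarithm of the number of fitness functions is astronomically large, dwarfing the original space.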
Beez, I cannot presume to speak for Dembski, but I don't understand your objection, and this may be why you haven't gotten an answer from anyone here, including him. If you could state your concern more clearly, perhaps someone here could address your question or give another take on it. As it stands, I doubt your short post articulated your concern well enough for us to understand why you see an issue. (Or maybe I'm just obtuse, which is always a live option.)

Atom
May 6, 2009, 06:06 PM PDT
R0b, I suggest you go to Tom English's site and ask him. He knows about this stuff. You may not get the same answer that Bill Dembski would give, or even get an answer. They do not seem to agree with each other on some areas of this topic.

jerry
May 6, 2009, 05:29 PM PDT
I'm a little bummed that I joined the party after Dembski left. I'm guessing that the issues brought up by several commenters here, including myself, will remain unaddressed. But just for good measure, I'll throw some more out for anyone who cares:

- The LCI, as stated in the paper, is false for the simple reason that no constraints are specified for defining the higher-order search space. Given the freedom to define it as we please, we can always define it such that the probability of finding a good search is higher than the performance gain of a good search over the null search. To remedy this would require the addition of a condition such as "... with the higher-order search space reasonably defined ...". This is, of course, vague language, but no more vague than leaving it out.

- I join Tom English in congratulating Marks and Dembski for coming right out and saying, "Intelligence creates information." Unfortunately, no evidence for this assertion is offered. I submit that, according to their definitions, active information is created by luck, not by intelligent agents. Here's my reasoning: The amount of active info associated with something is the log-transformed, baseline-relativized probability of that something finding a solution (which may be a low-order target or a good search). If intelligent agents have a knack for finding solutions, that means that there is a high probability of them doing so, hence a large amount of active info associated with intelligent agents. So when intelligent agents find solutions, that success is accounted for not by the creation of active info, but by the pre-existing active info associated with the agents. On the other hand, when a good search is found by something that is unlikely to find it, this is a case where something with high active info came about by something with low or no active info. Thus, luck creates active info.

- Quotes regarding information seem misapplied to the active info framework, since the authors of the quotes were certainly not talking about active info. For example, Marks and Dembski have quoted Brillouin several times in their active info papers. But Brillouin was talking specifically about deterministic computation, which he viewed as non-information-producing precisely because of its determinacy. What does his point have to do with conservation of performance for typically non-deterministic searches?

- In a comment above, although not in any active info papers that I'm aware of, Dembski restricts the notion of intrinsic targets to points in the search space that are salient. As Mark Frank pointed out, this is a concept that would need to be fleshed out objectively if the LCI is to be applied to situations in which a target isn't given. Dembski also mentioned "independent functional specifications" as an example of salience. My question is: Independent of what? If he means "independent of the probability of the outcome matching the specification", as he did in his specified complexity work, then targethood depends on the choice of search. It seems that having targets appear and disappear depending on our choice of search violates Marks and Dembski's assumptions.

- In Dembski's previous work, he stated, "It follows, therefore, that how we measure information needs to be independent of whatever procedure is used to individuate the possibilities under consideration." Agreed, but the active info measure is not independent of how the search space elements are individuated. So different but equally accurate models of a problem could yield different information measurements. Why has Dembski stopped seeing this as a problem?

R0b
May 6, 2009, 04:31 PM PDT
beelzebub and Mark Frank, You don't need Dembski to respond. All you need to do is take something that is alleged to have active information and show it can arise via nature, operating freely.

Joseph
May 6, 2009, 01:18 PM PDT
Clade:
The term "clade" did not exist in the older Linnaean taxonomy, which was by necessity based only on morphological similarities between organisms. The concept embodied by the word "clade" does not fit well into the rigid hierarchy that the Linnaean system of taxonomy uses; indeed, cladistics and Linnean taxonomy are not really compatible. Linnaean taxonomy demands that all organisms be placed neatly into a rigid, ranked, hierarchy of taxa, such that one individual kind of organism must belong in one of each of the categories: species, genus, family, order, class, phylum and kingdom. Because of this necessity to "file things away neatly", the Linnaean system is often very convenient indeed in organizing such things as large museum reference collections, however it does not represent well the process of change that actually happens over evolutionary time. Because clades can be nested at any level, they do not have to be neatly slotted into a rank in an overall hierarchy. In contrast, the Linnaean taxa of "order," "class" etc. must all be used when naming a new taxon. They cannot be avoided, and each one implies a certain (admittedly very poorly defined) level of diversity, which is supposed to be equivalent throughout the system.
Joseph
May 6, 2009, 12:59 PM PDT
Allen, I am not sure that "cladogram = nested hierarchy". A nested hierarchy follows the rules I linked to. Did you read it? It is from the ISSS. Cladograms do not have to follow such rules. Cladograms are based on shared characteristics only. So the problem is that you are conflating a cladogram with a nested hierarchy and then trying to use that to refute my claim. IOW, Allen, once again you use dubious tactics to try to make a point. I told you to read "Evolution: A Theory in Crisis" - Denton goes over this.

But anyway, Allen, if you take a cladogram based on living organisms, do you see the lines and Vs? Each point and each node is a TRANSITIONAL form. A form with a MIX of characteristics. If we included ALL (alleged) transitional forms in a cladogram, what do you think it would look like? Would it look nice and neat with distinct tips? No. It would be a mess, because characteristics can be lost or gained, mixed and matched. So all you have done is refute a strawman. You should be very proud of yourself.

Joseph
May 6, 2009, 12:46 PM PDT
In #144 joseph wrote:
"...perhaps you can provide a nested hierarchy based on a loss of characteristics. That would help your case but I know you won’t post one because it doesn’t exist."
It's difficult to do this without providing a diagram (i.e. a cladogram), but here goes (it helps if you draw your own cladogram using the definitions listed below; just draw a big "check mark" – that is, a large letter "V" with the right-hand line extended to the right – and then follow the directions given). Consider a very simple set of metallic hardware fasteners, consisting of:
1) a ten-penny nail
2) a deck nail
3) a wood screw
4) a stove bolt
One can construct a nested hierarchy (i.e. a cladogram) of these four objects, using the following nodes and internode definitions:
1) the outgroup is the ten-penny nail, as it lacks threads (put the name "ten-penny nail" at the end of the left-hand line of the big "V");
2) the first internode definition (i.e. the first synapomorphy) leading to the first derived clade is "threaded" (put a hash mark and this label up and to the right of the point of the big, lopsided letter "V" you started with);
3) the first derived clade is "deck nails" (put a line branching off to the left from the extended right-hand line of the letter "V", and label the end of this left-hand-sloping line "deck nails");
4) the second internode definition (i.e. the second synapomorphy) is "slotted heads" and "finer threads" (put a hash mark and these labels up and to the right of the branch point between the leftward-sloping line that leads to "deck nails" and the right-hand line of the letter "V" you started with);
5) the second derived clade is "wood screws" (put a second line branching off to the left from the extended right-hand line of the letter "V" above the second hash mark, and label the end of this left-hand-sloping line "wood screws");
6) the third internode definition (i.e. the third synapomorphy) is "loss of sharpened point" (put a hash mark and this label up and to the right of the branch point between the leftward-sloping line that leads to "wood screws" and the rest of the right-hand line of the letter "V");
7) complete your cladogram (i.e. your "nested hierarchy") by labeling the end of the far-right-hand line "stove bolts".
You should have a large letter "V", slanting to the right and with four branches off of it, reading in order from left to right: "ten-penny nails", "deck nails", "wood screws", and "stove bolts". This is a simple cladogram for metal hardware fasteners.
Now, in the context of your request, notice that the third internode definition (i.e. the third synapomorphy) is "loss of sharpened point". I have provided you with a "nested hierarchy" that does, indeed, have the loss of a character as one of its defining characteristics. Ergo, your assertion, "I know you won't post one because it doesn't exist", has been conclusively and demonstrably falsified.
One might object that the cladogram/nested hierarchy I provided in the above example is for "designed" objects (i.e. objects that are the product of "intelligent design", in this case the "designers" of metallic hardware fasteners). Therefore, I will now provide an analogous cladogram/nested hierarchy using vertebrates (for which, I assume, you have no problem with the idea that they have evolved in the pattern indicated). Consider a very simple set of vertebrates, consisting of:
1) a lobe-finned fish (i.e. a member of the Rhipidistia)
2) an ancestral amphibian (such as a Labyrinthodont)
3) an ancestral mammal (such as Cynognathus)
4) a derived protocetacean (such as Indocetus ramani)
5) a modern whale (such as Physeter macrocephalus)
One can construct a nested hierarchy (i.e. a cladogram) of these five taxa, using the following nodes and internode definitions:
1) the outgroup is the lobe-finned fish, based on its anatomical characteristics, especially the bone structure of its fins (put the name "lobe-finned fish" at the end of the left-hand line of the big "V");
2) the first internode definition (i.e. the first synapomorphy) leading to the first derived clade is "tetrapod anatomy" (put a hash mark and this label up and to the right of the point of the big, lopsided letter "V" you started with);
3) the first derived clade is "Labyrinthodonts" (put a line branching off to the left from the extended right-hand line of the letter "V", and label the end of this left-hand-sloping line "Labyrinthodonts");
4) the second internode definition (i.e. the second synapomorphy) is "mammalian skeletal structure", especially the structure of the bones at the hinge of the jaw (put a hash mark and this label up and to the right of the branch point between the leftward-sloping line that leads to "Labyrinthodonts" and the right-hand line of the letter "V" you started with);
5) the second derived clade is "Cynognathus" (put a second line branching off to the left from the extended right-hand line of the letter "V" above the second hash mark, and label the end of this left-hand-sloping line "Cynognathus");
6) the third internode definition (i.e. the third synapomorphy) is "skeletal modifications for locomotion in water" (put a hash mark and this label up and to the right of the branch point between the leftward-sloping line that leads to "Cynognathus" and the rest of the right-hand line of the letter "V");
7) the third derived clade is "Indocetus ramani" (put a third line branching off to the left from the extended right-hand line of the letter "V" above the third hash mark, and label the end of this left-hand-sloping line "Indocetus ramani");
8) the fourth internode definition (i.e. the fourth synapomorphy) is "loss of hind legs" (put a hash mark and this label up and to the right of the branch point between the leftward-sloping line that leads to "Indocetus ramani" and the rest of the right-hand line of the letter "V");
9) complete your cladogram (i.e. your "nested hierarchy") by labeling the end of the far-right-hand line "Physeter macrocephalus" (i.e. sperm whales).
You should have a large letter "V", slanting to the right and with five branches off of it, reading in order from left to right: "lobe-finned fish", "Labyrinthodont", "Cynognathus", "Indocetus ramani", and "Physeter macrocephalus". This is a simple cladogram for the phylogeny of modern whales.
Once again, in the context of your request, notice that the fourth internode definition (i.e. the fourth synapomorphy) is "loss of hind legs". I have provided you with another "nested hierarchy" that does, indeed, have the loss of a character as one of its defining characteristics. Ergo, your assertion, "I know you won't post one because it doesn't exist", has again been conclusively and demonstrably falsified.
Allen_MacNeill
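As an aside, the two constructions above can be written compactly in Newick notation, the standard text format for cladograms, in which each nested pair of parentheses is a clade. A minimal sketch (taxon names abbreviated with underscores; the `max_depth` helper is illustrative, not part of any library):

```python
# The two cladograms described above, in Newick-style notation: each nested
# pair of parentheses encloses a clade, so the nesting itself encodes the
# "nested hierarchy".
fastener_tree = "(ten_penny_nail,(deck_nail,(wood_screw,stove_bolt)))"
whale_tree = ("(lobe_finned_fish,(Labyrinthodont,(Cynognathus,"
              "(Indocetus_ramani,Physeter_macrocephalus))))")

def max_depth(newick):
    """Deepest nesting level, i.e. how many clades are stacked inside each other."""
    depth = best = 0
    for ch in newick:
        if ch == "(":
            depth += 1
            best = max(best, depth)
        elif ch == ")":
            depth -= 1
    return best

print(max_depth(fastener_tree))  # 3 nested clades for the fasteners
print(max_depth(whale_tree))     # 4 nested clades for the whale phylogeny
```

Each additional synapomorphy adds one level of parentheses, which is exactly the "containment" at issue in the nested-hierarchy dispute.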
May 6, 2009 at 9:15 AM PDT
Re #147: Beelzebub, I see you made the same point. It is reasonable for Dr. Dembski not to respond; we are all busy, and no one is obliged to take part in an internet discussion. Perhaps someone else will respond?
Mark Frank
May 6, 2009 at 9:12 AM PDT
"As Dembski says in #90 above, the theorems are 'elementary but not trivial'."
If I had about six months with nothing else to do, I could probably master the mathematics here, but I abandoned formal mathematics long ago. I had a fellowship in a PhD program in math at Duke University, and in the first year one of the courses was titled Analysis. A simple-sounding name, but Analysis was the hardest of all the initial graduate courses. Half the students in the class were second-year PhD students, and nearly everyone struggled with it. Calculus is a small offshoot of Analysis. The book was about 800 pages, and every page was mainly a long progression toward some very complicated theorems in the discipline. One of the last theorems we proved was the Central Limit Theorem. This is a basic or elementary theorem of statistics; however, it took a year of esoteric mathematics to prove. So yes, as Dembski says, some things are elementary but not trivial. Now I have no idea how nontrivial his theorems are, but saying they are elementary does not make them easy to understand or prove. Also, the term "information" seems to be used in different ways.
jerry
May 6, 2009 at 6:52 AM PDT
Mark Frank writes:
As I tried to emphasise above this definition entails that the “information” content of an outcome is relative to a target. This fundamental point is rather easily hidden when you talk about bits of information rather than probabilities.
Mark, don't hold your breath for a response from Dr. Dembski. I made a similar point two days ago:
Third, the active information of a Darwinian “search” is not independent of the target. A particular fitness regime therefore contains very little active information with respect to some targets, and a huge amount with respect to others.
Dr. Dembski has not responded to that point or to the others I raised.
beelzebub
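The point that active information is target-relative can be illustrated with a few lines of Python. The probabilities below are hypothetical; the formula log2(q/p) is the paper's definition of active information, where p is the probability of success for the null (blind) search and q for the alternate search:

```python
import math

def active_information(p, q):
    """Active information in bits: log2(q / p), where p is the null search's
    probability of success and q is the alternate search's."""
    return math.log2(q / p)

# Hypothetical numbers: the same alternate search, measured against two targets.
print(active_information(1 / 1024, 1 / 8))    # 7.0 bits with respect to one target
print(active_information(1 / 1024, 1 / 512))  # 1.0 bit with respect to another
```

The same search machinery can thus carry a large amount of active information with respect to one target and almost none with respect to another, which is the substance of the objection above.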
May 6, 2009 at 1:17 AM PDT
Jerry #145, you write:
"no one seems to understand how the term 'information' is being used, let alone 'active information', though we seem to have a bit of a grasp on what that generally means."
"I doubt any of us understand the mathematics with all its implications, which would seem to be a requisite for an evaluation."
Surely the mathematics and the definition of information are straightforward? I agree the implications are more subtle, but we are all as well qualified as each other to discuss them. As Dembski says in #90 above, the theorems are "elementary but not trivial". For example, the information content of an outcome is defined as -log2(p), where p is the probability of that outcome meeting a target. That's pretty elementary. But what is the value of redefining a probability as "information"? In fact it may confuse more than help. As I tried to emphasise above, this definition entails that the "information" content of an outcome is relative to a target. This fundamental point is rather easily hidden when you talk about bits of information rather than probabilities.
Mark Frank
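The definition above can be made concrete in a few lines of Python (the target probabilities are hypothetical). The same measure assigns very different "information" to an outcome depending on which target it is measured against:

```python
import math

def information_bits(p):
    """Self-information, in bits, of an outcome whose probability is p:
    -log2(p), as defined in the comment above."""
    return -math.log2(p)

# The same outcome measured against two different targets:
# Target A: 1 chance in 1024; Target B: 1 chance in 4.
print(information_bits(1 / 1024))  # 10.0 bits
print(information_bits(1 / 4))     # 2.0 bits
```

Nothing about the outcome itself changed between the two calls; only the target did, which is the target-relativity being pointed out.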
May 6, 2009 at 12:01 AM PDT
Tom English, I find the whole discussion interesting in ways different than most.

First, no one seems to understand how the term "information" is being used, let alone "active information", though we seem to have a bit of a grasp on what that generally means. And we understand it is very different from CSI, which I claim no one here can define. A few people will get upset at that characterization, but I have been reading about it at various places for over three years and still cannot find a good definition.

Without much of an understanding of the paper, some of the anti-ID people are willing to attack the article on things they do not comprehend. What these amateurs think they are doing is a mystery. I guess they feel they must defend Darwinian evolution at all times, even if they do not understand the threat.

You come along, obviously knowing what the article is about, and attack it on peripheral stuff such as Dawkins's Weasel. Why waste your time on something so trivial when the essence of an aspect of ID is at stake? About a thousand comments have been written here on Weasel in the last 4-6 weeks, and all to naught. Maybe you don't know that, but it has been a colossal waste of cyber time and disc space.

Who are you trying to communicate to? I do not use "with" because I don't get the idea you want a conversation or are anxious to teach us what this is about. To Dembski? It cannot be to the peons here, who do not understand more than the bare outline of what the paper is about. If it is to Dembski, you picked a strange way, and it reflects badly on yourself.

None of us are willing to take the paper at face value as the beginning of the establishment of ID as a legitimate discipline, at least in terms of information. I doubt any of us understand the mathematics with all its implications, which would seem to be a requisite for an evaluation. We hope it will validate ID on an information basis, because we all believe intuitively that what the paper is saying is valid, and it would be nice to be able to point to something that is rigorous and compelling from a mathematical point of view.

But I do not know what your objective is. If you think the paper is flawed, then lay it out. We may not be able to understand it, but I bet Bill Dembski will, and he may comment or he may not. But one thing is for sure: he is not going to respond to what you have already written.
jerry
May 5, 2009 at 8:43 PM PDT
Allen MacNeill:
On the contrary, the loss of a characteristic is just as significant as the gain of a characteristic, and just as important to the construction of a nested hierarchy.
With a loss of characteristics you lose containment. From "A Summary of the Principles of Hierarchy Theory":
nested hierarchies involve levels which consist of, and contain, lower levels.
And a loss of containment equals a loss of NH. Dr. Michael Denton wrote a thorough refutation of the premise that evolution leads to/expects a nested hierarchy in "Evolution: A Theory in Crisis". Perhaps you should read it and respond to the refutations it contains. But hey, perhaps you can provide a nested hierarchy based on a loss of characteristics. That would help your case, but I know you won't post one because it doesn't exist. BTW, I have already posted that evolution does NOT have a direction, so I don't know what your issue with that is.
Joseph
May 5, 2009 at 8:27 PM PDT
I meant to write: "Only by using the term information equivocally can Dembski claim to have made good on his promise to provide a Law of Conservation of Information."
T M English
May 5, 2009 at 8:20 PM PDT
jerry, I am an equal-opportunity offender. Mark Perakh was right in saying that active information would end up playing a role analogous to the one complex specified information had played in Dembski's earlier work. I was right in insisting that active information was in fact quite different from complex specified information.

This leads to an important point. Dembski has suggested that he and Marks are filling in the mathematical details he left out of No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Active information and complex specified information are radically different measures. Active information is measured on a search, relative to a null search and a target. Complex specified information is measured on an event, relative to a probabilistic model and a semiotic agent.

Only by using the term information equivocally can Dembski claim to have made good on his promise to provide a Law of Conservation. The claim in his book was that complex specified information is conserved (actually, with a "leak" of up to 500 bits). The claim in the chapter is that active information, quite different from CSI, is conserved.
T M English
May 5, 2009 at 7:22 PM PDT
Dr. Dembski, it's encouraging that you're actively participating in this thread. I have several issues to bring up, but I'm sure you have a lot of demands on your time, so I'll make my points a few at a time.

- In this paper, as well as previous active info work, it's pointed out that the reduction of a search space constitutes active information. But for every search space, there are ways to define a proper superset by relaxing constraints, e.g. expanding an alphabet. By viewing these constraints as contingent, we can view any search space as having an unbounded amount of built-in active information. Can you comment on this? How do we non-arbitrarily decide which aspects of the problem should be considered contingent and which should not?

- I see a problem that runs through all of your CoI theorems: proving that the probability of selecting an alternate search whose chance of success is at least q is less than or equal to p/q does not prove that selecting an alternate search doesn't improve performance on average. That's an awkward statement, so I'll provide a counterexample: consider a scenario in which the selection of an alternate search has a 1/3 chance of doubling the odds of success (q = 2p), and a 2/3 chance of having no effect (q = p). On average, selecting an alternate search improves performance (active info), yet the scenario accords with your theorems, i.e. the probability of choosing a search with a chance of success of at least q is always less than p/q.
R0b
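The counterexample above can be checked with exact arithmetic. A minimal sketch, assuming a hypothetical baseline success probability of p = 1/100 (any p would do):

```python
from fractions import Fraction

p = Fraction(1, 100)  # assumed baseline probability of success (hypothetical)

# The mixture described above: 1/3 chance the alternate search doubles the
# odds (q = 2p), 2/3 chance it has no effect (q = p).
expected_q = Fraction(1, 3) * (2 * p) + Fraction(2, 3) * p
assert expected_q > p  # on average, selecting an alternate search helps

# Yet the CoI-style bound P(success prob >= q) <= p/q is respected: the only
# level strictly above p is q = 2p, reached with probability 1/3 <= p/(2p) = 1/2.
assert Fraction(1, 3) <= p / (2 * p)
print(expected_q)  # 1/75, i.e. better than the baseline 1/100
```

The exact fractions make the tension explicit: the bound constrains the tail probability of good searches without ruling out an improvement in expectation.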
May 5, 2009 at 1:04 PM PDT
For those interested, Tom M. English has a website: http://www.boundedtheoretics.com/
jerry
May 5, 2009 at 12:40 PM PDT
Hazel, nobody is objecting to comments. I cannot imagine what Tom M. English is up to, but it does not seem like anything positive, or else he would have taken a completely different tack. And by the way, I know who Tom M. English is from his past comments on this site. He was banned about a year ago for what I believe were derogatory comments.
jerry
May 5, 2009 at 12:28 PM PDT
There was a post about the two Tom Englishes a couple of years ago.
David Kellogg
May 5, 2009 at 12:06 PM PDT