Intelligent Design

“Conservation of Information” — on the choice of expression


Conservation of information as developed in several articles (see the publications page at www.evoinfo.org) by Robert Marks and me has come in for criticism not only conceptually but also terminologically. None of the conceptual criticisms has in our view succeeded. To be sure, more such criticisms are likely to be forthcoming. But as this work increasingly gets into the peer-reviewed literature, it will be harder and harder to dismiss.

That leaves the terminological criticism. Some have objected that a conservation law requires that the quantity in question remain unchanged. Take conservation of energy, which states that in an isolated system energy may change forms but total energy remains constant. Others have argued that what we are calling conservation of information is more like entropy. But that’s not the case either. The second law of thermodynamics says that usable energy will diffuse, and thus that entropy is guaranteed (with overwhelming probability) to increase. Hence entropy, unless usable energy is already in a maximally diffuse state, will change and cannot rightly be regarded as falling under a conservation principle.

Conservation of information, by contrast, occupies a middle ground between conservation of energy and entropy. It says that the information that must be inputted into a search for it to successfully locate a target cannot fall below the information that the search outputs in successfully locating the target. Robert Marks and I show that this characterization of conservation of information is non-tautological. As stated, it implies that as we move logically upstream and try to account for successful search, the information cost of success cannot fall below a certain lower bound.

Strictly speaking, what is conserved then is not the actual inputs of information to make a search successful but the minimum information cost required for success. Inefficiencies in information usage may lead to more information being inputted into a search than is outputted. Conservation of information thus characterizes information costs when such inefficiencies are avoided. Thus it seems to Robert Marks and me that the expression “conservation of information” is in fact appropriate.
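A minimal numeric sketch of this inequality in Python (the p and q values here are hypothetical, picked only to make the arithmetic visible; the formal statement is in the papers at www.evoinfo.org):

```python
import math

# Hypothetical numbers, not taken from the Marks-Dembski papers:
N = 2 ** 20
p = 1 / N       # blind search: one target among 2**20 possibilities
q = 1 / 2 ** 4  # an assisted search that succeeds 1 time in 16

# The information the assistance effectively outputs in boosting the
# search's success probability from p to q:
output_bits = math.log2(q / p)  # 16 bits

# Conservation of information as stated above: the information inputted
# to construct the assisted search cannot fall below this minimum cost.
minimum_cost = output_bits

# An inefficient construction may input more than the minimum; the
# bound is one-sided, which is the point of the paragraph above.
inefficient_input = 20.0
assert inefficient_input >= minimum_cost
```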

51 Replies to ““Conservation of Information” — on the choice of expression”

  1. 1
    Atom says:

    Dr. Dembski wrote:

    Strictly speaking, what is conserved then is not the actual inputs of information to make a search successful but the minimum information cost required for success

    This is the key point.

    Glad to see more UD posts about *science* and ID. Hopefully this thread won’t devolve into a heated discussion over Theism…

    Atom

  2. 2
    bornagain77 says:

    Dr. Dembski,
    In regard to trying to reconcile LCI with Genetic Entropy: what is the maximum limit for functional information bits (Fits) generation that is now set for the random processes of the universe? Has it changed from the generously set 140-Fit limit set by Durston?

  3. 3
    serendipity says:

    Dr. Dembski,

    The six existing conservation laws of physics are all strict conservation laws; the quantities in question neither increase nor decrease.

    Your proposed “Law of Conservation of Information” would be the first conservation law for which the quantity in question was not, in fact, conserved. Doesn’t that strike you as a bit presumptuous?

    The Second Law of Thermodynamics stipulates that in an isolated system, entropy will either increase or, at best, remain constant; your LCI states that information will either decrease or, at best, remain constant.

    Apart from a change in sign, they are exactly parallel. The SLoT doesn’t purport to be a conservation law. Why should the LCI?

  4. 4
    George L Farquhar says:

    Dr Dembski,
    Could I ask what has prompted your shift towards publishing your work in peer reviewed journals instead of your more usual avenue, books?

    I think it’s true to say that you have already made clear your reasons for publishing your work in books; has your opinion now changed in that regard?

  5. 5
    Atom says:

    serendipity wrote:

    the quantities in question neither increase nor decrease.

    while Dr. Dembski wrote:

    Strictly speaking, what is conserved then is not the actual inputs of information to make a search successful but the minimum information cost required for success

    The minimum information cost doesn’t change, so that would be the “quantit[y] in question [that] neither increase[s] nor decrease[s].” At least that’s how I read it.

    Atom

  6. 6
    Atom says:

    GLF,

    I think the only change has been a journal’s willingness to publish something from a well-known ID advocate.

    Dr. D can correct me if I’m wrong, but I’m guessing he’ll say the same thing. The reason we don’t see more pro-ID articles published is not because ID scientists don’t write papers or want them published. I’m sure if Nature or Science would allow a pro-ID article to go through their peer-review with an even chance of being published, most ID advocates would jump at the opportunity.

    Atom

  7. 7
    William Dembski says:

    George L. Farquhar: I assume you’re referring to my quote in the CHE: “I’ve just gotten kind of blase about submitting things to journals where you often wait two years to get things into print. And I find I can actually get the turnaround faster by writing a book and getting the ideas expressed there. My books sell well. I get a royalty. And the material gets read more.” (The Chronicle of Higher Education, December 21, 2001)

    That was then and this is now. I made this comment at the time that my center for ID at Baylor had been shut down by the Baylor administration (see here). That ordeal, in which a “peer review committee” set up a star chamber to destroy my center and discredit my research, left me quite sour about the prospects of peer review facilitating my program of ID research. Since then I’ve published about ten books and have five more on the way (all of which are finished and in production). My books are accomplishing their end, and now I do see the need to get out some technical peer-reviewed publications. Moreover, I’m encouraged that the engineering community is open to my ideas and willing to publish them.

  8. 8
    serendipity says:

    Atom writes:

    The minimum information cost doesn’t change, so that would be the “quantit[y] in question [that] neither increase[s] nor decrease[s].” At least that’s how I read it.

    Hi Atom,

    The problem is that you could say the same thing about entropy. The starting entropy of a system is conserved, though the overall amount may (and usually does) increase. Yet we don’t call the SLoT the “Law of Conservation of Entropy”. Why? Because it would be highly misleading, since entropy is not conserved overall.

    Likewise, it is misleading for Dembski and Marks to call their principle the “Law of Conservation of Information” when information is not conserved overall.

  9. 9
    William Dembski says:

    serendipity: I suppose we are just going to have to disagree about the appropriateness of referring to this as conservation. I would say, however, that it is quite different from entropy. Concentrate a gas in a corner of a box, and it WILL diffuse at a given rate and entropy will correspondingly increase. On the other hand, go back in time and track the information that enables a search to be successful, and you know that you’ll be finding at least a certain amount of information. But you don’t know whether it will be more.

    Also, it’s worth noting that the word “conservation” has been coming up in these discussions for some time now (see, for instance, the references to Peter Medawar [in the 1980s] and to Cullen Schaffer [in the 1990s] in our article “Life’s Conservation Law,” available on the publications page at http://www.evoinfo.org).

  10. 10
    Atom says:

    serendipity,

    Would you rather they be more specific and call it the “Law of Conservation of Minimum Information Cost”?

    They could do that, but it lacks the same ring.

    I’m just glad that the concept has a name and is now coming to light.

    Atom

  11. 11
    JayM says:

    Atom @6

    I’m sure if Nature or Science would allow a pro-ID article to go through their peer-review with an even chance of being published, most ID advocates would jump at the opportunity.

    Atom,

    Whether you realize it or not, your insinuation that peer-reviewed journals like Science and Nature reject articles solely because of the views of their authors is grossly insulting. Those two journals in particular have the highest standards of academic integrity.

    Are you aware of any papers supportive of ID that have been rejected by peer-reviewed journals out of hand? If so, it would be instructive for the authors to make those papers available on the web, so that everyone can see the unfair bias you are suggesting exists.

    I have looked for such papers, personally, assuming that the authors would want to make their research known. I have thus far found none.

    JJ

  12. 12
    PaV says:

    In [5], Atom provides the appropriate answer as to ‘what’ is conserved. What is variable here is the target and the amount of information to be outputted.

    Now one could argue, I suppose, that since both the target space and the output information are variable, then it is wrong to speak of conservation.

    But let’s remember that entropy—the very field from which our understanding of ‘energy’ developed—requires a “closed system”. Well, isn’t that a hedge? Where does one ‘find’ a ‘closed system’, except for those that are artificially kept? So, if enquirers into the laws of entropy are required to construct a closed system, then there is at least some parallel to an enquirer ‘constructing’ a ‘closed information system’ when he/she defines a target space and then decides on the amount of information to be outputted.

    Here’s another way of looking at conservation laws. Gravity is considered a conservative force; it does not change with time. Yet the gravitational potential at any point on the surface of the earth varies because of differences in altitude from one area of the world to another. So when you talk about gravity as a conservative force, this simply means that the difference in potential between initial position A and final position B is always the same, INDEPENDENT of the path selected for movement. Now, is the ‘difference’ between A and B always the same? No. Why? Because the gravitational field of the ‘system’ can change. IOW, if you lift the same object through the same difference in altitude on earth and on the moon, you will get distinctly different values for the ‘energy’ = ‘work’ needed to do this.

    Analogously, once an informational system is defined, via target and output requirements, then, INDEPENDENT of the search method used, the ‘same’ minimal information is needed to arrive at the output.

    I believe this is sufficiently parallel here.
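    The path-independence point above can be checked numerically. A minimal Python sketch (the mass, heights, and the two g values are made-up illustration numbers):

```python
# Work against gravity between two heights depends only on the endpoints
# and the field strength g, not on the path taken between them.
def work_against_gravity(mass, g, heights):
    """Sum the incremental work m*g*dh along a path given as successive heights."""
    return sum(mass * g * (h2 - h1) for h1, h2 in zip(heights, heights[1:]))

m = 2.0                        # kg (hypothetical)
g_earth, g_moon = 9.81, 1.62   # m/s^2

direct = [0.0, 10.0]             # straight up 10 m
detour = [0.0, 25.0, 5.0, 10.0]  # overshoot, come back down, climb again

# Same endpoints, same work: the path does not matter...
assert abs(work_against_gravity(m, g_earth, direct)
           - work_against_gravity(m, g_earth, detour)) < 1e-6

# ...but change the 'system' (earth -> moon) and the cost changes.
assert work_against_gravity(m, g_moon, direct) < work_against_gravity(m, g_earth, direct)
```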

  13. 13
    George L Farquhar says:

    Atom,

    I think the only change has been a journal’s willingness to publish something from a well-known ID advocate.

    Please correct me if I’m wrong, but as the journal is yet to be named, can we make assumptions about its willingness to publish content from well-known ID advocates, insofar as its normal content may be as “controversial” as the well-known ID advocates? I’m not saying it is, of course, or would have to be, but it’s currently an open, unresolved question AFAIK.

    I’m sure if Nature or Science would allow a pro-ID article to go through their peer-review with an even chance of being published, most ID advocates would jump at the opportunity.

    Would such papers, if submitted, have rejection letters detailing the reasons for rejection? Have ID advocates already attempted to do what you say (publish in Nature or Science), and if not, how do you know for certain they would be rejected out of hand? A positive attitude in this regard may pay dividends, whereas you can be certain that if no ID advocate attempts to publish a paper in Nature or Science, none will ever be published in Nature or Science.

    As they say for the Lottery, “you’ve got to be in it, to win it”.

  14. 14
    George L Farquhar says:

    Dr Dembski,
    Yes, that was the quote I was referring to. You say that your books have accomplished their end. What end is that?

    I’m encouraged that the engineering community is open to my ideas and willing to publish them.

    Many, if not most, of the most active debates and discussions on this board relate to biological evolution (and the limits thereof).

    Have you then put a hold on attempting to make headway with your ideas in the biological community?

  15. 15
    Atom says:

    JayM,

    See Walter ReMine’s ordeal on getting a simple article on Haldane’s Dilemma published.

    The vitriol I witness anytime ID is mentioned in scientific circles, combined with the way ID is misrepresented by various organizations (including in papers and editorials published in prestigious journals) as well as in the media, leaves little doubt about the hostile climate ID scientists currently face. Add to that the documented cases of reprisal for mere Darwin-doubting, and you can judge for yourself how likely the well-known journals would be to publish pro-ID papers.

    Look at the firestorm surrounding Sternberg after he allowed Dr. Stephen Meyer’s article to go through the review process. Most people don’t want to go through what Sternberg did, so I can’t really blame them.

    Things will continue this way for some time until people lose their fear of ID. You can only misrepresent something for so long; one by one, people eventually find out what a view really is. Doors will eventually open for ID scientists (even if not in our generation), and I have no doubt that eventually both pro-ID and anti-ID articles will regularly be printed in the journals. It is just a matter of time.

    Sorry if my comments were offensive, but the backlash ID scientists face is much more offensive. Hopefully the journals will prove they’re not biased against ID by allowing ID scientists to at least respond in their journals to papers directly concerning their work. That would be a step forward.

    Atom

  16. 16
    serendipity says:

    William Dembski writes:

    I would say, however, that it is quite different from entropy. Concentrate a gas in a corner of a box, and it WILL diffuse at a given rate and entropy will correspondingly increase.

    Yet if you distribute the gas molecules randomly, entropy will not increase.

    On the other hand, go back in time and track the information that enables a search to be successful, and you know that you’ll be finding at least a certain amount of information. But you don’t know whether it will be more.

    The same is true for entropy (with the sign reversed, of course): Go back in time, and you know that you’ll be finding at least a certain amount of entropy. But you don’t know whether it will be less.

    Since the situation is exactly parallel, why does the LCI merit the status of “conservation law” if the SLoT does not?

    Also, it’s worth noting that the word “conservation” has been coming up in these discussions for some time now…

    It’s also worth noting that the proposals of Medawar and Schaffer are not considered laws of nature by the majority of scientists.

  17. 17
    T M English says:

    Bill,

    Conservation of information says that the information that must be inputted into a search for it to successfully locate a target cannot fall below the information that a search outputs in successfully locating a target.

    Physical systems do not hop uniformly over their phase spaces. Uniform search is merely simulated in nature — one physical system controls and samples another. For instance, a person rolls a precision die over a flat surface, waits for it to come to rest, and then counts the pips on the top surface. Physical search is typically far from uniform, while physical simulation of uniform search requires that one system input information to another.

    The fact that material systems do not arbitrarily morph in configuration from one instant to the next (realize uniform search processes) does not give us warrant to believe that something input information to the cosmos (designed the “laws” of nature). You cannot frame physical constraints as the outcome of a repeatable experiment, and thus you generally have no basis for speaking of the physical probability of a physically constrained search process. Only when one casts a search process as a repeatable experiment can one refer to the physical chances of its doing what it does. You have not done this. And there would remain vexing philosophical problems, even if you did.

  18. 18
    JayM says:

    Atom @13

    See Walter ReMine’s ordeal on getting a simple article on Haldane’s Dilemma published.

    Was it rejected solely because of Mr. ReMine’s well-known views or did it just not meet the level of quality required by the journals to which he submitted it? Even very good papers are rejected by the most prestigious journals such as Nature and Science.

    The vitriol I witness anytime ID is mentioned in scientific circles, combined with the way ID is misrepresented by various organizations (including in papers and editorials published in prestigious journals) as well as in the media, leaves little doubt about the hostile climate ID scientists currently face. Add to that the documented cases of reprisal for mere Darwin-doubting, and you can judge for yourself how likely the well-known journals would be to publish pro-ID papers.

    Do you have evidence that any particular journal has unfairly rejected an otherwise high quality and appropriate paper solely because it supported ID? If so, can you provide a reference to that paper?

    Look at the firestorm surrounding Sternberg after he allowed Dr. Stephen Meyer’s article to go through the review process. Most people don’t want to go through what Sternberg did, so I can’t really blame them.

    I’ve read both sides of the Sternberg case and, while I know it isn’t the conventional wisdom here at UD, it appears that he did circumvent the usual publication guidelines for the journal he was editing.

    Things will continue this way for some time until people lose their fear of ID. You can only misrepresent something for so long; one by one, people eventually find out what a view really is. Doors will eventually open for ID scientists (even if not in our generation), and I have no doubt that eventually both pro-ID and anti-ID articles will regularly be printed in the journals. It is just a matter of time.

    Where are the rejected articles?

    Sorry if my comments were offensive, but the backlash ID scientists face is much more offensive. Hopefully the journals will prove they’re not biased against ID by allowing ID scientists to at least respond in their journals to papers directly concerning their work. That would be a step forward.

    Again, do you have any actual evidence of the unfair rejection you claim these journals practice? If you can’t provide the actual papers that were rejected and demonstrate that they were otherwise suitable for the journal and capable of passing peer review, then your claims are baseless. You shouldn’t make allegations as serious as you are without solid support.

    JJ

  19. 19
    PaV says:

    [15] TM English:

    While I don’t agree with your conclusion entirely, I think you’ve framed the issue of whether uniform probability distributions can be applied to phase spaces as we encounter them in nature about as well as I’ve seen it framed.

    But consider this in regards to ‘repeatable experiments’: is it not possible to conduct an experiment in which DNA nucleotides are combined to see whether there is any kind of preference shown to a particular nucleotide, or to particular strings of nucleotides? If none is observed, then would not uniform probabilities be applied in the same way as for the person throwing a die? The phase space of a protein that corresponds to a particular sequence of such nucleotides seems to be independent of the combinatorics involved in any given string of nucleotides.

  20. 20
    serendipity says:

    Atom asks:

    Would you rather they be more specific and call it the “Law of Conservation of Minimum Information Cost”?

    They could do that, but it lacks the same ring.

    Atom,

    When you’re proposing a law of nature, don’t you think accuracy is a little more important than whether the name of the law has a nice “ring” to it?

    In any case, the problems with the LCI go beyond the misuse of the word “conservation”. The term “information” is also used questionably.

    By “information”, Dembski and Marks mean “active information”, which is their own idiosyncratic invention. The rest of the world takes “information” to mean something quite different.

    It’s as if I were to propose a universal “Law of Obfuscation of Matter”, only to reveal that I was redefining both “obfuscation” and “matter” in ways that were unique to me.

    If it’s not about conservation and it’s not about information, then why call it the “Law of Conservation of Information?” At the very least, Dembski and Marks should drop the word “conservation” and substitute “active information” for “information.”

  21. 21
    Atom says:

    serendipity,

    Active Information is simply log_2(q/p) where p is the probability of blind, null search achieving success and q is the probability of success for an assisted search, such as evolutionary search with a suitable fitness function. This is a simple definition. Perhaps you can explain what you find unclear about it?

    As for the “conservation” aspect, Bill has made clear that it is this information baseline that is conserved. You disapprove of this and say it is similar to entropy. That’s fine; you’re free to call the law whatever you like. Dembski clearly stated his reasons for using “conservation,” citing past precedent among other things, so I won’t fault him for that. If you do, I don’t see the point in trying to convince you otherwise, as he’s already given his reasons.

    Come up with your own label for -log(|Q|/|O2|) >= log(q/p) (to borrow R0b’s concise formulation). I’ll continue to call it the LCI.

    Atom


  22. 22
    serendipity says:

    To see just how idiosyncratic “active information” is, consider a blind search and an augmented search, both of which successfully locate a given target. According to classic measures of information such as Shannon’s and Kolmogorov’s, both searches yield the same information (they found the same target, after all). To Dembski and Marks, the augmented search yields more information than the blind search, despite the fact that they both found the same target.

  23. 23
    Atom says:

    serendipity,

    Dembski and Marks begin with p, the probability of finding a target in a search space using a null, blind search. They take the negative log base 2 of this probability to define the endogenous information, or in simpler terms, the inherent “difficulty” of the search problem. They then consider an assisted search, which finds the target with probability q, where q > p. They take the negative log base 2 of q and define this as the exogenous information. The active information is then the difference between the endogenous and exogenous information, or log2(q/p). It is useful in measuring how much information the assisted search adds towards finding the target.

    The assisted search will find the target in fewer queries than blind search, which is why it is fitting that its information measure relative to the problem is greater than that of the null search.

    To use an everyday example, the probability of my finding my keys in my apartment by brute force is p. The probability of finding them using information about their location (my wife telling me where they are) is q. Furthermore, q >> p, and I will have to search fewer places once I have the information associated with q. Therefore, Dembski and Marks’ conventions work well when applied to real problems. We want to remember that the assisted search embodies more problem-specific information than the blind one, and their notation scheme reflects this.

    But why argue about notation? You’re free to come up with your own notation scheme. I won’t stop you.

    Atom
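    The keys example above can be put in code. A minimal Python sketch (the 1024 spots and the hint narrowing the search to 4 spots are made-up numbers for illustration):

```python
import math

# Atom's keys example with hypothetical numbers: 1024 equally likely spots.
p = 1 / 1024   # blind-search success probability per query
q = 1 / 4      # success probability once a hint narrows it to 4 spots

endogenous = -math.log2(p)       # difficulty of the bare problem: 10 bits
exogenous = -math.log2(q)        # difficulty remaining with the hint: 2 bits
active = endogenous - exogenous  # information the hint contributes: 8 bits

assert abs(active - math.log2(q / p)) < 1e-9  # identical to log2(q/p)
```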

  24. 24
    serendipity says:

    Atom asks:

    This is a simple definition. Perhaps you can explain what you find unclear about it?

    I haven’t said that the definition of “active information” is unclear. I’ve stated that the phrase “Law of Conservation of Information” is highly misleading.

    You disapprove of this and say it is similar to entropy.

    I’ve shown that it’s parallel to entropy, and that the difference Dr. Dembski cites is not a difference at all.

    Dembski clearly stated his reasons for using conservation, citing past precedent among other things, so I won’t fault him for that.

    He cited the precedent of two people who may or may not use the word “conservation” in the loose way that he does (I don’t have access to Medawar and Schaffer, so I can’t say). Compare that to the consensus among scientists that “conservation” refers to situations where a quantity neither increases nor decreases.

    Suppose we add the LCI to the pantheon of recognized conservation laws. Look what happens:

    Q. Is the law of conservation of mass/energy about mass/energy?

    A. Yes.

    Q. Is mass/energy conserved?

    A. Yes.

    Q. Is the law of conservation of charge about charge?

    A. Yes.

    Q. Is charge conserved?

    A. Yes.

    Q. Is the law of conservation of angular momentum about angular momentum?

    A. Yes.

    Q. Is angular momentum conserved?

    A. Yes.
    .
    .
    .
    Q. Is the law of conservation of information about information?

    A. Well, no. It only applies to active information.

    Q. Oh. Well, is “active information” conserved?

    A. Well, no. It can decrease.

    Q. What was the name of that law again?

  25. 25
    serendipity says:

    Atom writes:

    But why argue about notation? You’re free to come up with your own notation scheme. I won’t stop you.

    I’m not arguing about notation. I’m pointing out that the name “Law of Conservation of Information” is highly misleading.

    The name of the “law” is the topic of this thread, after all.

  26. 26
    Atom says:

    serendipity,

    I don’t think Dembski is basing his usage on just the two examples he mentioned, though I don’t think they should be dismissed either. If the precedent is there, even with two, it is still a precedent.

    Furthermore, I remember seeing many discussions on ISCID in its heyday about a “4th Law of Thermodynamics” (with relation to information) where many similar ideas were discussed and if I remember correctly, the phrase Conservation of Information was also used in association with those concepts. (I could be mistaken, but the phrase already sounded familiar to me when Dembski and Marks used it.) I’m sure Dembski can point to even more examples, but again, this isn’t something I want to waste time arguing over; if you don’t like the phrase, please come up with a better one and share it with others. If it is better than Dembski’s I’m sure it will catch on.

    Atom

  27. 27
    serendipity says:

    Atom writes:

    If the precedent is there, even with two, it is still a precedent.

    If a precedent set by two people counts as justification, then millions of ridiculous ideas are justified by precedent.

    For example, hundreds of thousands of Moonies think that Reverend Moon is the second coming of Christ. Do you accept that precedent?

    Furthermore, I remember seeing many discussions on ISCID in its heyday about a “4th Law of Thermodynamics” (with relation to information) where many similar ideas were discussed and if I remember correctly, the phrase Conservation of Information was also used in association with those concepts.

    The ISCID forums aren’t exactly the first place I’d go if I were trying to gauge the scientific zeitgeist.

    I have seen the phrase “conservation of information” used in reference to reversible computation. However, this usage is legitimate because information is actually conserved in those processes: the system contains the same amount of information after a reversible computation as it does before.

    In irreversible computations, information is destroyed and lost forever, which is another reason why a “Law of Conservation of Information” is inappropriate. Information, as we know it, is not conserved. Even “active information” is not conserved, as Dembski admits.

    …if you don’t like the phrase, please come up with a better one and share it with others.

    Since Dembski and Marks are proposing the law, it’s up to them to provide an accurate name for it. I am here to dispute the conclusion of Dembski’s opening post:

    Thus it seems to Robert Marks and me that the expression “conservation of information” is in fact appropriate.

    Because the LCI is neither about conservation nor about information, the name “Law of Conservation of Information” is inappropriate.
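    The reversible-computation point above can be illustrated with single bits. A minimal Python sketch (my own example, not drawn from any reference in the thread):

```python
# Reversible step: (a, b) -> (a, a XOR b). Invertible, so no information is lost.
def cnot(a, b):
    return a, a ^ b

def cnot_inverse(a, c):
    """XORing with a again recovers b exactly."""
    return a, a ^ c

for a in (0, 1):
    for b in (0, 1):
        assert cnot_inverse(*cnot(a, b)) == (a, b)  # inputs fully recoverable

# Irreversible step: (a, b) -> a AND b. Three distinct input pairs collapse
# onto the output 0, so the inputs cannot be recovered: information is destroyed.
preimages_of_zero = [(a, b) for a in (0, 1) for b in (0, 1) if (a & b) == 0]
assert len(preimages_of_zero) == 3
```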

  28. 28
    Atom says:

    seren,

    Aside from your unnecessary jab at ISCID, you’ve made your point. I wasn’t aware of a stronger existing precedent in computation theory. If that’s the case, your different usage would make sense.

    But if you’d like to have conversations with me in the future, you’ll steer clear of the “I’m-so-clever” little references to ISCID’s popularity and Moonies. I have limited time and prefer not to spend it on people who would think to insult me from behind a keyboard.

    Atom

  29. 29
    Atom says:

    PS And yes, I consider it insulting when you try to get cute with me in a way you’d never do in my presence. Have respect and keep it civil.

  30. 30
    serendipity says:

    Atom,

    I think you’re being oversensitive.

    I chose my Moonie example precisely because I knew it would seem ridiculous to almost every reader here (unless Jonathan Wells happens to be lurking), theists and atheists alike. That makes it the perfect demonstration of the fact that precedent by itself does not constitute justification.

    As for ISCID, I’m not speaking of its popularity. I’m speaking of the fact that it does not represent the scientific zeitgeist, and I stand by that characterization. Do you disagree?

  31. 31
    Nakashima says:

    Dr Dembski,

    I have to disagree with your conclusion. LCI should not be a marketing term. If you say it is the cost that is conserved, then call it the LCC.

  32. 32
    R0b says:

    I agree with serendipity that the LCI is no more or less a conservation law than the 2LoT, but I have no problem with either of them being labeled a conservation law.

    The term information, on the other hand, does seem to add confusion to Marks and Dembski’s account. They refer to active information as a measure of content (as well as the content itself), but refer to endogenous information as a measure of difficulty. We would expect endogenous and exogenous information to describe disjoint content, but what content, if any, do they refer to?

    If we look at it from a classical information standpoint, the content is the outcome of the event whose probability is being measured. Since endogenous and exogenous information measure the probability of the search succeeding, their content is boolean — a simple “yes” as opposed to a “no.” The confusing part is that this is not the information we seek when we search. It’s like when my wife asks me if I know where her keys are, and I say “yes”, pretending that she’s only interested in whether I know, and not in the location of the keys.

    The concepts seem much more straightforward when described in terms of probability. Consider the following fact:
    P(K)*P(S|K) <= P(S)
    That’s Marks and Dembski’s conservation principle. That is, the probability of having problem-specific knowledge AND succeeding with that knowledge is no greater than succeeding without that knowledge. Why couch this in terms of “information” when it’s perfectly clear in probabilistic terms?
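    As a quick numerical check of that inequality, here is a small Python sketch; the joint distribution below is made up purely for illustration (none of these numbers come from Marks and Dembski). The point is that P(K)*P(S|K) equals P(K and S), which can never exceed P(S):

```python
# Toy joint distribution over (knowledge K, success S); the numbers are
# hypothetical, chosen only to illustrate the inequality.
p = {
    (True, True): 0.30,    # P(K and S)
    (True, False): 0.10,   # P(K and not-S)
    (False, True): 0.05,   # P(not-K and S)
    (False, False): 0.55,  # P(not-K and not-S)
}

p_K = p[(True, True)] + p[(True, False)]    # P(K)   = 0.40
p_S = p[(True, True)] + p[(False, True)]    # P(S)   = 0.35
p_S_given_K = p[(True, True)] / p_K         # P(S|K) = 0.75

# P(K)*P(S|K) = P(K and S), and the event S contains (K and S),
# so the product can never exceed P(S).
assert p_K * p_S_given_K <= p_S
```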

  33. 33
    Jehu says:

    R0b

    That is, the probability of having problem-specific knowledge AND succeeding with that knowledge is no greater than succeeding without that knowledge.

    What? Is that a typo?

  34. 34
    Mark Frank says:

    Re #12

    “Here’s another way of looking at conservation laws. Gravity is considered a conserved force, that is, it does not change with time. Yet, the gravitational potential at any point on the surface of the earth varies because of the differences in altitude from one area of the world to another.”

    However, a given point on the earth’s surface has only one gravitational potential. A given outcome, e.g. a protein, can have different levels of “information” simultaneously depending on the target under consideration.

  35. 35
    Frost122585 says:

    I would like for Bill to explain shortly how this new work on COI fits in with the NFL (no free lunch) theorems. I think the conceptual idea that specified complexity cannot be purchased without intelligence is the correct thesis for this kind of statistical and mathematical side of ID. When I first read about NFL I was really taken aback at how brilliant a conceptual criticism it really was.

    I would however like the concept of specificity to be a little more thoroughly developed though.

  36. 36
    Mark Frank says:

    “I would however like the concept of specificity to be a little more thoroughly developed though.”

    Seconded – “information” is defined in terms of the probability of meeting a specified target. Therefore, without an objective definition of “specified” there is no objective definition of information.

    Is this still the most authoritative attempt to define specification?

  37. 37
    Frost122585 says:

    I think they have defined it a little but I think we can do better if more time is spent on it.

  38. 38
    Arthur Hunt says:

    On the ISCID boards a long time ago, I tossed out the idea that the vagaries of information in biology be viewed as work and not as a state variable.

    From the essay (sorry, I can’t seem to embed the link – go to ISCID and search for all essays by member 179, it’s the essay from March 10, 2002):

    “B. Which brings me to a second point. Usually, (in my reading, at least), information content is reflective of the informational entropy of a system. Entropy, in turn, is usually taken as a state variable – the informational entropy of, say, a protein is independent of the pathway by which the protein originated. The preceding indicates that complexity does not share this property. It follows (at least to me) that the property “complex specified information” (CSI) is not a state variable, and thus should not be rigorously equated with information per se. I would suggest that a better analogy to be used here is that of thermodynamic work. Work is a property that is pathway-dependent – the amount of work obtained in going from state A to state B is determined as much by pathway as the inherent thermodynamic properties of the initial and final states (although the poises of the state variables do affect the work that can be done). It seems (naively, to be sure) that CSI would be better defined in terms of some sort of informational “work”, rather than inherent information content. (This would take into account the pathway dependence of the assignment of complexity, as indicated in the preceding.)”

  39. 39
    Joseph says:

    I see the anti-IDists are still complaining about definitions.

    Yet their position doesn’t have anything that is rigorously defined.

    Ya know all you guys have to do to refute Dembski and Marks is to demonstrate that their idea of information is reducible to matter, energy, chance and necessity.

  40. 40
    R0b says:

    Dr. Dembski:

    Moreover, I’m encouraged that the engineering community is open to my ideas and willing to publish them.

    It bears noting that the two peer-reviewed papers make no controversial (i.e. ID) claims. The only way I can think of to connect those papers to ID is via the notion that intelligence creates information, while nature does not. Unfortunately, I see no logical or empirical support for that notion. As Atom’s example in 23 shows, we humans use problem-specific information, but on what basis is it claimed that we create it? On the contrary, unless we’re provided with such information, we can’t find targets any faster than random sampling.

  41. 41
    Jehu says:

    R0b,

    Can you reconcile these two statements that you made?

    On the contrary, unless we’re provided with such information, we can’t find targets any faster than random sampling.

    And

    That is, the probability of having problem-specific knowledge AND succeeding with that knowledge is no greater than succeeding without that knowledge.

    I am sorry but you seem to be contradicting yourself.

  42. 42
    R0b says:

    Jehu, I’m afraid I don’t see the contradiction. The former statement seems rather obvious to me, and the latter is a mathematical fact: P(K)*P(S|K) <= P(S). Which statement do you think is false?

    To see that Marks and Dembski’s conservation principle follows from this mathematical fact, we can rearrange the equation to get:
    P(K) <= P(S)/P(S|K)
    and take the negative log to render in information terms:
    I(K) >= I(S)-I(S|K)
    In Marks and Dembski’s terminology, this says that the information cost of the problem-specific knowledge is at least as great as the active info (that is, the endogenous info minus the exogenous info).

  43. 43
    R0b says:

    Sorry, the previous comment was misrendered because of less-thans and greater-thans. Here’s the second paragraph:

    To see that Marks and Dembski’s conservation principle follows from this mathematical fact, we can rearrange the equation to get:
    P(K) <= P(S)/P(S|K)
    and take the negative log to render in information terms:
    I(K) >= I(S)-I(S|K)
    In Marks and Dembski’s terminology, this says that the information cost of the problem-specific knowledge is at least as great as the active info (that is, the endogenous info minus the exogenous info).

  44. 44
    PaV says:

    [34]”However, a given point on the earth’s surface has only one gravitational potential. A given outcome e.g a protein can have different levels of “information” simultaneously depending on the target under consideration.”

    But potentials aren’t ‘conserved’, forces are. So an electron at a particular elevation on earth would have both an electrical potential and simultaneously a gravitational potential. Same particle, two potentials; and both forces are conserved.

  45. 45
    tragic mishap says:

    Conservation of information says that the information that must be inputted into a search for it to successfully locate a target cannot fall below the information that a search outputs in successfully locating a target.

    Strictly speaking, what is conserved then is not the actual inputs of information to make a search successful but the minimum information cost required for success.

    Thank you for the succinct statements, Dr. Dembski.

    Also, I find the conversation on this thread rather amusing. Please continue. Generally speaking the guy who does the work gets to choose the name. Hence, we have californium and einsteinium. Dembski would be well within his rights to name it after his fictional pet goat, Biff.

  46. 46
    Atom says:

    R0b,

    Please contact me off-list through my website contact form (atomthaimortal.com), if you’re able. Someone wants to give you credit for your work.

    Atom

  47. 47
    R0b says:

    Thanks Atom. In my email I mentioned that the logic behind the LCI could be stated in a single line. Here’s my attempt to do so. I’m posting it here not in hopes of a response, but so that I can forget about it and refer back to this comment if I need to.

    First, your observation regarding Marks and Dembski’s implied condition in defining higher-level search spaces leads to a fact that greatly simplifies things. Namely, the probability of success at a given level is independent of how many higher levels there are. That is, when we talk about the probability of success, we don’t need to specify whether the probability is based on a blind search, or on a search that was found by a blind search, or on a search that was found by a search that was found by a blind search, etc., because the probability is the same regardless.

    The LCI says that finding a good search which in turn finds the low-level target is no easier than simply finding the low-level target. By noting that the probability of finding the low-level target is the same in both cases, the above sentence becomes self-evident, since the former case has the added condition that the high-level target is also found. So the LCI is simply saying that finding targets A and B is no easier than finding target A.

    We can show formally how the above translates into the LCI.

    Event definitions:
    Lb: Low-level target found by blind search
    Lh: Low-level target found by a search that was found by a high-level blind search.
    Hb: High-level target found by a blind search

    Start with the statement above that finding targets A and B is no easier than finding target A. The following is true regardless of how Lh and Hb are defined:
    P(Lh & Hb) <= P(Lh)
    Restating:
    P(Lh|Hb)*P(Hb) <= P(Lh)
    Your observation is that P(Lb)=P(Lh), so:
    P(Lh|Hb)*P(Hb) <= P(Lb)

    And that is the LCI. We can also rearrange it:
    P(Hb) <= P(Lb)/P(Lh|Hb)
    And take the negative log to put in information notation:
    I(Hb) >= I(Lb)-I(Lh|Hb)

    And that’s the more familiar form of the LCI: The information cost of finding a search is at least as much as the active info (endogenous minus exogenous info) of that search.
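    To make the derivation concrete, here is a minimal Python sketch of a toy two-level search. The four-point space and the deterministic “searches” are my own hypothetical setup, not an example from the papers:

```python
import math

# A hypothetical two-level toy (mine, not from the Marks/Dembski papers):
# low-level space {0,1,2,3} with target {0}; the high-level space is the
# four deterministic searches, where search i simply picks point i.
low_space = [0, 1, 2, 3]
target = {0}
searches = list(low_space)   # identify search i with the point it picks

# Lb: blind low-level search.
p_Lb = len(target) / len(low_space)                        # 1/4

# Hb: blindly finding a "good" search, i.e. one that hits the target.
good = [s for s in searches if s in target]
p_Hb = len(good) / len(searches)                           # 1/4
p_Lh_given_Hb = 1.0          # a good search always succeeds here

# Lh: pick a search uniformly, then run it.
p_Lh = sum(1 for s in searches if s in target) / len(searches)
assert p_Lh == p_Lb          # the independence observation

def info(p):                 # information = -log2(probability)
    return -math.log2(p)

# LCI: I(Hb) >= I(Lb) - I(Lh|Hb); here 2 >= 2 - 0.
assert info(p_Hb) >= info(p_Lb) - info(p_Lh_given_Hb)
```

    In this toy the bound is met with equality: I(Hb) is 2 bits, and I(Lb) - I(Lh|Hb) is 2 - 0 bits.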

  48. 48
    Atom says:

    Hey R0b,

    Thanks for the one line version. Not to be a gadfly, but I followed your new proof until this line:

    Your observation is that P(Lb)=P(Lh), so:

    Even though this is meant to represent my condition, I am not following how it is equivalent, given your notation. Perhaps I’m misreading it.

    I take P(Lb) to be “Probability of finding low-level target T using blind search” and P(Lh) to be “Probability of finding low-level target T using assisted search.” If this is incorrect, please let me know.

    If it is correct, however, then I wouldn’t say that P(Lb) = P(Lh), since the point of the assisted search is to raise the probability of success by some factor. I would think P(Lh) > P(Lb).

    I’m sure I’m missing something, so any clarification and patience would be appreciated.

    Thanks,
    Atom

  49. 49
    Atom says:

    Addendum:

    I guess I missed this part

    Lh: Low-level target found by a search that was found by a high-level blind search.

    That would mean that it is possible that P(Lb) = P(Lh), but the steps to get there would still need to be filled in.

    Atom

  50. 50
    R0b says:

    Atom:

    If it is correct, however, then I wouldn’t say that P(Lb) = P(Lh), since the point of the assisted search is to raise the probability of success by some factor. I would think P(Lh) > P(Lb).

    Yes, your interpretation is exactly correct. But keep in mind that the “assisted” search is randomly pulled from a higher-level space of searches, and could either increase or decrease the likelihood of success at the lower level. If the higher-level target is a set of “good” searches, and if the higher-level search is successful, then your inequality P(Lh) > P(Lb) is certainly correct. But I’m not including those assumptions in Lh.

    When I say P(Lb)=P(Lh), I mean that all of the following methods have the same probability of finding the low-level target:
    1) Blindly select a point in the low-level space.
    2) Blindly select a search from the higher-level space and use it to select a point in the lower-level space.
    3) Blindly select a search from the 3rd-level space, use it to select a search from the 2nd-level space, and use that search to select a point in the lowest-level space.
    etc.

    This assumption seems to hold in all of Marks and Dembski’s latest examples. (Although it might not hold for examples IV.1, IV.2, and IV.3 in an older paper. I’m assuming that there are implied levels beyond those that are shown in these examples, and that these implied higher levels even things out.)
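    The sameness of the probability across levels can be checked exactly for a small uniform case. The three-level setup below is my own toy (uniform blind picks at every level, one deterministic search per element of the level below), not one of Marks and Dembski’s examples:

```python
from fractions import Fraction

# Hypothetical uniform three-level toy: 4 low-level points, target {0};
# each higher level has one deterministic search per element below it,
# and every blind pick is uniform.
points = range(4)
target = {0}

def hit(i):
    return 1 if i in target else 0

# Method 1: blindly select a low-level point.
p1 = Fraction(sum(hit(i) for i in points), 4)

# Method 2: blindly select a level-2 search, use it to select a point.
p2 = sum(Fraction(1, 4) * hit(i) for i in points)

# Method 3: blindly select a level-3 search, which selects a level-2
# search, which selects a point; uniform picks leave the odds unchanged.
p3 = sum(Fraction(1, 4) * 1 * hit(i) for i in points)

assert p1 == p2 == p3 == Fraction(1, 4)
```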

    I know my terminology and prose stink, but hopefully you can dig through the obscurity to find some logic inside. After all, the key insight came from you.

  51. 51
    R0b says:

    Atom:

    That would mean that it is possible that P(Lb) = P(Lh), but the steps to get there would still need to be filled in.

    One way to fill them in would be to break down P(Lh) as follows. P(Lh) is:
    the probability of selecting search 1 from the high-level space, times the probability of search 1 finding the low-level target,
    plus the probability of selecting search 2 from the high-level space, times the probability of search 2 finding the low-level target,
    plus the probability of selecting search 3 from the high-level space, times the probability of search 3 finding the low-level target,
    etc.

    Using my notation from here, the probability of selecting any given search from the high-level space is 1/|O2|, so we can factor 1/|O2| out of the sum, and the sum is sum(O2)/|O2|. And according to your insight, that is P(Lb).

    Alternately, we could argue from symmetry. In all of Marks and Dembski’s examples, the high-level space doesn’t favor any low-level points over any other low-level points, so every low-level point is equally likely to be selected. This means that the probability of low-level success is equivalent to that of a low-level blind search.
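    That factoring step can be sketched in a few lines of Python. The per-search success probabilities below are made-up numbers, with o2 standing in for the high-level space O2:

```python
# Hypothetical success probabilities P(low-level success | search i) for
# a high-level space O2 of five searches; the numbers are made up.
o2 = [0.1, 0.5, 0.0, 0.25, 0.15]

# P(Lh): each search is selected with probability 1/|O2|, then run.
p_Lh = sum(p * (1 / len(o2)) for p in o2)

# Factoring 1/|O2| out of the sum leaves sum(O2)/|O2|.
assert abs(p_Lh - sum(o2) / len(o2)) < 1e-12
```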
