To facilitate discussion, we are publishing the abstracts of the 24 papers from the Cornell Conference on the Origin of Biological Information here at Uncommon Descent, with cumulative links to previous papers at the bottom of each page.

Here is the Abstract for Chapter 3:

This paper provides a general framework for understanding targeted search. It begins by defining the search matrix, which makes explicit the sources of information that can affect search progress. The search matrix enables a search to be represented as a probability measure on the original search space. This representation facilitates tracking the information cost incurred by successful search (success being defined as finding the target). To categorize such costs, various information and efficiency measures are defined, notably, active information. Conservation of information characterizes these costs and is precisely formulated via two theorems, one restricted (proved in previous work of ours), the other general (proved for the first time here). The restricted version assumes a uniform probability search baseline, the general, an arbitrary probability search baseline. When a search with probability q of success displaces a baseline search with probability p of success where q > p, conservation of information states that raising the probability of successful search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). Conservation of information shows that information, like money, obeys strict accounting principles. More.
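To make the accounting concrete, here is a minimal sketch of the log(q/p) cost in Python. The function name and the example numbers are ours, not the paper's, and we take logarithms base 2 so the cost comes out in bits:

```python
import math

def active_information_bits(p, q):
    """Bits of active information when a search with success probability q
    displaces a baseline search with success probability p (q > p)."""
    return math.log2(q / p)

# Illustrative numbers: a baseline blind search succeeds with p = 0.001,
# while an informed search succeeds with q = 0.1.  Conservation of
# information says this 100-fold boost must be paid for with at least
# log2(q/p) bits of information obtained from elsewhere.
boost_bits = active_information_bits(0.001, 0.1)
```

On these numbers, the hundredfold boost from p = 0.001 to q = 0.1 carries a price of log2(100), roughly 6.64 bits.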

*Note:* All conference papers here.

*See also:* Origin of Biological Information conference: Its goals

*Open Mike:* Origin of Biological Information conference: Origin of life studies flatlined

*Open Mike:* Cornell OBI Conference—Can you answer these conundrums about information?

*Open Mike:* Cornell OBI Conference—Is a new definition of information needed for biology? (Chapter 2)

*Open Mike:* Cornell OBI Conference—New definition of information proposed: Universal Information (Chapter 2)

Thanks for starting this thread – I think this is a very interesting topic, and I hope that the authors of the article chime in to address some of the questions that will certainly be raised in the comments. In fact, I asked the authors via email to do so!

But perhaps to start the discussion: what is “information cost”? The authors use it for the probability of finding a search that is effective at finding a certain target. They then state that this probability, multiplied by the probability of finding the target via that search, is less than (or equal to) the probability of finding the target via uninformed random search. This result seems to be trivial (see my post Please show all your work for full credit…).

Pleased to see Winston Ewert’s name spelled correctly! 🙂
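One way to see why the bound can look trivial is a toy simulation (my own construction, not from the paper): if the “search for a search” is itself blind, then randomly choosing a search and running it does no better, on average, than blind search on the original space.

```python
import random

def meta_search_success(n_items, trials=100_000, seed=0):
    """Draw a 'search' (here just a fixed guess) uniformly at random
    and run it once; repeat to estimate the overall success rate."""
    rng = random.Random(seed)
    target = 0  # the target item, fixed in advance
    hits = sum(rng.randrange(n_items) == target for _ in range(trials))
    return hits / trials

# With 10 items the blind-search baseline is p = 0.1; choosing the
# search blindly recovers roughly that same success rate.
rate = meta_search_success(10)
```

Any meta-search that beats this baseline must itself be informed, which is where the information cost enters.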

Well, it would have been more work reinventing him. – News (O’Leary)

The paper describes the problem as if it were a computer search algorithm, where the “cost” is time, CPU cycles, or perhaps the electricity needed to accomplish the search.

But there is a sense in which biology and chemistry have to search as well. Enzymatic activity, for example, requires the catalyst and the reactant to find each other in the right geometry. If there were a way to pre-align the reactants, then enzymatic activity would be boosted. So chemically, “cost” could be considered as the speed or efficiency or activity of an enzyme.

In physics, there are processes that “tunnel” through a potential barrier. So for example, the alpha particle tunnels out of the nucleus in radioactive decay, where the probability of tunnelling falls off exponentially with the width and height of the barrier. Information might consist of external potentials that can increase or hinder the tunnelling probability: for example, a neutrino field gravitationally bound to the Sun, or a source of EM fields such as a gamma ray that can substantially alter the height of the barrier.
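That exponential sensitivity to barrier width and height can be sketched with the standard opaque-barrier approximation for a rectangular barrier; the particle and barrier numbers below are purely illustrative.

```python
import math

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s

def tunnel_probability(mass, energy, barrier_height, width):
    """Transmission through a rectangular barrier in the opaque-barrier
    limit: T ~ exp(-2*kappa*width), with kappa = sqrt(2m(V - E))/hbar."""
    kappa = math.sqrt(2 * mass * (barrier_height - energy)) / HBAR
    return math.exp(-2 * kappa * width)

# Toy numbers: an electron (9.11e-31 kg) with 1 eV of energy meeting
# a 2 eV barrier 1 nm wide.
eV = 1.602e-19  # joules per electron-volt
T = tunnel_probability(9.11e-31, 1 * eV, 2 * eV, 1e-9)
```

Doubling the barrier width squares the already-tiny transmission factor, which is why even small changes to the barrier (the external “information” in the comment above) can alter the rate dramatically.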

So Marks and Dembski’s work is really discussing not just computer searches, but the speed of many physical and chemical reactions, and the influence of external “information” on those reactions.