
Eric Holloway: Why Bell’s theorem matters


Especially to conservation of information theory:

This brings us to a more general result known as the conservation of information. Design theorists William Dembski and Robert J. Marks defined the law of conservation of information in their 2009 paper “Conservation of Information in Search” and then proved the result in their follow-on 2010 paper “The Search for a Search”. The conservation of information (COI) says the expected active information produced by any combination of random and deterministic processes is guaranteed to be zero or less. Active information is itself the difference between two different probability distributions.

We can see that the conservation of information is a generalization of Bell’s no-go theorem in quantum mechanics. Both contrast two probability distributions and then take an expectation to get a hard limit. Finally, we test whether that limit is met by averaging a large number of physical measurements.
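To make the COI bound concrete, here is a minimal toy simulation. It assumes the standard definition of active information from the Dembski–Marks papers, I₊ = log₂(q/p), where p is the success probability of a blind uniform search and q is the success probability of an assisted search. Drawing the assisted search "arbitrarily" (a random distribution over the space) and averaging I₊ over many draws illustrates the expected active information coming out at or below zero. The space size N, the trial count, and the way random searches are drawn are all illustrative choices, not anything specified in the papers.

```python
import math
import random

def active_info(q, p):
    """Active information: log-ratio of assisted to baseline success probability."""
    return math.log2(q / p)

N = 10           # size of the toy search space (illustrative choice)
p = 1 / N        # baseline: unassisted uniform search over N items
trials = 100_000
random.seed(0)

total = 0.0
for _ in range(trials):
    # Draw an "arbitrary" assisted search: a random probability
    # distribution over the N items, normalized to sum to 1.
    weights = [random.random() for _ in range(N)]
    s = sum(weights)
    q = weights[0] / s  # probability this random search assigns to the target
    total += active_info(q, p)

avg = total / trials
print(f"average active information: {avg:.3f} bits")
```

On average q equals 1/N, but because the logarithm is concave, the average of log₂(q/p) lands below log₂ of the average, so the printed value is negative: an arbitrary search does worse, on average, than assuming nothing.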

Eric Holloway, “Why Is Bell’s Theorem Important for Conservation of Information?” at Mind Matters News

Further reading on information theory:

But is determinism true? Does science show that we are fated to want whatever we want? (Michael Egnor)

At the movies: can AI restore blurred images? Working with pixels, we can do remarkable things—as long as we are not asking for magic (Robert J. Marks)

Why information theory is like a good run. Information theory can help us understand a wide range of fields besides computers. (Eric Holloway)


COVID-19: When 900 bytes shut down the world. A great physicist warned us, information precedes matter and energy: Bit before it.

One Reply to “Eric Holloway: Why Bell’s theorem matters”

  1. Querius says:

    There are interesting observations and conclusions in the linked papers concerning information and searches. I’m not done with them, but the Search for a Search paper linked above includes the following statement:

    . . . an arbitrary search space structure will, on average, result in a worse search than assuming nothing and simply performing an unassisted search.

    I wonder what’s meant by “arbitrary” in the quoted statement. In my experience, a well-designed search structure itself encapsulates information in the form of contextual relationships that facilitate results by successive approximation.

    Two things come to mind in context of needing information to find information addressed by the paper: Bayesian logic and Taxonomic identification.

    Anyone have any insights?

