Uncommon Descent Serving The Intelligent Design Community

Does information theory support design in nature?


Eric Holloway argues at Mind Matters that design theorist William Dembski makes a convincing case, using accepted information theory principles relevant to computer science:

When I first began to look into intelligent design (ID) theory while I was considering becoming an atheist, I was struck by Bill Dembski’s claim that ID could be demonstrated mathematically through information theory. A number of authors who were experts in computer science and information theory disagreed with Dembski’s argument. They offered two criticisms: that he did not provide enough details to make the argument coherent and that he was making claims that were at odds with established information theory.

In online discussions, I pressed a number of them, including Jeffrey Shallit, Tom English, Joe Felsenstein, and Joshua Swamidass. I also read a number of their articles. But I have not been able to discover a precise reason why they think Dembski is wrong. Ironically, they actually tend to agree with Dembski when the topic lies within their respective realms of expertise. For example, in his rebuttal Shallit considered an idea which is very similar to the ID concept of “algorithmic specified complexity”. The critics tended to pounce when addressing Dembski’s claims outside their realms of expertise.

To better understand intelligent design’s relationship to information theory and thus get to the root of the controversy, I spent two and a half years studying information theory and associated topics during PhD studies with one of Dembski’s co-authors, Robert Marks. I expected to get some clarity on the theorems that would contradict Dembski’s argument. Instead, I found the opposite.

Intelligent design theory is sometimes said to lack any practical application. One straightforward application is that, because intelligence can create information and computation cannot, human interaction will improve computational performance.
More.

Also: at Mind Matters:

Would Google be happier if America were run more like China? This might be a good time to ask. A leaked internal discussion document, the “Cultural Context Report” (March 2018), admits a “shift toward censorship.” It characterizes free speech as a “utopian narrative,” pointing out that “As the tech companies have grown more dominant on the global stage, their intrinsically American values have come into conflict with some of the values and norms of other countries.”

Facebook’s old motto was “Move fast and break things.” With the current advertising scandal, it might be breaking itself. A tech consultant sums up the problem: “Sadly Facebook didn’t realize is that moving fast can break things…”

AI computer chips made simple Jonathan Bartlett: The artificial intelligence chips that run your computer are not especially difficult to understand. Increasingly, companies are integrating “AI chips” into their hardware products. What are these things, what do they do that is so special, and how are they being used?

The $60 billion medical data market is coming under scrutiny As a patient, you do not own the data and are not as anonymous as you think. Data management companies can come to know a great deal about you; they just don’t know your name—unless, of course, there is a breach of some kind. Time Magazine reported in 2017 that “Researchers have already re-identified people from anonymized profiles from hospital exit records, lists of Netflix customers, AOL online searchers, even GPS data of New York City taxi rides.” One would expect detailed medical data to be even more revelatory.

George Gilder explains what’s wrong with “Google Marxism”
In discussion with Mark Levin, host of Life, Liberty & Levin, on Fox TV: Marx’s great error, his real mistake, was to imagine that the industrial revolution of the 19th century, all those railways and “dark, satanic mills” and factories and turbines and the beginning of electricity, represented the final human achievement in productivity, so that in the future what would matter would be not the creation of wealth but the redistribution of wealth.

Do we just imagine design in nature? Or is seeing design fundamental to discovering and using nature’s secrets? Michael Egnor reflects on the way in which the 2018 Nobel Prize in Chemistry has so often gone to those who intuit or impose design or seek the purpose of things

Comments
We have a discussion going on over at TSZ on this topic: http://theskepticalzone.com/wp/breaking-the-law-of-information-non-growth/

EricMH
November 20, 2018 at 01:07 PM PDT

Thanks for the updates.

Mung
November 19, 2018 at 05:57 AM PDT

@Mung, as Tom English pointed out, I should have used the newer format for AMI: I(U:M) = K(U) - K(U|M*). The M* is the shortest program that outputs M, and it makes the bound have only a constant error. The previous form I used has a logarithmic error.

EricMH
November 18, 2018 at 10:38 AM PDT

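[For readers tracking the notation, here is a brief sketch of the standard algorithmic-information identities behind the two forms mentioned above, stated only up to the indicated error terms; this is background AIT, not a claim made in the comment itself.]

```latex
% M^* denotes a shortest program that outputs M.
\begin{align*}
I(U:M) &= K(U) - K(U \mid M^{*}) \;=\; K(U) + K(M) - K(U,M) + O(1), \\
I(U:M) &= K(U) - K(U \mid M) + O\bigl(\log K(U,M)\bigr).
\end{align*}
```
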
@Mung, Bill Cole and Joe Felsenstein, my Wigner argument is basically an elaboration on Levin's argument in his paper I keep referencing. For a more accessible overview of what Levin proved, see this Quora comment: https://www.quora.com/Will-mathematics-be-%E2%80%9Ccompleted%E2%80%9D-at-some-point-Will-there-be-a-time-when-there-is-nothing-more-to-add-to-the-body-of-mathematics-and-research-has-exhausted/answer/Claude-Taillefer?ch=10&share=13cbea14&srid=hW60T

EricMH
November 16, 2018 at 10:50 AM PDT

@Mung, yes algorithmic mutual information.

EricMH
November 14, 2018 at 04:16 AM PDT

EricMH:
The mutual information is I(U:M) = K(U) - K(U|M).

This is algorithmic mutual information?

Mung
November 13, 2018 at 05:27 PM PDT

For those interested in how Wigner's unreasonable effectiveness of math in the natural sciences is a form of mutual information, the basic idea is straightforward. Say U is the universe and M is mathematics. The mutual information is I(U:M) = K(U) - K(U|M). Since the universe can be described mathematically, that means math allows us to compress the universe, so K(U) > K(U|M). Thus, I(U:M) > 0, and there is mutual information between the universe and mathematics. Furthermore, since mathematics exists independently from the universe, it is an independent target, and Levin's law of information non-growth applies. If our explanation of the universe is restricted to naturalism, i.e. randomness and a universal Turing machine, then the expected mutual information between the universe and mathematics is 0. Therefore, since I(U:M) > 0, some other explanation, such as a halting oracle, is a better explanation for our universe.

EricMH
November 13, 2018 at 04:12 PM PDT

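[As an editorial aside for readers who want to play with the compression intuition in the comment above: a minimal sketch using zlib as a crude stand-in for Kolmogorov complexity. The corpus strings, the conditional-compression proxy, and the function names are illustrative assumptions, and the resulting numbers are only suggestive.]

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length in bits: a rough upper-bound proxy for K(s)."""
    return 8 * len(zlib.compress(s, 9))

def c_cond(s: bytes, m: bytes) -> int:
    """Crude proxy for K(s|m): extra bits needed for s once m has been seen."""
    return max(c(m + s) - c(m), 0)

# Toy stand-ins: U plays the role of "the universe" (structured data),
# M plays the role of "mathematics" (a short description sharing that structure).
U = b"3.14159265358979 2.71828182845905 1.41421356237310 " * 20
M = b"3.14159265358979 2.71828182845905 1.41421356237310"

mutual = c(U) - c_cond(U, M)   # analogue of I(U:M) = K(U) - K(U|M)
print("K(U)   ~", c(U), "bits")
print("K(U|M) ~", c_cond(U, M), "bits")
print("I(U:M) ~", mutual, "bits (positive means M helps compress U)")
```
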
Hi Eric
At any rate, this latest exchange has confirmed my almost a decade old observation that the skeptics either agree with ID claims, or miss the obvious connections with information theory, even though they are “information theory experts.” The supposed controversy regarding Dembski’s theory of intelligent design is without merit. His claims are entirely consistent with well established information theory.
I agree that all I see is hand waving. To a man all the objections are based on politics.

bill cole
November 13, 2018 at 12:50 PM PDT

At any rate, this latest exchange has confirmed my almost a decade old observation that the skeptics either agree with ID claims, or miss the obvious connections with information theory, even though they are "information theory experts." The supposed controversy regarding Dembski's theory of intelligent design is without merit. His claims are entirely consistent with well established information theory.

EricMH
November 13, 2018 at 11:35 AM PDT

Another statement at odds with information theory over at TSZ: http://theskepticalzone.com/wp/eric-holloway-needs-our-help-new-post-at-pandas-thumb/comment-page-3/#comment-237134

> Conservation of information might make sense in terms of algorithmic information theory. But it makes no sense if we are talking about Shannon information, where we can generate as much new information as we want just because we decide to communicate.

In Shannon information there is the data processing inequality, which says mutual information is conserved for a Markov chain X -> Y -> Z, such that I(X;Y) >= I(X;Z). It is easy to convert this to Dembski's CSI setting. If we set X to the specification and Y to the chance hypothesis, then per detachability P(Z|Y,X) = P(Z|Y), which is the Markov property, and thus the DPI applies to CSI. Again, I do not understand why information theory experts do not see the connections between Dembski's theory and well established information theory identities. It seems they are not trying very hard to understand Dembski's work in the best possible light.

EricMH
November 13, 2018 at 08:22 AM PDT

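[For readers who want to see the data processing inequality concretely, here is a small self-contained check on an arbitrary toy Markov chain X -> Y -> Z; the distributions are illustrative assumptions, not anything from the thread.]

```python
import itertools
from math import log2

# A toy Markov chain X -> Y -> Z: p(x, y, z) = p(x) * p(y|x) * p(z|y).
p_x = {0: 0.5, 1: 0.5}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # noisy channel X -> Y
p_z_given_y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}  # noisy channel Y -> Z

joint = {}
for x, y, z in itertools.product((0, 1), repeat=3):
    joint[(x, y, z)] = p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

def marginal(indices):
    """Marginalize the joint distribution onto the given coordinate positions."""
    out = {}
    for xyz, p in joint.items():
        key = tuple(xyz[i] for i in indices)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(pab):
    """I(A;B) in bits from a joint distribution {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in pab.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in pab.items() if p > 0)

i_xy = mutual_information(marginal((0, 1)))
i_xz = mutual_information(marginal((0, 2)))
print(f"I(X;Y) = {i_xy:.4f} bits")
print(f"I(X;Z) = {i_xz:.4f} bits")
print("Data processing inequality I(X;Y) >= I(X;Z):", i_xy >= i_xz)
```
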
ET: "Unless there is pre-biotic natural selection- which has been called a contradiction in terms- when NS is discussed it is always in the presence of existing CSI/ FSCI/O." Of course. NS is indeed a very limited process that "recycles" in slightly different forms the FI already existing in the reproducing organism. It appears to be an adaptive optimization of some aspects in some cases, but in reality it is only a variation of the general balance of some given functional program, that already existed, given maybe a few bits of information variation in the environment. So, bacteria can change in a limited repertoire that expresses in different modalities the same FI that already was in the bacterium. They can lose a little bit of information in a molecule to become resistant to an antibiotic in the environment, they can change their ability to control citrate metabolism under strong environmental stress, they can use some existing molecule to digest nylon instead of penicillin when nylon becomes suddenly abundant. All those "optimizations" are simply random variations in the global expression of a huge program with huge FI that essentially defines that specific organism as a specific form of living organism, an already existing designed program which goes on implementing its original function: to make the organism's life possible and to make the organism survive and reproduce. Probably, the global FI in that organism does not really increase, even if there are very small local increases for locally defined functions, always in the range of those few bits that are allowed by the essential limits of the probabilistic resource involved in biologic RV. The true jumps in FI that we observe so often in natural history are instead always examples of new plans involving new functional configurations that change deeply and in a coordinated way the whole plan of the existing organisms, adding new proteins or deeply re-engineered proteins, new control networks, new structures, and so on, IOWs a completely different perspective that can only be implemented controlling hundreds, thousands and even millions of functional bits in the process (see for example the transition to vertebrates, involving about 1.7 millions of functional bits).gpuccio
November 13, 2018
November
11
Nov
13
13
2018
12:31 AM
12
12
31
AM
PDT
Two points, guys: 1- The first step of NS is RV-
The first step in selection, the production of genetic variation, is almost exclusively a chance phenomenon except that the nature of the changes at a given locus is strongly constrained.- Ernst Mayr "What Evolution Is" page 281
And it has to be heritable variation. "RV + NS" is redundant, like PIN number; ATM machine; DOA on arrival.

2- Unless there is pre-biotic natural selection- which has been called a contradiction in terms- when NS is discussed it is always in the presence of existing CSI/ FSCI/O.

ET
November 12, 2018 at 04:33 PM PDT

Eric, "Rather, the FI exists in the range of possibilities covered by RV+NS." It is down to interpretation. If there is a local optimum within the reach of RV+NS of a given system, it can be found by random walk. If this is what you mean, we are talking about the same thing.EugeneS
November 12, 2018
November
11
Nov
12
12
2018
03:33 PM
3
03
33
PM
PDT
Regarding BruceS' question about how the conservation of information applies to evolution if the environment is included in the evolution term:

1. Since I(E,X:Y) >= I(U(E,X):Y), evolution does nothing to explain the mutual information.

2. Including the environment does not necessarily improve things. If K(K(E)|E) ~ 0, then including the environment is helpful. On the other hand, if K(K(E)|E) ~ log K(E), then including the environment in evolution does not help increase the mutual information, so the original form, I(X:Y) >= I(U(E,X):Y), is still valid.

EricMH
November 12, 2018 at 12:27 PM PDT

Continuing the trend of the skeptics agreeing with ID whenever they speak in their domain of expertise, here is Dr. Tom English over at TSZ: http://theskepticalzone.com/wp/eric-holloway-needs-our-help-new-post-at-pandas-thumb/comment-page-2/#comment-236757

> The crucial point is that there are more ways for a discrete, deterministic universe to go than there are algorithms (finite sequences over a finite set of symbols) to describe how it goes.

This is exactly the point that Jonathan Bartlett argues in the latest Blyth Institute book on alternatives to methodological naturalism. Another interesting point Dr. English makes:

> I’m not aware of any biologist having claimed that evolution creates “new information” (whatever that is) out of nothing.

So, again, this is what ID and Levin's law of information non-growth argue, and what Dr. Marks and I recently proved. Looks like Dr. English is on the same page with ID, at least in the formal theory he writes if not in his allegiances. Now, if he wants to call our theories "naturalism", that's fine by me. The empirical and theoretical formalisms are what matter, not the terms that Dr. English wants to use. If it'll make Dr. English happy, we can call our theory the naturalistic theory of ID, with a natural soul and a natural creator of everything, and the natural teleological guidance of evolution and/or the natural special creation of creatures. We can all believe in a natural Christianity, founded by the natural Jesus who is both natural man and natural God.

EricMH
November 12, 2018 at 12:18 PM PDT

@ES, well I wouldn't say it "produces" the FI. Rather, the FI exists in the range of possibilities covered by RV+NS. If you take the expectation of ASC, it is always non-positive.

EricMH
November 12, 2018 at 12:12 PM PDT

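[A sketch of why the expectation claim above holds, writing ASC(x) = -log2 p(x) - K(x), suppressing any conditioning context, and using the Kraft inequality cited in a later comment below together with Jensen's inequality; this is an editorial illustration, not part of the comment.]

```latex
\begin{align*}
\mathbb{E}_p[\mathrm{ASC}(X)]
  &= \sum_x p(x)\bigl(-\log_2 p(x) - K(x)\bigr)
   = \sum_x p(x)\,\log_2\!\frac{2^{-K(x)}}{p(x)} \\
  &\le \log_2 \sum_x p(x)\,\frac{2^{-K(x)}}{p(x)}
   \quad\text{(Jensen, concavity of } \log_2\text{)} \\
  &= \log_2 \sum_x 2^{-K(x)} \;\le\; \log_2 1 \;=\; 0
   \quad\text{(Kraft inequality for prefix-free } K\text{).}
\end{align*}
```
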
Eric,

Of course. RV+NS can produce very low quantities of functional information, noise really. Thanks for the reference!

EugeneS
November 12, 2018 at 11:46 AM PDT

@ES, even with a complex starting point, Darwinian evolution is limited in how much variety it can produce. See this proof I published with Dr. Marks:

Observation of Unbounded Novelty in Evolutionary Algorithms is Unknowable
https://robertmarks.org/REPRINTS/2018_Observation-of-Unbounded-Novelty.pdf

> Open ended evolution seeks computational structures whereby creation of unbounded diversity and novelty are possible. However, research has run into a problem known as the “novelty plateau” where further creation of novelty is not observed. Using standard algorithmic information theory and Chaitin’s Incompleteness Theorem, we prove no algorithm can detect unlimited novelty. Therefore observation of unbounded novelty in computer evolutionary programs is nonalgorithmic and, in this sense, unknowable.

EricMH
November 12, 2018 at 07:06 AM PDT

KF 533, I have no problem with that, as you may already know :) Rather, what I am saying is that some of those biologists I converse with are ready to accept that intelligence was necessary to achieve that. However, they claim that, given this complex starting point, everything else is automatically achievable in the Darwinian manner.

EugeneS
November 12, 2018 at 05:31 AM PDT

Nonlin, with all due respect, that is a rhetorically evasive, dismissive one-liner in response to a substantial discussion. KF

PS: For reference, 22, in response to an earlier bit of dismissiveness:

>>symbols implies a communication context where distinct symbols in context convey meaningful or functional messages. ASCII-based alphanumeric text strings in English, as in this thread, are an example of this in action. Similarly, D/RNA has meaningful coded symbols used to assemble proteins step by step, again not controversial. If you will go to my Sec A in my always linked you will see an outline of how we get to Shannon Entropy, which is indeed the weighted average info per symbol. This is fairly standard stuff dating to Shannon's work in 1948. The similarity to the Gibbs expression for entropy created much interest, and as I noted, Jaynes et al have developed an informational perspective on thermodynamics that makes sense of it. Thermodynamic entropy is effectively an index of missing info to specify the microstate for a system in a given macrostate. For, there are many ways for particles, momentum/energy etc. to be arranged consistent with a macrostate.

As for the clip from my online note, I am simply speaking to the longstanding trichotomy of law of mechanical necessity vs blind chance contingency on closely similar initial conditions, vs functionally specific complex organisation and associated information. While we can have an onward discussion of how laws and circumstances of the world are designed [per fine tuning] or that randomness can be used as part of a design, the issue here is to discern, on empirically grounded reliable sign, whether the more or less IMMEDIATE cause of an effect is intelligently directed configuration or is plausibly explained on blind chance and/or mechanical necessity.>>

kairosfocus
November 11, 2018 at 02:48 PM PDT

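[For readers unfamiliar with the phrase "weighted average info per symbol" used above, a minimal sketch of the Shannon entropy computation; the sample string is an arbitrary illustration.]

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Weighted average information per symbol: H = -sum_i p_i * log2(p_i)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

message = "functionally specific complex organisation"
print(f"H = {shannon_entropy(message):.3f} bits per symbol")
```
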
EricMH: Thank you! I will be happy to know your thoughts. :)

gpuccio
November 11, 2018 at 07:23 AM PDT

@gpuccio, thanks for your very in-depth responses. I will be analyzing everything you wrote to see if FI is addressed by Levin's information conservation.

EricMH
November 11, 2018 at 04:23 AM PDT

kairosfocus @522

Sorry, you keep doing your monologue. I already explained, but let me know if you have any questions or counterarguments to my comments.

Nonlin.org
November 10, 2018 at 06:04 PM PDT

@Mung, not sure about the AIT1-3 designation. ASC uses prefix-free Kolmogorov complexity for the specification term. The nice thing about prefix-free Kolmogorov complexity is that it satisfies the Kraft inequality, sum 2^-K(x) < 1, so we can prove that the probability that ASC >= a is <= 2^-a. Thus, high ASC is good evidence that the chance hypothesis is false.

EricMH
November 10, 2018 at 11:46 AM PDT

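[A sketch of how the tail bound in the comment above follows from the Kraft inequality, again writing ASC(x) = -log2 p(x) - K(x) and suppressing the conditioning context; an editorial illustration, not part of the comment.]

```latex
\begin{align*}
\mathrm{ASC}(x) \ge a \;&\Longleftrightarrow\; p(x) \le 2^{-a - K(x)}, \\
P\bigl(\mathrm{ASC}(X) \ge a\bigr)
  &= \sum_{x:\,\mathrm{ASC}(x) \ge a} p(x)
  \;\le\; \sum_{x:\,\mathrm{ASC}(x) \ge a} 2^{-a - K(x)} \\
  &\le\; 2^{-a} \sum_{x} 2^{-K(x)} \;\le\; 2^{-a}
  \qquad\text{(Kraft inequality).}
\end{align*}
```
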
EricMH:
ASC has a ‘self information’ quantity which is the -log of the probability for X according to some chance hypothesis.
ASC is Algorithmic Specified Complexity? Is ASC based on Algorithmic Information Theory, which in turn is based on Kolmogorov Complexity? Chaitin identifies AIT1, AIT2, and AIT3. Of AIT1 he says it "is only of historical or pedagogic interest."

Mung
November 10, 2018 at 11:30 AM PDT

EricMH:
Also, I saw that you only found Levin’s Russian publication. Here is his English paper that talks about conservation of information in section 1.2.
Levin, L.A. appears on nine pages in the book Information and Randomness. I'll have to check them out.

Mung
November 10, 2018 at 11:22 AM PDT

Thanks Bill, that is encouraging :) I am happy to be disproven, but as you say it needs to be on a technical level.

EricMH
November 10, 2018 at 09:21 AM PDT

Hi Eric,

I just read part of Shallit's argument and his first two main points are philosophical. I have to give credit to Joshua for engaging you technically, as he is the only one that is doing this. I don't find his argument a credible challenge, but he is engaging on a technical level.

bill cole
November 10, 2018 at 08:54 AM PDT

This line is incorrect in my response to Felsenstein: I(f,i:j) > I(U(f.i):j) should be I(f,i:y) > I(U(f.i):y). U is a universal Turing machine applied to the concatenation of f and i, which is the same as executing function f on input i.

EricMH
November 10, 2018 at 08:06 AM PDT

@bill cole, while shared proteins are a form of mutual information, they are not the sort that is required for my argument. The target Y has to be independent from the formation of X. A better example is the effectiveness of mathematics for describing the natural world. Mathematics is an independent target because the physical world did not create mathematics.

EricMH
November 10, 2018 at 08:04 AM PDT

