Uncommon Descent Serving The Intelligent Design Community

Does information theory support design in nature?


Eric Holloway argues at Mind Matters that design theorist William Dembski makes a convincing case, using accepted information theory principles relevant to computer science:

When I first began to look into intelligent design (ID) theory while I was considering becoming an atheist, I was struck by Bill Dembski’s claim that ID could be demonstrated mathematically through information theory. A number of authors who were experts in computer science and information theory disagreed with Dembski’s argument. They offered two criticisms: that he did not provide enough details to make the argument coherent and that he was making claims that were at odds with established information theory.

In online discussions, I pressed a number of them, including Jeffrey Shallit, Tom English, Joe Felsenstein, and Joshua Swamidass. I also read a number of their articles. But I have not been able to discover a precise reason why they think Dembski is wrong. Ironically, they actually tend to agree with Dembski when the topic lies within their respective realms of expertise. For example, in his rebuttal Shallit considered an idea which is very similar to the ID concept of “algorithmic specified complexity”. The critics tended to pounce when addressing Dembski’s claims outside their realms of expertise.

To better understand intelligent design’s relationship to information theory and thus get to the root of the controversy, I spent two and a half years studying information theory and associated topics during PhD studies with one of Dembski’s co-authors, Robert Marks. I expected to get some clarity on the theorems that would contradict Dembski’s argument. Instead, I found the opposite.

Intelligent design theory is sometimes said to lack any practical application. One straightforward application is that, because intelligence can create information and computation cannot, human interaction will improve computational performance.
More.

Also: at Mind Matters:

Would Google be happier if America were run more like China? This might be a good time to ask. A leaked internal discussion document, the “Cultural Context Report” (March 2018), admits a “shift toward censorship.” It characterizes free speech as a “utopian narrative,” pointing out that “As the tech companies have grown more dominant on the global stage, their intrinsically American values have come into conflict with some of the values and norms of other countries.”

Facebook’s old motto was “Move fast and break things.” With the current advertising scandal, it might be breaking itself. A tech consultant sums up the problem: “Sadly Facebook didn’t realize is that moving fast can break things…”

AI computer chips made simple Jonathan Bartlett: The artificial intelligence chips that run your computer are not especially difficult to understand. Increasingly, companies are integrating “AI chips” into their hardware products. What are these things, what do they do that is so special, and how are they being used?

The $60 billion medical data market is coming under scrutiny As a patient, you do not own the data and are not as anonymous as you think. Data management companies can come to know a great deal about you; they just don’t know your name—unless, of course, there is a breach of some kind. Time Magazine reported in 2017 that “Researchers have already re-identified people from anonymized profiles from hospital exit records, lists of Netflix customers, AOL online searchers, even GPS data of New York City taxi rides.” One would expect detailed medical data to be even more revelatory.

George Gilder explains what’s wrong with “Google Marxism”
In discussion with Mark Levin, host of Life, Liberty & Levin on Fox TV: Marx’s great error, his real mistake, was to imagine that the industrial revolution of the 19th century, all those railways and “dark, satanic mills” and factories and turbines and the beginning of electricity, represented the final human achievement in productivity, so that in the future what would matter was not the creation of wealth but the redistribution of wealth.

Do we just imagine design in nature? Or is seeing design fundamental to discovering and using nature’s secrets? Michael Egnor reflects on the way in which the Nobel Prize in Chemistry, most recently in 2018, has so often gone to those who intuit or impose design or seek the purpose of things.

Comments
@EricMH. Thanks for the quote from Shannon.
If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely.
He is talking about the number of messages, not about the probability of a given message. When he says "this number" he means the number of messages, not their probability. This is the mistake that Swamidass makes. Given -log p, what does the p refer to? If it does not refer to the number of messages then it's not an information measure as defined by Shannon. When I see -log p, to me the p stands for a probability. What is it the probability of? My point here is that you wrote: "Shannon proposes -log p as a measure of information." I just don't see that. What am I missing?
Mung
October 25, 2018, 07:11 AM PDT
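A note for readers following this exchange: in the equiprobable case Shannon describes, the two readings coincide. If the set contains N messages and all choices are equally likely, then each message has probability p = 1/N, so -log p = -log(1/N) = log N, which is precisely a (logarithmic, hence monotonic) function of the number of messages. For example, with 8 equally likely messages, -log2(1/8) = log2 8 = 3 bits. The two readings come apart only when the messages are not equally likely, which is the generalization Shannon goes on to mention.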
infotropy - love that one too!
Mung
October 25, 2018, 06:56 AM PDT
gizmeter. I love it!
Mung
October 25, 2018, 06:54 AM PDT
EricMH @ 23 -
[CSI] is an objective property of an object, regardless of whether we can measure it or not.
But the specification in CSI is done by people, is it not? So how do you make that objective? If the specification is defined in two different ways by two different researchers (if they need names, I guess we can call them Bob and Bill), which one is the property of the object?
Bob O'H
October 25, 2018, 05:40 AM PDT
@daveS, yes that's a good question. All definitions of information measures I know of are in reference to some possible generating source. Shannon uses a random process. Kolmogorov uses a Turing machine. So, the external reference seems inescapable, at least within current theory.

On the other hand, all the external definitions are usually related to an intrinsic property of the object: how many parts it has. In general, the more parts, the more information capacity. The second aspect is the arrangement of the parts. Certain arrangements possess more information content than others. The closest I've seen to an entirely intrinsic definition of information content is the Kolmogorov minimal sufficient statistic, which separates the Kolmogorov complexity of an object into noise and signal. It still is defined in reference to an external Turing machine, however.

All that being said, I don't think requiring an external reference is a mark against information measures. Every metric I can think of is in reference to some external reference. Measurement, in general, is about the relationship between two things. So, if external reference is a problem for information theory, it is a problem for every form of measurement we have.
EricMH
October 25, 2018, 05:29 AM PDT
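EricMH's point about external references can be illustrated with a small Python sketch (an editorial illustration, not code from the thread): approximating descriptive complexity by compressed length gives different numbers depending on which compressor, i.e. which reference machine, is chosen, yet the orderly string scores lower than the disorderly one under either reference.

    import bz2
    import os
    import zlib

    orderly = b"ab" * 50          # highly patterned, hence compressible
    disorderly = os.urandom(100)  # incompressible with overwhelming probability

    # Compressed length is a crude upper bound on Kolmogorov complexity,
    # but the bound depends on the chosen "reference machine" (compressor).
    for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
        print(name, len(compress(orderly)), len(compress(disorderly)))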
PS to #24: I do agree that with some stipulations, then the CSI of the paper clip would have an objective, quantifiable value. That is, it would "exist" uniquely, as does the max girth of a rectangular box.
daveS
October 25, 2018, 05:05 AM PDT
EricMH, Yes, there are indeed two distinct issues here. Despite my comment about feasibility (which is really irrelevant), I'm mostly interested in whether things such as paper clips have CSI in them or whether it's part of the "map," as Bob O'H states.

In order to (in principle) compute CSI, I think at least we would have to specify what initial conditions are assumed in the probability calculation. That is, we would need to specify "X" in: P(paper clips spontaneously forming | X). And since the initial conditions are external to the paper clip, this probability depends on things external to the paper clip.

I agree that the shape of the paper clip is easily and concisely described mathematically, and if the length of a description of the shape is all we're after, that quantity doesn't depend on external factors.
daveS
October 25, 2018, 04:46 AM PDT
@daveS, there are two concerns here that tend to get conflated.

First, whether we can measure something or not does not change the property itself. There could be a 10 pound rock on some distant planet we will never reach, and thus never be able to measure. But the rock is still 10 pounds. Same with CSI: it is an objective property of an object, regardless of whether we can measure it or not.

With the paperclip, we'd first determine whether metals have inherent properties that turn them into paperclip shapes. If not, then the probability of paperclips spontaneously forming out of metal would be very small. Additionally, the paperclip is a shape that can be concisely described. Thus, it has a lot of CSI.

Back with Shannon information, say the source is a uniform distribution over letters. Then LGCRUIB has the same Shannon information as ROSEBUD. However, the latter word is meaningful while the former is gibberish. This shows Shannon information is a measure of informational capacity and not of informational content.
EricMH
October 25, 2018, 04:08 AM PDT
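The LGCRUIB/ROSEBUD example above is easy to check numerically. A minimal sketch, assuming a source that draws uniformly from the 26 capital letters:

    from math import log2

    # Under a uniform source each letter has probability 1/26, so any
    # 7-letter string carries 7 * log2(26) = about 32.9 bits, meaningful or not.
    p = 1 / 26
    for word in ["ROSEBUD", "LGCRUIB"]:
        bits = sum(-log2(p) for _ in word)
        print(word, round(bits, 2), "bits")
    # Both lines print the same value: the measure tracks capacity, not meaning.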
EricMH, I agree with you to some extent, at least for certain statistics. For example, the max girth of a rectangular box is the sum of its length and girth, and that is objective and quantifiable. Whether it exists "out there" in the box rather than in our minds, I'll have to think about.

Now for CSI, obviously that would be a bit more difficult to calculate. Consider finding the CSI of a 1 3/8-inch smooth paper clip. You would need to find the probability of it arising, I guess, under certain conditions*, as well as an estimate of its Kolmogorov complexity. Is that feasible?

*Edit: I guess that means it depends partly on things outside the actual paper clip: the laws of physics, the history of the universe, etc.?
daveS
October 24, 2018, 03:13 PM PDT
@daveS, I think more important than arguing about definitions is looking at the mathematics.

Is there a way to quantify order, or is one person's order another person's disorder? This question is answered by Kolmogorov complexity, K(X). Orderly objects are highly compressible; disorderly objects are incompressible.

A second question is: what about things like crystals that are highly ordered? A crystal is highly ordered, but because it is naturally produced that way, the object has high probability. So, we want an object that has low probability (i.e. not a crystal).

Thus, we arrive at the quantity we want: an object that has a low probability and low Kolmogorov complexity. This is CSI. We measure it by CSI(X) = -log p(X) - K(X). CSI gets a bit more sophisticated, but the above is the basic idea. The main takeaway is CSI is an objective, quantifiable property of the object.
EricMH
October 24, 2018, 02:42 PM PDT
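As a rough numerical illustration of CSI(X) = -log p(X) - K(X) (a sketch only: K(X) is uncomputable, so zlib's compressed length stands in as an upper bound, and a uniform byte source is assumed for p):

    import os
    import zlib
    from math import log2

    def toy_csi(x: bytes) -> float:
        # -log2 p(X) under a uniform source over bytes: 8 bits per byte.
        neg_log_p = 8 * len(x)
        # Approximate K(X) from above by zlib's compressed length in bits.
        k_bits = 8 * len(zlib.compress(x))
        return neg_log_p - k_bits

    ordered = b"AB" * 32      # low probability AND highly compressible: high score
    random_ = os.urandom(64)  # low probability but incompressible: score near zero or below

    print("ordered:", toy_csi(ordered))
    print("random :", toy_csi(random_))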
@Mung, from Shannon's paper:

If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.
EricMH
October 24, 2018, 02:33 PM PDT
Bob O'H @ 8, it is a semantic argument masquerading as a logical argument. The argument goes like this: ID claims there is a quantifiable difference between the output of a random coin and the output of a designing intelligence, and we call this difference information (CSI). When the infotropy crowd sees the ID argument, they say:

1. Entropy is information.
2. Flipping a coin generates a lot of entropy.
3. Therefore, flipping a coin generates a lot of information.
4. Therefore, all the information in DNA can be generated by flipping a coin.

Both sides are talking about apples and oranges, but the infotropy crowd declares victory by renaming apples oranges.
EricMH
October 24, 2018, 02:16 PM PDT
'Then looking inside the cell, at DNA, and claiming that it just "decided" to develop a mechanism to copy its data and store immaterial information to construct and build, breaking all laws of entropy and the law of common damn sense! Also it happened to "decide" to make this elaborate and elegant system to copy its DNA and pull half of it to either side of itself, and then divide?? Why in the world would dumb matter behave in this way? The answer is simple: it can't, and it would not. Some people still think given enough time ANYTHING CAN HAPPEN - two big problems here - anything keeps happening over a limited amount of time in just the right way to create and maintain life? AND the fact that ONLY things that are possible through chemistry and physics can happen over time. It is completely false to say that given the laws of our universe, and enough time, anything can happen - NO, only things that are allowed by physical laws can happen given enough time - not anything. And organic matter is so fragile to begin with, I knew they were off their rocker.' - Tom Robbins

More LOL, surreal, knockabout humour from the atheists, eh, Tom? How do they get away with it? I remember reading that Pauli was appalled that they hadn't even performed any kind of calculations in support of this barmy 'promissory' twaddle. I believe I read that just to perform some fiddling little 'transmogrification' of matter by a fortuitous interaction of protein 'gizmeters' (sorry, can't remember the name) would require greater odds against it occurring than there are SUBATOMIC particles in the known universe. I suppose Pauli would be familiar enough with that tribe to not be shocked to learn that they still hadn't produced any kind of supporting calculations!
Axel
October 24, 2018, 01:17 PM PDT
If intellectual integrity were a necessary criterion - perhaps even an acceptable criterion - for teaching most tertiary studies, surely the academic establishment would be decimated. But isn't it curious, however understandable, how 'light', as well as being a physical reality, is used metaphorically for intellectual illumination, and widely identified with the divine? Surely, most of all in Judaeo-Christianity; bearing in mind, particularly, that via their Baptism, Christians are, at least embryonically, already inducted into the Mystical Body of Christ, the True Vine. Moreover, we know that, while light photons interact with space-time, their proper frame of reference has to be outside of space-time and absolute in relation to it. I believe I read here that the origin of all sub-atomic particles has been found to be external to space-time. Is that right?
Axel
October 24, 2018, 01:07 PM PDT
daveS:
On the other hand, couldn’t some of these information statistics tell me whether the gibberish at least had some sort of structure?
Perhaps some. But in general the measure defined by Shannon is a measure across a probability distribution. If you model the message as a distribution you can measure the "amount of information" associated with that distribution. And that has to do with the frequencies or probabilities of symbols and not with the actual content of the message. So "information content" is an unfortunate misnomer.
Mung
October 24, 2018, 12:15 PM PDT
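Mung's point that the measure depends on symbol frequencies rather than content can be made concrete with a short sketch (an editorial illustration): scrambling a message destroys its meaning but leaves the frequency-based entropy estimate unchanged.

    import random
    from collections import Counter
    from math import log2

    def empirical_entropy(msg: str) -> float:
        # Per-symbol entropy estimated from the message's own symbol frequencies.
        counts = Counter(msg)
        n = len(msg)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    text = "information theory"
    chars = list(text)
    random.shuffle(chars)            # same letters, meaning destroyed
    gibberish = "".join(chars)

    print(text, round(empirical_entropy(text), 3))
    print(gibberish, round(empirical_entropy(gibberish), 3))  # identical value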
Mung,
And I guess it depends on whether you prefer clarity to nonsense. In one view you can receive a “message” that can be total gibberish and they will tell you they can measure the “information content” of the message. Simple intuition tells you that is nonsensical.
I would agree (based only on my intuition) that the measurement of the "information content" would not tell me anything about the meaning (or lack of meaning in this case) in the message. On the other hand, couldn't some of these information statistics tell me whether the gibberish at least had some sort of structure? Perhaps the "messages" we're looking at are more like "signs", where there is no text being transmitted, but rather just improbable structures, such as crop circles or something.
daveS
October 24, 2018, 07:30 AM PDT
EricMH:
Shannon proposes -log p as a measure of information
1.) Where does he do that? 2.) -log p of what? Can you help me decipher this? Swamidass:
Do you realize that -log p = H (information equals entropy)? This is why, for example, people use the formula p = 2^{-H}: the probability of a message is the negative exponential of its entropy.
Thanks.
Mung
October 24, 2018, 06:36 AM PDT
daveS
Is there some way to decide which view (if either) is correct?
It's not something that can be settled by science. :) And I guess it depends on whether you prefer clarity to nonsense. In one view you can receive a "message" that can be total gibberish and they will tell you they can measure the "information content" of the message. Simple intuition tells you that is nonsensical. Shannon and Weaver were quite explicit about what this measure means, but the so-called experts ignore that. Presumably because they know better than Shannon.
Mung
October 24, 2018, 06:31 AM PDT
Bob O'H,
From what you write, you think that information is something “out there”, i.e. independent of our science and observations. But I look on information as something purely a part of human science. It’s a statistic we use to summarise a lot of complexity. It’s convenient and useful, but is purely part of the map, not the landscape.
I think this clarifies some things for me. Is there some way to decide which view (if either) is correct?
daveS
October 24, 2018, 05:56 AM PDT
Interesting discussion.
PaoloV
October 24, 2018, 05:09 AM PDT
EricMH @ 6 And of course you're wrong too: entropy is not a property of information, but of data. For instance, the idea of 'a circle' has no entropy, but the hex representation of its character string does.
Nonlin.org
October 23, 2018, 05:48 PM PDT
EricMH @ 6 Indeed, confusing entropy with information is very bad, especially when you claim to be an expert. Even worse is that these "experts" don't know the difference between information, data, and media.

BTW, I also disagree with Robert Marks' claim that a picture of Mt. Rushmore contains more information than one of Mt. Fuji. That's because information is abstract and user dependent. Does a circle have more, less, or the same information as a square?

A lot of confusion comes from Shannon himself calling his theory "of information" when in fact he only cared about data integrity, not information.
Nonlin.org
October 23, 2018, 02:02 PM PDT
EricMH @ 6 - I wonder if that's not a mis-reading, but a different interpretation. From what you write, you think that information is something "out there", i.e. independent of our science and observations. But I look on information as something purely a part of human science. It's a statistic we use to summarise a lot of complexity. It's convenient and useful, but is purely part of the map, not the landscape. I don't know if Shallit and Swamidass think the same thing, but if they do, that might suggest a more fundamental difference of opinion.
Bob O'H
October 23, 2018, 01:48 PM PDT
I completely understand the more intellectual route to Christianity. I always believed in some kind of creator and I used the word God at an early age. My Mom at the time was a Unitarian, and my father an obvious agnostic, who would only consider an impersonal force that could have a type of intelligence that got things going - but he was doubtful of this. But my father's thinking was very different than mine. I simply could not, and would not, believe that suddenly I am here in this moment, having these experiences, all due to some kind of dumb luck, and for no purpose.

We studied both neo-Darwinism and Gould's punctuated equilibrium, but even though I understood the material, understood what the claims were, I knew instinctively that what I was being taught was not science, but more of a terribly oversimplified story, much like you would expect from a creation story of some isolated tribe, only with a lot of words that sounded scientific. I noticed right off the bat that the word Evolution was used way too much to explain the theory of evolution, as in "Evolutionary processes selected for this or that", and that somehow Evolution was creative, and it was always talked about as if it could think ahead, even though the theory does not allow it.

I recall reading a story by Richard Dawkins about how a squirrel became a flying squirrel - he was a decent writer, but in his description the flap of skin under the arm and shoulder mutated to be larger and more flabby, so it could now survive a fall from a tree from height X; then it grew even larger due to more lucky mutations, so it could survive a fall from height X+1, and so mated more often!! I was like, you have to be joking - this from the great Richard Dawkins!!?? I honestly think you have to be an idiot to believe this is how life diversified.

I went on to learn about quantum physics, and all the attempts to discredit the observer effect have failed, so it became very clear to me that MIND is primary, matter/space and time were secondary. Then looking inside the cell, at DNA, and claiming that it just "decided" to develop a mechanism to copy its data and store immaterial information to construct and build, breaking all laws of entropy and the law of common damn sense! Also it happened to "decide" to make this elaborate and elegant system to copy its DNA and pull half of it to either side of itself, and then divide?? Why in the world would dumb matter behave in this way? The answer is simple: it can't, and it would not.

Some people still think given enough time ANYTHING CAN HAPPEN - two big problems here - anything keeps happening over a limited amount of time in just the right way to create and maintain life? AND the fact that ONLY things that are possible through chemistry and physics can happen over time. It is completely false to say that given the laws of our universe, and enough time, anything can happen - NO, only things that are allowed by physical laws can happen given enough time - not anything. And organic matter is so fragile to begin with, I knew they were off their rocker.

So bottom line, this, plus the historical facts of Christianity + reading the words of Jesus, made me get on my knees and say I believe in you, I am your child, forgive me. And it makes one stronger by letting go of control, and serving, not weaker... blah, blah.
Tom Robbins
October 23, 2018, 01:12 PM PDT
Shallit says entropy is information. Same with Swamidass. This appears to be a misreading of information theory to me.

Shannon proposes -log p as a measure of information, and E[-log p] as the expected information from a random variable, which Shannon calls entropy or uncertainty. But a measurement is not the thing being measured. For example, a pound is a measure of fruit, but a pound is not a fruit.

So, just because a source has high entropy does not mean it has a large amount of information. It'd be like saying that because my bag can hold 10 pounds, it has 10 pounds of fruit. The capacity does not dictate the content.
EricMH
October 23, 2018, 10:31 AM PDT
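The distinction EricMH draws between -log p and E[-log p] fits in a few lines of Python (a sketch with a toy source): surprisal attaches to a single outcome, while entropy is its expectation over the whole source.

    from math import log2

    dist = {"heads": 0.9, "tails": 0.1}  # a biased coin as the source

    # Surprisal -log2 p is a per-outcome quantity...
    surprisal = {x: -log2(p) for x, p in dist.items()}
    # ...while entropy H = E[-log2 p] is a property of the source itself.
    entropy = sum(p * surprisal[x] for x, p in dist.items())

    print(surprisal)           # heads ~0.152 bits, tails ~3.322 bits
    print(round(entropy, 3))   # 0.469 bits for the source as a whole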
EricMH, Shallit has no answers: https://www.blogger.com/comment.g?blogID=20067416&postID=3404749992796650535&page=1&token=1540315804321 ... and neither do Joe Felsenstein and Joshua Swamidass. BTW, did you read these? http://nonlin.org/biological-information/ http://nonlin.org/intelligent-design/
Nonlin.org
October 23, 2018, 10:10 AM PDT
Thanks for that, EricMH.
daveS
October 23, 2018, 06:13 AM PDT
If Christianity required math I'd be going to hell!
Mung
October 23, 2018, 06:13 AM PDT
@daveS, it's hard to say exactly what route I would have gone. But ID did have a big influence on the direction I did go. However, I noticed the things that many other Christians found pretty convincing were not as effective for me. Definitely none of the emotional appeals worked for me, nor the good lives lived by Christians. The accounts of miracles were a bit more convincing, and I attended Pentecostal churches until I became Catholic, and heard a number of fascinating stories from people I trust, though I never directly witnessed a miracle. I was willing to reject all of that if it was false. I really wanted something absolute like mathematics and testable like science. The big draw of ID is that it says religious claims fit into those categories.
EricMH
October 23, 2018, 05:49 AM PDT
EricMH,
When I first began to look into intelligent design (ID) theory while I was considering becoming an atheist, I was struck by Bill Dembski’s claim that ID could be demonstrated mathematically through information theory.
Slightly tangential, but do you think that if you hadn't studied information theory (and perhaps related fields), you would likely be an atheist today? I'm asking because I believe the majority of Christians I know did not get there via mathematics and the sciences, but rather through studying the Bible, history, perhaps reflecting on sermons they had heard in church, witnessing how Christians live their lives, experiencing miracles, etc.
daveS
October 23, 2018, 05:34 AM PDT