Uncommon Descent | Serving The Intelligent Design Community

Does information theory support design in nature?


Eric Holloway argues at Mind Matters that design theorist William Dembski makes a convincing case, using accepted information theory principles relevant to computer science:

When I first began to look into intelligent design (ID) theory while I was considering becoming an atheist, I was struck by Bill Dembski’s claim that ID could be demonstrated mathematically through information theory. A number of authors who were experts in computer science and information theory disagreed with Dembski’s argument. They offered two criticisms: that he did not provide enough details to make the argument coherent and that he was making claims that were at odds with established information theory.

In online discussions, I pressed a number of them, including Jeffrey Shallit, Tom English, Joe Felsenstein, and Joshua Swamidass. I also read a number of their articles. But I have not been able to discover a precise reason why they think Dembski is wrong. Ironically, they actually tend to agree with Dembski when the topic lies within their respective realms of expertise. For example, in his rebuttal Shallit considered an idea which is very similar to the ID concept of “algorithmic specified complexity”. The critics tended to pounce when addressing Dembski’s claims outside their realms of expertise.

To better understand intelligent design’s relationship to information theory and thus get to the root of the controversy, I spent two and a half years studying information theory and associated topics during PhD studies with one of Dembski’s co-authors, Robert Marks. I expected to get some clarity on the theorems that would contradict Dembski’s argument. Instead, I found the opposite.

Intelligent design theory is sometimes said to lack any practical application. One straightforward application is that, because intelligence can create information and computation cannot, human interaction will improve computational performance.
More.

Also: at Mind Matters:

Would Google be happier if America were run more like China? This might be a good time to ask. A leaked internal discussion document, the “Cultural Context Report” (March 2018), admits a “shift toward censorship.” It characterizes free speech as a “utopian narrative,” pointing out that “As the tech companies have grown more dominant on the global stage, their intrinsically American values have come into conflict with some of the values and norms of other countries.”

Facebook’s old motto was “Move fast and break things.” With the current advertising scandal, it might be breaking itself. A tech consultant sums up the problem: “Sadly, Facebook didn’t realize that moving fast can break things…”

AI computer chips made simple Jonathan Bartlett: The artificial intelligence chips that run your computer are not especially difficult to understand. Increasingly, companies are integrating “AI chips” into their hardware products. What are these things, what do they do that is so special, and how are they being used?

The $60 billion-dollar medical data market is coming under scrutiny As a patient, you do not own the data and are not as anonymous as you think. Data management companies can come to know a great deal about you; they just don’t know your name—unless, of course, there is a breach of some kind. Time Magazine reported in 2017 that “Researchers have already re-identified people from anonymized profiles from hospital exit records, lists of Netflix customers, AOL online searchers, even GPS data of New York City taxi rides.” One would expect detailed medical data to be even more revelatory.

George Gilder explains what’s wrong with “Google Marxism”
In discussion with Mark Levin, host of Life, Liberty & Levin, on Fox TV: Marx’s great error, his real mistake, was to imagine that the industrial revolution of the 19th century, all those railways and “dark, satanic mills” and factories and turbines and the beginning of electricity, represented the final human achievement in productivity, so that in the future what would matter is not the creation of wealth but the redistribution of wealth.

Do we just imagine design in nature? Or is seeing design fundamental to discovering and using nature’s secrets? Michael Egnor reflects, in light of the 2018 Nobel Prize in Chemistry, on the way the prize has so often gone to those who intuit or impose design or seek the purpose of things.

Comments
@EugeneS, check out the "entropy is not information" article by Dr. Schneider.

EricMH
November 8, 2018, 07:18 AM PDT
Mung: You say: "(Shannon) entropy measures the probability distribution". Please accept a simple thought that comes from my long practice of statistics: I don't think that we can "measure" a probability distribution. A probability distribution is a mathematical object. I think what you mean is that Shannon's entropy uses a probability distribution to make measurements on empirical objects. Please correct me if I am wrong about your thought.

gpuccio
November 8, 2018, 07:16 AM PDT
Mung: "I would, but I couldn’t find any." And, coming from you, that's really flattering! :) Indeed, that request was made to Antonin. With you, I would probably be a little scared! :)gpuccio
November 8, 2018, 07:11 AM PDT
EricMH @ 464. Well said! Sadly, it's a struggle to get some information theory experts to first cop to the fact that (Shannon) entropy measures the probability distribution.

Mung
November 8, 2018, 07:09 AM PDT
Mung: Just a humble suggestion. Maybe we should avoid using the word "information", at least without further clarifications.

If I say "the potential total information content of a sequence", the concept is clear: the sequence space has a^n points, where a is the number of possible symbols and n is the length. That is clear.

If I say "the Kolmogorov complexity of a sequence", that is well defined too: it's the shortest "program" that can generate the sequence in some environment.

If I say "the functional information of a sequence for function x", explicitly defined again, it's clear: it's derived from the ratio between the target space of all sequences that implement that function and the search space of all possible sequences (always in a well defined environment or system).

But if I say "the information in this object or sequence", what does it mean? It's better to be precise, rather than looking for some universal definition of information that does not exist.

gpuccio
November 8, 2018, 07:08 AM PDT
gpuccio:
Why don’t you just try to “point out the flaws” in my approach yourself?
I would, but I couldn't find any. :D

Mung
November 8, 2018, 07:02 AM PDT
Antonin: The idea of a pdf is not bad. Maybe I could work at it, let's see.

However, I cannot believe that, as EugeneS says, people, including JF, still have problems understanding the idea of specified information and functional information. It's not so difficult. Anyone can understand that a machine, or a software, or a text, or a protein, needs a minimal level of bit complexity to do what it does. It's such a simple and intuitive idea that it is a real mystery how people try to deny it.

I have not yet read JF's article about Dembski. As I am not a mathematician, I try not to discuss details of general theories about information, information conservation, and so on. As I have said many times, I have problems with the famous Dembski paper about specification. Maybe it's just my limited mathematical proficiency. But the basic idea of specified information, and in particular of functionally specified information, is simple, beautiful and universal. I have only tried to express it in empirical terms that can easily be applied to biology.

JF has explicitly criticized my arguments as if my idea that functional information must be computed for one explicitly defined function were some weird addition to ID theory. But that's not true. All examples of functional information discussed in biological settings are of that kind. How can JF insist that many simple mutations that give reproductive advantage, happening "anywhere in the genome", add up to generate 500 bits of functional information? What argument is this? Those are independent, simple events that can be naturally selected if they increase "fitness", exactly as the thief can gain from different safes with simple keys. That has nothing to do with complex functional information.

The alpha and beta chains of ATP synthase are complex functional information: a single object (indeed, just part of it) that requires hundreds of specific AAs to work. That's the big safe. That's what cannot be reached by RV + NS, because:

a) RV cannot find anything that needs more than 500 bits of functional complexity to implement its function.

b) There is no naturally selectable path to a complex function which cannot exist without at least 500 bits of information already there.

The fact that complex proteins require a lot of specific information to work is absolutely incontrovertible: even those biologists who are honest enough to recognize the importance of functional information admit it. See Szostak. Even JF admits that the concept of functional information is important, even if immediately after he demonstrates that he has not understood it at all.

My application of a simple procedure to compute FI in proteins is really simple too. The idea is simple and powerful. We know there is FI in proteins. How can we approximate its value? The answer is: functional conservation of the sequence through long evolutionary times. It's an answer directly derived from the only part of the neo-darwinist paradigm that works: the idea of negative NS, aka purifying selection. The idea is not mine, of course. It's there in all biological thought. Durston was probably the first to apply it in a definite bioinformatic procedure. My idea is essentially the same, even if the procedure is somewhat different, as I have explained in detail many times.

gpuccio
November 8, 2018, 07:01 AM PDT
EugeneS:
The standard definition of information (due to Shannon)
I have a lot of respect for you and I agree that ID supporters should be extra careful when talking about information, but I do not see a definition of information on that page. I see definitions of various entropies. Surely there is a difference between information and entropy. If not, why not? Thanks

Mung
November 8, 2018, 07:00 AM PDT
Earth to Antonin: There isn't any "unconfusing him". Joe F doesn't want to understand ID. He relishes erecting strawman after strawman and then knocking them down. Natural selection is impotent, and so are all of Joe F's arguments against ID.

ET
November 8, 2018, 06:31 AM PDT
Eric: The standard definition of information (due to Shannon): https://en.wikipedia.org/wiki/Information_theory

Natural processes CAN create information as per that definition. To deny that is wrong and makes various people think that ID is moot. BUT (and it is a big 'but') the amount of functional information (defined differently) that a natural process can generate is bounded in practice. It is just a fact of life that evo supporters cannot come to terms with.

Various notable figures in biological physics, like Pattee, have acknowledged at different times that information in Shannon's sense is NOT suitable for a description of biological processes. The juggling magic tricks with Shannon's information that folks like J Shallit are in favor of miss the point entirely. An allusion to this is perhaps the C-value paradox.

EugeneS
November 8, 2018, 06:29 AM PDT
JF is completely confused about ID and functional information here.
Unconfuse him then! The problem seems to be that definitions vary depending on who is defining them. JF at PT (after listing a number of critiques of Dembski's various stances on CSI):
So only Dembski’s first argument, the one using the Law of Conservation of Complex Specified Information, even tried to show that there was some information-based Law that prevented natural selection from putting adaptive information into the genome. And, as we’ve seen, that law does not work to do that. And Holloway seems to have missed that. As he missed all these other refutations of Dembski.
Antonin
November 8, 2018, 05:55 AM PDT
gpuccio:
I would like to clarify that I have never “tried to get someone to call Joe Felsenstein’s attention”. I have simply posted my old argument, specifying that it was a criticism of a statement by Joe Felsenstein, and that, as far as I am aware, he has never answered that particular point.
Sure. It was my suggestion to pass on a heads-up. I don't know if Mung or someone else did so, or whether Professor Felsenstein independently noticed comments when preparing his article at The Panda's Thumb. In all events, your scenario should now get an airing. My concern regarding your (shall-we-say) evolution-critical articles here at UD, involving number crunching on raw sequence data, is that it doesn't bear on reality. The difficulty is that you have spread your argument over several opening posts and many comments, so it's hard to know what your best and most concise exposition is. Not wanting to create work for you, but how much effort would it take to cut and paste the salient points into an abstract or summary with links to the original material? Apologies if you already did this and I missed it. A downloadable PDF would be marvelous! You are far from a BA77 or a KF, but a little editing, separating fact from opinion, would be helpful.

Antonin
November 8, 2018, 05:47 AM PDT
Antonin (and Joe Felsenstein): By the way, just to be precise, I would like to clarify that I have never "tried to get someone to call Joe Felsenstein's attention". I have simply posted my old argument, specifying that it was a criticism of a statement by Joe Felsenstein, and that, as far as I am aware, he has never answered that particular point. That's all. Others, including you, have "tried to get someone to call JF". Not me. But, of course, I will be happy if he answers my argument. :) By the way, JF's statement was made at TSZ, and I quote it in my initial post about that issue, my comment #828 in the Ubiquitin thread: https://uncommondescent.com/intelligent-design/the-ubiquitin-system-functional-complexity-and-semiosis-joined-together/#comment-656019 This is the original comment by JF, from TSZ:
1. The 500 bits criterion, which originated with Dembski, was gpuccio’s criterion for “complex”, as I demonstrated in clear quotes from gpuccio in my previous comment.
2. That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur.
3. Now, in both gpuccio’s and your comments, the requirement is added that all this occur in one protein, in one change, and that it be “new and original function”.
4. That was not a part of the 500-bit criterion that gpuccio firmly declares to be the foundation of ID.
5. There was supposed to be some reason why a 500 bit increase in functional information was not attainable by natural selection. Without any requirement that it involve “new and original function”.
So what is the “foundational” requirement? A 500-bit increase in functional information, taken over multiple changes, possibly throughout the genome? Or attainment of a “new and original function”? In the latter case who judges newness and originality of function?
JF is completely confused about ID and functional information here. I have only tried to explain, by the thief example, that complex information is always referred to one function, and that it is completely different from "changes anywhere in the genome, as long as they contribute to the fitness". IOWs, complex functional information is the single safe with the complex key; "changes anywhere in the genome, as long as they contribute to the fitness" are the many safes with simple keys. If JF thinks that the 500 bits threshold stated by Dembski and used by me refers to the sum of many independent changes, he has understood nothing of ID theory.

gpuccio
November 8, 2018, 05:29 AM PDT
Antonin: I mentioned Joe Felsenstein about my thief scenario because that scenario was a criticism of a statement made by him. You, on the other hand, have said here, repeating a statement you had already made in another thread: "Professor Felsenstein is a geneticist, specializing in population genetics, a mathematical approach to allele distribution. Of course he’s going to be able to more eloquently point out the flaws in your approach." Why don't you just try to "point out the flaws" in my approach yourself? OK, however, let's wait for Joe Felsenstein's answer. I am really interested in it.

gpuccio
November 8, 2018, 05:00 AM PDT
gpuccio:
You have already exploited poor Joe enough in this discussion!
How so? You mentioned him in connection with your thief scenario. Anyway, it appears that Professor Felsenstein has noticed this thread. He has posted at The Skeptical Zone and The Panda's Thumb. In his post at The Skeptical Zone he says
gpuccio also comments there trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made in the discussion of that earlier. I will try to post a response on that here soon, separate from this thread.
link

Antonin
November 8, 2018, 12:11 AM PDT
EricMH at #464: Very good points! :)

I believe that free will is one of the components that are exclusive to conscious experiences, and that explain CSI, especially in its functional form. The reasoning is simple:

a) We have desires (a definite experience of our inner space of feeling).
b) We understand meanings: that helps us understand how our desires can be satisfied.
c) We have free will: we can initiate actions that are meant to satisfy our desires.

One of the ways that such a sequence acts is in designing objects. As a result of that sequence of inner experiences, we create complex functional information in our consciousness, applying meaning to our desires, and then by our free will we implement that functional information into material objects, as a specific configuration meant to be functional to our desire. To do that, we use those rare high entropy and high contingency configurations that have the function we have envisioned.

gpuccio
November 7, 2018, 03:04 PM PDT
@Mung & @KF the point I'm getting at is entropy measures the probability distribution, the probability distribution distributes probabilities, and probabilities are a count of options, and the options are possibilities. So, collapsing the chain shows entropy is just a measure of possibility, as these info theory quotes say. Pro evolution people want to say entropy is the same as information so that a random process can be said to generate information. However, entropy is just a measure of possible information, and so it doesn't mean that everything produced by a high entropy source is actually information. This is a similar problem that people have with the concept of free will. Free will must be undetermined, but they equate undetermined and random, and randomness does not seem to be the same as free will. Thus, they conclude the will must be determined. But, there is a third option, where an object has high contingency, but is not random. This is what CSI measures. Thus, the acts of free will also have high contingency like a random process, but they are not random because they correlate with an independent target. So, if we go back to our high entropy source, its high entropy means it generates objects with high contingency. However, since high contingency is not coextensive with random, not all objects generated by the high entropy source necessarily fall in the same bucket. Some high contingency objects can fall in the random bucket, and other high contingency objects can fall in the CSI bucket, but the high entropy source does not predetermine which bucket the events fall into. The high entropy just means the source generates objects with high contingency, which is a necessary but not sufficient condition for CSI. Therefore, entropy is not the same as information, but information requires entropy.EricMH
November 7, 2018, 02:52 PM PDT
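[EricMH's bucket picture can be made concrete with a toy calculation. The sketch below is our own construction, using compression length as a crude stand-in for the shortest-description term in algorithmic specified complexity: two strings drawn from the same maximum-entropy uniform source are equally probable, yet only the one matching an independent pattern scores positive.]

```python
import random
import zlib

def asc_bits(bit_string):
    """Toy algorithmic specified complexity: -log2 P(x) minus a
    compression-based stand-in for the shortest description of x.
    P(x) = 2^-n under a uniform (maximum entropy) source of n bits."""
    self_info = len(bit_string)                           # -log2(2^-n) = n
    description = 8 * len(zlib.compress(bit_string.encode()))
    return self_info - description

random.seed(0)
n = 4000
random_string = "".join(random.choice("01") for _ in range(n))
patterned_string = "01" * (n // 2)   # equally probable under the source

print(f"random   : {asc_bits(random_string):7.0f} bits")     # near zero or below
print(f"patterned: {asc_bits(patterned_string):7.0f} bits")  # large positive
```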
Antonin: The truth is very simple: there is no chemistry that can explain the mapping of AAs to codons of three nucleotides (IOWs, the genetic code). The mapping is implemented by the 20 aminoacyl-tRNA synthetases. Those 20 proteins are not a law of chemistry. They are specific machines that implement a task by their specific configuration. This is how the genetic code works, in all living beings. There is no explanation based on chemistry. The system works by a symbolic mapping, one that is arbitrary and has nothing to do with any laws of chemistry. If you disagree, please explain why. I am tired of your half statements and innuendos.

gpuccio
November 7, 2018, 02:02 PM PDT
Antonin: From "your approach is not persuasive" to "Good grief!", your discussion is certainly gaining intensity, if not clarity and detail. Maybe you could explain to us fools what laws of chemistry explain the mapping of AAs to nucleotide codons in the translation system. And please, don't say that Joe Felsenstein can certainly do that better than you can. You have already exploited poor Joe enough in this discussion! :)

gpuccio
November 7, 2018, 01:58 PM PDT
Antonin, it seems you have homework to do to see what is actually supported. You at least acknowledge the basic fact of the folded tRNA, where anti-codons and CCA-tips are at opposite ends. The anti-codon does not determine the loading of the tRNA; it does not interact by forming bonds with the AA. The universality of the CCA-tip means that the chemistry of bonding to the AA does not differ across the 20 tRNAs of main interest. It is not a non sequitur to note this and its import: the same CCA-tip can bond to 20 diverse AAs (and, IIRC, artificial ones have been added). The isolation of protein fold domains in AA sequence space, the discovery of thousands of such domains, etc. are facts on the ground. The functionality of a protein depends on how it folds, fits, etc., and that is dependent on the codon sequence in mRNA, which is highly contingent; just ponder how, for eukaryotes, there is post-transcription editing. And so forth. Obviously, not just any AA sequence will function as a protein, and the distances in sequence space validate that the fold domains are isolated. KF

kairosfocus
November 7, 2018, 01:51 PM PDT
EMH, the emphasis falls on the distribution aspect: probabilities are what the distribution distributes. As for defining a probability, that is an exercise in itself in mathematics and philosophy.

In a situation where there are contingencies such that, under similar circumstances, diverse outcomes are possible, we may then ask whether there is any reason some are more or less likely; this is often adjudged on relative frequencies as observed or perceived. If not, we apply indifference, yielding the sort of 1/n ratio for n possibilities. Where there is reason to view some outcomes as more or less likely, the likelihoods are re-balanced to reflect that, while overall they still sum to 1. Then we can add the endpoints: something that can never happen has probability 0, and something that must happen has probability 1. Onward, we move to the continuous random variable, where finite probability values obtain for ranges within the span of possibilities. Obviously, there is more, but this is enough to see how we are setting up an abstract model-world with structural and quantitative features. KF

kairosfocus
November 7, 2018, 01:39 PM PDT
That, that functionality depends on picking chaining patterns which come from thousands of deeply isolated domains in AA sequence space?
No. Unsupported assertion.

Antonin
November 7, 2018, 01:28 PM PDT
KF:
Antonin, can you at least acknowledge that the CCA- universal joint that attaches the AA is at the opposite end of the anticodon triplet that locks to the mRNA codon in the ribosome as proteins are made?
Yes.
That, therefore the AA put on a given tRNA is not chemically determined?
No. (non sequitur)

Antonin
November 7, 2018, 01:25 PM PDT
@KF, defining a probability distribution in terms of probabilities seems circular. What is a probability? I think it is 1 / (number of options), like marbles in a bag.

EricMH
November 7, 2018, 01:15 PM PDT
Mung, Nothing is fatal to something that is not alive. :)

jawa
November 7, 2018, 12:22 PM PDT
Antonin:
This is the observed fact of the matter but not fatal to evolutionary theory
Of course not. It's what is required for evolution to occur in the first place. It's a prerequisite.

Mung
November 7, 2018, 11:32 AM PDT
Antonin, can you at least acknowledge that the CCA- universal joint that attaches the AA is at the opposite end of the anticodon triplet that locks to the mRNA codon in the ribosome as proteins are made?

That, therefore the AA put on a given tRNA is not chemically determined?

That the codon-anticodon match is physically separated from the action that clicks an AA to the emerging protein?

That the tRNAs are loaded by enzymes that sense overall conformation, not simply the anticodon?

Thus, that the chaining of a protein is based on the genetic code and the high contingency along a D/RNA chain, and that protein functionality is separated from the high contingency of chaining possibilities?

That, that functionality depends on picking chaining patterns which come from thousands of deeply isolated domains in AA sequence space?

And more? KF

kairosfocus
November 7, 2018, 11:20 AM PDT
During translation, the codon-to-anticodon association is chemically independent of anticodon-to-amino acid association.
What I suspect you are objecting to is the evolvability of the current process from simpler precursors. What we can observe is how living organisms currently surviving on Earth manage their biochemistry. We can only surmise what the biochemistry was like in the earliest organisms that could both grow and replicate. It's an interesting fact that aminoacyl-tRNA synthetases have an arbitrary associative link between codon and amino acid residue. This is the observed fact of the matter, but it is not fatal to evolutionary theory.
I think we all now see that such facts do not interest you...
You are mistaken. The question of how the basic biochemical pathways essential to life evolved from simpler precursors is challenging with no direct evidence to work with. It's an active field for hypothesis construction.
...and that you will not respond to them...
Insofar as I am able. I'm a layman with regard to biochemistry and proposed evolutionary pathways (a simpler doublet code, a smaller suite of amino acids, RNA as precursor for both heritable storage and catalysis), but I try to keep myself informed on progress and new ideas.
...but they remain just the same.
Do you, as a layman, keep up with progress in the field? What do you consider a problem for evolution with respect to the biochemistry of the chain of living organisms from the first life to today? Michael Behe is a biochemist sympathetic to "Design". Have you run your ideas past him?

Antonin
November 7, 2018, 09:47 AM PDT
EMH and Mung, a probability distribution for some variable quantity x indicates the range of values and the relative likelihood that the variable will take particular values. A fair six-sided die will have one of {1, 2 . . . 6} uppermost, and each is equiprobable, i.e. "flat." If the die is loaded so that, say, it now reads 6 90% of the time and the other five values 2% each, it will now have a J-distribution (or perhaps reverse-L is a better description). If the variable follows the Gaussian pattern, the relative frequency of its possible values follows a bell curve. The Quincunx/Galton Board model illustrates how: https://www.youtube.com/watch?v=AUSKTk9ENzg KF

kairosfocus
November 7, 2018, 09:44 AM PDT
@EricMH, I received Kolmogorov's book Foundations of the Theory of Probability yesterday and was hoping to find an answer in it, but I had difficulty reading the notation he uses. :) Have you seen this book? Information and Randomness: An Algorithmic Perspective

Mung
November 7, 2018, 09:25 AM PDT