The concept of information is central to intelligent design. In previous discussions, we have examined the basic concept of information, we have considered the question of when information arises, and we have briefly dipped our toes into the waters of Shannon information. In the present post, I put forward an additional discussion regarding the latter, both so that the resource is out there front and center and also to counter some of the ambiguity and potential confusion surrounding the Shannon metric.
As I have previously suggested, much of the confusion regarding “Shannon information” arises from the unfortunate twin facts that (i) the Shannon measurement has come to be referred to by the word “information,” and (ii) many people fail to distinguish between information about an object and information contained in or represented by an object. Unfortunately, due to linguistic inertia, there is little to be done about the former. Let us consider today an example that I hope will help address the latter.
At least one poster on these pages takes regular opportunity to remind us that information must, by definition, be meaningful – it must inform. This is reasonable, particularly as we consider the etymology of the word and its standard dictionary definitions, one of which is: “the act or fact of informing.”
Why, then, the occasional disagreement about whether Shannon information is “meaningful”?
The key is to distinguish between information about an object and information contained in or represented by an object.
A Little String Theory
Consider the following two strings:
String 1:
kqsifpsbeiiserglabetpoebarrspmsnagraytetfs
String 2: [photograph of a piece of white cotton string]
The first string is an essentially-random string of English letters. (Let’s leave aside for a moment the irrelevant question of whether anything is truly “random.” In addition, let’s assume that the string does not contain any kind of hidden code or message. Treat it for what it is intended to be: a random string of English letters.)
The second string is, well, a string. (Assume it is a real string, dear reader, not just a photograph – we’re dealing with an Internet blog; if we were in a classroom setting I would present students with a real physical string.)
There are a number of instructive similarities between these two strings. Let’s examine them in detail.
Information about a String
It is possible for us to examine a string and learn something about the string.
String of Letters
Regarding String 1 we can quickly determine some characteristics about the string and can make the following affirmative statements:
1. This string consists of forty-two English letters.
2. This string has only lower-case characters.
3. This string has no spaces, numerals, or special characters.
It is possible for us to determine the foregoing based on our understanding of English characters, and given certain parameters (for example, I have provided as a given in this case that we are dealing with English characters, rather than random squiggles on the page, etc.). It is also possible to generate these affirmative statements about the string because their creator has a framework within which to formulate them and to convey those three pieces of information, namely, the English language.
In addition to the above quickly-ascertainable characteristics of the string, we could think of additional characteristics if we were to try.
For example, let’s assume that some enterprising fellow (we’ll call him Shannon) were to come up with an algorithm that allowed us to determine how much information could – in theory – be contained in a string having those three characteristics: forty-two English letters, only lower-case characters, and no spaces, numerals, or special characters. Let’s even assume that Shannon’s algorithm required some additional given parameters in this particular case, such as the assumption that all possible letters occurred at least once, that each letter occurs with the relative frequency at which it appears in the string, and so forth. Shannon has also, purely as a convenience for discussing the results of his algorithm, given us a name for the unit of measurement resulting from his algorithm: the “bit.”
In sum, what Shannon has come up with is a series of parameters, a system for identifying and analyzing a particular characteristic of the string of letters. And within the confines of that system – given the parameters of that system and the particular algorithm put forward by Shannon – we can now plug in our string and create another affirmative statement about that characteristic of the string. In this case, we plug in the string, and Shannon’s algorithm spits out “one hundred sixty-eight bits.” As a result, based on Shannon’s system and based on our ability in the English language to describe characteristics of things, we can now write a fourth affirmative statement about how many bits are required to convey the string:
4. This string requires one hundred sixty-eight bits.
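For readers curious about the mechanics, here is a minimal sketch (in Python) of a Shannon-style calculation: it estimates bits per character from the letter frequencies observed in String 1 and multiplies by the string’s length. The exact figure depends on which parameters one stipulates; the point is simply that the procedure is entirely mechanical. This is an illustrative sketch, not necessarily the precise set of assumptions described above.

```python
# An illustrative sketch, not necessarily the exact parameters described above:
# estimate bits per character from the empirical letter frequencies of the
# string, then multiply by the string's length.
from collections import Counter
from math import log2

string1 = "kqsifpsbeiiserglabetpoebarrspmsnagraytetfs"

counts = Counter(string1)
n = len(string1)

# Shannon entropy H = -sum(p * log2(p)) over the observed letter frequencies
bits_per_char = -sum((c / n) * log2(c / n) for c in counts.values())

print(f"{n} characters, about {bits_per_char:.2f} bits per character, "
      f"roughly {bits_per_char * n:.0f} bits in total")
```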
Please note that the above 4 affirmative pieces of information about the string are by no means comprehensive. We could think of another half dozen characteristics of the string without trying too hard. For example, we could measure the string by looking at the number of characters of a certain height, or those that use only straight lines, or those that have an enclosed circle, or those that use a certain amount of ink, and on and on. This is not an idle example. Font makers right up to the present day take these kinds of characteristics into consideration when designing fonts, and, indeed, publishers can be notoriously picky about which font they publish in. As long as we lay out with reasonable detail the particular parameters of our analysis and agree upon how we are going to measure them, then we can plug in our string, generate a numerical answer, and produce additional affirmative statements about the string in question. And – note this well – it is every bit as meaningful to say “the string requires X amount of ink” as to say “the string requires X bits.”
Now, let us take a deep breath and press on by looking back to our statement #4 about the number of bits. Where did that statement #4 come from? Was it contained in or represented by the string? No. It is a statement that was (i) generated by an intelligent agent, (ii) using rules of the English language, and (iii) based on an agreed-upon measurement system created by and adopted by intelligent agents. Statement #4 “The string requires one hundred sixty-eight bits” is information – information in the full, complete, meaningful, true sense of the word. But, and this is key, it was not contained in the artifact itself; rather, it was created by an intelligent agent, using the tools of analysis and discovery, and articulated using a system of encoded communication.
Much of the confusion arises in discussions of “Shannon information” because people reflexively assume that, by running a string through the Shannon algorithm and then creating (by use of that algorithm and agreed-upon communication conventions) an affirmative, meaningful, information-bearing statement about the string, we have somehow measured meaningful information contained in the string. We haven’t.
Some might argue that while this is all well and good, we should still say that the string contains “Shannon information” because, after all, that is the wording of convention. Fair enough. As I said at the outset, we can hardly hope to correct an unfortunate use of terminology and decades of linguistic inertia. But we need to be very clear that the so-called Shannon “information” is in fact not contained in the string. The only meaning we have anywhere here is the meaning Shannon has attached to the description of one particular characteristic of the string. It is meaning, in other words, created by an intelligent agent upon observation of the string and using the conventions of communication, not in the string itself.
Lest anyone remain unconvinced, let us hear from Shannon himself:
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.” (emphasis added)*
Furthermore, in contrast to the string we have been reviewing, let us look at the following string:
“thisstringrequiresonehundredsixtyeightbits”
What makes this string different from our first string? If we plug this new string into the Shannon algorithm, we come back with a similar result: 168 bits. The difference is that in the first case we were simply ascertaining a characteristic about the string. In this new case the string itself contains or represents meaningful information.
String of Cotton
Now let us consider String 2. Again, we can create affirmative statements about this string, such as:
1. This string consists of multiple smaller threads.
2. This string is white.
3. This string is made of cotton.
Now, critically, we can – just as we did with our string of letters – come up with other characteristics. Let’s suppose, for example, that some enterprising individual decides that it might be useful to know how long the string is. So we come up with a system that uses some agreed-upon parameters and a named unit of measurement. Hypothetically, let’s call it, say, a “millimeter.” So now, based on that agreed-upon system we can plug in our string and come up with another affirmative statement:
4. This string is one hundred sixty-eight millimeters long.
This is a genuine piece of information – useful, informative, meaningful. And it was not contained in the string itself. Rather, it was information about the string, created by an intelligent agent, using tools of analysis and discovery, and articulated in an agreed-upon communications convention.
It would not make sense to say that String 2 contains “Length information.” Rather, I assign a length value to String 2 after I measure it with agreed-upon tools and an agreed-upon system of measurement. That length number is now a piece of information which I, as an intelligent being, have created and which can be encoded and transmitted just like any other piece of information and communicated to describe String 2.
After all, where does the concept of “millimeter” come from? How is it defined? How is it spelled? What meaning does it convey? The concept of “millimeter” was not learned by examining String 2; it was not inherent in String 2. Indeed, everything about this “millimeter” concept was created by intelligent beings, by agreement and convention, and by using rules of encoding and transmission. Again, nothing about “millimeter” was derived from String 2, nor does it have anything inherently to do with String 2. Even the very number assigned to the “millimeter” measurement has meaning only because we have imposed it from the outside.
One might be tempted to protest: “But String 2 still has a length, we just need to measure it!”
Of course it does, if by having a “length” we simply mean that it occupies a region of space. Yes, it has a physical property that we define as length, which, understood at its most basic, simply means that we are dealing with a three-dimensional object existing in real space. That is, after all, what a physical object is. That is to say: the string exists. And that is about all we can say about the string unless and until we start to impose – from the outside – some system of measurement or comparison or evaluation. In other words, we can use information that we create to describe the object that exists before us.
Systems of Measurement
There is no special magic or meaning or anything inherently more substantive in the Shannon measurement than in any other system of measurement. It is no more substantive to say that String 1 contains “Shannon information” than to say String 2 contains “Length information.” This is true notwithstanding the unfortunate popularity of the former term and the blessed absence in our language of the latter term.
This may seem rather esoteric, but it is a critical point and one that, once grasped, will help us avoid no small number of rhetorical traps, semantic games, and logical missteps:
Information can be created by an intelligent being about an object or to describe an object; but information is not inherently contained in an object by its mere existence.
We need to avoid the intellectual trap of thinking that, just because a particular measurement system calls its units “bits” and has unfortunately come to be known in common parlance as Shannon “information,” such a system is any more substantive or meaningful or informative, or inherently contains more “information,” than a measurement system that uses units like “points” or “gallons” or “kilograms” or “millimeters.”
To be sure, if a particular measurement system gains traction amongst practitioners as an agreed-upon system, it can then prove useful to help us describe and compare and contrast objects. Indeed, the Shannon metric has proven very useful in the communications industry; so too, the particular size and shape and style of the characters in String 1 (i.e., the “font”) is very useful in the publishing industry.
The Bottom String Line
Intelligent beings have the known ability to generate new information by using tools of discovery and analysis, with the results being contained in or represented in a code, language, or other form of communication system. That information arises as a result of, and upon application of, those tools of discovery and can then be subsequently encoded. And that information is information in the straightforward, ordinary understanding of the word: that which informs and is meaningful. In contrast, an object by its mere existence, whether a string of letters or a string of cotton, does not contain information in and of itself.
So if we say that Shannon information is “meaningful,” what we are really saying is that the statement we made – as intelligent agents, based on Shannon’s system and using our English language conventions – the statement that we made to describe a particular characteristic of the string, is meaningful. That is of course true, but not because the string somehow contains that information, but rather because the statement we created is itself information – information created by us as intelligent agents and encoded and conveyed in the English language.
This is just as Shannon himself affirmed. Namely, the stuff in String 1 has, in and of itself, no inherent meaning. And the stuff that has the meaning (the statement we created about the number of bits) is meaningful precisely because it informs, because it contains information, encoded in the conventions of the English language, and precisely because it is not just “Shannon information.”
—–
* Remember that Shannon’s primary concern was that of communication, and more narrowly, the technology of communication systems. The act of communication and the practical requirements for communication are usually related to, but are not the same thing as, information. Remembering this can help keep things straight.
I think that information is the wrong metric. I think that meaning is the point. As long as we keep talking about information, rather than meaning, I think we’ll be playing loop-de-loop with the loopies forever.
Consider the following tale:
A man flips a coin, it lands tails, tails, heads, tails.
He waits a bit and flips again, this time he flips tails, tails.
After another pause he flips tails, heads, tails.
After a last pause he flips tails.
Another man yells in a loud voice, “Everybody please clear the building!”
They asked him why he “yelled fire in a crowded theater.” He responded, “I am a ham.”
This is a tale of meaning.
Eric:
If you’re thinking of me I don’t mind being called out by name as holding the position that “meaningless information” is an oxymoron and for agreeing with Jan Kahre:
Perhaps I don’t articulate my position as well as I ought, but I think it is an eminently reasonable one.
🙂
I don’t know that I would go so far as to claim that information must inform “us.”
An instance of information informs a system capable of producing a functional effect.
Hi Eric, I’d appreciate your thoughts on the following.
The length of a string of cotton could be seen as a property of the string. The units of measurement for this property might be millimeters. The length of the string in millimeter units might be 168.
Now take a string of characters. The string might be 168 characters long. But ‘character’ is not a unit of measurement. So when we say that the string is 168 characters long we are not saying the same thing about the first string as we are saying about the second string when we say it has a certain length.
Further, that a string is 168 characters in length doesn’t mean it “contains” 168 bits of Shannon information. So we are talking two very fundamentally different things.
Mung @2:
Thanks for the clarification. I agree it need not inform “us”. I’ve updated the OP.
Overall I think I get the point of the OP, which urges caution when speaking about information.
However, for the sake of being argumentative and general pita, I will advance the claim that Shannon Information is “in” the string. Where else would the information be derived from?
Shannon’s theory is about an actual message from a set of possible messages, or put another way, about an actual string from a set of possible strings. And knowing the actual message or string we can say we have received an amount of information (because the receipt of the actual message or string has ruled out the alternatives).
So in some way it is the actual string which enables us to calculate the “information content” of the string, but that content is always relative to the non-actualized possibilities.
But what about the case where each string proceeding from the source is equally likely? If every string is equally likely, how can any particular string be informative? It is informative in the sense of being an actualization from a set of the merely potential.
Knowing the potential strings does not give us any information about the actual string. Therefore the information must reside in the actual string.
Or not. 😉
Eric and Mung:
I would reflect again on Shannon’s point:
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”
Shannon’s theory is obviously about communication of a string which is a message. It is true that he does not enter into the debate of why that string is a message. But he assumes that it is the message that we want to communicate.
So, the string always has the meaning of being the message we want to communicate. IOWs, it is functional in conveying a desired message.
Shannon’s theory is about the answer to a simple question: how many bits are needed to communicate that specific message?
So, he is measuring functional information (in respect to a communication problem).
ID is measuring functional information in respect to a generation problem. It answers the simple question: how many bits are necessary in an object to implement this function? So, in a sense, there is a strict formal similarity between Shannon’s theory and ID. That’s why Durston has been able to use Shannon’s metric so naturally in an ID context.
The problem of generating a string which conveys a meaning or implements a function is not essentially different from the problem of communicating a message. In a sense, a designer can be conceived as a conscious being who is trying to communicate a message to another conscious being. The message can be a meaning or a function, but in all cases only another conscious being will be able to recognize it for what it is.
Now, the important point that I want to suggest is that neither Shannon’s theory of communicating the message nor ID theory of generating the message are really “qualitative”. Both are “quantitative” theories. In a sense, neither deals with the problem of “what is meaning” and “what is function”.
I will try to be more clear. Shannon assumes that a string is the message, and for him that’s all. OK.
ID specifies the string as meaningful or functional. For example, in my personal version of the procedure to assess dFSCI, I require an explicit definition of the function for which we measure dFSCI in an object. OK.
Shannon generates a partition in the space of all possible communicated strings: this string is the message, all other strings are not.
ID generates a partition in the space of all possible generated strings: this set of strings is functional (the target space); all the others are not.
At that point, both reasonings are interested only in quantitative measurements: how many bits are necessary to convey the message, how many bits are necessary to implement the defined function.
Neither theory really measures the meaning or the function. They only measure the quantitative bits necessary to deal with that meaning/function in a specific context (communicate it/generate it). OK?
Let’s see an example with the English language.
a) One two three four five six seven eight nine ten eleven twelve
b) This is a table in some room in some building in a city street
c) Pi means the ratio of a circle’s circumference to its diameter
These three strings have the same length (62 characters) and all are correct statements in English. If we don’t consider possible differences in compressibility, we can measure the number of bits necessary to communicate or generate each particular “message”, and it should be more or less similar.
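To make the point concrete, here is a minimal sketch. It assumes, purely for illustration, a uniform 27-symbol alphabet (letters plus space), ignoring case and punctuation; under that assumption the bit cost depends only on the length of the string, so all three “messages” come out the same, whatever they mean.

```python
# A minimal sketch, assuming a uniform 27-symbol alphabet (a-z plus space)
# and ignoring case and punctuation: the bit cost depends only on length.
from math import log2

strings = [
    "One two three four five six seven eight nine ten eleven twelve",
    "This is a table in some room in some building in a city street",
    "Pi means the ratio of a circle's circumference to its diameter",
]

bits_per_symbol = log2(27)  # about 4.75 bits per equally likely symbol

for s in strings:
    print(len(s), "characters ->", round(len(s) * bits_per_symbol), "bits")
```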
Does that quantitative consideration tell us anything specific about the three different “messages”? No. They are very different, not only because they convey different meanings, but also because those meanings are of very different type and quality. But our “quantitative” theories (both Shannon’s theory and ID) are not really dealing with that aspect.
Let’s apply the same reasoning to a context that I always use: the origin of functional information in proteins.
So, let’s say that we have two different protein families, with two different, well defined, biochemical functions (for example, two different enzymes). They may have different lengths in AAs, but we apply the Durston method and we come up with a similar value for their functional complexity: let’s say 200 bits. OK, that’s what ID can tell us.
But let’s say that one protein is essential for survival of the biological being (whatever it is), so that a “knockout” experiment is incompatible with life, while the second protein is inserted in some redundant system, and the consequences of its “knockout” are much less obvious. At the level of the basic informational analysis, ID tells us nothing about that “difference” between the two functions. (Of course, we could apply the analysis to wider systems, but that’s another story.)
Another way of saying that is that the same “informational” complexity can be computed for a Shakespeare sonnet, for a passage from a treatise of mathematics, or for a shopping list. Neither Shannon’s theory nor ID deals with the “quality” of the information (for example, the beauty in the sonnet), but only with the quantitative aspect of communicating or implementing each message, whatever it is, in a physical medium (the string).
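For those who want to see the general shape of a Durston-style calculation, here is a rough sketch. It shows only the general idea, using a made-up toy alignment rather than real protein data: at each aligned position we compare the maximum uncertainty (log2 of the 20 amino acids) with the uncertainty actually observed among functional sequences, and sum the reduction over all positions.

```python
# A rough sketch of the general idea behind a Durston-style measure of
# functional complexity. The alignment below is a made-up toy example,
# not real protein data.
from collections import Counter
from math import log2

toy_alignment = [
    "MKTAYIA",
    "MKSAYIA",
    "MKTAYVA",
    "MRTAYIA",
]

def functional_bits(alignment):
    total = 0.0
    for pos in range(len(alignment[0])):
        column = [seq[pos] for seq in alignment]
        counts = Counter(column)
        n = len(column)
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += log2(20) - h  # reduction in uncertainty at this position
    return total

print(f"toy functional complexity: {functional_bits(toy_alignment):.1f} bits")
```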
One if by land, two if by sea.
In binary: 0 if by land 1 if by sea.
Reduced further, 0 or 1.
Assuming a finite alphabet in which each symbol is equally likely, and given the selection (actualization) of one or the other, we can say we have an amount of information of one bit.
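A tiny sketch makes the arithmetic explicit: selecting one outcome from N equally likely alternatives carries log2(N) bits, whatever the alternatives happen to mean.

```python
# A tiny sketch: the bit count for one selection among N equally likely
# alternatives is log2(N), regardless of what the alternatives mean.
from math import log2

for n_alternatives in (2, 4, 26):
    print(n_alternatives, "equally likely alternatives ->",
          log2(n_alternatives), "bits")
```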
But this measure of information tells us nothing about the meaning of the “0” or the meaning of the “1.”
It ought to go without saying, but alas, it needs to be said:
From the fact that this measure of information tells us nothing about the meaning of the “0” or the meaning of the “1” it does not follow that there is or even can be such a thing as “meaningless information.”
Warren Weaver:
Mung:
Thanks for your thoughts. Just a couple of quick comments before the sandman arrives:
The information (as you have regularly and correctly pointed out, the “meaningful” aspect) is (i) created by an intelligent being, (ii) using an arbitrary, agreed-upon, measurement system, and (iii) expressed in an arbitrary agreed-upon unit, which (iv) can then be encoded and communicated to other intelligent beings, who (v) can then decode and understand the information because they are also familiar with the measurement system and units in question.
In other words, the entire exercise of creating what we refer to as “Shannon information” is a mental activity (or investigative activity) if you will.
The statement: “This string requires one hundred sixty-eight bits” is information, yes. But not because that was somehow contained in String 1, but because it was created by an intelligent being to describe String 1.
Also, the reason I highlighted these two strings:
kqsifpsbeiiserglabetpoebarrspmsnagraytetfs
and
thisstringrequiresonehundredsixtyeightbits
is because the Shannon measurement (using the particular parameters I have outlined) spits out the same result. In other words, one particular way of measuring a string of characters happens to give the same answer for both strings. However, the strings are obviously quite different in their actual content.
With each string of characters we can create a new piece of information, namely, a description in the English language that tells us something about the string. But only the second string contains information in its own right.
I would also add, that I believe the only logical alternative to what I have outlined is to argue that everything, everywhere contains information. Such a view is not only contrary to our experience and understanding of what information is, but is utterly useless because we then have no ability to determine when we are dealing with information. I discussed this in detail in an earlier thread, so I won’t get into the details here unless people are interested. But the upshot is that in order to have any rational conversation about information at all, we must distinguish between information about an object and information contained in an object.
Hi Eric, all,
I’ll pop in again as this is the only topic I feel qualified to comment on.
I think you are making this too complicated, Eric, and you are violating an assumption Shannon made. Perhaps this illustration can help:
You are the radio operator on a WWII ship. Every message given to you to send is in code. You have no idea what the message means: that is not your job. Your job is to make sure the message is sent and that it is received absolutely correctly on the other end.
In our modern world we do the same thing but our computers do it for us (TCP/IP is one such example). The computer makes no judgement on the meaning of the message; it just sends it where it has been commanded to go.
The point is, at the level where Shannon information is calculated, the assumption is that EVERY message given to us is packed completely full of information. It is pointless to speculate on whether a random ASCII string has information or not. In the low-level communication domain Shannon and we are working in, we assume the string has the maximum content of information.
As the WWII radio operator, you attach extra information to your message that will help the receiver to calculate if he or she (lots of women radio operators in that day) got it right. Shannon does this as well, but in a formal manner. The extra information makes the set of valid messages that can be received much much smaller than all possible messages, and when an invalid message is received the error correction codes map the received message to the closest valid message.
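To make that idea concrete, here is a minimal sketch with a made-up five-bit codebook, purely for illustration: the set of valid messages is a small subset of all possible received strings, and a garbled message is mapped back to the nearest valid one by Hamming distance.

```python
# A minimal sketch with a made-up codebook: valid messages are a small subset
# of all possible strings, and a garbled message is mapped to the nearest one.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

valid_messages = ["00000", "01011", "10101", "11110"]  # hypothetical codebook

def decode(received):
    # map the received string to the closest valid message
    return min(valid_messages, key=lambda codeword: hamming(codeword, received))

print(decode("01111"))  # one flipped bit is repaired: prints "01011"
```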
Shannon de-emphasizes meaning because he is talking about the things the radio operator must worry about. The assumption is that higher-ups encoded meaning, and will decode it on the other end. The message may contain gibberish as an enemy decoy, it may have extra characters appended (common practice), but the radioman is not the judge of that: it is all information to him. Further, he has no way to tell what or how much meaning each message has.
The receiver of the message attaches meaning to it according to pre-arranged rules.
So Shannon’s assumption is that all stuff going through a Shannon channel is pure information.
Here is an example message:
ogjkesijdydnzobiweucgoiqme
We are not allowed to comment on whether or not this is information at the Shannon level: it is. In this case, it would be a very common type of message: it happens to be a portion of the first paragraph of your article, zipped, then represented by the lower case ascii characters. A computer may send exactly this message. I realize you stated your string does not represent anything, but that is against the rules. The assumption, again, is that all stuff sent is pure information.
As an interesting sidelight, speaking of random strings, information sent at near-channel-capacity is indistinguishable (to the uninformed observer) from random noise.
Glenn
GBDixon:
Thank you for the very clear contribution. 🙂
GBDixon hit it – good job. The transmitting and receiving equipment doesn’t give a darn about meaning. However we care that the number of bits transmitted = the number of bits received. And that is where Shannon comes in.
That said, this is why, IMO, Dembski came up with “complex specified information” – to differentiate between the meaningless and the meaningful. Where Shannon is all about information-carrying capacity, Dembski is all about what that information is/ does/ means.
People seem to struggle with the idea of how to apply information theory to biology, and there are lots of confusing or conflicting ideas. But here is a simple exercise that can place a lower bound on the information content of a cell.
1. We take a DNA sequence that encodes to a protein. We call this a valid message because we know there is a receiver (the cell) that does something with it: make a protein.
2. For this section of DNA there are a multitude of possible DNA combinations that do not do anything useful, or are harmful. We may be able to find a few more combinations that encode to other useful proteins (experts could comment on that) or that encode to the same protein (some sort of redundancy). These would be added to the set of valid messages.
3. The number of valid messages and the number of all possible DNA sequences (total messages) are then used to determine the information content of that DNA sequence.
4. This can be done for every DNA sequence that has a known function, and the results combined to give a lower bound on the cell’s DNA information content.
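As a rough sketch of steps 3 and 4, with entirely made-up numbers: if a coding region is L bases long there are 4^L possible sequences, and if only some hypothetical number of them yield a working protein, the information assigned to that region is log2(total/valid).

```python
# A rough sketch of steps 3-4 above, with entirely made-up numbers.
from math import log2

L = 300                  # assumed length of the coding region, in bases
total_messages = 4 ** L  # all possible DNA sequences of that length
n_valid = 10 ** 40       # hypothetical count of sequences that still function

bits = log2(total_messages) - log2(n_valid)
print(f"lower-bound information content: about {bits:.0f} bits")
```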
I am out of my comfort zone here…my expertise is in digital communication theory. But this is how I would go about approaching the problem of how much information a cell contains.
Glenn
I have to admit that I do not understand the OP at all, nor the thread as a whole. For example I can’t tease out any understanding from “4. This string requires …” and the two paragraphs preceding.
I have to say that I hope not too many other EE’s from the opposing camp will be sizing us up here. But as a good sport I’m going to link to a chapter by Carlson from his textbook which I have. You guys should read it if you want a concise summary of what Shannon and his colleagues were after. This chapter is similar but more extensive than the 1949 Shannon paper (not the more famous 1948 paper).
If you guys as non-specialists are looking for understanding, I recommend going to the horse’s mouth – download this, and I hope you have some better understanding than I’m having right now, then maybe we can get onto the same page: http://posterwall.com/blog_att.....1318295130
Glenn@ 14- that is Crick’s version of biological information:
Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein. Sir Francis Crick in “Central Dogma”
GBDixon as to:
Your approach is called ‘functional’ information (or more precisely dCSI by gpuccio), and functional information has been fleshed out for proteins to a certain extent by Dr. Szostak, Dr. Johnson, Dr. Abel, Dr. Durston and others,,,
At the 17 minute mark of the following video, Winston Ewert uses a more precise metric to derive a higher value for the functional information inherent in proteins:
Here Kalinsky gives a ‘ballpark figure’ for the functional infomation required for the simplest life:
Yet GBDixon there is another way that, IMHO, tells us more precisely how much information a cell contains,,,
First, in this endeavor, it is important to learn that information resides throughout the cell, not just in DNA sequences,,
As well, in our endeavor to understand precisely how much information a cell contains, it is important to learn that ‘The equations of information theory and the second law are the same’,,,
And by using this relationship between entropy and information, researchers have calculated, from thermodynamic considerations, that ‘a one-celled bacterium, E. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica’,,,
For calculations, from the thermodynamic perspective, please see the following site:
Moreover, through many years of extreme effort, this connection between thermodynamics and information was finally experimentally verified to be true,,,
Moreover, Dr Andy C. McIntosh, Professor of Thermodynamics at the University of Leeds, holds that it is non-material information which is constraining “the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy’
Dr. McIntosh’s contention that ‘non-material’ information is constraining “the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy’ has now been verified.
In showing you this verification, first it is important to learn that ‘non-local’, beyond space and time, quantum entanglement, can be used as a ‘quantum information channel’,,,
And this ‘non-material’, non-local, beyond space and time quantum entanglement/information, is now found, as Dr. McIntosh had theorized, to be ‘holding life together’, i.e. ‘constraining “the local thermodynamics (of a cell) to be in a non-equilibrium state of raised free energy”
classical ‘digital’ information is found to be a subset of ‘non-local’ (i.e. beyond space and time) quantum entanglement/information by the following method:
,,,And here is the evidence that quantum information is in fact ‘conserved’;,,,
Besides providing direct empirical falsification of neo-Darwinian claims as to the generation of information, the implication of finding ‘non-local’, beyond space and time, and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Verse and Music:
GBDixon:
Thank you again for your very useful comments in #14. 🙂
As BA has already mentioned, your approach is completely compatible with the concept of functional information in biological molecules, and in particular in functional proteins, and with the ways used to compute it and to infer design according to the complexity observed.
This ‘information’ is one slippery beast. On par with entropy, and most interestingly, possibly intimately connected with it. But it is certainly real, whether or not we can pin it down yet. Although with darwinist harping about the lack of formality, one could think they don’t believe in it as a real thing. Perhaps it is ALL meaning, ‘information’ being 100% immaterial. Created solely by non-physical minds, any physical arrangement related to information is just that – a relation.
So DNA does not contain information.
Roy
Semi related:
Programming of Life – video
https://www.youtube.com/watch?v=mIwXH7W_FOk
Seminar by Dr. Don Johnson
Apologetics Symposium – Sept. 2014
Cedar Park Church, Bothell WA
Roy:
It should be simple, why don’t you understand?
We can derive information from any object about the object itself. So, any object is a source of information about itself. But that does not mean that the object conveys meaningful information about something else.
A DNA protein coding gene certainly can give us information about itself, like any other object: it is a molecule, made of atoms, and so on. It has molecular weight, and so on.
But the sequence of nucleotides in it is another matter altogether: it describes something else, a functional protein. With the correct procedures, it can convey that meaningful information to a protein synthesis system, and indeed it does exactly that in the cell.
So, a water molecule is a molecule, but it has no meaningful information about anything else. A protein coding gene is a molecule, but it conveys in its symbolic sequence very meaningful information about something else.
Should be simple.
groovamos:
the link does not work, apparently.
http://posterwall.com/blog_att.....1318295130
groovamos’ link takes me to a PDF document on information theory and communication
gpuccio
Apparently groovamos fixed the link in his post #15
It gave me a 6.98 Mb file named
blog_attachment.pdf
which I just emailed to you
check your email
the document title is
information theory and communication systems
All, thanks for the good thoughts. I wish I had time for a detailed discussion of all the excellent comments on this thread, but perhaps I can at least offer a couple of quick reactions on a few of them. For convenience of discussion, I will do so in separate individual comments later today.
gpuccio @7:
Some good comments. A couple of quick reflections:
I believe I understand what you are saying, with one important caveat. Shannon calls the string to be communicated a “message” because that is the term used in discussing communication. But he does not assume that it is functional or that it conveys anything in particular. Indeed, he assumes that those aspects are irrelevant. It doesn’t matter whether the “message” to be communicated is what we would normally understand as a message: namely, an encoded piece of information intended for receipt and decoding by a recipient. It could just as well be a jumble of random nonsense.
Agreed.
Again, he isn’t interested in whether the message has any function or any meaning. He is only interested in how many bits are required (as you noted); in other words, he is interested in how big the pipeline has to be to carry the string, whether or not the string is meaningful or nonsensical.
I like the idea of it being primarily a generation problem; that is probably true. I’m wondering, however, if ID is limited to generation? There are aspects of biology that deal with transmission of information, so it seems transmission is relevant as well.
I agree that Shannon’s theory is explicitly non-qualitative. However, ID is very much interested in the qualitative aspect. Behe’s whole notion of irreducible complexity (which is, after all, a subset of CSI) is built upon identifying and appreciating real-world functionality.
It is only the “C” part of CSI that can be related to Shannon’s theory. And I agree that Shannon’s metric can be a useful metric to use when analyzing the complexity of a particular string. It is less useful in direct application to physical machines in real three-dimensional space, but even there, as kairosfocus has pointed out, we can perhaps use some kind of complexity calculation if we consider the amount of information required to specify such a machine (such as in an AutoCAD file).
But the “S” part of CSI is not equivalent to Shannon’s metric. Indeed, it is precisely the thing that Shannon said he was not addressing, namely, meaning, function, purpose, etc.
I see where you are headed, and I think we are largely on the same page. Let me try to state it this way:
We cannot measure something like meaning or function in the same way that we can measure complexity. So we are reduced to just measuring complexity. In Shannon’s case, that is all he was interested in. In ID’s case we typically only measure complexity once we have already identified that there is a meaning or function. (In other words, once we see a “specification”.)
Agreed.
Agreed, as to the complexity calculation. But ID is broader than Shannon’s theory. It necessarily encompasses not only a complexity calculation but a (in my view non-calculable) recognition of the specification (i.e., a meaning, function, purpose, etc.).
If ID is only about calculating complexity then we don’t need ID. Dembski’s whole insight, if you will, about how we detect design was the recognition that we have to tie the complexity calculation to a specification. So, yes, ID includes a component of complexity that is similar to Shannon’s metric (and indeed, we can use Shannon’s direct metric to calculate it in many cases). But ID is broader than Shannon’s concept and additionally includes the tying of that calculation to a specification.
Glenn @11:
Thank you for your thoughtful comment, which highlights the fact that Shannon was interested in communication and the communication process, not on the substantive content of the string. I think your example of the WWII operator’s role is spot on.
The reason this discussion comes up in the intelligent design debate is not because we are talking so much about faithful communication of a pre-existing information-rich string, but because we are interested in how information-rich strings get produced in the first place. (gpuccio raised this point and I responded above @28, with one caveat about transmission in biological systems.)
What typically happens in the ID debate is an exchange something like the following:
The issue I am addressing is not whether for communication purposes the operator of the communications equipment is obligated to treat the string as chock-full of information (the operator is, whether due to military orders in your WWII example or due to contractual obligations like our modern telecommunications operators). The issue I am addressing is that the number of bits required to transmit a string (the Shannon metric) is independent of whether the underlying string constitutes meaningful information.
Yes, as I said in the OP, by definitional fiat we could call everything that is transmitted through a communications network “information.” But doing so robs the word of content and is utterly useless in addressing the all-too-familiar disconnect between the ID proponent and the ID opponent. Furthermore, such a generalized and “everything-is-information approach” is anathema to the “information” we are interested in for purposes of ID.
I would note that you said everything transmitted by the operator was “pure information.” But that statement is possible only because you assumed at the outset that every message was “packed completely full of information.” That is an assumption that simply does not hold in the situation of trying to generate meaningful biological information from a string of nucleotides or amino acids. Indeed, that assumption is known to not be the case.
So, contrary to the low-level WWII operator who is obligated by virtue of his job to assume that every string handed to him is meaningful information, the question of whether a string in the real world – particularly one that we come across and have to determine whether it is designed or not – is very much an open question. As a result, there is very much a legitimate question as to whether or not the string contains real information. We can’t get around this real-world distinction by simply defining every string as “information.”
Joe @13:
Exactly. CSI incorporates the concept of complexity as the “C”, which can be measured in a variety of ways, including by Shannon’s metric (for strings of characters/symbols). But CSI is broader than that, in that CSI ties that complexity calculation to real-world function/meaning.
Glenn @14:
Good thoughts.
BA77:
Thanks for the references to some preliminary calculations that give us a hint as to the amount of information in a cell.
Roy:
DNA most certainly does contain information. That is why I said information is not inherently contained in an object by its mere existence. The fact that information can be encoded in a physical medium (like DNA) is quite clear.
gpuccio has given a good answer, but you might check out this OP, as I address this issue directly, including the DNA situation:
http://www.uncommondescent.com.....formation/
Eric:
I agree with what you say.
ID is obviously more than Shannon’s theory: its purpose is to infer design.
The specification, and the computation of the complexity linked to it, are certainly peculiar aspects of ID theory.
Shannon’s metric, in the right context, is a very good way to measure that complexity (as shown by Durston’s application).
The specification, in the traditional ID inference, can be considered a binary measure: either it is there, or not. As you know, there are different kinds of specification.
I generally use functional specification, but indeed any objectively and explicitly defined rule which generates a binary partition in the search space is an example of specification. Once a specification is given, we can compute the complexity linked to it (the target space / search space ratio, given the partition generated by that specification).
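As a small illustration of that ratio, with made-up sizes for the two spaces: once the specification has generated the partition, the complexity is simply the negative log2 of the target-space to search-space ratio.

```python
# A small illustration with made-up sizes: specified complexity as
# -log2(target space / search space).
from math import log2

search_space = 2 ** 160  # hypothetical number of possible strings
target_space = 2 ** 20   # hypothetical number satisfying the specification

complexity_bits = log2(search_space) - log2(target_space)
print(f"specified complexity: {complexity_bits:.0f} bits")  # 140 bits here
```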
Another example of specification is pre-specification. That is more similar to the Shannon scenario. Let’s say that I have a random string, and I give my specification as “this string”. That is valid, but only if used as a pre-specification. IOWs, I can measure the probability of getting exactly that string in a new random search, and the probability will be 1:search space (there is only one string identical to the one which specifies).
So, we could say that Shannon pre-specifies his string: this is the string that we want to communicate. He is using a pre-specification, and not a specification based on meaning and function. That’s why he can avoid dealing with meaning and function.
Another consequence of the binary form of specification in the ID inference is that the inference is the same for any kind of specification, once we get some specified complexity. IOWs, we infer design with the same degree of “strength”, according to the value of the specified complexity, for any kind of specification (meaning, function, pre-specification, special forms of order). In that sense, the inference is similar for a Shakespeare sonnet and for one random pre-specified string which is successfully “generated” again after having been specified.
The type of specification, however, can help us very much in the other part of the ID inference: excluding explanations based on necessity.
Indeed, while both pre-specification and specification given by some special order can often be explained by some necessity mechanism already existing in the system, and in that case do not allow a design inference, the specification based on meaning or function is the best of all: necessity algorithms cannot explain meaning or function or, to explain it, they must usually be much more complex than the observed object itself.
Finally, you say:
“I like the idea of it being primarily a generation problem; that is probably true. I’m wondering, however, if ID is limited to generation? There are aspects of biology that deal with transmission of information, so it seems transmission is relevant as well.”
Well, certainly there are many aspects of biology that deal with transmission of information, and they grow every day.
But the point is: the ID interest in that case would be: how did the system which transmits information in this biological being arise? How was it generated? Because you need an informationally complex functional system to transmit information.
So, in the end, ID is interested always in the design inference, IOWs, in identifying the origin of information from a conscious agent.
Eric:
Oops, I was mainly referring to the “specification” part. Your string 1 may be considered complex but it doesn’t have any specification other than the string itself. OTOH Hamlet’s soliloquy is both complex and specified.
gpuccio @33:
I think you’re right that, although transmission aspects do occur in biology, for the most part ID is interested in the generation of the information.
The Shannon metric is useful as a way to calculate complexity for certain things (strings of characters being our typical example), but in some ways it was really intended to drive toward a different question. As a result, the focus on “Shannon information” by some ID proponents has been something of a distraction, in my opinion, although it is hard to come up with another simple, easy-to-use avenue to calculate complexity, so I’m not sure what else to use.
“Your string 1 may be considered complex but it doesn’t have any specification other than the string itself. OTOH Hamlet’s soliloquy is both complex and specified.”
Precisely.
Dionisio:
For some reason, the link does not work for me. And I have not received your email!
Some adverse destiny must be at work 🙂