Is the term “biological information” meaningful?


Some say that it's not clear that the term is useful. A friend writes, "Information is always 'about' something. How does one quantify 'aboutness'?" He considers the term too vague to be helpful.

He suggests the work of Christoph Adami; for example, this piece at Quanta:

The polymath Christoph Adami is investigating life’s origins by reimagining living things as self-perpetuating information strings.

Once you start thinking about life as information, how does it change the way you think about the conditions under which life might have arisen?

Life is information stored in a symbolic language. It’s self-referential, which is necessary because any piece of information is rare, and the only way you make it stop being rare is by copying the sequence with instructions given within the sequence. The secret of all life is that through the copying process, we take something that is extraordinarily rare and make it extraordinarily abundant.

But where did that first bit of self-referential information come from?

We of course know that all life on Earth has enormous amounts of information that comes from evolution, which allows information to grow slowly. Before evolution, you couldn’t have this process. As a consequence, the first piece of information has to have arisen by chance.

A lot of your work has been in figuring out just that probability, that life would have arisen by chance.

On the one hand, the problem is easy; on the other, it’s difficult. We don’t know what that symbolic language was at the origins of life. It could have been RNA or any other set of molecules. But it has to have been an alphabet. The easy part is asking simply what the likelihood of life is, given absolutely no knowledge of the distribution of the letters of the alphabet. In other words, each letter of the alphabet is at your disposal with equal frequency. More.

So Chance wrote an alphabet? And created "aboutness"? It's not the same type of chance we live with today.

See also: Does the universe have a “most basic ingredient” that isn’t information?

New Scientist astounds: Information is physical

and

Data basic: An introduction to information theory

Follow UD News at Twitter!

Comments
Mung: The other thing most people don't understand (and have probably never thought about) is that any time we implement an additional code or protocol to convey a message (such as a single bit to tell us who won the coin toss in the Super Bowl), it ultimately requires more overall channel capacity to convey the message, not less. This is doubly true when our code implements a string that lacks specified complexity, such as in the case of a single bit. I don't have a lot of time tonight to expand on this and I hope that makes some sense, but let me know if it would be helpful for me to elaborate. Eric Anderson
Mung @46:
Eric, I know you and I have had our differences in the past over this whole “information” topic, but now I am beginning to wonder why, lol.
Thank you for your kind words. :)

I think we have to be careful about coin toss examples. There have been, IMHO, a number of missteps in the analysis over the years on these pages as people have discussed coin tosses. The value of the coin toss examples is that they are simple and avoid some of the complexities. The risk of the coin toss examples is that they tend to smuggle in through the back door some additional information that appears to have been generated through the coin toss – rather like Dawkins' weasel program smuggled in the information needed to generate the desired result.

That said, let's consider a simple coin toss, as proposed: We have to be careful when we talk about the "quantity" of information. If we don't know anything else about the coin toss, it doesn't convey anything. As you well state, it might represent "which team got to choose whether to kick or receive." How do we quantify the concept of which team got to choose, what that means in a particular instance, how it might impact the larger game?

For example, in the recent Super Bowl, the overtime coin toss was of substantial significance and, potentially, had a large impact on the outcome of the game in that particular circumstance and moment of the game. That coin toss was much more significant, due to the rules and momentum of the game, than the initial coin toss. Yet in each case we are presented with a single bit.

I agree with you that "looking only at the physics of the coin toss might utterly overlook the message that is conveyed by the result." Thus, there might be many different messages conveyed by a coin toss. The importance, or relevance, or significance could be vastly different in each case. In other cases, it might signify nothing at all. Does it make sense in every case to say that we received the same quantity of information?

Logically, it seems there are two possible approaches here:

1. We say that we have received the same exact amount of information in each case, namely one bit. However, we acknowledge that this one bit of information ties back to a much larger informational landscape: in some cases leading to significant, important, vast quantities of information; in other cases leading to nothing of import. This is not without rationale and, at first blush, this is a tempting approach to take, particularly when we acknowledge that information is generally conveyed, received, and understood within some broader context.

2. We say that the bit we measured is largely independent of the information. There are a couple of reasons this is the case. First, the information conveyed could have been conveyed with a larger number of bits and in various ways. In other words, there is nothing inherently critical about the information itself that requires a certain number of bits. Second, we can get any quantity of bits we want, without any useful information attached (see your Liddle comment below).

Of these two approaches, the second is the more consistent and rational. Allow me to expand on that for a moment.

Part of the problem with the simple binary coin toss example, such as the Super Bowl coin toss, is that we have agreed, behind the scenes, on a significant meaning that can be expressed in a single bit. A simple yes/no or up/down decision point. Then, when we measure the bit – more strictly, when we measure the amount of channel capacity required to transmit our simple message – we are tempted to think we have measured the quantity of information conveyed by the pre-agreed-upon bit.
We have smuggled in through the back door, as it were, a huge amount of information, set up and prepared, and arranged, and pre-agreed-upon, in such a way that a simple signal – a switch – can convey a great deal of information. We probably don't ever get away from this completely, as information doesn't necessarily exist in a vacuum. There is always some background knowledge required. But the point is clear.

However, we are not quite there yet. Let's take the analysis one step further. Consider the fact that there is a huge difference between the following two scenarios:

1. Here are the rules of communication (e.g., the grammatical rules of the English language) we have agreed upon. Now I may convey some information to you later using these rules of communication.

versus

2. Here is a large amount of information. I am conveying it to you now (or have already conveyed it to you) using our agreed-upon rules of communication. You have it already in your possession and have it handy. Later, I will send you an additional piece of information – a simple binary yes/no piece of information. When you receive that additional piece of information, then you will know whether to take into account or to disregard that much larger message I previously conveyed.

The above should make it clear what is really happening in a coin toss situation, such as the Super Bowl coin toss. In addition, as soon as we step away from these basic yes/no type of inquiries and start looking at more detailed communication, the distinction between the information and the channel capacity required to convey the information becomes even more clear.

-----
Now say we toss the coin 500 times. Someone might argue that in doing so we have generated 500 bits of information. Information about what? [A question Elizabeth Liddle never seemed to be able to answer.]
Yes, I agree with you that this is problematic. There is no evident meaning, or purpose, or function in the sequence. Yet, per the Shannon measure, we have generated just as great a quantity of so-called "information" as if we had a meaningful string. This is the confusion that Liddle and company have fallen into. Again, much of the confusion results from the idea that the Shannon measure is measuring information. It isn't. Shannon himself was quite clear on that point. What we are measuring is channel capacity. In intelligent design terms, we are measuring complexity. Whether or not there is real information in the string is entirely a separate inquiry. And one that has to be made separate from and independent of the particular channel, or size of channel, used to convey that information.
Frankly, I think what many people don't understand is that Shannon information isn't even defined without a given probability distribution. But the Shannon measure can be defined for any probability distribution.
Yes, quite true. Again though, even once we have defined a probability distribution and run our Shannon calculation, we need to realize we are not measuring information, just the channel capacity that would be relevant to conveying a particular piece of information, if it were there. Eric Anderson
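To make the point above concrete, here is a minimal sketch in Python. The strings and the 27-character alphabet are illustrative choices, not anything from the thread: under a uniform model, the Shannon measure assigns a meaningful sentence and same-length gibberish exactly the same number of bits, because it measures channel capacity, not aboutness.

```python
import math
import random

def shannon_bits(n_symbols, alphabet_size):
    # Channel capacity needed for n_symbols drawn uniformly
    # from an alphabet of the given size: n * log2(alphabet size).
    return n_symbols * math.log2(alphabet_size)

meaningful = "METHINKS IT IS LIKE A WEASEL"
gibberish = "".join(random.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ ") for _ in meaningful)

# Same length, same 27-character alphabet, so the measure is identical:
print(shannon_bits(len(meaningful), 27))  # ~133.1 bits
print(shannon_bits(len(gibberish), 27))   # ~133.1 bits -- same "quantity", no meaning
```

The identical numbers for the sentence and the gibberish are exactly why the measure is better thought of as channel capacity than as information.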
Critics will never be forced to accept the design inference on the basis of anything, because the only reason why they don’t accept it is ideological and dogmatic. I am firmly convinced that ID theory and the design inference for biological functional information is so self-evident, according to the facts we know, that only dogma and cognitive obstinacy can induce any reasonable intelligent person to deny those things.
:D Mung
Mung: Some simple answers to old questions:

a) UNIPROT is a database of proteins. This is the link: http://www.uniprot.org/

Let's say you do a search for some protein, for example myoglobin. You get 903 results. You can filter the search by organism, or in other ways. However, the first result (in this case) is the human protein. You get the Uniprot accession number: P02144 and the entry name: MYG_HUMAN. By clicking on the first value, you get a very detailed page about the protein, including function, domains, and the sequence.

b) BLAST is an online local alignment tool at NCBI. For proteins, you use BLASTP. Here is the link: https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome

Here, you can paste the accession number from Uniprot (in this case, P02144) into the first field of the online form, and blast the sequence against all known sequences in the non-redundant protein sequences database (the default). You can also filter the alignment by organism or group of organisms.

So, blasting human myoglobin against cartilaginous fish, you get 12 hits. The best is listed first. You can see:

the bitscore of homology: 127 bits
the Expect value: 4e-38
the number and percent of identities: 65/148 (44%) (referred to the query coverage, which is 96%)
the number and percent of positives: 87/148 (58%) (referred to the query coverage)
the number and percent of gaps: 1/148 (0%)

and the alignment itself. This is just a start. gpuccio
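For readers who would rather script this walkthrough than use the web form, here is a minimal sketch using Biopython's NCBI BLAST client. This is my suggestion, not gpuccio's stated workflow (he uses the web interface), and the organism filter string is an illustrative assumption:

```python
# Requires Biopython (pip install biopython); the NCBI web service can take a while.
from Bio.Blast import NCBIWWW, NCBIXML

# Blast human myoglobin (accession P02144) against the non-redundant (nr)
# protein database, restricted to cartilaginous fish, as in the walkthrough above.
result_handle = NCBIWWW.qblast(
    "blastp",
    "nr",
    "P02144",
    entrez_query="Chondrichthyes[Organism]",  # cartilaginous fish
)
record = NCBIXML.read(result_handle)

best = record.alignments[0].hsps[0]  # hits are returned best-first
print("bit score:", best.bits)
print("E-value:", best.expect)
print("identities: %d/%d" % (best.identities, best.align_length))
```

The printed fields correspond to the bitscore, Expect value, and identities gpuccio reads off the results page.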
Eric Anderson: Thank you for your thoughtful comments. I fully agree with you. I agree with you more than you may think.

You see, I have never said that functional complexity is simply the result of a calculation. As you very correctly say, it is the result of a procedure which includes:

a) The explicit definition of a function

b) An explicit and objective procedure to assess if the object we observe can implement the function or not

c) The computation of the bits of information that are necessary to implement that function

The whole process is one procedure. All parts are necessary. I have never given any emphasis to the computation itself. Of course, the whole procedure is a cognitive process that includes some computation. I have also clearly stated that all computations are essentially cognitive processes, and that therefore there is no essential difference between defining and assessing a function and measuring or computing a numerical variable: all are cognitive processes, that arise in the mind and only in the mind.

I firmly believe in the neo-platonic interpretation of mathematics and logic: they are cognitive fields that are not empirical, and arise in the mind. I fully agree with Penrose about that. Numbers and categories are both essential components of mathematical thought. Mathematics is not computation: it is much more. When you define a function, you generate a binary partition in a set (the search space), and you get two subsets (function yes, function no). That is mathematical reasoning, even if it is not "computation". Categorical and numeric variables are the foundation of all statistical thought.

I do agree that the concept of function is deeper than other forms of categorization: its very peculiar content is the idea of purpose, of "being able to do something desired", and purpose is a very peculiar experience of consciousness. But meaning too cannot be defined without referring to consciousness. All mathematics, indeed all science and philosophy and everything else in human experience, arises only, only in conscious experiences.

I fully agree that no absolute value of functional information exists for an object. In all my OPs, starting with this one:

https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/

I have always said that any value of functional information is related to one explicitly defined function. So, it should be clear that:

a) Functional information is a property of an explicitly defined function, not of the object.

b) An object can implement a function only if it exhibits the functional information linked to the function.

c) The same object can implement different functions, and therefore we can compute different values of functional information for each function that the object can implement.

d) If an object can implement even one single function that is complex according to some appropriate threshold, we can infer design for that object.

A final note: Critics will never be forced to accept the design inference on the basis of anything, because the only reason why they don't accept it is ideological and dogmatic. I am firmly convinced that ID theory and the design inference for biological functional information is so self-evident, according to the facts we know, that only dogma and cognitive obstinacy can induce any reasonable intelligent person to deny those things. And the value of ID is not "mathematical". It is empirical. ID is empirical science. It is not a theorem.
Like all empirical theories, it uses mathematics to build the best explanation of known facts. But that's all. gpuccio
gpuccio: Thank you for your thoughtful comments, as always. I apologize for the length, but bear with me for a moment, and I hope I can better explain the issue I am focusing on.

Let me reiterate that there is much we agree on, and I appreciate your willingness to delve into the nuances – I feel that is often when we build our collective understanding the most. Let me also reiterate that I fully agree we can calculate complexity. That is not the issue. The question is whether function alone, independent of complexity (not functional complexity), is amenable to precise mathematical calculation. If we take the view that function alone can be calculated with mathematical precision, it seems that we end up with some very strange results.

First, it stretches the use of language to call our assessment of function a "calculation." This would be a strange calculation indeed – one that uses no formula, performs no mathematical operations, has no units of measure. I have never seen such a calculation. Furthermore, if we call every assessment or recognition we make as intelligent beings a "calculation", then every time I walk into a room and recognize a table or a chair, I am performing a mathematical calculation? Every time I walk down the street and recognize a car or a tree or a lamp post, I am performing a mathematical calculation? When I see the words "I love you" scrawled on a paper, my recognition was based on a mathematical calculation? It strains the concept of a mathematical calculation to the breaking point. It also unfortunately minimizes the other important aspects of life and of intelligence that allow me to see and understand and recognize and feel and appreciate the world – reducing them to a "calculation."

The fact is, our recognition of function or meaning or purpose – the kinds of things we are looking for when we infer design – is not a mathematical calculation. Yes, it is a logical and cognitive awareness and allows us to make a decision, but it is not a mathematical calculation. And it doesn't become a mathematical operation just because we arbitrarily assign a mathematical-sounding word ("binary") to the decision.

Second, there are often various ways to implement a given function, various ways to instantiate that function in the real world. As a result, the calculation of complexity will be different in each case. If we then mistakenly think our calculation of complexity constitutes a calculation of function, we end up with a strange result. It would indeed be a strange kind of math that produced different results for the same function.

Third, it is possible to get the same exact number of bits of complexity from non-function or from pure nonsense strings. This is precisely why some people get confused (like Elizabeth Liddle and company previously on these pages) and think they have discovered how complex information can arise through the generation of random strings. After all, if we claim that our complex functional string yields a result of X bits, and the same exact result of X bits can be produced from pure random nonsense, then we haven't adequately distinguished between the two cases – we have done ourselves a disservice and have confused the issue.

It is critical that we distinguish between function and complexity and not lump both together into the same bucket. And when we use a term like "functional complexity", we need to remember we are speaking about two distinct aspects, not a single aspect.

-----

Well, that is plenty for now.
Let me again say that I agree with your approach to intelligent design generally and practically. We both understand that to determine design we have to determine function and we have to determine complexity. Only when both are present can we infer design. My point is that this analysis consists of two parts: one, a mathematical calculation of complexity; the other, a logical or cognitive analysis of whether we have function, meaning, purpose, information.

I fear that some design proponents would like to make the concept of complex specified information into a watertight, unassailable, purely mathematical analysis. They seem to believe that if we can somehow make the design inference a purely mathematical construct, then opposition to intelligent design will fade away, critics will be forced to accept the design inference on the basis of pure, calculable math, and intelligent design will win the day. I don't think the design inference is amenable to such an approach.

Yes, complexity is essentially a mathematical calculation and it forms an important part of the design inference. But the recognition of function is not a sanitized, mathematical calculation. And I don't think that is a problem. It doesn't need to be. Most of what we do in life as we recognize design on a day-to-day basis is not based on mathematical formulas and calculations. That is OK. The recognition of function comes from our intelligence and our general ability to reason and our awareness and our experience and our understanding of how the world works. That is a perfectly reasonable and supportable approach. Then the complexity calculation can be added to help avoid false positives and to make the inference to design a rigorous determination. Eric Anderson
Mung, You see? As I told you @51, your questions @48 were very technical, so we'd better ask gpuccio. He handles those technical issues very nicely and explains them very clearly, although some politely dissenting interlocutors pretend not to understand what he explains. :) You may see his comments @52, 53, 55. gpuccio handles the BLAST tools so well that I don't hesitate to ask him questions associated with that established technology. Now I'm glad I brought up the morphogen gradients here in this discussion thread. For doing that I got in return a bag of goodies! Can't complain. :) BTW, please note that the whole issue of morphogen gradient formation and interpretation is not settled yet. There remain unanswered questions. Anyone interested in the subject may find many references to recent research papers on that topic in the discussion thread "Mystery at the heart of life". Y'all have a good weekend! Dionisio
gpuccio,
The whole acquisition of functional information in vertebrates for this process, as computed from only 5 of the 82 components, is at least 3403 bits. I could easily compute it for the whole system of 82 proteins, but I think this should already give some idea of what we are dealing with.
Wow! That's a very good illustration! I see your point clearly. I think I learned another interesting lesson. Thank you! Dionisio
Mung: I hope my post #55 is an answer to some of your questions. gpuccio
Dionisio: Just an example. The human proteins returned by a UNIPROT search for the GO function "regulation of BMP signaling pathway", which is a morphogen-based function, number 82. I give you here, as an example, my computation of the vertebrate-related functional bits (defined as bits conserved in vertebrates which are not present in pre-vertebrates) of the first 5 of them, in the order they are given by UNIPROT:

1) Fibrillin-1 (P35555): 1299 bits
2) Bone morphogenetic protein 4 (P12644): 200 bits
3) Tyrosine-protein kinase ABL1 (P00519): 681 bits
4) Caveolin-1 (Q03135): 128 bits
5) Neurogenic locus notch homolog 1 (P46531): 1095 bits

These are just 5 out of 82, for the regulation of a morphogen-based process. The result is:

a) 3 proteins out of five have complex functionally specified information beyond the 500-bit threshold (two of them much beyond!).

b) 4 proteins out of 5 have complex functionally specified information beyond the threshold of 150 bits.

c) The whole acquisition of functional information in vertebrates for this process, as computed from only 5 of the 82 components, is at least 3403 bits. I could easily compute it for the whole system of 82 proteins, but I think this should already give some idea of what we are dealing with. gpuccio
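The arithmetic in gpuccio's comment is easy to check; a minimal sketch, with the protein names and bit values copied from the list above:

```python
# Vertebrate-conserved functional bits for the 5 proteins listed above.
functional_bits = {
    "Fibrillin-1 (P35555)": 1299,
    "Bone morphogenetic protein 4 (P12644)": 200,
    "Tyrosine-protein kinase ABL1 (P00519)": 681,
    "Caveolin-1 (Q03135)": 128,
    "Neurogenic locus notch homolog 1 (P46531)": 1095,
}

print(sum(functional_bits.values()))                   # 3403 bits in total
print(sum(b > 500 for b in functional_bits.values()))  # 3 proteins beyond 500 bits
print(sum(b > 150 for b in functional_bits.values()))  # 4 proteins beyond 150 bits
```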
GPuccio: ... if we try to compute how many of those bits are really necessary for the protein function, then we have a measure of its functional information.
Just as a word cannot be understood without context (sentence, paragraph, ..., the language as a whole), one cannot understand 'function' without context. The number of really necessary bits can tell us that the implementation of a certain function in a certain (biological) context requires a lot of information.
Mung: Can it be quantified? Measured? What are its units of measurement?
We can measure the amount of information required to implement a function in some given context. Origenes
Dionisio: Of course, for connected systems, we should sum the bits of functional information of each protein in the system to get the functional information for the whole system. gpuccio
Dionisio: It is certainly functionally specified biological information: it is information which is necessary for some defined function, and it is present in biological objects. It is almost certainly complex at the threshold of 150 bits, and probably in many cases also at the threshold of 500 bits: but to be sure of that, we should quantify the functionally specified information for each protein. If you give me some names of proteins, I can tell you very quickly the bitscore of conserved information in vertebrates for those proteins, which is a way of measuring a well defined level of functional information. gpuccio
Mung @48: I don't know how to answer those questions correctly. We'd better ask gpuccio to comment on what is written @47. Dionisio
Mung: "Perhaps one day you can share the bioinformatics tools you use and show the rest of us how to do the same sort of research you are engaged in." Well, I have already explained in some detail what I do. The main tool I use is the online BLAST software at NCBI. Now I am using it locally, that is a little more complex. The informations about proteins are derived mainly from UNIPROT. gpuccio
Mung: Thank you for the kind words. You say: "So perhaps the answer is that biological information is actually a subset of functional information. But that would mean that all biological information had to be functional." No. The fact is that "biological information" is not a well defined term. I would say:

a) "Information", in a general sense, is a very wide concept. According to Dembski, if I am not wrong, any result conveys information, because it eliminates other possible results. So, if we have a random string of, say, 100 values, obtained by tossing a coin 100 times, that string has high information in it: 100 bits. It "informs" us that, of all the possible sequences (2^100), that specific one occurred. So, if someone asks, "What sequence occurred when you tossed that coin 100 times?", he can expect any of 2^100 answers. When we give him the exact sequence, we reduce the probabilities to 1, the correct result. So, even in a Shannon sense, that reduction of uncertainty is due to the fact that we are conveying 100 bits of information. So, in this general sense, "information" is only a measure of the reduction of uncertainty that can be conveyed, in bits.

b) The term "meaningful information", or "functional information", instead, refers to that part of information that is linked to a specification, in the form of meaning or function. That is a subset of the total configurations that a string of bits can have. As we have seen, a string of 100 bits has the potential to convey 100 bits of information, even if it is a random string. But, in terms of meaning or function, a random string has almost no information at all. Just try to define a function for a random string that is complex (of course, without using a posteriori the specific sequence of bits that you observe in the string). You just cannot find any complex function implemented by a random string. So, functional information is a subset of the generic concept of "information" (as described in a)).

c) Now, I don't know what you mean by the term "biological information". Information is biological if we find it in biological objects. Biological information is not different from any other form of information. So, we can have biological information in the generic sense defined in a), or biological functional information as described in b). For example, if we have a protein that is 100 amino acids long, the potential information in that protein is about 430 bits. However, those 430 bits of "biological information" (in the general sense) have no necessary connection to a function. But, if we try to compute how many of those bits are really necessary for the protein function, then we have a measure of its functional information. Let's say that, using a metric based on conservation over long evolutionary times, as I have proposed, we conclude that the functional information in that protein is about 150 bits. That is a measure of the "biological functional information" in that protein.

I believe that, when we speak of "biological information" in our discussions, we almost always mean "biological functional information". I usually specify that I am talking of functional information, because that is the only concept that is relevant for our discussion. So, it is obvious that "biological functional information" is functional: that's what it is by definition. "Biological information" in a general sense is very easy to compute: for a protein, it's enough to multiply the length in AAs by 4.3.
But that is simply the search space for that protein, and in itself it has no special interest. It is simply a very trivial concept. Functional information, instead, be it biological or not, is the basic concept for ID theory, because it is the foundation for design inference. gpuccio
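The "trivial" search-space computation gpuccio describes is one line; a minimal sketch, where 4.3 is his rounding of log2(20) ≈ 4.32:

```python
import math

def search_space_bits(length_aa):
    # Each amino-acid position can hold one of 20 residues,
    # so the search space is 20^length, i.e. length * log2(20) bits.
    return length_aa * math.log2(20)

print(search_space_bits(100))  # ~432 bits -- the ceiling, not the functional content
```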
Dionisio, Can it be quantified? Measured? What are its units of measurement? Mung
Morphogen gradients provide positional information that helps determine the fate of the receptor cells that interpret the concentrations. Is this another case of biological information? Is it complex? Is it functional? Is it specified? Dionisio
Eric, I know you and I have had our differences in the past over this whole "information" topic, but now I am beginning to wonder why, lol. Exploring further the concept of information and aboutness.

Say someone tosses a coin, and they tell us that the coin landed heads up. Then we might say we have received one bit of information. This assumes the Shannon measure in which each outcome is equally likely. We'll set aside how we know that or why we should assume it. :) So we might say that we received some quantity of information about the outcome of a coin toss, and that the quantity of information received was one bit. But this tells us nothing of why the coin was tossed. Which team got to choose whether to kick or receive, for example. There can be many reasons to toss a coin. Yet looking only at the physics of the coin toss might utterly overlook the message that is conveyed by the result. Any problems with that so far?

Now say we toss the coin 500 times. Someone might argue that in doing so we have generated 500 bits of information. Information about what? [A question Elizabeth Liddle never seemed to be able to answer.] Information about whether the coin is fair? Information about why the coin was being tossed in the first place? Information about the distribution of H and T? Information about how to build a bridge?

Frankly, I think what many people don't understand is that Shannon information isn't even defined without a given probability distribution. But the Shannon measure can be defined for any probability distribution. Well ... enough rambling for one post ... Mung
gpuccio my friend! I have the utmost respect for you. I hope you know that. If I had one complaint, it would be that you do not do enough OPs! Perhaps one day you can share the bioinformatics tools you use and show the rest of us how to do the same sort of research you are engaged in. Baby steps. :) The original question had to do with the concept of biological information and whether and in what meaningful sense biological information could be measured. It seems to me that your response is that we should consider some subset of biological information, which you refer to as functional information. It also seems to me that in doing so we abandon the concept of biological information altogether, as evidenced by the ease with which you move outside biology in order to explain the concept (in programming terms). So perhaps the answer is that biological information is actually a subset of functional information. But that would mean that all biological information had to be functional. Thoughts? Mung
Eric:
So, no. I don’t think we can throw out the entire concept of common descent.
I know, right? What concept would we replace it with? Even young earth creationism, with its disembarked animals, accepts the reality of common descent from an original pair or perhaps an initial population. I just don't know why this topic keeps coming up! :) Mung
Eric @ 41
Why not? A designer isn’t necessarily constrained to a single approach. Why not create something that is able to develop and change over time? As I noted earlier, whether organisms really have that capability is a separate question, and front loading is at this stage highly speculative, but it is perfectly consistent with design.
All good points, but I think if we make the case for staggered insertion of complex specified information, the last vehicle I would expect is directed point mutations :-) bill cole
bill cole @38:
This is a good point. What I think is the poster child for this theory is the claim that we share a common ancestor with apes and all mammals share a common ancestor. This does not appear to be how life is designed. It appears designed for animals to stay in their current form. The exceptions, like finch modifications, I agree with.
You are absolutely right that it appears organisms are designed to stay in their current form. Even the alleged exceptions confirm the rule. The real take-home lesson from the peppered moths, the finch beaks, the insects and insecticide, and all similar claims of evolution is this -- and you can take it to the bank: Organisms have the ability to temporarily oscillate around a norm, while ultimately avoiding fundamental change. This is the key observation from all such examples. Eric Anderson
bill cole @39: Why not? A designer isn't necessarily constrained to a single approach. Why not create something that is able to develop and change over time? As I noted earlier, whether organisms really have that capability is a separate question, and front loading is at this stage highly speculative, but it is perfectly consistent with design. Eric Anderson
gpuccio: I know you've talked in the past about the determination of function being a binary assessment. In general, I agree with you. But we shouldn't mistakenly think that means we are performing a calculation. All it means is that we are making a decision, yes or no. It is a decision point, but not a calculation. Think of it this way: What formula are you using to calculate the function? I'm pretty sure I know the formula you are using to calculate the complexity. But what formula are you using to calculate the function? (And to Mung's point, what is the unit of measure?) Eric Anderson
gpuccio
But, of course, it is possible to “evolve by design interventions”. Which is compatible with common descent. Guided mutations and intelligent selection of random mutations are possible procedures that can implement design by non random variation.
The issue here is we are thinking about this, IMO, only because of the Darwinian paradigm. Why would a designer mutate a genome to new function? If you can design a human, why not just download the software mods (DNA sequences) completely designed? bill cole
Eric
I think you have to be more specific. Not all ideas of common descent are broken. After all, presumably you accept the fact that you descended from your parents, grandparents, great-grandparents and so on. And presumably you accept the fact that you are slightly different from your great-great-grandfather.
This is a good point. What I think is the poster child for this theory is the claim that we share a common ancestor with apes and all mammals share a common ancestor. This does not appear to be how life is designed. It appears designed for animals to stay in their current form. The exceptions, like finch modifications, I agree with. bill cole
gpuccio @34:
Designed descent by designed modifications is a perfectly reasonable way to implement a new design. That’s what many programmers do when they implement new features in existing software.
That seems like a valid scientific inference from the available evidence and the known precedents. Dionisio
Mung: I see that I have not fully answered your last question. "In what units is function measured?" Function is not "measured". It is assessed as present or absent, according to some explicit definition and procedure. It is a binary variable, therefore it has no units. Your question is like asking in what units sex is measured. Functional information, instead, is a numerical value: the number of bits that are necessary to implement the defined function. Therefore, the natural unit of measure for functional information is the bit. gpuccio
bill cole: "This design flies in the face of any evolutionary concept of animal a and b evolving from animal c. " Yes, if you mean "evolving without any design intervention". But, of course, it is possible to "evolve by design interventions". Which is compatible with common descent. Guided mutations and intelligent selection of random mutations are possible procedures that can implement design by non random variation. The problem of common descent should be analyzed and, I hope, solved in the end according to empirical observations and impartial reasoning, but it remains a problem not strictly connected to the problem of design. gpuccio
Eric Anderson: "Bits can be calculated with math. Function is recognized by our cognition." I am not sure I understand the difference. Math and calculations are a form of cognition too. Everything we do has a cognitive foundation and aspect. Have you read my post #26? Attributing a binary value (male or female, functional or not functional) to what we observe is a form of assessment which is the foundation of all scientific reasoning, as much as measuring variables by continuous numbers. Both procedures are strictly scientific, both procedures are a form of cognition. The concept of measure is a very refined form of cognition. So, where is the problem? Measuring functional information is not different from measuring weight in people according to their sex. "I don’t think we can throw out the entire concept of common descent." And I agree with you. Designed descent by designed modifications is a perfectly reasonable way to implement a new design. That's what many programmers do when they implement new features in existing software. gpuccio
Mung: "But if someone were to ask you, how many steps does it take to create ‘x’ amount of function what would you be able to say, meaningfully? In what units is function measured?" I am not sure I understand the question. Let's say that a program which is n bits long can do something. Then you decide that you want to add some new functionality, so you define the new functionality and then you implement it changing the program code. Let's say that both the original code and the new addition are extremely efficient in terms of programming. Now, your new program is n+x bits long. So, the functional complexity linked to the added function is x bits. Where is the problem? gpuccio
bill cole @29: Thanks for your further thoughts.
The question in my mind is whether the theory is completely broken, and not just Neo-Darwinism but all ideas of common descent.
I think you have to be more specific. Not all ideas of common descent are broken. After all, presumably you accept the fact that you descended from your parents, grandparents, great-grandparents and so on. And presumably you accept the fact that you are slightly different from your great-great-grandfather. Now I know that isn't what you are talking about, which is why we need to be more specific.

Some design proponents would propose a type of "front loading" that would play out the various forms of life over time. There is scant evidence for such a scenario and it might turn out to be wrong, but at least it would be consistent with design principles -- and would also be a form of common descent. Some design proponents are also willing to accept limited common descent, in the sense that new species could flow from an original form. So, for example, we could end up with various species of finch that relate back to a common ancestor. There is some decent evidence for this kind of common descent. Whether we have new species or variations on a single species often turns on our definitions and particulars of categorization, but the idea that an ancestral population can eventually split into two or more slightly different populations over time has merit.

So, no. I don't think we can throw out the entire concept of common descent. The key, as I said, is the claim that all this can come about through a long series of accidents. That is the preposterous claim. But that is also the key claim. If that fails, then the overall theory fails. And it fails even if there are some other biological aspects that make sense (limited common descent, genetics, etc.) independent of evolutionary theory. Many aspects of biology don't depend on evolutionary theory, and don't need it to be true for them to be true. Eric Anderson
Origenes @25:
Gpuccio studies protein sequences which are functional and are preserved in many species. Both properties are strong indicators for not being ‘bits of nonsense’.
gpuccio @27:
Thank you! That’s exactly what I am trying to say.
----- How does gpuccio know that they are not bits of nonsense? Because he ran a calculation of the bits and came up with some number of bits? No! Because, as you say, he recognized that there is an indicator of information: function. Bits can be calculated with math. Function is recognized by our cognition. Eric Anderson
gpuccio:
Functional complexity is a simple and powerful concept, and it is objective and measurable.
But that wasn't really the question, or at least it wasn't my question. :) One can always create an operational definition, which is what I think you have done. But if someone were to ask you, how many steps does it take to create 'x' amount of function, what would you be able to say, meaningfully? In what units is function measured? Mung
Eric, gpuccio
That we can build a functionally-integrated, complex, highly-scalable, massively-parallel system architecture, that operates on the basis of a four-symbol digital code, with storage, retrieval and translation mechanisms, implementing concatenation and error correction algorithms — that all this and more can be built by introducing random errors into a database.
Yes, this is absurd. The question in my mind is whether the theory is completely broken, and not just Neo-Darwinism but all ideas of common descent. If we look at the design concept of multicellular life, it appears to be optimized to reduce variation. The theory is founded on the concept of variation. As gpuccio properly states, mutations do occur, but the evidence is mostly loss of function with an occasional serendipitous adaptive advantage. So the design concept of the genome is a sequence which creates diversity as long as the sequence is known or designed. Any random change is usually problematic. Without very accurate replication and error correction mechanisms I don't believe multicellular life is possible. This design flies in the face of any evolutionary concept of animal a and b evolving from animal c. Is Genesis the most accurate description we have? bill cole
bill cole: I am not, in particular, an expert on DNA repair. It is certainly a very complex and fascinating issue, and we can assume that it is extremely efficient in correcting random errors. It's enough to look at this Wikipedia page to understand what happens if that complex repair system cannot work well: https://en.wikipedia.org/wiki/DNA_repair-deficiency_disorder

However, we certainly know that random mutations do happen, even in the presence of some normally functioning DNA repair mechanism. Indeed, there are different ways to measure mutations. It is also reasonable to believe that random mutations are the cause of genetic diseases in humans, including those genetic diseases that cause defects in the DNA repair system. For example, look at the following Wikipedia page about xeroderma pigmentosum: https://en.wikipedia.org/wiki/Xeroderma_pigmentosum

In particular: "One of the most frequent defects in xeroderma pigmentosum is an autosomal recessive genetic defect in which nucleotide excision repair (NER) enzymes are mutated, leading to a reduction in or elimination of NER.[6] If left unchecked, damage caused by ultraviolet light can cause mutations in individual cell's DNA." And the table in that page lists the main known mutations that cause the disease.

All that should not be a surprise. We know well, from software programming, that even the best error correction system can never prevent all possible errors, especially in very complex systems. And biological systems are extremely complex. gpuccio
Origenes: "Gpuccio studies protein sequences which are functional and are preserved in many species. Both are strong indicators for not being ‘bits of nonsense’." Thank you! :) That's exactly what I am trying to say. I have been doing exactly that in these recent days, exploring whole genome comparisons using the metrics I have proposed in my posts here, and I get real results, real numbers that measure objective things. So, it's really difficult for me to read about the difficulty of measuring functional complexity, because I know from personal experience that it can be measured. It can be easy or very difficult, in different cases, but there is no absolute obstacle, in principle. Functional complexity is a simple and powerful concept, and it is objective and measurable. gpuccio
Eric Anderson @22: Well, I agree that we agree in essence. However, I too, in this discussion, am "focusing on an important nuance that is too often glossed over".

When you say that "The latter (specification, function) is not so amenable to calculation", I think you are not really considering that, for a binary variable, establishing if what we observe can be categorized in one or the other level of the variable is a quantitative assessment. You may not want to call it a "calculation", but that's exactly part of the quantitative analysis we daily perform in science. So, if I measure weight in a number of people, I have a numerical variable. But if I group my subjects according to their sex, I have a binary variable. According to your language, I am "calculating" weight (I am measuring it, and I can derive quantitative parameters, like mean, standard deviation, and so on), and I am "recognizing" if the subjects are male or female. Therefore, function is essentially not different from sex, or any other binary (or, in general, categorical) variable that we "recognize" according to some explicit definition that allows a categorization. There is no special elusive quality in function, any more than there is in sex, or any other categorization.

But there is more: the recognition of function is a procedure that requires two important steps:

a) An explicit definition of the function, and how to assess its presence or absence: that takes the whole issue out of subjectivity, and makes it wholly objective.

b) The application of the above procedure to each individual case, which is a quantitative assessment.

For example, if we define the function of a watch as the ability to measure time with at least a specific level of precision, we have to check how well each object in our sample measures time, to decide if we can categorize it as a clock, or not. That is a calculation, a calculation that gives us the final binary value for the variable: "is this a clock?". So, function is not some mystic property. It can be observed, measured, and expressed as a binary value. So, the bits necessary to implement a function are no mystical concept. They can be measured as the minimum amount of specific configuration that can objectively implement the defined function. gpuccio
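A minimal sketch of gpuccio's clock example, showing how an explicit definition turns a measurement into a binary categorization. The 1 second/day threshold is a hypothetical stand-in for his "specific level of precision":

```python
def is_clock(drift_seconds_per_day, threshold=1.0):
    # Binary assessment: does this object implement the explicitly defined
    # function "measures time to within `threshold` seconds per day"?
    return abs(drift_seconds_per_day) <= threshold

print(is_clock(0.3))   # True  -- categorized as a clock
print(is_clock(45.0))  # False -- fails the defined function
```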
Eric Anderson:
GPuccio: We can measure the amount of specific bits of information that are necessary to implement some well defined function.
Bits of what? What makes you think you are calculating bits of information? For all we know, you might be calculating bits of nonsense.
Gpuccio studies protein sequences which are functional and are preserved in many species. Both properties are strong indicators for not being 'bits of nonsense'. Origenes
Eric Anderson @23:
If we stop and think about what is actually being claimed, we start to realize what an absolute, utter, complete, preposterous absurdity it is.
"If we stop and think..." That's a fundamental requirement that unfortunately we can't guarantee. People are free to seriously think or not to think seriously. Once people start thinking seriously we see things like the website "a third way of evolution" created. And still they may not be thinking with totally open mind outside preconceived boxes. Dionisio
bill cole @21:
Could this whole idea of evolution be completely wrong?
What is the fundamental, foundational, basic claim of evolution? There are lots of claims, but at the most basic level the NeoDarwinian claim is this: That we can build a functionally-integrated, complex, highly-scalable, massively-parallel system architecture, that operates on the basis of a four-symbol digital code, with storage, retrieval and translation mechanisms, implementing concatenation and error correction algorithms -- that all this and more can be built by introducing random errors into a database. If we stop and think about what is actually being claimed, we start to realize what an absolute, utter, complete, preposterous absurdity it is. Eric Anderson
gpuccio @20:
We can measure the amount of specific bits of information that are necessary to implement some well defined function.
Bits of what? What makes you think you are calculating bits of information? For all we know, you might be calculating bits of nonsense. The number of bits (per Shannon) can be precisely the same in each case and the calculation tells us precisely nothing about whether we are dealing with information or not. Rather, it is our recognition of meaning, purpose, function that tells us we are dealing with information, not the fact that we have calculated some particular number of bits. That is the point. We can measure complexity. But we recognize information -- because it informs.
That is the measure of the functional complexity of that function, as defined.
Not quite. It is the measure of the complexity only. The functional part (or the information) must be determined separately and apart from the complexity. And it is not a bits calculation. It is a recognition of, as you say, function, or meaning or purpose.
The important point is: complex functions, those that require a high number of specific bits to be implemented (for example, more than 500 – 1000 bits), are only detected in designed objects. That kind of specific configuration requires the intervention of a conscious being, capable of understanding and purpose and will, to exist in a material object. That’s why, if we observe a material object that implements a complex function, we can infer design for its origin.
Absolutely agree. We are on the same page generally. I am, in this discussion, focusing on an important nuance that is too often glossed over. Namely, that complex specified information consists both of (a) complexity, and (b) specification (or function, if you prefer). The first can be calculated in various ways, including, per Shannon, in bits. The latter is not so amenable to calculation. It is a recognition -- an awareness of the fact that we are dealing with specification, meaning, function, purpose, something that informs. When we look at some functional system and say that it exceeds 500 bits, there are two parts to the analysis: a recognition of meaning/function, and a calculation of complexity. Both go together. And both are necessary in order to infer design. But we don't calculate 'CSI' in toto. We calculate the 'C'. We recognize the 'SI'. Eric Anderson
gpuccio A slight change of subject. What is your level of knowledge of DNA repair? I understand it is a process that occurs during the cell cycle but also when transcription takes place. It is accurate to about one error in 10^10 bases, but when a human is built there are 22 billion potential mutations across the animal. There are, however, only about one or so per every few cells. If there is another pass (DNA repair) when the gene is transcribed, then that one gets corrected during transcription. If this is true, what are the real raw materials of evolutionary theory, since mutations are continuously corrected with very rare exceptions? Could this whole idea of evolution be completely wrong? BTW, cancers often occur when this machinery breaks. bill cole
Eric Anderson: I insist. We can measure the amount of specific bits of information that are necessary to implement some well defined function. That is the measure of the functional complexity of that function, as defined. So, the functional complexity is referred to a specific function, not to the object itself. However, the basic idea is: if the object can perform the defined function, because its configuration exhibits all the necessary bits, then we can say that the object exhibits functional information, in that amount and for that function. There is no absolute information in an object. In a sense, the highest "information" can be found in random strings, but that "information" is not specially functional. A random string is not a computer program. It cannot implement highly specific functions, like ordering numbers, or solving mathematical equations, and so on. The important point is: complex functions, those that require a high number of specific bits to be implemented (for example, more than 500 - 1000 bits), are only detected in designed objects. That kind of specific configuration requires the intervention of a conscious being, capable of understanding and purpose and will, to exist in a material object. That's why, if we observe a material object that implements a complex function, we can infer design for its origin. gpuccio
groovamos @16: Some good thoughts. This jumped out at me though:
Shannon showed that information can be measured.
Shannon specifically said he was not measuring information -- certainly not in any substantive sense that might be relevant. So-called Shannon "information" would be much better described as the Shannon "measure". Measurement of what? Essentially, channel carrying capacity. If Shannon's result had been termed the "Shannon Measure", rather than "Shannon Information", we would all be a lot better off and no small amount of confusion would have been avoided. Eric Anderson
Mung:
I agree with other posters that information is always information about something. But when it comes to “biological information” what does that term mean, and can the concept of “biological information” be quantified such that it can be measured?
We quantify and measure complexity, not information per se.
Can “aboutness” be quantified and measured?
Not really. But see my last comment below.
Would people agree or disagree that using the concept of Shannon information isn't helpful, because Shannon information isn't about aboutness?
So-called "Shannon information" isn't helpful to the task, because it isn't measuring information -- as Shannon himself explained.
If some proposed measure of “biological information” isn’t measuring aboutness, what is it measuring?
Good question. I personally don't think information lends itself very well to a precise mathematical measurement -- as though information were some kind of physical object that could be measured, weighed, and quantified precisely. We could assign values to certain pieces of information and then find out whether an artifact contains those pieces of information. Then we could come up with a measurement value and declare that the artifact has such-and-such "quantity" of information. However, such an approach would simply be an exercise in categorizing, assigning values, and then comparing artifacts against our assigned values. I suppose that could constitute "measuring" and "quantifying" information in some loose sense. Might even be useful in some cases. But I suspect we would have so many exceptions and corner cases that any rules would be swallowed up by the exceptions. Eric Anderson
groovamos: "I've said it on here before, things as extant don't have information without an involved mind." But machines can do things, and they can do those things because of the information that a mind implemented in them. We, as conscious observers, can witness what a machine can do, and define that as a function. The simple fact remains that a function, once defined, needs some amount of information to be implemented. That is the functional information for that function. gpuccio
Shannon showed that information can be measured. I would like to know from Mr. Adami, if information from the "environment" resides in the genes, as he apparently thinks, how much "information" resides in the environment? How much information from my genes matches up somehow with the environment? And BTW, Mr. Adami, where in the genes is the information that makes your face appear the way it does, and your voice sound the way it does? Where exactly? Did that information start out in the "environment", and if so, where is it in the "environment"?

I've said it on here before: things as extant don't have information without an involved mind. A rock doesn't have information; only the descriptions, the measurements with tolerances, etc., have information. Since there can be arbitrarily many descriptions of a rock, including an almost infinite number of measurements with associated tolerances which can themselves be almost infinitely precise, a rock can be said to represent infinite information, which is to say that for humans it really has zero information; but our description of the rock can encompass an arbitrarily large amount of information. Just ONE measurement of the rock can contain information approaching infinity if it is estimated to a precision approaching infinity. So this brings us back to a requirement for a measure of information to make any sense: a mind or minds must somehow be involved.

Shannon avoided the philosophical implications of his work, and I think this has caused some confusion and misunderstanding in the ID community. For example, Tom Bethell in his wonderful "Darwin's House of Cards" quotes Stephen Meyer: "The problem is Shannon's theory 'cannot distinguish between functional or message bearing signals from random or useless ones' as Stephen Meyer said...." Big problem for Bethell and Meyer in saying this. It takes a mind to decide whether a "random" signal is useless, and a mind might not have a clue as to what "random" means, or whether measurements of randomness can be undertaken (actually, yes), which is why I put the word in quotes elsewhere.

"Random" signals are generated by processes, and such processes, or classes of them, may be useful to a particular mind and useless to everyone else. A weather station encoding wind direction and speed over a comm link is sending randomly generated numbers, useless to almost everyone except a few people. A person trying to type out random letters may think they are random, and to him they might be useless (not really - there has to be a motivation if the person is sane). But his method in doing so will have bias(es). We have an example of that on this thread: https://uncommondesc.wpengine.com/design-inference/intelligent-design-basics-information-part-iii-shannon/

On that link we have the OP actually posting a "random" string of all caps @39. Having a mind, I can estimate a probability that the caps lock was on during the typing; the more letters in the string, the higher the probability. At 100% probability (or at 0%) the surprisal is zero bits; the one-bit caps-lock question only carries a full bit at even odds. But this is why Shannon bowed out of the philosophical aspects of meaning as it applies to the measurement of information. He did show that the surprisal value of a message or character transmitted/stored is a measure of information content. But how much information is in a message like "The sun shines on the earth", based on surprisal? You would have to come up with some arcane scenarios to show that a class of minds would find information value there.
The bottom line is this: the ID community thinkers and Mr. Adami should be more careful in their discussion of Shannon information. groovamos
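For readers who want the arithmetic behind the caps-lock example, a minimal sketch of surprisal: an outcome with probability p carries -log2(p) bits, so certainty carries zero bits and a 50/50 outcome exactly one:

import math

def surprisal_bits(p):
    # Shannon surprisal of a single outcome with probability p.
    return -math.log2(p)

for p in (1.0, 0.5, 0.25, 0.01):
    print(f"p = {p:>4}: {surprisal_bits(p):.2f} bits")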
Mung: "But are you really quantifying the amount of information if you’re not quantifying the amount of aboutness?". Yes. I am quantifying the amount of information necessary to implement the function as defined. There are quantitative variables and qualitative (categorical) variables. there is nothing special in that. "Is there a way to measure “biological information” and the situations in which that measure can properly be used." Of course there is. I have done exactly that many times. "Is there more than one way to measure “biological information”?" Of course. You can change the measure unit, the methodology, and so on. Again, there is nothing special in that. "For example, can the amount of “biological information” inherited by a child from each parent be quantified?" Of course. In general, for most genes, children inherit a similar amount of information (one allele from each parent). Of course, the functionality of each allele can be different. For example, one allele can be non functional. Like in genetic diseases. Or it can be imprinted, and therefore not expressed in the child. "How about the amount of “biological information” gained or lost during the course of evolution in a specific lineage." That's exactly what I am working at. You could have a look at some of my previous posts: https://uncommondesc.wpengine.com/intelligent-design/homologies-differences-and-information-jumps/ https://uncommondesc.wpengine.com/intelligent-design/information-jumps-again-some-more-facts-and-thoughts-about-prickle-1-and-taxonomically-restricted-genes/ https://uncommondesc.wpengine.com/intelligent-design/the-highly-engineered-transition-to-vertebrates-an-example-of-functional-information-analysis/ gpuccio
gpuccio writes:
There is no need to quantify it [aboutness], because it can be treated as a categorical, binary variable. What we quantify is the information linked to some functional specification.
But are you really quantifying the amount of information if you're not quantifying the amount of aboutness? Is there a way to measure "biological information", and in what situations can that measure properly be used? Is there more than one way to measure "biological information"? For example, can the amount of "biological information" inherited by a child from each parent be quantified? How about the amount of "biological information" gained or lost during the course of evolution in a specific lineage? Mung
I agree with other posters that information is always information about something. But when it comes to "biological information", what does that term mean, and can the concept of "biological information" be quantified such that it can be measured? Can "aboutness" be quantified and measured? Would people agree or disagree that using the concept of Shannon information isn't helpful because Shannon information isn't about aboutness? :) If some proposed measure of "biological information" isn't measuring aboutness, what is it measuring? Mung
gpuccio @8:
Strange that such simple and important concepts are so obstinately ignored.
You pressed the right button as usual: it's about "functionally specified information". (Perhaps "cooked" with a "complex complexity" flavor?) Dionisio
BA77 @1, @2 & @6: Very interesting references. Thank you. Dionisio
News, Thank you for opening this discussion thread. Dionisio
Some say that it’s not clear that the term is useful. A friend writes, “Information is always ‘about’ something. How does one quantify ‘aboutness’?” He considers the term too vague to be helpful.
Information is obviously a useful term. How can anyone in his right mind doubt this? Well, we can be pretty sure that a particular metaphysical bias is in play when someone does. Materialism cannot ground information, because it originates from a level over and beyond fermions and bosons.

Whenever we speak of information we are referring to the alignment of low-level functions in support of a top-level function. Every post on this forum uses letters as the basic building blocks at the bottom level. These letters are arranged in accord with the influencing conventions of spelling to form words one level up. To reach the next higher level, words are chosen for the purpose of expressing a thought and arranged in accord with the influencing grammatical conventions in order for that thought to be intelligibly conveyed.

The point is: the parts are subservient to the whole, and it is exactly this subservience of the parts that materialism cannot explain. Materialistic 'explanations' always boil down to 'it must be blind luck'. The materialist would sleep much better if only the term 'information' were indeed 'too vague to be helpful'. Origenes
“Information is always ‘about’ something." Yes. That's why ID has developed the concept of specified information, and in particular of functionally specified information. The "about" is of course the specification, and in biology the specification is a function. "How does one quantify 'aboutness'?" There is no need to quantify it, because it can be treated as a categorical, binary variable. What we quantify is the information linked to some functional specification. Strange that such simple and important concepts are so obstinately ignored. gpuccio
A friend writes, “Information is always ‘about’ something. How does one quantify ‘aboutness’?” Yes! Information is about something. And DNA is information about something. It is the technical description of how to make a living organism. Each element in the DNA plays its own role in that description. Each element in the DNA has its own unique "aboutness". bFast
Atheist's logic 101 - cartoon "If I can only create life here in the lab (or in my computer), it will prove that no intelligence was necessary to create life in the beginning" http://dl0.creation.com/articles/p073/c07370/Scientist-synthesize-life-machine.jpg Panda’s Thumb Richard Hoppe forgot about Humpty Zombie - April 15, 2014 Excerpt: I discovered if you crank up Avida’s cosmic radiation parameter to maximum and have the Avida genomes utterly scrambled, the Avidian organisms still kept reproducing. If I recall correctly, they died if the radiation was moderate, but just crank it to the max and the creatures come back to life! This would be like putting dogs in a microwave oven for 3 days, running it at full blast, and then demanding they reproduce. And guess what, the little Avida critters reproduced. This little discovery in Avida 1.6 was unfortunately not reported in Nature. Why? It was a far more stupendous discovery! Do you think it’s too late for Richard Hoppe and I to co-author a submission? Hoppe eventually capitulated that there was indeed this feature of Avida. To his credit he sent a letter to Dr. Adami to inform him of the discovery. Dr. Adami sent Evan Dorn to the Access Research Network forum, and Evan confirmed the feature by posting a reply there. http://www.creationevolutionuniversity.com/idcs/?p=90
When Avida is run with realistic biological parameters instead of the highly unrealistic default settings it currently uses, it actually supports Genetic Entropy rather than Darwinian evolution:
Biological Information - Mendel's Accountant and Avida 1-31-2015 by Paul Giem https://www.youtube.com/watch?v=cGd0pznOh0A&list=PLHDSWJBW3DNUUhiC9VwPnhl-ymuObyTWJ&index=14 Computational Evolution Experiments Reveal a Net Loss of Genetic Information Despite Selection Chase W. Nelson and John C. Sanford http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0014
further notes:
Podcast: Winston Ewert on computer simulation of evolution (AVIDA) that sneaks in information https://uncommondesc.wpengine.com/intelligent-design/podcast-winston-ewert-on-computer-simulation-of-evolution-avida-that-sneaks-in-information/ LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13 Excerpt: (Computer) Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/ Main Publications - Evolutionary Informatics http://evoinfo.org/publications/
bornagain77
AD, thanks, will watch it later. bornagain77
From the article... "I’ve been under attack from creationists from the moment I created life when designing [the artificial life simulator] Avida. I was on their primary target list right away. I’m used to these kinds of fights. They’ve made kind of timid attacks because they weren’t really understanding what I’m saying, which is normal because I don’t think they’ve ever understood the concept of information." Wow. Just... Wow. I got nuthin'. ronvanwegen
BA, have you seen this? Curious as to your comments, if so. https://youtu.be/eCtDqCuhWNM AnimatedDust
The preceding paper was experimentally verified (as was referenced in the “What is Information?” video):
New Scientist astounds: Information is physical – May 13, 2016 Excerpt: Recently came the most startling demonstration yet: a tiny machine powered purely by information, which chilled metal through the power of its knowledge. This seemingly magical device could put us on the road to new, more efficient nanoscale machines, a better understanding of the workings of life, and a more complete picture of perhaps our most fundamental theory of the physical world. https://uncommondesc.wpengine.com/news/new-scientist-astounds-information-is-physical/
Moreover, besides classical digital information in DNA, quantum information (of which classical information is shown to be a subset) is now also found in DNA (and also in proteins):
“What happens is this classical information (of DNA) is embedded, sandwiched, into the quantum information (of DNA). And most likely this classical information is never accessed because it is inside all the quantum information. You can only access the quantum information or the electron clouds and the protons. So mathematically you can describe that as a quantum/classical state.” Elisabeth Rieper – Classical and Quantum Information in DNA – video (Longitudinal Quantum Information resides along the entire length of DNA discussed at the 19:30 minute mark; at 24:00 minute mark Dr Rieper remarks that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it) https://youtu.be/2nqHOnVTxJE?t=1176 Classical and Quantum Information Channels in Protein Chain – Dj. Koruga, A. Tomić, Z. Ratkaj, L. Matija – 2006 Abstract: Investigation of the properties of peptide plane in protein chain from both classical and quantum approach is presented. We calculated interatomic force constants for peptide plane and hydrogen bonds between peptide planes in protein chain. On the basis of force constants, displacements of each atom in peptide plane, and time of action we found that the value of the peptide plane action is close to the Planck constant. This indicates that peptide plane from the energy viewpoint possesses synergetic classical/quantum properties. Consideration of peptide planes in protein chain from information viewpoint also shows that protein chain possesses classical and quantum properties. So, it appears that protein chain behaves as a triple dual system: (1) structural – amino acids and peptide planes, (2) energy – classical and quantum state, and (3) information – classical and quantum coding. Based on experimental facts of protein chain, we proposed from the structure-energy-information viewpoint its synergetic code system. http://www.scientific.net/MSF.518.491
And then the coup de grace for demonstrating that immaterial information is its own distinct physical entity, separate from matter and energy, is Quantum Teleportation:
Quantum Teleportation Enters the Real World – September 19, 2016 Excerpt: Two separate teams of scientists have taken quantum teleportation from the lab into the real world. Researchers working in Calgary, Canada and Hefei, China, used existing fiber optics networks to transmit small units of information across cities via quantum entanglement — Einstein’s “spooky action at a distance.”,,, This isn’t teleportation in the “Star Trek” sense — the photons aren’t disappearing from one place and appearing in another. Instead, it’s the information that’s being teleported through quantum entanglement.,,, ,,, it is only the information that gets teleported from one place to another. http://blogs.discovermagazine.com/d-brief/2016/09/19/quantum-teleportation-enters-real-world/#.V-HqWNEoDtR
Moreover, quantum information/entanglement simply refuses to be reduced to any reasonable explanation within space-time and matter-energy:
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php Quantum correlations do not imply instant causation – August 12, 2016 Excerpt: A research team led by a Heriot-Watt scientist has shown that the universe is even weirder than had previously been thought. In 2015 the universe was officially proven to be weird. After many decades of research, a series of experiments showed that distant, entangled objects can seemingly interact with each other through what Albert Einstein famously dismissed as “Spooky action at a distance”. A new experiment by an international team led by Heriot-Watt’s Dr Alessandro Fedrizzi has now found that the universe is even weirder than that: entangled objects do not cause each other to behave the way they do. http://phys.org/news/2016-08-quantum-imply-instant-causation.html
Moreover, in quantum mechanics it is information that is primarily conserved, not matter-energy:
Quantum no-hiding theorem experimentally confirmed for first time – 2011 Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html Quantum no-deleting theorem Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. http://en.wikipedia.org/wiki/Quantum_no-deleting_theorem#Consequence
Besides providing direct empirical falsification of neo-Darwinian claims that information is ‘emergent’ from a material basis, the implication of finding ‘non-local’, beyond space and time, and ‘conserved’ quantum information in molecular biology on such a massive scale, in every DNA and protein molecule, is fairly and pleasantly obvious. That pleasant implication, of course, is that we now have strong physical evidence suggesting that we do indeed have an eternal soul that lives beyond the death of our material bodies.
“Let’s say the heart stops beating. The blood stops flowing. The microtubules lose their quantum state. But the quantum information, which is in the microtubules, isn’t destroyed. It can’t be destroyed. It just distributes and dissipates to the universe at large. If a patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near death experience. I saw a white light. I saw a tunnel. I saw my dead relatives.,,” Now if they’re not revived and the patient dies, then it’s possible that this quantum information can exist outside the body. Perhaps indefinitely as a soul.” – Stuart Hameroff – Quantum Entangled Consciousness – Life After Death – video (5:00 minute mark) https://youtu.be/jjpEc98o_Oo?t=300
Verse:
Mark 8:37 “Is anything worth more than your soul?”
bornagain77
There are two erroneous beliefs in the portion of the article that you quoted, News. First:
We of course know that all life on Earth has enormous amounts of information that comes from evolution, which allows information to grow slowly.
The belief that evolution can slowly grow information is an unfounded belief with no empirical support. 'It is almost universally acknowledged that beneficial mutations are rare compared to deleterious mutations':
Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez, Robert J. Marks II, Jorge Fernandez and John C. Sanford - May 2013 Excerpt: It is almost universally acknowledged that beneficial mutations are rare compared to deleterious mutations [1–10].,, It appears that beneficial mutations may be too rare to actually allow the accurate measurement of how rare they are [11]. 1. Kibota T, Lynch M (1996) Estimate of the genomic mutation rate deleterious to overall fitness in E. coli. Nature 381:694–696. 2. Charlesworth B, Charlesworth D (1998) Some evolutionary consequences of deleterious mutations. Genetica 103: 3–19. 3. Elena S, et al (1998) Distribution of fitness effects caused by random insertion mutations in Escherichia coli. Genetica 102/103: 349–358. 4. Gerrish P, Lenski R N (1998) The fate of competing beneficial mutations in an asexual population. Genetica 102/103:127–144. 5. Crow J (2000) The origins, patterns, and implications of human spontaneous mutation. Nature Reviews 1:40–47. 6. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. 7. Imhof M, Schlotterer C (2001) Fitness effects of advantageous mutations in evolving Escherichia coli populations. Proc Natl Acad Sci USA 98:1113–1117. 8. Orr H (2003) The distribution of fitness effects among beneficial mutations. Genetics 163: 1519–1526. 9. Keightley P, Lynch M (2003) Toward a realistic model of mutations affecting fitness. Evolution 57:683–685. 10. Barrett R, et al (2006) The distribution of beneficial mutation effects under strong selection. Genetics 174:2071–2079. 11. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006 A Billion Genes and Not One Beneficial Mutation – August 26, 2016 Excerpt: Nature just published results of the Exome Aggregation Consortium (ExAC), the largest survey of human genes to date. (An "exome" is the portion of the genome that codes for proteins.) The exomes from 60,706 individuals from a variety of ethnic groups have been collected and analyzed. If we multiply 60,000 people by the 20,000 genes in the human genome (the lowest estimate), we get a minimum of 1.2 billion genes that have been examined by ExAC for variants.,,, ,,, we search(ed) the paper in vain for any mention of beneficial mutations. There's plenty of talk about disease. The authors only mention "neutral" variants twice. But there are no mentions of beneficial mutations. You can't find one instance of any of these words: benefit, beneficial, fitness, advantage (in terms of mutation), improvement, innovation, invention, or positive selection. They mention all kinds of harmful effects from most variants: missense and nonsense variants, frameshift mutations, proteins that get truncated on translation, and a multitude of insertions and deletions. Quite a few are known to cause diseases. There are probably many more mutations that never survive to birth. As for natural selection, the authors do speak of "negative selection" and "purifying selection" weeding out the harmful mutations, but nowhere do they mention anything worthwhile that positive selection appears to be preserving. http://www.evolutionnews.org/2016/08/a_billion_genes103091.html
And in Michael Behe's survey of four decades of laboratory evolution experiments it was found that 'the great majority of helpful mutations degrade the genome to a greater or lesser extent.'
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
Even the gain-of-function mutations listed in Behe's paper were the result of the experimenters first deleting a gene, and of the programming of the cell then modifying the genome to compensate for that deleted gene. In other words, no one has ever demonstrated the origin of a completely new gene by Darwinian processes. They have only demonstrated the replacement of a preexisting gene that was deleted. (And even then, the replacement gene was not generated by truly 'random', i.e. Darwinian, processes.) Thus, whilst the belief that evolution will 'slowly grow information' seems to be widespread, it is a completely unfounded belief that has never been empirically demonstrated.

The second erroneous belief in the article is the belief that information is 'emergent' from a material basis. In other words, the author assumes reductive materialism to be true. Yet, as the video you highlighted in the OP points out, News, information is, contrary to Darwinian assumptions, its own distinct physical entity, separate from matter and energy; i.e., information is NOT 'emergent' from a material basis, as is assumed in Darwinian evolution.

In the following paper, Dr. Andy C. McIntosh, who is professor of thermodynamics and combustion theory at the University of Leeds, holds that it is non-material information that constrains biological life to be so far out of thermodynamic equilibrium. Moreover, Dr. McIntosh holds that regarding information as independent of energy and matter 'resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions'.
Information and Thermodynamics in Living Systems – Andy C. McIntosh – 2013 Excerpt: ,,, information is in fact non-material and that the coded information systems (such as, but not restricted to the coding of DNA in all living systems) is not defined at all by the biochemistry or physics of the molecules used to store the data. Rather than matter and energy defining the information sitting on the polymers of life, this approach posits that the reverse is in fact the case. Information has its definition outside the matter and energy on which it sits, and furthermore constrains it to operate in a highly non-equilibrium thermodynamic environment. This proposal resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions, which despite the efforts from alternative paradigms has not given a satisfactory explanation of the way information in systems operates.,,, http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0008
And in support of Dr. McIntosh’s contention that it must be non-material information which constrains biological life to be so far out of thermodynamic equilibrium, and as was briefly mentioned in the “What is information?” video, information has now been experimentally shown to have a ‘thermodynamic content’:
Maxwell’s demon demonstration (knowledge of a particle’s position) turns information into energy – November 2010 Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information. http://www.physorg.com/news/2010-11-maxwell-demon-energy.html Demonic device converts information to energy – 2010 Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform Information: From Maxwell’s demon to Landauer’s eraser – Lutz and Ciliberto – Oct. 25, 2015 – Physics Today Excerpt: The above examples of gedanken-turned-real experiments provide a firm empirical foundation for the physics of information and tangible evidence of the intimate connection between information and energy. They have been followed by additional experiments and simulations along similar lines.12 (See, for example, Physics Today, August 2014, page 60.) Collectively, that body of experimental work further demonstrates the equivalence of information and thermodynamic entropies at thermal equilibrium.,,, (2008) Sagawa and Ueda’s (theoretical) result extends the second law to explicitly incorporate information; it shows that information, entropy, and energy should be treated on equal footings. http://www.johnboccio.com/research/quantum/notes/Information.pdf J. Parrondo, J. Horowitz, and T. Sagawa. Thermodynamics of information. Nature Physics, 11:131-139, 2015.
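As a rough sense of scale for this ‘thermodynamic content’, Landauer's bound puts the minimum cost of erasing one bit at k_B * T * ln 2 joules. A minimal sketch of the arithmetic, assuming room temperature:

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

energy_per_bit = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {energy_per_bit:.3e} J per bit")  # ~2.87e-21 J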
Of related note to immaterial information having a ‘thermodynamic content’: classical digital information was found to be a subset of ‘non-local’ (i.e., beyond space and time) quantum entanglement/information by the following method, which removed heat from a computer through the deletion of data:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm Scientists show how to erase information without using energy – January 2011 Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it (information) is physical has a broader context than that.”, Vaccaro explained. http://www.physorg.com/news/2011-01-scientists-erase-energy.html
bornagain77
