
Check out Science Uprising 3. In contemporary culture, we are asked to believe – in an impressive break with observed reality – that the code of life wrote itself:
… mainstream studies are funded, some perhaps with tax money, on why so many people don’t “believe in” evolution (as the creation story of materialism). The fact that their doubt is treated as a puzzling public problem should apprise any thoughtful person as to the level of credulity contemporary culture demands in this matter.
So we are left with a dilemma: The film argues that there is a mind underlying the universe. If there is no such mind, there must at least be something that can do everything that a cosmic mind could do to bring the universe and life into existence. And that entity cannot, logically, simply be one of the many features of the universe.
Yet, surprisingly, one doesn’t hear much about mainstream studies that investigate why anyone would believe an account of the history of life that is so obviously untrue to reason and evidence. – Denyse O’Leary, “There is a glitch in the description of DNA as software” at Mind Matters News
Maybe a little uprising wouldn’t hurt.
Here at UD News, we didn’t realize that anyone else had a sense of the ridiculous. Maybe the kids do?
See also: Episode One: Reality: Real vs. material
and
Episode Two: No, You’re Not a Robot Made of Meat
Notes on previous episodes
Seven minutes to goosebumps (Robert J. Marks) A new short film series takes on materialism in science, including that of AI’s pop prophets
Science Uprising: Stop ignoring evidence for the existence of the human mind Materialism enables irrational ideas about ourselves to compete with rational ones on an equal basis. It won’t work (Denyse O’Leary)
and
Does vivid imagination help “explain” consciousness? A popular science magazine struggles to make the case. (Denyse O’Leary)
Further reading on DNA as a code: Could DNA be hacked, like software? It’s already been done. As a language, DNA can carry malicious messages
and
How a computer programmer looks at DNA And finds it to be “amazing” code
Follow UD News at Twitter!
DNA isn’t the software. The immaterial information that runs the genetic code, is.
Making what Bill Gates said about DNA such a big deal is misleading at best. Mr Gates may know quite a bit about software but has no clue about DNA.
When Professor Denis Noble, who may know a little more about cellular biology than Mr Gates, was asked at a physiology meeting to explain what a gene is, he simply said that nobody knows. DNA without the rest of the sophisticated cellular machinery is as valuable as a zero written on the left side of an integer (e.g., $099 = $99). DNA seems like a very complex repository of information that the cellular machinery can access and process.
Some folks lack the humility to admit our deep ignorance; they simply explain what they don’t understand with reductionistic poetry, and if things get tough, oversimplified illustrations along with some elegant handwaving may help persuade the gullible crowd to believe they know something that we don’t.
It’s time to stop playing superfluous games and to start calling things by their names.
The IP metaphor is sticky and easy to use. Computers and software have only been around for about one lifetime, yet it is the only way we describe our own physiology. It’s similar to comparing your hands to a hammer: both can be used to pound sharp objects into walls, but one does a much better job and is a lot less painful. Yet we don’t draw these parallels between our hands and the hammer.
Computers and the software they use are tools.
They are not the same as our brain and our DNA.
They are even fundamentally different on a molecular level. There are many things in this world that are capable of performing similar or the same jobs as other things in this world, but that most certainly does not make them the same.
There is a school of thought that says that DNA is no more a code than tree rings are a code.
I think it looks more like a formatted database if you want to use computer comparisons, but even that has shortcomings.
The day human engineers and scientists come up with anything even remotely close to the complex functionality and the functional complexity of biological systems, we will deserve to uncork all the champagne bottles in the world and brag about how smart we are. Until then, let’s be humble. Deal? 🙂
Sequential information on DNA is one thing; the quantum ‘positional information’ of an entire organism takes the argument against Darwinian materialism to an entirely new level.
Belfast:
Tree rings are data recorders. There isn’t any code. DNA encodes for amino acids in the grand scheme of the genetic code. One codon represents an amino acid or STOP.
ET,
Couldn’t one argue that in a cross-section of a tree, the rings “represent” seasons (e.g., light rings representing the growing season and the dark rings representing the dormant season)?
daves- Tree rings don’t represent anything unless you have studied them. And only then can you get any information. With the genetic code we only discovered the existing code. It keeps chugging along regardless of what we know.
ET,
Isn’t that also true of, say, a message expressed in Morse Code? I could compose a message and transmit it with a shortwave radio. Unless someone was monitoring that frequency at the moment, it would simply vanish into the aether. It’s still an encoded message.
Suppose I’m hiking in the woods and come upon the stump of a tree that someone has recently cut down. I notice a curious sequence of rings, where “-” means a light ring and “.” means a dark ring.
If we apply our ID techniques, clearly we may be able to conclude that’s a coded message, correct?
Tree rings do not provide instructions for variable operations and functions to follow. They are just the record of past events.
If there is actually a “school of thought” that proposes that kind of analogy, it’s not much of a school.
Dave
We apply ID techniques to determine if there is an information (messaging) circuit.
Sender. Transmission. Receiver. Translation. Response.
That’s an information circuit. Do we see it in tree rings?
SA,
Not that I know. I’m suggesting a hypothetical, where we discover a sequence of tree rings which turns out to form a recognizable message in Morse Code (for example, perhaps the first sentence of the Gettysburg Address). If such a tree trunk was found, clearly we would identify it as a coded message, even in the absence of this other machinery you refer to, correct?
Pee tests. Is pee a code? Trained medical staff can get information from pee. And what about ColoGuard?
The genetic code is a series of symbols that instruct the ribosome how to construct a protein. Is there any reason to believe the ribosome’s behavior is not a finite automaton? If not, then it is a computational code. I do not understand why people say the genetic code is not a computational code.
ET
Well, I guess that information must be encoded somehow, right?
Your palms. There is a code on your palms. And for a small fee the people of the silk bandanas will decode that message for you. For another small fee they will decode the message of the cards- your message. They also have a glass ball…
The people of the silk bandanas can be found at most seaside boardwalks, traveling carnivals and may even be lurking locally.
EricMH- The ribosome is a genetic compiler. The source code is the string of nucleotides and the object code is the functional protein that is produced. And the ribosome recognizes miscoding errors: The Ribosome: Perfectionist Protein-maker Trashes Errors
Just more positive evidence for ID
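To make the point concrete, here is a minimal sketch (Python, using only a handful of entries from the standard codon table, which has 64 in full) of translation as the commenters describe it: a symbolic lookup from codons to amino acids, read three bases at a time until a stop codon.

```python
# A small subset of the standard DNA codon table (64 entries in full).
# The mapping is symbolic: nothing about the chemistry of "ATG" is methionine;
# the correspondence is established by the translation machinery.
CODON_TABLE = {
    "ATG": "M",                          # methionine (also the start signal)
    "TTT": "F", "TTC": "F",              # phenylalanine
    "GGT": "G", "GGC": "G",              # glycine
    "AAA": "K", "AAG": "K",              # lysine
    "TGG": "W",                          # tryptophan
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna: str) -> str:
    """Read codons three bases at a time until a stop (or unknown) codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None or aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGTAAATGGTAA"))  # -> "MFGKW"
```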
daves:
Look up the word “encode” and try to find a definition that fits “living one’s life”, because that is how the information gets in our waste.
DaveS at #13:
If the encoded message is complex enough, that’s correct. If the tree rings encoded at least 500 bits of functional information, that would be an object for which we could infer design.
Are you looking for that in stumps? Good luck, really!
The connection between tree rings and seasons is, of course, a necessity connection, and not an encoded message. That should be obvious to anybody.
An example that I have made a few times here is the following:
Some human mission arrives at a distant planet, about which we know nothing. There is no sign of life or intelligence on it, but the astronauts observe a mountain wall where a long series of marks is present. Each mark can very well be interpreted as a result of weather events. However, the marks can be easily classified into two different types, and so the sequence can be read as a binary string.
One of the astronauts, who is a mathematician, after some observation finds that the binary sequence in the wall, when read by an appropriate code, corresponds exactly to the first million decimal digits of pi.
The question is very simple: can we infer design?
The answer, of course, is yes.
So, good luck with your stumps.
@GP, DaveS’ point is good, I think, and merits further analysis to explain the flaw.
There is a natural process that creates a compressible code, i.e. 01 pattern of rings that represent the seasons, regular seasons = compressible encoding. If we use the uniform chance hypothesis, then the tree rings register a large amount of CSI. If we don’t use the uniform chance hypothesis, what is the chance hypothesis? The change in seasons. If we use the change in seasons as the chance hypothesis, there is zero CSI.
Same issue with the genetic code. Say we find a compressible code in the gene. Uniform hypothesis of course shows high CSI, so uniform hypothesis is false. But, that does not rule out a natural process capable of creating compressible regularity, e.g. seasonal regularity in the tree ring case.
This is why the detachable specification is so important, as your example with pi illustrates. The digits of pi are independent from the seasons, yet the seasons are the chance hypothesis for tree rings. So, if using the seasons as a chance hypothesis and pi as a specification results in high CSI, then we can infer the causal agency of something other than the seasons. If the specification is an abstraction, such as pi, then since intelligence is the only known cause that can implement abstractions, we can infer intelligent agency in the case of rings spelling out the digits of pi.
Returning full circle to the genetic code, we can apply the same reasoning. Generating functional proteins from the genetic code has an extremely small probability under known natural processes, especially the Darwinian process of random mutation and natural selection. If we use the specification of a software code, and as far as we know only human intelligence can create software codes since they require the power of abstraction and deductive logic, then we end up with a high amount of CSI in the genetic code. Thus, we can infer intelligent agency at work in the genetic code.
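A toy calculation makes the chance-hypothesis point concrete. The sketch below (Python; the ring pattern and both models are invented for illustration) shows that the same alternating ring sequence carries about 100 bits of surprisal under a uniform coin-flip hypothesis, but essentially none under a hypothesis that already includes the seasonal regularity.

```python
import math

# Hypothetical alternating light/dark ring sequence produced by regular seasons.
rings = "01" * 50  # 100 rings

# Chance hypothesis 1: each ring is an independent fair coin flip.
p_uniform = 0.5 ** len(rings)
bits_uniform = -math.log2(p_uniform)   # 100 bits of surprisal

# Chance hypothesis 2: seasons alternate deterministically, so the only
# uncertainty is whether the sequence starts on a light or a dark ring.
p_seasonal = 0.5
bits_seasonal = -math.log2(p_seasonal)  # 1 bit of surprisal

print(f"uniform hypothesis:  {bits_uniform:.0f} bits")
print(f"seasonal hypothesis: {bits_seasonal:.0f} bits")
```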
EricMH:
In general I agree with what you say. But I have to make some clarifications which are, IMO, important:
a) A chance hypothesis (null hypothesis) is simply the hypothesis that no real effect is observed, and that the configurations we observe can reasonably be explained by random events following some probability distribution that can reasonably describe the system, given the necessity laws working in the system itself.
There is no reason at all that we have to hypothesize a uniform distribution. Any reasonable probability distribution could describe the system, and still the result would be a random result.
b) A necessity explanation has nothing to do with any random hypothesis, and with any probability distribution. A necessity explanation observes a cause and effect relationship that explains the configuration we observe. Causes existing in the system are generating the configuration we observe, and not because of a probability distribution, but because of a direct causal connection. So, the connection between seasons and tree rings is a necessity connection, not the result of any probability distribution (even if, of course, random effects can be present too).
c) Specific configurations that have the features of code and a functional specification cannot really arise as a result of any probability distribution, if they are complex enough. A probability distribution, of course, does not know anything about the English language, and it does not understand meanings. That’s why a Shakespeare sonnet will never arise from a probability distribution, any probability distribution. There is no need for a uniform distribution of the letters. You can make one or more letters more likely, but that will never generate the sonnet.
In the same way, even if random mutations do not really follow a uniform distribution (indeed they don’t), no special probability distribution has any chance of generating the code for a complex functional protein. It’s not important if some mutation has a probability which is different from some other mutation; the correct sequence is still by far too unlikely to originate in any possible physical system.
In a design process, there is a very specific necessity connection between the designer, his understanding, his conscious representations, his actions and the final result of the process. IOWs, the physical object is shaped by the designer according to the form and meaning already present in his consciousness. A series of necessity events establishes the connection. Probability has no role here, except for possible noise generation.
The connection between seasons and tree rings is a necessity one. But it is not symbolic, and it is very simple. Given the laws existing in the system, we understand very well how a relatively simple and repetitive binary sequence like tree rings originates from existing and repetitive events in the system.
You mention compressibility. That’s an important point, because I have always argued that compressible information is often a result of necessity laws acting in the system.
For example, a sequence of 1000 heads from coin tossing is extremely unlikely, if the coin is really fair. So, it could well be a result of design. But still, if the coin is not fair, or if any other condition in the system strongly favours heads, then that sequence can become very likely, maybe necessary.
A sequence of 1000 heads is highly compressible. Compressible information can be a result of design or of some simple necessity law. The explanatory filter has always been well aware of that, that’s why necessity explanations must be seriously considered, especially with compressible information.
But a Shakespeare sonnet, or the sequence of a functional protein, is scarcely compressible information. The functional information in those objects does not derive from some repetition of simple configurations: it is directly connected to much more complex realities, like those of language and of meanings, or those of a clear understanding of biological functions, folding, biochemistry, and so on.
Only design can generate that type of complex object. Such objects arise neither from probability distributions of any kind, nor from the action of existing necessity laws.
DaveS
Yes, I think you’re right. It would be difficult to explain that correlation from merely the randomness of tree rings alone. There are always outliers and chance occurrences. But some sort of explanation would be needed if we found that exact sequence as you describe it. At the same time, the information remains embedded in a tree and does not appear to be communicated beyond that and we also know the origin and cause of that information.
But I’d put it this way – if people are really presenting that analogy as a materialist response to the ID detection we have with DNA, I mean seriously? That’s just clutching at straws. It indicates the extreme weakness of the materialist view — just running away from the evidence.
I think I’ve followed you long enough on this site to guess correctly that you do not really believe that is in any way a valid response to the strength of the ID argument … right? Or do you think that is a strong opposing argument to ID?
Gpuccio,
Glad to see you back!!!
I was missing your posts and comments.
SA,
No, it’s not an anti-ID argument at all. It’s more an attempt to understand what the minimal requirements are for something to count as a code.
Dave – sorry I misunderstood. It’s a good question and it’s exactly the kind of thing that ID can work on. It’s always a matter of gaining more precision over a science that is based on probabilities and predictions.
Hi Gpuccio
I mentioned you toward the end of this book review of Darwin Devolves that Perry Marshall asked me to participate in.
https://youtu.be/MiiV5LgUe5k
daves- The minimal requirement is that it has to meet the definition of a code. Larry Moran on the real genetic code and how it is the same type of code as Morse code.
@GP, thanks, what you have written has given me some ideas.
I would say meaningful information is usually somewhere between extreme simplicity and incompressibility. E.g. most code and English text is pretty compressible.
One interesting side point: Shannon says English is about 50% redundant, so one can create 2D crosswords. On the other hand, if it were only 30% redundant, Shannon claims we’d have to create 3D crosswords. Proteins are essentially 3D crosswords, and the genetic code is very low in redundancy.
On the other hand, being incompressible with an external function is not quite the same as being designed. A photograph of a crystal will be incompressible due to noise, and have a concise external referent, but does not indicate intelligent design.
So, I would still say the distinguishing feature of CSI is the detachable specification, which furthermore has to be an abstract specification, i.e. something that cannot be derived from physical material.
The photograph of a crystal does not meet this criterion because the photograph is generated from the external referent, so the external referent is not detachable. However, in the case of the genetic code, the code is not generated from the function, so the function is a detachable specification.
EricMH:
If materialism is correct then the genetic code was generated from the function. It just emerged from the system, which emerged from the components nature just happened to produce.
Here is a stunning claim from Abel and Trevors.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/
One of the key questions you have to answer if you believe in a naturalistic, dysteleological origin for DNA or RNA is: how did chemistry create the code? Do you have any evidence of how an undirected and purposeless physical process created what we intelligent beings recognize as a code? If you do, please give us your explanation. Or is it just your belief?
If you don’t have an explanation, I’m going to make the same assumptions I use to identify ducks: If it looks like a code and operates like a code chances are that it really is a code.
Some people call that the “duck test.” I just call it logical thinking
ET,
That’s useful 😛
daves @ 32- It should be very useful to anyone saying:
😛
To John A Designer, Upright Biped, gpuccio, EricMH, et al: Please see:
The Origin of Prebiotic Information System in the Peptide/RNA World: A Simulation Model of the Evolution of Translation and the Genetic Code
Sankar Chatterjee (Department of Geosciences, Museum of Texas Tech University, Lubbock, TX, USA) and Surya Yadav (Rawls College of Business, Texas Tech University, Lubbock, TX, USA)
Received: 13 December 2018; Accepted: 25 February 2019; Published: 1 March 2019
https://www.ncbi.nlm.nih.gov/pubmed/30832272
Food for thought- although it is all speculation.
OLV:
Yes, I took some rest!
Nice to see you again 🙂
daves re 32- Tree rings are not a code because they do not meet any standard and accepted definition of a code.
ET,
Tree rings in themselves are not a code, but a designer could use them to send messages in Morse Code, presumably.
DaveS
A designer could use them to send messages about the age of trees.
daves:
The Slowskys may like to communicate that way. But with whom, we don’t know.
Where in the world DaveS was trying to go with tree rings I have no idea. I thought he might be trying to rehash the old fallacy that the coding in DNA could occur naturally, but he apparently does not hold that tree rings occur naturally as a code, i.e. “a designer could use them to send messages in Morse Code”. Whatever that is supposed to mean.
But anyways, the coded information in DNA is certainly not reducible to the laws of (classical) physics or chemistry:
And the other Darwinian gambit, i.e. Natural selection, is also a joke as to explaining the coded information within DNA,
Whereas, on the other hand, experimental realization of Maxwell’s demon thought experiment has now demonstrated that an Intelligent observer does have the physical capacity to encode information into material substrates at the atomic level.
As the following paper highlights, it has now been experimentally demonstrated that knowledge of a particle’s position converts information into energy.
And as the following 2010 article stated about the preceding experiment, “This is a beautiful experimental demonstration that information has a thermodynamic content,”
And as the following 2017 article states: James Clerk Maxwell (said), “The idea of dissipation of energy depends on the extent of our knowledge.”,,,
quantum information theory,,, describes the spread of information through quantum systems.,,,
Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
Moreover, classical information is shown to be a subset of quantum information by the following method. Specifically, in the following 2011 paper, “researchers ,,, show that when the bits (in a computer) to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer’s state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se, instead an object’s entropy is always dependent on the observer.”
Thus, to put it simply, Darwinists have no clue how coded information was put into DNA so as to circumvent the second law, whereas ID advocates have a demonstrated mechanism, via experimental realization of Maxwell’s demon thought experiment, by which mind is able to encode information at the atomic level in order to circumvent the second law.
As far as empirical science itself is concerned, the matter is settled. The materialistic explanations of Darwinian evolution are found to be grossly inadequate for explaining the coded information in DNA. And only Intelligence has the demonstrated causal sufficiency to explain the coded information in DNA.
SA,
Heh. Indeed.
Anyway, it seems there are at least a couple of different ID arguments having to do with codes here:
1) We could find a message encoded in nature in some obvious and human-readable way (e.g., in tree rings, perhaps in DNA, etc). It appears no one here finds that a realistic possibility.
2) We could find information circuits, or entire information processing systems (which use codes) somewhere in nature. That sort of message is more subtle, in that the designer is not explicitly announcing his existence. It’s not like the constellations suddenly rearranging themselves so as to spell out “John 3:16” or the like.
The message that the genetic code is the result of intelligent design is far from subtle. The components and systems required to carry it out are more than enough evidence for ID. To think that nature did it, without trying to or wanting to, is beyond absurd. Especially given that nature seeks the line of least resistance, meaning simple is the rule. Just look at Spiegelman’s Monster.
Follow up video to the “DNA Is Code: Who Coded It?” video was just uploaded:
One of the icons of so-called irreducible complexity (IC) is the bacterial flagellum. However, there are other, perhaps even better, examples of IC. In my opinion, prokaryote DNA replication is a far more daunting problem for the Darwinist. However, instead of one molecular machine, like the flagellum, you have several interacting machines acting in a coordinated manner. This still fits Behe’s definition of IC as being “a single system which is composed of several interacting parts, and where the removal of any one of the parts causes the system to cease functioning.”
For example, to start replication in prokaryote DNA you need an initiation enzyme which creates a replication bubble where another enzyme called helicase attaches itself and begins, like a zipper, to unbind the two complementary strands of the DNA double helix. Another enzyme called primase creates another starting point (a primer) on both of the separated strands, known as the 5’ and 3’ or leading and lagging strands. DNA polymerase III uses this primer – actually a short strand of RNA – and adds the complementary nucleobases (A to T, T to A, C to G, G to C) to the single parent strand. In a nutshell, helicase divides one double-stranded DNA helix into two single “parent” or template strands to which complementary nucleotides are added by pol III, and the result is two identical double-stranded DNA helices.
Of course, it is somewhat more complicated than that. (Please watch the first video below.) For example, as helicase unbinds the two strands of the double helix, which are wrapped around each other to begin with, there is a tendency for tangling to occur as a result of the process. Another enzyme called gyrase (or topoisomerase II) is needed to prevent this tangling from occurring. Another problem is that the bases for the lagging strand must be added discontinuously, which results in short segments known as Okazaki fragments. These fragments must eventually be joined back together by an enzyme known as ligase. (We could also discuss error correction, which is another part of the replication process.)
Here are a few videos which describe the process in more detail.
https://www.youtube.com/watch?v=O3v04spjnEg&t=2s
https://www.youtube.com/watch?v=bePPQpoVUpM
https://www.youtube.com/watch?v=0Ha9nppnwOc
While it’s true that the flagellum is irreducibly complex, it is not essential for life itself. There are a number of single-celled organisms that exist without flagella. However, life cannot exist without DNA replication (nor transcription, translation, etc.). Furthermore, with DNA replication the Darwinist cannot kick the can down the road any further. DNA replication in prokaryotes is as far as you can go, and then you are confronted with the proverbial chicken-or-egg problem. DNA is necessary to create the proteins which are used in its own replication. For example, the helicase which is absolutely essential for DNA replication is specified in the DNA code which it replicates. How did that even get started? Maybe one of our know-it-all interlocutors can tell us.
The problem with the Darwinian approach is not scientific; it is philosophical. The people committed to this approach believe in it because they believe that natural causes are the ultimate explanation for their existence. However, science has not proven such a world view to be true. (That’s not something science can do.) So ironically, whatever they believe, they believe it by faith.
DaveS
ID gives the case that some aspects of the universe show evidence of having been designed by an intelligence. Clearly, it would be a pointless task for ID to travel around human culture and identify everything that we already know that humans created and then declare that as evidence. However, if there was a new discovery of caves where images of animals are inscribed on the walls, that is a relevant use for ID, somewhat – in a forensics task inferring that the images could not have been created by random, non-human movements. Yes, you’re right that nobody expects to find a quote from the Gospel written out in tree ring codes. I think, actually, such a finding would be dismissed as a radical outlier by most ID researchers, although it would be almost impossible to explain as a random occurrence. I suppose it would be evidence of human interference or better yet, some kind of alien intelligence. As a single instance, it’s an outlier. If every tree in a particular forest showed similar results, it’s an ID conclusion. There is some intelligence at work there. We could keep looking for such things, or looking at every rock formation, converting it to Morse Code and then reading it — but ID finds enough positive, repeated and stronger evidence in fine-tuning of the universe and biological systems.
Right, exactly. The designer appears to be hiding the evidence. For centuries, before the birth of microbiology, we had no way of really seeing those ID messages in the cell. To me, it appears that those ID messages are embedded into reality and are only slowly revealed over time, and even though the evidence for ID seems blatantly obvious to me, it is only rarely a case where the designer makes a bold, undeniable statement.
I say rarely because I think events like Guadalupe, the miracle of the sun at Fatima, the Shroud of Turin, and stigmatic or incorrupt saints, for example, are very bold statements of ID, with a designer’s “signature” all over them, so to speak. But since all of those have a theological component, people do not like to investigate them. In the study of the cosmos or biology, the message is always very subtle, in my view. It can always be denied. The human imagination allows for a lot of escape paths, and if people do not like the ID evidence, it’s relatively easy to invent alternative scenarios.
DaveS at #37 and #41:
Not so easily. Messages are coded using configurable switches, IOWs switches which, according to the laws of nature operating in the system, can well assume different configurations. IOWs, the configuration of each switch must be “neutral” according to the necessity laws operating in the system (for example it could be 0 or 1, indifferently), and its specific value is set by the designer. This freedom allows the designer to output the meaningful configuration.
In the case of tree rings, the configuration is set by the laws of nature and by the biology of the tree. IOWs, it is set by necessity. That would make it really difficult to use the tree rings themselves to express any meaningful message.
In the same way, we can write a message in the sand, but not in the position of the atoms in a crystal.
DaveS:
The previous post was posted while I was writing, by mistake, so I am continuing here:
We could. But we have not, as far as I am aware. Science is done with facts, not with possibilities.
This sort of message we do observe all the time in biological beings. Maybe it is subtle, but it is very clear and implies design beyond any possible doubt.
For the constellations, you can just wait, while you look at the stumps… 🙂
EricMH at #29:
There is IMO a lot of confusion about compressibility in the ID debate. That’s why I would like to add a few thoughts about that.
Order and compressibility can be a form of specification. In that case, and only in that case, the configuration we observe is specified because it is ordered, and for no other reason.
Consider my example of the sequence of 1000 heads. It is highly ordered, and that’s why we distinguish it from a random outcome, which typically is very different. So, we suspect it may be designed because it is specified (by its order and compressibility) and it is complex (1000 bits).
However, as explained in my previous post, we have to consider the possible role of necessity, because simple necessity laws can generate order.
(More in next post, because this one, again, was posted by mistake: something in my typing, I suppose).
EricMH at #29:
So, let’s go on.
The sequence of 1000 heads is specified by its order. Maybe it is designed, maybe it is the result of necessity laws operating in the system. We have to check carefully, before reaching a conclusion.
But functional information of the kind we observe in language, in proteins, in living system, is not of that kind. Functional information is not specified by its order, even if some order can be detected in it. Indeed, functional information is specified in spite of its order.
I will try to be more clear. Let’s consider English language, and my Shakespeare sonnet, again.
You say, very correctly, that the English language has its redundancies, and that it is, in part, compressible. But the point is: the sonnet we are considering is not specified because it is, in part, compressible. It is, indeed, specified because it expresses specific meanings that are not compressible, using specific configurations of partially compressible components.
Just as a reminder, the sonnet I have always offered as an example is Sonnet 76:
“Why is my verse so barren of new pride?”
Now, while the first verse and the whole sonnet, in a wonderful paradox, seem to affirm the repetition in the poet’s obsessions, there can be no doubt that the sonnet itself is a masterpiece of creativity and originality of thought and feeling and beauty.
Now the simple question is: does its meaning, and creativity, and beauty, derive from its compressible components? Of course not. We could observe some sequence of letters which is equally compressible, from a Shannon perspective, but which means exactly nothing. In that case, we could infer design because of the compressibility (which, in itself, is a form of order), but not from any meaning in the poem.
So, in functional information, be it language or software or proteins, the functional specification is linked to what the object means or can do: meaning or function, descriptive information or prescriptive information, as Abel would say. If the switches used to get the configuration are partially compressible or not has no relevance. It’s the meaning or the function in the specific, unique configuration that matters.
Functional specification, if complex enough, is sufficient to infer design. If more than 500 bits are necessary to implement that meaning or function, and if we observe it implemented in some object, we can infer design for that object. It’s as simple as that.
@GP, I agree compression is orthogonal to whether something is meaningful.
Functional is some kind of external definition, and perhaps is obvious from a practical standpoint. But, what is the mathematical definition of functional?
While a comprehensive definition is probably not possible, at least a necessary component is that it is a detachable specification, per Dembski. Otherwise, we cannot say the function did not itself arise from the chance hypothesis. E.g. take a Kolmogorov random bitstring, copy it, and now each incompressible bitstring has a perfect, external specification in its twin. If we do not require ‘detachability’ in our specification, then both incompressible bitstrings have maximal CSI. Additionally, this operation can be done entirely through a natural process and does not indicate design. So, in this example, by removing the detachability requirement high CSI clearly does not indicate design.
John_a_designer:
All you had to do was ask: Peering into Darwin’s Black Box:
The cell division processes required for bacterial life
Evolutionists just handwave it all away or they will attack the author.
EricMH:
Does it require one? We observe something doing some work of some type and we call it a function. Functionality is a specification of sorts. Then we attempt to discover how it all came to be the way it is, in part by using our knowledge of cause and effect relationships. We could apply Wm. Dembski’s mathematics with respect to discrete combinatorial objects to help us.
EricMH:
While I have nothing against the concept of “detachability”, I don’t think it is absolutely necessary.
Functional specification can be well defined empirically without any problem. I have done that many times here.
In brief, the procedure is as follows:
a) An observer defines a function that can be implemented by some specific material object. The important point is: any observer can define any function. However, the definition of the function must be objective, and it must include some level that defines the function as present or absent, and an objective way to measure it.
b) After we have defined the function, we can verify that the object can implement it, and we can measure the minimal complexity needed to implement that function as defined. IOWs, the number of specific bits of configuration that are necessary to implement the function as defined. That is the functional complexity linked to that defined function, and observed in the object in relation to the defined function. That functional complexity can usually be computed from the ratio between all possible forms (or sequences, for digital information) that can implement the function as defined and the total number of possible configurations: the functional complexity is the negative log2 of that ratio.
So, functional information is always relative to some defined function. The same object can have different functional information for different functions.
The point is: if an object exhibits more than 500 bits of functional information, for any possible function, we can infer design for it.
Of course, any function we define must be defined independently from the observed configuration: this is the only important rule. Maybe this is what you mean by “detachability”: if that is the case, you are perfectly right.
IOWs, we cannot define the function using the specific configuration observed.
Just to be clear, if we observe a sequence of 100 digits, we cannot use that sequence to set it as the key to a safe, and then define the function as the ability to open that safe. That would be cheating.
You say:
No. You cannot define any function for string B. It is the same as string A because you have copied it. And so? That is no functional specification. And copying simply means to duplicate already existing information.
If I copy a Shakespeare sonnet, I am creating no new functional information, no new meaning. The procedure of copying is only a necessity procedure where object A determines the form of object B according to laws operating in the system (the copying system). There is no design here, except maybe the design of the copying system (which has nothing to do with the design of the sonnet).
So, when the information for a protein in a protein-coding gene is transcribed and translated, and it generates, say, 1000 copies of the protein, no new information about the protein sequence is generated: the necessary information is already in the gene. The gene already has in itself the bits necessary to implement the function of the protein.
So, if by detachability you only mean that the function cannot be defined ad hoc for the specific bits observed, then I agree. But that has always been an obvious requirement of the definition of functional information.
That said, it is extremely easy to define functional information and a way to measure it. And in all cases, more than 500 bits of functional information imply design, whatever the defined function may be.
One of my first OPs here has been about defining functional information objectively:
https://uncommondescent.com/intelligent-design/functional-information-defined/
Here we go again with ‘information’ misuse (abuse) and tree rings (very much dependent on the sampling rate) and “specified complexity” nonsense and DNA as information (when 1 GB can’t even hold your phone OS). And let’s not forget the “Shakespeare sonnet” and “functional information”.
Here’s some help:
http://nonlin.org/dna-not-essence-of-life/
http://nonlin.org/biological-information/
http://nonlin.org/intelligent-design/
gpuccio,
Oddly enough, I’m very intrigued by these sorts of events. I think I would find this evidence most convincing, if I could witness it myself.
Nonlin.Org:
Here we go again…
Yes. Definitely, I have not changed my mind!
And neither have you, it seems… 🙂
DaveS at #55:
I am intrigued too, of course. And that kind of events certainly deserves to be investigated with an open mind.
However, I am afraid that at present there is no chance that they are accepted as facts by most scientists, and so it would be difficult to use them in some general scientific theory.
So, for the moment, I am perfectly happy to stick to the billions of amazing miracles that everybody can observe daily in living beings. 🙂
EricMH:
It’s rather simple.
“Functional” just means that some object can be used to implement some explicitly defined function. Any possible function will do. Of course, the same object will be functional for some functions, and not for others.
The mathematical definition of functional information, instead, is: the least number of specific bits necessary to implement the defined function.
Again, it’s as simple as that.
gpuccio,
A couple of random questions:
Should any nontriviality conditions be imposed on the concept of “function”? For example, I could ask how much functional information is necessary to implement a paperweight. How about “a solid object which displaces 1 liter of air”? These functions are obviously uninteresting, but it would seem under your definition, they should each possess (or specify?) a well-defined amount of functional information.
A slightly more interesting question, perhaps: How much functional information is required to construct a mechanism which rotates a small metal shaft at a rate of 1 rotation per hour (i.e., a very simple clock)? I’m not expecting you to calculate this, mind you, it’s more food for thought.
daves- While you are awaiting gpuccio:
We should investigate everything we observe to get to the root cause of it and understand it. So it would all depend on the specific paperweight.
ET,
But it shouldn’t “all” depend on a specific paperweight. According to our definition, we take a minimum over all functional paperweights.
Hold down paper- how many bits is that? It isn’t CSI, that’s for sure.
GP@57, you mention that the same object can have different functions, but does that mean that it has more than one measure of functional information? For example, the artifact that was used as the standard kilogram for over a century surely has a tremendous amount of functional information. But as of a few months ago, it is little more than a paperweight. Has it lost its functional information?
Brother Brian:
Let’s see your math. Or are you just fishing?
We do NOT use functional information for everything to determine whether or not it was the product of intelligent design.
ET,
One would hope not.
Now to be fair, gpuccio would likely respond by asking for the more information about the specific function. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze.
I would be curious to see if anyone can come up with a number in bits.
I would be curious as to why anyone would want to.
ET,
To show it’s possible, of course.
We do NOT use functional information for everything to determine whether or not it was the product of intelligent design.
But we do have Measuring the functional sequence complexity of proteins
ET
Without an internationally recognized standard for the kilogram (or an equivalent standard for mass) we would not have been able to put people on the moon. Surely that means that the kilogram has functional information.
Brother Brian:
How do you know?
It functions as a standard.
But that is all moot. YOU made a claim and I asked you to back it up. Can you?
ET,
If that’s directed toward me, then of course no one said otherwise. gpuccio says that “any possible function will do”, which I understand as implying that we should be able to calculate the functional information required to implement the paperweight function I described.
ET
Are you seriously suggesting that we could put a man on the moon without a standard unit of mass? Newton begs to differ. If you are just going to keep parroting back this nonsense I am going to take my earlier advice and continue to ignore your comments.
DaveS, I would be interested in your opinion on my statement that the standard kilogram artifact that was used for over a century had functional information. It was critical for a century’s worth of advances in industry, technology and commerce. Every country maintained its own kilogram artifact that was traceable to the prototype kilogram housed in France, and these artifacts were critical in the design and manufacture of everything from bicycles to airplanes to spacecraft. As well, it was used in commerce in the sale of anything that was based on weight.
However, in 2019 this artifact was replaced. Does this mean that the functional information that this artifact contained for well over a century ceased to exist?
Brother Brian:
Unbelievable. TRY to stay focused. We could easily put a man on the moon without an INTERNATIONAL standard.
So you are parroting the nonsense here, Brian. And you have once again avoided the question. That is very telling
Brother Brian:
Evidence please- and don’t ask me a question, just provide the reference to support your claim. Or stop making them
DaveS at #59 (and others):
Good questions from you and from others. I will try to answer them, in some order I hope. I think my answers will be useful to the general debate, so I invite all who are interested to read this post and those immediately following, whoever they are addressed to.
So, your first question:
No. No conditions at all are imposed on the concept of function. Anybody can define any function he likes, and the functional complexity can be assessed (at least in principle; it is not always easy) for each of them. It is not important if the function is interesting or not. Usually, uninteresting functions will have low functional complexity, as I will try to show.
I list again the only rules that must always be respected in defining a function. They are not “conditions”, just obvious procedures that must be followed to have the right result:
1) We can never define a function using the specific values of the bits already observed in the object. In that case, we would be using the (generic) information observed in the object to define an ad hoc function. That is obviously wrong. See my example of the safe, in post #53.
2) While the function is defined by an observer, it must be defined explicitly, so that it becomes an objective reference for anybody. There must be no ambiguity in the definition.
3) Included in the definition there must be a level that defines the function as present or absent. IOWs, we must be able to assess potential objects as exhibiting the function or not, in some objective way, direct or indirect.
4) Reasonings and measurements of functional complexity are never done abstractly. To be useful in inferring design, they must always refer to some specified system, time window, and so on.
To see that, let’s try to apply those principles to your example:
“A solid object which displaces 1 liter of air”
This is a perfectly valid function definition, but incomplete. We need to know the system, the time window, and the level of precision to assess function.
So, let’s say we have a beach with approximately 1 billion stones, formed apparently by natural laws operating in that system in one million years.
We observe a stone whose volume is one liter.
Is it designed, or not?
Let’s say we define our function as “having a volume of one liter with a precision of one part in a million”.
OK, that is more complete. First of all, the reference we are using (the liter) exists independently (we are not using the observed volume to define it). Of course, any stone could be defined as having the volume it has. In that case, we would be using the observed configuration (the volume of the observed stone) to define the function, and we know that this is not correct. But with the liter, we do not have that problem.
So, using our definition, we can in principle apply it to generate a binary partition on the set of all possible stones (which could include the billion we can observe, but also all those that could have been formed in the time window). However, that is a finite number. Our binary partition will classify all possible objects in the system as exhibiting the function or not.
At this point, the ratio of all possible objects in the system exhibiting the function (the target space) to all possible objects in the system (let’s say a billion, the search space) gives the probability of the function; the negative log2 of that ratio is the functional complexity of our function in that system.
We can try to compute that. It may not be easy, but in principle it can be done, possibly by indirect methods. The task here is more difficult because we are dealing with analog configurations. It’s usually easier with information that is natively in digital form, like in most biological objects.
Now, let’s say that in some way we compute that a stone randomly generated in our system by natural causes has a probability of satisfying our definition of 1:10^20, IOWs 10^-20, IOWs about 66 bits of functional information.
Now we must consider the probabilistic resources of the system. If we evaluate that about one billion stones have been generated in the system in the time window, then the probabilistic resources are about 10^9, IOWs about 30 bits.
So, we have a result that has a probability of 66 bits in a system which has probabilistic resources of 30 bits. There is a global improbability of observing that result of about 36 bits. And that is something.
Is that enough to infer design? Not according to the general, extremely conservative rule usually applied in ID: 500 bits of functional information observed, whatever the probabilistic resources of the system.
But, in the end, our conclusion depends on the system we are observing, the meaning of our conclusion, its generality, and so on.
The 500 bits threshold is usually selected because it ensures utter improbability in practically any possible physical system in the universe, whatever the probabilistic resources.
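For concreteness, here is a small sketch (Python) of the arithmetic in the stone example above; the 10^-20 probability and the 10^9 probabilistic resources are the assumed figures from that example, not measured values.

```python
import math

def bits(probability: float) -> float:
    """Convert a probability into bits (-log2 p)."""
    return -math.log2(probability)

# Assumed figures from the stone example above.
p_function = 1e-20   # chance that one randomly formed stone meets the defined function
n_stones = 1e9       # stones plausibly formed in the system's time window

functional_info = bits(p_function)     # about 66.4 bits
resources = math.log2(n_stones)        # about 29.9 bits of probabilistic resources
excess = functional_info - resources   # about 36.5 bits of "global improbability"

print(f"functional information:  {functional_info:.1f} bits")
print(f"probabilistic resources: {resources:.1f} bits")
print(f"excess improbability:    {excess:.1f} bits")
# The conservative rule quoted above asks for 500 bits of functional
# information regardless of resources, so no design inference here.
print(f"meets 500-bit threshold: {functional_info >= 500}")
```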
More in next post.
DaveS at #59 (and others):
So, let’s go to your second question:
Again we can apply, in principle, the method described.
We need a system where such an object could arise without design in some time window, and then we have to compute the probability of such an object arising (of course the function must be defined with precision) by chance, IOWs the ratio between spontaneous objects that would exhibit the function and the total number of objects generated in the system. I will not try to compute that for any system, but I would say that, if the function is defined with high precision, the probability will be really low. For a whole watch, I would rather blindly accept Paley’s inference of design in any case and system.
DaveS and ET:
About paperweights:
Of course a paperweight has low functional information: if it is defined without great precision (no great precision is necessary, I would say), then a lot of possible objects qualify.
Let’s say that we only need a solid object weighing something between 1 and 2 kilograms.
In one of my OPs, I have used the paperweight function to illustrate an object with two different functions and two different values of functional information.
A laptop can be used as a paperweight and as a computer.
As a paperweight, its functional information is very low.
As a computer, it is extremely high.
Is that clear?
daves and gpuccio:
Specification- that would be the specification the paper weight needed to meet. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze.
We would also have to know if the stack had to be held down such that the papers don’t bend or get damaged.
So a stone used as a paper weight would have some functional information. But that functional information is imparted by the person who wants to meet some criteria, such as the above specification. The stone wasn’t necessarily designed, unless it had to be cut and shaped, but its function was.
(I was typing when gpuccio posted 78. I agree with 78)
Brother Brian at #63:
That’s perfectly correct. See my example in the previous post (a laptop computer used as a paperweight).
Absolutely not. It was designed to correspond to a very precise level to be used as a reference. That was true up to May 20, 2019. Now the standard has changed, but that does not change the function of the previous object, which was in use for a long time. And it has a rather precise correspondence to the mass of one liter of water, anyway. So, nothing has changed about its functional information.
DaveS at #65:
I hope my previous posts have clarified my views about that.
DaveS at #71:
Of course that’s correct. I think I have shown the procedure. A real computation requires defining a system and time window, and a precise functional definition. And, of course, some real work.
Brother Brian,
Regarding the kilogram example, I think it has changed in a sense. In the past, the kilogram was by definition exactly the mass of this object. It was correct to infinitely many decimal places, so to speak. Now its mass is just very close to 1 kg (and the error varies as atoms occasionally fly off it). Perhaps that means its functional information has changed.
DaveS at #83:
Its functional information is linked to the way it was designed at the beginning, to satisfy certain requirements. It has not changed. Only its use has changed now, but that has nothing to do with the specific configuration that was given to the object when it was designed.
gpuccio,
Thanks for the responses. My question then becomes, how does the number 66 bits quantify the amount of information needed to implement the function?
If I actually wanted to create a solid object which displaces 1 liter, how does 66 bits fit into the design and/or construction process?
DaveS at #83:
It is also interesting to consider that the functional information when it was designed was relative to its properties at the moment of its design (for example, to correspond rather well to the weight of one liter of water). Its use as a reference after it was built is something “after the fact”, so it has nothing to do with the functional information.
Remember, the functional information measures the bits that have to be configured to make an object able to perform a pre-defined, independent function.
If we take a random stone and decide that it will be the new kilogram from now on, whatever its weight, there is no functional information in the object: we are just using it for a function that we define using the configuration of the object itself. The new function is designed, but not the original object.
Functional information means that the designer has to configure exactly a number of bits because, otherwise, the independent function cannot be implemented by the object. Natural causes can generate some functional information, but it is always very low, in the range of what probabilistic resources can do.
That’s why paperweights abound in nature without any need for design, but watches and computers don’t.
DaveS at #85:
It is the level of minimum precision that I have to get to have that exact volume. Of course it is not a true measure. I have derived it from the hypothesis that such a precise volume could be attained by chance in that system only once in 10^20 attempts.
1:10^20 = 10^-20
-log2(10^-20) = about 66 bits
DaveS:
The meaning of the value in bits is more intuitive when we are dealing with digital information in native form. However, the meaning is always the same: -log2 of the ratio between target space and search space.
gpuccio,
Ok, I might be getting it. Would it be correct to say this functional information measure is always relative to a “null hypothesis” (in this case that the 1-liter solid was produced by natural processes on this 1-billion stone beach)?
DaveS:
Yes, of course. The absence of design is the null hypothesis.
Joe asks for evidence that unit standardisation is a good thing. Here is a negative example:
https://en.m.wikipedia.org/wiki/Mars_Climate_Orbiter
daveS
That is very good to hear, and it tells me that you are open to the evidence, at least in this case, through direct experience. There are, in a sense, living artifacts or testimonies of design in the tilma of Guadalupe and the shroud. They are not a direct experience of the events, but at least artifacts that can be observed; in both cases, some inference must be drawn about their origin. I think the miracle of the sun is very difficult to explain from a materialist perspective, even though it is a historical event that is subject to that kind of analysis. The stigmata of St. Pio, for example, are documented with photos. But even here there is always some room for denial. To me these are strong evidences of design, but, as you said previously, there is nothing that makes an absolute statement which is completely undeniable. I see that as part of the designer’s methodology. Others think it a weakness of the design perspective that nothing like a Shakespeare sonnet written in Morse code has ever been found in tree rings.
timothya @ 91- No one here asked for evidence that unit standardization is a good thing.
I would say the Mars orbiter problem was a communication issue and not a standardization issue. If the contract called for one thing and something else was delivered, that is a sign of a communication breakdown. But it does show how critical complex specified and functional information can be.
ET
Silly me. And I always thought standardization was a communication issue. But what would I know? I only make a living in the standardization field.
Brother Brian:
Context. You know that word that you refuse to understand. Quote-mining, on the other hand, is something that you do quite well.
The sentences after the one that you so cowardly quote-mined should have been explanation enough for someone who allegedly makes a living in the standardization field. But that is moot as the programmer was using a standard, just the wrong one. Hence the communication issue.
gpuccio,
Is there a benefit to stating all this in terms of functional information? In order to do that, you have to estimate the relative frequency with which this function would occur naturally and the total number of “trials” that have occurred (10^-20 and 10^9), so you already know the chance of even one functional outcome is minuscule (assuming the null); hence the null hypothesis is likely false. Why not just stop there?
A 2007 paper in PNAS by Jack Szostak and his colleagues defines functional information this way:
“Functional information is defined only in the context of a specific function x. For example, the functional information of a ribozyme may be greater than zero with respect to its ability to catalyze one specific reaction but will be zero with respect to many other reactions. Functional information therefore depends on both the system and on the specific function under consideration. Furthermore, if no configuration of a system is able to accomplish a specific function x [i.e., M(Ex) = 0], then the functional information corresponding to that function is undefined, no matter how structurally intricate or information-rich the arrangement of its agents.”
https://www.pnas.org/content/104/suppl_1/8574
Take, for example, a bike sprocket. Without a system (the bicycle), the sprocket has no function. However, it still has a potential function and a purpose. If we find a sprocket in a warehouse next to a factory where they assemble bicycles, we could quickly deduce what the purpose of the sprocket is. In other words, it still has a purpose defined by its potential function.
I was trying to make a similar point above at #44 when I talked about helicase and DNA replication.
https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679003
What is the function of helicase without the DNA helix? It has no other function. So it is highly specified.
The paper that JAD posted might answer my last question to gpuccio:
And this definition is very similar to the one gpuccio illustrated above. I don’t see any dependence on the null hypothesis we discussed above (absence of design), however [Edit: Perhaps it’s implicit?]. Would this matter? I guess the denominator in Szostak’s version is simply the total number of possible configurations of the system, period, not the total number of configurations that are “reachable” through natural processes.
ETA:
My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible.
Jad@98, thank you. That makes it clearer. And matches up with what I thought functional information is.
DaveS at #97:
I am not sure I understand your point.
Functional information and its measurement are essential to infer design. We can infer design when the functional information is high enough, in relation to the probabilistic resources of the system.
Where should we “stop”? We stop when, after having measured the functional information for some function, and finding it high enough (for example, more than 500 bits), we infer design for the object.
That was the purpose from the beginning, wasn’t it?
John_a_designer at #98:
Szostak’s definition is essentially the same as mine.
Of course the function is defined in a context. There is no problem with that. However, the functional information corresponds to the minimal number of bits necessary to implement the function. The function definition will include the necessary context.
For example, helicase will be defined as a protein that can “separate two annealed nucleic acid strands (i.e., DNA, RNA, or RNA-DNA hybrid) using energy derived from ATP hydrolysis” (from Wikipedia), of course in cells with nucleic acids and ATP.
DaveS:
Yes, as said Szostak’s definition of functional information is the same as mine.
The null hypothesis has a fundamental role in inferring design from functional information, not in the definition of functional information itself.
For obvious reasons, Szostak does not use the concept of functional information to infer design. That’s why you don’t see any mention of the null hypothesis in his paper.
But functional information above a certain threshold is a safe marker of design, and allows us to infer design as the process which originated the configuration we are observing.
Of course, that can be demonstrated separately. Up to now the discussion has been about the definition of functional information and its measurement, so I have stuck to that.
DaveS:
“My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible.”
OK, so I share that with Szostak. Good, so I will feel less a “bad guy” each time I criticize his paper about the ATP binding protein (and, unfortunately, that happens quite often here! 🙂 )
gpuccio,
This point might now be moot, but if we simply wanted to test the null hypothesis of no design, we don’t really need to transform the probability to units of functional information via the -log_2 function, do we? Using the two numbers 10^-20 and 10^9, we can show the p-value is tiny and therefore reject H_0.
After some reflection, I guess it’s convenient to frame this all in terms of bits of functional information and probabilistic resources. The numbers (66 bits, e.g.) turn out to be easier to work with, anyway.
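For what it’s worth, here is a small sketch, using the thread’s illustrative figures of 10^-20 per attempt and 10^9 attempts, showing that the p-value framing and the bits framing are the same comparison seen two ways; the variable names are mine.

```python
import math

p_per_trial = 1e-20   # chance of hitting the function in one trial (figure from the thread)
n_trials = 1e9        # probabilistic resources: attempts available (the billion stones)

# Hypothesis-test framing: P(at least one hit under the null) = 1 - (1 - p)^N.
# expm1/log1p keep the arithmetic accurate for such tiny probabilities.
p_value = -math.expm1(n_trials * math.log1p(-p_per_trial))
print(p_value)                              # ~1e-11: reject the no-design null

# Bits framing: the same comparison, expressed as information vs. resources.
fi_bits = -math.log2(p_per_trial)           # ~66.4 bits of functional information
resource_bits = math.log2(n_trials)         # ~29.9 bits of probabilistic resources
print(fi_bits - resource_bits)              # ~36.5 bits, i.e. -log2(p_value)
```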
Gpuccio @56
Yet one of us is wrong and leading others astray. But you’re not even curious, let alone interested in determining the truth. Nice going!
Here’s an article from the Stanford Encyclopedia of Philosophy:
Levels of organization are structures in nature, usually defined by part-whole relationships, with things at higher levels being composed of things at the next lower level. Typical levels of organization that one finds in the literature include the atomic, molecular, cellular, tissue, organ, organismal, group, population, community, ecosystem, landscape, and biosphere levels. References to levels of organization and related hierarchical depictions of nature are prominent in the life sciences and their philosophical study, and appear not only in introductory textbooks and lectures, but also in cutting-edge research articles and reviews. In philosophy, perennial debates such as reduction, emergence, mechanistic explanation, interdisciplinary relations, natural selection, and many other topics, also rely substantially on the notion.
Yet, in spite of the ubiquity of the notion, levels of organization have received little explicit attention in biology or its philosophy. Usually they appear in the background as an implicit conceptual framework that is associated with vague intuitions. Attempts at providing general and broadly applicable definitions of levels of organization have not met wide acceptance. In recent years, several authors have put forward localized and minimalistic accounts of levels, and others have raised doubts about the usefulness of the notion as a whole.
There are many kinds of ‘levels’ that one may find in philosophy, science, and everyday life—the term is notoriously ambiguous. Besides levels of organization, there are levels of abstraction, realization, being, analysis, processing, theory, science, complexity, and many others.
Although ‘levels of organization’ has been a key concept in biology and its philosophy since the early 20th century, there is still no consensus on the nature and significance of the concept. In different areas of philosophy and biology, we find strongly varying ideas of levels, and none of the accounts put forward has received wide acceptance. At the moment, the mechanistic approach is perhaps the most promising and acclaimed account, but as we have seen, it may be too minimalistic to fulfill the role that levels of organization continue to play in biological theorizing.
https://plato.stanford.edu/entries/levels-org-biology/#ConcRema
A layer of regulatory information on top of DNA is proving to be as important as genes for development, health and sickness.
To explain how the epigenome works, some have likened it to a symphony: the sheet music (genome) is the same, but can be expressed in vastly different ways depending on the group of players and their instruments (epigenome).
Human DNA in a single cell is enormously long (six feet) and folds with proteins into packages (chromatin) to fit within a nucleus.
https://inside.salk.edu/summer-2016/epigenomics/
A major question in cell biology is how cell type identity is maintained through mitosis.
We are only starting to understand the mechanisms by which epigenetic information contained within the vertebrate chromatin is transmitted through mitosis and how this occurs in the context of a mitotic chromosome conformation that is dramatically different from interphase. One important question that remains unanswered is how molecular details of epigenetic bookmarks are read in early G1 and enable re-establishment of cell type specific chromatin organization. Insights into these processes promise not only to lead to mechanistic understanding of mitotic inheritance of cell type specific chromatin state, they will also reveal how the spatial organization of interphase chromosomes is determined in general by the action of cis-acting elements along the chromatin fiber. This will also lead to a better understanding of what epigenetic mechanisms underlie processes in which cell type identity is changed, for example in stem cell differentiation or in diseases that result in cancer development and aging.
It will be very interesting to explore the pathways and mechanisms that are used to initiate epigenetic changes in cellular phenotype, how differences between sister chromatids are established and proper sister segregation is controlled.
Epigenetic Characteristics of the Mitotic Chromosome in 1D and 3D
Marlies E. Oomen and Job Dekker
Crit Rev Biochem Mol Biol. 2017 Apr; 52(2): 185–204. doi: 10.1080/10409238.2017.1287160
PMCID: PMC5456460
NIHMSID: NIHMS863269
PMID: 28228067
Cells establish and sustain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation.
Physiological control of gene expression is dependent on chromatin context and requires timely and dynamic interactions between transcription factors and coregulatory machinery that reside in specialized subnuclear microenvironments. Multiple levels of nuclear organization functionally contribute to biological control…
Cells establish and retain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation. Mitotic bookmarking sustains competency for normal biological control and perpetuates gene expression associated with transformed and tumor phenotypes.
Elucidation of mechanisms that mediate the genomic organization of regulatory machinery will provide novel insight into control of cancer-compromised gene expression.
Higher order genomic organization and epigenetic control maintain cellular identity and prevent breast cancer
A.J. Fritz, N.E. Gillis, D.L. Gerrard, P.D. Rodriguez, D. Hong, J.T. Rose, P.N. Ghule, E.L. Bolf, J.A. Gordon, C.E. Tye, J.R. Boyd, K.M. Tracy, J.A. Nickerson, A.J. van Wijnen, A.N. Imbalzano, J.L. Heath, S.E. Frietze, S.K. Zaidi, F.E. Carr, J.B. Lian, J.L. Stein, G.S. Stein
https://doi.org/10.1002/gcc.22731
Genes, Chromosomes and Cancer, Volume 58, Issue 7
https://onlinelibrary.wiley.com/doi/full/10.1002/gcc.22731
Genome structure and function are intimately linked.
the nuclear architecture of rod photoreceptors differed fundamentally in nocturnal and diurnal mammals. The rods of diurnal retinas, similar to most eukaryotic cells, had most heterochromatin situated at the nuclear periphery with euchromatin residing toward the nuclear center. In contrast, the rods of nocturnal retinas displayed a unique inverted pattern with the heterochromatin localized in the nuclear center, whereas the euchromatin and nascent transcripts and splicing machinery lined the nuclear periphery. This inverted pattern was formed by remodeling of the conventional pattern during terminal differentiation of rods.
the inverted rod nuclei acted as collecting lenses, and computer simulations indicated that columns of such nuclei channel light efficiently toward the light-sensing rod outer segments. Thus, nuclear organization displays plasticity that can adapt to specific functional requirements.
Understanding the mechanisms that underlie the nuclear structural order and its perturbations is the focus of many studies. We do not have a complete understanding; however, a few key mechanisms have been described.
Introduction to the special issue “3D nuclear architecture of the genome”
Sabine Mai
https://doi.org/10.1002/gcc.22747
Genes, Chromosomes and Cancer, Volume 58, Issue 7
Gpuccio @ 102:
The DNA Helicase is composed of 3 polymers that contain 14 chains (454 amino acid residues long).
https://cbm.msoe.edu/crest/ePosters/16DNAHelicase4ESV.html
What is the probability that DNA helicase could originate by chance? Below, Stephen Meyer elucidates a method by which we can calculate the probability of a single protein originating by chance alone.
http://www.arn.org/docs/meyer/sm_origins.htm
Actually, I believe that the probability of a 100 aa protein forming by chance would be 1 in 10^30 x 10^30 x 10^130 = 1 in 10^190 according to Meyer (multiplying the probabilities adds the exponents), or 1 in 10^125 according to Sauer. For some reason Meyer doesn’t give us the grand total, a chance probability that for all intents and purposes is impossible. The probability of a 454 aa helicase forming by chance is therefore absolutely staggering. Someone else can do the calculation; I won’t, because it would be pointless to do so. Again, helicase forming by chance in isolation would have no function. Helicase’s function depends on the existence of the system of which it is a part, and that technically involves the entire cell.
Therefore, if we really want to grasp the probabilities, we need to calculate the probability of a basic prokaryotic cell. I believe Harold Morowitz has already done something like that. The number is “astronomical.”
Please note, I am not arguing this is necessarily true of every function within the cell. For example, the bacterial flagellum adds the function of motility to the cell, but since there are many prokaryotes which lack motility, it is obviously not essential for survival. On the other hand, the flagellum is not constructed out of a single protein. Are there any stand-alone, single-function proteins which add functionality to the cell? I am not suggesting that there are none; I just can’t think of any.
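A note on the exponent arithmetic at the start of this comment: multiplying the quoted probability factors is the same as adding their base-10 exponents, which is how 30 + 30 + 130 gives 190. A minimal sketch, using the values exactly as quoted (not independently checked):

```python
import math

# Probability factors as quoted above (peptide bonds, chirality, sequence).
factors = [1e-30, 1e-30, 1e-130]

# Multiplying probabilities = adding their base-10 exponents.
total_log10 = sum(math.log10(p) for p in factors)
print(total_log10)   # ~ -190, i.e. about 1 chance in 10^190

# For much longer proteins the direct product would underflow an ordinary float,
# which is why log-space arithmetic is the usual way to do these calculations.
```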
John_a_designer:
Of course I agree with you.
Given living cells, with their complex systems already existing, the probability of a new functional protein will be linked mainly to the probability of getting the right sequence of nucleotides in a protein coding gene, which is, however, astronomically small for almost all proteins.
I have said here many times that even one complex protein is enough to falsify darwinism.
The most difficult aspect in computing functional complexity for observed proteins is estimating the target space. The search space is easy enough: for all practical purposes it can be taken to equal 20^n, where n is the number of amino acids in the observed protein.
But the target space, IOWs the number of those sequences that could still perform the function we observe at a biologically relevant level, is much more difficult to estimate.
I am not familiar with Sauer’s method, which you quote. I will try to take a look at it; it seems interesting.
I have quoted Durston’s method here many times; it is based on conservation in protein families. And of course I have often used, in detail, a method of my own, inspired by ideas similar to Durston’s and based on homologies conserved over long evolutionary times.
Using that method, for example, I have shown here:
https://uncommondescent.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/
that “more than 1.7 million bits of unique new human-conserved functional information are generated in the proteome with the transition to vertebrates”.
So, if one single protein is enough to falsify darwinism, 1.7 million bits of new, original functional information generated in a specific evolutionary event, in a time window of a few million years at most, is its final death certificate.
And this is just about protein coding sequences, without considering the huge functional information arising in the epigenome, in all regulatory parts, and so on.
I am really happy that I do not have to defend the neo-darwinian theory. Of course Intelligent Design is the only reasonable approach; it’s as simple as that.
GP @ 22, welcome back, we missed you. I note that there are dynamic-stochastic systems that blend chance and necessity, with feedback and lags bringing memory and reflexive causal aspects. They are a refinement; they don’t change the main point. KF
PS, 113: I note that in some cases, once the config space is big enough, the atomic resources of the solar system or the observed cosmos are insufficient to carry out a search that rises above rounding down to zero; 500 to 1,000 bits suffices. At that point, the needle-in-haystack challenge already makes blind chance and/or mechanical necessity maximally implausible, without explicit probability estimates. And if one suggests a golden search: a search of a config space is a subset sampled, so a higher-order search for a golden search is a search of the power set, where for 500 bits the log of the power set’s cardinality is nearly 10^150. I don’t try to give the actual number; that’s calculator-smoking territory. Indeed, when I asked an online big-number calculator to spit it out for me, it complained that it cannot handle a number that large. Hence the Dembski point that a search for a search is exponentially harder than a direct search. The FSCO/I result is hard to evade, just like fine tuning.
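For anyone who wants to reproduce KF’s figures without a big-number calculator, here is a small sketch that works with logarithms rather than the raw integers; the variable names are mine.

```python
import math

bits = 500

# Size of the configuration space for 500 bits, via base-10 logs.
log10_configs = bits * math.log10(2)
print(log10_configs)            # ~150.5, i.e. about 3.27 x 10^150 configurations

# A "search for a search" samples the power set of that configuration space.
# Its cardinality is 2^(2^500); even the base-2 log of that cardinality is 2^500,
# itself roughly 3.27 x 10^150 (the number the online calculator refused to print).
log2_of_powerset_size = 2.0 ** bits
print(f"{log2_of_powerset_size:.2e}")   # ~3.27e+150
```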
DNA replication
Differences in firing efficiency, chromatin, and transcription underlie the developmental plasticity of the Arabidopsis DNA replication origins
Joana Sequeira-Mendes, Zaida Vergara, Ramon Peiró, Jordi Morata, Irene Aragüez, Celina Costas, Raul Mendez-Giraldez, Josep M. Casacuberta, Ugo Bastolla and Crisanto Gutierrez
DOI: 10.1101/gr.240986.118
Genome Res. 2019. 29: 784-797
GP @113:
“I am really happy that I must not defend the neo-darwinian theory. Of course Intelligen Design is the only reasonable approach, it’s as simple as that.”
🙂
It feels good to be on the winning side of the debate.
The neo-Darwinian ideas are under attack from the third way folks too, who are not ID-friendly.
Gpuccio @ 113,
Neither am I. I was quoting Meyer, who was alluding to Sauer, who apparently has an argument that proteins do not have to be so highly specified. Indeed, as I am sure you already know, there is some variability in the sequences of well-known proteins; for example, not all cytochrome c is the same. Douglas Axe, I know, has done some work that, at least from what I understand, “probably puts an ax” to Sauer’s higher probability estimate. However, even if Sauer is right, 1 in 10^125 for a 100 aa protein does not bode well for any kind of naturalistic explanation.
In his book The Varieties of Scientific Experience: A Personal View of the Search for God,* Carl Sagan also calculates the probability of “a modest” 100 aa long enzyme. “A way to think of it,” he writes, “is a kind of a necklace on which there are a hundred beads. There are twenty different kinds, any one of which could be in any one of these positions. To reproduce the molecule precisely, you have to put all the right beads, all the right amino acids, in the molecule in the right order. If you were blindfolded…” Sagan then goes on to explain that your chances of coming up with the right sequence by chance alone are, he calculates, about 1 in 10^130, the same result as Stephen Meyer’s. (By the way, Sagan gave these lectures in 1985, before there even was a modern ID movement.) He then adds that “Ten to the hundred-thirtieth power, or 1 followed by 130 zeros, is vastly more than the total number of elementary particles in the entire universe, which is only [only?] about ten to the eightieth (10^80).” (pp. 99-100)
He then, like Dembski, factors in Planck time along with a universe full of planets with oceans like our own (not that that really helps any) and concludes (ta-dah!): “You could never produce an enzyme molecule of predetermined structure.” Of course, ID’ists agree that there is not enough time or chance in the entire universe to form even one modest protein molecule. But not so fast.
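As a quick check on the numbers in that passage, Sagan’s 10^130 follows directly from 20 possibilities at each of 100 positions; a minimal sketch:

```python
import math

# Sagan's "modest" necklace: 20 possible amino acids at each of 100 positions.
log10_sequences = 100 * math.log10(20)
print(log10_sequences)        # ~130.1, i.e. about 10^130 possible sequences

# His comparison figure of ~10^80 elementary particles in the observable universe:
print(log10_sequences - 80)   # the sequence count exceeds it by ~50 orders of magnitude
```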
Sagan then tries to very deftly take back with his left hand what he has just put on the table with his right. (Haven’t we seen this act before?)
“Now let’s take another look,” he writes on page 101. “Does it matter if I have a hemoglobin molecule here and I pull out this aspartic acid and put in a glutamic?” (Notice that in less than two pages he has gone from a protein of 100 aa’s to one with over 850 aa’s. I won’t explain why he does this; well, actually I am really not sure why.) “Does that make the molecule function less well? In most cases it doesn’t. In most cases an enzyme has a so-called active site, which is generally about five amino acids long. And it’s the active site that does the stuff. And the rest of the molecule is involved in folding and turning the molecule on or turning it off. And it’s not a hundred places you need to explain, it’s only five to get going. And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday. Now, remember what we are trying to do: We’re not trying to make a human being from scratch… What we’re asking for is something that gets life going, so this enormously powerful sieve of Darwinian natural selection can start pulling out the natural experiments and encouraging them, and neglecting the cases that don’t work.”
Does Sagan have a point here? Remember, he gave the Gifford Lectures right after his big success with Cosmos, so publicly he had achieved scientific “rock star” status. Nevertheless, I, a mere layman, can spot at least half a dozen big flaws (actually major blunders) in his argument. Does anyone else see them?
But who am I to question the great Carl Sagan? He was one of the pioneers of astrobiology and SETI and played a big role in helping to design the scientific instruments for NASA’s Viking Mars landers and Voyager interplanetary probes. Who am I, with zero scientific credentials to my name, to question such greatness? So am I mistaken in thinking that Sagan’s thinking is mistaken?
Again, what do you see?
*[According to Wikipedia: “The Varieties of Scientific Experience: A Personal View of the Search for God is a book collecting transcribed talks on the subject of natural theology that astronomer Carl Sagan delivered in 1985 at the University of Glasgow as part of the Gifford Lectures. The book was first published posthumously in 2006, 10 years after his death. The title is a reference to The Varieties of Religious Experience by William James.
The book was edited by Ann Druyan, who also provided an introduction section…”]
JAD,
I don’t know thing 1 about this stuff, but I would appreciate an enumeration of these blunders at some point.
Edit: And I don’t doubt that Sagan made many errors. The discussion of the death of Hypatia in the original Cosmos tv series apparently is a well-known example.
John_a_designer @118:
“But who am I to question the great Carl Sagan?”
You are a thinking person; hence you may question anybody. You may not get any coherent answer back, but that’s not your problem.
KF at #114 and #115:
Hi, always great to discuss with you! 🙂
Of course, chance and necessity are often mixed in systems. That’s why we have to try to separate them in our evaluations. But, as you correctly say, that does not change the main point.
OLV:
“It feels good to be on the winning side of the debate.”
Yes, even if almost everybody thinks the opposite, it’s truth that counts! 🙂
And thank you for your usual very interesting links and quotes.
John_a_designer at #118 and DaveS at #119:
Of course not all AA positions in a protein sequence have the same functional specificity. That’s why indirect methods based on homology, like Durston’s and mine, help to evaluate the real functional information.
Let’s take for example a 154 AAs protein, human myoglobin. If all AAs had to be exactly what they are for the protein to be functional, the functional information in the protein would be:
-log2(1:20^154) = about 665 bits.
But of course that’s not the case. We know that many AAs can be different, and others cannot. Moreover, some AA positions are almost indifferent to the function (they can be any of the 20 AAs), while others can only change into some similar AA. All that is well known.
It is false that only the active site is important for the protein function. The whole structure is very important, and it depends on most of the AA positions. The active site, certainly, has a very specific role, but it is only part of the story.
So, how can we have an idea of how big functional information is in human myoglobin?
My method is rather simple. If we blast the human protein against, for example, cartilaginous fishes, we get a best hit of 127 bits (Heterodontus portusjacksoni), with very similar values for other members of the group (123 bits for Callorhinchus milii and 121 bits for Rhincodon typus). That means that about 120 bits of functional information have been conserved between cartilaginous fish and humans.
That value is very conservative. It corresponds to about 65 identities and 87 positives (in the best hit), and is already heavily corrected for chance similarities. So, we can be rather safe if we take it as a measure of the real functional information: the true value will almost certainly be higher.
The reason why conserved homology corresponds to function is very simple: cartilaginous fishes and humans are separated by more than 400 million years in evolutionary history. In that time window, any nucleotide sequence in the genome will be saturated by neutral variation, IOWs it will show no detectable homology in the two groups, unless it is preserved by negative, purifying selection because of its functional role.
So we can see that myoglobin is, after all, not so functionally specific: as a 154 AA sequence, it has at least 120 bits of functional complexity, which is not that much. Still a lot, however.
That is not surprising, because the structure of the myoglobin molecule is rather simple: it is a globular protein, with one well defined active site. Not the most complex of the lot.
Now, let’s consider another protein that I have discussed many times: the beta subunit of ATP synthase.
Again, let’s consider the human form: a 529 AA long sequence, P06576.
Now, as this is a very old protein, originating in bacteria, let’s blast the human form against the same protein in E. coli, a well known prokaryote (P0ABB4, a 460 AA long sequence).
The result is rather amazing: the two sequences show 660 bits of homology, after a separation of billions of years!
We can have no doubts that those 660 bits are true functional information.
However, as you can see, the functional information as evaluated by this method is always much less than the total possible information for a sequence that long. That’s because of course many positions are not functionally specific, and also because the BLAST method is very conservative.
Anyway, the beta subunit of ATP synthase, which is only a part of a much more complex molecule, is more than enough, with its (at least) 660 bits of functional information, to demonstrate biological design.
And it’s just one example, among thousands!
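For readers following the arithmetic, here is a small Python sketch of the two examples above; the helper name is mine, and the conserved-bit figures are simply the BLAST values quoted in the comment, read (as gpuccio does) as conservative lower bounds.

```python
import math

def max_fi_bits(length_aa: int) -> float:
    """Theoretical ceiling: every position fully specified among 20 amino acids."""
    return length_aa * math.log2(20)

# Myoglobin: 154 AAs, ~120 bits conserved between humans and cartilaginous fish.
print(max_fi_bits(154))         # ~665.6 bits, the ceiling quoted above
print(120 / max_fi_bits(154))   # the conserved lower bound is ~18% of that ceiling

# ATP synthase beta subunit: 529 AAs (human form), 660 bits conserved vs. E. coli.
print(max_fi_bits(529))         # ~2286 bits ceiling
print(660 / max_fi_bits(529))   # ~29% of the ceiling, again a conservative lower bound
```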
Thanks, gpuccio. I’ll have to chew on that (although I doubt my understanding will reach beyond the superficial).
gpuccio,
Two questions, if you please.
In the calculation of functional information, we take -log_2 of the ratio of the number of functional structures to the total number of structures possible in a particular system. It’s essentially -log_2 of a conditional probability P(E | F) where E and F are very precisely defined events.
OTOH, these BLAST scores in bits are calculated (via various schemes, I take it) simply by comparing two sequences.
I think you allude to this, but should it be clear that the BLAST numbers are lower bounds for the amount of functional information? In particular, for the E and F that Sagan is referring to? (I take it that E = some form of life arising via these proteins and F = a “primordial soup” exists).
gpuccio,
Please disregard the above post—I’ll go back to chewing.
DaveS at #125 and 126:
OK, but maybe I can help a little with your chewing! 🙂
The logical connection between functional information and homology conservation is probably not completely intuitive, so I will try to give some input about that point.
First of all, we must consider that the information in the genome, be it functional or not, is subject to what is called neutral variation. IOWs, errors in DNA duplication will affect the sequence of nucleotides in time.
The process is slow, but evolutionary times are long. So, in principle, because of Random Variation, each sequence in the genome of living beings would lose any connection with its original form, given enough time.
However, luckily that is true only for neutral or quasi neutral variation. IOWs, for sequences that have no function.
If the sequence is functional, and if the function is relevant enough (IOWs, if it can be seen by Natural Selection), what happens is that change is not allowed beyond some level: if the sequence changes enough that its function is lost or severely impaired, that variation is eliminated by what is called negative selection, or purifying selection.
Now, while positive selection is elusive and scarcely detectable in most cases, negative selection is really a powerful and ubiquitous force of nature. It is the reason why proteins retain much of their sequence specificity through hundreds or thousands of millions of years, in spite of random variation.
All those things are well known, and can be easily proved. Neutral variation is well detectable in non functional, or weakly functional, sites, for example in the third nucleotide in protein coding genes, which usually can change without affecting the protein sequence.
The concept of “saturation” is also important: it is the time necessary to erase any similarity between two neutral (non functional) sequences, because of RV. While that time can vary in different cases, in general an evolutionary split of 200 – 400 million years will be enough to cancel any detectable or significant homology between two neutral, non functional sequences in the genome.
More in next post.
DaveS (and all interested):
The concepts I have summarized in my previous post are the foundation for using homology to detect functional information. Of course the idea is not mine. Durston, in particular, has applied it brilliantly in his important paper. However, I have developed a personal approach and methodology which is slightly different, even if inspired by the same principles.
So, let’s imagine that we have two sequences that are 100% identical, and that are found in two species separated by more than 400 million years of evolutionary time. For example, humans and cartilaginous fishes, which is the scenario I have analyzed many times here.
That is a very good scenario for our purposes, because the evolutionary split between the two groups is supposed to be more than 400 million years old. So, if we compare a human protein with the same protein in sharks, we have two proteins separated by more than 400 million years of time (humans, of course, are derived from bony fishes).
So, let’s suppose that the same protein, with the same structure and function, has identical AA sequence in the two groups. That is not true for any protein I know of, but it is useful for our reasoning.
So, let’s say that we have a 150 AA protein, with a well known important function, and that our protein has exactly the same AA sequence in the two groups: humans and sharks.
Will the two protein coding genes be the same? Of course not. The third nucleotide will be different in most sites, because of neutral variation. In most cases, it could change without affecting the sequence of AAs, and in 400+ million years those sites really did change.
But, as said, let’s suppose the AA sequence is exactly the same.
What does that mean?
As explained, it means that we can confidently assume that all those 150 AAs must be what they are, for the protein to be really functional. Or, at least, most of them.
As said, that is true of no real protein. But if it were true, what does it mean?
It simply means that the target space is 1.
So, in that case, the computation is easy.
Target space = 1
Search space = 20^150 = 2^648
Target space / Search space ratio = 2^-648
Functional information = 648 bits.
Of course, this is the highest information possible for a 150 AA sequence. In this extreme case, the functional information is the same as the total potential information in the sequence.
So, we can easily see that, in this very special, and unrealistic, case, each AA identity corresponds to about 4.3 bits of functional information.
More in next post.
DaveS (and all interested):
A couple of important points:
1) It is very important that we consider long evolutionary separations. If we compare the same protein in humans and chimps, it will be almost identical in most cases. But the meaning here is different. The evolutionary separation between humans and chimps is rather short. Therefore, neutral variation has operated only for a very short time, and neutral sequences can be almost the same in the two groups just because there was not enough time for them to change. IOWs, the homology could be simply a passive result.
2) Identities are not the whole story. We must also consider similarities, IOWs AAs that are substituted by very similar ones.
Now, let’s see how the BLAST algorithm works. Again, let’s consider the homology between human ATP synthase subunit beta and the same protein in E. coli: proteins P06576 and P0ABB4. The lengths are similar, but not identical (529 vs 460 AAs).
Comparing the two sequences, we find:
Identities: 334
Positives: 382 (that includes the identities)
Score: 660 bits
Expect: 0.0
IOWs, the algorithm has performed an empirical alignment between the two sequences, and found 334 identities and almost 50 similarities. The algorithm computes a bitscore of 660 and an E value of practically 0 (that is more or less a p value related to the null hypothesis that the two sequences are evolutionarily unrelated, and that the observed similarities are due to chance, given the number of comparisons performed and the number of sequences in the present protein database).
Now, I will try to show why the Blast algorithm is very conservative, when used to evaluate functional information.
If we reason according to the full potential of functional information, and just stick to identities, 334 identities would correspond to:
334 x 4.3 = 1436 bits
Indeed, the raw bitscore is given by the Blast algorithm as 1703 bits. But the algorithm works in a way that the final, corrected value is “only” 660 bits.
DaveS (and all interested) (continued):
IOWs, the Blast algorithm usually computes about 2 bits for each identity. Considering that we are dealing with logarithmic values, that is an extreme underestimation.
But the BLAST tool is easy to use, and universally used in biology. So I stick to its result, even if it is certainly too conservative for my purposes. That leaves a lot of compensation for other possible factors (some of the identities could be a random effect, there could be some redundancy in the functional information, and so on).
Even considering all those aspects, one single protein like the beta subunit of ATP synthase is more than enough to infer design. And, as said, there are thousands of examples like that.
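As a sanity check on the bits-per-identity point, here is a minimal sketch using the BLAST figures quoted above; the reading of the 660 bits as a lower bound follows gpuccio’s argument, not anything BLAST itself asserts.

```python
import math

identities = 334        # identical positions, human vs. E. coli ATP synthase beta
blast_bit_score = 660   # corrected bit score reported for that alignment

full_bits_per_identity = math.log2(20)       # ~4.32 bits if a position must be one exact AA
print(identities * full_bits_per_identity)   # ~1443 bits (the comment rounds 4.3/identity to get 1436)

print(blast_bit_score / identities)          # ~1.98 bits actually credited per identity
# BLAST credits roughly 2 bits per identity instead of ~4.3, which is why the 660-bit
# figure is treated here as a very conservative estimate of functional information.
```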
gpuccio,
Thanks very much, this is helpful. I was looking for a “BLAST for Dummies” type tutorial, but even those assume background that I don’t have. Anyway, the way you have broken it down clears up a lot of questions I had.
GP, good stuff, of course log_2 20 = 4.32, i.e. 4.32 bits/character. KF
Earlier @ 118 I asked:
“I can spot at least half a dozen big flaws– actually major blunders– in [Sagan’s] argument. Does anyone else see them?”
https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679457
Gpuccio @123 pointed out a couple of problems. For example:
Earlier @ 112 I pointed out a couple of other problems that Sagan doesn’t even mention.
https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679203
Even if we grant, for the sake of argument, Sagan’s claim that “it’s not a hundred places you need to explain, it’s only five to get going… And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday,” Sagan still completely ignores (1) the chirality problem and (2) the problem of creating the right chemical bond, a peptide bond. In the quote I provided above at 112, Meyer gives a very succinct explanation as to why:
Again, Sagan completely ignores these two problems, even though they were well known by OoL researchers at the time. Indeed, as a layman who had an interest in the subject at the time (1985) I knew about them. Why didn’t astronomer/astrobiologist Dr. Sagan know?
In other words, even if we accept his 20^5, the probability of forming a single 100 aa protein by chance, once the chirality and peptide-bond factors are included, still comes out (converting to base 10) to roughly 1 in 10^66.
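A quick sketch of that base-10 conversion, combining Sagan’s five specified positions with the two 1-in-10^30 factors quoted from Meyer earlier in the thread (illustrative values only):

```python
import math

log10_sequence  = 5 * math.log10(20)   # Sagan's 20^5 possibilities, ~10^6.5
log10_chirality = 30                   # ~1 in 10^30 for all left-handed amino acids
log10_bonds     = 30                   # ~1 in 10^30 for peptide bonds only

print(log10_sequence + log10_chirality + log10_bonds)   # ~66.5, i.e. roughly 1 in 10^66
```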
However, that creates another problem that Sagan completely ignores. A single protein floating alone in an ocean won’t evolve into anything, even if we assume Darwinian evolution, because proteins do not self-replicate. So even a universe of oceans full of amino acids and proteins wouldn’t get you anywhere. This is why the majority of OoL researchers have moved on to the RNA world hypothesis, but that has a whole set of problems of its own.
However, I am sure that Sagan’s audience was so enamored by his scientific “rock star” status that they gave him a complete pass on a subject that the average person knows very little about.
Can anyone else see any other problems with Sagan’s argument?
Epigenetic regulation of glycosylation is the quantum mechanics of biology
Gordan Lauc, Vlatka Zoldoš
DOI: 10.1016/j.bbagen.2013.08.017
Biochimica et Biophysica Acta (BBA)
Volume 1840, Issue 1, Pages 65-70
Highlights
The majority of proteins are glycosylated.
Glycan parts of proteins perform numerous structural and functional roles.
There are no genetic templates for glycans; instead, glycans are defined by dynamic interaction between genes and environment.
Epigenetic changes enable adaptation to variations in environment.
Epigenetic regulation of glyco-genes is a powerful evolutionary tool.
Abstract
Background
Most proteins are glycosylated, with glycans being integral structural and functional components of a glycoprotein. In contrast to polypeptides, which are fully encoded by the corresponding gene, glycans result from a dynamic interaction between the environment and a network of hundreds of genes.
Scope of review
Recent developments in glycomics, genomics and epigenomics are discussed in the context of an evolutionary advantage for higher eukaryotes over microorganisms, conferred by the complexity and adaptability which glycosylation adds to their proteome.
Major conclusions
Inter-individual variation of glycome composition in human population is large; glycome composition is affected by both genes and environment; epigenetic regulation of “glyco-genes” has been demonstrated; and several mechanisms for transgenerational inheritance of epigenetic marks have been documented.
General significance
Epigenetic recording of acquired characteristics and their transgenerational inheritance could be important mechanisms used by higher organisms to compete or collaborate with microorganisms.