
Why describing DNA as “software” doesn’t really work

[Image: DNA simple.svg]

Check out Science Uprising 3. In contemporary culture, we are asked to believe – in an impressive break with observed reality – that the code of life wrote itself:

… mainstream studies are funded, some perhaps with tax money, on why so many people don’t “believe in” evolution (as the creation story of materialism). The fact that their doubt is treated as a puzzling public problem should apprise any thoughtful person as to the level of credulity contemporary culture demands in this matter.

So we are left with a dilemma: The film argues that there is a mind underlying the universe. If there is no such mind, there must at least be something that can do everything that a cosmic mind could do to bring the universe and life into existence. And that entity cannot, logically, simply be one of the many features of the universe.

Yet, surprisingly, one doesn’t hear much about mainstream studies that investigate why anyone would believe an account of the history of life that is so obviously untrue to reason and evidence. Denyse O’Leary, “There is a glitch in the description of DNA as software” at Mind Matters News

Maybe a little uprising wouldn’t hurt.

Here at UD News, we didn’t realize that anyone else had a sense of the ridiculous. Maybe the kids do?

See also: Episode One: Reality: Real vs. material

and

Episode Two: No, You’re Not a Robot Made of Meat

Notes on previous episodes

Seven minutes to goosebumps (Robert J. Marks) A new short film series takes on materialism in science, including that of AI’s pop prophets

Science Uprising: Stop ignoring evidence for the existence of the human mind Materialism enables irrational ideas about ourselves to compete with rational ones on an equal basis. It won’t work (Denyse O’Leary)

and

Does vivid imagination help “explain” consciousness? A popular science magazine struggles to make the case. (Denyse O’Leary)

Further reading on DNA as a code: Could DNA be hacked, like software? It’s already been done. As a language, DNA can carry malicious messages

and

How a computer programmer looks at DNA And finds it to be “amazing” code

Follow UD News at Twitter!

134 Replies to “Why describing DNA as “software” doesn’t really work”

  1. 1
    ET says:

    DNA isn’t the software. The immaterial information that runs the genetic code, is.

  2. 2
    OLV says:

    Making what Bill Gates said about DNA such a big deal is misleading at best. Mr Gates may know quite a bit about software but has no clue about DNA.
    When Professor Denis Noble, who may know a little more about cellular biology than Mr Gates, was asked at a physiology meeting to explain what a gene is, he simply said that nobody knows. DNA without the rest of the sophisticated cellular machinery is as valuable as a zero written on the left side of an integer number (e.g. $099 = $99). DNA seems like a very complex repository of information that the cellular machinery can access and process.
    Some folks lack the humility to admit our deep ignorance; they simply explain what they don’t understand using reductionistic poetry, and if things get tough, then oversimplified illustrations along with some elegant handwaving may help to persuade the gullible crowd to believe they know something that we don’t.
    It’s time to stop playing superfluous games and to start calling things by their names.

  3. 3
    AaronS1978 says:

    The IP metaphor is sticky and easy to use. Computers and software have only been around for about one lifetime, yet it has become the only way we describe our own physiology. It’s similar to comparing your hands to a hammer: both can be used to pound sharp objects into walls, but one does it a lot better and is a lot less painful. Yet we don’t draw these parallels between our hand and the hammer.

    Computers and the software they run are tools.
    They are not the same as our brain and our DNA
    They are even fundamentally different on a molecular level. There are many things in this world that are capable of performing similar or the same job as other things in this world but that most certainly does not make them the same.

  4. 4
    Belfast says:

    There is a school of thought that says that DNA is no more a code than tree rings are a code.
    I think it looks more like a formatted database if you want to use computer comparisons, but even that has shortcomings.

  5. 5
    OLV says:

    The day human engineers and scientists come up with anything at least remotely close to the complex functionality and the functional complexity of biological systems, we will deserve to uncork all the champagne bottles in the world and brag about how smart we are. Until then, let’s be humble. Deal? 🙂

  6. 6
    bornagain77 says:

    Sequential information on DNA is one thing; the quantum ‘positional information’ of an entire organism takes the argument against Darwinian materialism to an entirely new level.

    Darwinian Materialism vs. Quantum Biology – Part II – video
    https://www.youtube.com/watch?v=oSig2CsjKbg

  7. 7
    ET says:

    Belfast:

    There is a school of thought that says that DNA is no more a code than tree rings are a code.

    Tree rings are data recorders. There isn’t any code. DNA encodes for amino acids in the grand scheme of the genetic code. One codon represents an amino acid or STOP.
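
ET’s point that one codon maps to one amino acid or STOP is easy to make concrete. Below is a minimal Python sketch using a small, illustrative subset of the standard genetic code table (not the full 64-codon table):

```python
# A few entries from the standard genetic code: each RNA codon maps to exactly
# one amino acid, or to a STOP signal. Illustrative subset, not the full table.
GENETIC_CODE = {
    "AUG": "Met",                                  # also the usual start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

for codon in ("AUG", "GGC", "UAA"):
    print(codon, "->", GENETIC_CODE[codon])
```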

  8. 8
    daveS says:

    ET,

    Couldn’t one argue that in a cross-section of a tree, the rings “represent” seasons (e.g., light rings representing the growing season and the dark rings representing the dormant season)?

  9. 9
    ET says:

    daves- Tree rings don’t represent anything unless you have studied them. And only then can you get any information. With the genetic code we only discovered the existing code. It keeps chugging along regardless of what we know.

  10. 10
    daveS says:

    ET,

    Isn’t that also true of, say, a message expressed in Morse Code? I could compose a message and transmit it with a shortwave radio. Unless someone was monitoring that frequency at the moment, it would simply vanish into the aether. It’s still an encoded message.

    Suppose I’m hiking in the woods and come upon the stump of a tree that someone has recently cut down. I notice a curious sequence of rings, where “-” means a light ring and “.” means a dark ring.

    If we apply our ID techniques, clearly we may be able to conclude that’s a coded message, correct?
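
For what it’s worth, daveS’s hypothetical can be made concrete. The sketch below assumes an invented convention (a light ring read as a dash, a dark ring as a dot, with a marker between letters); the ring sequence itself is made up purely for illustration:

```python
# Read a hypothetical ring sequence as Morse code: "L" = light ring (dash),
# "D" = dark ring (dot), "/" marks a letter boundary. Data is entirely invented.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
         "..": "I", "---": "O", "...": "S", "-": "T"}

rings = "D D D / L L L / D D D"   # reads as ... --- ... i.e. "SOS"

letters = ["".join("-" if r == "L" else "." for r in group.split())
           for group in rings.split("/")]
print("".join(MORSE.get(s, "?") for s in letters))   # -> SOS
```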

  11. 11
    Silver Asiatic says:

    There is a school of thought that says that DNA is no more a code than tree rings are a code.

    Tree rings do not provide instructions for variable operations and functions to follow. They are just the record of past events.

    If there is actually a “school of thought” that proposes that kind of analogy, it’s not much of a school.

  12. 12
    Silver Asiatic says:

    Dave

    If we apply our ID techniques, clearly we would conclude that’s a coded message, correct?

    We apply ID techniques to determine if there is an information (messaging) circuit.

    Sender. Transmission. Receiver. Translation. Response.

    That’s an information circuit. Do we see it in tree rings?

  13. 13
    daveS says:

    SA,

    Not that I know. I’m suggesting a hypothetical, where we discover a sequence of tree rings which turns out to form a recognizable message in Morse Code (for example, perhaps the first sentence of the Gettysburg Address). If such a tree trunk was found, clearly we would identify it as a coded message, even in the absence of this other machinery you refer to, correct?

  14. 14
    ET says:

    Pee tests. Is pee a code? Trained medical staff can get information from pee. And what about ColoGuard?

  15. 15
    EricMH says:

    The genetic code is a series of symbols that instruct the ribosome how to construct a protein. Is there any reason to believe the ribosome’s behavior is not a finite automaton? Otherwise it is a computational code. I do not understand why people say the genetic code is not a computational code.

  16. 16
    daveS says:

    ET

    Trained medical staff can get information

    Well, I guess that information must be encoded somehow, right?

  17. 17
    ET says:

    Your palms. There is a code on your palms. And for a small fee the people of the silk bandanas will decode that message for you. For another small fee they will decode the message of the cards- your message. They also have a glass ball…

    The people of the silk bandanas can be found at most seaside boardwalks, traveling carnivals and may even be lurking locally.

  18. 18
    ET says:

    EricMH- The ribosome is a genetic compiler. The source code is the string of nucleotides and the object code is the functional protein that is produced. And the ribosome recognizes miscoding errors: The Ribosome: Perfectionist Protein-maker Trashes Errors

    Just more positive evidence for ID
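
ET’s “compiler” analogy above can be sketched in a few lines. This is a toy model, not the actual biochemistry: a nucleotide “source” string is translated codon by codon into a protein “object” string, halting at a STOP codon and rejecting any codon missing from the deliberately tiny, illustrative table as a crude stand-in for miscoding detection:

```python
# Toy "compilation" of an mRNA source string into a protein product string.
# Halts at a STOP codon; raises on any codon missing from the table, as a
# crude stand-in for the ribosome rejecting miscoding errors.
CODE = {"AUG": "M", "UUU": "F", "GGC": "G", "UAA": None, "UAG": None, "UGA": None}

def translate(mrna: str) -> str:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon not in CODE:
            raise ValueError(f"unrecognized codon {codon!r}")
        amino_acid = CODE[codon]
        if amino_acid is None:   # STOP codon: end of the "object code"
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("AUGUUUGGCUAA"))   # -> MFG
```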

  19. 19
    ET says:

    daves:

    Well, I guess that information must be encoded somehow, right?

    Look up the word “encode” and try to find a definition that fits “living one’s life”, because that is how the information gets in our waste.

  20. 20
    gpuccio says:

    DaveS at #13:

    I’m suggesting a hypothetical, where we discover a sequence of tree rings which turns out to form a recognizable message in Morse Code (for example, perhaps the first sentence of the Gettysburg Address). If such a tree trunk was found, clearly we would identify it as a coded message, even in the absence of this other machinery you refer to, correct?

    If the encoded message is complex enough, that’s correct. If the tree rings encoded at least 500 bits of functional information, that would be an object for which we could infer design.

    Are you looking for that in stumps? Good luck, really!

    The connection between tree rings and seasons is, of course, a necessity connection, and not an encoded message. That should be obvious to anybody.

    An example that I have made a few times here is the following:

    Some human mission arrives at a distant planet, about which we know nothing. There is no sign of life or intelligence on it, but the astronauts observe a mountain wall where a long series of marks is present. Each mark can very well be interpreted as a result of weather events. However, the marks can easily be classified into two different types, and so the sequence can be read as a binary string.

    One of the astronauts, who is a mathematician, after some observation finds that the binary sequence on the wall, when read by an appropriate code, corresponds exactly to the first million decimal digits of pi.

    The question is very simple: can we infer design?

    The answer, of course, is yes.

    So, good luck with your stumps.
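
gpuccio’s thought experiment is straightforward to operationalize. The sketch below assumes one particular encoding (each decimal digit written as four binary marks, most significant bit first); both the encoding and the mark sequence are hypothetical, and only a short, well-known prefix of pi is checked rather than a million digits:

```python
# Check whether a binary mark sequence, read four bits per decimal digit,
# reproduces the leading digits of pi. Encoding and data are hypothetical.
PI_DIGITS = "314159265358979323846264338327950288"   # well-known prefix of pi

def decode_digits(bits: str) -> str:
    return "".join(str(int(bits[i:i + 4], 2)) for i in range(0, len(bits), 4))

# Build a matching mark sequence just to exercise the check.
marks = "".join(format(int(d), "04b") for d in PI_DIGITS[:12])

print(decode_digits(marks))                    # -> 314159265358
print(decode_digits(marks) == PI_DIGITS[:12])  # -> True
```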

  21. 21
    EricMH says:

    @GP, DaveS’ point is good, I think, and merits further analysis to explain the flaw.

    There is a natural process that creates a compressible code, i.e. a 0/1 pattern of rings that represents the seasons; regular seasons mean a compressible encoding. If we use the uniform chance hypothesis, then the tree rings register a large amount of CSI. If we don’t use the uniform chance hypothesis, what is the chance hypothesis? The change in seasons. If we use the change in seasons as the chance hypothesis, there is zero CSI.

    Same issue with the genetic code. Say we find a compressible code in the gene. Uniform hypothesis of course shows high CSI, so uniform hypothesis is false. But, that does not rule out a natural process capable of creating compressible regularity, e.g. seasonal regularity in the tree ring case.

    This is why the detachable specification is so important, as your example with pi illustrates. The digits of pi are independent from the seasons, yet the seasons are the chance hypothesis for tree rings. So, if using the seasons as a chance hypothesis and pi as a specification results in high CSI, then we can infer the causal agency of something other than the seasons. If the specification is an abstraction, such as pi, then since intelligence is the only known cause that can implement abstractions, we can infer intelligent agency in the case of rings spelling out the digits of pi.

    Returning full circle to the genetic code, we can apply the same reasoning. Generating functional proteins from the genetic code has an extremely small probability under known natural processes, especially the Darwinian process of random mutation and natural selection. If we use the specification of a software code, and as far as we know only human intelligence can create software codes since they require the power of abstraction and deductive logic, then we end up getting a high amount of CSI with the genetic code. Thus, we can infer intelligent agency at work in the genetic code.
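
EricMH’s point about the choice of chance hypothesis can be put in rough numbers. The sketch below uses the usual surprisal measure, -log2(P): the same alternating ring pattern carries many bits under a uniform coin-flip hypothesis and essentially none under a “seasons alternate deterministically” hypothesis. The probabilities are stipulated purely for illustration:

```python
import math

# An alternating light/dark ring pattern, as from a regular seasonal cycle.
pattern = "LD" * 50
n = len(pattern)

# Chance hypothesis 1: each ring independently light or dark with probability 1/2.
bits_uniform = -math.log2(0.5 ** n)   # 100 bits of surprisal

# Chance hypothesis 2: seasons alternate deterministically, so the pattern is certain.
bits_seasonal = -math.log2(1.0)       # 0 bits

print(bits_uniform, bits_seasonal)
```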

  22. 22
    gpuccio says:

    EricMH:

    In general I agree with what you say. But I have to make some clarifications which are, IMO, important:

    a) A chance hypothesis (null hypothesis) is simply the hypothesis that no real effect is observed, and that the configurations we observe can reasonably be explained by random events following some probability distribution that can reasonably describe the system, given the necessity laws working in the system itself.

    There is no reason at all why we have to hypothesize a uniform distribution. Any reasonable probability distribution could describe the system, and still the result would be a random result.

    b) A necessity explanation has nothing to do with any random hypothesis, and with any probability distribution. A necessity explanation observes a cause and effect relationship that explains the configuration we observe. Causes existing in the system are generating the configuration we observe, and not because of a probability distribution, but because of a direct causal connection. So, the connection between seasons and tree rings is a necessity connection, not the result of any probability distribution (even if, of course, random effects can be present too).

    c) Specific configurations that have the features of code and a functional specification cannot really arise as a result of any probability distribution, if they are complex enough. A probability distribution, of course, does not know anything about the English language, and it does not understand meanings. That’s why a Shakespeare sonnet will never arise from a probability distribution, any probability distribution. There is no need for a uniform distribution of the letters. You can make one or more letters more likely, but that will never generate the sonnet.

    In the same way, even if random mutations do not really follow a uniform distribution (indeed they don’t), no special probability distribution has any chance to generate the code for a complex functional protein. It does not matter that some mutations are more probable than others; the correct sequence is still by far too unlikely to originate in any possible physical system.

    In a design process, there is a very specific necessity connection between the designer, his understanding, his conscious representations, his actions and the final result of the process. IOWs, the physical object is shaped by the designer according to the form and meaning already present in his consciousness. A series of necessity events establishes the connection. Probability has no role here, except for possible noise generation.

    The connection between seasons and tree rings is a necessity one. But it is not symbolic, and it is very simple. Given the laws existing in the system, we understand very well how a relatively simple and repetitive binary sequence like tree rings originates from existing and repetitive events in the system.

    You mention compressibility. That’s an important point, because I have always argued that compressible information is often a result of necessity laws acting in the system.

    For example, a sequence of 1000 heads from coin tossing is extremely unlikely, if the coin is really fair. So, it could well be a result of design. But still, if the coin is not fair, or if any other condition in the system strongly favours heads, then that sequence can become very likely, maybe necessary.

    A sequence of 1000 heads is highly compressible. Compressible information can be a result of design or of some simple necessity law. The explanatory filter has always been well aware of that, that’s why necessity explanations must be seriously considered, especially with compressible information.

    But a Shakespeare sonnet, or the sequence of a functional protein, is scarcely compressible information. The functional information in those objects does not derive from some repetition of simple configurations: it is directly connected to much more complex realities, like those of language and of meanings, or those of a clear understanding of biological functions, folding, biochemistry, and so on.

    Only design can generate complex objects of that type. They arise neither from probability distributions of any kind, nor from the action of existing necessity laws.
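
gpuccio’s coin example can be made numerical. Under a fair coin, 1000 heads in a row has probability 2^-1000, i.e. about 1000 bits of surprisal; under a heavily biased coin the same highly compressible outcome becomes unremarkable. A quick sketch:

```python
import math

n = 1000

# Fair coin: probability of n heads in a row is 0.5 ** n.
bits_fair = -n * math.log2(0.5)      # 1000.0 bits of surprisal

# Heavily biased coin, p(heads) = 0.999: the same outcome is far less surprising.
bits_biased = -n * math.log2(0.999)  # roughly 1.4 bits

print(bits_fair, round(bits_biased, 2))
```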

  23. 23
    Silver Asiatic says:

    DaveS

    I’m suggesting a hypothetical, where we discover a sequence of tree rings which turns out to form a recognizable message in Morse Code (for example, perhaps the first sentence of the Gettysburg Address). If such a tree trunk was found, clearly we would identify it as a coded message, even in the absence of this other machinery you refer to, correct?

    Yes, I think you’re right. It would be difficult to explain that correlation by the randomness of tree rings alone. There are always outliers and chance occurrences, but some sort of explanation would be needed if we found the exact sequence you describe. At the same time, the information remains embedded in a tree and does not appear to be communicated beyond that, and we also know the origin and cause of that information.

    But I’d put it this way – if people are really presenting that analogy as a materialist response to the ID detection we have with DNA, I mean seriously? That’s just clutching at straws. It indicates the extreme weakness of the materialist view — just running away from the evidence.

    I think I’ve followed you long enough on this site to guess correctly that you do not really believe that is in any way a valid response to the strength of the ID argument … right? Or do you think that is a strong opposing argument to ID?

  24. 24
    OLV says:

    Gpuccio,
    Glad to see you back!!!
    I was missing your posts and comments.

  25. 25
    daveS says:

    SA,

    No, it’s not an anti-ID argument at all. It’s more an attempt to understand what the minimal requirements are for something to count as a code.

  26. 26
    Silver Asiatic says:

    Dave – sorry, I misunderstood. It’s a good question and it’s exactly the kind of thing that ID can work on. It’s always a matter of gaining more precision in a science that is based on probabilities and predictions.

  27. 27
    bill cole says:

    Hi Gpuccio
    I mentioned you toward the end of this book review of Darwin Devolves that Perry Marshall asked me to participate in.
    https://youtu.be/MiiV5LgUe5k

  28. 28
    ET says:

    daves- The minimal requirements are that it has to meet the definition of a code. Larry Moran on the real genetic code and how it is the same type of code as Morse code.

  29. 29
    EricMH says:

    @GP, thanks, what you have written has given me some ideas.

    I would say meaningful information is usually somewhere between extreme simplicity and incompressibility. E.g. most code and English text is pretty compressible.

    One interesting side point: Shannon says English is about 50% redundant, which is why we can create 2D crosswords. On the other hand, if it were only about 30% redundant, Shannon claims we could create 3D crosswords. Proteins are essentially 3D crosswords, and the genetic code is very low in redundancy.

    On the other hand, being incompressible with an external function is not quite the same as being designed. A photograph of a crystal will be incompressible due to noise, and have a concise external referent, but does not indicate intelligent design.

    So, I would still say the distinguishing feature of CSI is the detachable specification, which furthermore has to be an abstract specification, i.e. something that cannot be derived from physical material.

    The photograph of a crystal does not meet this criterion because the photograph is generated from the external referent, so the external referent is not detachable. However, in the case of the genetic code, the code is not generated from the function, so the function is a detachable specification.
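
EricMH’s redundancy figures can be roughly sanity-checked with a general-purpose compressor. The snippet below compresses a short English passage with zlib; the result is only a crude estimate, since Shannon’s ~50% figure rests on longer-range statistics and prediction experiments rather than on what deflate can find in a short sample. The sample text is arbitrary:

```python
import zlib

# A short English sample (the opening of the Gettysburg Address).
text = ("Four score and seven years ago our fathers brought forth on this "
        "continent, a new nation, conceived in Liberty, and dedicated to the "
        "proposition that all men are created equal. Now we are engaged in a "
        "great civil war, testing whether that nation, or any nation so "
        "conceived and so dedicated, can long endure.")

raw = text.encode("ascii")
packed = zlib.compress(raw, 9)

ratio = len(packed) / len(raw)
print(f"compressed to {ratio:.0%} of original size")
print(f"rough redundancy estimate: {1 - ratio:.0%}")
```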

  30. 30
    ET says:

    EricMH:

    However, in the case of the genetic code, the code is not generated from the function, so the function is a detachable specification.

    If materialism is correct then the genetic code was generated from the function. It just emerged from the system, which emerged from the components nature just happened to produce.

  31. 31
    john_a_designer says:

    Here is a stunning claim from Abel and Trevors.

    Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel (Fig. 3) within the context of a viable cell. They are decoded by a receiver and arrive eventually at a final destination. At this destination, the instantiated messages catalyze needed biochemical reactions. Both cellular and extracellular enzyme functions are involved (e.g., extracellular microbial cellulases, proteases, and nucleases). Making the same messages over and over for millions to billions of years (relative constancy of the genome, yet capable of changes) is one of those functions. Ribozymes are also messages, though encryption/decryption coding issues are absent. The message has a destination that is part of a complex integrated loop of information and activities. The loop is mostly constant, but new Shannon information can also be brought into the loop via recombination events and mutations. Mistakes can be repaired, but without the ability to introduce novel combinations over time, evolution could not progress. The cell is viewed as an open system with a semi-permeable membrane. Change or evolution over time cannot occur in a closed system. However, DNA programming instructions may be stored in nature (e.g., in permafrost, bones, fossils, amber) for hundreds to millions of years and be recovered, amplified by the polymerase chain reaction and still act as functional code. The digital message can be preserved even if the cell is absent and non-viable. It all depends on the environmental conditions and the matrix in which the DNA code was embedded. This is truly amazing from an information storage perspective. (emphasis added)

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/

    One of the key questions you have to answer if you believe in a naturalistic dys-teleological origin for the DNA or RNA is how did chemistry create the code? Do you have any evidence of how an undirected and purposeless physical process created what we intelligent beings recognize as code? If you do please give us your explanation. Or, is it just your belief?

    If you don’t have an explanation, I’m going to make the same assumptions I use to identify ducks: If it looks like a code and operates like a code chances are that it really is a code.

    Some people call that the “duck test.” I just call it logical thinking.

  32. 32
    daveS says:

    ET,

    The minimal requirements are that it has to meet the definition of a code

    That’s useful 😛

  33. 33
    ET says:

    daves @ 32- It should be very useful to anyone saying:

    It’s more an attempt to understand what the minimal requirements are for something to count as a code.

    😛

  34. 34
    ET says:

    To John A Designer, Upright Biped, gpuccio, EricMH, et al: Please see:

    The Origin of Prebiotic Information System in the Peptide/RNA World: A Simulation Model of the Evolution of Translation and the Genetic Code
    Sankar Chatterjee (Department of Geosciences, Museum of Texas Tech University) and Surya Yadav (Rawls College of Business, Texas Tech University)
    Received: 13 December 2018; Accepted: 25 February 2019; Published: 1 March 2019

    https://www.ncbi.nlm.nih.gov/pubmed/30832272

    Food for thought- although it is all speculation.

  35. 35
    gpuccio says:

    OLV:

    Yes, I took some rest!

    Nice to see you again 🙂

  36. 36
    ET says:

    daves re 32- Tree rings are not a code because they do not meet any standard and accepted definition of a code.

  37. 37
    daveS says:

    ET,

    Tree rings in themselves are not a code, but a designer could use them to send messages in Morse Code, presumably.

  38. 38
    Silver Asiatic says:

    DaveS

    Tree rings in themselves are not a code, but a designer could use them to send messages in Morse Code, presumably.

    A designer could use them to send messages about the age of trees.

  39. 39
    ET says:

    daves:

    Tree rings in themselves are not a code, but a designer could use them to send messages in Morse Code, presumably.

    The Slowskys may like to communicate that way. But with whom, we don’t know.

  40. 40
    bornagain77 says:

    Where in the world DaveS was trying to go with tree rings I have no idea. I thought he might be trying to rehash the old fallacy that the coding in DNA could occur naturally, but he apparently does not hold that the tree rings occur naturally, i.e. “a designer could use them to send messages in Morse Code”. Whatever that is supposed to mean.

    But anyways, the coded information in DNA is certainly not reducible to the laws of (classical) physics or chemistry:

    British Geneticist Robert Saunders Leaves a Highly Prejudiced Signature in His Review of “Signature in the Cell” – April 2012
    Excerpt: Meyer points out a rather astonishing fact – about which there is no scientific controversy – regarding the arrangements of the nucleobases in DNA. There are absolutely no chemical affinities or preferences for which nucleobases bond with any particular phosphate and sugar molecule. The N-glycosidic bond works equally well with (A), (T), (G), or (C). And secondly, there are also no chemical bonds in the vertical axis between the nucleobases. What this means is that there are no forces of physical/chemical attraction and no chemical or physical law that dictates the order of the nucleobases; they can be arranged in a nearly infinite amount of different sequences.
    http://www.algemeiner.com/2012.....-the-cell/

    And the other Darwinian gambit, i.e. natural selection, is also a joke when it comes to explaining the coded information within DNA.

    The waiting time problem in a model hominin population – 2015 Sep 17
    John Sanford, Wesley Brewer, Franzine Smith, and John Baumgardner
    Excerpt: The program Mendel’s Accountant realistically simulates the mutation/selection process,,,
    Given optimal settings, what is the longest nucleotide string that can arise within a reasonable waiting time within a hominin population of 10,000? Arguably, the waiting time for the fixation of a “string-of-one” is by itself problematic (Table 2). Waiting a minimum of 1.5 million years (realistically, much longer), for a single point mutation is not timely adaptation in the face of any type of pressing evolutionary challenge. This is especially problematic when we consider that it is estimated that it only took six million years for the chimp and human genomes to diverge by over 5 % [1]. This represents at least 75 million nucleotide changes in the human lineage, many of which must encode new information.
    While fixing one point mutation is problematic, our simulations show that the fixation of two co-dependent mutations is extremely problematic – requiring at least 84 million years (Table 2). This is ten-fold longer than the estimated time required for ape-to-man evolution. In this light, we suggest that a string of two specific mutations is a reasonable upper limit, in terms of the longest string length that is likely to evolve within a hominin population (at least in a way that is either timely or meaningful). Certainly the creation and fixation of a string of three (requiring at least 380 million years) would be extremely untimely (and trivial in effect), in terms of the evolution of modern man.
    It is widely thought that a larger population size can eliminate the waiting time problem. If that were true, then the waiting time problem would only be meaningful within small populations. While our simulations show that larger populations do help reduce waiting time, we see that the benefit of larger population size produces rapidly diminishing returns (Table 4 and Fig. 4). When we increase the hominin population from 10,000 to 1 million (our current upper limit for these types of experiments), the waiting time for creating a string of five is only reduced from two billion to 482 million years.
    http://www.ncbi.nlm.nih.gov/pm.....MC4573302/

    Whereas, on the other hand, experimental realization of Maxwell’s demon thought experiment has now demonstrated that an Intelligent observer does have the physical capacity to encode information into material substrates at the atomic level.

    As the following paper highlights, it has now been experimentally demonstrated that knowledge of a particle’s position converts information into energy.

    Maxwell’s demon demonstration turns information into energy – November 2010
    Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
    http://www.physorg.com/news/20.....nergy.html

    And as the following 2010 article stated about the preceding experiment, “This is a beautiful experimental demonstration that information has a thermodynamic content,”

    Demonic device converts information to energy – 2010
    Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
    http://www.scientificamerican......rts-inform

    And as the following 2017 article states: James Clerk Maxwell (said), “The idea of dissipation of energy depends on the extent of our knowledge.”,,,
    quantum information theory,,, describes the spread of information through quantum systems.,,,
    Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,

    The Quantum Thermodynamics Revolution – May 2017
    Excerpt: the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.”
    In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply.
    They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured.,,,
    Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
    https://www.quantamagazine.org/quantum-thermodynamics-revolution/

    Moreover, classical information is shown to be a subset of quantum information by the following method. Specifically, in the following 2011 paper, “researchers ,,, show that when the bits (in a computer) to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer’s state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se, instead an object’s entropy is always dependent on the observer.”

    Quantum knowledge cools computers: New understanding of entropy – June 1, 2011
    Excerpt: Recent research by a team of physicists,,, describe,,, how the deletion of data, under certain conditions, can create a cooling effect instead of generating heat. The cooling effect appears when the strange quantum phenomenon of entanglement is invoked.,,,
    The new study revisits Landauer’s principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer’s state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,,
    In measuring entropy, one should bear in mind that an object does not have a certain amount of entropy per se, instead an object’s entropy is always dependent on the observer. Applied to the example of deleting data, this means that if two individuals delete data in a memory and one has more knowledge of this data, she perceives the memory to have lower entropy and can then delete the memory using less energy.,,,
    No heat, even a cooling effect;
    In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.
    Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
    http://www.sciencedaily.com/re.....134300.htm

    Thus, to put it simply, Darwinists have no clue how coded information was put into DNA so as to circumvent the second law, whereas on the other hand, ID advocates have a demonstrated mechanism, via experimental realization of Maxwell’s demon thought experiment, showing that mind is able to encode information at the atomic level in order to circumvent the second law.

    As far as empirical science itself is concerned, the matter is settled. The materialistic explanations of Darwinian evolution are found to be grossly inadequate as to explaining the coded information in DNA. And only Intelligence has the demonstrated causal sufficiency to explain the coded information in DNA.
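
As a side note to the “information has a thermodynamic content” quotes above, the standard figure for the minimum energy dissipated when one bit of information is erased at temperature T is the Landauer bound, E = k_B · T · ln 2. A quick calculation at roughly room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact by the 2019 SI definition)
T = 300.0            # roughly room temperature, in kelvin

landauer_limit = k_B * T * math.log(2)    # minimum energy to erase one bit
print(f"{landauer_limit:.2e} J per bit")  # about 2.9e-21 J
```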

  41. 41
    daveS says:

    SA,

    A designer could use them to send messages about the age of trees.

    Heh. Indeed.

    Anyway, it seems there are at least a couple of different ID arguments having to do with codes here:

    1) We could find a message encoded in nature in some obvious and human-readable way (e.g., in tree rings, perhaps in DNA, etc). It appears no one here finds that a realistic possibility.

    2) We could find information circuits, or entire information processing systems (which use codes) somewhere in nature. That sort of message is more subtle, in that the designer is not explicitly announcing his existence. It’s not like the constellations suddenly rearranging themselves so as to spell out “John 3:16” or the like.

  42. 42
    ET says:

    The message that the genetic code is the result of intelligent design is far from subtle. The components and systems required to carry it out are more than enough evidence for ID. To think that nature did it, without trying to or wanting to, is beyond absurd. Especially given that nature seeks the line of least resistance, meaning simple is the rule. Just look at Spiegelman’s Monster.

  43. 43
    bornagain77 says:

    Follow up video to the “DNA Is Code: Who Coded It?” video was just uploaded:

    Stephen Meyer: DNA and Information – video
    https://www.youtube.com/watch?v=7c9PaZzsqEg&list=PLR8eQzfCOiS1OmYcqv_yQSpje4p7rAE7-&index=9

  44. 44
    john_a_designer says:

    One of the icons of so-called irreducible complexity (IC) is the bacterial flagellum. However, there are other, perhaps even better, examples of IC. In my opinion, prokaryote DNA replication is a far more daunting problem for the Darwinist. However, instead of one molecular machine, like the flagellum, you have several interacting machines acting in a coordinated manner. This still fits Behe’s definition of IC as being “a single system which is composed of several interacting parts, and where the removal of any one of the parts causes the system to cease functioning.”

    For example, to start replication in prokaryote DNA you need an initiation enzyme which creates a replication bubble where another enzyme called helicase attaches itself and begins, like a zipper, to unbind the two complementary strands of the DNA double helix. Another enzyme called primase creates another starting point (a primer) on both of the separated strands, known as the 5’ and 3’ or leading and lagging strands. DNA polymerase III uses this primer– actually a short strand of RNA– and adds the complementary nucleobases (A to T, T to A, C to G, G to C) to the single parent strand. In a nutshell, helicase divides one double-stranded DNA helix into two single “parent” or template strands to which complementary nucleotides are added by pol III, and the result is two identical double-stranded DNA helixes.

    Of course, it is somewhat more complicated than that. (Please watch the first video below.) For example, as helicase unbinds the two strands of the double helix, which are wrapped around each other to begin with, there is a tendency for tangling to occur as a result of the process. Another enzyme called gyrase (or topoisomerase II) is needed to prevent this tangling from occurring. Another problem is that the bases for the lagging strand must be added discontinuously, which results in short segments known as Okazaki fragments. These fragments must eventually be joined back together by an enzyme known as ligase. (We could also discuss error correction, which is another part of the replication process.)

    Here are a few videos which describe the process in more detail.

    https://www.youtube.com/watch?v=O3v04spjnEg&t=2s

    https://www.youtube.com/watch?v=bePPQpoVUpM

    https://www.youtube.com/watch?v=0Ha9nppnwOc

    While it’s true that the flagellum is irreducibly complex, it is not essential for life itself. There are a number of single-celled organisms that exist without flagella. However, life cannot exist without DNA replication (nor transcription, translation, etc.). Furthermore, with DNA replication the Darwinist cannot kick the can down the road any further. DNA replication in prokaryotes is as far as you can go, and then you are confronted with the proverbial chicken-or-egg problem. DNA is necessary to create the proteins which are used in its own replication. For example, the helicase which is absolutely essential for DNA replication is specified in the DNA code which it replicates. How did that even get started? Maybe one of our know-it-all interlocutors can tell us.

    The problem with the Darwinian approach is not scientific; it is philosophical. The people committed to this approach believe in it because they believe that natural causes are the ultimate explanation for their existence. However, science has not proven such a world view to be true. (That’s not something science can do.) So ironically, whatever they believe, they believe it by faith.
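
The base-pairing rule john_a_designer describes above (A to T, T to A, C to G, G to C) is simple enough to sketch. The snippet below builds the complementary strand for a short made-up template, which is the informational core of what the polymerase does, leaving aside the enzymatic machinery, 5’/3’ directionality, primers and proofreading:

```python
# Build the complementary DNA strand from a template strand using
# Watson-Crick pairing (A-T, C-G). Toy example: ignores 5'/3' directionality,
# primers, Okazaki fragments and proofreading.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(template: str) -> str:
    return "".join(PAIR[base] for base in template)

template = "ATGCGTTA"                # made-up sequence
print(complement(template))          # -> TACGCAAT
```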

  45. 45
    Silver Asiatic says:

    DaveS

    Anyway, it seems there are at least a couple of different ID arguments having to do with codes here:
    1) We could find a message encoded in nature in some obvious and human-readable way (e.g., in tree rings, perhaps in DNA, etc). It appears no one here finds that a realistic possibility.

    ID gives the case that some aspects of the universe show evidence of having been designed by an intelligence. Clearly, it would be a pointless task for ID to travel around human culture and identify everything that we already know that humans created and then declare that as evidence. However, if there was a new discovery of caves where images of animals are inscribed on the walls, that is a relevant use for ID, somewhat – in a forensics task inferring that the images could not have been created by random, non-human movements. Yes, you’re right that nobody expects to find a quote from the Gospel written out in tree ring codes. I think, actually, such a finding would be dismissed as a radical outlier by most ID researchers, although it would be almost impossible to explain as a random occurrence. I suppose it would be evidence of human interference or better yet, some kind of alien intelligence. As a single instance, it’s an outlier. If every tree in a particular forest showed similar results, it’s an ID conclusion. There is some intelligence at work there. We could keep looking for such things, or looking at every rock formation, converting it to Morse Code and then reading it — but ID finds enough positive, repeated and stronger evidence in fine-tuning of the universe and biological systems.

    2) We could find information circuits, or entire information processing systems (which use codes) somewhere in nature. That sort of message is more subtle, in that the designer is not explicitly announcing his existence. It’s not like the constellations suddenly rearranging themselves so as to spell out “John 3:16” or the like.

    Right, exactly. The designer appears to be hiding the evidence. For centuries, before the birth of microbiology, we had no way of really seeing those ID messages in the cell. To me, it appears that those ID messages are embedded into reality and are only slowly revealed over time, and even though the evidence for ID seems blatantly obvious to me, it is only rarely a case where the designer makes a bold, undeniable statement.

    I say rarely because I think events like Guadalupe, the Miracle of the Sun at Fatima, the Shroud of Turin, and stigmatic or incorrupt saints, for example, are very bold statements of ID, with a designer’s “signature” all over them, so to speak. But since all of those have a theological component, people do not like to investigate them. In the study of the cosmos or biology, the message is always very subtle, in my view. It can always be denied. The human imagination allows for a lot of escape paths, and if people do not like the ID evidence, it’s relatively easy to invent alternative scenarios.

  46. 46
    gpuccio says:

    DaveS at #37 and #41:

    Tree rings in themselves are not a code, but a designer could use them to send messages in Morse Code, presumably.

    Not so easily. Messages are coded using configurable switches, IOWs switches which, according to the laws of nature operating in the system, can well assume different configurations. IOWs, the configuration of each switch must be “neutral” according to the necessity laws operating in the system (for example it could be 0 or 1, indifferently), and its specific value is set by the designer. This freedom allows the designer to output the meaningful configuration.

    In the case of tree rings, the configuration is set by the laws of nature and by the biology of the tree. IOWs, it is set by necessity. That would make it really difficult to use the tree rings themselves to express any meaningful message.

    In the same way, we can write a message in the sand, but not in the position of the atoms in a crystal.

    We could find a message encoded in nature in some obvious and human-readable way (e.g., in tree rings, perhaps in DNA, etc). It appears no one here finds that a realistic possibility.

  47. 47
    gpuccio says:

    DaveS:

    The previous post was posted while I was writing, by mistake, so I am continuing here:

    We could find a message encoded in nature in some obvious and human-readable way (e.g., in tree rings, perhaps in DNA, etc). It appears no one here finds that a realistic possibility.

    We could. But we have not, as far as I am aware. Science is done with facts, not with possibilities.

    We could find information circuits, or entire information processing systems (which use codes) somewhere in nature. That sort of message is more subtle, in that the designer is not explicitly announcing his existence. It’s not like the constellations suddenly rearranging themselves so as to spell out “John 3:16” or the like.

    This sort of message we do observe all the time in biological beings. Maybe it is subtle, but it is very clear and implies design beyond any possible doubt.

    For the constellations, you can just wait, while you look at the stumps… 🙂

  48. 48
    gpuccio says:

    EricMH at #29:

    There is IMO a lot of confusion about compressibility in the ID debate. That’s why I would like to add a few thoughts about that.

    Order and compressibility can be a form of specification. In that case, and only in that case, the configuration we observe is specified because it is ordered, and for no other reason.

    Consider my example of the sequence of 1000 heads. It is highly ordered, and that’s why we distinguish it from a random outcome, which typically is very different. So, we suspect it may be designed because it is specified (by its order and compressibility) and it is complex (1000 bits).

    However, as explained in my previous post, we have to consider the possible role of necessity, because simple necessity laws can generate order.

    (More in next post, because this one, again, was posted by mistake: something in my typing, I suppose).

  49. 49
    gpuccio says:

    EricMH at #29:

    So, let’s go on.

    The sequence of 1000 heads is specified by its order. Maybe it is designed, maybe it is the result of necessity laws operating in the system. We have to check carefully, before reaching a conclusion.

    But functional information of the kind we observe in language, in proteins, in living system, is not of that kind. Functional information is not specified by its order, even if some order can be detected in it. Indeed, functional information is specified in spite of its order.

    I will try to be more clear. Let’s consider English language, and my Shakespeare sonnet, again.

    You say, very correctly, that the English language has its redundancies, and that it is, in part, compressible. But the point is: the sonnet we are considering is not specified because it is, in part, compressible. It is, indeed, specified because it expresses specific meanings that are not compressible, using specific configurations of partially compressible components.

    Just as a reminder, the sonnet I have always offered as an example is Sonnet 76:

    “Why is my verse so barren of new pride?”

    Now, while the first verse and the whole sonnet, in a wonderful paradox, seem to affirm the repetition in the poet’s obsessions, there can be no doubt that the sonnet itself is a masterpiece of creativity and originality of thought and feeling and beauty.

    Now the simple question is: does its meaning, and creativity, and beauty, derive from its compressible components? Of course not. We could observe some sequence of letters which is equally compressible, from a Shannon perspective, but which means exactly nothing. In that case, we could infer design because of the compressibility (which, in itself, is a form of order), but not from any meaning in the poem.

    So, in functional information, be it language or software or proteins, the functional specification is linked to what the object means or can do: meaning or function, descriptive information or prescriptive information, as Abel would say. Whether the switches used to get the configuration are partially compressible or not has no relevance. It’s the meaning or the function in the specific, unique configuration that matters.

    Functional specification, if complex enough, is sufficient to infer design. If more than 500 bits are necessary to implement that meaning or function, and if we observe it implemented in some object, we can infer design for that object. It’s as simple as that.

  50. 50
    EricMH says:

    @GP, I agree compression is orthogonal to whether something is meaningful.

    Functional is some kind of external definition, and perhaps is obvious from a practical standpoint. But, what is the mathematical definition of functional?

    While a comprehensive definition is probably not possible, at least a necessary component is that it is a detachable specification, per Dembski. Otherwise, we cannot say the function did not itself arise from the chance hypothesis. E.g. take a Kolmogorov random bitstring, copy it, and now each incompressible bitstring has a perfect, external specification in its twin. If we do not require ‘detachability’ in our specification, then both incompressible bitstrings have maximal CSI. Additionally, this operation can be done entirely through a natural process and does not indicate design. So, in this example, by removing the detachability requirement high CSI clearly does not indicate design.
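
EricMH’s copied-bitstring example can be illustrated with a general-purpose compressor as a rough, computable stand-in for Kolmogorov complexity (which is uncomputable). A random byte string barely compresses, but the string concatenated with its own copy compresses to roughly the size of one copy, because the duplication is a regularity a purely mechanical process can exploit:

```python
import os
import zlib

random_bytes = os.urandom(4096)   # stand-in for a Kolmogorov-random string
doubled = random_bytes * 2        # the string followed by its exact copy

print(len(zlib.compress(random_bytes, 9)))  # close to 4096: essentially incompressible
print(len(zlib.compress(doubled, 9)))       # far less than 8192: the copy is a regularity
```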

  51. 51
    ET says:

    John_a_designer:

    In my opinion, prokaryote DNA replication is a far more daunting problem for the Darwinist.

    All you had to do was ask: Peering into Darwin’s Black Box:
    The cell division processes required for bacterial life

    Evolutionists just handwave it all away or they will attack the author.

  52. 52
    ET says:

    EricMH:

    But, what is the mathematical definition of functional?

    Does it require one? We observe something doing some work of some type and we call it a function. Functionality is a specification of sorts. Then we attempt to discover how it all came to be the way it is, in part by using our knowledge of cause and effect relationships. We could apply Wm. Dembski’s mathematics with respect to discrete combinatorial objects to help us.

  53. 53
    gpuccio says:

    EricMH:

    While I have nothing against the concept of “detachability”, I don’t think it is absolutely necessary.

    Functional specification can be well defined empirically without any problem. I have done that many times here.

    In brief, the procedure is as follows:

    a) An observer defines a function that can be implemented by some specific material object. The important point is: any observer can define any function. However, the definition of the function must be objective, and it must include some level that defines the function as present or absent, and an objective way to measure it.

    b) After we have defined the function, we can verify that the object can implement it, and we can measure the minimal complexity needed to implement that function as defined. IOWs, the number of specific bits of configuration that are necessary to implement the function as defined. That is the functional complexity linked to that defined function, and observed in the object in relation to the defined function. That functional complexity can usually be computed as the negative base-2 logarithm of the ratio between the number of possible forms (or sequences, for digital information) that can implement the function as defined and the total number of possible configurations.

    So, functional information is always relative to some defined function. The same object can have different functional information for different functions.

    The point is: if an object exhibits more than 500 bits of functional information, for any possible function, we can infer design for it.

    Of course, any function we define must be defined independently from the observed configuration: this is the only important rule. Maybe this is what you mean by “detachability”: if that is the case, you are perfectly right.

    IOWs, we cannot define the function using the specific configuration observed.

    Just to be clear, if we observe a sequence of 100 digits, we cannot use that sequence to set it as the key to a safe, and then define the function as the ability to open that safe. That would be cheating.

    You say:

    E.g. take a Kolmogorov random bitstring, copy it, and now each incompressible bitstring has a perfect, external specification in its twin.

    No. You cannot define any function for string B. It is the same as string A because you have copied it. And so? That is no functional specification. And copying simply means to duplicate already existing information.

    If I copy a Shakespeare sonnet, I am creating no new functional information, no new meaning. The procedure of copying is only a necessity procedure where object A determines the form of object B according to laws operating in the system (the copying system). There is no design here, except maybe the design of the copying system (which has nothing to do with the design of the sonnet).

    So, when the information for a protein in a protein-coding gene is transcribed and translated, and it generates, say, 1000 copies of the protein, no new information about the protein sequence is generated: the necessary information is already in the gene. The gene already has in itself the bits necessary to implement the function of the protein.

    So, if by detachability you only mean that the function cannot be defined ad hoc for the specific bits observed, then I agree. But that has always been an obvious requirement of the definition of functional information.

    That said, it is extremely easy to define functional information and a way to measure it. And in all cases, more than 500 bits of functional information imply design, whatever the defined function may be.

    One of my first OPs here has been about defining functional information objectively:

    https://uncommondescent.com/intelligent-design/functional-information-defined/

  54. 54
    Nonlin.org says:

    Here we go again with ‘information’ misuse (abuse) and tree rings (very much dependent on the sampling rate) and “specified complexity” nonsense and DNA as information (when 1 GB can’t even hold your phone OS). And let’s not forget the “Shakespeare sonnet” and “functional information”.

    Here’s some help:
    http://nonlin.org/dna-not-essence-of-life/
    http://nonlin.org/biological-information/
    http://nonlin.org/intelligent-design/

  55. 55
    daveS says:

    gpuccio,

    I say rarely because I think events like Guadalupe, the miracle of the sun at Fatima, the shroud of turin, stigmatic or incorrupted saints, for example, are very bold statements of ID, with a designer’s “signature” all over them, so to speak. But since all of those have a theological component, people do not like to investigate them.

    Oddly enough, I’m very intrigued by these sorts of events. I think I would find this evidence most convincing, if I could witness it myself.

  56. 56
    gpuccio says:

    Nonlin.Org:

    Here we go again…

    Yes. Definitely, I have not changed my mind!

    And neither have you, it seems… 🙂

  57. 57
    gpuccio says:

    DaveS at #55:

    I am intrigued too, of course. And that kind of events certainly deserves to be investigated with an open mind.

    However, I am afraid that at present there is no chance that they are accepted as facts by most scientists, and so it would be difficult to use them in some general scientific theory.

    So, for the moment, I am perfectly happy to stick to the billions of amazing miracles that everybody can observe daily in living beings. 🙂

  58. 58
    gpuccio says:

    EricMH:

    But, what is the mathematical definition of functional?

    It’s rather simple.

    “Functional” just means that some object can be used to implement some explicitly defined function. Any possible function will do. Of course, the same object will be functional for some functions, and not for others.

    The mathematical definition of functional information, instead, is: the least number of specific bits necessary to implement the defined function.

    Again, it’s as simple as that.

  59. 59
    daveS says:

    gpuccio,

    A couple of random questions:

    Should any nontriviality conditions be imposed on the concept of “function”? For example, I could ask how much functional information is necessary to implement a paperweight. How about “a solid object which displaces 1 liter of air”? These functions are obviously uninteresting, but it would seem under your definition, they should each possess (or specify?) a well-defined amount of functional information.

    A slightly more interesting question, perhaps: How much functional information is required to construct a mechanism which rotates a small metal shaft at a rate of 1 rotation per hour (i.e., a very simple clock)? I’m not expecting you to calculate this, mind you, it’s more food for thought.

  60. 60
    ET says:

    daves- While you are awaiting gpuccio:

    We should investigate everything we observe to get to the root cause of it and understand it. So it would all depend on the specific paperweight.

  61. 61
    daveS says:

    ET,

    But it shouldn’t “all” depend on a specific paperweight. According to our definition, we take a minimum over all functional paperweights.

  62. 62
    ET says:

    Hold down paper- how many bits is that? It isn’t CSI, that’s for sure.

  63. 63
    Brother Brian says:

    GP@57, you mention that the same object can have different functions, but does that mean that it has more than one measure of functional information. For example, the artifact that was used as the standard kilogram for over a century surely has a tremendous amount of functional information. But as of a few months ago, it is little more than a paper weight. Has it lost its functional information?

  64. 64
    ET says:

    Brother Brian:

    For example, the artifact that was used as the standard kilogram for over a century surely has a tremendous amount of functional information.

    Let’s see your math. Or are you just fishing?

    We do NOT use functional information for everything to determine whether or not it was the product of intelligent design.

  65. 65
    daveS says:

    ET,

    Hold down paper- how many bits is that? It isn’t CSI, that’s for sure.

    One would hope not.

    Now to be fair, gpuccio would likely respond by asking for more information about the specific function. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze.

    I would be curious to see if anyone can come up with a number in bits.

  66. 66
    ET says:

    I would be curious as to why anyone would want to.

  67. 67
    daveS says:

    ET,

    I would be curious as to why anyone would want to.

    To show it’s possible, of course.

  68. 68
    ET says:

    We do NOT use functional information for everything to determine whether or not it was the product of intelligent design.

    But we do have Measuring the functional sequence complexity of proteins

  69. 69
    Brother Brian says:

    ET

    Let’s see your math. Or are you just fishing?

    Without an internationally recognized standard for the kilogram (or an equivalent standard for mass) we would not have been able to put people on the moon. Surely that means that the kilogram has functional information.

  70. 70
    ET says:

    Brother Brian:

    Without an internationally recognized standard for the kilogram (or an equivalent standard for mass) we would not have been able to put people on the moon.

    How do you know?

    Surely that means that the kilogram has functional information.

    It functions as a standard.

    But that is all moot. YOU made a claim and I asked you to back it up. Can you?

  71. 71
    daveS says:

    ET,

    We do NOT use functional information for everything to determine whether or not it was the product of intelligent design.

    If that’s directed toward me, then of course no one said otherwise. gpuccio says that “any possible function will do”, which I understand as implying that we should be able to calculate the functional information required to implement the paperweight function I described.

  72. 72
    Brother Brian says:

    ET

    How do you know?

    Are you seriously suggesting that we could put a man on the moon without a standard unit of mass? Newton begs to differ. If you are just going to keep parroting back this nonsense I am going to take my earlier advice and continue to ignore your comments.

  73. 73
    Brother Brian says:

    DaveS, I would be interested in your opinion on my statement that the standard kilogram artifact that was used for over a century had functional information. It was critical for a century’s worth of advances in industry, technology and commerce. Every country maintained its own kilogram artifact that was traceable to the prototype kilogram housed in France, and these artifacts were critical in the design and manufacture of everything from bicycles to airplanes to spacecraft. As well, it was used in commerce in the sale of anything that was sold by weight.

    However, in 2019 this artifact was replaced. Does this mean that the functional information that this artifact contained for well over a century ceased to exist?

  74. 74
    ET says:

    Brother Brian:

    Are you seriously suggesting that we could put a man on the moon without a standard unit of mass?

    Unbelievable. TRY to stay focused. We could easily put a man on the moon without an INTERNATIONAL standard.

    So you are parroting the nonsense here, Brian. And you have once again avoided the question. That is very telling

  75. 75
    ET says:

    Brother Brian:

    It was critical for a century’s worth of advances in industry, technology and commerce.

    Evidence please- and don’t ask me a question, just provide the reference to support your claim. Or stop making them

  76. 76
    gpuccio says:

    DaveS at #59 (and others):

    Good questions from you and from others. I will try to answer them, in some order I hope. I think my answers will be useful to the general debate, so I invite all who are interested to read this post and those immediately following, whoever they are addressed to.

    So, your first question:

    Should any nontriviality conditions be imposed on the concept of “function”? For example, I could ask how much functional information is necessary to implement a paperweight. How about “a solid object which displaces 1 liter of air”? These functions are obviously uninteresting, but it would seem under your definition, they should each possess (or specify?) a well-defined amount of functional information.

    No. No conditions at all are imposed on the concept of function. Anybody can define any function he likes, and the functional complexity can be assessed (at least in principle, it is not always easy) for each of them. It is not important whether the function is interesting or not. Usually, uninteresting functions will have low functional complexity, as I will try to show.

    I list again the only rules that must always be respected in defining a function. They are not “conditions”, just obvious procedures that must be followed to have the right result:

    1) We can never define a function using the specific values of the bits already observed in the object. In that case, we would be taking the (generic) information observed in the object and using it to define an ad hoc function. That is obviously wrong. See my example of the safe, in post #53.

    2) While the function is defined by an observer, it must be defined explicitly, so that it becomes an objective reference for anybody. There must be no ambiguity in the definition.

    3) Included in the definition there must be a level that defines the function as present or absent. IOWs, we must be able to assess potential objects as exhibiting the function or not, in some objective way, direct or indirect.

    4) All reasonings and measurements of functional complexity are never done abstractly. To be useful in inferring design, they must always refer to some specified system, time window, and so on.

    To see that, let’s try to apply those principles to your example:

    “A solid object which displaces 1 liter of air”

    This is a perfectly valid function definition, but incomplete. We need to know the system, the time window, and the level of precision to assess function.

    So, let’s say we have a beach with approximately 1 billion stones, formed apparently by natural laws operating in that system in one million years.

    We observe a stone whose volume is one liter.

    Is it designed, or not?

    Let’s say we define our function as “having a volume of one liter with a precision of one part in a million”.

    OK, that is more complete. First of all, the reference we are using (the liter) exists independently (we are not using the observed volume to define it). Of course, any stone could be defined as having the volume it has. In that case, we would be using the observed configuration (the volume of the observed stone) to define the function, and we know that this is not correct. But with the liter, we do not have that problem.

    So, using our definition, we can in principle apply it to generate a binary partition on the set of all possible stones (which could include the billion we can observe, but also all those that could have been formed in the time window), which is, in any case, a finite number. Our binary partition will classify all possible objects in the system as exhibiting the function or not.

    At this point, -log2 of the ratio of all possible objects in the system exhibiting the function (the target space) to all possible objects in the system (let’s say a billion, the search space) is the functional complexity of our function in that system.

    We can try to compute that. It may not be easy, but in principle it can be done, possibly by indirect methods. The task here is more difficult because we are dealing with analog configurations. It’s usually easier with information that is natively in digital form, like in most biological objects.

    Now, let’s say that in some way we compute that a stone randomly generated in our system by natural causes has a probability of satisfying our definition of 1:10^20, IOWs 10^-20, IOWs about 66 bits of functional information.

    Now we must consider the probabilistic resources of the system. If we evaluate that about one billion stones have been generated in the system in the time window, then the probabilistic resources are about 10^9, IOWs about 30 bits.

    So, we have a result with a functional complexity of 66 bits in a system whose probabilistic resources are about 30 bits. The residual improbability of observing that result is therefore about 36 bits. And that is something.

    Is that enough to infer design? Not according to the general, extremely conservative rule usually used in ID: 500 bits of functional information observed, whatever the probabilistic resources of the system.

    But, in the end, our conclusion depends on the system we are observing, the meaning of our conclusion, its generality, and so on.

    The 500 bits threshold is usually selected because it ensures utter improbability in practically any possible physical system in the universe, whatever the probabilistic resources.
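
    For anyone who wants to check the arithmetic of the stone example, here is a minimal Python sketch (the 10^-20 and 10^9 figures are, of course, just the assumptions of the example, not measurements):

        import math

        p_function = 1e-20   # assumed probability that one random stone has the defined volume
        n_stones = 1e9       # assumed number of stones formed in the system / time window

        fi_bits = -math.log2(p_function)         # ~66.4 bits of functional information
        resource_bits = math.log2(n_stones)      # ~29.9 bits of probabilistic resources
        residual_bits = fi_bits - resource_bits  # ~36.5 bits not covered by the resources

        print(round(fi_bits, 1), round(resource_bits, 1), round(residual_bits, 1))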

    More in next post.

  77. 77
    gpuccio says:

    DaveS at #59 (and others):

    So, let’s go to your second question:

    How much functional information is required to construct a mechanism which rotates a small metal shaft at a rate of 1 rotation per hour (i.e., a very simple clock)?

    Again we can apply, in principle, the method described.

    We need a system where such an object could arise without design in some time window, and then we have to compute the probability of such an object arising (of course the function must be defined with precision) by chance, IOWs the ratio between spontaneous objects that would exhibit the function and the total number of objects generated in the system. I will not try to compute that for any system, but I would say that, if the function is defined with high precision, the probability will be really low. For a whole watch, I would rather blindly accept Paley’s inference of design in any case and system.

  78. 78
    gpuccio says:

    DaveS and ET:

    About paperweights:

    Of course a paperweight has low functional information: if it is defined without great precision (no great precision is necessary, I would say), then a lot of possible objects qualify.

    Let’s say that we only need a solid object weighing something between 1 and 2 kilograms.

    In one of my OPs, I have used the paperweight function to illustrate an object with two different functions and two different values of functional information.

    A laptop can be used as a paperweight and as a computer.

    As a paperweight, its functional information is very low.

    As a computer, it is extremely high.

    Is that clear?

  79. 79
    ET says:

    daves and gpuccio:

    Specification- that would be the specification the paper weight needed to meet. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze.

    We would also have to know if the stack had to be held down such that the papers don’t bend or get damaged.

    So a stone used as a paper weight would have some functional information. But that functional information is imparted by the person who wants to meet some criteria, such as the above specification. The stone wasn’t necessarily designed, unless it had to be cut and shaped, but its function was.

    (I was typing when gpuccio posted 78. I agree with 78)

  80. 80
    gpuccio says:

    Brother Brian at #63:

    you mention that the same object can have different functions, but does that mean that it has more than one measure of functional information.

    That’s perfectly correct. See my example in the previous post (laptop computer used as a paperweight).

    For example, the artifact that was used as the standard kilogram for over a century surely has a tremendous amount of functional information. But as of a few months ago, it is little more than a paper weight. Has it lost its functional information?

    Absolutely not. It was designed to correspond to a very precise level to be used as a reference. That was true up to May 20, 2019. Now the standard has changed, but that does not change the function of the previous object, which was used for a long time. And it has a rather precise correspondence to the mass of one liter of water, anyway. So, nothing has changed about its functional information.

  81. 81
    gpuccio says:

    DaveS at #65:

    Now to be fair, gpuccio would likely respond by asking for the more information about the specific function. For example, perhaps this paperweight needs to be able to hold down a stack of twenty A4 sheets of paper (say 80 gsm) in a 10 km/hr breeze.

    I would be curious to see if anyone can come up with a number in bits.

    I hope my previous posts have clarified my views about that.

  82. 82
    gpuccio says:

    DaveS at #71:

    If that’s directed toward me, then of course no one said otherwise. gpuccio says that “any possible function will do”, which I understand as implying that we should be able to calculate the functional information required to implement the paperweight function I described.

    Of course that’s correct. I think I have shown the procedure. A real computation requires defining a system and time window, and a precise functional definition. And, of course, some real work.

  83. 83
    daveS says:

    Brother Brian,

    Regarding the kilogram example, I think it has changed in a sense. In the past, the kilogram was by definition exactly the mass of this object. It was correct to infinitely many decimal places, so to speak. Now its mass is just very close to 1 kg (and the error varies as atoms occasionally fly off it). Perhaps that means its functional information has changed.

  84. 84
    gpuccio says:

    DaveS at #83:

    Its functional information is linked to the way it was designed at the beginning, to satisfy certain requirements. It has not changed. Only its use has changed now, but that has nothing to do with the specific configuration that was given to the object when it was designed.

  85. 85
    daveS says:

    gpuccio,

    Thanks for the responses. My question then becomes, how does the number 66 bits quantify the amount of information needed to implement the function?

    If I actually wanted to create a solid object which displaces 1 liter, how does 66 bits fit into the design and/or construction process?

  86. 86
    gpuccio says:

    DaveS at #83:

    It is also interesting to consider that the functional information when it was designed was relative to its properties at the moment of its design (for example, corresponding rather well to the weight of one liter of water). Its use as a reference after it was built is something “after the fact”, so it has nothing to do with the functional information.

    Remember, the functional information measures the bits that have to be configured to make an object able to perform a pre-defined, independent function.

    If we take a random stone and decide that it will be the new kilogram from now on, whatever its weight, there is no functional information in the object: we are just using it for a function that we define using the configuration of the object itself. The new function is designed, but not the original object.

    Functional information means that the designer has to configure exactly a number of bits because, otherwise, the independent function cannot be implemented by the object. Natural causes can generate some functional information, but it is always very low, in the range of what probabilistic resources can do.

    That’s why paperweights abound in nature without any need for design, but watches and computers don’t.

  87. 87
    gpuccio says:

    DaveS at #85:

    It is the minimum level of precision that I have to achieve to get that exact volume. Of course it is not a true measure. I have derived it from the hypothesis that such a precise volume could be attained by chance in that system only once in 10^20 attempts.

    1:10^20 = 10^-20

    -log2(10^-20) = about 66 bits

  88. 88
    gpuccio says:

    DaveS:

    The meaning of the value in bits is more intuitive when we are dealing with digital information in native form. However, the meaning is always the same: -log2 of the ratio between target space and search space.

  89. 89
    daveS says:

    gpuccio,

    Ok, I might be getting it. Would it be correct to say this functional information measure is always relative to a “null hypothesis” (in this case that the 1-liter solid was produced by natural processes on this 1-billion stone beach)?

  90. 90
    gpuccio says:

    DaveS:
    Yes, of course. The absence of design is the null hypothesis.

  91. 91
    timothya says:

    Joe asks for evidence that unit standardisation is a good thing. Here is a negative example:

    https://en.m.wikipedia.org/wiki/Mars_Climate_Orbiter

  92. 92
    Silver Asiatic says:

    daveS

    Oddly enough, I’m very intrigued by these sorts of events. I think I would find this evidence most convincing, if I could witness it myself.

    That is very good to hear and it tells me that you are open to the evidence, at least in this case, through direct experience. There are somewhat living artifacts or testimonies of design in the tilma of Guadalupe and the shroud. It’s not a direct experience of the events, but at least artifacts that can be observed. In both cases, some inference must be drawn about the origin of both. I think the miracle of the sun is very difficult to explain from a materialist perspective, even though it is an historical event that is subject to that kind of analysis. The stigmata of St. Pio, for example, are documented with photos. But even here, there is always some room for denial. To me, they’re strong evidence of design, but as you said previously, there’s nothing that makes an absolute statement which is completely undeniable. I see that as part of the designer’s methodology. Others think it is a weakness of the design perspective that nothing like a Shakespeare sonnet written in Morse Code in tree rings has ever been found.

  93. 93
    ET says:

    timothya @ 91- No one here asked for evidence that unit standardization is a good thing.

  94. 94
    ET says:

    I would say the Mars orbiter problem was a communication issue and not a standardization issue. If the contract called for one thing and something else was delivered, that is a sign of a communication breakdown. But it does show how critical complex specified and functional information can be.

  95. 95
    Brother Brian says:

    ET

    I would say the Mars orbiter problem was a communication issue and not a standardization issue.

    Silly me. And I always thought standardization was a communication issue. But what would I know? I only make a living in the standardization field.

  96. 96
    ET says:

    Brother Brian:

    And I always thought standardization was a communication issue.

    Context. You know that word that you refuse to understand. Quote-mining, on the other hand, is something that you do quote well.

    The sentences after the one that you so cowardly quote-mined should have been explanation enough for someone who allegedly makes a living in the standardization field. But that is moot as the programmer was using a standard, just the wrong one. Hence the communication issue.

  97. 97
    daveS says:

    gpuccio,

    Is there a benefit to stating all this in terms of functional information? In order to do that, you have to estimate the relative frequency with which this function would occur naturally and the total number of “trials” that have occurred (10^-20 and 10^9), so you already know the chance of greater than zero functional trials is miniscule (assuming the null), hence the null hypothesis is likely false. Why not just stop there?

  98. 98
    john_a_designer says:

    A 2007 paper published in PNAS published by Jack Szostak and his colleagues defines functional information this way:

    “Functional information is defined only in the context of a specific function x. For example, the functional information of a ribozyme may be greater than zero with respect to its ability to catalyze one specific reaction but will be zero with respect to many other reactions. Functional information therefore depends on both the system and on the specific function under consideration. Furthermore, if no configuration of a system is able to accomplish a specific function x [i.e., M(Ex) = 0], then the functional information corresponding to that function is undefined, no matter how structurally intricate or information-rich the arrangement of its agents.”

    https://www.pnas.org/content/104/suppl_1/8574

    Take for example, a bike sprocket. Without a system, the bicycle, the sprocket has no function. However, it still has a potential function and a purpose. If we find a sprocket in a warehouse next to a factory where they assemble bicycles we could quickly deduce what the purpose of the sprocket is. In other words, it still has a purpose defined by its potential function.

    I was trying to make a similar point above at #44 when I talked about helicase and DNA replication.

    https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679003

    What is the function of helicase without the DNA helix? It has no other function. So it is highly specified.

  99. 99
    daveS says:

    The paper that JAD posted might answer my last question to gpuccio:

    For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, E_x (e.g., the RNA–GTP binding energy), I(E_x) = −log_2[F(E_x)], where F(E_x) is the fraction of all possible configurations of the system that possess a degree of function ≥ E_x.

    And this definition is very similar to the one gpuccio illustrated above. I don’t see any dependence on the null hypothesis we discussed above (absence of design), however [Edit: Perhaps it’s implicit?]. Would this matter? I guess the denominator in Szostak’s version is simply the total number of possible configurations of the system, period, not the total number of configurations that are “reachable” through natural processes.

    ETA:

    My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible.

  100. 100
    Brother Brian says:

    Jad@98, thank you. That makes it clearer. And matches up with what I thought functional information is.

  101. 101
    gpuccio says:

    DaveS at #97:

    Is there a benefit to stating all this in terms of functional information? In order to do that, you have to estimate the relative frequency with which this function would occur naturally and the total number of “trials” that have occurred (10^-20 and 10^9), so you already know the chance of greater than zero functional trials is miniscule (assuming the null), hence the null hypothesis is likely false. Why not just stop there?

    I am not sure I understand your point.

    Functional information and its measurement are essential to infer design. We can infer design when the functional information is high enough, in relation to the probabilistic resources of the system.

    Where should we “stop”? We stop when, after having measured the functional information for some function, and finding it high enough (for example, more than 500 bits), we infer design for the object.

    That was the purpose from the beginning, wasn’t it?

  102. 102
    gpuccio says:

    John_a_designer at #98:

    Szostak’s definition is essentially the same as mine.

    Of course the function is defined in a context. There is no problem with that. However, the functional information corresponds to the minimal number of bits necessary to implement the function. The function definition will include the necessary context.

    For example, helicase will be defined as a protein that can “separate two annealed nucleic acid strands (i.e., DNA, RNA, or RNA-DNA hybrid) using energy derived from ATP hydrolysis” (from Wikipedia), of course in cells with nucleic acids and ATP.

  103. 103
    gpuccio says:

    DaveS:

    Yes, as said Szostak’s definition of functional information is the same as mine.

    The null hypothesis has a fundamental role in inferring design from functional information, not in the definition of functional information itself.

    For obvious reasons, Szostak does not use the concept of functional information to infer design. That’s why you don’t see any mention of the null hypothesis in his paper.

    But functional information above a certain threshold is a safe marker of design, and allows us to infer design as the process which originated the configuration we are observing.

    Of course, that can be demonstrated separately. Up to now, the discussion was about the definition of functional information and its measurement, so I have stuck to that.

  104. 104
    gpuccio says:

    DaveS:

    “My compliments to gpuccio. His detailed and very clear explanation makes the abstract to the Szostak paper comprehensible.”

    OK, so I share that with Szostak. Good, so I will feel less a “bad guy” each time I criticize his paper about the ATP binding protein (and, unfortunately, that happens quite often here! 🙂 )

  105. 105
    daveS says:

    gpuccio,

    Where should we “stop”? We stop when, after having measured the functional information for some function, and finding it high enough (for example, more than 500 bits), we infer design for the object.

    This point might now be moot, but if we simply wanted to test the null hypothesis of no design, we don’t really need to transform the probability to units of functional information via the -log_2 function, do we? Using the two numbers 10^-20 and 10^9, we can show the p-value is tiny and therefore reject H_0.

    After some reflection, I guess it’s convenient to frame this all in terms of bits of functional information and probabilistic resources. The numbers (66 bits, e.g.) turn out to be easier to work with, anyway.
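
    For what it's worth, the p-value version of the same check is only a couple of lines (a sketch using the same assumed numbers as the stone example above):

        import math

        p = 1e-20          # assumed probability that a single trial hits the target
        n = 1_000_000_000  # assumed number of trials (the probabilistic resources)

        # P(at least one functional object in n trials); for tiny p this is ~ n * p
        p_value = -math.expm1(n * math.log1p(-p))
        print(p_value)     # ~1e-11, far below any conventional significance level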

  106. 106
    Nonlin.org says:

    Gpuccio @56

    Yet one of us is wrong and leading others astray. But you’re not even curious, let alone interested in determining the truth. Nice going!

  107. 107
    OLV says:

    Here’s an article from the Stanford Encyclopedia of Philosophy:

    Levels of organization are structures in nature, usually defined by part-whole relationships, with things at higher levels being composed of things at the next lower level. Typical levels of organization that one finds in the literature include the atomic, molecular, cellular, tissue, organ, organismal, group, population, community, ecosystem, landscape, and biosphere levels. References to levels of organization and related hierarchical depictions of nature are prominent in the life sciences and their philosophical study, and appear not only in introductory textbooks and lectures, but also in cutting-edge research articles and reviews. In philosophy, perennial debates such as reduction, emergence, mechanistic explanation, interdisciplinary relations, natural selection, and many other topics, also rely substantially on the notion.

    Yet, in spite of the ubiquity of the notion, levels of organization have received little explicit attention in biology or its philosophy. Usually they appear in the background as an implicit conceptual framework that is associated with vague intuitions. Attempts at providing general and broadly applicable definitions of levels of organization have not met wide acceptance. In recent years, several authors have put forward localized and minimalistic accounts of levels, and others have raised doubts about the usefulness of the notion as a whole.

    There are many kinds of ‘levels’ that one may find in philosophy, science, and everyday life—the term is notoriously ambiguous. Besides levels of organization, there are levels of abstraction, realization, being, analysis, processing, theory, science, complexity, and many others.

    Although ‘levels of organization’ has been a key concept in biology and its philosophy since the early 20th century, there is still no consensus on the nature and significance of the concept. In different areas of philosophy and biology, we find strongly varying ideas of levels, and none of the accounts put forward has received wide acceptance. At the moment, the mechanistic approach is perhaps the most promising and acclaimed account, but as we have seen, it may be too minimalistic to fulfill the role that levels of organization continue to play in biological theorizing.

    https://plato.stanford.edu/entries/levels-org-biology/#ConcRema

  108. 108
    OLV says:

    A layer of regulatory information on top of DNA is proving to be as important as genes for development, health and sickness.

    To explain how the epigenome works, some have likened it to a symphony: the sheet music (genome) is the same, but can be expressed in vastly different ways depending on the group of players and their instruments (epigenome).

    Human DNA in a single cell is enormously long (six feet) and folds with proteins into packages (chromatin) to fit within a nucleus.

    https://inside.salk.edu/summer-2016/epigenomics/

  109. 109
    OLV says:

    A major question in cell biology is how cell type identity is maintained through mitosis.

    We are only starting to understand the mechanisms by which epigenetic information contained within the vertebrate chromatin is transmitted through mitosis and how this occurs in the context of a mitotic chromosome conformation that is dramatically different from interphase. One important question that remains unanswered is how molecular details of epigenetic bookmarks are read in early G1 and enable re-establishment of cell type specific chromatin organization. Insights into these processes promise not only to lead to mechanistic understanding of mitotic inheritance of cell type specific chromatin state, they will also reveal how the spatial organization of interphase chromosomes is determined in general by the action of cis-acting elements along the chromatin fiber. This will also lead to a better understanding of what epigenetic mechanisms underlie processes in which cell type identity is changed, for example in stem cell differentiation or in diseases that result in cancer development and aging.

    It will be very interesting to explore the pathways and mechanisms that are used to initiate epigenetic changes in cellular phenotype, how differences between sister chromatids are established and proper sister segregation is controlled.

    Epigenetic Characteristics of the Mitotic Chromosome in 1D and 3D
    Marlies E. Oomen and Job Dekker
    Crit Rev Biochem Mol Biol. 2017 Apr; 52(2): 185–204. doi: 10.1080/10409238.2017.1287160
    PMCID: PMC5456460
    NIHMSID: NIHMS863269
    PMID: 28228067

  110. 110
    OLV says:

    Cells establish and sustain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation.

    Physiological control of gene expression is dependent on chromatin context and requires timely and dynamic interactions between transcription factors and coregulatory machinery that reside in specialized subnuclear microenvironments. Multiple levels of nuclear organization functionally contribute to biological control…

    Cells establish and retain structural and functional integrity of the genome to support cellular identity and prevent malignant transformation. Mitotic bookmarking sustains competency for normal biological control and perpetuates gene expression associated with transformed and tumor phenotypes.

    Elucidation of mechanisms that mediate the genomic organization of regulatory machinery will provide novel insight into control of cancer-compromised gene expression.

    Higher order genomic organization and epigenetic control maintain cellular identity and prevent breast cancer
    A.J. Fritz, N.E. Gillis, D.L. Gerrard, P.D. Rodriguez, D. Hong, J.T. Rose, P.N. Ghule, E.L. Bolf, J.A. Gordon, C.E. Tye, J.R. Boyd, K.M. Tracy, J.A. Nickerson, A.J. van Wijnen, A.N. Imbalzano, J.L. Heath, S.E. Frietze, S.K. Zaidi, F.E. Carr, J.B. Lian, J.L. Stein, G.S. Stein
    https://doi.org/10.1002/gcc.22731
    Genes, Chromosomes and Cancer, Volume 58, Issue 7
    https://onlinelibrary.wiley.com/doi/full/10.1002/gcc.22731

  111. 111
    OLV says:

    Genome structure and function are intimately linked.

    the nuclear architecture of rod photoreceptors differed fundamentally in nocturnal and diurnal mammals. The rods of diurnal retinas, similar to most eukaryotic cells, had most heterochromatin situated at the nuclear periphery with euchromatin residing toward the nuclear center. In contrast, the rods of nocturnal retinas displayed a unique inverted pattern with the heterochromatin localized in the nuclear center, whereas the euchromatin and nascent transcripts and splicing machinery lined the nuclear periphery. This inverted pattern was formed by remodeling of the conventional pattern during terminal differentiation of rods.

    the inverted rod nuclei acted as collecting lenses, and computer simulations indicated that columns of such nuclei channel light efficiently toward the light-sensing rod outer segments. Thus, nuclear organization displays plasticity that can adapt to specific functional requirements.

    Understanding the mechanisms that underlie the nuclear structural order and its perturbations is the focus of many studies. We do not have a complete understanding; however, a few key mechanisms have been described.

    Introduction to the special issue “3D nuclear architecture of the genome”
    Sabine Mai
    https://doi.org/10.1002/gcc.22747
    Genes, Chromosomes and Cancer, Volume 58, Issue 7

  112. 112
    john_a_designer says:

    Gpuccio @ 102:

    Of course the function is defined in a context. There is no problem with that. However, the functional information corresponds to the minimal number of bits necessary to implement the function. The function definition will include the necessary context.

    For example, helicase will be defined as a protein that can “separate two annealed nucleic acid strands (i.e., DNA, RNA, or RNA-DNA hybrid) using energy derived from ATP hydrolysis” (from Wikipedia), of course in cells with nucleic acids and ATP.

    The DNA Helicase is composed of 3 polymers that contain 14 chains (454 amino acid residues long).

    https://cbm.msoe.edu/crest/ePosters/16DNAHelicase4ESV.html

    What is the probability that DNA Helicase could originate by chance? Below, Stephen Meyer has elucidated a method by which we can calculate the probability of a single protein originating by chance alone.

    Various methods of calculating probabilities have been offered by Morowitz, Hoyle, Cairns-Smith, Prigogine, Yockey and more recently, Robert Sauer…

    First, all amino acids must form a chemical bond known as a peptide bond so as to join with other amino acids in the protein chain. Yet in nature many other types of chemical bonds are possible between amino acids; in fact, peptide and non-peptide bonds occur with roughly equal probability. Thus, at any given site along a growing amino acid chain the probability of having a peptide bond is roughly 1/2. The probability of attaining four peptide bonds is: (1/2 x 1/2 x 1/2 x 1/2)=1/16 or (1/2)^4. The probability of building a chain of 100 amino acids in which all linkages involve peptide linkages is (1/2)^100 or roughly 1 chance in 10^30th. Second, in nature every amino acid has a distinct mirror image of itself, one left-handed version or L-form and one right-handed version or D-form. These mirror-image forms are called optical isomers. Functioning proteins tolerate only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain 100 amino acids long is again (1/2)^100 or roughly 1 chance in 10^30th. The probability of building a 100 amino acid length chain at random in which all bonds are peptide bonds and all amino acids are L-form would be (1/4)^100 or roughly 1 chance in 10^60th (zero for all practical purposes given the time available on the early earth). Functioning proteins have a third independent requirement, the most important of all; their amino acids must link up in a specific sequential arrangement, just as the letters in a meaningful sentence must. In some cases, even changing one amino acid at a given site can result in a loss of protein function.

    Moreover, because there are twenty biologically occurring amino acids the probability of getting a specific amino acid at a given site is small, i.e. 1/20. (Actually the probability is even lower because there are many non-proteineous amino acids in nature). On the assumption that all sites in a protein chain require one particular amino acid, the probability of attaining a particular protein 100 amino acids long would be (1/20)^100 or roughly 1 chance in 10^130th. We know now, however, that some sites along the chain do tolerate several of the twenty proteineous amino acids, while others do not. The biochemist Robert Sauer of M.I.T has used a technique known as “cassette mutagenesis” to determine just how much variance among amino acids can be tolerated at any given site in several proteins. His results have shown that, even taking the possibility of variance into account, the probability of achieving a functional sequence of amino acids in several functioning proteins at random is still “vanishingly small,” roughly 1 chance in 10^65th, an astronomically large number. (There are 10^65th atoms in our galaxy).

    http://www.arn.org/docs/meyer/sm_origins.htm

    Actually I believe that the probability for a 100 aa protein forming by chance would be 1 in 10^30 x 1 in 10^30 x 1 in 10^130 = 1 in 10^190, according to Meyer, or 1 in 10^125, according to Sauer (the probabilities multiply, so the exponents add). For some reason Meyer doesn’t give us the grand total– a chance probability that for all intents and purposes is impossible. The probability of the 454 aa helicase forming by chance is therefore absolutely staggering. Someone else can do the calculation. I won’t because it would be pointless to do so. Again, helicase forming by chance in isolation would have no function. Helicase’s function depends on the existence of the system of which it is a part, and that technically involves the entire cell.
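
    Since exponents like these are easy to mangle, here is a small sketch of the arithmetic (the probabilities in Meyer's example multiply, so their base-10 exponents add):

        import math

        n = 100                              # chain length in Meyer's example
        peptide_bonds = n * math.log10(2)    # ~30: all-peptide bonds, (1/2)^100
        homochirality = n * math.log10(2)    # ~30: all L-form, (1/2)^100
        exact_sequence = n * math.log10(20)  # ~130: one specific sequence, (1/20)^100

        total_exponent = peptide_bonds + homochirality + exact_sequence
        print(round(total_exponent))         # ~190, i.e. roughly 1 chance in 10^190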

    Therefore, if we really want to grasp the probabilities we need to calculate the probability of a basic prokaryotic cell. I believe Harold Morowitz has already done something like that. The number is “astronomical.”

    Please note, I am not arguing this is necessarily true of every function within the cell. For example, the bacterial flagellum adds the function of motility to the cell, but since there are many prokaryotes which lack motility it is obviously not essential for survival. On the other hand, the flagellum is not constructed out of a single protein. Are there any stand-alone single-function proteins which add functionality to the cell? I am not suggesting that there are not. I just can’t think of any.

  113. 113
    gpuccio says:

    John_a_designer:

    Of course I agree with you.

    Given living cells, with their complex systems already existing, the probability of a new functional protein will be linked mainly to the probability of getting the right sequence of nucleotides in a protein coding gene. Which is, however, astronomically small for almost all proteins.

    I have said here many times that even one complex protein is enough to falsify darwinism.

    The most difficult aspect in computing functional complexity for observed proteins is to estimate the target space. The search space is easy enough, and for all practical purposes it can be equated to 20^n, where n is the number of amino acids in the observed protein.

    But the target space, IOWs the number of those sequences that could still perform the function we observe at a biologically relevant level, is much more difficult to estimate.

    I am not familiar with Sauer’s method that you quote. I will try to take a look at it; it seems interesting.

    I have quoted here many times Durston’s method, based on conservation in protein families. And of course I have used many times, in detail, a method developed by me, inspired by ideas similar to Durston’s, based on homologies conserved over long evolutionary times.

    Using that method, for example, I have shown here:
    https://uncommondescent.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/

    that “more than 1.7 million bits of unique new human-conserved functional information are generated in the proteome with the transition to vertebrates”.

    So, if one single protein is enough to falsify darwinism, 1.7 million bits of new, original functional information generated in a specific evolutionary event, in a time window of a few million years at most, is its final death certificate.

    And this is just about protein coding sequences, without considering the huge functional information arising in the epigenome, in all regulatory parts, and so on.

    I am really happy that I do not have to defend the neo-darwinian theory. Of course Intelligent Design is the only reasonable approach, it’s as simple as that.
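
    For readers curious about what a conservation-based estimate looks like in practice, here is a very rough toy sketch in Python of the general idea (my own illustration only; it is neither Durston's published procedure nor the BLAST-based method described above, both of which include corrections that this ignores):

        import math
        from collections import Counter

        def conservation_bits(aligned_sequences):
            # For each alignment column, take log2(20) minus the observed Shannon
            # entropy of the residues, then sum over columns. Gaps are skipped.
            total = 0.0
            for i in range(len(aligned_sequences[0])):
                column = [seq[i] for seq in aligned_sequences if seq[i] != "-"]
                if not column:
                    continue
                n = len(column)
                entropy = -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
                total += math.log2(20) - entropy
            return total

        # toy alignment: two fully conserved columns and one variable column
        print(round(conservation_bits(["MKV", "MKL", "MKI"]), 1))   # ~11.4 bits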

  114. 114
    kairosfocus says:

    GP @ 22, welcome back, we missed you. I note, there are dynamic-stochastic systems that blend chance and necessity, with feedback and lags bringing memory and reflexive causal aspects. They are a refinement, they don’t change the main point. KF

  115. 115
    kairosfocus says:

    PS, 113, I note that for some cases, once the config space is big enough, atomic resources of the sol system or observed cosmos are insufficient to carry out a search that rises above rounding down to zero. 500 – 1,000 bits suffices. At that point, the needle in a haystack challenge already makes blind chance and/or mechanical necessity maximally implausible without explicit probability estimates. And if one suggests a golden search, a search in a config space is a subset sampled, so a higher order search for a golden search is a search of the power set, where for 500 bits, the log of the power set’s cardinality is itself on the order of 10^150. I don’t try to give the actual number, that’s calculator smoking territory; indeed when I asked an online big num calculator to spit it out for me, it complained that it cannot handle a number that large. Hence, the Dembski point that search for search is exponentially harder than direct search. The FSCO/I result is hard to evade, just like fine tuning.
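
    A minimal sketch of that arithmetic, using nothing beyond the 500-bit figure:

        import math

        bits = 500
        # the configuration space has 2^500 states, i.e. about 3.3 x 10^150 of them
        log10_states = bits * math.log10(2)
        # the power set of that space has 2^(2^500) subsets, so even the base-2
        # logarithm of its cardinality is 2^500 itself, about 3.3 x 10^150
        log2_power_set = 2 ** bits

        print(round(log10_states, 1))          # ~150.5
        print(f"{float(log2_power_set):.2e}")  # ~3.27e+150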

  116. 116
    OLV says:

    DNA replication

    Differences in firing efficiency, chromatin, and transcription underlie the developmental plasticity of the Arabidopsis DNA replication origins
    Joana Sequeira-Mendes, Zaida Vergara, Ramon Peiró, Jordi Morata, Irene Aragüez, Celina Costas, Raul Mendez-Giraldez, Josep M. Casacuberta, Ugo Bastolla and Crisanto Gutierrez

    DOI: 10.1101/gr.240986.118

    Genome Res. 2019. 29: 784-797

    Eukaryotic genome replication depends on thousands of DNA replication origins (ORIs).

    A major challenge is to learn ORI biology in multicellular organisms in the context of growing organs to understand their developmental plasticity.

    We have identified a set of ORIs of Arabidopsis thaliana and their chromatin landscape at two stages of post-embryonic development.

    ORIs associate with multiple chromatin signatures including transcription start sites (TSS) but also proximal and distal regulatory regions and heterochromatin, where ORIs colocalize with retrotransposons.

    strong ORIs have high GC content and clusters of GGN trinucleotides.

    Development primarily influences ORI firing strength rather than ORI location.

    ORIs that preferentially fire at early developmental stages colocalize with GC-rich heterochromatin, but at later stages with transcribed genes, perhaps as a consequence of changes in chromatin features associated with developmental processes.

    Our study provides the set of ORIs active in an organism at the post-embryo stage that should allow us to study ORI biology in response to development, environment, and mutations with a quantitative approach.

    In a wider scope, the computational strategies developed here can be transferred to other eukaryotic systems.

  117. 117
    OLV says:

    GP @113:

    “I am really happy that I do not have to defend the neo-darwinian theory. Of course Intelligent Design is the only reasonable approach, it’s as simple as that.”

    🙂

    It feels good to be on the winning side of the debate.

    The neo-Darwinian ideas are under attack from the Third Way folks too, who are not ID friendly.

  118. 118
    john_a_designer says:

    Gpuccio @ 113,

    I am not familiar with Sauer’s method, that you quote. I will try to give a look at it, it seems interesting.

    Neither am I. I was quoting Meyer, who was alluding to Sauer, who apparently has an argument that proteins do not have to be so highly specified. Indeed, as I am sure you already know, there is some variability in the sequencing of well known proteins. For example, not all cytochrome-c is the same. Douglas Axe, I know, has done some work that, at least from what I understand, “probably puts an ax” to Sauer’s higher probability estimate. However, even if Sauer is right, 1 in 10^125 for a 100 aa protein does not bode well for any kind of naturalistic explanation.

    In his book, The Varieties of Scientific Experience: A Personal View of the Search for God,* Carl Sagan also calculates the probability of “a modest” 100 aa long enzyme. “A way to think of it,” he writes, “is a kind of a necklace on which there are a hundred beads. There are twenty different kinds any one of which could be in any one of these positions. To reproduce the molecule precisely, you have to put all the right beads– all the right amino acids– in the molecule in the right order. If you were blindfolded,” Sagan then goes on to explain, your chances of coming up with the right sequence by chance alone, he calculates, are about 1 in 10^130, the same result as Stephen Meyer. (By the way, Sagan gave these lectures in 1985 before there even was a modern ID movement.) He then adds that, “Ten to the hundred-thirtieth power, or 1 followed by 130 zeros, is vastly more than the total number of elementary particles in the entire universe, which is only [only?] about ten to the eightieth (10^80th).” (p.99-100)

    He then, like Dembski, factors in Planck time along with a universe full of planets with oceans like our own (not that that really helps any) and concludes (ta-dah!): “You could never produce one enzyme molecule of predetermined structure.” Of course, ID’ists agree that there is not enough time or chance in the entire universe to form even one modest protein molecule. But not so fast.

    Sagan then tries to very deftly take back with his left hand what he has just put on the table with his right. (Haven’t we seen this act before?)

    “Now let’s take another look,” he writes on page 101. “Does it matter if I have a hemoglobin molecule here and I pull out this aspartic acid and put in a glutamic?” (Notice that in less than 2 pages he has gone from a protein of 100 aa’s to one with over 850 aa’s. I won’t explain why he does this– well, actually I am really not sure why.) “Does that make the molecule function less well? In most cases it doesn’t. In most cases an enzyme has a so-called active site, which is generally about five amino acids long. And it’s the active site that does the stuff. And the rest of the molecule is involved in folding and turning the molecule on or turning it off. And it’s not a hundred places you need to explain, it’s only five to get going. And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday. Now, remember what we are trying to do: We’re not trying to make a human being from scratch… What we’re asking for is something that gets life going, so this enormously powerful sieve of Darwinian natural selection can start pulling out the natural experiments and encouraging them, and neglecting the cases that don’t work.”

    Does Sagan have a point here? Remember he gave the Gifford lectures right after his big success with Cosmos, so publicly he had achieved scientific “rock star” status. Nevertheless, I, a mere layman, can spot at least half a dozen big flaws– actually major blunders– in his argument. Does anyone else see them?

    But who am I to question the great Carl Sagan? He was one of the pioneers of astrobiology and SETI and played a big role in helping to design the scientific instruments for NASA’s Viking Mars landers and Voyager interplanetary probes. Who am I with zero scientific credentials to my name to question such greatness? So am I mistaken in thinking that Sagan’s thinking is mistaken?

    Again, what do you see?

    *[According to Wikipedia: “The Varieties of Scientific Experience: A Personal View of the Search for God is a book collecting transcribed talks on the subject of natural theology that astronomer Carl Sagan delivered in 1985 at the University of Glasgow as part of the Gifford Lectures.[1] The book was first published posthumously in 2006, 10 years after his death. The title is a reference to The Varieties of Religious Experience by William James.

    The book was edited by Ann Druyan, who also provided an introduction section…”]

  119. 119
    daveS says:

    JAD,

    Nevertheless, I a mere layman, can spot at least half a dozen big flaws– actually major blunders– in his argument. Does anyone else see them?

    I don’t know thing 1 about this stuff, but I would appreciate an enumeration of these blunders at some point.

    Edit: And I don’t doubt that Sagan made many errors. The discussion of the death of Hypatia in the original Cosmos tv series apparently is a well-known example.

  120. 120
    OLV says:

    John_a_designer @118:

    “But who am I to question the great Carl Sagan?”

    You are a thinking person hence you may question anybody. You may not get any coherent answer back, but that’s not your problem.

  121. 121
    gpuccio says:

    KF at #114 and #115:

    Hi, always great to discuss with you! 🙂

    Of course, chance and necessity are often mixed in systems. That’s why we have to try to separate them in our evaluations. But, as you correctly say, that does not change the main point.

  122. 122
    gpuccio says:

    OLV:

    “It feels good to be on the winning side of the debate.”

    Yes, even if almost everybody thinks the opposite, it’s truth that counts! 🙂

    And thank you for your usual very interesting links and quotes.

  123. 123
    gpuccio says:

    John_a_designer at #118 and DaveS at #119:

    Of course not all AA positions in a protein sequence have the same functional specificity. That’s why indirect methods based on homology, like Durston’s and mine, help to evaluate the real functional information.

    Let’s take for example a 154 AAs protein, human myoglobin. If all AAs had to be exactly what they are for the protein to be functional, the functional information in the protein would be:

    -log2(1:20^154) = about 665 bits.

    But of course that’s not the case. We know that many AAs can be different, and others cannot. Moreover, some AA positions are almost indifferent to the function (they can be any of the 20 AAs), while others can only change into some similar AA. All that is well known.

    It is false that only the active site is important for the protein function. The whole structure is very important, and it depends on most of the AA positions. The active site, certainly, has a very specific role, but it is only part of the story.

    So, how can we have an idea of how big functional information is in human myoglobin?

    My method is rather simple. If we blast the human protein against, for example, cartilaginous fishes, we get a best hit of 127 bits (Heterodontus portusjacksoni), and very similar values for other members of the group (123 bits for Callorhinchus milii and 121 bits for Rhincodon typus). That means that about 120 bits of functional information have been conserved between cartilaginous fish and humans.

    That value is very conservative. It corresponds to about 65 identities and 87 positives (in the best hit), and is already heavily corrected for chance similarities. So, we can be rather safe if we take it as a measure of the real functional information: the true value will almost certainly be higher.

    The reason why conserved homology corresponds to function is very simple: cartilaginous fishes and humans are separated by more than 400 million years in evolutionary history. In that time window, any nucleotide sequence in the genome will be saturated by neutral variation, IOWs it will show no detectable homology in the two groups, unless it is preserved by negative, purifying selection, because of its functional role.

    So, we can see that myoglobin is after all not so functionally specific. As a 154 AA sequence, it shows at least 120 bits of functional complexity, which is not so much. Still a lot, however.

    That is not surprising, because the structure of the myoglobin molecule is rather simple: it is a globular protein, with one well defined active site. Not the most complex of the lot.
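    For anyone who wants to reproduce this kind of cross-group comparison, here is a minimal sketch using Biopython’s online BLAST interface. It assumes Biopython is installed and NCBI is reachable; the placeholder sequence variable and the Chondrichthyes organism filter are illustrative choices of mine, not part of the method described above:

```python
# Minimal sketch: blast a human protein against cartilaginous fishes via NCBI.
from Bio.Blast import NCBIWWW, NCBIXML

human_myoglobin = "MGLSDGEWQL..."  # placeholder: paste the full human myoglobin sequence here

handle = NCBIWWW.qblast("blastp", "nr", human_myoglobin,
                        entrez_query="Chondrichthyes[Organism]")
record = NCBIXML.read(handle)

# Print bit score, identities and positives for the top few hits.
for alignment in record.alignments[:3]:
    hsp = alignment.hsps[0]
    print(alignment.title[:60], hsp.bits, hsp.identities, hsp.positives)
```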

    Now, let’s consider another protein that I have discussed many times: the beta subunit of ATP synthase.

    Again, let’s consider the human form: a 529 AA long sequence, P06576.

    Now, as this is a very old protein, originating in bacteria, let’s blast the human form against the same protein in E. coli, a well known prokaryote (P0ABB4, a 460 AA long sequence).

    The result is rather amazing: the two sequences show 660 bits of homology, after a separation of billions of years!

    We can have no doubts that those 660 bits are true functional information.

    However, as you can see, the functional information as evaluated by this method is always much less than the total possible information for a sequence that long. That’s because of course many positions are not functionally specific, and also because the BLAST method is very conservative.

    Anyway, the beta subunit of ATP synthase, which is only a part of a much more complex molecule, is more than enough, with its (at least) 660 bits of functional information, to demonstrate biological design.

    And it’s just one example, among thousands!

  124. 124
    daveS says:

    Thanks, gpuccio. I’ll have to chew on that (although I doubt my understanding will reach beyond the superficial).

  125. 125
    daveS says:

    gpuccio,

    Two questions, if you please.

    In the calculation of functional information, we take -log_2 of the ratio of the number of functional structures to the total number of structures possible in a particular system. It’s essentially -log_2 of a conditional probability P(E | F) where E and F are very precisely defined events.

    OTOH, these BLAST scores in bits are calculated (via various schemes, I take it) simply by comparing two sequences.

    I think you allude to this, but should it be clear that the BLAST numbers are lower bounds for the amount of functional information? In particular, for the E and F that Sagan is referring to? (I take it that E = some form of life arising via these proteins and F = a “primordial soup” exists).

  126. 126
    daveS says:

    gpuccio,

    Please disregard the above post—I’ll go back to chewing.

  127. 127
    gpuccio says:

    DaveS at #125 and 126:

    OK, but maybe I can help a little with your chewing! 🙂

    The logical connection between functional information and homology conservation is probably not completely intuitive, so I will try to give some input about that point.

    First of all, we must consider that the information in the genome, be it functional or not, is subject to what is called neutral variation. IOWs, errors in DNA duplication will affect the sequence of nucleotides in time.

    The process is slow, but evolutionary times are long. So, in principle, because of Random Variation, each sequence in the genome of living beings would lose any connection with the original form, given enough time.

    However, luckily that is true only for neutral or quasi neutral variation. IOWs, for sequences that have no function.

    If the sequence is functional, and if the function is relevant enough (IOWs, if it can be seen by Natural Selection), what happens is that change is not allowed beyond some level: if the sequence changes enough, so that its function is lost or severely impaired, that variation is eliminated by what is called negative selection, or purifying selection.

    Now, while positive selection is elusive and scarcely detectable in most cases, negative selection is really a powerful and ubiquitous force of nature. It is the reason why proteins retain much of their sequence specificity through hundreds or thousands of millions of years, in spite of random variation.

    All those things are well known, and can be easily proved. Neutral variation is well detectable in non functional, or weakly functional, sites, for example in the third nucleotide in protein coding genes, which usually can change without affecting the protein sequence.

    The concept of “saturation” is also important: it is the time necessary to erase any similarity between two neutral (non functional) sequences, because of RV. While that time can vary in different cases, in general an evolutionary split of 200 – 400 million years will be enough to cancel any detectable or significant homology between two neutral, non functional sequences in the genome.
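    To make the saturation idea concrete, here is a small, purely illustrative simulation (not Durston’s or any published method): a neutral sequence accumulates random substitutions, and its detectable identity with the original decays toward the roughly 5% expected by chance.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def residual_identity(length=150, subs_per_site=3.0, seed=0):
    """Fraction of positions still identical after random neutral substitutions."""
    rng = random.Random(seed)
    original = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    drifted = original[:]
    for _ in range(int(subs_per_site * length)):
        pos = rng.randrange(length)
        drifted[pos] = rng.choice(AMINO_ACIDS)  # a substitution may also restore the original AA
    return sum(a == b for a, b in zip(original, drifted)) / length

for rate in (0.1, 0.5, 1.0, 3.0):
    print(f"{rate} substitutions/site -> identity ~ {residual_identity(subs_per_site=rate):.0%}")
```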

    More in next post.

  128. 128
    gpuccio says:

    DaveS (and all interested):

    The concepts I have summarized in my previous post are the foundation for using homology to detect functional information. Of course the idea is not mine. Durston, in particular, has applied it brilliantly in his important paper. However, I have developed a personal approach and methodology which is slightly different, even if inspired by the same principles.

    So, let’s imagine that we have two sequences that are 100% identical, and that are found in two species separated by more than 400 million years of evolutionary time. For example, humans and cartilaginous fishes, which is the scenario I have analyzed many times here.

    That is a very good scenario for our purposes, because the evolutionary split between the two groups is supposed to be more than 400 million years old. So, if we compare a human protein with the same protein in sharks, we have two proteins separated by more than 400 million years of time (humans, of course, are derived from bony fishes).

    So, let’s suppose that the same protein, with the same structure and function, has identical AA sequence in the two groups. That is not true for any protein I know of, but it is useful for our reasoning.

    So, let’s say that we have a 150 AA protein, with a well known important function, and that our protein has exactly the same AA sequence in the two groups: humans and sharks.

    Will the two protein coding genes be the same? Of course not. The third nucleotide will be different in most sites, because of neutral variation. In most cases, it can change without affecting the sequence of AAs, and in 400+ million years those sites really did change.

    But, as said, let’s suppose the AA sequence is exactly the same.

    What does that mean?

    As explained, it means that we can confidently assume that all those 150 AAs must be what they are, for the protein to be really functional. Or, at least, most of them.

    As said, that is true of no real protein. But if it were true, what does it mean?

    It simply means that the target space is 1.

    So, in that case, the computation is easy.

    Target space = 1

    Search space = 20^150 = 2^648

    Target space / Search space ratio = 2^ -648

    Functional information = 648 bits.

    Of course, this is the highest information possible for a 150 AA sequence. In this extreme case, the functional information is the same as the total potential information in the sequence.

    So, we can easily see that, in this very special, and unrealistic, case, each AA identity corresponds to about 4.3 bits of functional information.
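    The same computation, written out as a minimal sketch (the helper name is mine, chosen for illustration):

```python
import math

def functional_information_bits(target_space, search_space):
    """Functional information = -log2(target space / search space)."""
    return -math.log2(target_space / search_space)

search_space = 20 ** 150                                     # all possible 150 AA sequences
print(functional_information_bits(1, search_space))          # fully constrained: ~648.3 bits
print(functional_information_bits(1, search_space) / 150)    # per position: ~4.32 bits
```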

    More in next post.

  129. 129
    gpuccio says:

    DaveS (and all interested):

    A couple of important points:

    1) It is very important that we consider long evolutionary separations. If we compare the same protein in humans and chimps, it will be almost identical in most cases. But the meaning here is different. The evolutionary separation between human and chimps is rather short. Therefore, neutral variation operated only for a very short time, and neutral sequences can be almost the same in the two groups just because there was not enough time to change. IOWs, the homology could be simply a passive result.

    2) Identities are not the whole story. We must also consider similarities, IOWs AAs that are substituted by very similar ones.

    Now, let’s see how the BLAST algorithm works. Again, let’s consider the homology between human ATP synthase subunit beta and the same protein in E. coli. Proteins P06576 and P0ABB4. The length is similar, but not identical (529 vs 460).

    Comparing the two sequences, we find:

    Identities: 334

    Positives: 382 (that includes the identities)

    Score: 660 bits

    Expect: 0.0

    IOWs, the algorithm has performed an empirical alignment between the two sequences, and found 334 identities and almost 50 similarities. The algorithm computes a bitscore of 660 and an E value of practically 0 (that is more or less a p value related to the null hypothesis that the two sequences are evolutionarily unrelated, and that the observed similarities are due to chance, given the number of comparisons performed and the number of sequences in the present protein database).

    Now, I will try to show why the Blast algorithm is very conservative, when used to evaluate functional information.

    If we reason according to the full potential of functional information, and just stick to identities, 334 identities would correspond to:

    334 x 4.3 = 1436 bits

    Indeed, the raw bitscore is given by the Blast algorithm as 1703 bits. But the algorithm works in a way that the final, corrected value is “only” 660 bits.
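    For readers wondering how the raw score, the bit score and the Expect value are related: BLAST uses Karlin–Altschul statistics, in which the raw alignment score is rescaled into a bit score, and the E value is roughly the size of the search space times 2 to the minus bit score. A minimal sketch follows; the lambda and K constants are approximately the usual gapped BLOSUM62 defaults, and the database length is a made-up round number, so this is an illustration, not the exact NCBI calculation:

```python
import math

def bit_score(raw_score, lam=0.267, K=0.041):
    """Karlin-Altschul rescaling of a raw alignment score into bits."""
    return (lam * raw_score - math.log(K)) / math.log(2)

def expect_value(bits, query_len, db_len):
    """Expected number of chance alignments reaching this bit score."""
    return query_len * db_len * 2 ** (-bits)

print(round(bit_score(1703), 1))             # raw score 1703 -> ~660 bits, matching the reported score
print(expect_value(660, 529, 100_000_000))   # effectively zero, hence "Expect: 0.0"
```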


  130. 130
    gpuccio says:

    DaveS (and all interested) (continued):

    IOWs, the Blast algorithm usually computes about 2 bits for each identity. Considering that we are dealing with logarithmic values, that is an extreme underestimation.

    But the Blast tool is easy to use, and universally used in biology. So I stick to its result, even if certainly too conservative for my purposes. Of course that is a lot of compensation for other possible factors (of course some of the identities could be a random effect, and there could be some redundancy in the functional information, and so on).

    Even considering all those aspects, one single protein like the beta subunit of ATP synthase is more than enough to infer design. And, as said, there are thousands of examples like that.
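    Putting the two estimates side by side, a minimal sketch of the arithmetic (using log2(20) ≈ 4.32 rather than the rounded 4.3, hence a slightly larger upper bound than the 1436 bits quoted above):

```python
import math

identities = 334                          # human vs E. coli ATP synthase beta subunit
bits_per_identity_max = math.log2(20)     # ~4.32 bits if a position must be exactly one AA

naive_upper_bound = identities * bits_per_identity_max
blast_bit_score = 660                     # the corrected score reported by BLAST

print(f"naive upper bound: {naive_upper_bound:.0f} bits")
print(f"BLAST bit score:   {blast_bit_score} bits "
      f"(~{blast_bit_score / identities:.1f} bits per identity)")
```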

  131. 131
    daveS says:

    gpuccio,

    Thanks very much, this is helpful. I was looking for a “BLAST for Dummies” type tutorial, but even those assume background that I don’t have. Anyway, the way you have broken it down clears up a lot of questions I had.

  132. 132
    kairosfocus says:

    GP, good stuff, of course log_2 20 = 4.32, i.e. 4.32 bits/character. KF

  133. 133
    john_a_designer says:

    Earlier @ 118 I asked:

    “I can spot at least half a dozen big flaws– actually major blunders– in [Sagan’s] argument. Does anyone else see them?”

    https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679457

    Gpuccio @123 pointed out a couple of problems. For example:

    “It is false that only the active site is important for the protein function. The whole structure is very important, and it depends on most of the AA positions. The active site, certainly, has a very specific role, but it is only part of the story.”

    Earlier @ 112 I pointed out a couple of other problems that Sagan doesn’t even mention.

    Actually, I believe that the probability of a 100 aa protein forming by chance would be 1 in (10^30 x 10^30 x 10^130) = 1 in 10^190, according to Meyer, or 1 in 10^125, according to Sauer.

    https://uncommondescent.com/intelligent-design/why-describing-dna-as-software-doesnt-really-work/#comment-679203

    Even if we grant, for the sake of argument, Sagan’s claim that “it’s not a hundred places you need to explain, it’s only five to get going… And 20^5 is an absurdly small number, only about five million. Those experiments are done in one ocean between now and next Tuesday,” he completely ignores (1) the chirality problem and (2) the problem of creating the right chemical bond– a peptide bond. In the quote I provided above at 112, Meyer gives a very succinct explanation as to why:

    The probability of building a chain of 100 amino acids in which all linkages involve peptide linkages is (1/2)^100 or roughly 1 chance in 10^30. Second, in nature every amino acid has a distinct mirror image of itself, one left-handed version or L-form and one right-handed version or D-form. These mirror-image forms are called optical isomers. Functioning proteins tolerate only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain 100 amino acids long is again (1/2)^100 or roughly 1 chance in 10^30. The probability of building a 100 amino acid length chain at random in which all bonds are peptide bonds and all amino acids are L-form would be (1/4)^100 or roughly 1 chance in 10^60…

    Again, Sagan completely ignores these two problems, even though they were well known by OoL researchers at the time. Indeed, as a layman who had an interest in the subject at the time (1985) I knew about them. Why didn’t astronomer/astrobiologist Dr. Sagan know?

    In other words, even if we accept his 20^5, the probability of forming a single 100 aa protein by chance has to be (converting to base 10) roughly 1 in 10^66.
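    To make that arithmetic explicit, a minimal sketch combining the figures quoted above (the inputs are the order-of-magnitude estimates from Meyer and Sagan, not new calculations):

```python
import math

length = 100                          # chain length in amino acids

p_peptide_bonds = 0.5 ** length       # all backbone links are peptide bonds: ~1 in 10^30
p_homochiral = 0.5 ** length          # all residues are L-form: ~1 in 10^30
p_sequence_sagan = (1 / 20) ** 5      # Sagan's "only five positions matter": ~1 in 3.2 million

p_total = p_peptide_bonds * p_homochiral * p_sequence_sagan
print(f"combined probability: about 1 in 10^{-math.log10(p_total):.1f}")
```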

    However, that creates another problem that Sagan completely ignores. A single protein floating alone in an ocean won’t evolve into anything, even if we assume Darwinian evolution, because proteins do not self-replicate. So even a universe of oceans full of amino acids and proteins wouldn’t get you anywhere. This is why the majority of OoL researchers have moved on to the RNA world hypothesis, but that has a whole set of problems of its own.

    However, I am sure that Sagan’s audience was so enamored by his scientific “rock star” status that they gave him a complete pass on a subject that the average person knows very little about.

    Can anyone else see any other problems with Sagan’s argument?

  134. 134
    OLV says:

    Epigenetic regulation of glycosylation is the quantum mechanics of biology
    Gordan Lauc, Vlatka Zoldoš
    DOI: 10.1016/j.bbagen.2013.08.017
    Biochimica et Biophysica Acta (BBA)
    Volume 1840, Issue 1, Pages 65-70

    Highlights
    The majority of proteins are glycosylated.
    Glycan parts of proteins perform numerous structural and functional roles
    There are no genetic templates for glycans, instead glycans are defined by dynamic interaction between genes and environment.
    Epigenetic changes enable adaptation to variations in environment.
    Epigenetic regulation of glyco-genes is a powerful evolutionary tool.

    Abstract
    Background
    Most proteins are glycosylated, with glycans being integral structural and functional components of a glycoprotein. In contrast to polypeptides, which are fully encoded by the corresponding gene, glycans result from a dynamic interaction between the environment and a network of hundreds of genes.

    Scope of review
    Recent developments in glycomics, genomics and epigenomics are discussed in the context of an evolutionary advantage for higher eukaryotes over microorganisms, conferred by the complexity and adaptability which glycosylation adds to their proteome.

    Major conclusions
    Inter-individual variation of glycome composition in human population is large; glycome composition is affected by both genes and environment; epigenetic regulation of “glyco-genes” has been demonstrated; and several mechanisms for transgenerational inheritance of epigenetic marks have been documented.

    General significance
    Epigenetic recording of acquired characteristics and their transgenerational inheritance could be important mechanisms used by higher organisms to compete or collaborate with microorganisms.

Leave a Reply