
From Evolution News: Prigogine’s Self-Organization vs. Specified Biological Complexity


This excerpt addresses some issues raised in a recent UD post and its comments.

Physicist Brian Miller writes:

Other scientists, such as Ilya Prigogine, have attempted to compare the order in cells to the order created by such self-organizing processes as the formation of a funnel cloud in a tornado. These attempts also fall short since such appeals can only explain the order of a repeating or chaotic pattern but not that of specified information.

Yockey pointed out that Prigogine and Nicolis invoked external self-organizational forces to explain the origin of order in living systems. But, as Yockey noted, what needs explaining in biological systems is not order (in the sense of a symmetrical or repeating pattern), but information, the kind of specified digital information found in software, written languages, and DNA. (Signature in the Cell, p. 255)

Others, such as complex-systems researcher Stuart Kauffman, have attempted to generate complex patterns out of self-organizing or autocatalytic systems and then relate them to life. However, all such attempts require that the initial conditions or arrangement of molecules is precisely specified. In other words, specified structures cannot be generated unless information is provided.

Thus, to explain the origin of specified biological complexity at the systems level, Kauffman has to presuppose a highly specific arrangement of those molecules at the molecular level as well as the existence of many highly specific and complex protein and RNA molecules. In short, Kauffman merely transfers the information problem from the molecules into the soup. (Signature in the Cell, p. 264)

All such attempts to explain life by natural processes make a fundamental error. They fail to distinguish between the order created by natural processes, such as water freezing to form a snowflake, and specified complexity. The former results from natural laws directing the arrangement of molecules. However, for a medium to contain information/specified complexity, it must have the freedom to take on numerous possible arrangements of parts. Correspondingly, law-like processes determine outcomes making arrangements that are highly probable, but the presence of information corresponds to patterns that are highly improbable.

Instead, information emerges from within an environment marked by indeterminacy, by the freedom to arrange parts in many different ways. As the MIT philosopher Robert Stalnaker puts it, information content “requires contingency”…the more improbable an event, the more information its occurrence conveys. In the case that a law-like physical or chemical process determines that one kind of event will necessarily and predictably follow another, then no uncertainty will be reduced by the occurrence of such a high-probability event. Thus, no information will be conveyed. (Signature in the Cell, p. 250-251)
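
Stalnaker's point that "the more improbable an event, the more information its occurrence conveys" is exactly Shannon's (1948) self-information measure. A minimal sketch in Python (the function name is ours, for illustration): an outcome fixed by law-like necessity (probability 1) carries zero bits, while a highly improbable outcome carries many.

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return 0.0 if p == 1 else -math.log2(p)

# A law-determined, certain outcome conveys nothing:
print(self_information_bits(1.0))        # 0.0
# A fair coin flip conveys one bit:
print(self_information_bits(0.5))        # 1.0
# The more improbable the event, the more bits its occurrence conveys:
print(self_information_bits(2 ** -100))  # 100.0
```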

This confusion has been pointed out by such experts in the field as Hubert Yockey, a pioneer in applying information theory to biology. In particular, he pointed out why order generated by natural processes cannot explain the biological encoding of information. Meyer cites him on this:

Thus, as Yockey notes: “Attempts to relate the idea of order…with biological organization or specificity must be regarded as a play on words that cannot stand careful scrutiny. Informational macromolecules can code genetic messages and therefore can carry information because the sequence of bases or residues is affected very little, if at all, by [self-organizing] physicochemical factors.” (Signature in the Cell, p. 257)

The described technical details are important, but the basic challenge is easily understood by anyone via a simple analogy. Physical processes can produce various types of order, such as that seen in a hurricane. But no one has ever run to a lumber yard before a hurricane expectantly waiting for the oncoming winds to arrange the lumber into a new house. Instead, they wait in dread to see how a hurricane might demolish a home into a pile of debris. The same tendency holds true for life. Physical processes tend to break apart complex biological structures into simpler chemicals. None will organize a wide variety of molecules into fantastically improbable configurations that achieve such functional goals as processing energy, building molecular machines, and maintaining homeostasis. Only intelligence can build such complex structures for such purposeful ends.

View entire article at Evolution News.

26 Replies to “From Evolution News: Prigogine’s Self-Organization vs. Specified Biological Complexity”

  1. bornagain77 says:

    Of related note:

    Brian Miller – Thermodynamics, the Origin of Life, and Intelligent Design – 2019 video
    https://www.youtube.com/watch?v=YAXiHRPZz0s

  2. asauber says:

    But who organized the organizer?

    Andrew

  3. bornagain77 says:

    A couple more notes from Dr. Miller.

    Physicist Brian Miller: Two Conundrums for Strictly Materialist Views of Biology – January 2020
    Excerpt: Nothing in nature will ever simultaneously go to both low entropy and high energy at the same time. It’s a physical impossibility. Yet life had to do that. Life had to take simple chemicals and go to a state of high energy and of low entropy. That’s a physical impossibility.
    https://evolutionnews.org/2020/01/physicist-brian-miller-two-conundrums-for-strictly-materialist-views-of-biology/

    “‘Professor Dave’ argues that the origin of life does not face thermodynamic hurdles. He states that natural systems often spontaneously increase in order, such as water freezing or soap molecules forming micelles (e.g., spheres or bilayers). He makes the very common mistake of failing to recognize that the formation of the cell represents both a dramatic decrease in entropy and an equally dramatic increase in energy. In contrast, water freezing represents both a decrease in entropy and a decrease in energy.
    More specifically, the process of freezing releases heat that increases the entropy of the surrounding environment by an amount greater than the entropy decrease of the water molecule forming the rigid structure.
    Likewise, soap molecules coalescing into micelles represents a net increase of entropy since the surrounding water molecules significantly increase in their number of degrees of freedom.
    No system without (intelligent) assistance ever moves both toward lower entropy and higher energy which is required for the formation of a cell.”
    – Brian Miller, Ph. D. – MIT
    – Episode 0/13: Reasons // A Course on Abiogenesis by Dr. James Tour
    https://youtu.be/71dqAFUb-v0?t=1434
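
    Miller's freezing example can be checked with back-of-the-envelope thermodynamics: when supercooled water freezes, the water's own entropy drops, but the released latent heat raises the entropy of the surroundings by more, so total entropy still increases. A rough sketch (the ~334 J/g latent heat and the -10 °C supercooling are illustrative assumptions; heat-capacity corrections are ignored):

```python
# Entropy bookkeeping for 1 g of supercooled water freezing at -10 degrees C.
# Illustrative numbers: latent heat of fusion ~334 J/g; the water's own
# entropy change is approximated by -L/T_melt (heat-capacity terms ignored).
L_FUSION = 334.0      # J released per gram on freezing
T_MELT = 273.15       # K
T_SURR = 263.15       # K: the -10 C surroundings absorb the released heat

dS_water = -L_FUSION / T_MELT   # J/K: the ice is MORE ordered (entropy down)
dS_surr = L_FUSION / T_SURR     # J/K: surroundings heat up (entropy up)
dS_total = dS_water + dS_surr   # second law: must come out positive

print(f"water {dS_water:+.3f}, surroundings {dS_surr:+.3f}, "
      f"total {dS_total:+.3f} J/K")
```

    The net change is positive, which is why freezing is spontaneous: the order of the ice is "paid for" by exported heat, whereas (on Miller's argument) a cell must gain order while also gaining energy.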

  4. martin_r says:

    a bit off topic, and 2 years old, but I noticed it just today …

    2020:
    Tour scores prestigious Centenary Prize
    Rice University chemist’s achievements earn top Royal Society of Chemistry honor

    The award, given annually to up to three scientists from outside Great Britain, recognizes researchers for their contributions to the chemical sciences industry or education and for successful collaborations. Tour was named for innovations in materials chemistry with applications in medicine and nanotechnology.

    https://news.rice.edu/news/2020/tour-scores-prestigious-centenary-prize#:~:text=Rice%20University%20chemist%20James%20Tour,education%20and%20for%20successful%20collaborations.

    PS: on December 6, there will be another Dr. Tour lecture debunking “professor” Dave Farina’s “OoL experts”…

    Here is a teaser video:

    https://www.youtube.com/watch?v=4rwPi1miWu4

  5. bornagain77 says:

    To elaborate on Dr. Miller’s observation,

    “Nothing in nature will ever simultaneously go to both low entropy and high energy at the same time. It’s a physical impossibility. Yet life had to do that.,,, No system without (intelligent) assistance ever moves both toward lower entropy and higher energy which is required for the formation of a cell.”
    – Dr. Brian Miller, Ph.D. (physics)

    The burning question that now needs to be answered becomes: “What exactly does it take for an intelligence to move a system toward lower entropy and higher energy, i.e., toward life?”

    Well, the short answer is that it takes an immaterial mind infusing immaterial information into a material substrate in order to move a system toward lower entropy and higher energy i.e. toward life.

    The longer, empirically backed answer is that it has now been experimentally demonstrated (via advances in quantum information theory, and the experimental realization of the Maxwell’s demon thought experiment) that immaterial information has a ‘thermodynamic content’, and that an immaterial mind has the capacity to infuse ‘thermodynamically meaningful’ immaterial information into a system in order to move that system toward lower entropy and higher energy, i.e., toward life.

    In the following 2010 experimental realization of Maxwell’s demon thought experiment, “they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.”

    Maxwell’s demon demonstration turns information into energy – November 2010
    Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
    http://www.physorg.com/news/20.....nergy.html

    As Christopher Jarzynski (who was instrumental in formulating the equation defining the amount of energy that could theoretically be converted from a unit of information) stated, “This is a beautiful experimental demonstration that information has a thermodynamic content.”

    Demonic device converts information to energy – 2010
    Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
    http://www.scientificamerican......rts-inform
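
    The “amount of energy that could theoretically be converted from a unit of information” is commonly expressed through the Landauer bound, kT ln 2 per bit (Jarzynski’s 1997 equality is a related, more general result). A quick numerical check, assuming an illustrative room temperature of 300 K:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit_joules(temperature_kelvin: float, bits: float = 1.0) -> float:
    """Landauer bound: minimum heat dissipated (equivalently, the maximum
    energy a Maxwell-demon-style conversion can extract) per `bits` of
    information at the given temperature, kT ln 2 per bit."""
    return K_B * temperature_kelvin * math.log(2) * bits

e_per_bit = landauer_limit_joules(300.0)
print(f"{e_per_bit:.3e} J per bit at 300 K")  # on the order of 3e-21 J
```

    The tiny magnitude is why the Sano experiment had to work at the nanoscale to measure the conversion at all.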

    Moreover, the Maxwell demon thought experiment has now been extended to build “a tiny machine powered purely by information.”

    New Scientist astounds: Information is physical – May 13, 2016
    Excerpt: Recently came the most startling demonstration yet: a tiny machine powered purely by information, which chilled metal through the power of its knowledge. This seemingly magical device could put us on the road to new, more efficient nanoscale machines, a better understanding of the workings of life, and a more complete picture of perhaps our most fundamental theory of the physical world.
    http://www.uncommondescent.com.....-physical/

    In fact, as of 2021, an ‘information engine’ has now been constructed that achieves “power comparable to molecular machinery in living cells.”

    World’s fastest information-fuelled engine designed by SFU researchers – May 11, 2021
    Excerpt: Simon Fraser University researchers have designed a remarkably fast engine that taps into a new kind of fuel — information.
    The development of this engine, which converts the random jiggling of a microscopic particle into stored energy, is outlined in research published this week in the Proceedings of the National Academy of Sciences (PNAS) and could lead to significant advances in the speed and cost of computers and bio-nanotechnologies.
    SFU physics professor and senior author John Bechhoefer says researchers’ understanding of how to rapidly and efficiently convert information into “work” may inform the design and creation of real-world information engines.
    “We wanted to find out how fast an information engine can go and how much energy it can extract, so we made one,” says Bechhoefer, whose experimental group collaborated with theorists led by SFU physics professor David Sivak.
    Engines of this type were first proposed over 150 years ago but actually making them has only recently become possible.
    “By systematically studying this engine, and choosing the right system characteristics, we have pushed its capabilities over ten times farther than other similar implementations, thus making it the current best-in-class,” says Sivak.
    The information engine designed by SFU researchers consists of a microscopic particle immersed in water and attached to a spring which, itself, is fixed to a movable stage. Researchers then observe the particle bouncing up and down due to thermal motion.
    “When we see an upward bounce, we move the stage up in response,” explains lead author and PhD student Tushar Saha. “When we see a downward bounce, we wait. This ends up lifting the entire system using only information about the particle’s position.”
    Repeating this procedure, they raise the particle “a great height, and thus store a significant amount of gravitational energy,” without having to directly pull on the particle.
    Saha further explains that, “in the lab, we implement this engine with an instrument known as an optical trap, which uses a laser to create a force on the particle that mimics that of the spring and stage.”
    Joseph Lucero, a Master of Science student adds, “in our theoretical analysis, we find an interesting trade-off between the particle mass and the average time for the particle to bounce up. While heavier particles can store more gravitational energy, they generally also take longer to move up.”
    “Guided by this insight, we picked the particle mass and other engine properties to maximize how fast the engine extracts energy, outperforming previous designs and achieving power comparable to molecular machinery in living cells, and speeds comparable to fast-swimming bacteria,” says postdoctoral fellow Jannik Ehrich.
    https://www.sfu.ca/university-communications/issues-experts/2021/05/world-s-fastest-information-fuelled-engine-designed-by-sfu-resea.html
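
    The SFU feedback protocol described above (raise the stage when an upward bounce is observed, wait on a downward one) can be caricatured with a toy simulation. This is our own illustrative model, not the researchers’ apparatus: the spring is a simple mean-reverting update and the thermal kicks are Gaussian noise.

```python
import random

def information_ratchet(steps: int, seed: int = 1) -> float:
    """Toy feedback ratchet: a spring-bound particle jiggles thermally
    relative to a movable stage; whenever we OBSERVE an upward jiggle we
    raise the stage to the particle's new position (locking in the gain),
    and on a downward jiggle we simply wait. No direct force is applied -
    only position information is used - yet the stage climbs on average."""
    rng = random.Random(seed)
    stage = 0.0   # accumulated height of the stage (stored "gravitational" gain)
    offset = 0.0  # particle position relative to the stage
    for _ in range(steps):
        offset = 0.9 * offset + rng.gauss(0.0, 1.0)  # spring pull + thermal kick
        if offset > 0:           # upward bounce observed...
            stage += offset      # ...raise the stage to meet the particle
            offset = 0.0         # spring is now centered at the new height
    return stage

print(information_ratchet(10_000))  # grows steadily with the number of steps
```

    The gain comes entirely from deciding *when* to move the stage, which is the sense in which position information acts as the “fuel.”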

    An ‘Information engine’ that achieves “power comparable to molecular machinery in living cells”?

    To say Darwinist materialists ‘never saw that coming’ would be an understatement. In the past, Darwinists have taken great pains to deny the significance, and foundational role, that information plays in life, and have even claimed that life is just ‘complicated biochemistry’, and have even gone so far as to claim that they could get along just as well without the ‘metaphor’ of information.

    Information Theory, Evolution, and the Origin of Life – Hubert P. Yockey, 2005
    “The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences.” (Let me provide the unstated conclusion:) But they don’t.
    http://www.uncommondescent.com.....ent-353336

    Information Theory, Evolution, and the Origin of Life – Hubert P. Yockey, 2005
    Excerpt: “Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies.”
    http://www.cambridge.org/catal.....038;ss=exc

    And as if an ‘information engine’ that achieves “power comparable to molecular machinery in living cells” was not already enough to make a committed Darwinian materialist’s head spin around in circles, in quantum information theory it is also now found that entropy is not a property of a system, but a property of an observer who describes a system.

    As the following article notes, James Clerk Maxwell said, “The idea of dissipation of energy depends on the extent of our knowledge.” Quantum information theory “describes the spread of information through quantum systems,” and, as Renato Renner puts it, fifteen years ago “we thought of entropy as a property of a thermodynamic system … Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”

    The Quantum Thermodynamics Revolution – May 2017
    Excerpt: the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.”
    In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply.
    They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured.,,,
    Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
    https://www.quantamagazine.org/quantum-thermodynamics-revolution/

  6. bornagain77 says:

    And in the following 2011 paper, researchers show that when the bits (in a computer) to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer’s state to that of the computer in such a way that they know more about the memory than is possible in classical physics. In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se; instead, an object’s entropy is always dependent on the observer.

    Quantum knowledge cools computers: New understanding of entropy – June 1, 2011
    Excerpt: Recent research by a team of physicists,,, describe,,, how the deletion of data, under certain conditions, can create a cooling effect instead of generating heat. The cooling effect appears when the strange quantum phenomenon of entanglement is invoked.,,,
    The new study revisits Landauer’s principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer’s state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,,
    In measuring entropy, one should bear in mind that an object does not have a certain amount of entropy per se, instead an object’s entropy is always dependent on the observer. Applied to the example of deleting data, this means that if two individuals delete data in a memory and one has more knowledge of this data, she perceives the memory to have lower entropy and can then delete the memory using less energy.,,,
    No heat, even a cooling effect;
    In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy.
    Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
    http://www.sciencedaily.com/re.....134300.htm

    To repeat, “In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se, instead an object’s entropy is always dependent on the observer.”

    That statement is simply devastating to the ‘bottom up’ reductive materialistic explanations of Darwinists, and it is a full empirical vindication of the presuppositions of Intelligent Design, which holds that only an Intelligent Mind has the capacity within itself to create the ‘non-physical’ information that is needed to ‘thermodynamically’ explain why life can ‘resist the ravages of entropy’.

    And although the preceding experimental evidence gets us to an intelligent mind in general, to make our inference to the Mind of God in particular more complete it is necessary to also appeal to advances in quantum biology.

    But first it is important to point out that ‘classical’ sequential information, (such as what is encoded on DNA, proteins, computers, etc..), is a subset of quantum information.

    Classical Information is a subset of Quantum information – illustration
    https://www.nsf.gov/pubs/2000/nsf00101/images/figure1.gif
    Below that illustration is this caption:
    “Figure 1: The well-established theory of classical information and computation is actually a subset of a much larger topic, the emerging theory of quantum information and computation.”
    https://www.nsf.gov/pubs/2000/nsf00101/nsf00101.htm
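
    The NSF caption’s subset claim can be made concrete: a classical bit is a qubit restricted to the two basis states, while a general qubit may be any normalized complex superposition. A minimal sketch (the function names are ours, for illustration):

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# The two classical bit values are exactly the basis states - a strict subset.
KET_0 = (1 + 0j, 0 + 0j)  # classical bit 0
KET_1 = (0 + 0j, 1 + 0j)  # classical bit 1
PLUS = (1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j)  # no classical counterpart

def is_valid_qubit(state, tol=1e-9) -> bool:
    a, b = state
    return abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < tol

def is_classical_bit(state, tol=1e-9) -> bool:
    a, b = state
    return is_valid_qubit(state) and (abs(a) < tol or abs(b) < tol)

for s in (KET_0, KET_1, PLUS):
    print(is_valid_qubit(s), is_classical_bit(s))
# All three are valid qubit states; only the first two are classical bits.
```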

    And it is also important to point out that the independent, (i.e. separate from matter and energy), ‘physical’ reality of this immaterial quantum information, (quantum information of which classical information is now found to be a subset), is fairly easily demonstrated with quantum teleportation.

    Specifically, quantum information can be teleported between photons and/or atoms without the photons and/or atoms ever physically interacting with one another.

    For instance, the following article states, “the photons aren’t disappearing from one place and appearing in another. Instead, it’s the information that’s being teleported through quantum entanglement.,,,”

    Quantum Teleportation Enters the Real World – September 19, 2016
    Excerpt: Two separate teams of scientists have taken quantum teleportation from the lab into the real world.
    Researchers working in Calgary, Canada and Hefei, China, used existing fiber optics networks to transmit small units of information across cities via quantum entanglement — Einstein’s “spooky action at a distance.”,,,
    This isn’t teleportation in the “Star Trek” sense — the photons aren’t disappearing from one place and appearing in another. Instead, it’s the information that’s being teleported through quantum entanglement.,,,
    ,,, it is only the information that gets teleported from one place to another.
    https://www.discovermagazine.com/d-brief/2016/09/19/quantum-teleportation-enters-real-world/#.V-HqWNEoDtR

    And as the following article states, “scientists have successfully teleported information between two separate atoms in unconnected enclosures a meter apart … information … is transferred from one place to another, but without traveling through any physical medium.”

    First Teleportation Between Distant Atoms – 2009
    Excerpt: For the first time, scientists have successfully teleported information between two separate atoms in unconnected enclosures a meter apart – a significant milestone in the global quest for practical quantum information processing.
    Teleportation may be nature’s most mysterious form of transport: Quantum information, such as the spin of a particle or the polarization of a photon, is transferred from one place to another, but without traveling through any physical medium. It has previously been achieved between photons over very large distances, between photons and ensembles of atoms, and between two nearby atoms through the intermediary action of a third. None of those, however, provides a feasible means of holding and managing quantum information over long distances.
    Now a team from the Joint Quantum Institute (JQI) at the University of Maryland (UMD) and the University of Michigan has succeeded in teleporting a quantum state directly from one atom to another over a substantial distance
    https://jqi.umd.edu/news/first-teleportation-between-distant-atoms
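
    The protocol these experiments implement can be simulated exactly with a three-qubit statevector: the unknown state, a shared entangled (Bell) pair, a Bell-basis measurement, and two classical bits of corrections. A self-contained sketch of the textbook protocol (not a model of the JQI experiment itself):

```python
import math
import random

def teleport(alpha: float, beta: float, seed: int = 0) -> list:
    """Teleport alpha|0> + beta|1> from qubit A to qubit C using a Bell pair
    shared between B and C plus two classical bits (textbook protocol).
    Amplitudes are indexed by bits (A, B, C) -> 4*A + 2*B + C."""
    rng = random.Random(seed)
    s = 1 / math.sqrt(2)
    state = [0j] * 8
    # A carries the message; B and C start in the Bell state (|00>+|11>)/sqrt(2).
    state[0b000] = alpha * s
    state[0b011] = alpha * s
    state[0b100] = beta * s
    state[0b111] = beta * s
    # CNOT with A as control, B as target.
    state = [state[i ^ 0b010] if i & 0b100 else state[i] for i in range(8)]
    # Hadamard on A.
    state = [s * (state[i & 0b011] + (-1) ** (i >> 2) * state[i | 0b100])
             for i in range(8)]
    # Measure A and B in the computational basis.
    probs = {(a, b): sum(abs(state[4 * a + 2 * b + c]) ** 2 for c in (0, 1))
             for a in (0, 1) for b in (0, 1)}
    r, acc = rng.random(), 0.0
    for (ma, mb), p in probs.items():
        acc += p
        if r < acc:
            break
    # Collapse: C's normalized post-measurement state.
    norm = math.sqrt(probs[(ma, mb)])
    c = [state[4 * ma + 2 * mb + k] / norm for k in (0, 1)]
    # Classical corrections on C: X if B read 1, then Z if A read 1.
    if mb:
        c = [c[1], c[0]]
    if ma:
        c = [c[0], -c[1]]
    return c  # recovers [alpha, beta]: the state moved, not the particle

print(teleport(0.6, 0.8))
```

    Whatever the (random) measurement outcome, the two classical bits tell C which correction restores the original amplitudes, which is why only *information*, never matter, travels.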

    Moreover, this quantum information and/or quantum entanglement is now found to be ubiquitous within life, i.e., it is found within every important biomolecule of life.

    As the following 2015 article, entitled “Quantum criticality in a wide range of important biomolecules”, stated, “Most of the molecules taking part actively in biochemical processes are tuned exactly to the transition point and are critical conductors,” and the researchers further commented that the possibility of “finding even one (biomolecule) that is in the quantum critical state by accident is mind-bogglingly small and, to all intents and purposes, impossible … of the order of 10^-50 of possible small biomolecules and even less for proteins.”

    Quantum criticality in a wide range of important biomolecules – Mar. 6, 2015
    Excerpt: “Most of the molecules taking part actively in biochemical processes are tuned exactly to the transition point and are critical conductors,” they say.
    That’s a discovery that is as important as it is unexpected. “These findings suggest an entirely new and universal mechanism of conductance in biology very different from the one used in electrical circuits.”
    The permutations of possible energy levels of biomolecules is huge, so the possibility of finding even one (biomolecule) that is in the quantum critical state by accident is mind-bogglingly small and, to all intents and purposes, impossible … of the order of 10^-50 of possible small biomolecules and even less for proteins.
    “what exactly is the advantage that criticality confers?”
    https://medium.com/the-physics-arxiv-blog/the-origin-of-life-and-the-hidden-role-of-quantum-criticality-ca4707924552

    It is also very interesting to note that the classical information of DNA is now found to be ‘embedded’ within quantum information.

    In the following video, at the 22:20 minute mark, Dr Rieper shows why the high temperatures of biological systems do not prevent DNA from having quantum entanglement and then at 24:00 minute mark Dr Rieper goes on to remark that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it.

    “What happens is this classical information (of DNA) is embedded, sandwiched, into the quantum information (of DNA). And most likely this classical information is never accessed because it is inside all the quantum information. You can only access the quantum information or the electron clouds and the protons. So mathematically you can describe that as a quantum/classical state.”
    Elisabeth Rieper – Classical and Quantum Information in DNA – video (Longitudinal Quantum Information resides along the entire length of DNA discussed at the 19:30 minute mark; at 24:00 minute mark Dr Rieper remarks that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it)
    https://youtu.be/2nqHOnVTxJE?t=1176

  7. bornagain77 says:

    What is so devastating to Darwinian presuppositions in the (empirical) finding of pervasive quantum coherence and/or quantum entanglement within molecular biology is that quantum coherence and/or entanglement is a non-local, beyond-space-and-time effect, which requires a beyond-space-and-time cause in order to explain its existence. As the following paper, entitled “Looking beyond space and time to cope with quantum theory”, stated, “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them.”

    Looking beyond space and time to cope with quantum theory – 29 October 2012
    Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,”
    http://www.quantumlah.org/high.....uences.php

    Darwinists, with their reductive materialistic framework, and especially with the falsification of ‘hidden variables’, simply have no beyond-space-and-time cause that they can appeal to in order to explain the non-local quantum coherence and/or entanglement that is now found to be ubiquitous within biology.

    Not So Real – Sheldon Lee Glashow – Oct. 2018
    Excerpt: In 1959, John Stewart Bell deduced his eponymous theorem: that no system of hidden variables can reproduce all of the consequences of quantum theory. In particular, he deduced an inequality pertinent to observations of an entangled system consisting of two separated particles. If experimental results contradicted Bell’s inequality, hidden-variable models could be ruled out. Experiments of this kind seemed difficult or impossible to carry out. But, in 1972, Alain Aspect succeeded. His results contradicted Bell’s inequality. The predictions of quantum mechanics were confirmed and the principle of local realism challenged. Ever more precise tests of Bell’s inequality and its extension by John Clauser et al. continue to be performed,14 including an experiment involving pairs of photons coming from different distant quasars. Although a few tiny loopholes may remain, all such tests to date have confirmed that quantum theory is incompatible with the existence of local hidden variables. Most physicists have accepted the failure of Einstein’s principle of local realism.
    https://inference-review.com/article/not-so-real

    The Universe Is Not Locally Real, and the Physics Nobel Prize Winners Proved It
    Elegant experiments with entangled light have laid bare a profound mystery at the heart of reality
    – Daniel Garisto – October 6, 2022
    Excerpt: One of the more unsettling discoveries in the past half century is that the universe is not locally real…. As Albert Einstein famously bemoaned to a friend, “Do you really believe the moon is not there when you are not looking at it?”
    This is, of course, deeply contrary to our everyday experiences. To paraphrase Douglas Adams, the demise of local realism has made a lot of people very angry and been widely regarded as a bad move.
    Blame for this achievement has now been laid squarely on the shoulders of three physicists: John Clauser, Alain Aspect and Anton Zeilinger. They equally split the 2022 Nobel Prize in Physics “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.” (“Bell inequalities” refers to the pioneering work of the Northern Irish physicist John Stewart Bell, who laid the foundations for this year’s Physics Nobel in the early 1960s.) Colleagues agreed that the trio had it coming, deserving this reckoning for overthrowing reality as we know it. “It is fantastic news. It was long overdue,” says Sandu Popescu, a quantum physicist at the University of Bristol. “Without any doubt, the prize is well-deserved.”,,,
    No one pounced to close these loopholes with more gusto than Anton Zeilinger, an ambitious, gregarious Austrian physicist. In 1998, he and his team improved on Aspect’s earlier work by conducting a Bell test over a then-unprecedented distance of nearly half a kilometer. The era of divining reality’s nonlocality from kayak-sized experiments had drawn to a close. Finally, in 2013, Zeilinger’s group took the next logical step, tackling multiple loopholes at the same time.,,,
    https://www.scientificamerican.com/article/the-universe-is-not-locally-real-and-the-physics-nobel-prize-winners-proved-it/

    “hidden variables don’t exist. If you have proved them come back with PROOF and a Nobel Prize.
    John Bell theorized that maybe the particles can signal faster than the speed of light. This is what he advocated in his interview in “The Ghost in the Atom.” But the violation of Leggett’s inequality in 2007 takes away that possibility and rules out all non-local hidden variables. Observation instantly defines what properties a particle has and if you assume they had properties before we measured them, then you need evidence, because right now there is none which is why realism is dead, and materialism dies with it.
    How does the particle know what we are going to pick so it can conform to that?”
    per Jimfit

    Whereas Darwinian materialists have no ‘beyond space and time’ cause that they can appeal to in order to explain quantum non-locality, the Christian Theist readily does have such a cause to explain the ‘non-locality’ of quantum entanglement, and/or quantum information, that is now found to be ubiquitous within life. Indeed, Christians have been postulating just such a ‘beyond space and time’ cause for a couple of thousand years now. As Colossians 1:17 states, “He is before all things, and in him all things hold together.”

    Colossians 1:17
    He is before all things, and in him all things hold together.

    It is also important to realize that quantum information, unlike classical information, is physically conserved. As the following article states, “In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed.”

    Quantum no-hiding theorem experimentally confirmed for first time – 2011
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment.
    http://www.physorg.com/news/20.....tally.html

    The implication of finding ‘non-local’ (beyond space and time) and ‘conserved’ (cannot be created nor destroyed) quantum information in molecular biology on such a massive scale, in every important biomolecule in our bodies, is fairly, and pleasantly, obvious.
    That pleasant implication, of course, is that we now have fairly strong empirical evidence indicating that we do indeed have a transcendent, metaphysical component to our being, a “soul”, that is, in principle, capable of living beyond the death of our material/temporal bodies.

    As Stuart Hameroff succinctly stated in the following article, “the quantum information … isn’t destroyed. It can’t be destroyed. … it’s possible that this quantum information can exist outside the body. Perhaps indefinitely as a soul.”

    Leading Scientists Say Consciousness Cannot Die It Goes Back To The Universe – Oct. 19, 2017 – Spiritual
    Excerpt: “Let’s say the heart stops beating. The blood stops flowing. The microtubules lose their quantum state. But the quantum information, which is in the microtubules, isn’t destroyed. It can’t be destroyed. It just distributes and dissipates to the universe at large. If a patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near death experience. I saw a white light. I saw a tunnel. I saw my dead relatives …” Now if they’re not revived and the patient dies, then it’s possible that this quantum information can exist outside the body. Perhaps indefinitely as a soul.”
    – Stuart Hameroff – Quantum Entangled Consciousness – Life After Death – video (5:00 minute mark) (of note, this video is no longer available for public viewing)
    https://radaronline.com/exclusives/2012/10/life-after-death-soul-science-morgan-freeman/

    Personally, I consider these recent findings from quantum biology to rival all other scientific discoveries over the past century, surpassing even the discovery of a beginning of the universe, via Big Bang cosmology, in terms of scientific, theological, and even personal significance.

    As Jesus once asked his disciples and a crowd of followers, “Is anything worth more than your soul?”

    Verse:

    Mark 8:37
    Is anything worth more than your soul?

    Thus, in conclusion, I hold that the inference to the Mind of God to explain the ‘top-down’ infusion of information necessary for life is now on strong empirical footing, and that it is therefore a more than valid inference for Christian Theists to make from the scientific evidence itself.

  8. 8
    bornagain77 says:

    Supplemental notes, quotes, and verses:

    Hey, Paul Davies — Your ID Is Showing – Robert F. Shedinger – March 6, 2020
    Excerpt: With a nod toward James Clerk Maxwell’s entropy-defying demon, Davies argues that the gulf between physics and biology is completely unbridgeable without some fundamentally new concept. Since living organisms consistently resist the ravages of entropy that all forms of inanimate matter are subject to, there must be some non-physical principle allowing living matter to consistently defy the Second Law of Thermodynamics. And for Davies there is; the demon in the machine turns out to be information.
    https://evolutionnews.org/2020/03/hey-paul-davies-your-id-is-showing/

    So just how much top-down ‘non-physical’ information is required to allow ‘simple’ life to resist ‘the ravages of entropy’?

    Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
    Excerpt: Linschitz gave the figure 9.3 x 10^-12 cal/deg or 9.3 x 10^-12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
    https://docs.google.com/document/d/18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE/

    Of note, 10^12 bits is equivalent to approx. 100 million pages of the Encyclopedia Britannica.
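    The quoted conversion can be checked directly. Plugging the Linschitz entropy figure into H = S/(k ln 2) reproduces the ~4 x 10^12 bit result (a minimal sketch; note that the entropy figure must be 9.3 x 10^-12 cal/deg, with a negative exponent, for the arithmetic to come out as quoted):

    ```python
    import math

    # Linschitz's entropy figure for a bacterial cell, converted from
    # cal/deg to J/K using the 4.2 J/cal factor from the excerpt.
    S = 9.3e-12 * 4.2            # entropy, J/K
    k = 1.380649e-23             # Boltzmann constant, J/K

    # Setlow-Pollard relation between entropy and information content:
    H = S / (k * math.log(2))    # bits

    print(f"H = {H:.2e} bits")   # on the order of 4 x 10^12, as quoted
    ```

    At the Encyclopedia Britannica equivalence quoted here (10^12 bits per 100 million pages, i.e. roughly 10^4 bits per page), this works out to a few hundred million pages.
    
    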

    “a one-celled bacterium, E. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.”
    – R. C. Wysong – The Creation-evolution Controversy

    “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    – Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    “The most fundamental definition of reality is not matter or energy, but information–and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena.”
    – Vlatko Vedral, Professor of Physics at the University of Oxford and at the Centre for Quantum Technologies (CQT), National University of Singapore, and a Fellow of Wolfson College; a recognized leader in the field of quantum mechanics.

    Why the Quantum? It from Bit? A Participatory Universe?
    Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: “In the beginning was the Word.”
    Anton Zeilinger – 2022 Nobel laureate in physics:
    http://www.metanexus.net/archi.....linger.pdf

    Prof Anton Zeilinger speaks on quantum physics at UCT – video
    48:24 mark: “It is operationally impossible to separate Reality and Information”
    49:45 mark: “In the Beginning was the Word” John 1:1
    http://www.youtube.com/watch?v=s3ZPWW5NOrw

    John 1:1-4
    In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind.

    Acts 3:15
    You killed the author of life, but God raised him from the dead. We are witnesses of this.

  9. 9
    Origenes says:

    S.C. Meyer:
    “To see the difference between order and complexity consider the difference between the following sequences:

    Na-Cl-Na-Cl-Na-Cl-Na-Cl

    AZFRT<MPGRTSHKLKYR

    The first sequence, describing the chemical structure of salt crystals, displays what information scientists call “redundancy” or simple “order.” That’s because the two constituents, Na and Cl (sodium and chloride), are highly ordered in the sense of being arranged in a simple, rigidly repetitive way. The sequence on the bottom, by contrast, exhibits complexity. In this randomly generated string of characters, there is no simple repetitive pattern. Whereas the sequence on the top could be generated by a simple rule or computer algorithm, such as “Every time Na arises, attach a Cl to it, and vice versa,” no rule shorter than the sequence itself could generate the second sequence.

    The information-rich sequences in DNA, RNA, and proteins, by contrast, are characterized not by either simple order or mere complexity, but instead by “specified complexity.” In such sequences, the irregular and unpredictable arrangement of the characters (or constituents) is critical to the function that the sequence performs. The three sequences below illustrate these distinctions:

    Na-Cl-Na-Cl-Na-Cl-Na-Cl (Order)

    AZFRT<MPGRTSHKLKYR (Complexity)

    Time and tide wait for no man (Specified complexity)

    What does all this have to do with self-organization? Simply this: the law-like, self-organizing processes that generate the kind of order present in a crystal or a vortex do not also generate complex sequences or structures; still less do they generate specified complexity, the kind of “order” present in a gene or functionally complex organ.

    Laws of nature by definition describe repetitive phenomena—order in that sense—that can be described with differential equations or universal “if-then” statements. Consider, for example, these informal expressions of the law of gravity: “All unsupported bodies fall” or “If an elevated body is left unsuspended, then it will fall.” These statements represent reasonably accurate law-like descriptions of natural gravitational phenomena precisely because we have repeated experience of unsupported bodies falling to the earth. In nature, repetition provides grist for lawful description.

    The information-bearing sequences in protein-coding DNA and RNA molecules do not exhibit such repetitive “order,” however. As such, these sequences can be neither described nor explained by reference to a natural law or law-like “self-organizational” process. The kind of non-repetitive “order” on display in DNA and RNA—a precise sequential “order” necessary to ensure function—is not the kind that laws of nature or law-like self-organizational processes can—in principle—generate or explain.”
    [‘Darwin’s Doubt’, ch. 15]
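    Meyer’s order/complexity distinction tracks algorithmic compressibility: a rigidly repetitive sequence can be regenerated from a short rule, while a random one cannot. A general-purpose compressor gives a rough proxy for this (a minimal sketch; zlib’s compression ratio only approximates algorithmic compressibility, and the specific strings are illustrative):

    ```python
    import random
    import zlib

    def ratio(data: bytes) -> float:
        """Compressed size over original size: low means a short rule suffices."""
        return len(zlib.compress(data, 9)) / len(data)

    ordered = b"NaCl" * 1024                  # rigid repetition, like a salt crystal
    random.seed(0)
    complex_ = random.randbytes(4096)         # no simple generating rule

    print(f"ordered: {ratio(ordered):.3f}")   # compresses drastically
    print(f"random:  {ratio(complex_):.3f}")  # barely compresses at all
    ```

    Specified sequences such as English text sit in between: too irregular to be produced by a short rule, yet constrained by function rather than by chance.
    
    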

  10. 10
    PyrrhoManiac1 says:

    Others, such as complex-systems researcher Stuart Kauffman, have attempted to generate complex patterns out of self-organizing or autocatalytic systems and then relate them to life. However, all such attempts require that the initial conditions or arrangement of molecules is precisely specified. In other words, specified structures cannot be generated unless information is provided.

    As a criticism of Kauffman, I think this isn’t quite right. His argument is that autocatalytic sets display a spontaneous emergence of specified functional complexity. What makes an autocatalytic set different from a living organism is that the set is just a self-perpetuating metabolic reaction, but it doesn’t do anything. In life, metabolic processes are put to work in generating (as well as maintaining) an organization distinct from that of the environment.

    The really important idea to come out of theoretical biology here is that of self-determining systems that exhibit what’s called “closure”. A system realizes closure if each constraint in the system constrains other constraints. A constraint is any cause that reduces the degrees of freedom in a system. To maintain closure of structure against entropy, a system needs to extract highly structured energy from the environment and dump less structured energy back into the environment. (Some articles: “Biological Organisation as Closure of Constraints”, “What Makes Biological Organisation Teleological?”, and “Organisational Closure in Biological Organisms”.)

    In other words, I think a complexity theorist would say that we need to think less about information as some definite entity that exists independently of energy, and more about structure as a feature of kinds of energy-and-matter configurations. Taking that approach allows us to conceptualize what kind of structure is distinctive of biological organisms: the conjunction of organisational closure and thermodynamic openness.

    I might also add that Kauffman thinks that the emergence of teleologically structured unified wholes took place prior to the emergence of the genetic code; see “Autogen is a Kantian Whole in the Non-Entailed World”.

  11. 11
    bornagain77 says:

    In regards to:

    Others, such as complex-systems researcher Stuart Kauffman, have attempted to generate complex patterns out of self-organizing or autocatalytic systems and then relate them to life. However, all such attempts require that the initial conditions or arrangement of molecules is precisely specified. In other words, specified structures cannot be generated unless information is provided.

    To try to get around that ‘little problem’, PM1 references Kauffman’s 1993 book, “The Origins of Order: Self-Organization and Selection in Evolution”, and states, “His (Kauffman’s) argument is that autocatalytic sets display a spontaneous emergence of specified functional complexity”.

    But, as is usual with Atheistic Naturalists, PM1 (and Kauffman), with their self-organization model, have completely left the field of empirical science and entered the fictional realm of ‘just-so’ storytelling. That is, they simply have no empirical evidence whatsoever for the “spontaneous emergence” of the functional information, i.e. “specified functional complexity”, necessary for the formation of autocatalytic sets.

    For instance, in regards to autocatalytic sets, wikipedia itself states that, “The first empirical support came from Lincoln and Joyce, who obtained autocatalytic sets in which “two [RNA] enzymes catalyze each other’s synthesis from a total of four component substrates.”
    https://en.wikipedia.org/wiki/Autocatalytic_set

    Yet, as Dr. Meyer pointed out, in Lincoln and Joyce’s work, “function arises after, not before, the information problem has been solved.”

    Biological Information: The Puzzle of Life that Darwinism Hasn’t Solved – Stephen C. Meyer
    Thus, as my book Signature in the Cell shows, (Lincoln and) Joyce’s experiments not only demonstrate that self-replication itself depends upon information-rich molecules, but they also confirm that intelligent design is the only known means by which information arises.
    http://www.evolutionnews.org//.....e_puz.html

    Stephen Meyer Responds to Fletcher in Times Literary Supplement – Jan. 2010
    Excerpt: everything we know about RNA catalysts, including those with partial self-copying capacity, shows that the function of these molecules depends upon the precise arrangement of their information-carrying constituents (i.e., their nucleotide bases). Functional RNA catalysts arise only once RNA bases are specifically-arranged into information-rich sequences—that is, function arises after, not before, the information problem has been solved.
    http://www.evolutionnews.org/2....._flet.html

    To call Tracey Lincoln and Gerald Joyce’s supposed empirical evidence for autocatalytic sets ‘overhyped’ would be an understatement. As Douglas Axe humorously noted, Lincoln and Joyce’s overhyped claims amount to the following: “advertising this as “self-replication” is a bit like advertising something as “free” when the actual deal is 1 free for every 1,600 purchased. It’s even worse, though, because you need lots of the pre-made precursors in cozy proximity to a finished RNA in order to kick the process off. That makes the real deal more like n free for every 1,600 n purchased, with the caveats that n must be a very large number and that full payment must be made in advance.”

    Excerpt: Consider the recent Science paper by Tracey Lincoln and Gerald Joyce. [2] The paper describes RNA chains about 70 nucleotides long that produce copies of themselves when placed in the right kind of mixture. The authors use the term “cross-replication” to describe this because they found that it works best with two distinct RNA chains, each of which catalyzes formation of the other one from supplied precursors. But since either of these RNAs could potentially kick the process off (by forming the other), much of the commentary on this widely publicized study refers to it as an example of self-replication.
    The study itself is a helpful contribution to our understanding of catalytic RNA, but the hype accompanying its publicity is much less helpful. For example, under the heading “A never-ending dance of RNA”, Erika Check Hayden writes that:
    “Joyce’s group had already made [RNA] enzymes capable of catalyzing their own replication, but they could only reproduce themselves a limited number of times. The new enzymes can reproduce themselves indefinitely. “This is the first time outside of biology where you have immortalized molecular information.” [3]
    Stirring language indeed, but is it justified? Technically speaking, of course, we could apply the language of immortality to our tongue-in-cheek Jeep example. The sounds of “replication”— the thump of bumper contact followed by the chirp of tread meeting pavement—could keep on going indefinitely. The only limitation is the supply of precursors. Right?
    Well… yes but therein lies a formidable problem. To fully appreciate it, we need to recall the gold standard of self-replication—life. Oak trees make more oak trees out of air, sunlight, water, and minerals. No one knows exactly how they do it, but the amazing and undeniable fact is that they, like all life, assemble things of stunning complexity from things of sheer simplicity. The complexity of the finished products is itself remarkable, but when we consider replication specifically it is this contrast in complexity that is most striking. Life consistently delivers more than it demands—far more.
    The RNA demonstration, like the Jeep one, falls well short of this. Both show how a spontaneous process can produce a finished product, but they only do so by relying on precursors that are every bit as unlikely as the products themselves. In other words, what is being presented as a step toward solving the origins problem is really just a displacement of that problem. The humble truth is that the catalytic RNAs simply join two pre-made halves together by making a single new chemical bond. [2] What’s more, the molecular structure for accomplishing this joining is built into the precursors in such a way that 1) wrong ends cannot be joined, and 2) the energy for the correct joining is pre-supplied.
    How reasonable is it to call something so carefully set up “self-replication”?…
    To get an idea of how little was actually being accomplished (comparatively speaking) by the RNAs themselves, we should see how the total number of chemical bonds in the complete RNAs compares to the number made (one) during “self-replication”. Ignoring hydrogen atoms, which don’t join atoms up into large molecules, each complete RNA molecule had over 1,600 specific chemical bonds. Except for the final one, all of these bonds were pre-made in the process of making the precursors.
    So, advertising this as “self-replication” is a bit like advertising something as “free” when the actual deal is 1 free for every 1,600 purchased. It’s even worse, though, because you need lots of the pre-made precursors in cozy proximity to a finished RNA in order to kick the process off. That makes the real deal more like n free for every 1,600 n purchased, with the caveats that n must be a very large number and that full payment must be made in advance.
    https://www.biologicinstitute.org/post/19309047110/biologic-institute-announces-first

    In short, PM1 has no empirical evidence whatsoever for his claim of the “spontaneous emergence of specified functional complexity”; i.e., PM1 is, once again, found to be in the realm of fictional ‘just-so’ storytelling.

    Of related note:

    Nick Lane Takes on the Origin of Life and DNA – Jonathan McLatchie – July 2010
    Excerpt: As Stephen Meyer has comprehensively documented in his book, Signature in the Cell, the RNA-world hypothesis is fraught with problems, quite apart from those pertaining to the origin of information. For example, the formation of the first RNA molecule would have required the prior emergence of smaller constituent molecules, including ribose sugar, phosphate molecules, and the four RNA nucleotide bases. However, it turns out that both synthesizing and maintaining these essential RNA building blocks — especially ribose — and the nucleotide bases is a very difficult task under origin-of-life conditions.
    http://www.evolutionnews.org/2.....36101.html

  12. 12
    Belfast says:

    Pm1@10
    You have mentioned Kauffman and autocatalysis, and I think you are referring to his book. Are you also adverting to the paper he did with researcher Ms Joana Xavier?
    https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2021.0244

  13. 13
    PyrrhoManiac1 says:

    @12

    You have mentioned Kauffman and autocatalysis, and I think you are referring to his book. Are you also adverting to the paper he did with researcher Ms Joana Xavier?

    I was not aware of this paper! Very interesting — thanks!

    (BTW, Xavier has a PhD, so I think she’s “Dr. Xavier” — or “Professor Xavier”, since she teaches at University College London, but then we’d confuse her with Patrick Stewart.)

  14. 14
    bornagain77 says:

    And that paper helps your lack of empirical evidence how exactly?

    From the paper: “Several origins of life theories postulate autocatalytic chemical networks preceding the primordial genetic code,”

    Small problem with their belief in a ‘primordial genetic code’. The ‘near-optimal’ genetic code we now find in life …

    Get Out of Jail Free: Playing Games in an RNA World – September 23, 2013
    Excerpt: “The genetic code, the mapping of nucleic acid codons to amino acids via a set of tRNA and aminoacylation machinery, is near-universal and near-immutable. In addition, the code is also near-optimal in terms of error minimization,”
    http://www.evolutionnews.org/2.....77021.html

    The Optimal Design of the Genetic Code – Fazale Rana – October 3, 2018
    Excerpt: It could be argued that the genetic code’s error-minimization properties are more dramatic than these (one in a million) results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code’s capacity occurred outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
    https://reasons.org/explore/blogs/the-cells-design/the-optimal-design-of-the-genetic-code

    … The ‘near-optimal’ genetic code we now find in life, and codes in general, are “near-immutable”, i.e. ‘non-evolvable’. The reason why the ‘near-optimal’ DNA code, and codes in general, are considered “near-immutable” is fairly easy to understand, and is given by none other than Richard Dawkins himself:

    Venter vs. Dawkins on the Tree of Life – and Another Dawkins Whopper – March 2011
    Excerpt: … But first, let’s look at the reason Dawkins gives for why the code must be universal:
    “The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation…this would spell disaster.”
    (Dawkins – 2009, p. 409-10 – The Greatest Show On Earth)
    OK. Keep Dawkins’ claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 19 variants of the genetic code).
    Simple counting question: does “one or two” equal 19? That’s the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two.
    http://www.evolutionnews.org/2.....44681.html

    And the fact that genetic codes are now found to overlap each other makes the ‘near-immutable’, i.e. non-evolvable, problem exponentially worse for Darwinists:

    Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation – George Montañez, Robert J. Marks II, Jorge Fernandez and John C. Sanford – published online May 2013
    Excerpt: In the last decade, we have discovered still another aspect of the multi- dimensional genome. We now know that DNA sequences are typically “ poly-functional” [38]. Trifanov previously had described at least 12 genetic codes that any given nucleotide can contribute to [39,40], and showed that a given base-pair can contribute to multiple overlapping codes simultaneously. The first evidence of overlapping protein-coding sequences in viruses caused quite a stir, but since then it has become recognized as typical. According to Kapronov et al., “it is not unusual that a single base-pair can be part of an intricate network of multiple isoforms of overlapping sense and antisense transcripts, the majority of which are unannotated” [41]. The ENCODE project [42] has confirmed that this phenomenon is ubiquitous in higher genomes, wherein a given DNA sequence routinely encodes multiple overlapping messages, meaning that a single nucleotide can contribute to two or more genetic codes. Most recently, Itzkovitz et al. analyzed protein coding regions of 700 species, and showed that virtually all forms of life have extensive overlapping information in their genomes [43].,,,
    Conclusions: Our analysis confirms mathematically what would seem intuitively obvious – multiple overlapping codes within the genome must radically change our expectations regarding the rate of beneficial mutations. As the number of overlapping codes increases, the rate of potential beneficial mutation decreases exponentially, quickly approaching zero. Therefore the new evidence for ubiquitous overlapping codes in higher genomes strongly indicates that beneficial mutations should be extremely rare. This evidence combined with increasing evidence that biological systems are highly optimized, and evidence that only relatively high-impact beneficial mutations can be effectively amplified by natural selection, lead us to conclude that mutations which are both selectable and unambiguously beneficial must be vanishingly rare. This conclusion raises serious questions. How might such vanishingly rare beneficial mutations ever be sufficient for genome building? How might genetic degeneration ever be averted, given the continuous accumulation of low impact deleterious mutations?
    http://www.worldscientific.com.....08728_0006
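    The paper’s “decreases exponentially” conclusion is, at bottom, the arithmetic of joint probabilities: a mutation must be beneficial in one code and at least tolerated by every additional overlapping code it touches. A toy calculation makes the shape of the decline visible (the probabilities here are invented for illustration and are not taken from the paper):

    ```python
    p_beneficial = 1e-4   # assumed: chance a mutation is beneficial in one code
    p_tolerated = 0.3     # assumed: chance an additional overlapping code tolerates it

    for k in range(1, 7):                            # k = number of overlapping codes
        rate = p_beneficial * p_tolerated ** (k - 1) # compound probability
        print(f"{k} codes: {rate:.2e}")
    ```

    Each additional overlapping code multiplies in another factor below 1, so the compound rate falls geometrically with k.
    
    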

    The cited paper goes on: “yet demonstration with biochemical systems is lacking. Here, small-molecule reflexively autocatalytic food-generated networks (RAFs) ranging in size from 3 to 619 reactions were found in all of 6683 prokaryotic metabolic networks searched.”

    So let’s get this straight, they searched 6683 prokaryotic metabolic networks for evidence that autocatalytic chemical networks might have preceded life???

    Ever heard the term ‘assuming your conclusion’?

    In short, per the cited 2022 paper, Darwinists still have no evidence, as pointed out in post 11, that autocatalytic sets display a “spontaneous emergence of specified functional complexity.”

    In fact, there is a 10 million dollar prize being offered for the first person who can empirically demonstrate the origin of a ‘primordial’ genetic code by unguided processes,,, whether it be by autocatalytic sets, or otherwise,

    Artificial Intelligence + Origin of Life Prize, $10 Million USD
    Excerpt: What You Must Do to Win The Prize
    You must arrange for a digital communication system to emerge or self-evolve without “cheating.” The diagram below describes the system. Without explicitly designing the system, your experiment must generate an encoder that sends digital code to a decoder. Your system needs to transmit at least five bits of information. (In other words it has to be able to represent 32 states. The genetic code supports 64.)
    You have to be able to draw an encoding and decoding table and determine whether or not the data has been transmitted successfully.
    So, for example, an RNA based origin of life experiment will be considered successful if it contains an encoder, message and decoder as described above. To our knowledge, this has never been done.
    https://www.herox.com/evolution2.0
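For concreteness, here is a minimal sketch (my own illustration, not part of the prize materials) of what the required encoding/decoding table amounts to when a human simply writes it down. The prize, of course, requires such a system to emerge without being explicitly designed — this snippet is precisely the "cheating" version, shown only to make the success criterion concrete:

```python
# Illustrative only: a hand-designed 5-bit digital communication system.
# The prize demands that an equivalent encoder/decoder pair EMERGE without
# explicit design; writing the tables directly, as here, is the disallowed case.

N_STATES = 32  # 5 bits, the prize's minimum (the genetic code supports 64)

# Encoding table: state i -> 5-bit string; decoding table is its inverse.
encode = {i: format(i, "05b") for i in range(N_STATES)}
decode = {bits: i for i, bits in encode.items()}

# "Transmission" succeeds if every state round-trips through the channel.
assert all(decode[encode[i]] == i for i in range(N_STATES))
```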

    Verse:

    John 1:1-4
    In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind.

  15.
    martin_r says:

    A few days ago, I suggested that Darwinists still believe in spontaneous generation of life (as in the 19th century)

    https://uncommondescent.com/intelligent-design/every-cell-comes-from-a-preexistent-cell/#comments

    Seversky reacted as follows:

    Which is not what OOL research assumes. It does not propose that modern organisms spring fully-formed into existence from inanimate precursors.

    It is, however, what ID/creationists believe their designer/creator does.

    Although they have no idea how and apparently don’t care.

    In this case, Seversky, look here,

    look at what 31-year-old MIT physicist Jeremy England thinks …

    (This can’t be true … There is something very very wrong with Darwinists …)

    You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

    Is this normal ???

    Seversky, if this is not spontaneous generation, then what is it? :)))))))

    https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/

  16.
    Caspian says:

    Re: PM1 @ 10,
    From a review of Kauffman’s book:
    “the idea at the heart of the book is truly important: even in vastly complicated interactive networks, a few simple rules can easily–if amazingly–lead to order and self-organised patterns and processes. This represents a major advance in understanding how the living world works.” –Robert M. May, The Observer

    Please understand that if “a few simple rules” “lead to order”, then it is a type of order that is prevalent in nature – namely crystallization built from a repeated sub-pattern – and such order is entirely irrelevant to the complex, specified arrangements found in the biomolecules of the cell. Consider the difference, if you will, between the crystalline order of Lot’s wife once she became a pillar of salt, and the living being she was before this destructive event.
    Self-organized patterns, such as hexagonal-shaped convection cells in a system experiencing a throughput of thermal energy, are low-information patterns and have no relation to the complex, specified, interdependent arrangements characteristic of the bio-molecules of the cell.
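The low-information character of repetitive order can be made concrete with a compressibility check — compressed size is only a crude proxy for descriptive complexity, offered here purely as an intuition pump:

```python
# Rough illustration of the order-vs-information distinction: a crystal-like
# repeating pattern is highly compressible (it reduces to "pattern + repeat
# count"), while an aperiodic sequence admits no such short description.
import random
import zlib

crystal = "NaCl" * 250                # repeating "lattice", 1000 characters
random.seed(0)                        # fixed seed for reproducibility
aperiodic = "".join(random.choice("ACGT") for _ in range(1000))

print(len(zlib.compress(crystal.encode())))    # small: pattern repeats
print(len(zlib.compress(aperiodic.encode())))  # much larger: no short recipe
```

Note that compressibility alone does not capture *specification* (a random string is also incompressible); the snippet only illustrates why repetitive order carries little information.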

    It is commendable that you seek to exhaust possible naturalistic avenues from non-life to life before admitting divine intervention. But it seems that what we already know about the possible workings of nature preclude any conceivable natural mechanism from achieving the task of abiogenesis.

  17.
    Belfast says:

    Auto-catalysis, when applied to the origin of life, is little more than the self-organization dream with a wig.
    Evolutionary origin theory maintains that the earliest known forms of life are too extraordinarily complex to have materialized suddenly unless there exist principles, or laws, of atomic and molecular self-organization which propel matter onward and upward to a living cell. Those principles, to be successful, must be combined with reaction-enhancing catalysts; otherwise the life of the universe is not long enough.
    The difficulty is that pre-biotic catalysts are rare and inadequate, hence the auto-catalysis hypothesis. But auto-catalysis, where a product of catalysis can act as a catalyzing agent itself, hits the barrier that actual reproduction is still inconceivable. So autocatalysis is given a new theoretical power – a primitive form of natural selection!!! One would think imagination could go no further.
    One would be wrong. Harvard, on its current origin-of-life page, claims that it is “wondrous” that, “On a large scale, self-organizing behavior’s powerful effects are seen when small gusts of wind join together to form a tornado that can wreak havoc on infrastructure and natural resources in its path.
    And on a much smaller scale, this same principle is seen when two strands of DNA zip up to form the double helix that encodes our genome. Or, when cells self assemble into embryonic tissues that further develop into fully formed humans and animals.”
    But what is even more “wondrous” is that the promoters of this deceptive fiddle, Harvard, thought they could get away with crediting self-organization as just another form of self-assembly.
    Self-assembling in a cell has nothing in common with self-organizing into simple patterns.
    Harvard might just as well have written,
    “And on a much smaller scale, this same principle is seen when cans of Coke roll down a street, propelled by strong winds, hit rocks and are propelled by the wind up onto the footpath, thence through an open door of a shop where erratic winds blowing through gather them into a pyramid.”

  18.
    PyrrhoManiac1 says:

    @16

    Please understand that if “a few simple rules” “lead to order”, then it is a type of order that is prevalent in nature – namely crystallization built of a repeated sub-pattern, and such order is entirely irrelevant to the complex, specified arrangements found in the biomolecules of the cell.

    The comparison between crystals and cells misses the point of what Kauffman and other complex-systems theorists are talking about. Crystals are not dissipative structures. They don’t tend to maintain themselves far from equilibrium with respect to their environments.

    Self-organized patterns, such as hexagonal-shaped convection cells in a system experiencing a throughput of thermal energy, are low-information patterns and have no relation to the complex, specified, interdependent arrangements characteristic of the bio-molecules of the cell.

    Bénard cells (unlike crystals) are dissipative structures, but they aren’t autocatalytic reactions. For an example of the latter, consider the Belousov–Zhabotinsky reaction (video).
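As a minimal illustration of what makes a reaction autocatalytic (my own sketch, not a model of the Belousov–Zhabotinsky reaction itself): in a reaction of the form A + X → 2X, the product X catalyzes its own formation, so its concentration grows until the "food" A is consumed:

```python
# Minimal autocatalysis sketch: forward-Euler integration of the rate law
# d[X]/dt = k*[A]*[X] for the reaction A + X -> 2X. Because the product X
# appears in its own rate law, growth is self-accelerating while A lasts.

def simulate(a0=1.0, x0=1e-3, k=5.0, dt=1e-3, steps=2000):
    a, x = a0, x0
    for _ in range(steps):
        rate = k * a * x      # product concentration drives its own production
        a -= rate * dt        # food is consumed...
        x += rate * dt        # ...and converted into more catalyst
    return a, x

a, x = simulate()
print(a, x)  # most of A has been converted into X
```

The sigmoidal (logistic) growth this produces is the kinetic signature of autocatalysis; it says nothing, of course, about the specified arrangements under dispute in this thread.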

    My position is not, of course, that cellular metabolism is exactly like autocatalytic reactions: there are real and profound differences. The most salient difference, I think, is that metabolism maintains organizational closure.

    This concept, taken from theoretical biologists Maël Montévil, Matteo Mossio, and Alvaro Moreno, is the idea that living things have a specific kind of organizational structure: one in which each constraint constrains all other constraints. A constraint is a causal influence that reduces the degrees of freedom of a system. So what makes something alive is that it is structured in ways that tend to maintain the system in a precarious relationship with its environment. It is this constantly maintained, continually renewed precariousness that distinguishes living systems from other complex dynamical systems. (The biophilosopher Hans Jonas calls this “the needful freedom” of the organism: to be an organism is to be at once distinct from the environment and dependent upon it.)

    So putting these ideas together, we get the following question: what is needed to get from autocatalytic networks to organizationally closed (but thermodynamically open) systems? Do we have good reasons for positing some intervening intelligent agent, possibly but not necessarily supernatural, in order to explain this transition?

    My view is that if an autocatalytic network were to become contained within a semi-permeable membrane, with the construction and maintenance of that membrane itself a product of autocatalytic reactions, one would have the necessary conditions for an organizationally closed (but thermodynamically open) system. This puts me in the metabolism-first camp of abiogenesis: I think we need to get metabolism off the ground first, then a distinct sub-group of chemicals develops for stabilizing and controlling metabolic reactions. And that’s all that “genetic information” is, because of what genes can’t do.

  19.
    PyrrhoManiac1 says:

    I started reading Deacon’s very intriguing “How Molecules Became Signs.”

    This should be of some interest to ID folks because Deacon tackles head-on the question that ID people claim that naturalists always avoid: the origins of genetic information itself. He takes up the question as “what is necessary for a system that can take a molecule (e.g. a nucleotide sequence) as a sign (e.g. ‘build this protein’)?” Deacon uses semiotics — the theory of signs developed by the American polymath Charles S. Peirce — to really think very carefully about what we’re talking about when we talk about “information”.

    Abstract:

    To explore how molecules became signs I will ask: “What sort of process is necessary and sufficient to treat a molecule as a sign?” This requires focusing on the interpreting system and its interpretive competence. To avoid assuming any properties that need to be explained I develop what I consider to be a simplest possible molecular model system which only assumes known physics and chemistry but nevertheless exemplifies the interpretive properties of interest. Three progressively more complex variants of this model of interpretive competence are developed that roughly parallel an icon-index-symbol hierarchic scaffolding logic. The implication of this analysis is a reversal of the current dogma of molecular and evolutionary biology which treats molecules like DNA and RNA as the original sources of biological information. Instead I argue that the structural characteristics of these molecules have provided semiotic affordances that the interpretive dynamics of viruses and cells have taken advantage of. These molecules are not the source of biological information but are instead semiotic artifacts onto which dynamical functional constraints have been progressively offloaded during the course of evolution.

    When I suggest that it’s theories of self-organizing systems, not evolutionary theory, that pose the real alternative to intelligent design, this is what I have in mind.

  20.
    jerry says:

    Here’s the conclusion of Deacon’s analysis

    The sequence of hypothetical molecular models discussed here falls well short of explaining the origins of the “genetic code.” Indeed, it posits an evolutionary sequence that assumes that protein-like molecules are present long before nucleic acids (possibly arising from the prebiotic formation of hydrogen cyanide polymers; see Das et al. (2019) for a current review). This inverts the currently popular view that replicating molecules intrinsically constitute biological information. This popular assumption has implicitly reduced the concept of information to pattern replication without reference. As a result it begs the question of the origin of functional significance.

    The logic of the autogenic approach, though not able to directly account for the evolution of the DNA-to-amino acid “code,” provides something more basic. It provides a “proof of principle” of a sort, showing step-by-chemically-realistic-step how a molecule like RNA or DNA could acquire the property of recording and instructing the dynamical molecular relationships that constitute and maintain the molecular system of which it is a part. In short, it explains how a molecule can become about other molecules. Importantly, this analysis inverts the logic that treats RNA and DNA replication as intrinsically informational and instead shows how the information-bearing function of nucleic acids is due to their ability to embody constraints inherited from the codependent dynamics of an open molecular system able to repair itself. This may point the way to an alternative strategy for exploring the origin of the genetic code. Rather than thinking of the problem from an information molecule first perspective (how nucleic acid structure came to inform protein dynamics), it might be instructive to ask the question the other way around (how protein dynamics came to be reflected in nucleic acid structure). In other words, it might make sense to invert the order of Crick’s central dogma when considering the evolution of the genetic code.

    Is this a real phenomenon or just someone’s speculation?

  21.
    relatd says:

    Jerry at 20,

    More nonsense.

    ‘It provides a “proof of principle” of a sort, showing step-by-chemically-realistic-step how a molecule like RNA or DNA could acquire the property of recording and instructing the dynamical molecular relationships that constitute and maintain the molecular system of which it is a part. In short, it explains how a molecule can become about other molecules.’

    This explanation is not an explanation. If it was, a scientist could replicate it.

  22.
    asauber says:

    “Is this a real phenomenon or just someone’s speculation?”

    I believe the term is Mumbo-Jumbo.

    Andrew

  23.
    PyrrhoManiac1 says:

    @20

    Is this a real phenomenon or just someone’s speculation?

    I think Deacon is quite clear about this: it is a proof of concept. It shows that there is a step-by-step, chemically realistic pathway for the emergence of genetic information by naturalistic means.

    If Deacon is right, it would mean that KF is wrong to insist on a sharp distinction between “order” and “organization” (as he does here).

    If Deacon is right, it means that Josh Anderson is wrong to say that “a system of established correlations between stuff out here and information instantiated in a domain of symbols” cannot come into existence through some “intelligence-free material processes” (see here).

    It would mean that Caspian is wrong to say “the information content of the simplest self-replicating machine . . . cannot be explained by any natural process.”

    I am underscoring the cannot here: on Caspian’s view, and this seems to be the ID ‘party line’, it is not possible for material processes to give rise to the kind of complex functionally specified information that we observe in life. That is precisely what Deacon is doing here: showing exactly how, in a chemically realistic step-by-step process, material processes can give rise to complex functionally specified information.

    Does he show that this is how life actually came to exist, billions of years ago? No.

    Does he provide a detailed recipe that some chemist could follow in a lab and produce life? No.

    But he does show that the basic premise of ID is wrong, because we can produce a chemically realistic step-by-step process whereby natural processes give rise to complex functional information, and that is exactly what ID insists is simply not possible.

    So in order for ID to be correct, one would need to show where Deacon has made a mistake in his reasoning.

  24.
    relatd says:

    “But he does show that the basic premise of ID is wrong, because we can produce a chemically realistic step-by-step process whereby natural processes give rise to complex functional information, and that is exactly what ID insists is simply not possible.”

    Baloney.

  25.
    JVL says:

    Relatd: Baloney.

    Why is it baloney? What part of Deacon’s argument is incorrect?

  26.
    relatd says:

    This explanation is not an explanation. If it was, a scientist could replicate it.
