Categories: Animal minds, Artificial Intelligence, Intelligent Design

The amoeba that might be smarter than your computer

Amoebae move and feed by using pseudopods, which are bulges of cytoplasm formed by the coordinated action of actin microfilaments pushing out the plasma membrane that surrounds the cell.

Hype aside, the microbe’s math skills ace the Travelling Salesman problem and may help with cybersecurity

Each channel represents a city on the “salesman’s route,” with lights switching off to signal how far away each next “city” is.

This might seem like a roundabout way of calculating the solution to the traveling salesman problem, but the advantage is that the amoeba doesn’t have to calculate every individual path like most computer algorithms do. Instead, the amoeba just reacts passively to the conditions and figures out the best possible arrangement by itself. What this means is that for the amoeba, adding more cities doesn’t increase the amount of time it takes to solve the problem. Avery Thompson, “A Single Cell Hints at a Solution to the Biggest Problem in Computer Science” at Popular Mechanics
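The “calculate every individual path” approach the excerpt contrasts the amoeba with can be made concrete. Here is a minimal brute-force sketch (the four cities and their coordinates are invented for illustration): with n cities there are (n−1)! tour orderings to check, which is why exact search scales so badly as cities are added.

```python
from itertools import permutations
from math import dist

# Hypothetical city coordinates, for illustration only
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    pts = [cities[c] for c in order]
    return sum(dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def brute_force_tsp(names):
    """Check every permutation; the work grows as (n-1)! with city count n."""
    start, *rest = names
    best = min(permutations(rest), key=lambda p: tour_length((start,) + p))
    return (start,) + best, tour_length((start,) + best)

order, length = brute_force_tsp(list(cities))
print(order, round(length, 2))
```

Even at four cities this enumerates every ordering; at forty cities the count of orderings exceeds the number of atoms in the Earth, which is the contrast being drawn with the amoeba’s passive, parallel response.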

The thinking is that if we could figure out just how the brainless amoebas do these calculations, we might be able to use that information to speed up solutions to problems, including cybersecurity problems, that are difficult because they demand a lot of computational power, power that the amoebas do not seem to require.

Apart from solving computer science problems, the amoebas’ skills raise some interesting questions. Though we don’t know how, we know that some type of intelligence can exist without a brain, as the slime mold has demonstrated in other experiments, reminiscent of the Travelling Salesman problem, as well… Denyse O’Leary, “Is an amoeba smarter than your computer?” at Mind Matters

Nature is full of intelligence, but mostly it does not resemble human intelligence. In some ways it works better than computer intelligence.

See also: Can plants be as smart as animals?

3 Replies to “The amoeba that might be smarter than your computer”

  1.
    bornagain77 says:

    A few notes:

    Learning from Bacteria about Social Networking (Information Processing) – video
    Excerpt: I (Dr. Ben-Jacob) will show illuminating movies of swarming intelligence of live bacteria in which they solve optimization problems for collective decision making that are beyond what we, human beings, can solve with our most powerful computers.

    The Multi-dimensional Genome–impossible for Darwinism to account for– by Dr Robert Carter – video
    (15:52 minute mark: Comparing the Computer Operating Systems of Linux to the much more sophisticated operating systems of Regulatory Networks in E. coli)

    Comparing genomes to computer operating systems – Van – May 2010
    Excerpt: we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology,,,

    regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) – Picture of comparison

    Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information – David L. Abel and Jack T. Trevors – Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
    “No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?”

    “Human DNA is like a computer program but far, far more advanced than any software we’ve ever created.”
    – Bill Gates, The Road Ahead, 1996, p. 188

    Programming of Life – video

    Scientists Have Stored a Movie, a Computer OS, and an Amazon Gift Card in a Single Speck of DNA
    “The highest-density data-storage device ever created.”
    – PETER DOCKRILL – 7 MAR 2017
    Excerpt: In turn, Erlich and fellow researcher Dina Zielinski from the New York Genome Centre now say their own coding strategy is 100 times more efficient than the 2012 standard, and capable of recording 215 petabytes of data on a single gram of DNA.
    For context, just 1 petabyte is equivalent to 13.3 years’ worth of high-definition video, so if you feel like glancing disdainfully at the external hard drive on your computer desk right now, we won’t judge.

    Earth’s Biosphere Is Awash in Information – June 29, 2015
    Excerpt: In this remarkable paper, Landenmark, Forgan, and Cockell of the United Kingdom Centre for Astrobiology at the University of Edinburgh attempt “An Estimate of the Total DNA of the Biosphere.” The results are staggering:
    “Modern whole-organism genome analysis, in combination with biomass estimates, allows us to estimate a lower bound on the total information content in the biosphere: 5.3 × 10^31 (±3.6 × 10^31) megabases (Mb) of DNA. Given conservative estimates regarding DNA transcription rates, this information content suggests biosphere processing speeds exceeding yottaNOPS values (10^24 Nucleotide Operations Per Second).,,,”
    ,,,let’s ponder the scale of this information content and processing speed. A yottaNOPS is a lotta ops! Each prefix multiplies the prior one by a thousand: kilo, mega, giga, tera, peta, exa, zetta, yotta. A “yottabase” doesn’t even come close to the raw information content of DNA they estimate: 10^31 megabases. That’s the same as 10^37 bases, but a yottabase is only 10^24 bases (a trillion trillion bases). This means that the information content of the biosphere is 50 x 10^13 yottabases (500 trillion yottabases). They estimate that living computers perform a yottaNOPS, or 10^24 nucleotide operations per second, on this information.
    You can pick yourself off the floor now.,,,
    “Storing the total amount of information encoded in DNA in the biosphere, 5.3 × 10^31 megabases (Mb), would require approximately 10^21 supercomputers with the average storage capacity of the world’s four most powerful supercomputers.”
    How much land surface would be required for 10^21 supercomputers (a “zetta-computer”)? The Titan supercomputer takes up 404 m2 of space. If we assume just 100 m2 for each supercomputer, we would still need 10^23 square meters to hold them all. Universe Today estimates the total surface of Earth (including the oceans) at 510 million km2, which equates to 5.1 x 10^14 m2. That’s 9 orders of magnitude short of the zetta-computer footprint, meaning we would need a billion Earths to have enough space for all the computers needed to match the equivalent computing power life performs on DNA!
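    The unit conversions in the excerpt above are easy to check; note that 5.3 × 10^37 bases works out to about 5.3 × 10^13 yottabases (roughly 50 trillion, rather than the 500 trillion stated), and the supercomputer footprint shortfall to a bit over eight orders of magnitude. A quick sanity check, using only figures from the excerpt:

```python
from math import log10

megabase = 1e6          # bases per megabase
yottabase = 1e24        # bases per yottabase

total_bases = 5.3e31 * megabase          # 5.3 x 10^31 Mb of biosphere DNA
in_yottabases = total_bases / yottabase  # ~5.3e13, i.e. about 50 trillion

# Land-area check: 10^21 supercomputers at ~100 m^2 each vs Earth's surface
computers_area = 1e21 * 100              # m^2
earth_surface = 5.1e14                   # m^2 (~510 million km^2)
shortfall = computers_area / earth_surface

print(f"{in_yottabases:.1e} yottabases")
print(f"area shortfall factor: {shortfall:.1e} (~{log10(shortfall):.1f} orders of magnitude)")
```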

  2.
    bornagain77 says:

    Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation – George Montañez, Robert J. Marks II, Jorge Fernandez and John C. Sanford – May 2013

    Time to Redefine the Concept of a Gene? – Sept. 10, 2012
    Excerpt: As detailed in my second post on alternative splicing, there is one human gene that codes for 576 different proteins, and there is one fruit fly gene that codes for 38,016 different proteins!
    While the fact that a single gene can code for so many proteins is truly astounding, we didn’t really know how prevalent alternative splicing is. Are there only a few genes that participate in it, or do most genes engage in it? The ENCODE data presented in reference 2 indicates that at least 75% of all genes participate in alternative splicing. They also indicate that the number of different proteins each gene makes varies significantly, with most genes producing somewhere between 2 and 25.

    Second, third, fourth… genetic codes – One spectacular case of code crowding – Edward N. Trifonov – video
    In the preceding video, Trifonov elucidates codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states, “Reading only one message, one gets three more, practically GRATIS!”. And please note that this was just an introductory lecture in which Trifonov just covered the very basics and left many of the other codes out of the lecture. Codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 codes that are listed on a powerpoint, although the writing was too small for me to read.
    Concluding powerpoint of the lecture (at the 1 hour mark):
    “Not only are there many different codes in the sequences, but they overlap, so that the same letters in a sequence may take part simultaneously in several different messages.”
    Edward N. Trifonov – 2010

    ‘It’s becoming extremely problematic to explain how the genome could arise and how these multiple levels of overlapping information could arise, since our best computer programmers can’t even conceive of overlapping codes. The genome dwarfs all of the computer information technology that man has developed. So I think that it is very problematic to imagine how you can achieve that through random changes in the code.,,, and there is no Junk DNA in these codes. More and more the genome looks likes a super-super set of programs.,, More and more it looks like top down design and not just bottom up chance discovery of making complex systems.’ –
    Dr. John Sanford – Inventor of the ‘Gene Gun’ – 31 second mark – video

    Life Leads the Way to Invention – Feb. 2010
    Excerpt: a cell is 10,000 times more energy-efficient than a transistor. “In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one millionth millionth of a watt) of power.” This and other amazing facts lead to an obvious conclusion: inventors ought to look to life for ideas.,,, Essentially, cells may be viewed as circuits that use molecules, ions, proteins and DNA instead of electrons and transistors. That analogy suggests that it should be possible to build electronic chips – what Sarpeshkar calls “cellular chemical computers” – that mimic chemical reactions very efficiently and on a very fast timescale.

    The astonishing efficiency of life – November 17, 2017 by Jenna Marshall
    Excerpt: All life on earth performs computations – and all computations require energy. From single-celled amoeba to multicellular organisms like humans, one of the most basic biological computations common across life is translation: processing information from a genome and writing that into proteins.
    Translation, it turns out, is highly efficient.
    In a new paper published in the journal Philosophical Transactions of the Royal Society A, SFI researchers explore the thermodynamic efficiency of translation.,,,
    To discover just how efficient translation is, the researchers started with Landauer’s Bound. This is a principle of thermodynamics establishing the minimum amount of energy that any physical process needs to perform a computation.
    “What we found is that biological translation is roughly 20 times less efficient than the absolute lower physical bound,” says lead author Christopher Kempes, an SFI Omidyar Fellow. “And that’s about 100,000 times more efficient than a computer.”

    Also of interest is that the integrated coding between the DNA, RNA and proteins of the cell apparently seems to be ingeniously programmed along the very stringent guidelines for ‘reversible computation’, laid out in keeping with Landauer’s principle by Charles Bennett (of IBM, and of quantum teleportation fame), in order to achieve the amazing energy/metabolic efficiency that it does.
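    Landauer’s bound, referenced above, is easy to state numerically: erasing one bit at temperature T must dissipate at least kT ln 2 of heat. A quick calculation (T = 310 K, roughly physiological temperature, is my assumption; the 20x and 100,000x factors are the SFI figures quoted above):

```python
from math import log

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # assumed temperature (~physiological), K

# Minimum heat dissipated to erase one bit of information
landauer_bound = k_B * T * log(2)
print(f"Landauer bound at {T} K: {landauer_bound:.2e} J per bit")

# Per the quoted SFI result: translation runs ~20x above this bound,
# while a conventional computer sits ~100,000x above translation.
translation_cost = 20 * landauer_bound
computer_cost = 100_000 * translation_cost
print(f"approx. translation cost per bit: {translation_cost:.2e} J")
print(f"approx. computer cost per bit:    {computer_cost:.2e} J")
```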

    Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon – Charles H. Bennett – September 2003
    Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,,

    Logically and Physically Reversible Natural Computing: A Tutorial – 2013
    Excerpt: This year marks the 40th anniversary of Charles Bennett’s seminal paper on reversible computing. Bennett’s contribution is remembered as one of the first to demonstrate how any deterministic computation can be simulated by a logically reversible Turing machine. Perhaps less remembered is that the same paper suggests the use of nucleic acids to realise physical reversibility. In context, Bennett’s foresight predates Leonard Adleman’s famous experiments to solve instances of the Hamiltonian path problem using strands of DNA — a landmark date for the field of natural computing — by more than twenty years.

    Logical Reversibility of Computation* – C. H. Bennett – 1973
    Excerpt from last paragraph: The biosynthesis and biodegradation of messenger RNA may be viewed as convenient examples of logically reversible and irreversible computation, respectively. Messenger RNA, a linear polymeric informational macromolecule like DNA, carries the genetic information from one or more genes of a DNA molecule, and serves to direct the synthesis of the proteins encoded by those genes. Messenger RNA is synthesized by the enzyme RNA polymerase in the presence of a double-stranded DNA molecule and a supply of RNA monomers (the four nucleotide pyrophosphates ATP, GTP, CTP, and UTP) [7]. The enzyme attaches to a specific site on the DNA molecule and moves along, sequentially incorporating the RNA monomers into a single-stranded RNA molecule whose nucleotide sequence exactly matches that of the DNA. The pyrophosphate groups are released into the surrounding solution as free pyrophosphate molecules. The enzyme may thus be compared to a simple tape-copying Turing machine that manufactures its output tape rather than merely writing on it. Tape copying is a logically reversible operation, and RNA polymerase is both thermodynamically and logically reversible.,,,

  3.
    bornagain77 says:

    The amazing energy efficiency possible with ‘reversible computation’ has been known since Charles Bennett laid out the principles of reversible programming in 1973. But as far as I know, due to the extreme level of complexity involved in achieving such ingenious ‘reversible coding’, it has yet to be accomplished in any meaningful way in our computer programs, even to this day:

    Reversible computing
    Excerpt: Reversible computing is a model of computing where the computational process to some extent is reversible, i.e., time-invertible.,,, Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, allowing us to someday build computers that generate much less than 1 bit’s worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally.

    Can reversible computing really dissipate absolutely zero energy?
    Of course not. Any non-equilibrium physical system (whether a computer or a rock) dissipates energy at some rate,,,
    Okay, then can reversible computing really make the energy dissipation of a computation be an arbitrarily small non-zero amount?
    Only insofar as the computer can be arbitrarily well isolated from unwanted interactions, errors, and energy leakage,,,
    But, despite all these caveats, it may yet be possible to set up reversible computations that dissipate such amazingly tiny amounts of energy that the dissipation is not a barrier to anything that we might wish to do with them – I call such computations ballistic. We are a long way from achieving ballistic computation, but we do not yet know of any fundamental reasons that forbid it from ever being technically possible.
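    Logical reversibility, the property Bennett’s 1973 paper builds on, just means the inputs can always be recovered from the outputs, so no information is destroyed. A minimal sketch using the Toffoli (controlled-controlled-NOT) gate, which is both reversible and universal for Boolean logic:

```python
def toffoli(a, b, c):
    """Reversible CCNOT gate: flips bit c only when both controls a, b are 1."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice returns the original inputs, so
# no bit is ever erased (and, by Landauer's principle, no kT ln 2 of heat
# per bit need be dissipated).
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# Universality: with c fixed to 0, the third output is a AND b, so any
# ordinary Boolean circuit can be rebuilt from reversible gates.
print(toffoli(1, 1, 0))
```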

    Further note: not only do single cells outperform our best computers at solving the traveling salesman problem; single proteins, in the process of folding, also outperform our best computers at what is mathematically the same kind of problem.

    DNA computer helps traveling salesman – Philip Ball – 2000
    Excerpt: Just about the meanest problems you can set a computer belong to the class called ‘NP-complete’. The number of possible answers to these conundrums, and so the time required to find the correct solution, increases exponentially as the problem is scaled up in size. A famous example is the ‘travelling salesman’ puzzle, which involves finding the shortest route connecting all of a certain number of cities.,,,
    Solving the traveling-salesman problem is a little like finding the most stable folded shape of a protein’s chain-like molecular structure — in which the number of ‘cities’ can run to hundreds or even thousands.

    NP-hard problem –
    Excerpt: Another example of an NP-hard problem is the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph. This is commonly known as the traveling salesman problem.
    – per wikipedia

    Protein folding is found to be a ‘NP-hard problem’,

    Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009
    Excerpt: Protein Folding: Computational Complexity
    NP-completeness: from 10^300 to 2 Amino Acid Types
    NP-completeness: Protein Folding in Ad-Hoc Models
    NP-completeness: Protein Folding in the HP-Model

    Confronting Science’s Logical Limits – John L. Casti – 1996
    Excerpt: It has been estimated that a supercomputer applying plausible rules for protein folding would need 10^127 years to find the final folded form for even a very short sequence consisting of just 100 amino acids. (The universe is 13.7 x 10^9 years old). In fact, in 1993 Aviezri S. Fraenkel of the University of Pennsylvania showed that the mathematical formulation of the protein-folding problem is computationally “hard” in the same way that the traveling-salesman problem is hard.

    The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications – Paul Nelson – October 23, 2012
    Excerpt: Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe.

    Physicists Discover Quantum Law of Protein Folding – February 22, 2011
    Quantum mechanics finally explains why protein folding depends on temperature in such a strange way.
    Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from.
    To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,,
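    The search-time arithmetic in the excerpts above is easy to verify: 10^100 candidate shapes sampled at 100 billion per second still dwarfs the age of the universe by dozens of orders of magnitude.

```python
configurations = 1e100        # candidate shapes for a ~100-residue protein
rate = 100e9                  # trials per second (100 billion)
seconds_per_year = 3.15e7     # ~seconds in a year

years_needed = configurations / rate / seconds_per_year
age_of_universe = 13.8e9      # years

print(f"exhaustive search: ~{years_needed:.1e} years")
print(f"that is ~{years_needed / age_of_universe:.1e} universe lifetimes")
```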

    Moreover, a promising avenue in quantum computation is that researchers are looking to quantum biology to overcome extreme difficulties with ‘noise’ that have stymied engineers in their quest to build stable quantum computers that can be of practical, even commercial, application.

    Quantum entanglement in hot systems – 2011
    Excerpt: The authors remark that this reverses the previous orthodoxy, which held that quantum effects could not exist in biological systems because of the amount of noise in these systems.,,, Environmental noise here drives a persistent and cyclic generation of new entanglement.,,, In summary, the authors say that they have demonstrated that entanglement can recur even in a hot noisy environment. In biological systems this can be related to changes in the conformation of macromolecules.
    per quantum mind

    Quantum life: The weirdness inside us – 03 October 2011 by Michael Brooks
    Excerpt: “It sounds harsh but we haven’t learned a thing apart from the obvious.” A better understanding of what is going on might also help us on the way to building a quantum computer that exploits coherent states to do myriad calculations at once. Efforts to do so have so far been stymied by our inability to maintain the required coherence for long – even at temperatures close to absolute zero and in isolated experimental set-ups where disturbances from the outside world are minimised.

    How quantum entanglement in DNA synchronizes double-strand breakage by type II restriction endonucleases – 2016
    Implications concluding paragraph: The discovery of quantum states in protein-DNA complexes would thus allude to the tantalizing possibility that these systems might be candidates for quantum computation. The evidence is mounting for the implementation of such technology.
    Biology is characterized by macroscopic open systems in non-equilibrium conditions. Macroscopic organization of microscopic components (e.g., molecules, ions, electrons) that exhibit quantum behavior is rarely straightforward. Knowing the microscopic details of the constituent interactions and their mechanistic laws is not sufficient. Rather, as this work has shown, molecular systems must be contextualized in their local biological environments to discern appreciable quantum effects.
