Hungarian mathematician John von Neumann (1903 – 1957) was one of the most powerful scientific minds of the 20th century. His work spans functional and numerical analysis, quantum mechanics, set theory, game theory, and many other fields of pure and applied mathematics. He was also a pioneer of computer science: the first real computers were developed according to a basic model that bears his name (the “von Neumann architecture”).

In the late 1940s von Neumann studied the problem of self-reproducing automata from a theoretical point of view (see “Theory of Self-Reproducing Automata”, 1966, University of Illinois Press, Urbana). He was the first to provide an algorithmic model of a self-reproducing automaton. Roughly speaking, he proved that to achieve the goal it is necessary to solve four problems: (1) to store instructions in a memory; (2) to duplicate these instructions; (3) to implement an automatic factory (a sort of “universal constructor”) able to read the instructions in memory and, based on them, construct the components of the system; (4) to manage all these functions by means of a central control unit. A self-reproducing system must contain the program of its own construction. This program is a sort of consistent and complete abstract image of the system. It is no accident that the inventor of the theory of cellular automata is also one of the fathers of computer science. In short, what von Neumann proved is that self-reproduction requires programming and processors (i.e. software and hardware). The following simplified symbolic picture can give an idea of von Neumann’s solution:
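As a toy illustration only (not von Neumann’s actual cellular-automaton construction), the four functions can be sketched in a few lines of Python. All names here are invented for the example: a blueprint held in memory, a method that duplicates it, a constructor that turns each symbol into a “part”, and a control routine that orchestrates reproduction.

```python
# Toy sketch of von Neumann's four functions (illustrative names only).
class Replicator:
    def __init__(self, blueprint):
        # (1) Memory: the stored instructions describing the system.
        self.blueprint = list(blueprint)
        # Build the physical body from the blueprint at creation time.
        self.parts = [self.construct(symbol) for symbol in self.blueprint]

    def duplicate_blueprint(self):
        # (2) Duplicator: copy the instructions for the offspring.
        return list(self.blueprint)

    def construct(self, symbol):
        # (3) Universal constructor: map a symbol to a physical component.
        return f"part<{symbol}>"

    def reproduce(self):
        # (4) Control unit: coordinate duplication and construction.
        return Replicator(self.duplicate_blueprint())

parent = Replicator(["A", "B", "C"])
child = parent.reproduce()
assert child.blueprint == parent.blueprint
assert child.parts == parent.parts
```

The essential point the sketch captures is that the blueprint is copied as data, never re-derived from the parts: the offspring inherits both a body and the program for building the next body.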

Notice that the above picture is not very different from the “von Neumann architecture” of a computer (which, instead of the Universal Constructor, has the Arithmetic Logic Unit). We arrive at the conclusion that a necessary, but not sufficient, condition for a self-reproducing automaton is that it be a computer.

Notice that the necessity of software holds for all information-based replicators, be they artificial or natural. The remarkable thing is that von Neumann was able to conceive a successful mathematical model of a self-replicator without knowing in detail how biological cells reproduce (for the simple reason that in his time such knowledge was not yet available). He understood that cells necessarily had to implement similar techniques years before biologists discovered those mechanisms. His theory was proved true a few years later with the discovery of the structure of DNA (James Watson and Francis Crick, 1953), and still later with the discovery of the highly complex molecular machines that carry out cellular information processing. Michael Denton, in his “Evolution: A Theory in Crisis” (ch. 11), wrote:

“As von Neumann pointed out, the construction of any sort of self-replicating automaton would necessitate the solution to three fundamental problems [here Denton doesn’t consider the Control Unit] […] The solution to all three problems is found in living things and their elucidation has been one of the triumphs of modern biology. So efficient is the mechanism of information storage and so elegant the mechanism of duplication of this remarkable molecule that it is hard to escape the feeling that the DNA molecule may be the one and only perfect solution to the twin problems of information storage and duplication for self-replicating automata. The solution to the problem of the automatic factory lies in the ribosome.”

As is well known, in the replication of DNA the key role is played by the group of enzymes called DNA and RNA polymerases. It remains to identify what in the cell plays the role of the Control Unit. Since it is the most complex function (it is, in effect, the finite state machine governing all the other devices), its detailed explanation likely involves many correlated molecular machines.

Of course the above list of four functions is simplified and reduced to a minimum. Nevertheless, without all of these functions no self-reproducing automaton can work. In the terms of Intelligent Design theory, this means that the set {1, 2, 3, 4} is irreducibly complex (IC) at the functional level. Von Neumann’s self-replicating architecture is science at its best and represents an intelligent design. What was his scientific forecast about biological cells if not an ID prediction? Given that this ID prediction was proved true in the lab some time later, on that occasion ID was indeed science, and the opposite of a science stopper (the usual accusations against ID being: “ID is not science”, “ID is a science stopper” …).

Despite what some believe, self-replication is one of the most difficult technological problems. Von Neumann understood that any information-based replicator must contain within itself (among other indispensable things) a symbolic representation of itself. It is worth considering what this implies. The concept of “symbolic representation” implies the realization of a precise mapping between two realities: the physical reality (the physical body of the replicator, the hardware) and an abstract reality (a structure composed of symbols or signs, the software). In mathematical terms, this mapping is a function whose domain is a structured set of physical objects and whose codomain is a structured set of abstract objects.
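A minimal sketch of such a mapping, with invented names for illustration: a set of physical parts (the domain), an assignment of one symbol to each part (the codomain), and the two properties that make the mapping usable, namely that it is well defined on every part and invertible so that the constructor can recover the part from the symbol.

```python
# Toy mapping between physical objects and abstract symbols
# (the part names and symbols are invented for illustration).
physical_parts = ["rotor", "stator", "shaft"]          # the domain
symbols = {"rotor": "R", "stator": "S", "shaft": "H"}  # part -> symbol

# Well defined: every physical part has exactly one symbol.
assert all(part in symbols for part in physical_parts)

# Invertible: the constructor must be able to go back
# from symbol to part, so no two parts share a symbol.
decode = {sym: part for part, sym in symbols.items()}
assert all(decode[symbols[p]] == p for p in physical_parts)
```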

The realization of such a mapping is not something that randomness and physico-chemical laws can achieve. What is only physical cannot work out the non-physical, the abstract. Only a mind can conceive and manage abstract realities, because the mind can be considered an abstract processor of concepts and ideas. Obviously, a mapping between the physical and the abstract can be designed in many ways. In fact, while the physical reality is a given of the problem, the symbolic structure that must match it has to be invented. Here only the creativity of intelligence can succeed.

But these observations give only a partial idea of the complexity of the self-replication problem. What has to be emphasized is the symbolic aspect of the representation. After all, a picture too can contain an internal self-representation (imagine placing two mirrors in front of each other: each will contain a scaled self-image). But simple 2D mirrored images are not symbolic, and would be of no value for the job of replicating, based on instructions, a 3D molecular system from raw materials. What helps in understanding the difference is to note that the sequence of signs or symbols stored in a replicator are properly instructions, that is, directives that must be interpreted by the replicator’s machinery in order to construct a copy of itself from a repository of materials (which enter the replicator through what in the above schema is called the “input” device). Instructions imply a code that must be shared between the memory, the processor and the constructor (in the above picture, the bidirectional arrows connecting them represent this as well).
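The genetic code is exactly such a shared code: the ribosome interprets three-letter codons stored in the memory molecule as amino acids to be assembled. The following sketch uses a few real entries from the standard codon table (AUG → methionine, UUU → phenylalanine, GGU → glycine, UAA → stop); the full table has 64 entries.

```python
# A small fragment of the standard genetic code: codons (symbols held
# in memory) mapped to amino acids (parts the constructor assembles).
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the start signal
    "UUU": "Phe",   # phenylalanine
    "GGU": "Gly",   # glycine
    "UAA": "STOP",  # stop codon: halt construction
}

def translate(mrna):
    """Read an mRNA string three letters at a time, as the ribosome does."""
    protein = []
    for i in range(0, len(mrna), 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGUUAA"))   # ['Met', 'Phe', 'Gly']
```

The point of the sketch is that the memory contents are meaningless without the table: the same physical sequence of bases counts as an instruction only for machinery that shares the code.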

Moreover, there is the concept of an “autotrophic” replicator. An autotrophic replicator is not a replicator that needs an external provider of basic parts; rather, it can self-reproduce by finding the necessary materials by itself in the wild. Biological cells are even autotrophic replicators.

With the concept of software and its execution by a processor, we enter through the main door of computer science. Life necessarily implies information processing in both its aspects, software and hardware. Chance and necessity can create neither the software nor the hardware able to run it. As a consequence, this astonishing meeting of computer science and biology, one of the greatest discoveries of the 20th century, is something that Darwin could not even imagine and that, together with other evidence, finally debunks his theory and proves ID true.