[Image: Aleksandr Oparin and John von Neumann. Courtesy of Russavia, Beelaj and Wikipedia.]
In two separate comments (see here and here) on a recent post of mine, Intelligent Design critic Dave Mullenix posed a question to ID supporters, which often comes up on this blog:
[W]hy do you ID people insist that the first living thing was complex? 500 to 1000 bits of information? Try 50 to 100. Think of a single polymer whose only capability is reproducing itself, and which is possibly imbedded in the kind of droplets that form naturally…
A simple self replicating molecule isn’t much compared to modern life, but if it self-replicates and allows evolution, it’s all the start we need and a small polymer would do it. Don’t worry about proteins, they come later. Don’t worry about metabolism – that’s also for advanced life. For first life, reproduction with the possibility of Darwinian evolution is all we need and a short polymer will do the trick.
Dave Mullenix confesses to not yet having read Dr. Stephen Meyer’s Signature in the Cell, although he has purchased a Kindle version of the book. I realize that he is a very busy man, and I also realize that other Intelligent Design critics have voiced similar objections previously, so I’ve written this post in order to explain why the scenario Dave Mullenix proposes will not work.
What motivates the quest for a 50-bit life form?
Dave Mullenix is surely well aware of the research of Dr. Douglas Axe, which has shown that the vast majority of 150-amino-acid sequences are non-functional, and that the likelihood of a single protein – that is, any working protein, never mind which one – arising by pure chance on the early earth is astronomically low. Nor can necessity account for the origin of DNA, RNA or proteins. All of these molecules are made up of biological building blocks – nucleotides in the case of DNA and RNA, and amino acids in the case of proteins. Just as the properties of stone building blocks do not determine their arrangements in buildings, so too, the properties of biological building blocks do not determine their arrangements in DNA, RNA and proteins.
If neither chance nor necessity can account for the appearance of fully functional RNA, DNA and proteins, then evolutionists have no choice but to assume that these molecules arose from something even simpler, which was capable of evolving into these molecules. This is the logic which underlies Dave Mullenix’s proposal regarding the origin of life.
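Before turning to why this will not work, it helps to put rough numbers on the two proposals. The short calculation below is only a sketch: it assumes each nucleotide carries the maximum 2 bits of information (all four bases equally likely) and each amino acid about 4.3 bits, and it simply counts the size of sequence space for a protein of the length Axe studied, without attempting to estimate what fraction of that space is functional.

```python
import math

# Information per monomer, assuming every position is drawn uniformly
# from the full alphabet (4 nucleotides, 20 amino acids).
BITS_PER_NUCLEOTIDE = math.log2(4)    # = 2 bits
BITS_PER_AMINO_ACID = math.log2(20)   # about 4.32 bits

def monomers_needed(bits, bits_per_monomer):
    """Length of polymer needed to carry a given amount of information."""
    return bits / bits_per_monomer

# Dave Mullenix's proposed first replicator: 50 to 100 bits.
print(monomers_needed(50, BITS_PER_NUCLEOTIDE))    # 25.0  (a 25-nucleotide polymer)
print(monomers_needed(100, BITS_PER_NUCLEOTIDE))   # 50.0

# The range ID proponents usually have in mind: 500 to 1000 bits.
print(monomers_needed(500, BITS_PER_NUCLEOTIDE))   # 250.0
print(monomers_needed(1000, BITS_PER_NUCLEOTIDE))  # 500.0

# Sequence space for a 150-amino-acid protein, the length Axe studied:
# 20^150 possible sequences, i.e. roughly 10^195.
log10_space = 150 * math.log10(20)
print(f"20^150 is about 10^{log10_space:.0f}")     # 10^195
```

In other words, the proposal amounts to saying that a polymer of roughly 25 to 50 nucleotides could get Darwinian evolution started, and the question is whether such a molecule could replicate accurately enough to do so.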
Why a 50-bit life form wouldn’t work
Actually, a similar proposal was made by origin-of-life researcher Aleksandr Oparin in the late 1960s. In his original model, put forward in the 1920s and 1930s, Oparin had assumed that chance alone could account for the origin of the proteins which make cellular metabolism possible. However, the discovery of the extreme complexity and specificity of protein molecules, coupled with the inability of his model to explain the origin of the information in DNA, forced him to revise his original proposal for the chemical evolution of life on earth. Dr. Stephen Meyer continues the story in Signature in the Cell (HarperOne, New York, 2009), pages 273-277:
As the complexity of DNA and proteins became apparent, Oparin published a revised version of his theory in 1968 that envisioned a role for natural selection earlier in the process of abiogenesis. The new version of his theory claimed that natural selection acted on unspecified polymers as they formed and changed within his coacervate protocells.[5] Instead of natural selection acting on fully functional proteins in order to maximize the effectiveness of primitive metabolic processes at work within the protocells, Oparin proposed that natural selection might work on less than fully functional polypeptides, which would naturally cause them to increase their specificity and function, eventually making metabolism possible. He envisioned natural selection acting on “primitive proteins” rather than on primitive metabolic processes in which fully functional proteins had already arisen….
[Oparin] proposed that natural selection initially would act on unspecified strings of nucleotides and amino acids. But this created another problem for his scenario. Researchers pointed out that any system of molecules for copying information would be subject to a phenomenon known as “error catastrophe” unless these molecules are specified enough to ensure an error-free transmission of information. An error catastrophe occurs when small errors – deviations from functionally necessary sequences – are amplified in successive replications.[14] Since the evidence of molecular biology shows that unspecified polypeptides will not replicate genetic information accurately, Oparin’s proposed system of initially unspecified polymers would have been highly vulnerable to such an error catastrophe.
Thus, the need to explain the origin of specified information created an intractable dilemma for Oparin. If, on the one hand, Oparin invoked natural selection early in the process of chemical evolution (i.e. before functional specificity in amino acids or nucleotides had arisen), accurate replication would have been impossible. But in the absence of such replication, differential reproduction cannot proceed and the concept of natural selection is incoherent.
On the [other] hand, if Oparin introduced natural selection late in his scenario, he would need to rely on chance alone to produce the sequence-specific molecules necessary for accurate self-replication. But even by the late 1960s, many scientists regarded that as implausible given the complexity and specificity of the molecules in question…
The work of John von Neumann, one of the leading mathematicians of the twentieth century, made this dilemma more acute. In 1966, von Neumann showed that any system capable of self-replication would require sub-systems that were functionally equivalent to the information storage, replicating and processing systems found in extant cells.[16] His calculations established an extremely high threshold of minimal biological function, a conclusion that was confirmed by later experimental work.[17] On the basis of the minimal complexity and related considerations, several scientists during the late 1960s (von Neumann, physicist Eugene Wigner, biophysicist Harold Morowitz) made calculations showing that random fluctuations of molecules were extremely unlikely to produce the minimal complexity required for a primitive replication system.[18]…
As a result, by the late 1960s, many scientists had come to regard the hypothesis of prebiotic natural selection as indistinguishable from the pure chance hypothesis, since random molecular interactions were still needed to generate the initial complement of biological information that would make natural selection possible. Prebiotic natural selection could add nothing to the process of information generation until after vast amounts of functionally specified information had first arisen by chance.
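To see concretely what the “error catastrophe” in the passage above amounts to, here is a toy simulation. It is only a sketch, not Joyce and Orgel’s analysis or Eigen’s full quasispecies treatment: it assumes a single “master” sequence that replicates ten times faster than every competitor, a fixed per-base copying error rate, and nothing else. Even with that generous selective advantage, the master sequence persists only while the error rate is low enough; raise it modestly and the sequence information is lost from the population.

```python
import random

ALPHABET = "ACGU"

def copy_with_errors(seq, error_rate):
    """Copy a sequence, replacing each base with a random different base
    with probability error_rate (a crude model of inaccurate replication)."""
    return "".join(
        random.choice([b for b in ALPHABET if b != base])
        if random.random() < error_rate else base
        for base in seq
    )

def master_fraction(length, error_rate, advantage=10.0,
                    pop_size=300, generations=100, seed=1):
    """Toy quasispecies model: one 'master' sequence replicates `advantage`
    times faster than any other sequence. Returns the fraction of the final
    population that still carries the master sequence exactly."""
    random.seed(seed)
    master = "".join(random.choice(ALPHABET) for _ in range(length))
    population = [master] * pop_size
    for _ in range(generations):
        weights = [advantage if s == master else 1.0 for s in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        population = [copy_with_errors(p, error_rate) for p in parents]
    return sum(s == master for s in population) / pop_size

LENGTH = 50  # a short polymer, of the kind Mullenix envisages

# In the simplest treatment the master survives only while
# error_rate * LENGTH stays below ln(advantage), about 2.3 here.
print(master_fraction(LENGTH, error_rate=0.01))  # roughly 0.5: master maintained
print(master_fraction(LENGTH, error_rate=0.08))  # essentially 0.0: error catastrophe
```

The point of the sketch is simply that accurate replication has to come first: a replicator that copies itself sloppily does not get to keep whatever information it starts with, let alone accumulate more.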
References
[5] Oparin, A. Genesis and Evolutionary Development of Life. New York: Academic, 1968, pp. 146-147.
[14] Joyce, Gerald F., and Leslie Orgel. “Prospects for Understanding the Origin of the RNA World.” In The RNA World, edited by Raymond F. Gesteland and John F. Atkins, 1-25. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 1993. See especially pp. 8-13.
[16] Von Neumann, John. The Theory of Self-Replicating Automata. Completed and edited by A. Burks. Urbana: University of Illinois Press, 1966.
[17] Pennisi, Elizabeth. “Seeking Life’s Bare (Genetic) Necessities.” Science 272 (1996): 1098-99.
Mushegian, Arcady, and Eugene Koonin. “A Minimal Gene Set for Cellular Life Derived by Comparison of Complete Bacterial Genomes.” Proceedings of the National Academy of Sciences USA 93 (1996): 10268-10273.
[18] Wigner, Eugene. “The Probability of the Existence of a Self-Reproducing Unit.” In The Logic of Personal Knowledge: Essays Presented to Michael Polanyi, edited by Edward Shils, pp. 231-235. London: Routledge and Kegan Paul, 1961. [But see here for a critique by physicist John C. Baez. – VJT]
Morowitz, Harold J. “The Minimum Size of the Cell.” In Energy Flow in Biology: Biological Organization as a Problem in Thermal Physics, pp. 10-11. New York: Academic, 1968.
(Emphases mine – VJT.)
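Before concluding, it is worth putting the “minimal complexity” point in rough quantitative terms. The figures below are round, back-of-the-envelope assumptions, not measurements: the Mushegian and Koonin paper cited in note [17] put the minimal gene set at roughly 250 genes, and I have assumed an average gene length of about 1,000 base pairs and the maximum 2 bits per base.

```python
# Rough comparison of the proposed 50-bit first life with the minimal gene
# set of Mushegian and Koonin (note [17]). Gene count, gene length and
# bits-per-base are round figures assumed for illustration only.
MINIMAL_GENES = 250      # order of magnitude of the Mushegian-Koonin estimate
BASES_PER_GENE = 1000    # assumed average gene length in base pairs
BITS_PER_BASE = 2        # maximum information per base (4 possible bases)

minimal_genome_bits = MINIMAL_GENES * BASES_PER_GENE * BITS_PER_BASE
print(minimal_genome_bits)        # 500000 bits
print(minimal_genome_bits // 50)  # 10000: the factor separating it from 50 bits
```

On those admittedly rough figures, the simplest known form of cellular life carries on the order of half a million bits, some four orders of magnitude more than the 50 to 100 bits being proposed for the first replicator.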
In conclusion: there are good reasons for thinking that a 50-bit life-form would never work. Since it would not be capable of accurate self-replication, it would be unable to evolve into larger molecules such as RNA, DNA and proteins. Intelligent Design critics who attempt to overcome the astronomical odds against these molecules forming naturally by hypothesizing a simpler, 50-bit life-form that generated them are, like the man of La Mancha, dreaming the impossible dream.
Let me finish my essay by quoting the beautiful lyrics of the song The Impossible Dream, with music by Mitch Leigh and lyrics by Joe Darion, written for the 1965 musical Man of La Mancha:
To dream the impossible dream
To fight the unbeatable foe
To bear with unbearable sorrow
To run where the brave dare not go
To right the unrightable wrong
To love pure and chaste from afar
To try when your arms are too weary
To reach the unreachable star
This is my quest, to follow that star
No matter how hopeless, no matter how far
To fight for the right, without question or pause
To be willing to march into Hell, for a Heavenly cause
And I know if I’ll only be true, to this glorious quest,
That my heart will lie peaceful and calm, when I’m laid to my rest
And the world will be better for this:
That one man, scorned and covered with scars,
Still strove, with his last ounce of courage,
To reach the unreachable star.