Newton’s theory of gravity, Wegener’s theory of continental drift and Darwin’s theory of evolution all have one thing in common: each has been ridiculed as impossible at one time or another, because it lacked a plausible mechanism. So which theory is different from the rest? I shall argue that Darwin’s theory is unique, in that it has won widespread acceptance despite the existence of weighty scientific arguments showing that its mechanism is incapable of accounting for the phenomena it purports to explain. However, if Darwin had formulated his theory of evolution in the same way that Newton formulated his theory of gravity, Darwin’s theory would have been invulnerable to these scientific difficulties. It is also a curious fact that although Darwin’s original theory has undergone radical transformation, like those of Newton and Wegener, many scientists and philosophers are proud to call themselves “Darwinists,” whereas no modern scientist would refer to him/herself as a “Newtonian” or “Wegenerian.”
In this short essay, I also address the fierce resistance in scientific circles to teaching Intelligent Design in high school classrooms, and I propose a legal strategy for neutralizing the Darwinian crusade.
Newton’s theory of gravity
In ancient times, gravity was viewed not as an external force acting on bodies, but as an inherent tendency of heavy bodies to fall towards the center of the universe. Experimental evidence in the sixteenth and seventeenth centuries overturned the traditional view. The first modern attempt to formulate a systematic theory of gravity was that of Isaac Newton (1642-1727).
From the very beginning, Isaac Newton realized that his theory of gravity, while empirically valid, lacked a plausible physical mechanism, as it seemed to require action at a distance. In his fourth letter to Richard Bentley (dated February 25, 1692/3), he wrote: “It is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter, without mutual contact, as it must do if gravitation in the sense of Epicurus be essential and inherent in it. And this is one reason why I desired you would not ascribe ‘innate gravity’ to me. That gravity should be innate, inherent, and essential to matter, so that one body may act upon another at a distance, through a vacuum, without the mediation of anything else, by and through which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into it. Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial, I have left to the consideration of my readers.” (Emphasis mine – VJT.)
To make matters worse, Newton’s theory assumed that gravity propagated instantaneously across space: each particle responds instantaneously to every other particle, regardless of the distance between them. Small wonder then, that to some of his contemporaries, Newton’s theory made gravity sound like a spooky, occult force. Indeed, according to two of his friends, Nicolas Fatio de Duillier and David Gregory, Newton himself seems to have thought at one time that gravitation was based directly on the will of God (Van Lunteren, F. (2002), “Nicolas Fatio de Duillier on the mechanical cause of Gravitation”, in Edwards, M.R., Pushing Gravity: New Perspectives on Le Sage’s Theory of Gravitation, Montreal: C. Roy Keys Inc., pp. 41–59); while on two other occasions (in 1675 and 1717), Newton tried to explain the action of gravity in terms of basic mechanical processes propagating through the aether, thereby avoiding the need for action at a distance.
Since he was unable to experimentally identify the mechanism that produces gravity, Newton wisely refused to link his theory of gravity to any scientific hypothesis as to the cause of this mysterious force. In his General Scholium of 1713, which was published in the second edition of Newton’s Principia, he wrote: “I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses… It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies, and of our sea.” (Emphasis mine – VJT.)
Although Newton’s theory of gravity has been rendered obsolete by Einstein’s general theory of relativity, which was first proposed in 1915, Newton’s theory continues to be widely used because of its mathematical convenience and the fact that it works satisfactorily in normal circumstances, at velocities well below that of light.
Wegener’s theory of continental drift
Although the notion that continents drift was first suggested as far back as 1596, the German scientist Alfred Wegener was the first to formally publish the hypothesis that the continents had somehow moved apart, in 1912. He expanded on his ideas in his 1915 book, The Origin of Continents and Oceans. Wegener marshaled a great deal of evidence for continental drift in his book. However, most geologists found his theory unpersuasive, as it lacked a plausible mechanism. As we have seen, Newton’s theory of gravity also lacked a mechanism, but at least it was supported by observational evidence. Wegener, unlike Newton, was unable to point to observations showing that the continents are currently drifting, and it was not until 1984 (some fifty-four years after Wegener’s death) that NASA was able to release the first direct measurements of continental drift, showing that the Atlantic is gradually widening, and that Australia is receding from South America and heading for Hawaii.
To make matters worse, the mechanisms proposed by Wegener in his book were nowhere near powerful enough to move the continents. As Stephen Jay Gould narrates in his essay, The Validation of Continental Drift (in Ever Since Darwin: Reflections in Natural History, 1977, W. W. Norton & Co. Inc.):
All the original drifters [proponents of continental drift – VJT] imagined that continents plow their way through a static ocean floor. Alfred Wegener, the father of continental drift, argued early in our century that gravity alone could put continents in motion. Continents drift slowly westward, for example, because attractive forces of the sun and moon hold them up as the earth rotates underneath them. Physicists responded with derision and showed mathematically that gravitational forces are far too weak to power such a monumental peregrination. So Alexis du Toit, Wegener’s South African champion, tried a different tack. He argued for a local, radioactive melting of oceanic floor at continental borders, permitting the continents to glide through. This ad hoc hypothesis added no increment of plausibility to Wegener’s speculation. Since drift seemed absurd in the absence of a mechanism, orthodox geologists set out to render the impressive evidence for it as a series of unconnected coincidences.
In the end, writes Gould, “the classical data for drift … played no role in validating the notion of wandering continents; drift triumphed only when it became the necessary consequence of a new theory.” That theory was plate tectonics. As far back as 1920, the English geologist Arthur Holmes had proposed that plate junctions might lie beneath the sea, and in 1928, he had suggested that convection currents within the mantle might be the driving force. But what finally swung geologists behind the theory of plate tectonics was the publication of several scientific papers between 1965 and 1968, which defined the key concepts of the theory and which modeled the Earth’s surface as a set of a dozen or so moving plates. The all-embracing explanatory power of the new theory made it immensely attractive to scientists. Continental drift, in turn, was “demoted” from being a theory in its own right to a mere entailment of the new theory of plate tectonics.
Darwin’s theory of evolution
Darwin’s theory of evolution by means of natural selection was not the first scientific theory of evolution to be proposed, but it was by far the best-argued. In his monumental work, The Origin of Species, Darwin not only put forward his case for evolution by natural selection; he also anticipated scientific objections to his theory and attempted to rebut them.
The reaction to Darwin’s book was varied, but there was a widespread feeling that he had overstated his case. While many of Darwin’s scientific contemporaries were willing to affirm common descent, they were far more skeptical of Darwin’s claim that the driving mechanism was natural selection, acting on random variation.
Perhaps the most penetrating critique of Darwin’s views was Fleeming Jenkin’s Review of The Origin of Species, published in The North British Review, June 1867, 46, pp. 277-318. (Warning: Jenkin’s views on racial equality are odious and very much the product of their era.) Despite his disagreement with Darwin’s theory, Jenkin admitted to “feeling the greatest admiration both for the ingenuity of the doctrine and for the temper in which it was broached.” However, Jenkin argued that the theory was mistaken for several reasons:
(i) the variation possible within each species of plant and animal is limited and not infinite, so there is a limit to how far evolution can go;
(ii) within each species, there is a strong counter-evolutionary tendency for individuals to revert to the norm;
(iii) the fact that natural selection can improve existing organs within a species does not imply that it can create or develop new organs, or that it can generate new species;
(iv) even if abnormal variations (sports) are capable of generating new organs or habits, the traits would subsequently be diluted by interbreeding with individuals lacking the new trait;
(v) whereas Darwin’s theory requires countless ages during which the earth was habitable, the scientific evidence from thermodynamics and the cooling of the stars suggested that the Earth is no more than a few hundred million years old; and
(vi) contrary to what Darwinists assert, it is not particularly surprising that species can be arranged in a graduated series, with numerous intermediates, as all animals are constituted from combinations of the same basic body parts.
Another criticism of Darwin’s theory was voiced by the anatomist St. George Mivart, who argued that natural selection could not explain complex structures such as the eye, since they would only be beneficial (and selectable) when fully formed. Darwin was certainly alive to this objection, which he attempted to rebut by pointing out that the eyes of organisms living today exhibited varying grades of complexity, forming a series.
Darwin’s theory of evolution: the current state of the evidence
Discoveries in genetics have rendered moot Jenkin’s fourth objection to Darwinism (the objection Darwin regarded as the most telling against his theory), and geologists now agree that the Earth is about 4.5 billion years old. Additionally, the existence of (mutually consistent) nested hierarchies in the genetic and anatomical traits of organisms lends strong support to the hypothesis that all living things share a common ancestry. Fossil evidence of intermediates remains sparse, but is consistent with the family tree that scientists have constructed from the nested hierarchies of traits found in organisms.
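To make the notion of a “nested hierarchy” concrete, here is a small illustrative sketch. The taxa and traits below are invented for the example (this is my own toy illustration, not a real dataset). Two independent trait tables form a nested hierarchy when the groups they define are either disjoint or contained in one another, and they are mutually consistent when they define the same groups:

```python
# Hypothetical toy data: presence/absence of traits in four taxa.
# The point: two independent trait sets yield the same nesting.
anatomical = {
    "trout":  {"vertebrae"},
    "lizard": {"vertebrae", "four_limbs"},
    "mouse":  {"vertebrae", "four_limbs", "fur"},
    "bat":    {"vertebrae", "four_limbs", "fur", "wings"},
}
genetic = {
    "trout":  {"gene_A"},
    "lizard": {"gene_A", "gene_B"},
    "mouse":  {"gene_A", "gene_B", "gene_C"},
    "bat":    {"gene_A", "gene_B", "gene_C", "gene_D"},
}

def nesting(traits):
    """The set of groups defined by shared traits: for each trait,
    the group of taxa possessing it."""
    groups = set()
    for taxon, ts in traits.items():
        for t in ts:
            groups.add(frozenset(x for x, s in traits.items() if t in s))
    return groups

def is_nested(groups):
    """A hierarchy is nested when every pair of groups is either
    disjoint or one contains the other."""
    return all(a <= b or b <= a or not (a & b)
               for a in groups for b in groups)

ga, gg = nesting(anatomical), nesting(genetic)
print(is_nested(ga), is_nested(gg), ga == gg)  # → True True True
```

The (deliberately contrived) agreement between the two tables is what the argument from mutually consistent hierarchies appeals to: independent lines of evidence yielding the same tree.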
Having said that, Jenkin’s first three objections continue to retain their force today, as does St. George Mivart’s argument relating to the emergence of complex structures. Indeed, the strength of these objections has increased during the 140 years since they were first voiced.
I was especially struck by this when listening to the recent radio debate between Professor Michael Behe and Professor Keith Fox. Time and time again, Behe put forward experimental evidence for the causal inadequacy of natural selection: large-scale experimental trials have shown it to be incapable of generating complex structures in organisms. He also cited a recent scientific paper entitled A golden age for evolutionary genetics? Genomic studies of adaptation in natural populations by N.J. Nadeau and C.D. Jiggins (Trends in Genetics, November 2010; 26(11):484-92. Epub 2010 Sep. 28) in which the authors admit that “most studies of recent evolution involve the loss of traits, and we still understand little of the genetic changes needed in the origin of novel traits.”
Additionally, in his recent book, The Edge of Evolution (Free Press, 2007), Professor Michael Behe has argued that the scientific evidence to date suggests strongly that there is a biological limit to how far evolution by natural selection can transform populations of organisms. Certainly, it can create new biological species and probably new genera, but as Behe argues, there are good grounds for believing that natural selection is incapable of creating new classes. The “edge of evolution” has not yet been precisely identified, but it appears to lie somewhere near the taxonomic level of the family or super-family.
In the meantime, the science of biochemistry has revealed a host of complex systems within the cell, which can only be described as cellular machines. In the words of molecular biologist Michael Denton: “Although the tiniest bacterial cells are incredibly small, weighing less than 10^(-12) grams, each is in effect a veritable micro-miniaturized factory containing thousands of exquisitely designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machinery built by man and absolutely without parallel in the non-living world” (“Evolution: A Theory in Crisis,” Adler & Adler, 1985, p. 250). The problem of how these cellular machines originated as a result of blind natural processes is more intractable than ever, as Professor Behe has recently demonstrated in his discussion of the bacterial flagellum in The Edge of Evolution. The eye continues to yield surprises, too, as scientists have uncovered regulatory switches that trigger the development of the eye in nearly all kinds of animals – but at the same time, they haven’t a clue how these complex regulatory cascades originated.
Finally, the cell itself appears to be unfathomably complex – a city seems to be the best metaphor we can construct for it at present, but I suspect that no human metaphor will ever do it justice. As Stephen Talbott has argued in a recent article in The New Atlantis, the scientific quest for a non-lifelike foundation for life is a misbegotten one: “In the study of organisms, ‘It’s life all the way down.'”
Natural selection, then, appears woefully inadequate to account for the complexity we find in living things; and it appears incapable of radically transforming organisms beyond the taxonomic limits of the family. The experimental evidence massively contradicts Darwinism as a scientific theory. Living things appear to have evolved from a common stock; but natural selection is incapable of making living things evolve much at all.
Why is Darwin’s theory still revered in scientific circles?
Yet Darwin’s theory is still revered in scientific circles, while Newton’s theory of gravity and Wegener’s theory of continental drift have been cheerfully cast aside by scientists in favor of better theories. In this respect, Darwin’s theory is strikingly different from the other two. Why is this? And what can we do about it?
Stephen Jay Gould, in the essay quoted above, wrote that “The lesson of history holds that theories are overthrown by rival theories.” The problem in this case is that the rival theory (Intelligent Design) is unpalatable to most scientists. Hence the exaggerated reverence for Darwin displayed in academic circles.
At the present time, Intelligent Design proponents have to contend with a wall of scientific prejudice against the notion of life’s having had a Designer, and there remain formidable legal hurdles to Intelligent Design theory even getting a hearing in American high school science classrooms. An alternative strategy, however, might be to neutralize the theory of evolution taught in high school classrooms. How? By “Newtonizing” it.
Newton, it will be recalled, refrained from linking his theory of gravity to any scientific hypothesis as to its cause. As he wrote in his General Scholium of 1713: “I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses… It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies, and of our sea.” (Emphasis mine – VJT.)
Nested hierarchies exist, and they are a pervasive feature of organisms. Their mutual consistency, coupled with their compatibility with the fossil evidence, strongly implies common descent. What nested hierarchies do not imply, however, is Darwinism. If scientists were honest, this is what they would teach in high school classrooms:
“We have not yet been able to discover the causes of the various complex properties of organisms from observed phenomena, and we feign no hypotheses… It is enough that nested hierarchies really do exist, that they are mathematically describable, and that they apply to all of the features of living things, suggesting common descent.”
Now that would be a curriculum change for the better. Will we see it happen? Time will tell. But I think it would definitely be a legal avenue worth pursuing.
What do readers think?
32 Replies to “Which one is different: gravity, continental drift or evolution?”
Hello Dr. Torley,
As always you have given us a very stimulating presentation.
Although common descent is an area where the evidence is stronger than in other areas, there is also counter-evidence. Have you considered, for example, the counter-evidence summarized in this discussion:
A Primer on the Tree of Life
An alternative strategy, however, might be to neutralize the theory of evolution taught in high school classrooms. How? By “Newtonizing” it.
Interesting idea. Let me make some suggestions about how to “Newtonize” Darwinism that may or may not mesh with your own. (I assume here that, while you believe ID should be taught in schools, you are offering this as a pragmatic tack – maybe a compromise.)
* Stress what it means for there to be a ‘law’ as far as science goes – that laws are regularities found in nature, and that deeper questions (whether they are merely descriptive or prescriptive, whether things called ‘laws’ actually exist in nature, whether these laws are expressions of a designer’s intention or creation, etc.) go beyond science.
* Stress that the Darwinian theory, as science, is necessarily silent on whether mutations and selection events are ultimately guided. Science as science can neither rule in nor out design or guidance on certain levels in nature. Thus the question of whether evolution is guided or directed – whether all selection is, ultimately, artificial selection rather than natural – is unanswerable by science alone in either a positive or negative direction.
* Explain the current blind spots in evolutionary theory that you speak of, and – if it is indeed the case – that they really are scientific blind spots, though research continues on them.
* Explain the particular problems with performing lab tests on evolutionary history, as per Fox’s view. And when dealing with hypotheses, at least provide explanations that are not merely limited to neo-Darwinism (neutral drift theory, symbiogenesis, etc.), even if ID itself is off the list. Also stress that such possible explanations do not exhaust the possibilities.
A very illuminating writeup – thank you.
I have not finished reading it, but at this point I want to stop and point out that you seem to be giving a good deal more credit to the evolutionary position than is due, when you say
Additionally, the existence of (mutually consistent) nested hierarchies in the genetic and anatomical traits of organisms lends strong support to the hypothesis that all living things share a common ancestry.
Fossil evidence of intermediates remains sparse, but is consistent with the family tree that scientists have constructed from the nested hierarchies of traits found in organisms.
Nested cladistic hierarchies are riddled with appeals to the convergent evolution of complex organs, e.g. the apparently homoplastic labyrinth organ of Anabantoidei and of arapaima. And fossil evidence can only be shoehorned into consistency with these hierarchies by postulating long ghost lineages. As a result, the hierarchies are in a constant state of flux, as articles on this blog have often pointed out.
That being said, I appreciate the other gems you’ve put on display, such as Jenkin’s review of TOOS, and look forward to reading the rest of what you’ve written.
Endoplasmic Messenger and Lars,
Thank you both for your posts. My knowledge of biology is very limited, but I try to keep up with the latest news. It seems to me that up until last year, a very good scientific case could be made for a polyphyletic origin of life. The link you enclosed, Endoplasmic Messenger (thanks very much, by the way), summarizes the case against universal common ancestry very well. However, the latest research, summarized in this article by Dr. Douglas Theobald, and conducted over all three domains of organisms, seems to swing the balance of evidence decisively the other way:
What Theobald’s methodology is incapable of answering, as Dr. Cornelius Hunter has pointed out in a previous post, is the question of whether Universal Common Ancestry is in fact true. All it can tell us is whether it is more probable than other hypotheses that have been proposed to date. Having said that, the most parsimonious explanation of the patterns of similarity observed in the organisms tested by Dr. Theobald appears to be common ancestry – which, of course, is perfectly compatible with repeated acts of God that produced the complex structures we find in organisms. In any case, I was intrigued by the following quote from Dr. Theobald:
I have read claims that nested hierarchies contain anomalies such as the ones you pointed out, Lars. I guess what I’d like to know is whether these anomalies are minor and infrequent, or whether they are pervasive. Should the latter prove to be the case, then of course that would throw into question Dr. Theobald’s methodology.
In the meantime, the point I would like to stress is that a Darwinistic explanation of the specified complexity which characterizes living things remains as elusive as ever, which is why I believe the teaching of evolution in high schools has to be “Newtonized.”
Thank you for your post. I would definitely agree with your statement that the laws are not ultimately explicable in scientific terms. Science cannot explain what science presupposes: an orderly, law-governed universe.
However, I have considerable reservations about your claim that “Darwinian theory, as science, is necessarily silent on whether mutations and selection events are ultimately guided.” Science is indeed silent on such a question, but Darwinian theory, as I understand it, answers the question firmly in the negative. I have dealt with this elsewhere, in my online discussion of why Thomism and Darwinism don’t mix (see sections 1 to 4, and especially section 4 at http://www.angelfire.com/linux.....l#section4 , which deals with Markov processes). Markov processes are inherently memoryless, and therefore cannot be used to bring about long-term goals, such as those which an Intelligent Designer might have had.
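To illustrate what “memoryless” means in this context, here is a toy two-state Markov chain (the transition probabilities are made up purely for the sketch): the next-step distribution depends only on the current state, never on the path taken to reach it.

```python
import random

# A toy two-state Markov chain. "Memoryless" means the next-step
# probabilities depend only on the current state - so the process
# cannot "aim" at a distant goal by remembering where it has been.
P = {"A": {"A": 0.9, "B": 0.1},
     "B": {"A": 0.5, "B": 0.5}}

def step(state, rng):
    return "A" if rng.random() < P[state]["A"] else "B"

def next_dist(history, trials=200_000, seed=0):
    """Empirical frequency of moving to state "A" next, given that
    the chain is currently in history[-1]; note that the earlier
    entries of the history are never consulted."""
    rng = random.Random(seed)
    current = history[-1]
    return sum(step(current, rng) == "A" for _ in range(trials)) / trials

# Two very different histories ending in the same state yield the
# same next-step statistics (up to sampling noise):
d1 = next_dist(["A", "A", "A", "B"], seed=0)
d2 = next_dist(["B", "B", "B", "B"], seed=1)
print(abs(d1 - d2) < 0.02)  # True: only the final state matters
```

This is only a sketch of the mathematical point, of course; whether biological processes are best modeled this way is exactly what is at issue.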
Finally, I would certainly agree with your plea that other versions of evolution (neutral drift theory, symbiogenesis, etc.) be taught in the classroom. I’d also like to add orthogenesis and Wallace’s theory of intelligent evolution .
However, I have considerable reservations about your claim that “Darwinian theory, as science, is necessarily silent on whether mutations and selection events are ultimately guided.” Science is indeed silent on such a question, but Darwinian theory, as I understand it, answers the question firmly in the negative.
Well, that’s easy to resolve: if Darwinian theory demands that neither mutation nor selection is ultimately guided, but science is necessarily silent on such a question, then Darwinian theory contains claims which go beyond science – out it goes (or those claims are sliced off). Common descent, selection, mutation, etc. – all those things would presumably survive the cut. Supposedly the NCSE and company agree on this (see the famous NABT event). Let’s see if they mean it, eh?
Markov processes are inherently memoryless, and therefore cannot be used to bring about long-term goals, such as those which an Intelligent Designer might have had.
Sure, if you define a process as ‘unguided’, then you can’t at the same time say ‘the process is guided’. But Markov processes are, at best, models we apply to nature to better understand it – not statements of ultimate reality (nor could they be, and remain science). If I roll a die, it can come up as 1 through 6. I could describe the possibilities in a probabilistic manner. But if I roll the die and turn up a 4, could I – in an ultimate sense – have rolled anything else? Could no possible onlooker have known with certainty ‘Oh, that roll will come up 4’? We’ve stepped outside of science on that question.
We could also employ a thought experiment. Let’s say I live in a world and somehow I know for a fact that all events are determined and guided – and that what actually happens, had to happen. But that’s all I know; I know the fact of utter determinism and guidance, but no details of what will happen. How would I describe the possibilities of the die I’m about to cast? Pretty much the same way, because the possibilities are a tool I use, a statement about both what I know and don’t know.
I think there are two major stumbling blocks to any theory of common descent, even guided common descent. The first is a fact that anyone who has made a major change to a large computer system (as I have), for example, knows: you simply can’t do it by making a series of small changes, not if you require that the system continue to work (forget about working better) after each change. The reason for this is that in any complex system the parts (subsystems and sub-subsystems, etc.) must work together as a whole for the system to work at all. Now it is generally accepted that a living cell is far more complex than any computer system designed to date, and any multicellular organism has additional layers of complexity added on top of the complexity of the cell. Therefore, if common descent is true, we must have new body plans, organ systems, etc. springing full blown from their parent organisms. How well is it going to work for a mouse to give birth to a bat, for example? (And there would have to be at least two–a male and a female.)
But the second stumbling block is even more serious. I refer to the work of J. C. Sanford published in his book Genetic Entropy & the Mystery of the Genome. Using the established findings of population genetics, he shows quite conclusively that the genome of every multicellular species is degenerating due to mutation, so that each can last only a few million years before it becomes extinct due to that degeneration (we humans are not exempt from this sentence, either). In the book, he shows that natural selection cannot reverse this slow deterioration. Therefore, no species can serve as the progenitor of a descendant species because the descendant species would inherit the deteriorated genome of its parents.
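The logic of this claim can be sketched in a toy simulation (my own illustration with invented parameters, not Sanford’s actual model): when each new mutation is only very slightly deleterious, selection barely distinguishes individuals, and the mutation count of the whole population drifts steadily upward.

```python
import random

# Toy model of "genetic entropy" (invented parameters, purely
# illustrative): each individual accumulates slightly deleterious
# mutations whose individual cost is far below what selection can
# effectively "see", so mean fitness declines despite selection.
rng = random.Random(1)
POP, GENS = 200, 300
COST = 0.0005                      # fitness cost per mutation (tiny)

def fitness(m):
    return max(0.0, 1.0 - COST * m)

pop = [0] * POP                    # mutation counts per individual
for _ in range(GENS):
    # selection: parents chosen in proportion to fitness
    weights = [fitness(m) for m in pop]
    parents = rng.choices(pop, weights=weights, k=POP)
    # each offspring gains 0-2 new slightly deleterious mutations
    pop = [m + rng.choice([0, 1, 2]) for m in parents]

mean_fit = sum(fitness(m) for m in pop) / POP
print(mean_fit < 1.0)  # True: mean fitness has declined
```

Whether real genomes behave like this toy model is of course the disputed question; the sketch only shows why near-neutral mutations are hard for selection to purge.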
And of course, these two stumbling blocks are absolute disaster to Darwinism. (The first includes Behe’s notion of irreducible complexity.) They are two of the three mortal wounds to the theory, in my view. The third is the enormous amount of complex specified information in even the simplest life and the inability of natural processes to generate it.
With all respect for your position, while I agree with what you say, I don’t believe it is an argument against common descent.
Your first argument is against gradual common descent, and I agree with it. But it does not rule out less gradual forms of CD.
Let’s put it this way: in the end, CD just means that the designer, or designers, inputs new information into what already exists, and acts on what already exists, in some way constrained by it.
That is, IMO, the best way to explain what we observe in designed biological information. But in no way does it require that the change be gradual, or step by step.
For instance, a very good biological engineer could in principle take a mouse in his lab, and through a series of passages, in a definite time, transform it into a bat. If that is in principle possible for a human designer, it is reasonably possible for the designer of biological information.
Only the observation of data can give us information about how “suddenly” the transitions happened. I agree with Gould that the transitions, whatever their mechanism, happened relatively suddenly.
Going back to your computer example, if you want to design a new operating system, you usually don’t build it from scratch. You take the last version you made, and modify it. That is common descent of operating systems. If you wrote each new operating system from scratch, both the procedure and the result would be different.
That’s my point. I don’t believe that the designer “builds” each new species “from scratch”. That would be the same as repeating the process of OOL each time, in different ways. I don’t believe that, because it is not a good explanation for what we observe.
Your second argument, I believe, is only valid against unguided common descent. If the designer inputs new information, he can certainly both correct the degeneration due to genetic entropy if and when that is necessary, and design a new species from an existing one just the same.
Indeed, evidence that at least part of such degeneration is inherited is one of the arguments for common descent. You are right, however, that the process would be completely without hope if there were no fresh input of information.
Your third argument is obviously completely true, but it is not an argument against CD.
I am well aware of the arguments both for and against CD, like those in the Luskin article. I have seriously considered them. But I am still convinced that CD is at present the best explanation for the facts.
It is true that it is difficult to build a convincing “tree of life”, and that the results are often contradictory. But in principle, that can well happen with scientific models, when the evidence is still poorly understood.
Moreover, a model where common descent is designed is very different from a model where it is drawn by “darwinian evolution”. The second model is clearly wrong, and it doesn’t fit the data.
I find even the distinction between “common design” and “designed common descent” a little artificial. When a designer updates his software, he usually has it written somewhere. I believe the biological software is written in the existing biological beings. So the reasonable way to act would be to act on them.
A pure common design model would require that the designer redesign everything each time, out of merely functional needs. It requires that everything we observe that is passed on to another species must be functional.
That is a heavy constraint, and difficult to maintain. I have worked with protein homologies, in these years, to understand them better. I do believe that many neutral mutations are inherited from one species to another, and that is one of the most convincing aspects of a “nested hierarchy,” whatever it may be. Neutral mutations make the same protein, with the same function, increasingly different as the distance between species increases. That’s what is called “the big bang theory of proteins.” This very simple fact can easily be verified by anyone who is able to use the BLAST page at NCBI.
Until someone can satisfactorily explain that fact to me in other ways, I stick to common descent.
Dr. Torley, in your reliance on the Theobald study I should like to point out the study’s extremely biased sampling: many times evolutionists will scan molecular sequences using computer algorithms to find a hypothetical Tree Of Life (TOL), but this is very problematic because of the inherent bias of researchers to look solely for evidence that accords with a preconceived evolutionary conclusion while ignoring the majority of sequences that disagree with that bias:
Pattern pluralism and the Tree of Life hypothesis – 2006
Excerpt: Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true.
Dr. Torley, when Darwinists are allowed to disavow whatever evidence is ‘inconvenient’ to their TOL hypothesis, exactly how is this better than seeing designoids, as Dawkins would call such imposed reasoning?
gpuccio, to add to what Bruce David has elucidated, here is the main problem with the scenario you describe:
gpuccio, I simply don’t see a biological engineer producing such a change, no matter how good he is, and here is why:
Poly-Functional Complexity equals Poly-Constrained Complexity
The primary problem that poly-functional complexity presents for neo-Darwinism, or even for Theistic Evolutionists, is this:
To put it plainly, the finding by the ENCODE study of a severely poly-functional/poly-constrained genome has pushed the odds from what was already astronomically impossible to what can only be termed fantastically astronomically impossible. To illustrate the monumental brick wall that any evolutionary scenario (no matter what “fitness landscape”) must face when I say genomes are poly-constrained to random mutations by poly-functionality, I will use a puzzle:
If we were to actually get a proper “beneficial mutation” in a polyfunctional genome of, say, 500 interdependent genes, then instead of the infamous “Methinks it is like a weasel” single element of functional information that Darwinists pretend they are facing in any evolutionary search (with their falsified genetic reductionism scenario, I might add), we would actually be encountering something more akin to this illustration, found on page 141 of Genetic Entropy by Dr. Sanford:
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S
Which translates as:
THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS.
This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter, we may get a new meaning for a single reading read any one way, as in Dawkins’ weasel program, but we will consistently destroy the other three readings of the message with the new mutation.
This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations.
The puzzle I listed is only poly-functional to 4 elements/25 letters of interdependent complexity, whereas the minimal genome is poly-constrained to approximately 500 elements (genes) at a minimum approximation of polyfunctionality. For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order!
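The four-way constraint the square illustrates can be checked mechanically. Here is a small sketch (purely illustrative; the “mutation” site is chosen arbitrarily) that verifies the square reads the same in all four directions, and that changing a single letter destroys the agreement between the readings:

```python
def four_readings(sq):
    """Return the four readings of a square word grid."""
    n = len(sq)
    rows = list(sq)                                        # left-to-right, top-down
    rows_rev = [r[::-1] for r in sq[::-1]]                 # right-to-left, bottom-up
    cols = ["".join(r[i] for r in sq) for i in range(n)]   # top-to-bottom
    cols_rev = [c[::-1] for c in cols[::-1]]               # bottom-to-top
    return rows, rows_rev, cols, cols_rev

def reads_same_four_ways(sq):
    a, b, c, d = four_readings(sq)
    return a == b == c == d

sator = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]
print(reads_same_four_ways(sator))    # True

# "Mutate" one letter (the A of SATOR): the mutated row may still read as
# a new "word", but the other readings no longer agree with it.
mutant = ["SXTOR", "AREPO", "TENET", "OPERA", "ROTAS"]
print(reads_same_four_ways(mutant))   # False
```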
As to Theistic Evolutionists, who believe God guides evolution incrementally, all I ask of you is this: do you think it would be easier for God to incrementally change the genome of an organism, maintaining functionality all the time, in a bottom-up manner, or do you think it would be easier for Him to design each kind of organism in a top-down manner?
Simplest Microbes More Complex than Thought – Dec. 2009
Excerpt: PhysOrg reported that in a species of Mycoplasma, “The bacteria appeared to be assembled in a far more complex way than had been thought.” Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes.
First-Ever Blueprint of ‘Minimal Cell’ Is More Complex Than Expected – Nov. 2009
Excerpt: A network of research groups approached the bacterium at three different levels. One team of scientists described M. pneumoniae’s transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation.
“At all three levels, we found M. pneumoniae was more complex than we expected,”
Scientists Map All Mammalian Gene Interactions – August 2010
Excerpt: Mammals, including humans, have roughly 20,000 different genes. … They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information – David L. Abel and Jack T. Trevors – Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
“No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?”
Mammalian overlapping genes: the comparative perspective. – 2004
Excerpt: it is rather surprising that a large number of genes overlap in the mammalian genomes. Thousands of overlapping genes were recently identified in the human and mouse genomes. However, the origin and evolution of overlapping genes are still unknown. We identified 1316 pairs of overlapping genes in humans and mice and studied their evolutionary patterns. It appears that these genes do not demonstrate greater than usual conservation. Studies of the gene structure and overlap pattern showed that only a small fraction of analyzed genes preserved exactly the same pattern in both organisms.
No Examples Of Gradualism In Molecular Biology – Doug Axe PhD.
Another fact that argues very forcefully for this truth of the polyconstraint of genomes is that only a very few mutations to human DNA are found to be ‘beneficial’, yet, as with antibiotic-resistant bacteria, the ‘benefit’ is always found to come at a molecular cost. On the other hand, the evidence for the detrimental nature of mutations to humans is overwhelming, for scientists have already cited over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design – Pg. 57 By John C. Avise
Excerpt: “Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens.”
I went to the mutation database website cited by John Avise and found:
HGMD®: Now celebrating our 100,000 mutation milestone!
I really question their use of the word ‘celebrating’.
Here is a ‘not so surprising’ study:
A golden age for evolutionary genetics? Genomic studies of adaptation in natural populations. – September 2010
Excerpt: Nonetheless, most studies of recent evolution involve the loss of traits, and we still understand little of the genetic changes needed in the origin of novel traits.
When looked at from an information-science perspective, mutations are at the very ‘bottom rung of the ladder’ as far as the ‘higher levels of the layered information’ of the cell are concerned:
Stephen Meyer on Craig Venter, Complexity Of The Cell & Layered Information – video
“Stunningly, information has been shown not to increase in the coding regions of DNA with evolution… Mutations do not produce increased information… the amount of coding in DNA actually decreases with evolution… No increase in Shannon or Prescriptive information occurs in duplication.”
David Abel – The GS (Genetic Selection) Principle – 2009
I don’t understand why your points should be relevant to my discussion. They are obviously true, but are they relevant?
“gpuccio I simply don’t see a biological engineer producing such a change no matter how good he is and here is why”
Why? If a designer can build a new species from scratch, why couldn’t he build it from an existing species? I don’t understand.
I agree with all your arguments against darwinism and theistic evolutionists. But what have they to do with common descent?
Maybe I have not been clear enough. Let’s pretend for a moment that the designer is God, which is what we both believe in the end.
Why shouldn’t God be able to take an existing species and transform it, by any means he prefers to use, in whatever time he finds appropriate, reutilizing what is useful in the existing species, correcting what has to be corrected, and adding what has to be added?
There is no a priori need that the action should be incremental, or preserve intermediate functions to be selected by any amount of NS. I have argued many times that the designer can use many instruments to implement a new design. Guided mutation is one. Intelligent selection is another. “Direct writing” of great amounts of new information is a possibility (not so different from guided mutation, anyway). We just don’t know. But the designed result itself, and the data we collect about natural history, can certainly give us some clues.
I understand that you believe that the designer, God for us, each time does everything from scratch. That would be more or less what is usually called “special creation”. It is a possibility. But why the only possibility? Even if the designer acts differently, it is special creation just the same. The new information, the new species, is created just the same, even if not from scratch.
Anyway, I don’t think I will go on defending CD here. It’s not my priority. My priority is ID. I am interested in CD only insofar as it helps me understand the mechanisms of ID.
Dr. Torley, ‘coincidentally’ Casey Luskin’s latest article deals with the TOL; this reference particularly caught my eye:
molecular systematics is (largely) based on the assumption, first clearly articulated by Zuckerkandl and Pauling (1962), that degree of overall similarity reflects degree of relatedness. This assumption derives from interpreting molecular similarity (or dissimilarity) between taxa in the context of a Darwinian model of continual and gradual change. Review of the history of molecular systematics and its claims in the context of molecular biology reveals that there is no basis for the ‘molecular assumption.’ … For historians and philosophers of science the questions that arise are how belief in the infallibility of molecular data for reconstructing evolutionary relationships emerged, and how this belief became so central …
(Jeffrey H. Schwartz, Bruno Maresca, “Do Molecular Clocks Run at All? A Critique of Molecular Systematics,” Biological Theory, Vol. 1(4):357-371, (2006).)
gpuccio, I think the relevant point in all this is: is it ‘easier’ to design top-down or bottom-up? Of course God could do it any way He wanted, but is it pragmatic for us to infer that He did it gradually? I just don’t see, in the molecular or fossil evidence, the continuity necessary to say it is the ‘strongest’ position, as you seem to think the evidence shows.
Please look at this very simple example.
Myoglobin is part of the globin superfamily. These proteins have maintained a remarkable constancy of protein fold and function throughout evolution. We can say that, functionally speaking, they are the same protein with the same function.
And yet, the primary sequence has varied much in the course of evolution.
Look at the following data about some examples:
a) Human myoglobin. 154 AAs.
b) Globin 1 in mosquito: 124 AAs. Compared to human myoglobin: identities 33%; positives 57%; E value: 2e-06 (the E value estimates the probability that the homology is due to chance: the lower it is, the more significant the homology). As you can see, there is significant homology with human myoglobin, but the primary structures are rather different (only 33% identities). Remember that the 3D structure and function are practically the same.
c) Myoglobin in tuna fish: 147 AAs. Compared to human myoglobin: identities 48%; positives 61%; E value: 6e-31
d) Myoglobin in mouse: 154 AAs. Compared to human myoglobin: identities 84%; positives 92%; E value: 7e-71
e) Myoglobin in chimpanzee: 154 AAs.
Compared to human myoglobin: identities 99%; positives 99%; E value: 3e-85
These are the facts I alluded to; these are the numbers. I really don’t know how else you can explain those numbers. I explain them with common descent, and neutral variation within a constant island of functionality.
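To make concrete what a figure like “identities 33%” means, here is a minimal sketch. The fragments below are made-up, globin-flavored strings, not the actual database entries, and a real BLAST run first builds a gapped alignment and also counts “positives” (biochemically similar substitutions) with a scoring matrix such as BLOSUM62:

```python
def percent_identity(a, b):
    """Percent identity between two pre-aligned, equal-length sequences."""
    assert len(a) == len(b)
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Hypothetical toy fragments (NOT the real database sequences):
human_like = "GLSDGEWQLVLNVWGKVEAD"
mouse_like = "GLSDGEWQLVLNAWGKVEAD"   # one substitution: high identity
fly_like   = "GLTAEQIQLIQSSWEKVSGD"   # many substitutions: low identity

print(percent_identity(human_like, mouse_like))   # 95.0
print(percent_identity(human_like, fly_like))     # 40.0
```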
gpuccio, you are failing to consider the myriad of other proteins that myoglobin interacts with, which have vastly different sequences. When taken in a holistic view, the EXTREMELY polyfunctional/polyconstrained nature of the situation argues very forcefully for ‘top-down’ design of the entire system. Remember that humans have at least 1000 genes that are completely unique, and yet the genes of humans are found to have extreme overlapping poly-functionality:
Scientists Map All Mammalian Gene Interactions – August 2010
Excerpt: Mammals, including humans, have roughly 20,000 different genes. … They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
as I somewhat alluded to here:
The constraint of 7 million interactions on 20,000 genes is SEVERE. You simply cannot, in step-wise fashion, alter one protein; the ENTIRE system has to be modified at the same time in order to maintain functionality. Sure, if myoglobin were a stand-alone entity, it would be reasonable to infer such a gradual model; yet as the situation now sits, with extreme polyfunctionality being proven to at least 7 million interactions encompassing essentially EVERY ONE of the genes, this position of gradual ‘bottom-up’ design simply does not square with the facts as they now sit. ‘Top-down’ design is the parsimonious explanation, with genetic entropy then being the correct model for explaining all ‘beneficial’ adaptations after the initial point of design.
I am afraid we don’t understand each other.
I have never spoken of any beneficial adaptation, least of all of bottom-up design. Please, read again what I have said.
Myoglobin has remained functionally the same. I don’t believe the modifications in the primary structure have substantially modified its 3D structure and its function. It binds iron and oxygen.
What I am saying is that there is continuity and variation in the primary structure of different myoglobins, while the function remains the same and the fold remains the same. And the differences become greater when the “separation” between species becomes greater. That is due, IMO, to cumulative neutral mutations, while negative selection keeps the molecule inside the island of functionality which it has achieved just from the beginning.
That has nothing to do with top-down or bottom-up design. It is just an argument in favour of CD (more or less: if neutral mutations are inherited, there must be a physical continuity in proteins through different species).
I have absolutely no doubt about the designed nature of proteins; whether the design is top-down or bottom-up we will say according to the facts: for me that remains an open question.
Please note that I have stayed very generic, without referring to any detailed “tree of life”, but just limiting the discussion to a few examples which anyone would accept as well separated in time and natural history: insects, fish, mammals, primates and humans. Are you objecting even to that elementary “sequence”? And yet the differences in homology are stunning, and they are in full accord with the chronological sequence.
So, if you have specific and pertinent observations about these facts, I am willing to listen. Otherwise, we can well remain with our opinions.
gpuccio, when looking at ONLY one sequence you may reasonably infer CD, but that is not the situation for other proteins, whose sequences tell vastly different stories:
Botching Evolutionary Science – Casey Luskin – April 2009
Excerpt: The textbook touts the cytochrome C tree, but it ignores the cytochrome B tree, which has striking differences from the classical animal phylogeny. As one article in Trends in Ecology and Evolution stated: “[T]he mitochondrial cytochrome b gene implied … an absurd phylogeny of mammals, regardless of the method of tree construction. Cats and whales fell within primates, grouping with simians (monkeys and apes) and strepsirhines (lemurs, bush-babies and lorises) to the exclusion of tarsiers. Cytochrome b is probably the most commonly sequenced gene in vertebrates, making this surprising result even more disconcerting.” (See Michael S. Y. Lee, “Molecular Phylogenies Become Functional,” Trends in Ecology and Evolution, Vol. 14: 177 (1999).)
The universal ancestor – Carl Woese
Excerpt: No consistent organismal phylogeny has emerged from the many individual protein phylogenies so far produced. Phylogenetic incongruities can be seen everywhere in the universal tree, from its root to the major branchings within and among the various taxa to the makeup of the primary groupings themselves.
A New Model for Evolution: A Rhizome – May 2010
Excerpt: Thus we cannot currently identify a single common ancestor for the gene repertoire of any organism.,,, Overall, it is now thought that there are no two genes that have a similar history along the phylogenic tree.,,,Therefore the representation of the evolutionary pathway as a tree leading to a single common ancestor on the basis of the analysis of one or more genes provides an incorrect representation of the stability and hierarchy of evolution. Finally, genome analyses have revealed that a very high proportion of genes are likely to be newly created,,, and that some genes are only found in one organism (named ORFans). These genes do not belong to any phylogenic tree and represent new genetic creations.
Congruence Between Molecular and Morphological Phylogenies – Colin Patterson
Excerpt: “As morphologists with high hopes of molecular systematics, we end this survey with our hopes dampened. Congruence between molecular phylogenies is as elusive as it is in morphology and as it is between molecules and morphology.”
‘The theory makes a prediction (for amino acid and nucleotide sequence studies); we’ve tested it, and the prediction is falsified precisely.’
Dr. Colin Patterson, Senior Principal Scientific Officer in the Paleontology Department at the British Museum
Why Darwin was wrong about the (genetic) tree of life: – 21 January 2009
Excerpt: Syvanen recently compared 2000 genes that are common to humans, frogs, sea squirts, sea urchins, fruit flies and nematodes. In theory, he should have been able to use the gene sequences to construct an evolutionary tree showing the relationships between the six animals. He failed. The problem was that different genes told contradictory evolutionary stories. This was especially true of sea-squirt genes. Conventionally, sea squirts – also known as tunicates – are lumped together with frogs, humans and other vertebrates in the phylum Chordata, but the genes were sending mixed signals. Some genes did indeed cluster within the chordates, but others indicated that tunicates should be placed with sea urchins, which aren’t chordates. “Roughly 50 per cent of its genes have one evolutionary history and 50 per cent another,” Syvanen says. “We’ve just annihilated the tree of life. It’s not a tree any more, it’s a different topology entirely,” says Syvanen. “What would Darwin have made of that?”
gpuccio, perhaps Dr. Sewell can better explain why each system needs to be implemented as a ‘top-down’ design instead of gradually:
Granville Sewell PhD Math – Comparing The Jumps Seen In The Fossil Record To The Jumps Needed In Software Programming- video
As well, gpuccio, remember that there very well may be sequence constraints placed on the exact molecular function that myoglobin is performing, so as to explain that particular sequence similarity across species. It simply is not ‘scientific’ for you to put all your eggs in this basket while ignoring all the evidence against the CD model.
First of all, be sure that I respect and take into serious consideration your ideas and the arguments you bring.
I would only add some remarks, in the hope of contributing to a reciprocal understanding:
a) I am not “putting all my eggs in the basket” of CD. I really have no special commitment to CD: I just find it the best explanation at present. I can change my mind at any moment, if the evidence changes, or if my appreciation of the evidence changes. If I am putting my eggs in a basket, that basket is ID. I am really open-minded about CD.
b) I agree with you that there are many arguments against the current model of CD, and that the various “trees of life” are often contradictory and not too credible. Those are interesting facts, and must be seriously considered. But it is also true that the present model of CD has been built according to darwinian theory. A model of CD shaped by design theory would certainly be much more flexible and empirical.
c) While I have some belief in CD, I am not at all sure that CD is universal.
d) I think you still miss my main point about myoglobin. It’s not the similarities which, for me, are a good argument for CD. As you correctly say, the similarities can well be explained by functional constraints. That’s true.
It’s the differences which are a good argument for CD. Outside of CD, the differences would all have to be explained by functional constraints too. While that is possible in principle (there could be reasons why the myoglobin in mosquitoes has to be so different from the myoglobin in humans), we have no real evidence of those functional explanations. Moreover, a functional interpretation should also explain the continuous gradient of those differences according to chronological sequence.
The theory that those differences are due to neutral mutations explains them very simply and elegantly. That theory is known as “the big bang theory of proteins”. I find that model very appealing.
In that model, the true “miracle” (to be explained by a very important information input by the designer) is that the myoglobin molecule was found in the beginning.
So, let’s say that mosquitoes are the beginning for myoglobin (I could be wrong about that, and the molecule could be older; but the reasoning remains the same).
So, let’s say that before mosquitoes (or their ancestors), myoglobin did not exist. At some point, in a window of time which will be better understood as we accumulate more data, but which I believe to be small, myoglobin appears.
The important point is that the molecule is new: it is a new superfamily, a new fold, a new function. It binds heme, it binds iron, it binds oxygen. It has a definite, new function. It already does all that it will be doing in the future millions of years.
That is the big design. That is the big information jump. A new functional fold and superfamily has been found, one of the roughly 2000 basic ones. That is the big bang of the globin superfamily.
What happens after? Almost nothing. The molecule remains the same: same fold, same function. It probably undergoes minor adaptations in different species, but nothing really major.
But the primary sequence does not stay the same. The primary sequence “explores” the target space of that function. It changes, over millions of years, as much as it can change without compromising the function. It changes because neutral mutations are possible, and negative mutations are cancelled by negative selection. So it moves within that island of functionality, but when it goes near the “borders”, where function would be compromised, it just goes back.
So, at present, we have not one myoglobin, but a family of molecules, all different at the primary sequence level, all (almost) the same at the functional level.
And the more chronologically “distant” two species are, the bigger the primary sequence differences.
So, it’s the differences which are an argument for CD, not the similarities. It’s difficult to explain the differences outside of a CD scenario.
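The verbal model above (neutral change confined to an island of functionality, with divergence growing as separation grows) is easy to caricature in a few lines of simulation. Every parameter here is an assumption made up for illustration: the alphabet, the mutation rate, and the crude stand-in for negative selection (some sites simply frozen):

```python
import random

random.seed(1)
AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids

def drift(seq, steps, constrained, rate=0.02):
    """Accumulate random substitutions at unconstrained sites only.
    Frozen sites are a crude stand-in for negative selection."""
    s = list(seq)
    for _ in range(steps):
        for i in range(len(s)):
            if i not in constrained and random.random() < rate:
                s[i] = random.choice(AA)
    return "".join(s)

def identity(a, b):
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

ancestor = "".join(random.choice(AA) for _ in range(150))
constrained = set(range(0, 150, 5))   # every fifth site treated as functional

recent  = drift(ancestor, steps=20,  constrained=constrained)   # short separation
distant = drift(ancestor, steps=400, constrained=constrained)   # long separation

# Identity decays with separation, but never below the floor set by the
# constrained (functional) sites.
print(identity(ancestor, recent) > identity(ancestor, distant) > 10.0)   # True
```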
e) A final, “strategic” note. You have certainly heard darwinists affirm that “the evidence for evolution is overwhelming”. While we well know that darwinists often lie, it is certainly true that, when asked to exhibit that “overwhelming” evidence, they only produce evidence for CD. My simple point is: if you concede CD, even tentatively, they have really nothing.
I have no reason not to concede it. Even tentatively. And they remain with nothing in their hands.
Well gpuccio, 🙂 I really don’t even buy ‘neutral mutations’, seeing the extreme level of poly-functionality being dealt with. Let me run this by you one more time:
Scientists Map All Mammalian Gene Interactions – August 2010
Excerpt: Mammals, including humans, have roughly 20,000 different genes. … They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
Exactly how are we to presuppose a completely ‘neutral’ mutation when such an extreme level, and I do mean an exceedingly extreme level, of integrated complexity virtually assures, with 100% probability, that the mutation will be at least slightly detrimental, though we may not be able to directly measure it? It is simply unwarranted, scientifically, to presuppose neutrality for mutations when dealing with such a staggering level of integrated complexity.
gpuccio, to address this claim of yours, that “neutral mutations are possible”:
Proteins with cruise control provide new perspective:
“A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.”
Cruise control? This is an absolutely fascinating discovery. The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact, it seems readily apparent to me that highly advanced mathematical information must reside in each individual amino acid used in a protein in order to achieve such control. This fact gives us clear evidence that there is far more functional information residing in proteins than meets the eye. For a sample of the equations that must be dealt with to ‘engineer’ even a simple process control loop, like cruise control, for a single protein, please see the following:
A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal.
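In discrete time such a loop is only a few lines, which may help readers who don’t want to wade through the calculus. This is a generic textbook PID sketch, not anything from the cited paper: the gains and the first-order “vehicle” model are made-up values chosen so the loop settles:

```python
def pid_step(error, state, kp=1.2, ki=0.5, kd=0.1, dt=0.1):
    """One update of a discrete PID controller.
    state = (integral of error, previous error)."""
    integral, prev = state
    integral += error * dt                      # I term: accumulated error
    derivative = (error - prev) / dt            # D term: rate of change of error
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy cruise control: drive a first-order "vehicle" toward a 60-unit speed.
setpoint, speed, state = 60.0, 0.0, (0.0, 0.0)
for _ in range(300):
    u, state = pid_step(setpoint - speed, state)
    speed += 0.1 * (u - 0.5 * speed)            # crude plant: throttle minus drag

print(abs(speed - setpoint) < 1.0)   # True: the loop settles at the setpoint
```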
It is in realizing the staggering level of engineering that must be dealt with to achieve ‘cruise control’ for each individual protein that it becomes apparent that even Axe’s 1 in 10^77 estimate for finding specific functional proteins within sequence space is in all likelihood far too generous. The individual amino acids themselves are clearly embedded with a highly advanced mathematical language in their structures. This, of course, adds an additional severe constraint, on top of the 1 in 10^77 constraint, for finding exactly which of the precise sequences of amino acids in sequence space will perform any given desired function.
Though the authors of the paper tried to put an evolution-friendly spin on the ‘cruise control’ evidence, finding a highly advanced ‘process control loop’ at such a basic molecular level, before natural selection even has a chance to select for any morphological novelty, is very much to be expected as an Intelligent Design/Genetic Entropy feature, and is in fact a very constraining thing on the amount of variation we can expect from any ‘kind’ of species in the first place.
Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors – Doug Axe
Excerpt: Contrary to the prevalent view, then, enzyme function places severe constraints on residue identities at positions showing evolutionary variability, and at exterior non-active-site positions, in particular.
As well, the ‘errors’ that are found in protein sequences turn out to be ‘designed errors’:
Cells Defend Themselves from Viruses, Bacteria With Armor of Protein Errors – Nov. 2009
Excerpt: These “regulated errors” comprise a novel non-genetic mechanism by which cells can rapidly make important proteins more resistant to attack when stressed.
In fact the ribosome, which makes the myriad of different, yet specific, types of proteins found in life, is found to be severely intolerant of random errors in the proteins it makes.
The Ribosome: Perfectionist Protein-maker Trashes Errors
Excerpt: The enzyme machine that translates a cell’s DNA code into the proteins of life is nothing if not an editorial perfectionist…the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is “shocking” and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe – Oct 2009
Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,,
A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses.
Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009
Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
“Blue Gene’s final product, due in four or five years, will be able to “fold” a protein made of 300 amino acids, but that job will take an entire year of full-time computing.” Paul Horn, senior vice president of IBM research, September 21, 2000
Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule:
A Few Hundred Thousand Computers vs. A Single Protein Molecule – video
There are two kinds of neutral mutations.
The first kind is synonymous mutations, where a nucleotide is changed but the amino acid remains the same because of the redundancy of the genetic code. While we know that some of these mutations may still not be neutral, because they affect the structure of mRNA, in general they appear to be clinically silent.
The second kind is a change of one amino acid which does not significantly affect the structure and function of the protein. This can and does happen. For example, we know of hundreds of hemoglobin variants in humans which have no clinical manifestation. People who carry those variants are healthy, and only a sophisticated lab test can reveal the condition. They are certainly not “eliminated” by any form of natural selection.
So, neutral mutations do occur, even in complex molecules like hemoglobin and in complex beings like ourselves. We can discuss how common they are, but they certainly exists.
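The two kinds can be illustrated against the standard genetic code. The snippet below is only an illustrative sketch: it uses a hand-picked handful of codons (a full table has 64 entries), and the function name is my own invention, not anything from the discussion. The GAA → GTA change mirrors the well-known Glu → Val substitution in sickle-cell beta-globin; whether a given non-synonymous change is actually neutral depends on the protein context, as noted above.

```python
# Minimal subset of the standard genetic code (codon -> amino acid).
# Only a few codons are listed, enough to show the redundancy at work.
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",   # glutamate: third position is redundant
    "GTA": "Val", "GTG": "Val",   # valine
    "CTG": "Leu", "TTG": "Leu",   # leucine
}

def classify_point_mutation(codon_before, codon_after):
    """Classify a single-codon change as synonymous (first kind of
    neutrality) or non-synonymous (possibly the second kind)."""
    if CODON_TABLE[codon_before] == CODON_TABLE[codon_after]:
        return "synonymous"
    return "non-synonymous"

# GAA -> GAG: nucleotide changes, amino acid (glutamate) does not.
print(classify_point_mutation("GAA", "GAG"))  # synonymous
# GAA -> GTA: glutamate -> valine, a real amino acid replacement.
print(classify_point_mutation("GAA", "GTA"))  # non-synonymous
```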
gpuccio, aren’t you using something like the ‘random’ genetic drift assumption of neo-Darwinism to maintain the plausibility of ‘neutral mutations’?
The consequences of genetic drift for bacterial genome complexity – Howard Ochman – 2009
Excerpt: The increased availability of sequenced bacterial genomes allows application of an alternative estimator of drift, the genome-wide ratio of replacement to silent substitutions in protein-coding sequences. This ratio, which reflects the action of purifying selection across the entire genome, shows a strong inverse relationship with genome size, indicating that drift promotes genome reduction in bacteria.
As well, gpuccio, Dr. Sanford is quite adamant that there is no such thing as a truly ‘neutral’ mutation in the genome. In fact, he maintains the vast majority of mutations are ‘slightly’ detrimental:
Evolution Vs Genetic Entropy
Here is where we start getting into the analysis of NDET. Sanford discusses the statistical distribution of mutational effects (i.e. the magnitude of good and bad mutations affecting fitness) and their frequency. Sanford points out a number of differences between NDET and reality:
A. NDET posits that most mutations are neutral. However, Sanford argues that there is no such thing as a truly “neutral” mutation. Rather, most mutations are “near-neutral” (whether increasing fitness or decreasing fitness). Even a single point-nucleotide mutation in a minor area of the genome disrupts the genetic code to some degree (no matter how small). This is key for the rest of his book.
gpuccio, in his book Sanford makes a strong case that there are no truly ‘neutral’ mutations to a genome and lists several reasons why they will always be ‘slightly detrimental’. (Also of note: he actually uses the much more conservative numbers for ‘slightly detrimental’ mutations agreed on by evolutionists themselves, to disprove neo-Darwinism from the ‘inside out’, if you will, and to establish Genetic Entropy.)
gpuccio, here is another critique of common descent (CD) from the fossil record:
Seeing Ghosts in the Bushes (Part 2): How Is Common Descent Tested? – Paul Nelson – Feb. 2010
Excerpt: Fig. 6. Multiple possible ad hoc or auxiliary hypotheses are available to explain lack of congruence between the fossil record and cladistic predictions. These may be employed singly or in combination. Common descent (CD) is thus protected from observational challenge.
I can be fine with the concept of “near-neutral” mutations. My point is that, if a mutation has a very small effect, it will not be “visible” to negative selection, and so it will accumulate. In other words, neutral mutations are those mutations which cannot be eliminated by negative selection, and so they are inherited.
All that has nothing to do with genetic drift. Genetic drift is a mechanism by which neutral mutations are supposed to be fixed. I have never been interested in it, because whether it happens or not, the result is random anyway, so it does not change anything from the ID perspective.
I am not saying here that neutral mutations are fixed, just that they are retained and inherited. That’s enough for my point.
Finally, I don’t usually discuss the fossil record, because it’s not my field.
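The threshold idea here — that a mutation whose effect is small enough is effectively invisible to negative selection — can be put in toy numerical form with a standard Wright-Fisher simulation. This sketch is my own illustration, not anything from the exchange; the population size, selection coefficients, trial count, and seed are all arbitrary assumptions. The rule of thumb it illustrates is that selection only reliably “sees” a mutation when |s| is well above 1/(2N).

```python
import random

def wright_fisher_survival(s, N=100, generations=80, trials=500, seed=1):
    """Fraction of trials in which a single new mutant allele (fitness
    effect s, in a diploid population of N) is still present after the
    given number of Wright-Fisher generations. Illustrative values only."""
    rng = random.Random(seed)
    surviving = 0
    for _ in range(trials):
        p = 1.0 / (2 * N)                       # one new mutant copy
        for _ in range(generations):
            p = p * (1 + s) / (1 + p * s)       # deterministic selection step
            copies = sum(rng.random() < p for _ in range(2 * N))  # drift
            p = copies / (2 * N)
            if p == 0.0:                        # lineage lost
                break
        if p > 0.0:
            surviving += 1
    return surviving / trials

# Here 1/(2N) = 0.005. A mutation with s = -0.0001 sits far below that
# threshold and behaves essentially like a strictly neutral one, while
# s = -0.1 is purged almost immediately.
print("s = -0.1    :", wright_fisher_survival(s=-0.1))
print("s = -0.0001 :", wright_fisher_survival(s=-0.0001))
print("s =  0.0    :", wright_fisher_survival(s=0.0))
```

Most new lineages are lost by drift regardless of s; the point is that the survival fraction for the near-neutral case tracks the strictly neutral case, not the strongly deleterious one.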
gpuccio, I find this whole ‘neutral mutation’ thing, and the extreme extent to which you are trying to extend it as proof of CD despite such a shaky premise even in a limited context, to be ‘out of character’ for you, given the resolute firmness with which you usually establish your points of science.
I respect your point of view. But I am always the same, and I use the same principles and the same methodology. I have no reason to do differently.
Hi bornagain77 and gpuccio,
I’d just like to say that I have learned quite a lot of biology from your very interesting exchange of views, and I’m sure I’ll learn a lot more from you both in the future. Thank you.
Thank you for your excellent posts and philosophical contributions. I really appreciate the “multidisciplinarity” here at UD, and the different personal approaches. They are a true richness.
Let’s call the genetic entropy model the Sanford/Gil Dodgen Hang Glider Model. It can slow the descent, or even once in a while catch an updraft, but in the end the direction is always down.
Maybe many mutations are not harmful in and of themselves, but once enough accumulate near each other (in proximity or functionality), the accumulation leads to a major weakness. Analogy: a spider can function with 7, 6, or even 5 legs, but can it function with 4? 3? 2?
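The “always down” dynamic and the threshold in the spider-leg analogy can be sketched with toy arithmetic. This is only an illustration; the mutation count U per generation, the effect size s, and the 0.5 viability threshold are hypothetical values of my choosing, not numbers from Sanford.

```python
import math

def mean_fitness(t, U=1.0, s=1e-4):
    """Mean fitness after t generations if each individual gains U new
    mutations per generation, each multiplying fitness by (1 - s), and
    none can be removed by selection. U and s are hypothetical."""
    return (1.0 - s) ** (U * t)

# Fitness erodes slowly but monotonically:
print(mean_fitness(1_000))    # ~0.905, barely noticeable
print(mean_fitness(10_000))   # ~0.368, roughly 1/e

# The spider-leg analogy suggests a threshold below which the accumulated
# load suddenly matters. Generations to fall below a 0.5 threshold:
t_critical = math.log(0.5) / math.log(1.0 - 1e-4)
print(t_critical)             # ~6931 generations
```

The individual steps are tiny, which is the analogy’s point: each mutation is tolerable on its own, but the product of many near-neutral factors eventually crosses whatever threshold viability requires.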