Who is Libby Anne?
That’s what you’re wondering, isn’t it? I’ll let her introduce herself:
As a brief introduction, I was raised in a large homeschooling family influenced by the Christian Patriarchy and Quiverfull movements. I grew up an evangelical Christian, though with some fundamentalist aspects. I found my beliefs challenged in college and am today an atheist and a feminist. I am in my mid-twenties, married to a wonderful man… and busily raising young children… I am also in graduate school getting my Ph.D. in a humanities field.
(I’ve omitted the names of family members, out of respect for their privacy.) Libby Anne has a Web site called Love, Joy, Feminism. I would recommend that readers take the time to peruse the articles on her Web site, as her posts are thought-provoking, challenging and very forthright, but never uncivil. She has already attracted quite a large readership, and I think we’ll be hearing a lot more from her in years to come. She’s also a butterfly fan – hence the image above (courtesy of Kenneth Dwain Harrelson and Wikipedia).
Libby Anne makes it clear in her writings that she is a humanist and a feminist first, and an atheist second. As she puts it in an article entitled, On being an Atheist AND a Feminist (December 22, 2011):
When I named this blog, I chose to call it “Love, Joy, Feminism” and not “Love, Joy, Atheism” for a reason. I knew that I would be blogging about leaving fundamentalist and evangelical religion as well as blogging about leaving patriarchy, but I felt – and still feel – that my feminism is more important to me than my atheism. I thought about it and realized that I identify more – a great deal more – with a religious feminist than a sexist atheist.
Libby Anne is also the author of a recent blog post that went viral, entitled, How I Lost Faith in the “Pro-Life” Movement, as well as a follow-up post entitled, A Response to Objections on My Pro-Life Movement Post. I’ll be discussing those in a later post. But first of all, I’d like to talk about how she came to revise her fundamental beliefs. What I’m going to argue in this post is that her gradual abandonment of theism in favor of evolutionary naturalism sprang from a failure to advert to five key epistemological principles, relating to how science should be done. I suspect that there are many other people who overlook these principles, so this essay is written for these people, too.
Libby Anne’s path from religious belief to atheism
In a revealing article entitled, Young Earth Creationism and Me, Libby Anne tells her readers that doubts about Young Earth Creationism caused her to question her upbringing:
Realizing that Young Earth Creationism (YEC) was wrong was the first step on my journey out of my parents’ beliefs. My parents made YEC the center of their beliefs, and taught me that everything else rests upon it. I studied YEC in detail beginning when I was about twelve, and I was convinced of its truth and of the falsehood of evolution. When I came to college, however, I engaged in debate on this topic with other students I knew. I was convinced I could convert them into Young Earth Creationists, because I was convinced my position was right. But I found over time that my arguments were either flawed or flat out wrong. After months and months of this, I finally admitted that I had been wrong….
For every supposed “hole” in the theory of evolution put forward by Young Earth Creationists (e.g., the rock layers in the Grand Canyon are out of order, the flagellum is irreducibly complex, etc.), scientists have an answer (which of course is the part creationist leaders don’t tell their followers).
…The reality is that the “missing links” aren’t actually “missing” at all.
Finally, there is the whole issue of vestigial organs. Did you know that whales have hip bones? Whales and dolphins were originally land animals and then moved back into the water…
…[M]any animals actually show evidence of very bad design.
…I learned that every creationist argument against evolution is baseless. I found that the theory of evolution actually makes perfect sense. The truth is that the evidence for evolution is overwhelming, and I encourage you to explore it for yourselves.
This is an Intelligent Design Web site, and I have no desire to change Libby’s mind about the Earth being very old or about common descent. I happen to believe in both of those things myself, but for me, these issues are of secondary importance. The Big Question that we all want answered is: was life on Earth designed (either in whole or in part) by an Intelligent Agent? Many famous thinkers have tried to address this question through philosophical arguments, and that’s fine, but in my opinion, their case would be a lot stronger if a convincing argument could be made for the existence of an Intelligent Designer on scientific grounds, as most people are unlikely to be persuaded by philosophical proofs alone. Hence my interest in Intelligent Design.
In a blog post entitled, What kind of atheist are you? (July 9, 2012), Libby Anne reveals her own reasons why she considers herself an atheist:
The truth is that while I am a Humanist in how I act on and view my atheism, I am an atheist not because of how religion has served as a source of oppression but rather because I see no evidence of or reason to believe in a god. Thus I am, technically speaking, a Scientific Atheist when it comes to my reasons for being an atheist.
P. Z. Myers classifies thoughtful atheists according to a fourfold taxonomy, which Libby Anne endorses in her blog post above. In P. Z. Myers’ system, a Scientific Atheist is someone who, when confronted with claims that God exists, is likely to demand: “Show me the peer-reviewed scientific evidence.”
I will therefore take it as given that Libby Anne’s atheism rests in large part on her conviction that a naturalistic account can be given of the origin of life and the cosmos, without any recourse to the creative activity of an Intelligent Agent. By Ockham’s razor, then, there is no need to posit the existence of a Deity. It thus follows that if I can weaken Libby Anne’s belief in evolutionary naturalism, I will have opened her mind to the possibility of there being a God.
SECTION A: THREE IMPORTANT FACTS WHICH SHOULD DISCOMFORT ATHEISTS
Before we go on, I’d like to draw Libby Anne’s attention to three very important facts, which she may or may not be already acquainted with. I don’t expect these facts to convert Libby Anne back to theism. But at the very least, I hope they persuade her that arguments for God’s existence can be mounted which are scientifically and mathematically rigorous. While these arguments don’t establish the existence of the God of traditional theism, let alone the God of any revealed religion, they do point to an Intelligent Designer Who transcends the cosmos. Many people might call that God.
Fact #1: Not only the universe, but also the multiverse had a beginning in time
An artistic depiction of the multiverse. Image courtesy of Silver Spoon and Wikipedia.
The first big fact is that not only did our universe have a beginning, but it is now reasonably certain that the entire multiverse had a beginning as well. Leading cosmologists such as Alexander Vilenkin admit this fact, as I pointed out earlier this year, in my blog post, Vilenkin’s verdict: “All the evidence we have says that the universe had a beginning”. If the multiverse had a beginning (or a temporal boundary, if you prefer to call it that), then it has an arbitrary property. And if that’s the case, then the multiverse can no longer be treated as self-explanatory. It is therefore reasonable to ask what might explain its existence. That doesn’t prove God made it, of course. But it does suggest that something did, and that whatever that “something” is, it’s not bound by any laws of physics – for if it were, it would be part of the multiverse, too. What’s more, this “something” must either be everlasting or outside time altogether.
Fact #2: Not only the universe, but also the multiverse has to be fine-tuned
The second important fact is that not only our universe, but also the multiverse itself has to be fine-tuned – a fact which points to the existence of a Fine-Tuner beyond the multiverse. Before I go on, I’d like to comment on claims by Professor Victor Stenger, an American particle physicist and a noted atheist, who is also the author of several books, including his recent best-seller, The Fallacy of Fine-Tuning: How the Universe is Not Designed for Humanity (Prometheus Books, 2011). Stenger’s latest book has been received with great acclaim by atheists: “Stenger has demolished the fine-tuning proponents,” writes one enthusiastic Amazon reviewer, adding that the book tells us “how science is able to demonstrate the non-existence of god.”
Unfortunately for Stenger, Dr. Luke A. Barnes, a post-doctoral researcher at the Institute for Astronomy, ETH Zurich, Switzerland, has written a devastating critique of Stenger’s book. In his paper, Dr. Barnes takes care to avoid drawing any metaphysical conclusions from the fact of fine-tuning. His main concern is simply to establish that the fine-tuning of the universe is real, contrary to the claims of Professor Stenger, who asserts that all of the alleged examples of fine-tuning in our universe can be explained without the need for a multiverse.
Dr. Barnes’ arXiv paper, The Fine-Tuning of the Universe for Intelligent Life (Version 1, December 21, 2011), is available online. Readers who dislike technical jargon can find a non-technical overview of key excerpts from Barnes’ paper in my blog post, Is fine-tuning a fallacy? (January 5, 2012). I would like to add that Dr. Barnes has also written an incisive online critique of Michael Ikeda and Bill Jefferys’ widely cited paper, The Anthropic Principle Does Not Support Supernaturalism, which Professor Stenger cites in his book in order to show that even if some observation were to establish that the universe is fine-tuned, it could only count as evidence against God’s existence. Part 1 of Dr. Barnes’ reply is here; Part 2 is here.
The fine-tuning of our universe is real, then. However, some scientists think that the theological implications of fine-tuning can be avoided by positing a multiverse, which generates a vast number of universes, of which only a few (such as our own) are capable of supporting life. Most people have heard of the multiverse by now. What they haven’t heard, however, is that even a multiverse would still need to be fine-tuned.
Dr. Robin Collins explains why the multiverse needs to be fine-tuned in an influential essay entitled, The Teleological Argument: An Exploration of the Fine-Tuning of the Universe (in The Blackwell Companion to Natural Theology, edited by William Lane Craig and J. P. Moreland, 2009, Blackwell Publishing Ltd.). One reason, which I discussed in a blog post entitled, Why a multiverse would still need to be fine-tuned, in order to make baby universes, is that the laws of the multiverse would need to be just right – i.e. fine-tuned – in order for it to even occasionally produce universes whose constants and initial conditions permit life to exist on some planets, later on.
A further problem with the multiverse hypothesis is that it is utterly unable to account for the unexpected mathematical beauty of the laws of nature in our universe. Several years ago, Dr. Robin Collins elucidated the concept of “mathematical beauty” in very accessible laypeople’s language, in section 6 of a lecture he gave at Stanford University entitled, Universe or Multiverse? A Theistic Perspective. In a nutshell, simplicity underlying variety is the defining feature of beauty or elegance. In his best-selling book, Many Worlds in One: The Search for Other Universes (Hill and Wang, 2006), Professor Alex Vilenkin provides a similar definition of mathematical beauty:
…Euler’s formula shows a rather surprising connection between three seemingly unrelated numbers – the number e, which is related to “natural” logarithms; the “imaginary” number i – the square root of minus 1; and the number pi – the ratio of the circumference of a circle to its diameter. We can call this property “depth.” Beautiful mathematics combines simplicity with depth. (2006, pp. 201-202.)
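The “surprising connection” Vilenkin describes is Euler’s identity, e^(i*pi) + 1 = 0, and it is easy to verify numerically. Here is a minimal Python check (my own illustration, not part of Vilenkin’s text):

```python
# Numerically verify Euler's identity: e^(i*pi) + 1 = 0,
# which ties together e, i and pi as Vilenkin describes.
import cmath

value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))  # effectively zero, up to floating-point rounding
```

The result differs from zero only by floating-point rounding error (on the order of 10^-16), which is as close to an exact identity as finite-precision arithmetic can show.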
Physicists have long recognized that the phenomena of our universe can be understood in terms of just a few simple laws. What is more amazing, however, is that these simple laws can in turn be organized under a handful of higher-level principles, which form a very elegant mathematical framework. Nobel physicist Eugene Wigner wrote about this very striking fact in a 1960 essay entitled, The Unreasonable Effectiveness of Mathematics in the Natural Sciences (Communications in Pure and Applied Mathematics, vol. 13, No. 1, February 1960), and subsequent attempts by atheists to argue it away have failed dismally.
The relevance of all this to the multiverse is that even if we leave aside those possible universes that cannot support life anywhere and confine ourselves to those “life-friendly” universes that are capable of supporting life somewhere, there is absolutely no reason to expect the physics underlying a life-permitting universe to be mathematically elegant as well. The physics underlying a life-friendly universe could be mathematically messy, or even utterly intractable. On the face of it, an ugly universe is much more likely than an elegant one: after all, there are many more ways for a room to be messy than for it to be neatly organized. Consequently the beauty of the laws of Nature in our universe comes as an unexpected surprise. I’ve blogged about Collins’ argument in my post, Beauty and the Multiverse. In a subsequent post, entitled, Why the mathematical beauty we find in the cosmos is an objective “fact” which points to a Designer, I refuted the atheistic retort that the beauty of the laws of Nature is merely in the eye of the beholder, adducing the testimony of scientists and mathematicians who were themselves atheists, in order to show that: (i) mathematical beauty is an objective property, as shown by the fact that accredited experts (mathematicians and physicists) are able to arrive at an agreement in their aesthetic judgments about which theories are beautiful; and (ii) the cosmos instantiates this kind of beauty, and can therefore be called objectively beautiful.
The only serious attempt to lessen the surprise factor of a mathematically elegant cosmos has been made by Professor Max Tegmark, of the Massachusetts Institute of Technology, who proposes that there is a separate universe corresponding to each and every mathematical structure and that all these universes really exist “out there,” even if we are not aware of them. We just happen to live inside a mathematical structure which is rich enough to support intelligent life. However, cosmologist Alex Vilenkin points out a serious flaw in Tegmark’s proposal in his book, Many Worlds in One: The Search for Other Universes (Hill and Wang, 2006):
If successful, this line of reasoning [by Tegmark – VJT] would drive the Creator entirely out of the picture. Inflation relieved him of the job of setting up the initial conditions of the big bang, quantum cosmology unburdened him of the task of creating space and time and starting up inflation, and now he is being evicted from his last refuge – the choice of the fundamental theory of nature.
Tegmark’s proposal, however, faces a formidable problem. The number of mathematical structures increases with increasing complexity, suggesting that “typical” structures should be horrendously large and cumbersome. This seems to be in conflict with the simplicity and beauty of the theories describing our world. It thus appears that the Creator’s job security is in no immediate danger. (2006, p. 203.)
So where does that leave us? We now have a multiverse which is radically contingent (as shown by the fact that it had a beginning) and which therefore seems to require an explanation beyond itself. What’s more, the multiverse itself appears to have been fine-tuned to produce a universe like ours – one capable of supporting life, and whose underlying physics is unexpectedly elegant and beautiful. There would be absolutely no reason to expect this happy confluence of life-friendliness and mathematical beauty if the multiverse that generated it had not been fine-tuned. Of course, the theist has a ready explanation for these striking facts: our multiverse was produced by an Intelligent Agent, Who made a choice to produce the kind of world that could not only support life, but also support intelligent life-forms who could appreciate its underlying beauty. That’s the conclusion argued for by astronomer Guillermo Gonzalez and philosopher Jay Richards, who contend in their book The Privileged Planet (Regnery Publishing, 2004) that conditions on Earth, especially those that make human life possible, have also been optimized for scientific investigation. In short: “the correlation between habitability and measurability” is a remarkable coincidence, which constitutes “a signal revealing a universe so skillfully created for life and discovery that it seems to whisper of an extraterrestrial intelligence immeasurably more vast, more ancient, and more magnificent than anything we’ve been willing to expect or imagine.”
Professor Paul Herrick has recently written an excellent paper, Job Opening: Creator of the Universe – A Reply to Keith Parsons (2009), which argues on philosophical grounds that we should not take the multiverse to be a brute fact, and that a theistic explanation of the cosmos as the product of an agent is the best kind of explanation that can be given. Another philosophical paper I’d like to recommend, which is notable for its argumentative rigor, is A New Look at the Cosmological Argument (American Philosophical Quarterly 34 (2):193 – 211, 1997) by Dr. Robert Koons.
So far, I’ve argued that an Intelligent Agent Who exists outside the cosmos is the best explanation for its radical contingency (as shown by the fact that it had a beginning) and its unexpected mathematical beauty (which is revealed in its underlying physics). But that doesn’t nail the argument for an Intelligent Designer. To do that, we’d need some empirical effect that only an intelligent being was capable of generating, within the time available, and we’d also need a rigorous mathematical way of demonstrating that unguided natural processes were incapable of producing the effect, within that time-span. Now that would be the smoking gun of Intelligent Design.
Fact #3: Unguided natural processes are incapable of generating proteins, even over billions of years
This is what a real protein looks like. The enzyme hexokinase is a protein found even in simple bacteria. Here, it is shown as a conventional ball-and-stick molecular model. For the purposes of comparison, the image also shows molecular models of ATP (an energy carrier found in the cells of all known organisms) and glucose (the simplest kind of sugar) in the top right-hand corner. Image courtesy of Tim Vickers and Wikipedia.
Fortunately, there exists such an effect: the proteins we find in living things. They’re my third and final “big fact.” Proteins, which are made up of amino acids, are fundamental components of all living cells and include many substances, such as enzymes, hormones, and antibodies, that are necessary for the proper functioning of an organism. They’re involved in practically all biological processes. To fulfill their tasks, proteins need to be folded into a complicated three-dimensional structure. Proteins can tolerate slight changes in their amino acid sequences, but a single change of the wrong kind can render them incapable of folding up, and hence, totally incapable of doing any kind of useful work within the cell. That’s why not every amino-acid sequence represents a protein: only one that can fold up properly and perform a useful function within the cell can be called a protein.
Now let’s consider a protein made up of 150 amino acids – a fairly modest length. If we compare the number of 150-amino-acid sequences that correspond to some sort of functional protein with the total number of possible 150-amino-acid sequences, we find that only a tiny proportion of possible amino-acid sequences are capable of performing a function of any kind. The vast majority of amino-acid sequences are good for nothing. So, what proportion of amino-acid sequences are capable of doing something useful? An astronomically low proportion: 1 in 10 to the power of 74 (10^74), according to work done by Dr. Douglas Axe. When we add the requirement that a protein has to be made up of amino acids that are either all left-handed or all right-handed, and when we finally add the requirement that the amino acids have to be held together by peptide bonds, we find that only 1 in 10 to the power of 164 (10^164) amino-acid sequences of that length are suitable proteins – that is, 1 chance in a number written as 1 followed by 164 zeroes. The Earth has been around for about 4,540,000,000 years. Since the number of amino-acid sequences that could have formed during this time is far, far less than 10^164, scientists concluded back in the mid-1960s that there was nowhere near enough time for a protein to form on the early Earth as a result of chance processes alone.
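The scale of these numbers can be sketched in a few lines of Python. To be clear about what is and isn’t being computed: the fractions (1 in 10^74, 1 in 10^164) are Axe’s published estimates, quoted from the text above, not derived here; and the trial budget of 10^40 random sequence trials is a deliberately generous hypothetical figure of my own, used only to illustrate the size of the shortfall:

```python
import math

# Raw combinatorics for a 150-residue chain built from the 20
# proteinogenic amino acids.
total_sequences = 20 ** 150
print(f"Possible 150-residue sequences: about 10^{math.floor(math.log10(total_sequences))}")

# Axe's estimates, quoted from the text above (not derived here).
functional_fraction = 10.0 ** -74   # fraction yielding a functional fold
combined_fraction = 10.0 ** -164    # adding homochirality and peptide bonding

# Age of the Earth, in seconds, for scale.
earth_age_seconds = 4.54e9 * 365.25 * 24 * 3600
print(f"Age of the Earth: about {earth_age_seconds:.2e} seconds")

# Hypothetical, deliberately generous budget: 10^40 random sequence
# trials over Earth's history (an assumption made purely for illustration).
trials = 10.0 ** 40
expected_hits = trials * combined_fraction
print(f"Expected functional, homochiral, peptide-bonded hits: {expected_hits:.0e}")
```

The first print shows that there are roughly 10^195 possible 150-residue sequences, so even a trial count vastly exceeding the number of seconds in Earth’s history leaves the expected number of hits fantastically far below one.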
Some scientists hypothesized that there were hidden laws of chemical affinity, which would have favored the evolution of proteins on the primordial Earth, given enough time. This is known as the hypothesis of biochemical predestination, as it supplements chance with necessity. Unfortunately, the facts tell a different story. There’s very strong experimental evidence that for complex molecules such as DNA, RNA and proteins, there are no stringent chemical constraints on chaining that would be sufficient to account for the information contained in these chains. This was already investigated by Bradley et al. in the mid-1980s, and it’s a major reason why Professor Dean Kenyon (Professor Emeritus of Biology at San Francisco State University, and co-author of a text called Biochemical Predestination) chose to take the opportunity of publicly recanting biochemical predestination in the foreword he wrote for the first technical work on Intelligent Design: a book titled, The Mystery of Life’s Origin, by Charles B. Thaxton, Walter L. Bradley and Roger L. Olsen.
Skeptical critics of Dr. Axe’s work are fond of claiming that plant biologist Art Hunt has demonstrated that Douglas Axe’s 2004 paper, Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds (Journal of Molecular Biology, Volume 341, Issue 5, 27 August 2004, Pages 1295–1315) doesn’t really support Intelligent Design or challenge Darwinism, so it’s a mistake to use it for those purposes. However, Dr. Douglas Axe himself has rebutted this claim in his article, Correcting Four Misconceptions about my 2004 Article in JMB (May 4, 2011).
Other critics have queried Dr. Axe’s numbers, so it’s worth quoting from a recent article, in which Axe argued that we should be looking well outside the Darwinian framework for an adequate explanation of protein fold origins. The following excerpt is taken from Douglas Axe’s article, The Case Against a Darwinian Origin of Protein Folds (BIO-Complexity 2010(1):1-12. doi:10.5048/BIO-C.2010.1):
Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a miniscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem – the sampling problem – was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.
Excerpt from the paper:
Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin. (p. 11)
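The “three or four new domain folds being needed, on average, for every new metabolic pathway” in this excerpt is simply the ratio of the two counts Axe cites, as a quick check shows:

```python
# Recovering the ratio quoted in Axe's excerpt above.
avg_domain_structures = 991  # projected domain structures per bacterial species
ecoli_pathways = 263         # metabolic pathways in E. coli, per the excerpt

folds_per_pathway = avg_domain_structures / ecoli_pathways
print(round(folds_per_pathway, 1))  # about 3.8, i.e. "three or four"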
Molecular biologist James Shapiro has attempted to respond to Dr. Axe’s argument in his post, A Response to Ann Gauger’s and Douglas Axe’s Comments on Evolution News and Views. Dr. Axe has replied in a subsequent post entitled, On Protein Origins, Getting to the Root of Our Disagreement with James Shapiro. Professor Larry Moran has also written a trenchant critique of Axe’s argument in his blog post, Douglas Axe on Protein Evolution and Magic Numbers, to which Dr. Axe has responded in a recent post entitled, Are We Reaching a Consensus that Evolution is Past its Prime?. I would like to invite Libby Anne to take a look at these posts and form her own judgment.
RNA to the rescue?
A game of Scrabble. Professor Robert Shapiro (1935-2011), professor emeritus of chemistry at New York University, declared: “[S]uppose you took Scrabble sets, or any word game sets, blocks with letters, containing every language on Earth, and you heap them together and you then took a scoop and you scooped into that heap, and you flung it out on the lawn there, and the letters fell into a line which contained the words ‘To be or not to be, that is the question,’ that is roughly the odds of an RNA molecule … appearing on the Earth.” Image courtesy of Wikipedia.
Indeed, the odds against proteins forming by unguided natural processes are so formidable that many scientists now believe that another molecule – RNA – formed first, and that proteins were formed from RNA. But the same problem arises for RNA as for proteins: the vast majority of possible sequences are non-functional, and only an astronomically tiny proportion work. In a discussion hosted by Edge in 2008, entitled, Life! What a Concept, with scientists Freeman Dyson, Craig Venter, George Church, Dimitar Sasselov and Seth Lloyd, the late Professor Robert Shapiro (1935-2011) explained why he found the RNA world hypothesis incredible:
…[S]uppose you took Scrabble sets, or any word game sets, blocks with letters, containing every language on Earth, and you heap them together and you then took a scoop and you scooped into that heap, and you flung it out on the lawn there, and the letters fell into a line which contained the words “To be or not to be, that is the question,” that is roughly the odds of an RNA molecule, given no feedback – and there would be no feedback, because it wouldn’t be functional until it attained a certain length and could copy itself – appearing on the Earth.
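Shapiro’s image can be given a rough numerical gloss. “To be or not to be, that is the question” contains 30 letters, and drawing that exact sequence at random from a 26-letter alphabet gives odds of about 1 in 10^42 – my own back-of-envelope figure, since Shapiro’s quote doesn’t put a number on it (and his “every language on Earth” heap would make the odds longer still):

```python
import math

phrase = "To be or not to be, that is the question"
letters = [c for c in phrase.lower() if c.isalpha()]

# Odds of drawing exactly this letter sequence from a 26-letter
# alphabet, ignoring spaces, punctuation and case.
exponent = len(letters) * math.log10(26)
print(len(letters), round(exponent))  # 30 letters; odds of roughly 1 in 10^42
```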
What the foregoing three facts do and don’t establish
The reader might ask: “How does all this relate to arguments for God’s existence?” It’s important, because our world contains molecules essential to life – proteins and RNA molecules – but at the same time, it can be demonstrated mathematically that all the unguided natural processes we know of are utterly unable to generate these molecules (barring a statistical miracle) in the time available. The only process which is known to be capable of generating these molecules in the time available is intelligent agency. Here, at last, we have the smoking gun: an effect which points unambiguously to an intelligent cause, and can be shown to do so using the tools of science and mathematics, rather than philosophy. The only way to evade the full force of this argument is to take refuge in unknown forces of Nature that might have produced life – which is really an appeal to ignorance, and a lame one at that.
It might be objected that the Intelligent Being Who produced the first proteins on Earth need not be the same Being as the entity responsible for the fine-tuning of the multiverse. That’s true, but if the Intelligent protein-producer were a being living inside the multiverse, then it would also have a law-governed physical structure, which means that its own existence would require an explanation. Indeed, it would have to be at least as information-rich (and hence at least as unlikely to arise by unguided processes) as the proteins that it generated, for reasons that I’ll explain below, relating to the Law of Conservation of Information.
The “Who made the Designer?” objection loses force, however, once we get to a Being outside the multiverse altogether. Since such a Being would not be subject to physical laws of any kind, it would be meaningless to apply the terms “simple” and “complex” to such a Being. Hence Professor Richard Dawkins’ Ultimate 747 gambit never gets off the ground.
Finally, Humean philosophers might object that to apply the term “cause” or “explanation” to entities outside the multiverse is an illegitimate extension of language, as we have no experience of such extra-cosmic entities. However, this objection would prove too much, since it would rule out talk of a multiverse generating our universe by the same token, as no-one is capable of observing events outside our universe. Likewise, the argument that the notions of “cause” or “explanation” presuppose the existence of laws of Nature, and that the notion of a First Cause operating outside any framework of laws is therefore nonsensical, is refuted by the fact that the concepts of “cause” and “explanation” contain no implicit or explicit reference to laws, and indeed predate the very notion of a scientific law which can be described in mathematical terminology. After all, people have posited causes for phenomena for thousands of years, but laws that can be expressed in mathematical equations are no more than a few hundred years old.
The scientific evidence remains inconclusive, but it appears to be pointing more and more towards the existence of an Intelligent Being Who is transcendent (beyond the cosmos) and not bound by any physical laws, and Who designed the universe to be compatible with life before administering the master stroke: the creation of life itself. We can be sure that life itself must have been designed, for if proteins could only have been generated in our world within the time available by an Intelligent Agent, then the first living cell, which would have been far more complex and which would have required a minimum of 250 proteins to function effectively, must also have been produced by such an Agent.
Of course, we still don’t know if the Intelligent Being which produced proteins is benevolent, malevolent or indifferent, although I would argue that the “big picture” facts – the fine-tuning and mathematical beauty of the multiverse, as well as the subsequent emergence of sentient and intelligent life-forms on Earth – heavily favor the first option, Ichneumon wasps notwithstanding. However, many people would be content to call such a Being “God.” And who could blame them?
(I would like to add in passing that neither Ichneumon wasps nor their victims are sentient, for reasons explained by Dr. James Rose in his widely cited article, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1–38, 2002). Hence Darwin’s distress on their behalf was unwarranted. Indeed, relatively few animals are sentient – probably only mammals and birds, and possibly cephalopods, according to a recent article by David B. Edelman, Bernard J. Baars and Anil K. Seth, entitled, Identifying hallmarks of consciousness in non-mammalian species (Consciousness and Cognition 14 (2005) 169–187). That’s about 14,000 species, out of a total of at least 7.7 million species of animals, or about 0.2 per cent. It is not currently known how many of these species possess rudimentary self-awareness. For those animals that do possess it, John Wesley’s animal theodicy might well apply.)
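The "0.2 per cent" figure above is simple arithmetic on the estimates just quoted (roughly 14,000 sentient species out of at least 7.7 million animal species); a quick sketch, using the article's own figures rather than any independent data:

```python
# Rough arithmetic behind the "about 0.2 per cent" figure quoted above.
# Both counts are the article's own estimates, not precise census data.
sentient_species = 14_000          # mammals, birds, possibly cephalopods
total_animal_species = 7_700_000   # lower-bound estimate for animal species

fraction = sentient_species / total_animal_species
print(f"{fraction:.2%}")  # → 0.18%, which rounds to roughly 0.2 per cent
```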
An Objection: Wouldn’t the Divine creation of life undercut the argument from fine-tuning?
Before I wrap up my three-point case for theism and critique Libby Anne’s atheism, I’d just like to address a common objection to the biological version of Intelligent Design. One often hears the argument: “Wouldn’t it be more elegant of God to design a universe in which the laws of Nature generate life automatically?” The creation of life sounds too Deus ex machina, these critics contend. The implication is that a God Who actually needed to step in and create proteins (and the first life-form) wouldn’t be very intelligent, after all. So if ID proponents could actually prove that life on Earth must have been created by an Intelligent Agent, they would actually be undercutting the cosmological fine-tuning argument, and thereby sawing off the theological branch most of them are sitting on. (I say “most” because the Intelligent Design movement includes people of a variety of religious persuasions, including agnostics.)
The first point I’d like to make in reply is that making a life-compatible universe is not the same as making a life-generating universe. Fine-tuning pertains to the former, not the latter. Indeed, my own view (with which some ID proponents would disagree) is that accomplishing the latter feat is impossible, even for a Deity, if the life we’re talking about is life based on DNA and proteins. Nobody, not even God, can generate this kind of life while at the same time keeping the laws of the cosmos mathematically elegant.
This brings me to my second point, which is that the information used to build living things is both highly specified and complex. You can’t write a simple, elegant program which will generate that kind of information. Here’s why.
To avoid accusations of bias, I’ll quote from two authors who have no affiliations with the Intelligent Design movement. Let me begin with a quote from origin of life researcher Professor Leslie Orgel, who coined the term “specified complexity” in order to denote what distinguishes living things from non-living things:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. (The Origins of Life, Prentice Hall, 1974, p. 189.)
The term “specified complexity” was later used by physicist Paul Davies in a similar fashion:
Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity. (The Fifth Miracle: The Search for the Origin and Meaning of Life, Simon and Schuster, 1999, p. 112.)
Davies himself explicitly declares that laws cannot generate life, because laws are “information-poor,” whereas life is “information-rich”:
Can [specified complexity] be the guaranteed product of a deterministic, mechanical, law-like process, like a primordial soup left to the mercy of familiar laws of physics and chemistry? No, it couldn’t. No known law of nature could achieve this. (1999, pp. 119-120.)
For his part, Davies speculates that life might have been generated by laws that allow systems to self-organize, along the lines of Stuart Kauffman’s complexity theory. But he appears to refute his own conjecture when he observes:
“Life is actually not an example of self-organization. Life is in fact specified — i.e., genetically directed — organization.”
My third point in reply to the theological argument against God having created the first life is that Intelligent Design theory doesn’t rule out the possibility that the initial conditions of the universe might have been fine-tuned to such an extraordinary degree that the emergence of life, billions of years later, was a guaranteed result. That’s called front-loading, and some Intelligent Design proponents still advocate it. I used to do so, myself, although I now no longer think it would work. But here’s the point: highly specified output requires highly specified input. As Professor William Dembski puts it in his recent paper, Conservation of Information Made Simple (Evolution News and Views, 28 August 2012):
Conservation of information … says that any information we see coming out of the evolutionary process was already there in this fitness landscape or in some other aspect of the environment or was inserted by an intervening intelligence…
Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks.
In other words, the tightly specified initial conditions of the universe that (presumably) led to the emergence of life were no less specified than the life-forms that they helped generate. Had they been even slightly different, life would never have emerged. So in order to make a universe that automatically generates life, you would need not only fine-tuning of the laws of that universe, but also super-fine tuning of the initial conditions. In making this point, Dembski appeals to the Law of the Conservation of Information which has been formulated in a mathematically rigorous fashion – see the list of articles here.
In short: if you want front-loading, you can have it, but you can’t have it on the cheap. Not even God can make a universe in which natural processes can generate a large amount of specified information from just a little.
My fourth and final point in response to the argument that God, if He were really intelligent, could have set up the initial conditions of the universe so that it would generate life without the need for subsequent intervention, is that front-loading probably wouldn’t work, anyway. A few years ago, physicist Robert Sheldon wrote a thought-provoking article entitled, The Front-Loading Fiction (July 1, 2009), in which he critiqued the assumptions underlying “front-loading.”
In the first place, the clockwork universe of Laplacean determinism (the idea that you can control the outcomes you get, by controlling the laws and the initial conditions) won’t work:
First quantum mechanics, and then chaos-theory has basically destroyed it, since no amount of precision can control the outcome far in the future. (The exponential nature of the precision required to predetermine the outcome exceeds the information storage of the medium.) (Emphases mine – VJT.)
In the second place, what Dr. Sheldon calls “Turing-determinism” – the modern notion that God could use an algorithm or program to design all the forms we observe in Nature – fares no better:
Turing-determinism is incapable of describing biological evolution, for at least three reasons: Turing’s proof of the indeterminancy of feedback; the inability to keep data and code separate as required for Turing-determinancy; and the inexplicable existence of biological fractals within a Turing-determined system.
Specifically, Dr. Sheldon argues that the only kind of universe that could be pre-programmed to produce specific results without fail and without the need for further input would be a very boring, sterile one, without any kind of feedback, real-world contingency or fractals. However, such a universe would necessarily be devoid of any kind of organic life. Dr. Sheldon proposes that God is indeed a “God of the gaps” – an incessantly active “hands-on” Deity Who continually maintains the universe at every possible scale of time and space, in order that it can support life. Such a role, far from diminishing God, actually enhances His Agency.
SECTION B: IS LIBBY ANNE’S ACCEPTANCE OF EVOLUTIONARY NATURALISM GROUNDED IN FAULTY EPISTEMOLOGY?
What I’d like to suggest in this post is that Libby Anne’s belief in evolutionary naturalism springs from a faulty epistemology on her part, which leads her (and many other people with similar tendencies) to gullibly accept scientific “answers” that should in fact be treated with skepticism. Specifically, Libby Anne fails to heed the following five maxims:
1. Scientific plausibility isn’t the same thing as scientific possibility. In order for a proposed explanation of an empirical phenomenon to be regarded as scientifically possible, it must not only appeal to plausible processes, but also show that those processes are either sufficient to generate the phenomenon, or at least reasonably likely to do so, within the time available. The term “reasonably likely” means that the probability of success must exceed some minimum threshold.
2. Scientific inferences are abductive. When trying to account for an event, scientists look for the best possible explanation of that event.
3. Science is an open endeavor. There should be no restrictions at the outset on what kinds of explanations scientists are allowed to posit, when formulating hypotheses about the world.
4. Speculative proposals require mathematical models to back them up. No process P should be judged capable of generating an empirical phenomenon E without either concrete evidence of P actually producing E, or (at least) a mathematical model showing that P is reasonably likely to generate E under ideal or simplified conditions.
5. Empirical evidence comes first. Psychological speculation isn’t evidence. Arguments based on empirical evidence should always trump hypothetical arguments based on psychological reasoning, when you’re doing science.
Case study: the bacterial flagellum
A Gram-negative bacterial flagellum. Image courtesy of LadyofHats and Wikipedia.
Which brings us back to Libby Anne’s remarks on evolution. Let’s start with the bacterial flagellum. Libby Anne tells us that “scientists have an answer” to the claim that the flagellum is irreducibly complex. And indeed they do. Mark Pallen and Nicholas Matzke put forward a plausible-sounding scenario as to how the bacterial flagellum might have evolved in their article, From The Origin of Species to the origin of bacterial flagella (Nature Reviews Microbiology, AOP, published online 5 September 2006; doi:10.1038/nrmicro1493):
…[D]esigning an evolutionary model to account for the origin of the ancestral flagellum requires no great conceptual leap. Instead, one can envisage the ur-flagellum arising from mergers between several modular subsystems: a secretion system built from proteins accreted around an ancient ATPase, a filament built from variants of two initial proteins, a motor built from an ion channel and a chemotaxis apparatus built from pre-existing regulatory domains (FIG. 1). As we have seen, each of these function in a modular fashion and share ancestry with simpler systems — thereby answering the question ‘what use is half a flagellum?’ Furthermore, it is not hard to envisage how an ancestral crude and inefficient flagellum, if it conferred any motility at all, could function as the starting material for natural selection to fashion today’s slicker flagellar apparatus.
However, one could still question how, from such bricolage, natural selection could lock on to an evolutionary trajectory leading to an organelle of motility in the first place, when none of the components alone confer the organism with a selective advantage relevant to motility. The key missing concept here is that of exaptation, in which the function currently performed by a biological system is different from the function performed while the adaptation evolved under earlier pressures of natural selection.
Ian Musgrave even suggests a sequence of steps leading to the bacterial flagellum, in chapter 5 of Why Intelligent Design Fails – A Scientific Critique of the New Creationism (edited by Matt Young and Taner Edis, Rutgers University Press, 2006). In his essay, “Evolution of the Bacterial Flagellum,” Musgrave outlines his proposed model on page 82:
Here is a possible scenario for the evolution of the eubacterial flagellum: a secretory system arose first, based around the SMC rod and pore-forming complex, which was the common ancestor of the type-III secretory system and the flagellar system. Association of an ion pump (which later became the motor protein) to this structure improved secretion. Even today, the motor proteins, part of a family of secretion-driving proteins, can freely dissociate and reassociate with the flagellar structure. The rod- and pore-forming complex may even have rotated at this stage, as it does in some gliding-motility systems. The protoflagellar filament arose next as part of the protein-secretion structure (compare the Pseudomonas pilus, the Salmonella filamentous appendages, and the E. coli filamentous structures). Gliding-twitching motility arose at this stage or later and was then refined into swimming motility. Regulation and switching can be added later, because there are modern eubacteria that lack these attributes but function well in their environment (Shah and Sockett 1995). At every stage there is a benefit to the changes in the structure.
That’s an answer, and to a layperson it looks pretty convincing. But is it a good one? Does it dig deep enough?
Maxim #1: Scientific plausibility isn’t the same thing as scientific possibility
No, it doesn’t. In his Evolution News and Views post, Michael Behe Hasn’t Been Refuted on the Flagellum, Jonathan McLatchie, a post-graduate student working in the field of evolutionary biology, exposes the inadequacies of the scenarios outlined above. I’ll confine myself to quoting a few brief excerpts, to convey the gist of the argument:
First and foremost, it trivializes the sheer complexity and sophistication of the flagellar system — both its assembly apparatus, and its state-of-the-art design motif…
The synthesis of the bacterial flagellum requires the orchestrated expression of more than 60 gene products. Its biosynthesis within the cell is orchestrated by genes which are organised into a tightly ordered cascade in which expression of one gene at a given level requires the prior expression of another gene at a higher level…
[P]romoters are akin to a kind of molecular toggle switch which can initiate gene expression when recognised by RNA polymerase and an associated specialised protein called a “sigma factor”. These three classes of promoters are uninspirationally dubbed “Class I,” “Class II,” and “Class III.”…Those genes which are involved in the synthesis of the filament are controlled by the Class III promoters.
…The sigma factor sigma-28 is required to activate the Class III promoters. But here we potentially run into a problem. It makes absolutely no sense to start expressing the flagellin monomers before completion of the Hook-Basal-Body construction. Thus, in order to inhibit the sigma-28, the anti-sigma factor (FlgM) alluded to above inhibits its activity and prohibits it from interacting with the RNA polymerase holoenzyme complex. When construction of the Hook-Basal-Body is completed, the anti-sigma factor FlgM is secreted through the flagellar structures which are produced by the expression of the Class II hook-basal-body genes.
But it gets better. The flagellar export system (that is, the means by which [regulatory gene] FlgM is removed from the cell) has two substrate-specificity states: rod-/hook-type substrates and filament-type substrates. During the process of flagellar assembly, this substrate-specificity switch has to flick from the former of those states to the latter. Proteins which form part of the hook and rod need to be exported before those which form the filament…
The rod structure is built through the peptidoglycan layer. But its growth isn’t able to proceed past the physical barrier presented by the outer membrane without assistance. So, the outer ring complex cuts a hole in the membrane, so that the hook can grow beneath the FlgD scaffold until it reaches the critical length of 55nm. Then the substrates which are being secreted can switch from the rod-hook mode to flagellin mode, FlgD can be replaced by hook-associated-proteins, and the filament continues to grow. Without the presence of the cap protein FliD, these flagellin monomers become lost. This cap protein is thus essential for the process to take place.
My description, given above, has really only scratched the surface of this spectacular item of nano-technology… I have not, for the sake of brevity, even discussed the remarkable processes of chemotaxis, two component signal transduction circuitry, rotational switching, and the proton motive force by which the flagellum is powered… But the bottom line is that modern Darwinian theory — as classically understood — has come nowhere close to explaining the origin of this remarkably complex and sophisticated motor engine. Just as Darwinian “explanations” of the eye may, at first, appear convincing to the uninitiated, largely unacquainted with the sheer engineering marvel of the biochemistry and molecular basis of vision, so too do the evolutionary “explanations” of the flagellum rapidly become void of any persuasiveness when one considers the molecular details of the system…
It seems that the bacterial flagellum is as much a — and perhaps a greater — challenge to Darwinism as it was when Behe first wrote Darwin’s Black Box in 1996.
The epistemological moral to be drawn here is that the word “possible” has various meanings, and that any evolutionary explanation has to clear several hurdles before it can be regarded as scientifically possible, let alone factually true.
To illustrate this point, consider the idea of a winged horse that can actually fly, like Pegasus. The idea sounds ridiculous, but it’s not flat-out contradictory, like the idea of a square circle, so the existence of a winged horse that can fly is at least logically possible. However, the laws of Nature which hold in our universe would prevent such a horse from flying. Hence a flying horse cannot be regarded as nomologically possible.
A bacterial flagellum is certainly nomologically possible: flagella abound in the natural world. But what about its origin by natural processes? The origin of the bacterial flagellum by a stepwise Darwinian process does not violate any law of Nature, when considered in the abstract. However, scientific explanations need to be more than abstract: a model is required, which invokes a process or set of processes that can accomplish the task in question. It could be argued that Pallen & Matzke (2006) and Musgrave (2006) have met this condition: they have specified a set of processes which they consider sufficient in order to generate a bacterial flagellum. Musgrave has even outlined a proposed sequence of steps which could have yielded a flagellum.
But we are not finished yet. Another major problem looms: time. Our observable universe hasn’t been around forever. 13.7 billion years might sound like a long time, but some processes require a lot more time than that. Any truly scientific explanation of an entity’s origin has to be capable of showing that the entity could have been generated in the time available. That means demonstrating that the generation of the entity in question is reasonably probable, given a plausible set of initial conditions and a timescale of billions rather than (say) trillions of years.
The term “reasonably probable” might sound a little vague, and different scientists have suggested various cut-off points, but a minimum probability threshold of 10^(-150) is a pretty generous one, as 10^150 is much larger than the total number of events that have taken place during the entire history of our universe (10^120, according to Seth Lloyd’s calculations).
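The relationship between these two numbers can be made concrete with a little exact arithmetic. The sketch below simply compares the orders of magnitude the text cites (10^120 events in cosmic history, a 10^-150 probability cut-off); the framing as "trials" is my own illustration, not a claim from either source:

```python
# Orders of magnitude behind the threshold discussed above.
# 10^120 is Seth Lloyd's cited estimate of the number of elementary events
# in the history of the observable universe; 10^-150 is the (generous)
# minimum-probability cut-off mentioned in the text.
from fractions import Fraction

threshold = Fraction(1, 10**150)   # minimum "reasonably probable" chance
total_events = 10**120             # Lloyd's estimate

# Expected number of successes if every event in cosmic history were
# an independent trial at the threshold probability: 1/10^30,
# i.e. still vastly less than one success expected.
expected_successes = threshold * total_events
print(expected_successes)
```

In other words, an event at the 10^(-150) threshold would still be expected to occur far less than once even if the entire event-history of the universe were devoted to attempting it, which is why the author calls the cut-off generous.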
Here’s where the proposals put forward by Pallen & Matzke (2006) and Musgrave (2006) come up short. They simply don’t provide enough detail for scientists to estimate whether the proposed sequence of events is reasonably probable over the lifetime of the observable universe. While the proposals might be considered scientifically plausible, there is currently no way of showing whether they are scientifically possible.
At this point, evolutionists commonly try to turn the tables. “You haven’t proved that they’re not scientifically possible,” they retort. “As far as we know, the proposals we’ve put forward remain possible, so scientists should continue to treat them as live possibilities.” This is a sneaky move, because it confuses epistemic possibility (“For all we know, it might have happened”) with ontological possibility (“In the real world, it could have happened”). The first kind of possibility is armchair speculation, not science. Scientists need to establish the second kind of possibility, when they are proposing a model to explain events occurring in the real world.
To sum up: without some hard numbers (i.e. detailed probability estimates for the various steps, or at the very least, estimates of upper and lower probability bounds), we really can’t say whether the models proposed by Pallen & Matzke (2006) and Musgrave (2006) are scientifically possible or not. Hence it is misleading of Libby Anne to claim that “scientists have an answer” to the question of how the bacterial flagellum could have evolved. The problem is that we still don’t know whether the answers that have been proposed are even scientifically possible, let alone factually correct.
Maxim #2: Scientific inferences are abductive.
A schematic diagram of the type III secretion-system needle-complex. Image courtesy of Pixie and Wikipedia.
There is a second epistemological moral to be drawn from the story of the flagellum, and it is this: when trying to account for an event, you should appeal to the best possible explanation of that event. Stepwise Darwinian processes are a very poor explanation for the origin of the bacterial flagellum. In his 2012 paper, The Bacterial Flagellum: A Paradigm of Design, Jonathan McLatchie provides his readers with a detailed and beautifully illustrated explanation of how the bacterial flagellum operates, before going on to critically evaluate current Darwinian models for the evolution of the flagellum:
One of the purposes of offering this description in such detail is to reveal the futility of mere appeals to biochemical homology of flagellar proteins to proteins involved in other cellular functions (Pallen and Matzke, 2006). Indeed, homology does nothing to demonstrate that the necessary transitions are evolutionarily feasible (Gauger and Axe, 2011), and it has been shown that the process of gene duplication and recruitment, as a source of evolutionary novelty, is extremely limited: If a duplicated gene has a slightly negative fitness cost, the maximum number of non-adaptive point mutations that a new innovation in a bacterial population can require is two or fewer; this number jumps to six or fewer if the duplication is selectively neutral (Axe, 2010)…
The most common response to the claim that the bacterial flagellum manifests irreducible complexity has been to point to the type III secretion system (T3SS), a needle-like syringe used by certain bacteria (e.g. the archetype for this system Yersinia pestis) to inject toxins into organisms, as a possible evolutionary predecessor. There are a number of problems, however, with this hypothesis. For one thing, it sidesteps the need to also explain the components of the type III export machinery (including FlhA, FlhB, FliR, FliQ, FliP, FliI, etc.), at least most of which are essential for its function…
The T3SS also lacks homologues of the flagellar proteins MotA, MotB, FliG, and FliM, each of which is essential for motor function…
Even in the event that it was somehow feasible to evolve the flagellar export apparatus and basal body by evolution, there is the problem of producing the filament. Leaving aside the fact that the flagellar filament is assembled with the assistance of an essential capping protein encoded by FliD, the exported flagellin monomers need to stick both to each other and to the export machinery’s outer components (so that they are not lost from the cell into the surrounding medium). The specific and co-ordinated mutations required to facilitate such an innovation are likely to be well beyond the reach of a Darwinian process…
Rod assembly is another clear case of irreducible complexity. Aside from the multiple genes that are necessary for rod formation, this is perhaps most obvious in the necessity for penetration of the peptidoglycan layer, by the hydrolysing (muramidase) activity of the C-terminal domain of FlgJ, to allow the rod to pass through.
The motor itself exhibits irreducible complexity, and is dependent on the critical proteins FliG, MotA, MotB and FliM. Remove any one of those proteins and the motor will completely cease to function…
Many more examples could be given. But the bottom line is this: The bacterial flagellum exhibits remarkable design, and irreducible complexity at every tier. When so much of the assembly process and functional operation of the flagellum appears to resist explanation in evolutionary terms, perhaps it is time to lay aside such a paradigm and begin the search for alternatives…
The flagellum exhibits irreducible complexity in spades. In all of our experience of cause-and-effect, we know that phenomena of this kind are uniformly associated with only one type of cause – one category of explanation – and that is intelligent mind. Intelligent design succeeds at precisely the point at which evolutionary explanations break down. Rational, deliberative agents have the ability to visualise a complex endpoint and bring everything together that is required to actualise that endpoint.
Intelligent design is a very good explanation of irreducible complexity; whereas Darwinian evolution is a very ad hoc explanation, in the absence of a detailed, quantifiable model. If scientists followed the second epistemological principle listed above, they would go for the best explanation of the origin of life: intelligent design. They would then start examining various Intelligent Design hypotheses for the bacterial flagellum, and ask questions like: “How many different designs were there, originally, for the various kinds of bacterial flagella that we find in Nature?”, “When were they implemented?”, “In what order were they implemented?” and “What other designs in Nature had to precede them?” These are genuinely interesting subjects, which scientists could fruitfully research. So why don’t they?
At the present time, most scientists have fallen under the spell of a principle known as methodological naturalism, which is commonly invoked by neo-Darwinists in order to exclude Intelligent Design research from the domain of legitimate science. The supernatural, it is argued, has no place in science. As I’ll explain below, scientists have been bewitched by the principle of methodological naturalism for the last 150 years, but that wasn’t always the case. There was a time when natural theology was in vogue, and it inspired a lot of good scientific work. I’m not arguing for a return to the old days; all I want is for science to shake off its self-imposed methodological shackles.
I should add that there is no reason in principle why the Intelligent Designer of life on Earth, or even life in our universe, would have to be supernatural. The most one could prudently conclude is that the Designer lies outside the observable universe, since we now know that our universe had a beginning, which means that life in our universe (which requires a Designer, as it is characterized by a high degree of specified complexity) must have had a beginning at some point in time, billions of years ago. The Designer might be transcendent, or alternatively He/She/It might exist somewhere in the multiverse. To exclude that possibility, you’d have to show that the multiverse itself was the product of design – which is precisely what I attempted to do in the first part of this post, when I argued that it was fine-tuned.
But the point I wanted to make here is that Intelligent Design, as a purely biological hypothesis, does not concern itself with the question of whether the Designer of life is a supernatural Being. Hence there is no good reason to exclude the hypothesis that life on Earth was designed from the domain of legitimate science.
Maxim #3: Science is an open endeavor.
Historian of science and medicine Professor Ronald Numbers, photographed at the 2008 History of Science Society conference, on 8 November 2008. According to Numbers, eighteenth-century natural philosophers allowed appeals to God in their science, while displaying a preference for natural causes. The exclusion of God from the domain of science did not take place until much later on: “virtually all scientists (a term coined in the 1830s but not widely used until the late nineteenth century), whether Christians or non-Christians, came by the latter nineteenth century to agree that God talk lay beyond the boundaries of science.” (Ronald L. Numbers, 2003. “Science without God: Natural Laws and Christian Beliefs.” In When Science and Christianity Meet, edited by David C. Lindberg and Ronald L. Numbers. Chicago: University Of Chicago Press, p. 272.) Image courtesy of Ragesoss and Wikipedia.
And that brings me to my third epistemological maxim, which is that science is essentially an open endeavor: it doesn’t rule out any suggestion on a priori grounds. It therefore follows that the principle of methodological naturalism is prejudicial to scientific enquiry, as it attempts to dictate to scientists what kinds of explanations they are allowed to posit, when formulating hypotheses about the world. In short: the principle of methodological naturalism puts science in a straitjacket.
For the past several months, I’ve been researching the arguments for methodological naturalism, as well as the history of the principle itself. I’ve written a post which is about 95% complete, entitled, Is methodological naturalism a defining feature of science?. It’s a little rough, but readers are welcome to peruse it if they wish. To cut a long story short, here are the highlights of my four-part post:
- Methodological naturalism is widely regarded as a cardinal rule of scientific methodology. This methodological principle excludes all references to the supernatural from scientific discourse: it says that God-talk has no place in science. In Part A of my post, after carefully distinguishing methodological naturalism from six other principles, I argue that methodological naturalism can be best defined as an injunction: when doing science, we should assume that natural causes are sufficient to account for all observed phenomena, and for precisely this reason, all talk of the supernatural is banished from science.
- The Intelligent Design movement makes no pronouncements about who the Designer of Nature is, but deliberately leaves open the possibility that the Designer is a supernatural Being (i.e. God). Thus one can fairly argue that Intelligent Design theory is at odds with methodological naturalism, simply by refusing to rule out a supernatural Designer of Nature, and by refusing to affirm that natural causes are sufficient to account for all observed phenomena. By the same token, however, the science of cosmology also violates methodological naturalism, as it is unable to rule out a supernatural Designer of either the universe or the multiverse.
- In Part A, I endeavor to show that none of the arguments which are commonly adduced in support of methodological naturalism as a guiding principle of science are cogent, and that in any case, the biological version of Intelligent Design (which attracts widespread criticism) poses no threat to methodological naturalism. It is the cosmological version of Intelligent Design which is at loggerheads with this allegedly “scientific” principle. Bizarrely, however, many scientists who are vociferous opponents of biological Intelligent Design perceive the cosmological version of Intelligent Design as benign.
- Methodological naturalism is also commonly believed to be a hallowed principle of science, which scientists have adhered to since the Middle Ages. I argue that this picture is totally mistaken. In Part B, I present proof that methodological naturalism is a scientific novelty. I show that it was not generally accepted as a rule of scientific methodology until the late 19th century. Before then, it was regarded as perfectly legitimate for scientists to argue for the existence of a supernatural Creator of the cosmos, on empirical grounds, even in science texts.
- In Part C, I refute the claim that methodological naturalism is a hallowed principle of science going all the way back to the Middle Ages. This claim is implicit in the writings of Professor Edward Grant, who contends that medieval natural philosophers strove to minimize the role of the supernatural in science, which they defined as the study of bodies in motion. (The claim that methodological naturalism is of medieval origin is often mistakenly attributed to Professor Ronald Numbers, though what he actually says is considerably more nuanced.) In my historical survey of the Middle Ages, I show that there are indeed passages in the works of medieval natural philosophers that sound as if they are espousing methodological naturalism. For instance, many medieval scientists refused to postulate supernatural miracles when doing science; but it turns out that this was because their definition of science was narrower than ours: they deliberately excluded singular phenomena from the domain of science, and dealt only with regular phenomena. I go on to show that even on the narrow, Aristotelian definition of science as the study of regular natural occurrences, these medieval philosophers still felt impelled to invoke God, the incorporeal Unmoved Mover, as an ultimate explanation of changes occurring in the natural world. Citing passages from their works, I demonstrate that St. Albert the Great, St. Thomas Aquinas, John of Sacrobosco, Jean Buridan and Bishop Nicole Oresme all viewed God-talk as having a perfectly legitimate place in science.
- In Part D, I also rebut the assertion made by two so-called “experts” (Professor Robert Pennock and Professor John Haught) at the Dover trial of 2005, that methodological naturalism became an accepted rule for doing science in the Scientific Revolution of the sixteenth and seventeenth centuries. In fact, as I will show in a future post, methodological naturalism was widely flouted by scientists of that time. Here, I focus on two scientists: Copernicus and Galileo, who are commonly cited as examples of scientists who upheld methodological naturalism. I quote passages from the works of these scientists which show that they both firmly believed that anyone who diligently studied the movements of the celestial bodies would be led thereby to a knowledge of God. For both of these scientists, then, God-talk had a perfectly legitimate place in science – in fact, it played a vital role.
- I conclude that critics of Intelligent Design as a scientific enterprise are misinformed – both regarding the history of science and the aims of the modern Intelligent Design movement, which modestly refrains from equating the Designer of life with any supernatural Being.
Maxim #4: Speculative proposals require mathematical models to back them up
A fourth epistemological lesson that Libby Anne needs to draw from her study of evolution is that speculative proposals require mathematical models to back them up. This is particularly relevant to the comment she makes on fossils.
…The reality is that the “missing links” aren’t actually “missing” at all.
Paradoxides davidis, an early trilobite from the Cambrian period, about 540 million years ago. Trilobites appeared suddenly in the Lower Cambrian fossil record. Currently, there do not seem to be any transitional or ancestral forms combining the features of trilobites with other early arthropods. Image copyright Sam Gon III, courtesy of Wikipedia.
I’m trying to bite my tongue at this point, to refrain from exclaiming: “Cambrian explosion!” I’m referring, of course, to the 25-million-year window during which 30 or so phyla of animals appeared. I don’t want to say more than I have to on the subject, so I’ll just point readers in the direction of some papers that they may find interesting, before I move on to discuss a fossil sequence Libby Anne will definitely like: whales.
Easy reading on the Cambrian explosion
Cambrian explosion. Wikipedia article. (A good overview.)
Questions about the Cambrian Explosion, Evolution, and Intelligent Design (brochure at the Darwin’s Dilemma Website).
More advanced reading
Stephen C. Meyer, Marcus Ross, Paul Nelson and Paul Chien, The Cambrian Explosion: Biology’s Big Bang in Darwinism, Design, and Public Education (John A. Campbell and Stephen C. Meyer eds., Michigan State University Press, 2003).
Stephen C. Meyer, The origin of biological information and the higher taxonomic categories, in Proceedings of the Biological Society of Washington, Vol. 117(2):213-239 (2004).
MicroRNAs and metazoan macroevolution: insights into canalization, complexity, and the Cambrian explosion (BioEssays 31:736-747, 2009. DOI: 10.1002/bies.20090003), by Kevin J. Peterson, Michael R. Dietrich and Mark A. McPeek. (A thoughtful paper by a team of scientists who are orthodox neo-Darwinists, but who are also honest enough to recognize the inadequacy of current explanations for the Cambrian explosion. The authors then put forward their own tentative proposal.)
And now, back to whales.
The evolution of whales took place over a period of about 15 million years. Neo-Darwinists often cite the intermediate forms found between modern whales and their land-dwelling ancestors as powerful evidence of a smooth, unbroken, continuous chain of descent linking the two. Case closed, right? Not so fast. Recently, I’ve been looking at the same fossils, and researching the anatomical differences between them. I see evidence not of continuity, but of discontinuity: the fossil intermediates fall into about half a dozen clearly-defined, non-overlapping groups, and at each step along the way, not one but several new anatomical traits emerge at roughly the same time. Often we get several traits emerging at the same time in the same organ (e.g. the ear).
The evolution of the whale would have required a large number of mutations to occur in parallel. For instance, whales require an intra-abdominal countercurrent heat exchange system (the testes are inside the body, right next to the muscles that generate heat during swimming); they need to possess ball vertebrae, because the tail has to move up and down instead of side-to-side; they require a re-organization of kidney tissue to facilitate the intake of salt water; they require a re-orientation of the fetus for giving birth under water; they require a modification of the mammary glands for the nursing of young under water; the forelimbs have to be transformed into flippers; the hind-limbs need to be substantially reduced; they require a special lung surfactant (the lung has to re-expand very rapidly upon coming up to the surface); and so on.
What the fossils show is that whale evolution occurred over millions of years. At first blush, that seems to suggest a natural, gradualistic process. However, the mathematical feasibility of such parallel evolution occurring as a result of unguided Darwinian processes, over a mere 15 million years, remains to be demonstrated. Indeed, evolutionary biologist Dr. Richard von Sternberg has applied the population genetic equations employed in a 2008 paper by Durrett and Schmidt to argue against the plausibility of the transition happening in so short a period of time. Assuming an effective population size of 100,000 individuals per generation, and a generation turnover time of 5 years, those equations predict that one may reasonably expect two specific co-ordinated mutations to achieve fixation in a timeframe of around 43.3 million years. When one considers the large number of mutations that occurred in cetaceans over a 15-million-year period, the Darwinian scenario appears to lack credibility.
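Just comparing the figures quoted above makes the point starkly. Here is a back-of-the-envelope check (a sketch only; the 43.3-million-year estimate is Dr. Sternberg’s, based on Durrett and Schmidt’s equations, and the other numbers are the assumptions stated in the text):

```python
# Rough arithmetic check of the figures quoted above.
# Assumptions (all taken from the text): generation time 5 years,
# a 15-million-year fossil window for the land-to-whale transition, and
# Sternberg's ~43.3-million-year estimate for just TWO specific
# co-ordinated mutations to reach fixation.

GENERATION_TIME_YEARS = 5
WHALE_TRANSITION_YEARS = 15e6      # fossil window for the transition
TWO_MUTATION_WAIT_YEARS = 43.3e6   # Sternberg's figure

generations_available = WHALE_TRANSITION_YEARS / GENERATION_TIME_YEARS
generations_needed = TWO_MUTATION_WAIT_YEARS / GENERATION_TIME_YEARS

print(f"Generations available in the fossil window: {generations_available:.2e}")
print(f"Generations for two co-ordinated mutations: {generations_needed:.2e}")
print(f"Shortfall factor: {generations_needed / generations_available:.1f}x")  # ~2.9x
```

On these figures, the expected wait for a single pair of co-ordinated mutations is nearly three times longer than the entire window available, before one even considers the many other required changes.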
At this point, I’d like to let Libby Anne in on a dirty little secret which is highly embarrassing to proponents of Darwinian evolution…
The only kind of evolution that is known to be mathematically feasible is … Intelligent Design!
A North American beaver (Castor canadensis). The Busy Beaver function was used by Professor Gregory Chaitin to model evolution. However, what it actually shows is the adequacy of intelligently guided evolution and the apparent inadequacy of Darwinian evolution to produce the required changes within the limited time available. Image courtesy of Laszlo Ilyes and Wikipedia.
Last year, I had the good fortune to listen to a one-hour talk posted on YouTube, entitled, Life as Evolving Software. The talk was given by Professor Gregory Chaitin, a world-famous mathematician and computer scientist, at PPGC UFRGS (the Programa de Pós-Graduação em Computação of the Universidade Federal do Rio Grande do Sul), in Brazil, on 2 May 2011. I was profoundly impressed by Professor Chaitin’s talk, because he was very honest and up-front about the mathematical shortcomings of the theory of evolution in its current form. As a mathematician who is committed to Darwinism, Chaitin is trying to create a new mathematical version of Darwin’s theory which proves that evolution can really work. He has recently written a book, Proving Darwin: Making Biology Mathematical (Random House, 2012, ISBN: 978-0-375-42314-7), which elaborates on his ideas.
Here is a very short summary of what Professor Chaitin said in his talk, concerning Darwinism and Intelligent Design.
(a) DNA really is a kind of programming language. In fact, Professor Chaitin believes it’s a universal programming language.
(b) Building on the work of John Maynard Smith, Chaitin claims that life itself is evolving software, and that biology can be defined as the study of ancient software – software archaeology, if you like.
(c) At the present time, there is no adequate mathematical theory of Darwinian evolution. In fact, even the possibility of evolution being able to continue indefinitely without grinding to a halt (which is absolutely fundamental to Darwin’s theory) had not been mathematically demonstrated before Chaitin did his research.
(d) Unfortunately, the genes of modern organisms are too complicated and too messy to use, if you want to create a mathematical model which rigorously demonstrates the possibility of evolution. Instead, a simplified “toy model” is required in order to rigorously demonstrate that evolution can go on forever, without grinding to a halt.
(e) Chaitin looked at three kinds of evolution in his toy model: exhaustive search (which stupidly performs a search of all possibilities in its search for a mutation that would make the organism fitter, without even looking at what the organism has already accomplished), Darwinian evolution (which is random but also cumulative, building on what has been accomplished to date) and Intelligent Design (where an Intelligent Being selects the best possible mutation at each step in the evolution of life). All of these – even exhaustive search – require a Turing oracle for them to work, in Chaitin’s model – in other words, outside direction by an Intelligent Being. In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” The purpose of the Turing oracle in Chaitin’s theory is twofold: to decide which of two organisms is better, and to eliminate non-terminating mutations. I have previously discussed Chaitin’s use of an oracle to simulate evolution in my post, At last, a Darwinist mathematician tells the truth about evolution. For my part, I have grave doubts about the legitimacy of scientists appealing to an oracle to make their mathematical model of Darwinian evolution work, but I shall refrain from discussing them here.
(f) Interestingly, out of the three kinds of evolution examined by Chaitin, Intelligent Design was the only one guaranteed to get the job done on time. Darwinian evolution is certainly much more efficient than performing an exhaustive search of all possibilities, but it still seems to take too long to come up with an improved mutation. Chaitin is currently trying to show that it can do the job on time.
In his talk, Chaitin describes how he defined “fitness” in his model, using the Busy Beaver function of N, and how he was dismayed to find that Darwinian evolution appears to take too long to enable a population to reach the required level of fitness:
Well, the way to measure the rate of progress, or creativity, in this model, is to define a thing called the Busy Beaver function. One way to define it is the largest fitness of any program of N bits in size. It’s the biggest whole number without a sign that can be calculated if you could name it, with a program of N bits in size… It’s like, the fittest one. It succeeds in naming the biggest integer … You name an integer by calculating it and evolving it. OK, so that’s the best mathematician among the N-bit programs in my competition.
So what happens if we do that, which is sort of cumulative random evolution, the real thing? Well, here’s the result. You’re going to reach Busy Beaver function N in a time that is – you can estimate it to be between order of N squared and order of N cubed. Actually this is an upper bound. I don’t have a lower bound on this. This is a piece of research which I would like to see somebody do – or myself for that matter – but for now it’s just an upper bound. OK, so what does this mean? This means, I will put it this way. I was very pleased initially with this.
Exhaustive search reaches fitness BB(N) in time 2^N.
Intelligent Design reaches fitness BB(N) in time N. (That’s the fastest possible regime.)
Random evolution reaches fitness BB(N) in time between N^2 and N^3.
But I told a friend of mine … about this result. He doesn’t like Darwinian evolution, and he told me, “Well, you can look at this the other way if you want. This is actually much too slow to justify Darwinian evolution on planet Earth.” And if you think about it, he’s right… If you make an estimate, the human genome is something on the order of a gigabyte of bits. So it’s … let’s say a billion bits – actually 6 x 10^9 bits, I think it is, roughly – … so we’re looking at programs up to about that size [here he points to N^2 on the slide] in bits, and N is about of the order of a billion, 10^9, and the time, he said … that’s a very big number, and you would need this to be linear, for this to have happened on planet Earth, because if you take something of the order of 10^9 and you square it or you cube it, well … forget it. There isn’t enough time in the history of the Earth … Even though it’s fast theoretically, it’s too slow to work. He said, “You really need something more or less linear.” And he has a point…
Chaitin is still trying to find a way to show that Darwinian evolution can work, in the time available.
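The friend’s objection in the passage quoted above is pure arithmetic, and easy to check for oneself. The sketch below is my own illustration, not Chaitin’s: it takes N ≈ 10^9 bits for the genome, as in the talk, and grants a deliberately generous budget of one evolutionary “step” per year over the Earth’s roughly 4.5-billion-year history.

```python
import math

N = 10**9        # genome size in bits, as in Chaitin's talk (order of magnitude)
budget = 4.5e9   # generous step budget: one step per year of Earth's history

linear = N       # Intelligent Design regime: time ~ N
quadratic = N**2 # cumulative random evolution: time between N^2 and N^3

print(f"time N   = {linear:.1e} steps -> fits the budget: {linear <= budget}")
print(f"time N^2 = {quadratic:.1e} steps -> fits the budget: {quadratic <= budget}")
# Exhaustive search takes ~2^N steps; even its *logarithm* dwarfs the budget:
print(f"log10(2^N) ~ {N * math.log10(2):.2e}, versus log10(budget) ~ {math.log10(budget):.1f}")
```

Only the linear regime – the one Chaitin labels Intelligent Design – fits inside the budget; N^2 overshoots it by eight orders of magnitude, which is exactly the point the friend was making.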
In his talk, Chaitin discussed the problems relating to Darwinian evolution with refreshing candor, and I would like to thank him for his honesty. I also think that his way of framing the problem in his model of “toy evolution” is a very fruitful one. My own view, for what it’s worth, is that the Intelligent Designer acts like Chaitin did, when implementing his Intelligent Design solution: He selects the best possible mutation at each step in the evolution of life.
“But there’s plenty of time for evolution!” – Only if it’s intelligently designed evolution!
A humpback whale breaching. Image courtesy of Whit Welles and Wikipedia.
Many Darwinian evolutionists indignantly deny that there’s any problem with their theory being able to come up with the required number of mutations in the time available. They triumphantly cite a recent paper by Herbert S. Wilf and Warren J. Ewens, entitled, There’s plenty of time for evolution (Proceedings of the National Academy of Sciences, December 28 2010, vol. 107 no. 52, pp. 22454-22456, doi: 10.1073/pnas.1016207107), which claims to show that there’s plenty of time for Darwinian evolution to occur, even if multiple mutations are required:
Objections to Darwinian evolution are often based on the time required to carry out the necessary mutations. Seemingly, exponential numbers of mutations are needed. We show that such estimates ignore the effects of natural selection, and that the numbers of necessary mutations are thereby reduced to about K log L, rather than K^L, where L is the length of the genomic “word,” and K is the number of possible “letters” that can occupy any position in the word. The required theory makes contact with the theory of radix-exchange sorting in theoretical computer science, and the asymptotic analysis of certain sums that occur there.
Evolution is an “in parallel” process, with beneficial mutations at one gene locus being retained after they become fixed in a population while beneficial mutations at other loci become fixed. In fact this statement is essentially the principle of natural selection…
The paradigm used in the incorrect argument [against Darwinian evolution – VJT] is often formalized as follows: Suppose that we are trying to find a specific unknown word of L letters, each of the letters having been chosen from an alphabet of K letters. We want to find the word by means of a sequence of rounds of guessing letters. A single round consists in guessing all of the letters of the word by choosing, for each letter, a randomly chosen letter from the alphabet. If the correct word is not found, a new sequence is guessed, and the procedure is continued until the correct sequence is found. Under this paradigm the mean number of rounds of guessing until the correct sequence is found is indeed K^L.
But a more appropriate model is the following: After guessing each of the letters, we are told which (if any) of the guessed letters are correct, and then those letters are retained. The second round of guessing is applied only for the incorrect letters that remain after this first round, and so forth. This procedure mimics the “in parallel” evolutionary process. The question concerns the statistics of the number of rounds needed to guess all of the letters of the word successfully.
The fact is that with the parallel model, i.e., taking account of natural selection, the number of rounds of mutations that are needed to change the complete genome to its desirable form are only about K log L, instead of the hugely exponential K^L which would result from the serial model.
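Wilf and Ewens’ “parallel” guessing model is simple enough to simulate directly. The sketch below is my own toy implementation of the procedure described in the quoted passage, not the authors’ code: each round, only the still-incorrect letters are re-guessed, and correct guesses are retained. The round count comes out near K ln L, vastly below the serial model’s K^L.

```python
import math
import random

def rounds_to_guess(L=100, K=26, seed=0):
    """Parallel model: each round, re-guess only the still-incorrect letters."""
    rng = random.Random(seed)
    target = [rng.randrange(K) for _ in range(L)]  # the unknown L-letter word
    unsolved = set(range(L))
    rounds = 0
    while unsolved:
        rounds += 1
        # Guess a random letter at every unsolved position; keep any that match.
        solved = {i for i in unsolved if rng.randrange(K) == target[i]}
        unsolved -= solved
    return rounds

L, K = 100, 26
r = rounds_to_guess(L, K)
print(f"Parallel model: {r} rounds (K ln L ~ {K * math.log(L):.0f})")
print(f"Serial model would need ~{K}^{L} ~ 10^{L * math.log10(K):.0f} rounds")
```

For a 100-letter word over a 26-letter alphabet, the parallel procedure typically finishes in a little over a hundred rounds, while the serial procedure would need on the order of 10^141. Whether this model fairly represents biological evolution is, of course, exactly what Dr. Axe disputes below.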
Professor Jerry Coyne has an excellent post on Wilf and Ewens’ paper, which is eponymously entitled, There’s plenty of time for evolution (29 December 2010). Allow me to quote a brief excerpt:
Let’s put some biological numbers to this. Let’s assume that we have to change 20,000 genes to get from an ancestor to a descendant. (That’s a LOT of genes, since the whole human genome is only a tad bigger than this.) And let’s assume that at each gene only 1/40 of all gene variants are adaptive. (We’re assuming that if the population has as few as one “adaptive” variant, that one will sweep through the population. That’s not strictly correct since some of these will get lost by genetic drift and never contribute to evolution.) The 1/40 figure comes from assuming a population has a million births each generation, that there are 20,000 genes, that each generation of new births carries about 5 million new mutations in the genome – about 250 per gene – and that only one new mutation in 10,000 will be favored over the “resident gene type” (The mutation data are taken from humans, and assume that only a small percentage of new mutations arise in regions of the genome that actually do something.)
Using the formula, Wilf and Ewens calculate that complete gene substitution at all 20,000 genes would take about 390 “rounds” of guessing.
That compares to 10^34,040 rounds of guessing if you ask for all the genes to change in a single “round”.
The difference occurs because under parallel evolution the number of trials (or mutational rounds that must occur to cause evolution) enters as K(log L) rather than K^L. The first number is much smaller when L is large.
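It is easy to verify that Coyne’s figure is essentially Wilf and Ewens’ K log L formula with K = 40 and L = 20,000 plugged in. This is a quick sanity check of my own (assuming the natural logarithm; the small discrepancy from 390 reflects constants the asymptotic formula drops):

```python
import math

K = 40       # roughly 1 in 40 gene variants is adaptive, per Coyne's assumptions
L = 20_000   # number of genes that must change

rounds = K * math.log(L)   # Wilf & Ewens' K log L estimate
print(f"~{rounds:.0f} rounds")   # ~396, in line with the quoted figure of about 390
```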
So it seems that whales could easily have evolved by Darwinian process over 15 million years, after all. Problem solved, right? Wrong!
A Least weasel (Mustela nivalis) at the British Wildlife Centre, Surrey, England. Image courtesy of Keven Law and Wikipedia.
A response to Wilf and Ewens’ paper was rapidly forthcoming from Dr. Douglas Axe, of the Biologic Institute. In a short post mockingly titled, Breaking News from the Academy: There’s Plenty of Time for Evolution! (January 14, 2011), Dr. Axe pointed out the fallacious logic employed by Wilf and Ewens in their paper:
Lacking recourse to anything comparably compelling, Darwinists have always relied heavily on mere repetition of their core beliefs. If you can’t prove something, sometimes you just keep asserting it with an authoritative tone in prominent places, hoping that it will catch on. From what I can tell, that appears to be the most plausible explanation for a paper by Herbert Wilf and Warren Ewens titled “There’s Plenty of Time for Evolution”, which just appeared in the highly regarded Proceedings of the National Academy of Sciences …
So here we have a new research paper that reads very much like a mathematically embellished version of the simplistic “METHINKS IT IS LIKE A WEASEL” argument put forward twenty-five years ago by Richard Dawkins.
In case you missed it the first time around, here’s my two-sentence synopsis. Although it would take eons for unassisted random typing to generate the Shakespearean line METHINKS IT IS LIKE A WEASEL, the task becomes very manageable if something can select the best line from among the many lines of random gibberish, where ‘best’ means most resembling METHINKS IT IS LIKE A WEASEL (however slight that resemblance may be). Couple this with the ability to breed slight variations on what was just selected, and voila! – a line from Shakespeare materializes right before our eyes.
It’s an old argument with an embarrassingly obvious flaw. Yes, meaningful text can evolve very rapidly if selection has foresight or (equivalently) if miraculously helpful fitness functions can be assumed. But alas, neither of these happy circumstances follows from the impersonal kind of selection that Darwinists are committed to.
Dawkins’ illustration makes this abundantly clear, in spite of his intent. He proposed (in my antique copy of his book, it’s on page 48) that this:
is somehow manifestly more fit than this:
but I can’t imagine why it would be, unless the selector (like Dawkins) knows exactly where he wants to go with it. If he does… well, that’s called intelligent design.
In the end, whether evolution has plenty of time or not depends on what you want to ascribe to it. It copes well with the most favorable adaptations conceivable (those offering substantial benefit after a single nucleotide substitution), but even slightly more complex tasks involving just two or three mutations can easily stump it [3,4]. The key question, then, is this: What, of all life’s marvels, can be accounted for in terms of the single-change adaptations that Darwinism explains? And the answer, if we take Dawkins’ illustration seriously, is: Nothing that approaches the complexity of a six-word sentence.
You don’t need a biology degree to see that this leaves Darwinism in a difficult position. In fact, oddly enough, it seems that biology degrees only make it harder to see.
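For readers who haven’t seen it, the “weasel” procedure that Dr. Axe describes can be sketched in a few lines. This is my own reconstruction of Dawkins’ published description, with arbitrary choices for the mutation rate and brood size (Dawkins does not specify his parameters). Axe’s objection is visible in the code itself: the fitness function compares every guess against the known target, which is precisely the foresight an unguided search would not have.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.04   # arbitrary choice
BROOD_SIZE = 100       # offspring per generation (also arbitrary)

def fitness(s):
    # The "miraculously helpful" fitness function Axe objects to:
    # it already knows the target and scores resemblance to it.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    # Each character is independently replaced with a random letter.
    return "".join(rng.choice(ALPHABET) if rng.random() < MUTATION_RATE else c
                   for c in s)

def weasel(seed=0):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)  # random gibberish
    generation = 0
    while parent != TARGET:
        generation += 1
        brood = [mutate(parent, rng) for _ in range(BROOD_SIZE)]
        parent = max(brood, key=fitness)   # selection with foresight
    return generation

print(f"Target reached in {weasel()} generations")
```

The program converges in a few hundred generations at most – but only because `fitness` smuggles the answer in. Replace it with a function that rewards nothing short of a complete, meaningful sentence, and the search collapses back to blind guessing.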
So my advice for advocates of neo-Darwinism is: beware of using fossils to support your case. At the very most, fossils will merely establish that an evolutionary transition has occurred. That’s evidence for common descent, but it’s not evidence for Darwinism, which postulates not only common descent but a particular mechanism as well: random variation, culled by non-random natural selection. What needs to be shown is that that mechanism is adequate to account for the evolution we observe in the fossil record.
The evolution of the whale might be a good argument for evolution, construed broadly to mean “common descent.” But in the absence of a mathematical model for Darwinian evolution which shows that it can accomplish a transition within the time available, the only kind of evolution that whale fossils can possibly support at present is intelligently guided evolution.
But Darwinists have one more trick up their sleeve. “Intelligent Design is a flawed hypothesis,” they argue. “An Intelligent Creator would never have made living things the way they are now. Look at the giraffe’s laryngeal nerve, or the panda’s thumb. Look at junk DNA. Darwinism is the only theory that can explain those facts.”
The reader will notice that Darwinists are appealing to a psychological argument here. But psychological arguments are weak, and don’t count for much in science. They’re speculative. Empirical evidence trumps psychological speculation, every time, when the two clash. If a forensic scientist forms a hypothesis about a crime, based on speculation about the criminal’s possible motives, and physical evidence turns up which contradicts that hypothesis, then it’s dead in the water. And that brings me to my fifth and final epistemic principle.
Maxim #5: Empirical evidence comes first. Psychological speculation isn’t evidence.
Male Giant Panda “Tai Shan” (2005) at the Smithsonian National Zoological Park in Washington, D.C. The panda’s thumb is commonly cited as an example of poor design. Image courtesy of Fernando Revilla and Wikipedia.
In her blog article entitled, Young Earth Creationism and Me, Libby Anne is clearly impressed by the following argument for evolution:
…[M]any animals actually show evidence of very bad design.
There are several comments I’d like to make here.
First, even if Intelligent Design proponents had no good explanation for the instances of poor design cited by neo-Darwinian evolutionists, these awkward examples of bad design would still be trumped by the evidence from proteins and RNA, which demonstrate on mathematical grounds that life must have been designed. Why? Because that’s the way science works. Empirical evidence comes first. Show me a bio-molecule (such as a protein) that unguided natural processes couldn’t have put together in the time available, and I’ll have to infer a Designer. Arguments based on structures found in living things which appear to have been poorly designed can never overrule that kind of evidence, because it’s based on solid empirical facts and mathematical calculations. Examples of poor design in living organisms rely on a hypothetical counterfactual about the Creator: “A Designer would never have done it that way.” Really? Do you know that? No, you don’t. Until you can find some rigorous way of quantifying the probability that a Designer would act in a certain way, your argument will not hold water.
Second, the fact that some features of living things were designed doesn’t mean that all of them were. Maybe proteins were designed, but the panda’s thumb wasn’t. Maybe the Designer only designed the first living thing – or the distinctive body plans for the major groups of organisms – and let Darwinian processes take over at lower taxonomic levels, meaning that species-specific traits were not designed. Who knows?
Third, even the most clear-cut cases of poor design are open to an alternative, design-friendly interpretation. Consider that most comical of anatomical imperfections, the laryngeal nerve of the giraffe, cited by Professor Richard Dawkins as excellent evidence for Darwinian evolution. Now, if the laryngeal nerve were just involved in controlling the larynx, then Dawkins might have a good point. The laryngeal nerve comes down from the brain and loops around the arteries near the heart and then goes back up to the larynx. In the giraffe, this seems like particularly bad design. However, the laryngeal nerve actually has several branches all along its length that go to the heart, esophagus, trachea, and thyroid gland. Thus it is involved in a whole system of control of various related organs. It would be very unintelligent to have a single nerve, controlling only the larynx. It would be more intelligent to have it control a lot of related systems all along its length (see this article). Hence the laryngeal nerve, far from being a problem for intelligent design, actually vindicates it.
Dr. Jerry Bergman discusses the laryngeal nerve in a recent article entitled Recurrent Laryngeal Nerve Is Not Evidence of Poor Design (Acts & Facts 39 (8): 12-14), where he concludes:
The left recurrent laryngeal nerve is not poorly designed, but rather is clear evidence of intelligent design:
- Much evidence exists that the present design results from developmental constraints.
- There are indications that this design serves to fine-tune laryngeal functions.
- The nerve serves to innervate other organs after it branches from the vagus on its way to the larynx.
- The design provides backup innervation to the larynx in case another nerve is damaged.
- No evidence exists that the design causes any disadvantage.
The arguments presented by evolutionists are both incorrect and have discouraged research into the specific reasons for the existing design.
Fourth, the Wikipedia article that Libby Anne linked to pointed out one good reason for “bad” designs: “the observed suboptimality in one system or another is intentional, as a trade-off to improve overall optimal design.” Darwinists themselves admit that this notion of a trade-off applies when we look at the design of the human body. “Darwinian medicine” advocate Dr. Randolph Nesse, a psychiatrist at the University of Michigan, was recently interviewed by Professor Richard Dawkins. David Klinghoffer reports:
As a first example of “bad design” that comes to mind, Nesse speaks about our forearm with its two slender bones, the radius and the ulna. If those bones were thicker we wouldn’t be so vulnerable to a kind of fracture, a Colles fracture, that besets skateboarders — who when they fall forward off their board, catch their weight on their extended forearms. That does sound painful, yet the same feature allows us to rotate our arms in countless delicate ways, with a fine dexterity that makes it possible to play the piano or the violin, or paint portraits.
It’s a trade-off between sturdiness and mobility, explains Nesse — a “historical legacy,” an example of “path dependence”: “Everything in the body…is trade-offs all the way down.” He seems not to notice that this is true of all design, in the human context, that you can possibly think of. It’s in the nature of the physical world that every good must be somehow bought at the expense of something else. Only in pure creativity, which happens in the mind, is no compromise necessarily exacted. Translate your creative idea into matter, and it’s a different story.
Fifth, even structures that have been designed are capable of undergoing deterioration. This is an especially pertinent point with respect to DNA. Personally, as an Intelligent Design proponent, I don’t have a big problem with the existence of junk DNA in organisms. After four billion years of evolution, I’d certainly expect to find some. The only question is: how much? What I’m much more impressed with, however, is the design of the first living cell.
Sixth, and finally, the fact that animals are capable of losing organs through neo-Darwinian evolution in no way implies that neo-Darwinian evolution is capable of creating new organs. This point was overlooked by Libby Anne in her blog article, Young Earth Creationism and Me, where she cited vestigial organs as evidence for evolution:
Finally, there is the whole issue of vestigial organs. Did you know that whales have hip bones? Whales and dolphins were originally land animals and then moved back into the water.
All well and good; but I’d be much more impressed if Libby Anne could provide me with a good explanation of the origin of the tympanoperiotic complex (or the so-called “ear-bone”) in toothed whales, porpoises and dolphins (odontocetes). I blogged about this a few weeks ago in a post entitled, Darwinians concoct a whale of a tale about the evolution of the ear. I’ll just quote a short excerpt, to convey the magnitude of the problem facing neo-Darwinists:
…[A] lot of evolutionary innovations (or apomorphies) seem to have occurred over a very short period of evolutionary time, in the lineage leading to whales. For instance, Zhexi Luo, on page 274 of his article, “Cetacean Ectotympanic Structures” in The Emergence of Whales: Evolutionary Patterns in the Origin of Cetacea edited by J.G.M. Thewissen (Plenum Press, 1998), listed the following six apomorphies in the ear bones as hallmark traits of the “post-Pakicetidae” Cetacea – in other words, creatures in the whale line after Pakicetus, which means the Protocetidae, as well as their descendants, the Basilosauridae and the Dorudontidae, the latter of whom gave rise to modern whales, which fall into two groups – baleen whales (Mysticeti) and toothed whales (Odontoceti):
(a) An incipient conical apophysis.
(b) The tympanic opening for the external meatus is reduced.
(c) The sigmoid process is twisted and has involuted margins.
(d) Elongate posterior process of the ectotympanic to cover the entire length of the mastoid process of the petrosal. The posterior portion of the posterior process is a horizontal plate (not vertical).
(e) A median furrow on the ventral surface of the bulla.
(f) Double pedicles for the posterior process of the tympanic bone.
(Sorry for the anatomical jargon. If anyone can interpret it, they’re welcome to.)
Maiacetus, a whale in the Protocetid family. Image courtesy of Cliff, FunkMonk, National Museum of Natural History and Wikipedia.
So there were six evolutionary innovations which appeared for the first time in the protocetids! That’s a whole lot of evolution going on, and all in one organ: the cetacean ear.
To be clear, I’d like to point out that Dr. Zhexi Luo is no friend of Intelligent Design: he discusses the homologies between whales and their near relatives within a Darwinian framework, and he also invokes embryology to explain some of the distinctive traits of whales. That’s fine, but I have to say that I found no detailed explanation for the origin of any particular structure in the ear of whales.
Concluding thoughts on epistemology
In this essay, I have highlighted five key epistemological principles relating to science, the neglect of which can make people vulnerable to accepting weak, unconvincing or even fallacious arguments for the theory of neo-Darwinian evolution. Libby Anne is by no means the first person to be hoodwinked into accepting neo-Darwinian evolution as a result of failing to advert to these principles, and she won’t be the last. I hope this essay of mine will prompt her to question her beliefs.
SECTION C – A RESPONSE TO LIBBY ANNE’S QUESTIONS ON GOD AND RELIGION
Is faith in God immune to falsification?
Another thing that led Libby Anne away from belief in God was that the belief seemed immune to falsification, as she explains in a post titled, Searching for the Baby in the Bathwater:
Christians have set it up so that God can never fail them. Your child survives cancer? Praise God! He healed your child! Your child dies of cancer? It was God’s will, and he’s teaching you things through it.
In practice, whether God exists or not is completely irrelevant. Christians don’t get sick less, they don’t have greater financial success, and studies have shown that prayer does not actually help… It took me almost five years from start to finish, but in the end I concluded that there was no baby in the bathwater after all.
I really feel that this is not a fair criticism. For my part, I’ve listed no fewer than seven observations that would cause me to abandon my belief in God, in a blog article entitled, My faith is falsifiable, Professor Coyne. Is yours? I would invite Libby Anne to go and take a look at them, and see what she thinks.
Libby Anne’s claim that prayer doesn’t make sick people better is a common one, but in fact there are medical studies suggesting that prayer actually works, although they’re not conclusive. I’ve discussed the evidence that prayer helps sick people in my blog post, No evidence, you say? A reply to Eric MacDonald.
Some answers to Libby Anne’s questions on Christian doctrines
Readers who have no interest in religious matters may stop reading here, if they wish.
In a post titled, Searching for the Baby in the Bathwater (February 23, 2012), and in a more recent post titled, Omniscience, the Trinity, and Free Will: Why I can’t believe (May 12, 2012), Libby Anne brings up several religious doctrines that make absolutely no sense to her, and which caused her to give up her religious faith. Libby Anne also reveals that after quitting evangelicalism, she spent a couple of years in the Catholic Church, before she became convinced that its doctrines also made no sense. I can strongly sympathize with her position here, as I gave up Catholicism and Christianity for much the same reasons back in 1989, before returning to the faith some 15 years later. The questions below are extracted from Libby Anne’s posts, and the answers I’ve written are my own, so I take full responsibility for them.
Q. If God knew before he created the world that he would create the world and exactly everything he would do in the future – since he’s supposedly omniscient – then would God have free will?
The short answer is: Yes, He would.
The longer answer is that you have to be careful about the word “before.” Philosophers distinguish two kinds of priority: logical and temporal. A is logically prior to B if the very concept of B presupposes A. For instance, the concept of property is logically prior to the concept of theft: if no-one owned anything, the notion of theft wouldn’t make any sense. A is temporally prior to B if A occurs at an earlier time than B. For instance, childhood is temporally prior to adulthood.
God is outside time, so we can’t speak of Him being temporally prior to anything. He could, however, warn people at time T1 about dangers they’d face at a subsequent time T2, since He is omniscient and knows the future.
What about God’s knowledge of His own choices? The real question at issue here is: are God’s choices determined by His (logically prior) knowledge of those choices, or is His knowledge of those choices determined by His act of making those choices? The first option, aside from destroying God’s freedom, is circular, as it begs the question of how God knows His choices, logically prior to His act of making them. The second option, on the other hand, gives God back His freedom, and avoids circularity. That’s the one I accept, as do nearly all Christian theologians past and present.
Q. Did God put Adam and Eve in a situation where he knew they would fail and then punish them for it? Was there an emergency plan B forced upon the Creator of the universe after the Fall in Genesis?
The short answer is: No to the first question, and Yes to the second.
Theologians have wrestled with the question of how God knows human choices for the past 2,000 years. Three main solutions have been proposed.
Some, such as Calvin (as well as the later Augustine and St. Thomas Aquinas, as I read them), held that God knows our choices by determining them. That’s predestination. Hence His knowledge of our choices is logically prior to our act of making those choices. That seems to me to destroy human freedom. It also (to my mind) makes God responsible for evil, as I cannot be held accountable for doing a bad action if my doing it is determined by circumstances beyond my control (i.e. by God’s sovereign will).
Other theologians, such as Molina, held that human beings have genuine libertarian freedom (the power to choose otherwise than what they do), but that God also has a basic, ungrounded knowledge of not only everything we do, but also everything we would do, in every possible situation. (William Lane Craig is also of this view.) Molinism seems to put the cart before the horse, however, because it says that God’s knowledge of what I would choose in every possible situation is logically prior to His act of creating me.

Another problem I have with Molinism is that it doesn’t really make human beings free. For if (as Molinism maintains) it is true that for any choice that I actually make in a given situation, that was the choice I would have made in that situation, then there is no meaningful sense in which I could have chosen otherwise in that situation. Hence I don’t have libertarian freedom.

Finally, if God, in choosing which world He will actually create, from among an array of possible worlds, selects one in which He knows certain individuals will be damned because of decisions that they would make, then God has already ensured the damnation of those individuals, simply by deciding to create that world. Consequently, if people are damned for their bad choices in this world, they are no more responsible for their own damnation than they would be if the doctrine of predestination were true. Is such a God any more merciful than the God who predestines everything? I think not.
For my part, I favor a third solution to the problem of God’s foreknowledge, which was first proposed by Boethius in the sixth century: God can be timelessly made aware of our past, present and future choices. That’s how He knows them. It doesn’t matter for my purposes whether you conceive of God as atemporal, or outside time (as Boethius did) or as omnitemporal, or occupying all points in time (as some modern philosophers do). On the Boethian view, God’s foreknowledge of our choices logically presupposes our making those choices. In other words, our choices are logically prior (but not temporally prior) to God’s knowledge of them. Our choices (which are made in time) determine God’s timeless awareness of those choices. God is like a watcher in a high tower, Who can see past, present and future in one sweep, for He is outside time (or atemporal).
(A variant on the Boethian view is that God is not atemporal but omnitemporal, just as he is omnipresent: on this view, He is at all points in space and all points in time. One could then argue that it is simply God’s nature to know past, present and future events alike. This is what David Misialowski does in his article, Theological Fatalism Part 1, Part 2. What makes these articles especially interesting is that they are written by a self-described “agnostic atheist.” Even though he is a skeptic, Misialowski believes he can show that “no theist need fear the argument, heard so often from atheists intent on discrediting religious belief, that an omniscient God cancels human free will and moral responsibility.”)
One big advantage of the Boethian account over its rivals is that it acquits God of all responsibility for the damnation of any human being. If some people are damned because of the choices they have made, then God only knows this after the fact, logically speaking (not temporally, as God is outside time). All He does is reluctantly acquiesce in the decisions that wicked people make at the end of their lives, to eternally separate themselves from Him. God doesn’t force Himself on people; if people want to be left alone, then in the end, He’ll grant them their wish.
The Boethian account has been defended by John Wesley and C. S. Lewis, and it is also popular among Christian laypeople. Theologians have raised various objections to the Boethian account, which I have answered in detail here. Some theologians don’t like the Boethian view because it makes God dependent on His human creatures for His knowledge of their choices: He is made aware of what they do by their doing it. I don’t have a problem with that: if God freely accepts that limitation in deciding to create beings with libertarian free will, then that’s His choice.
One interesting consequence of the Boethian account is that although God sees my future choices, He cannot tell me what I will do tomorrow, for in so doing, He would make it possible for me to prove Him wrong by choosing to do otherwise (as I have libertarian free will). On the other hand, He could tell someone who is too weak to choose the right thing that they will sin in the future. That’s how Jesus could tell Peter, whose faith was still weak, that he would deny his Master three times. (God would have also arranged for Peter to be tested three times.)
To sum up: on the Boethian view, God did not know, logically prior to His decision to create human beings, that they would sin and turn against Him. His default plan (or Plan A) was for them to enjoy happiness with Him in Paradise. Nevertheless, God realized that human beings might choose to sin, and made “back-up” plans. So yes, the Fall in Genesis did force an emergency plan B on God, on the Boethian view.
Q. Why couldn’t God just forgive people, why did he have to have his son murdered in order to be able to do so?
First, God didn’t “have His son murdered.” He allowed His Son to be murdered.
Second, He didn’t have to allow His Son to be murdered. Many theologians down the ages have taught that God could have redeemed the human race in some other way. The commonly accepted teaching is that God chose to become incarnate and submit to crucifixion, because that would be the most perfect way for Him to redeem the human race, albeit an extremely painful one. Death on the Cross shows, like nothing else, how much God loves us.
Does that sound like human sacrifice? Yes – but with a difference: the victim freely accepted His death. Jesus didn’t have to die, which is why the Catholic Mass contains the words, “a death He freely accepted.”
Q. And besides that, how does God have a son who can come to earth and die, and yet he and this son are one being, together with the Holy Spirit? Let’s face it, the Trinity makes no sense.
Here’s how I explain the doctrine to myself. God doesn’t have three minds or three consciousnesses. There’s only One Mind, One Consciousness, in God. What we call God the Father is the Mind and “Heart” of God – the font of God’s knowledge and God’s love. God the Son is simply God’s knowledge, or consciousness, or idea, of Himself. We can speak of the Son as the “I” or self-expression of the Father. God the Holy Spirit is God’s love of Himself. God loves Himself through knowing Himself. Hence God the Holy Spirit is the “we” of the Father and the Son.

God is a personal agent, and knowledge and love are the two activities that characterize personal agents. These two activities are irreducibly distinct: knowing and loving are not the same activity, and both are different from simply being. Hence there is a distinction between God as such, God’s self-knowledge and God’s self-love. On the other hand, none of these can exist without the other two: hence they are distinct but inseparable.

God, being infinitely perfect, necessarily knows Himself and loves Himself. He does that by virtue of His very Nature. I, being a mere creature, have an intellect that knows and a will that loves. But God the Creator doesn’t merely have an intellect and will; He is His intellect and will. His intellect and will are His very essence. That’s what we mean when we say that God is three persons in one and the same essence.
The three persons are not merely three faces of God, for that would be Modalism, and would make God a Trinity only from an outsider’s perspective. The three-ness is intrinsic to God, since God necessarily knows and loves Himself. It is in this sense that the Catholic Encyclopedia, in its article on the Trinity, refers to the three persons as “three modes of existence” of “the same mind.”
God’s self-knowledge is (eternally) generated by God’s Mind, and God’s self-love is (eternally) produced by this same Mind (or “Heart” if you prefer), through His knowledge of Himself. That’s what we mean when we say that God the Son is begotten of the Father, and that God the Holy Spirit proceeds from the Father through the Son.
The doctrine of the Trinity is not provable by unaided human reason, for we can never understand the nature of God. Nevertheless, reason can make the doctrine philosophically plausible.
On a practical level, the doctrine of the Trinity means that we should not strive to have a two-way relationship with God. Rather, we should strive to have a six-way relationship with God, since there are three Divine persons, or three irreducibly distinct modes of existence of the same Being.
God chose to create a human nature (body and soul, with all its human faculties) and to take over that nature. The human being in question was Jesus. Although Jesus had a human nature, He had no human ego. In this one respect, He was different from other human beings. He had a human intellect, a human will, a human heart with human feelings, and a human body that could suffer and die. But He had no human ego at all. What took its place? God did – or more precisely, the person of God the Son, Who is God’s self-consciousness.

Christians believe that God the Son (that is, God’s self-consciousness) assumed Jesus’ human nature. Jesus is the human revelation of God, in the flesh. That’s why the person of Jesus is God the Son (Who is God’s expression of Himself) rather than God the Father or God the Holy Spirit. To say that God the Son assumed a human nature means He took it over, without in any way destroying its integrity. One consequence of this teaching is that Jesus’ human will was completely free, and yet incapable of going against God’s will (the doctrine of the impeccability of Christ). Jesus’ human freedom allowed Him to choose between different goods; but because He was a Divine Person, He could never choose evil.
God’s choice to become incarnate in order to save us was a timeless choice, like all of God’s choices, even though it was caused by (and logically subsequent to) God’s knowledge of the Fall of man. Hence “when the Word took Flesh, there was no change in the Word; all the change was in the Flesh,” as the Catholic Encyclopedia says in its article on the Incarnation.
By the way, some theologians have taught that God would have become incarnate even if Adam hadn’t sinned. This was the view of Duns Scotus, for example.
Q. When reading this article [about the Fall and God’s plan to redeem the human race through the Incarnation] I wondered to myself how God the Father broke the news about this plan to Jesus… But then I realized that given that God and Jesus are supposed to be one and the same, no such conversation would be necessary. Except that when Jesus was on earth he did have conversations with God the Father, and God the Father knew things Jesus didn’t.
You’re quite right in saying that God the Father does not need to “break the news” to God the Son, as if they possessed two distinct centers of consciousness. The Son is the Father’s consciousness of Himself. As for Jesus talking to His Father while on earth, that’s because He had (and still has) two minds, with two distinct consciousnesses: a Divine Mind which knows everything, and a human mind which is finite and doesn’t know everything. (The Church teaches that Jesus had two wills, a Divine will and a human will, which are necessarily in harmony; hence He must also have two intellects or minds.) When the Gospels talk about Jesus conversing with His Father, they’re referring to Jesus’ human mind conversing with His Father.
However, there is a sense in which we can say that God talks to Himself, because He knows and loves Himself perfectly. Hence we can say that God communes with Himself.
Q. Why did the [Catholic] church condemn birth control? Why was masturbation wrong? Why did priests have to be celibate? Just believe, the Catholic Church said. Just accept. We know what is best.
According to Catholic teaching, priests don’t have to be celibate. During the first three centuries, they weren’t, and even to this day, in the Eastern rites of the Catholic Church, married men can be ordained priests. The reasons for priestly celibacy in the Latin rite can be found here.
As for the Church’s teachings on sex, they can be easily understood if we first grasp the notion that to be good is to be whole, or fully integrated, and that a bad act is a self-stunting act.
The next thing that needs to be grasped is that human beings, like all living creatures, have certain built-in ends. For example, the end or purpose of the heart is to pump blood around the body; that is what it is for. We can also speak of human processes and activities as having an end or purpose. Thus the purposes of eating and breathing are to supply the body with food and oxygen respectively.
What’s the purpose of sex? Sex has a two-fold end. It serves to unify a couple, at the deepest psychological level. It also has a procreative purpose: sex is for making babies. Thus sex is essentially both a life-giving act and a love-giving act. That doesn’t mean each and every sexual act has to give life, but it does mean that sex is about life, as well as love. Both ends are built into the same act.
Masturbation is wrong because it consists of sex divorced from both life and love. It’s a narcissistic act, which sunders sex from both of its built-in ends. To engage in such an act is self-stunting, and hence bad.
According to Catholic teaching, using contraceptives is wrong because it is an attempt to divorce the life-giving aspect of sex from the love-giving aspect, robbing the act of one of its built-in ends. As such, contraceptive sex is stunted sex, which is why the Catholic Church regards it as wrong.
A married couple who have sex when one partner is either permanently or temporarily infertile do no wrong, so long as it is not their intention to divorce the life-giving aspect of sex from the love-giving aspect. Hence natural family planning and sex after menopause are morally permissible.
Two people who have sex without having made a life-long commitment to one another are incapable of realizing the love-giving aspect of the act at its deepest level, as their love lacks commitment. Nor can they properly realize the life-giving aspect of the act, as the responsible creation of a new human life presupposes a willingness to stay together for life, in order to take care of that child. That’s why the Church teaches that premarital sex is wrong.
Homosexual sex is sex which is by its very nature incapable of creating a new life, unlike sex between infertile couples, which is robbed of its procreative aspect by external circumstances that have nothing to do with the act itself (age or infertility). Since homosexual sex is incapable of having a life-generating significance, it is by its very nature incapable of realizing one of the two built-in goals of sex itself. Hence it too stunts the human psyche and is wrong. The same goes for anal sex between married couples.
That’s the Catholic teaching on sex, in a nutshell. You may or may not agree with it, but it does make sense, and it isn’t something Catholics are told to “Just believe,” as Libby Anne seems to think. It claims to be based on natural law.
Q. [Doesn’t] the Catholic Church admit that there are errors in the Bible?
No. I don’t know where Libby Anne got this funny idea.
“Since therefore all that the inspired authors or sacred writers affirm should be regarded as affirmed by the Holy Spirit, we must acknowledge that the books of Scripture firmly, faithfully, and without error teach that truth which God, for the sake of our salvation, wished to see confided to the Sacred Scriptures.” (Vatican II, Dei Verbum 11.) However, the Catholic Church does not interpret the Bible in a crudely literalistic fashion. To quote from the New Catechism:
109 In Sacred Scripture, God speaks to man in a human way. To interpret Scripture correctly, the reader must be attentive to what the human authors truly wanted to affirm, and to what God wanted to reveal to us by their words.75
110 In order to discover the sacred authors’ intention, the reader must take into account the conditions of their time and culture, the literary genres in use at that time, and the modes of feeling, speaking and narrating then current. “For the fact is that truth is differently presented and expressed in the various types of historical writing, in prophetical and poetical texts, and in other forms of literary expression.”76
111 But since Sacred Scripture is inspired, there is another and no less important principle of correct interpretation, without which Scripture would remain a dead letter. “Sacred Scripture must be read and interpreted in the light of the same Spirit by whom it was written.”77
Well, I hope that answered some of Libby Anne’s questions.
In my next post, I’ll address the question of whether Libby Anne’s atheistic outlook can provide a suitable foundation for ethics, before proceeding to discuss her views on human life and abortion.