A couple of months ago, I took an online Moodle class with the Center for Inquiry on the topic of the evolution debate. The instructor was the renowned philosopher and evolutionary biologist Massimo Pigliucci. I entered into a dialogue with Pigliucci and the other students about the evidence for the efficacy of naturalistic evolution, and also presented some counter-arguments against it and for ID.
During the course of our discussion, Pigliucci made some claims which astonished me, especially as arguments coming from a trained philosopher and world-renowned evolutionary theorist. To my surprise, when I articulated the numerous probabilistic hurdles, pervasive at every level, which Darwinism must overcome in order to be considered a viable paradigm, he wrote,
No evolutionary biologist I know…actually attaches probabilities to specific evolutionary events of the type you are talking about. There is no way to do that. Similarly, there is no way to attach probabilities to the set of physical laws regulating our universe, for the simple reason that we have no sample population to draw from (which is why typically you estimate probabilities).
This struck me at the time as a very strange argument to make, given that many Darwinists (Dawkins and Futuyma spring to mind) say that the brilliance of Darwin was to reduce the improbability of getting complex, design-like systems. What was the whole point of “Climbing Mount Improbable”? The point was that evolution didn’t have to leap up the sheer face of the cliff. It could meander up the gently sloping rear side, in small probability increments. But if we can’t assign probabilities to the events, what exactly has Darwin’s theory done?
In response to Massimo, I cited several attempts by Darwinists, many of them in the peer-reviewed literature, to demonstrate the efficacy of the Darwinian mechanism by means of probabilistic arguments. I wrote,
I am not convinced that we are reading the same literature. I am sure that the recent Wilf and Ewens PNAS paper cannot have escaped your notice (much was made of it by several prominent internet bloggers). The whole purpose of this paper was to demonstrate (unsuccessfully, in my opinion) that “There’s plenty of time for evolution” and it attempts to address probabilistic arguments against the efficacy of blind evolution.
Another such paper which springs to mind is the Durrett and Schmidt (2008) paper, which attempts to calculate the waiting time for a pair of mutations, the first of which inactivates an existing transcription factor binding site and the second of which creates a new one.
Moreover, if not to simulate the probabilistic feasibility of the Darwinian “search” function, what is the purpose of evolutionary computer algorithms such as Dawkins’ Weasel or Lenski’s Avida? The evolutionary informatics lab lists several peer-reviewed publications which evaluate the probabilistic plausibility of Darwinian theory (and find it wanting).
Further, Sean B. Carroll makes a probabilistic argument in his book, The Making of the Fittest, in his discussion of evolutionary convergence. He begins by introducing “some hard evidence from the evolution of ultraviolet vision in birds.” He continues, “In four different orders, there are both ultraviolet sensing and violet-sensing species. This means that the switch between violet-sensing and UV-sensing capabilities must have evolved at least four separate times. The difference between birds is always correlated with a particular amino acid, at position 90 in their short wavelength (SWS) opsin; birds with a serine in this position are tuned to violet, birds with a cysteine here are tuned to UV.”
Carroll explains that “this amino acid is encoded by DNA positions 268-270 in the text of the birds’ SWS opsin genes. Close scrutiny of the DNA text of the birds’ SWS opsin gene reveals that the difference between serine and cysteine involves just a single letter of the DNA text at position 268.”
So, in the case of switching from a violet-sensing opsin to an ultraviolet-sensing opsin, you need a mutation at position 268, and this must have occurred independently at least four times. Carroll then reaches for the calculator in an attempt to reassure us that convergent evolution is not only probable, but “abundantly so”.
The figures used in the calculation are as follows:
Average per-site rate of mutation = 1 mutation per site per 500,000,000 offspring.
Number of copies of the gene = 2 copies
Number of offspring produced per year = estimated to be at least 1 million offspring per year.
Because there are 2 copies of the gene, we can cut the average per-site rate of mutation to 1 per 250,000,000 offspring.
There are three possible mutations at this position (A to T, A to C, and A to G). Only A to T will create a UV-shifting cysteine. Assuming that the three mutations are roughly equally likely, one in three mutations at this site will cause the switch. Thus, one A to T mutation will occur in roughly every 750 million birds.
Carroll then factors in the number of offspring produced per year (taken to be 1 million). Dividing this into the rate of one mutation per 750 million birds, he concludes that the serine-to-cysteine switch will occur roughly once every 750 years. Thus, Carroll argues, Darwinism may be rendered a plausible explanation for such convergence.
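Carroll’s back-of-the-envelope arithmetic is simple enough to check directly. Here is a minimal sketch in Python using only the figures quoted above (the variable names are my own, not Carroll’s):

```python
# Reproducing the back-of-the-envelope calculation quoted above from
# The Making of the Fittest. All figures are the ones given in the text.

per_site_mutation_rate = 1 / 500_000_000  # mutations per site per offspring
gene_copies = 2                           # two copies of the SWS opsin gene
offspring_per_year = 1_000_000            # estimated bird offspring per year

# Two gene copies halve the waiting time for a hit at position 268.
rate_at_site = per_site_mutation_rate * gene_copies   # 1 per 250,000,000

# Only one of the three possible substitutions (A to T) yields cysteine.
rate_A_to_T = rate_at_site / 3                        # 1 per 750,000,000

offspring_per_mutation = 1 / rate_A_to_T              # ~750 million birds
years_per_mutation = offspring_per_mutation / offspring_per_year

print(offspring_per_mutation)  # ~750,000,000 birds per A-to-T mutation
print(years_per_mutation)      # ~750 years between switches
```

The arithmetic does come out at one serine-to-cysteine switch per roughly 750 years, as Carroll reports.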
Is that not a probabilistic argument? I don’t find the argument very impressive (it cherry-picks). But it should be sufficient to refute your claim that evolutionary biologists are not interested in evaluating probabilistic feasibility.
To this, I received no response.
Indeed, the whole discipline of population genetics is predicated on evaluating probabilistic feasibility (e.g. “How long will a certain variant take to appear and be fixed in the population, given this population size and generation turnover time?”). But it doesn’t end there. Darwinists are accustomed to making probabilistic arguments all the time in order to establish common ancestry (e.g. “What are the chances of these same parallel substitutions or element insertions occurring by convergence in independent lineages?”).

Of course, when shared similarities can be explained by common descent, those similarities are taken as evidence for the descent model. On the other hand, when shared similarities cannot be explained by common ancestry, they are taken as evidence for convergent evolution. In chapter 5 of The Myth of Junk DNA, Jonathan Wells highlighted two papers (Balakirev and Ayala 2003; Khachane and Harrison 2009) in which similarities in pseudogenes (which cannot be explained by descent) are taken as presumptive evidence that those pseudogenes are functional. The whole argument is thus rendered circular.
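The waiting-time question that population genetics asks can be illustrated with textbook results. The sketch below is a toy calculation, not from the exchange itself: it uses Kimura’s fixation probability for a neutral mutant and Haldane’s approximation for a beneficial one, and the values of N, mu, and s are made-up examples.

```python
# Toy illustration of the population-genetics question quoted above:
# "How long until a given variant appears and is destined to fix?"
# Assumes an idealized Wright-Fisher population of constant size.
# N, mu, and s are illustrative values only.

N = 10_000   # diploid population size
mu = 1e-8    # mutation rate to the variant, per gamete per generation
s = 0.01     # selective advantage of the beneficial variant

# New copies of the variant arise at rate 2*N*mu per generation.
new_copies_per_generation = 2 * N * mu

# Classic fixation probabilities:
p_fix_neutral = 1 / (2 * N)   # Kimura: a new neutral mutant
p_fix_selected = 2 * s        # Haldane's approximation, for small s

# Expected generations until a copy destined for fixation first appears:
wait_neutral = 1 / (new_copies_per_generation * p_fix_neutral)    # = 1/mu
wait_selected = 1 / (new_copies_per_generation * p_fix_selected)  # = 1/(4*N*mu*s)

print(wait_neutral)   # ~100,000,000 generations (1/mu)
print(wait_selected)  # ~250,000 generations
```

Note the well-known consequence for the neutral case: the N terms cancel, so the expected waiting time reduces to 1/mu generations regardless of population size.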