In the July 9th, 2005 issue of New Scientist, the following passage appears, quoting Brown University’s Ken Miller:
“It’s what statisticians call a retrospective fallacy.” It is like equating the odds of drawing two pairs in poker with the odds of drawing a particular two-pair hand – say a pair of red queens, a pair of black 10s and the ace of clubs. “By demanding a particular outcome, as opposed to a functional outcome, you stack the odds,” Miller says. What these calculations fail to recognise is that many different protein sequences can be functional. It is not uncommon for proteins in different species to vary by 80 to 90 per cent, yet still perform the same function. [Go here for the article.]
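The poker comparison in the quoted passage is easy to check numerically. The following sketch (my own illustration, not from the article) computes the probability of the functional *type* (any two-pair hand) versus a particular *token* (one exact five-card hand) in standard five-card draw:

```python
# Combinatorial check of the poker analogy: the probability of the
# "type" (any two-pair hand) dwarfs that of a "token" (one exact hand).
from math import comb

total_hands = comb(52, 5)  # 2,598,960 possible five-card hands

# Exactly two pair: choose 2 ranks for the pairs, 2 suits within each,
# then a kicker from one of the 11 remaining ranks in any of 4 suits.
two_pair_hands = comb(13, 2) * comb(4, 2) ** 2 * 11 * 4  # 123,552

p_type = two_pair_hands / total_hands  # any two-pair hand
p_token = 1 / total_hands              # one specific hand

print(f"P(any two pair)      = {p_type:.6f}")   # ~0.047539
print(f"P(one specific hand) = {p_token:.2e}")  # ~3.85e-07
```

So the type is 123,552 times more probable than any one token of it, which is the gap Miller says these calculations ignore. Whether that charge actually lands against ID-style calculations is the question taken up below.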
I commented on this article, and on Miller’s charge of the “retrospective fallacy” in particular, here. Miller’s point is that ID proponents like me fail to make an appropriate type-token distinction, focusing on the improbability of a particular token of a protein/gene/etc. when in fact they should be focusing on the improbability of the type of protein/gene/etc. that performs the same function as the token. This charge is unwarranted. In fact, I’ve explicitly countered this concern in my writings (notably in section 5.10 of No Free Lunch, where I assign probabilities in terms of perturbation tolerance and perturbation identity factors, which take into account variants/perturbations of tokens that belong to the same functional type).
Thus, what Miller means by a “retrospective fallacy” fails to apply to ID reasoning. My main concern here, however, is his statement that the term “retrospective fallacy” is common usage among statisticians. I’ve looked through my statistics and probability texts (I own quite a number) and failed to find this usage. I also looked through my books on informal logic and fallacies and again found no reference to “retrospective fallacy.” Perhaps I’m missing something. Or perhaps Miller just made it up.
I’ll write him and find out. Stay tuned.