Crossing swords with a professional philosopher can be a dangerous thing. I’m not one, of course; I simply happen to have a Ph.D. in philosophy. But Professor Edward Feser is a professional philosopher, and a formidable debating opponent, as one well-known evolutionary biologist is about to find out.
In a recent post of mine, entitled Minds, brains, computers and skunk butts, I took issue with Professor Jerry Coyne’s assertion that the evolution of human intelligence is no more remarkable than the evolution of skunk butts. (To be fair, Coyne was not trying to be offensive in his comparison: apparently he really did have a pet skunk for several years, and the analogy was simply the first that sprang to mind for him, as a biologist.) In my post, I cited a philosophical argument put forward by Professor Feser: that the intentionality or “meaningfulness” of our thoughts cannot be explained in materialist terms, since thoughts have an inherent meaning, whereas physical states of affairs (such as brain processes) have no inherent meaning of their own. However, Professor Coyne was not terribly impressed with this argument. He replied as follows:
I’ll leave this one to the philosophers, except to say that “meaning” seems to pose no problem, either physically or evolutionarily, to me: our brain-modules have evolved to make sense of what we take in from the environment. And that’s not unique to us: primates surely have a sense of “meaning” that they derive from information processed from the environment, and we can extend this all the way back, in ever more rudimentary form, to protozoans.
He shouldn’t have said that.
Professor Edward Feser has just issued a devastating response to Professor Coyne over at his website. I’d like to invite readers at Uncommon Descent to have a look at it for themselves, here. It’s a very entertaining read. Feser concludes:
… if one is going to aver confidently that “‘meaning’… pose[s] no problem,” he had better give at least some evidence of knowing what the philosophical problem of meaning or intentionality is and what philosophers have said about it.
Wise words, indeed.
Now, it occurs to me that Professor Coyne, upon reading Professor Feser’s post, might attempt to argue as follows: “When I wrote about our ancestors’ brains ‘making sense’ of their environment, I didn’t mean that they needed to affirm certain propositions about it. I simply meant that they could discriminate between different states of affairs (e.g. friend vs. foe; safe vs. poisonous food) in a way that worked to their biological advantage. All our higher-level senses of ‘meaning’ subsequently evolved from that ability.”
However, what this response overlooks is the fact that Coyne’s earlier argument contained a subtle but illicit equivocation. Being able to discriminate between A and B is quite different from being able to understand the definition of what it means to be A or B. Likewise, explaining how the human brain came to be able to distinguish safe from poisonous food is not the same thing as explaining how human beings came to be able to talk to each other (using the vehicle of language) about the idea of putting poison in someone’s food in order to kill them. To do that, you need words that have a pre-agreed meaning, and you need to be able to put your thoughts into words that other people can understand. And if that sounds easy, take a look at the previous sentence, and ask yourself how many words have a meaning that you could communicate to a clever chimp by pointing and gesturing. Try communicating the meanings of “to,” “that,” “need,” “word,” “have,” “a,” “agreed,” “and,” “able” and “into” to a chimp, using sign language. Finally, try communicating the meaning of “thought” and “meaning” using nothing but sign language. Somehow I don’t think the attempt is going to work.
So when Professor Coyne asserts in his latest reply that “the brain is a meat machine that cranks out thoughts and emotions, and when the brain dies, so do its products,” it is he who is begging the question. He is assuming that brains can do something that no physical object can do: namely, generate propositions that have a meaning in their own right, even though the brain processes that do the generating are themselves utterly devoid of meaning.
Regarding Professor Coyne’s other major assertion, that a chimp-sized brain would require a growth rate of only 0.00056% per generation, over a five-million-year period, to attain a human brain size: I don’t dispute his mathematics for a moment. What I do dispute, however, is the implicit claim that growth in volume, or some other incremental quantitative change, is all you need to get from a chimp-sized brain to a human brain. In my earlier post, I cited various experts (e.g. Professor Bruce Lahn, a Howard Hughes Medical Institute researcher at the University of Chicago) who had argued that human evolution would have required a large number of mutations in a large number of genes, that the changes which generated the human brain were “categorically different from the typical processes of acquiring new biological traits,” and that human evolution was not just a matter of spontaneous advantageous mutations arising within the human lineage. (See here for Lahn’s preferred explanation of the unique traits of the human brain.) Unlike Lahn, I believe that the changes that eventually gave rise to the human brain were intelligently guided; but like Lahn, I believe that the question of the human brain’s origin should be resolved by scientific research and experimentation. Mathematical calculations about growth rates won’t help us much, as the human brain is not merely a scaled-up version of a chimp’s.
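Just to be clear that I am not quibbling with the arithmetic itself, here is a rough Python sketch of the kind of compound-growth calculation that yields a figure in the neighbourhood of Coyne’s 0.00056%. The brain volumes (roughly australopithecine and modern human cranial capacities) and the 25-year generation length are my own illustrative assumptions, not necessarily the inputs Coyne used:

```python
# Rough check of the per-generation growth-rate figure.
# The volumes and generation length below are illustrative assumptions,
# not necessarily the figures Coyne used in his own calculation.

start_cc = 450.0        # assumed ancestral (roughly australopithecine) cranial capacity, in cc
end_cc = 1350.0         # assumed modern human cranial capacity, in cc
years = 5_000_000       # the five-million-year period quoted above
generation_length = 25  # assumed years per generation

generations = years / generation_length  # 200,000 generations

# Constant per-generation rate r satisfying start_cc * (1 + r) ** generations == end_cc
r = (end_cc / start_cc) ** (1.0 / generations) - 1.0

print(f"Generations: {generations:,.0f}")
print(f"Per-generation growth rate: {r * 100:.5f}%")  # roughly 0.00055%
```

With these inputs the compounded rate works out to about 0.00055% per generation, which is in the same ballpark as the figure Coyne cites; the exact number depends on which volumes and generation length one plugs in. And that is precisely my point: the arithmetic is trivial, but it tells us nothing about whether the changes required were merely quantitative.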