One of the clearest and most compelling arguments against materialism is that it is unable to account for the simple fact that our thoughts possess a meaning in their own right. As philosopher Ed Feser puts it in an online post entitled, Some brief arguments for dualism, Part I:
Thoughts and the like possess inherent meaning or intentionality; brain processes, like ink marks, sound waves, and the like, are utterly devoid of any inherent meaning or intentionality; so thoughts and the like cannot possibly be identified with brain processes.
The argument seems especially convincing when we consider abstract concepts. Take the famous line, “Honesty is a greatly overrated virtue,” from Jane Austen’s Pride and Prejudice. It seems preposterous to suppose that a concrete entity like a set of neurons, or even a neural process, could mean “honesty,” “virtue,” or any of the other words in that memorable quote.
Now, however, the materialists are fighting back, and attempting to locate meaning in the brain itself. A team of cognitive neuroscientists claims to have identified the areas of the brain that are responsible for processing the meanings (and not just the sounds) of specific words. Their findings were presented at the 2012 Society for the Neurobiology of Language Conference in San Sebastian, Spain. Presenting the team’s research findings, Joao Correia of Maastricht University told the conference that his team had decided to address the vital question: “How do we represent the meaning of words, independent of the language we are listening to?” A report in New Scientist magazine entitled,
“Mind-reading scan locates site of meaning in the brain” (16 November 2012) by Douglas Heaven, takes up the story:
To begin the hunt, Correia and his colleagues used an fMRI scanner to study the brain activity of eight bilingual volunteers as they listened to the names of four animals, bull, horse, shark and duck, spoken in English.
The team monitored patterns of neural activity in the left anterior temporal cortex – known to be involved in a range of semantic tasks – and trained an algorithm to identify which word a participant had heard based on the pattern of activity.
Since the team wanted to pinpoint activity related to meaning, they picked words that were as similar as possible – all four contain one syllable and belong to the concept of animals. They also chose words that would have been learned at roughly the same time of life and took a similar time for the brain to process.
They then tested whether the differences in brain activity were related to the sound of the word or its meaning by testing whether the algorithm could identify the correct animal while the participants listened to the Dutch version of the word.
The system was still able to identify which animal had been named, despite being trained with patterns generated for English words. For example, the word “horse” and its Dutch equivalent “paard” gave rise to the same brain pattern, suggesting that the activity represented the word’s meaning – the concept of a horse…
“This type of pattern recognition approach is a very exciting scientific tool for investigating how and where knowledge is represented in the brain,” says Zoe Woodhead at University College London, who wasn’t involved in the study. “Words that mean the same thing in different languages activate the same set of neurons encoding that concept, regardless of the fact that the two words look and sound completely different.”
As resolutions in brain imaging improve, Correia predicts that a greater number of words will be predicted from brain activity alone. In principle, it might even be possible to identify whole sentences in real time, he says…
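For readers who would like to see the logic of the experiment spelled out, here is a toy sketch, in Python, of a train-on-English, test-on-Dutch decoding analysis of the general kind described above. The voxel patterns below are simulated, and the choice of a linear support-vector classifier from scikit-learn is my own assumption; the team’s actual fMRI pipeline will have differed in its details.

```python
# Toy sketch of the train-on-English, test-on-Dutch decoding logic.
# The "voxel" patterns are synthetic; the real study used fMRI data
# and its own classification pipeline, which is not reproduced here.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
animals = ["bull", "horse", "shark", "duck"]
n_voxels = 200        # hypothetical size of the region of interest
trials_per_word = 20  # hypothetical number of trials per word

# Suppose each animal concept evokes a characteristic (noisy) activity
# pattern that is shared across languages -- the hypothesis under test.
concept_patterns = rng.normal(size=(len(animals), n_voxels))

def simulate_trials(noise=1.0):
    X, y = [], []
    for label, pattern in enumerate(concept_patterns):
        for _ in range(trials_per_word):
            X.append(pattern + rng.normal(scale=noise, size=n_voxels))
            y.append(label)
    return np.array(X), np.array(y)

# "English" trials are used for training, "Dutch" trials for testing.
X_english, y_english = simulate_trials()
X_dutch, y_dutch = simulate_trials()

classifier = LinearSVC().fit(X_english, y_english)
accuracy = accuracy_score(y_dutch, classifier.predict(X_dutch))
print(f"Cross-language decoding accuracy: {accuracy:.2f} (chance = 0.25)")
```

If accuracy on the Dutch trials is reliably above chance (25 per cent for four animals), then whatever the classifier has latched onto cannot be tied to the sound of the English words, which is the gist of the team’s generalization test.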
So, have Correia and his team located the meaning of words in the brain? Summing up their research findings, Correia et al. wrote in the Abstract of their report (delivered on Friday October 26th, 2012, at 2:20 p.m., at Slide Session B; see p. 12 of the Conference Report):
The results of our discrimination analysis show that word decoding involves a distributed network of brain regions consistent with the proposed ‘dual-stream model’ (Hickok and Poeppel, 2007). The results of our generalization analysis highlights a focal and specific role of a left anterior temporal area in semantic/concept decoding. Together, these distributed and focal brain activity patterns subserve the extraction of abstract semantic concepts from acoustically diverse English and Dutch words during bilingual speech comprehension.
I had never heard of the Dual Stream model until I came across this report, and I suspect most of my readers won’t have heard of it, either. Professor Greg Hickok helpfully explains the model in a post entitled, Dual Stream Model of Speech/Language Processing: Tractography Evidence (Wednesday, December 3, 2008), on a blog called Talking Brains – News and views on the neural organization of language, which he moderates together with his co-author, Professor David Poeppel:
The Dual Stream model of speech/language processing holds that there are two functionally distinct computational/neural networks that process speech/language information, one that interfaces sensory/phonological networks with conceptual-semantic systems, and one that interfaces sensory/phonological networks with motor-articulatory systems (Hickok & Poeppel, 2000, 2004, 2007). We have laid out our current best guess as to the neural architecture of these systems in our 2007 paper…
[A diagram illustrating the model is included in the post.]
It is worth pointing out that under reasonable assumptions some version of a dual stream model has to be right. If we accept (i) that sensory/phonological representations make contact both with conceptual systems and with motor systems, and (ii) that conceptual systems and motor-speech systems are not the same thing, then it follows that there must be two processing streams, one leading to conceptual systems, the other leading to motor systems. This is not a new idea, of course. It has obvious parallels to research in the primate visual system, and (well before the visual folks came up with the idea) it was a central feature of Wernicke’s model of the functional anatomy of language. In other words, not only does the model make sense for speech/language processing, it appears to be a “general principle of sensory system organization” (Hickok & Poeppel, 2007, p. 401) and it has stood the test of time.
The abstract of Hickok and Poeppel’s original 2007 paper, The cortical organization of speech processing (Nature Reviews Neuroscience, 8 (5), 393–402, DOI: 10.1038/nrn2113), is even more succinct:
Despite decades of research, the functional neuroanatomy of speech processing has been difficult to characterize. A major impediment to progress may have been the failure to consider task effects when mapping speech-related processing systems. We outline a dual-stream model of speech processing that remedies this situation. In this model, a ventral stream processes speech signals for comprehension, and a dorsal stream maps acoustic speech signals to frontal lobe articulatory networks. The model assumes that the ventral stream is largely bilaterally organized – although there are important computational differences between the left- and right-hemisphere systems – and that the dorsal stream is strongly left-hemisphere dominant.
So much for the theoretical background. What we need to ask ourselves now is: what have Correia and his team actually established?
The research findings of Correia et al. certainly lend support to the idea that the left anterior temporal cortex is involved in decoding words in a way that assists with identifying their meanings, rather than their sounds. However, I think it would be an unwarranted leap to conclude that this part of the brain plays a special role in identifying the actual meaning of a word. Instead, what I would propose is that this region plays a subsidiary but nonetheless important role, preparatory to the activity of locating the meaning of a word.
What I am tentatively suggesting is that the left anterior temporal cortex may store collocations (or frequent co-occurrences of words), by means of neural connections whose strength corresponds to the relative frequency with which two words are found to occur together. In other words, this part of the brain doesn’t store the meanings of words, but the frequency with which a word having a certain meaning (whether in English or Dutch) is likely to be used with certain other words. If you can identify one word in a sentence, this part of the brain would then help in identifying the other words that it is likely to be used with – irrespective of how those words sound in the two languages. That’s why it’s so useful for semantic decoding.
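To make this suggestion a little more concrete, here is a toy sketch, in Python, of the kind of collocation store I have in mind. The miniature ‘corpus’ and the resulting counts are invented purely for illustration; the point is only that a table of co-occurrence frequencies, with no representation of meaning anywhere in it, is enough to predict which words are likely to accompany a given word.

```python
# Toy collocation store: the "connection strengths" are just co-occurrence
# counts gathered from a tiny, made-up corpus, and they are used to guess
# which words are likely to accompany a given word.
from collections import Counter, defaultdict
from itertools import combinations

corpus = [
    "the horse galloped across the field",
    "the farmer fed the horse some hay",
    "the shark swam near the boat",
    "the duck paddled across the pond",
]

cooccurrence = defaultdict(Counter)
for sentence in corpus:
    for w1, w2 in combinations(set(sentence.split()), 2):
        cooccurrence[w1][w2] += 1
        cooccurrence[w2][w1] += 1

def likely_companions(word, n=3):
    """Return the words most often seen alongside `word` in the corpus."""
    return [w for w, _ in cooccurrence[word].most_common(n)]

# Words drawn from the two 'horse' sentences come out on top.
print(likely_companions("horse"))
```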
Even when individuals are only exposed to single words (as in the experiment conducted by Correia et al.), their brains would naturally search for related words, because human beings are, after all, creatures who are designed to seek meanings. We can’t help it – that’s what we do, as rational animals. Moreover, we habitually tend to communicate with each other in whole sentences, not one-word utterances. So it is not surprising that the left anterior temporal cortex of these individuals was still activated.
By the way, for those who may be wondering, here is how Wikipedia defines a Collocation:
In corpus linguistics, a collocation is a sequence of words or terms that co-occur more often than would be expected by chance. In phraseology, collocation is a sub-type of phraseme. An example of a phraseological collocation (from Michael Halliday) is the expression strong tea. While the same meaning could be conveyed by the roughly equivalent powerful tea, this expression is considered incorrect by English speakers. Conversely, the corresponding expression for computer, powerful computers, is preferred over strong computers. Phraseological collocations should not be confused with idioms, where the meaning is derived from the expression as a whole, whereas collocations are mostly compositional.
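For anyone who would like to see ‘more often than would be expected by chance’ made concrete, here is a small Python illustration using pointwise mutual information (PMI), a standard measure of word association in corpus linguistics. The counts below are invented for the sake of the example; real figures would come from a large corpus.

```python
# "Co-occur more often than expected by chance", measured with pointwise
# mutual information (PMI). All counts here are invented for illustration.
import math

total_pairs = 1_000_000  # hypothetical number of adjacent word pairs in a corpus
word_count = {"strong": 12_000, "powerful": 8_000, "tea": 3_000}
pair_count = {("strong", "tea"): 90, ("powerful", "tea"): 4}

def pmi(w1, w2):
    """Positive PMI means the pair co-occurs more often than chance predicts."""
    p_pair = pair_count[(w1, w2)] / total_pairs
    p_w1 = word_count[w1] / total_pairs
    p_w2 = word_count[w2] / total_pairs
    return math.log2(p_pair / (p_w1 * p_w2))

print(f"PMI(strong, tea)   = {pmi('strong', 'tea'):.2f}")    # positive: a collocation
print(f"PMI(powerful, tea) = {pmi('powerful', 'tea'):.2f}")  # negative: not a collocation
```

On these made-up figures, ‘strong tea’ comes out with a positive PMI while ‘powerful tea’ comes out negative, which matches Halliday’s observation about the two expressions.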
I should note that English and Dutch are very similar languages – they’re practically sisters. What I would be interested to see is the results of research conducted on individuals who are bilingual in English and Japanese, two languages whose grammar, collocations and idioms differ markedly from one another. It is doubtful whether researchers would observe the same neat one-to-one mapping between the meanings of English and Japanese words as they discovered between English and Dutch words.
To sum up: it is simply nonsensical to assert that the brain, or any other material entity, could possibly store the meaning of a word – particularly an abstract word. Meaning is not a physical property as such. It is perfectly reasonable, however, to claim that the brain contains centers that not only decode sounds into the words of our mother tongue (or a second language), but also enable us to predict, from having heard one word, which other words it is likely to be associated with. It is not surprising, either, that closely related languages like English and Dutch would generate much the same pattern of predictions regarding what word will come next, even if the word sounds different in the two languages.
Well, that’s my two cents. But I may be wrong. What do readers think?