From David DiSalvo’s “Neuropsyched” blog at Forbes, we learn about “Five Big Developments in Neuroscience to Watch” (June 17, 2011), such as:
4. Altering Moral Judgments with Magnetism

Research of the last couple of decades has shown that injuries to a part of the brain called the right temporoparietal junction (RTPJ), located at the brain’s surface above and behind the right ear, can change a patient’s moral judgments. When these patients are asked to answer morally challenging questions that weigh the life of one person against others, they consistently make utilitarian decisions without feeling the least bit uneasy. Their moral judgments about life and death, so vexing to most of us, become clinical and routine.
Researchers have recently found that they can induce a similar effect using magnetism (transcranial magnetic stimulation, or TMS) to disrupt RTPJ activity. When participants were exposed to magnetic “bursts” from a TMS device, their judgments about what is morally permissible significantly changed. For example, they were more likely to say that intending to harm another was morally permissible if the other person luckily avoided becoming a victim; they considered the intention of the first person to be irrelevant. The effect was only temporary, but the implications are massive. Most of us consider moral judgment a higher order thought process, but this research shows that it can be tweaked by a weak magnetic field in a matter of minutes.
Some confusion evident here: Moral judgment, like any other thought process, is susceptible to influence. What of the millennia-old practice of getting people drunk before proposing actions that they would never commit while sober? It’s fairly reliable and much more straightforward. It also says nothing about morality as a thought process in itself: A morally better person would not have got so drunk in the first place.
Hat tip: Stephanie West Allen at Brains on Purpose
5 Replies to “Next Big Thing?: Altering Moral Judgments with Magnetism”
In this study, the participants did not make a morally culpable decision to receive TMS or not (it was randomised within-subject anyway), and the only “influence” was direct effects on the neurons.
That seems neither like getting drunk, nor like being persuaded, and it does seem to tell us something about the neural substrates of moral decision making.
However, the subjects were not less “moral” in one condition than the other. They were just less influenced by information about the intentions of the agent of harm (or more influenced by information about a benign outcome) when evaluating whether an action was “permissible”.
Which is interesting. But it looks more like a Theory of Mind or Mental Time Travel disruption to me than a disruption to moral judgment per se.
The paper is here, btw:
Lizzie, you make me laugh 🙂
On some level, you know it’s true.
Why does that make you laugh, Mung?
Not that I would deny you pleasure 🙂
But I’m intrigued by what you must think I think.
If I read the paper, will I find their operational definition of information and their operational definition of about?
Not explicitly, because their operationalized hypothesis does not compare “information about …” with “no information about …” but concerns “information about …” that differs in content – “moral context”.
What they did need to do, and, IMO, do rather badly, is operationalise “moral”.
They cite papers in which differences are found, and I guess we could check the operationalization in those papers, but they should do it more clearly in the paper itself IMO. I think it’s a weakness of the paper, and I think their inferences lack validity as a result.
It’s always a problem in journals like PNAS, where the methods section goes at the end: there’s a tendency to jam a kind of summary of the methods into the intro and keep the actual methods section extremely technical. A crucial bit often gets missed.