From Adam Waytz at Nautilus:
Once we abandon the idea of universal empathy, it becomes clear that we need to build a quantitative moral calculus to help us choose when to extend our empathy. Empathy, by its very nature, seems unquantifiable, but behavioral scientists have developed techniques to turn people’s vague instincts into hard numbers. Cass Sunstein of Harvard Law School has suggested that moral concepts like fairness and dignity can be assessed using a procedure he calls breakeven analysis. Do people feel that the benefits of a given course of action justify the costs? If so, the action is worth taking. For example, we may judge that invasive phone-hacking is moral if the cost of invasion of privacy is countered with the benefit of preventing one terrorist attack at some minimum frequency, say, every five and a half years.
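The breakeven logic Sunstein describes can be made concrete with arithmetic. The sketch below is a minimal, hypothetical illustration (the cost and benefit figures are invented for the example, not taken from Sunstein or Waytz): an action "breaks even" when the expected annual benefit of the events it prevents at least matches its annual cost.

```python
# Hypothetical breakeven-analysis sketch. All numbers are illustrative
# assumptions chosen to reproduce the "every five and a half years"
# figure in the quoted passage; they are not real valuations.

def breakeven_interval_years(annual_cost, benefit_per_event):
    """Longest interval (in years) between prevented events at which
    the expected annual benefit still covers the annual cost.

    Breakeven condition: benefit_per_event / interval >= annual_cost,
    so the threshold interval is benefit_per_event / annual_cost.
    """
    return benefit_per_event / annual_cost

# If the privacy cost of phone-hacking is valued at 2.0 units per year
# and preventing one attack is valued at 11.0 units, the program breaks
# even only if it prevents an attack at least once every 5.5 years.
threshold = breakeven_interval_years(annual_cost=2.0, benefit_per_event=11.0)
print(threshold)  # → 5.5
```

On this framing, the moral dispute is pushed into the valuations themselves: the formula is trivial, and everything contested lives in how the cost and benefit numbers are assigned.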
Furthermore, survey data from people across the globe can reveal what people consider the most important factors in their happiness and suffering. Advances in survey methods that examine happiness associated with specific daily events, or that use smartphone technology to assess happiness on a moment-to-moment basis, have improved on basic self-report methods. Implicit measures that capture how quickly people associate words related to the self (“me”) with words related to happiness (“elated”) have begun to capture aspects of happiness separable from explicit reports of happiness. And neuroimaging methods have identified neural signatures of both hedonic well-being (related to pleasure) and eudaimonic well-being (related to meaning in life).
A data-based approach to identifying and ranking universal values is ambitious to be sure. But, crucially, it calls on us to make use of the limits on morality that are inherent to all of us as human beings, rather than lamenting them. These constraints challenge us to focus our attention, and drive us to see that not all values are equally valid. Instead of indefinitely fighting over tradeoffs between in-group- and out-group-oriented moralities, we might find that picking among universally held values is more palatable, efficient, and uniting—providing a moral function in and of itself. In place of the usual, default concentric circles of in-groups that guide us today (family, friends, neighbors, citizens) we would have the tools to carefully engineer to whom we should extend our empathy, and when. More.
Classic: Moral advice struggling to become a science ends up doing a grave disservice to both morality and science.
One way of producing the most utterly contemptible social and moral order would be to adopt elite-driven perspectives based in pure naturalism, as above. But chaos would intervene first.