In an effort to program self-driving cars to make decisions in a crisis, MIT’s Moral Machine offered 2.3 million people worldwide a chance to crowdsource whom to kill and whom to spare in a road mishap…
The project, intended to help build ethical self-driving cars, revealed stark differences in global values. Participants from China and Japan were more likely to spare the old than the young. In Western cultures, by contrast, numbers mattered more:
“The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual.” Karen Hao, “Should a self-driving car kill the baby or the grandma? Depends on where you’re from,” at Technology Review
Whatever the causes of cultural differences, Brendan Dixon thinks that the Moral Machine presents mere caricatures of moral problems anyway: “The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain.” “There is no universal moral machine” at Mind Matters.
See also: Peaceful code of conduct sparks rage in Silicon Valley. Hi-tech firm’s code, based on ancient monks’ practice, deemed “just disgusting”
By Jonathan Bartlett: Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone is outsourcing the blame?
Guess what? You already own a self-driving car. Tech hype hits the stratosphere
and
Self-driving vehicles are just around the corner… On the other side of a vast chasm