
Once upon a time, MIT tried building a universal Moral Machine…

Software engineer Brendan Dixon tried it out.

In an effort to program self-driving cars to make decisions in a crisis, MIT’s Moral Machine offered 2.3 million people worldwide a chance to crowdsource who to kill and who to spare in a road mishap…

The project, aimed at building righteous self-driving cars, revealed stark differences in global values. People from China and Japan were more likely to spare the old than the young. But in Western cultures, numbers mattered more:

The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual. – Karen Hao, “Should a self-driving car kill the baby or the grandma? Depends on where you’re from.” at Technology Review

Whatever the causes of cultural differences, Dixon thinks that the Moral Machine presents mere caricatures of moral problems anyway. “The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain.” – Brendan Dixon, “There is no universal moral machine” at Mind Matters


See also: Peaceful code of conduct sparks rage in Silicon Valley. Hi-tech firm’s code, based on ancient monks’ practice, deemed “just disgusting”

By Jonathan Bartlett: Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone starts outsourcing the blame?

Guess what? You already own a self-driving car. Tech hype hits the stratosphere

and

Self-driving vehicles are just around the corner… On the other side of a vast chasm

7 Replies to “Once upon a time, MIT tried building a universal Moral Machine…”

  1. polistra says:

    The differences aren’t the issue. The scary part is the hidden UNSTATED assumption. These differences can’t be part of the programming if the car doesn’t KNOW how to DETECT these differences.

    In other words, autonomous cars are already capable of distinguishing old vs young, male vs female, legal vs illegal, Correct vs Deplorable. Self-driving cars are roaming Inquisitors, sorting out the gold from the dross and squashing the dross on command.

    THAT’S THE WHOLE PURPOSE OF THIS GROTESQUE PROJECT.

  2. Latemarch says:

    I think that you just gave me the plot for my next dystopian sci-fi novel.

  3. daveS says:

    THAT’S THE WHOLE PURPOSE OF THIS GROTESQUE PROJECT.

    😮

    I thought it was more about safety, efficiency, and of course selling cars.

  4. kairosfocus says:

    DS, dig for where the military-industrial complex lurks in this and then use FOIA to drag out the paper trail. KF

  5. Latemarch says:

    DS:

    I like the safety and efficiency of my autonomous car.

    **Of course, I have to say that because it monitors this site.**

  6. daveS says:

    Heh.

    I just worry that some of these crazy conspiracy theories could limit our future development.

  7. kairosfocus says:

    DS, there are always conspiracies about; the issue is to detect and counter. The issue is evidence and track record. After all, there is the story of seeing the devil sitting by the road crying. Why? People are blaming me for things I haven’t had a chance to do yet. Groups like the CIA and ilk, deservedly, have a very bad reputation. It was a retiring US President and former general who warned against the military-industrial complex, and it was another who warned against entangling alliances. I for one will never have anything to do with Facebook. And if Alexa answers when her name is called, what is she listening to all the time? KF
