
But if humans are meat machines, how do ethics come into the picture?


George D. Montañez (shown chatting with Mike Behe in July, left), a machine learning specialist, reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession:

The AI and ML systems we have in place today are not sentient, but they are still dangerous. I am not worried about the future of AI, but I am concerned about the dangers artificial learning systems currently pose. There are obvious threats: weaponization, terrorism, fraud. But there are also less intentional threats, such as increased inequality, privacy violations, and negligence resulting in harm. For example, consider the case of the self-driving Uber vehicle that killed an Arizona pedestrian in March. According to some, that fatal crash proved that self-driving technology just isn’t ready yet, and releasing these cars on roads in their current form is an act of extreme negligence.

Cathy O’Neil, in her book Weapons of Math Destruction, highlights several cases of machine learning systems harming poor and minority groups through the uncritical use of questionable correlations in data. For example, models that attempt to predict the likelihood that someone will commit a crime if released from prison (criminal recidivism models) use early run-ins with police as a feature. Because poor minority neighborhoods are more heavily policed and minority youth are prosecuted more often than their rich white counterparts for the same crimes, such as recreational drug use and possession, this predictive feature strongly correlates with race and poverty levels, punishing those with the wrong economic background or skin color. Another feature of these systems assesses whether a person lives near others who themselves have had trouble with the police. While this feature may in fact correlate with a higher risk of committing a crime in the future, it punishes the person for what others have done, over which they have no control. Given that poorer people cannot simply move into wealthier neighborhoods to improve their statistics, the poor are doubly punished, simply for being poor. Other examples are not hard to find. Because of these dangers, ethics has become an area of increasing concern for AI researchers.

George D. Montañez, “Think About Ethics Before Trouble Arises” at Mind Matters
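The proxy effect O’Neil describes can be sketched in a few lines. In this toy example (the data, feature names, and weights are invented for illustration, not taken from her book), a risk score that never sees race or income directly still produces different average scores for two groups, because one of its features, early police contact, is distributed unevenly between a heavily policed neighborhood and a lightly policed one:

```python
# Toy sketch of proxy discrimination (hypothetical data and weights).
# Each record: (group, early_police_contact, lives_near_offenders)
# Group "A" = heavily policed neighborhood, "B" = lightly policed.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 0), ("B", 0, 1), ("B", 1, 0), ("B", 0, 0),
]

def risk_score(contact, near_offenders):
    """Hypothetical linear risk model; weights chosen for illustration."""
    return 0.6 * contact + 0.4 * near_offenders

def mean_score(group):
    """Average predicted risk for one group -- race is never an input."""
    scores = [risk_score(c, n) for g, c, n in records if g == group]
    return sum(scores) / len(scores)

print(round(mean_score("A"), 2))  # roughly 0.75: heavily policed group
print(round(mean_score("B"), 2))  # roughly 0.25: lightly policed group
```

The model is "colorblind" on its face, yet the score gap simply reproduces the policing gap in the training data, which is the pattern O’Neil criticizes.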



True, but—as Montañez makes clear—we only care about ethics if we aren’t machines. His piece highlights the way data can embody hidden ethical problems.

George D. Montañez is an assistant professor of computer science at Harvey Mudd College, in Claremont, California, specializing in machine learning.

Also by George Montañez: What is learning anyway?

and

Can an algorithm be racist?

4 Replies to “But if humans are meat machines, how do ethics come into the picture?”

  1. bornagain77 says:

    “How Do Ethics Come Into The Picture?”

    Put another way,,,,

    Robotic Souls – 2019
    Excerpt: And indeed the creation of sex bots is underway. Some provocateurs have argued that these robots could help to resolve the sexual frustrations of lonely men, but the public has generally regarded these developments as concerning, laughable, or creepy. Nevertheless, the effort to create them is driven by powerful commercial motives.,,,
    https://www.thenewatlantis.com/publications/robotic-souls

    Something tells me that atheists will soon be labeling Christians as backwards ‘sex-bot bigots’.

  2. Seversky says:

    If meat machines are social creatures and find it beneficial to survival to form co-operative groups, then you will probably find ethical systems emerging.

    As for sex robots, the prescient Dr Dogbert was ahead of the curve back in 1994, when he wrote, “I can predict the future by assuming that money and male hormones are the driving forces for new technology. Therefore, when virtual reality gets cheaper than dating, society is doomed.”

  3.
  4. john_a_designer says:

    Here is a pertinent quote from the Montañez article cited by the OP:

    In the popular media, visions of an impending robot takeover often raise concerns over how we should shape AI systems to safeguard the future of humanity. Some wonder whether we should require systems to be embedded with the equivalent of Asimov’s Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    https://mindmatters.ai/2019/01/ai-think-about-ethics-before-trouble-arises/
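    The defining feature of Asimov’s Laws is their strict priority ordering: each law yields to the ones above it. A minimal sketch of that ordering (the `Action` type, its fields, and the `permitted` function are all invented for illustration, not any real robotics API):

```python
# Hypothetical sketch of Asimov's Laws as prioritized constraints.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False     # would the action injure a human?
    disobeys_order: bool = False  # would it violate a human's order?
    endangers_self: bool = False  # would it risk the robot's existence?
    ordered: bool = False         # was the action commanded by a human?
    prevents_harm: bool = False   # does it protect a human from harm?

def permitted(action: Action) -> bool:
    """Check the three laws in priority order; higher laws always win."""
    if action.harms_human:
        return False  # First Law: absolute, nothing overrides it
    if action.disobeys_order:
        return False  # Second Law: yields only to the First
    if action.endangers_self and not (action.ordered or action.prevents_harm):
        return False  # Third Law: self-preservation yields to both above
    return True

# A risky rescue commanded by a human is allowed (Third yields to Second):
print(permitted(Action(endangers_self=True, ordered=True)))   # True
# Unprompted self-endangerment violates the Third Law:
print(permitted(Action(endangers_self=True)))                 # False
# No order can make harming a human permissible:
print(permitted(Action(harms_human=True, ordered=True)))      # False
```

    Note that the ordering itself is fixed by the programmer, not discovered by the robot, which is john_a_designer’s point: the standard is imposed from outside.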

    From where do Asimov’s laws originate? From the robots, or from the robots’ creators/programmers? Obviously, it has to be the latter, which means the robots must follow an objective moral/ethical standard. Letting them “invent” their own ethical standard (assuming we could give them consciousness and free will) would open the possibility that they could rebel against their creator(s).

Leave a Reply