Uncommon Descent Serving The Intelligent Design Community

Before you turn it all over to AI: Why the Laws of Robotics fail

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email
Coloured robot design Free Vector

Jonathan Bartlett, Eric Holloway, and Brendan Dixon explain:

Prolific science and science fiction writer Isaac Asimov (1920–1992) developed the Three Laws of Robotics, in the hope of guarding against potentially dangerous artificial intelligence. They first appeared in his 1942 short story Runaround:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov fans tell us that the laws were implicit in his earlier stories.

A 0th law was added in Robots and Empire (1985): “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” …

[lots of reasons why this won’t work, right down to:]

Eric Holloway suggested how the robot might think that [the 0th Law] out: In which case, the best way to minimize harm to the population is to wipe out everyone. Of course, there is a lot of short term harm in wiping out everyone, but it is much less in aggregate than the accumulation of harm throughout many thousands of future generations. Or, a bit less extreme, sterilize everyone. That way minimize harm to currently existing humans, and there will be no future humans to be harmed. “The Three Laws of Robotics Have Failed the Robots” at Mind Matters News

Back to other stuff soon.

See also: Other sci-fi from Mind Matters News’s Sci-Fi Saturday:

The Brain: Junkyard, Watch, or Antenna? Eric Holloway: A warped genius reviews the options, as he seeks ultimate power – a tale. After many dead ends, Flim realized that all forms of human power are ultimately controlled by the human mind. Thus, if he could harness the power of the mind, he would finally be able to create anything his heart could desire.

and

Another Life: All fun and games till an AI falls in love Adam Nieri: Then it descends into a convoluted drift of uncertain storytelling. And the victim is not primarily the viewer, who has other options. The victim is the art itself.

Comments
You mean like this. Spacex StarshipLatemarch
October 7, 2019
October
10
Oct
7
07
2019
09:17 AM
9
09
17
AM
PDT
Well, the general problem is that Asimov, in the innocence of early Sci Fi writing, assumed that robots would START as mechanical human beings. That is, the electro-mechanical box that autopiloted your space ship was NOT a "robot". Only Robby the Robot, who could walk and talk, was a "robot". And so the mechanical men not only did dangerous construction work, they also walked down the street beside "meat bag" men. Kinda like having REALLY smart police dogs or something. But this was a more innocent age, and even Werner von Braun believed, regardless of their actual utility, that "real" rocket ships would always have fins on their tails. And when a rocket ship made it to the Moon, it would LAND, delicately, on its fins.vmahuna
October 5, 2019
October
10
Oct
5
05
2019
08:45 PM
8
08
45
PM
PDT

Leave a Reply