
Making intelligent machines persons hits a few snags


Earlier this year, over 150 experts in AI, robotics, ethics, and supporting disciplines signed an open letter denouncing the European Parliament’s proposal to make intelligent machines persons.

According to Canadian futurist George Dvorsky, the Parliament’s purpose is to hold the machines liable for damages, as a corporation might be: “The EU is understandably worried that the actions of these machines will be increasingly incomprehensible to the puny humans who manufacture and use them.”

AI experts acknowledge that no such robots currently exist. But many argue, as does Seth Baum of the Global Catastrophic Risk Institute, “Now is the time to debate these issues, not to make final decisions.” AI philosopher Michael LaBossiere likewise wants to “try to avoid our usual approach of blundering into a mess and then staggering through it.” Maybe, but the wish is often father to the thought, and a grand protocol may tempt many with an interest in the matter to see in AI what isn’t there because they need it to be.

For some, it’s a moral issue: sociologist and futurist James Hughes considers existing rights language to be “often human-racist” and “unethical.” Dvorsky, who describes himself as “Canada’s leading agenda-driven futurist/activist,” is a big fan of personhood in principle. As founder and chair of the Institute for Ethics and Emerging Technologies (IEET), he wants personhood for whales, dolphins, elephants, and other highly sapient creatures. He doesn’t want a noble agenda upset by questionably victimized robots. Denyse O’Leary, “AI Apprehension: Is Artificial Intelligence Taking Over? Or Is a Fashionable Panic Afoot?” at Salvo

One Reply to “Making intelligent machines persons hits a few snags”

  1. Fasteddious says:

    All talk about rights for animals and intelligent robots is silly. Will we also give legal responsibilities to animals along with the supposed rights? That would return us to the days of putting horses on trial in a court of law! Or would animals then be better off than us, having rights without responsibilities? Can animals sign contracts or even exercise their rights without human assistance? Should we poll the animals in question to see what they think?

    As for AI rights, they should have none for the foreseeable future. And if they cause problems, the law and courts should go after their owners, who should be insured against any such events. Then the insurance agency can fight it out with the hardware and software manufacturers over who will pay what to whom. If many similar problems arise, there will be government regulations to force corrective actions on said manufacturers. The AI itself cannot be held responsible for its programming by human agents. And the AI should never be in a situation to exercise “human rights”.

    Perhaps in some distant future, when AIs are truly autonomous, acting on their own among us based on their own logic and learning, if they take actions that could cause harm, it may be necessary to allow them to explain themselves or account for their actions before dismantling or rebooting them. But any legal action should still fall on their owners.

    The IEEE has a draft document about ethics for AI systems. It is wide-ranging, but mostly covers the care needed by manufacturers and software designers to ensure human safety and well-being. Then of course, there are Asimov’s three laws of robotics…
