
Can we program morality into a self-driving car?


Software engineering professor John McDermid tells us why that’s not a realistic goal:

Any discussion of the morality of the self-driving car should touch on the fact that the industry as a whole thrives on hype that skirts honesty …

The cars raise the same problem as do other types of machine learning: The machine isn’t responsible, so who is? That gets tricky…

Companies may say one thing about their smart new product in the sales room and another in the law courts after a mishap. The European Parliament has proposed making robotic devices legal persons, for the purpose of making them legally responsible. But industry experts have denounced the move as unlikely to address real-world problems. McDermid thinks we should forget trying to make cars moral and focus on safety instead: “Currently, the biggest ethical challenge that self-driving car designers face is determining when there’s enough evidence of safe behavior from simulations and controlled on-road testing to introduce self-driving cars to the road.”

“Can we program morality into a self-driving car?” at Mind Matters


See also: Self-driving cars hit an unnoticed pothole “Not having to intervene at all”? One is reminded of the fellow in C. S. Lewis’s anecdote who, when he heard that a more modern stove would cut his fuel bill in half, went out and bought two of them. He reckoned that he would then have no fuel bills at all. Alas, something in nature likes to approach zero without really arriving…

and

AI Winter is coming Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding. (Brendan Dixon)

Comments
The obvious answer is that the self-driving car (or any other AI) takes on the persona of an indentured servant, or a slave: no rights or responsibilities, other than to do as it is told. Its legal owner takes on the responsibilities that arise from its use. For example, said owner carries car insurance, as today, to cover any untoward events. If the self-driving car gets into an accident, then the owner's insurance pays out, as it would today with a human driver. If his insurance company thinks the fault lies with the car manufacturer (a hardware or software defect), then the manufacturer can be taken to court, as may be done today.

Insurance companies will not provide comprehensive coverage until they are convinced that self-driving cars are at least as safe as human drivers. Manufacturers will only release hardware (and software updates) once it has been tested sufficiently to avoid costly lawsuits. There will be regulations in place, of course, to cover the basics, but as usual, legal proceedings will eventually dictate the level of safety and any additional ongoing regulations (as they do today for our present cars).

Thus, I do not see any special "morals" required for cars or other AIs. The AI cannot be held to "blame," other than as a pointer to the need for better design, manufacturing, software, or regulation. You cannot take a horse to court or expect it to make amends for damaging someone. Self-driving cars will be handled the same way, I expect.

Fasteddious
February 12, 2019 at 11:23 AM PDT
