
A software engineering professor, John McDermid, tells us why that's not a realistic goal:
Any discussion of the morality of the self-driving car should touch on the fact that the industry as a whole thrives on hype that skirts honesty …
The cars raise the same problem as do other types of machine learning: The machine isn’t responsible, so who is? That gets tricky…
Companies may say one thing about their smart new product in the sales room and another in the law courts after a mishap. The European Parliament has proposed making robotic devices legal persons, for the purpose of making them legally responsible. But industry experts have denounced the move as unlikely to address real-world problems. McDermid thinks we should forget trying to make cars moral and focus on safety instead: "Currently, the biggest ethical challenge that self-driving car designers face is determining when there's enough evidence of safe behavior from simulations and controlled on-road testing to introduce self-driving cars to the road."

"Can we program morality into a self-driving car?" at Mind Matters
See also: Self-driving cars hit an unnoticed pothole

"Not having to intervene at all"? One is reminded of the fellow in C. S. Lewis's anecdote who, when he heard that a more modern stove would cut his fuel bill in half, went out and bought two of them. He reckoned that he would then have no fuel bills at all. Alas, something in nature likes to approach zero without really arriving…
and
AI Winter is coming

Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding. (Brendan Dixon)