Uncommon Descent Serving The Intelligent Design Community

Jonathan Bartlett: Self-driving vehicles are just around the corner, all right

Jonathan Bartlett

On the other side of a vast chasm…

The code needed to detect and handle the flow between the situations increases polynomially with the number of driving situations we must address. That is, if we have 2 driving situations, there are 2 possible transitions to account for. If we have 3 driving situations, there are 6 possible transitions. If we have 4 driving situations, there are 12 possible transitions.

Expressing it mathematically, for n driving situations there are "n² – n" transition possibilities. These numbers mount up quickly. Therefore, every newly identified driving scenario doesn't just add one more scenario to code for in a linear fashion; it makes the project an order of magnitude more difficult.
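To make that growth concrete, here is a minimal Python sketch (my illustration, not from the article; the function name count_transitions is an assumption) that counts the ordered transitions between n distinct driving situations, assuming each ordered pair needs its own handling code:

```python
# Minimal sketch of the quadratic growth described above.
# Assumption: every ordered pair of distinct driving situations
# (A -> B and B -> A counted separately) needs its own transition code.

from itertools import permutations

def count_transitions(n: int) -> int:
    """Ordered transitions between n distinct situations: n^2 - n."""
    return n * n - n

# Cross-check the closed form against an explicit enumeration of ordered pairs.
for n in range(2, 8):
    enumerated = len(list(permutations(range(n), 2)))
    assert enumerated == count_transitions(n)
    print(f"{n} situations -> {count_transitions(n)} transitions")
```

Running it prints 2 → 2, 3 → 6, 4 → 12, and so on up to 7 → 42: doubling the number of recognized situations roughly quadruples the transition-handling work, which is the quadratic blow-up described above.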

Many cheerleaders have wrongly assumed that the progress from one level of automation to another should be a direct, linear process, but it clearly isn't. I'm not saying that this hurdle is insurmountable. Rather, the transition from Level 4 to Level 5 automation is multiple orders of magnitude more difficult than all the other levels combined. Its completion should not be taken as a foregone conclusion. More.

Jonathan Bartlett is the Research and Education Director of the Blyth Institute.

Follow UD News at Twitter!

See also: Guess what? You already own a self-driving car: Tech hype hits the stratosphere (Jonathan Bartlett). Yes, the car you own today is probably "self-driving" and you may not know it. But that is because of the creative ways the term can be defined.

and

Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone is outsourcing the blame? (Jonathan Bartlett) Level 4 self-driving vehicles will bring with them a giant shift in the moral equation of driving. Unfortunately, in a culture that seems to think that the future will take care of itself, little thoughtful public discussion is taking place. My hope is to start a discussion of how coming technological changes will affect the future moral landscape.

 

Comments
The moral responsibility question is (or should be) straightforward:

1. Governments will not allow use of such cars unless they are insured by their owners (same as today).
2. Insurers will not insure until such cars are shown in testing to be safer than people-driven cars (but don't expect premium discounts).
3. Self-driving cars in accidents will be referred to the insurer (same as today), who will have access to the car's "black box" records.
4. The insurer will pay damages (if necessary) and manage any immediate lawsuits (same as today).
5. If insurers feel it necessary, they can in turn sue the car manufacturer (same as today, I suppose).
6. Manufacturers may respond to lawsuits (and judgements) by upgrading their software (such cars are presumably upgradable).
7. Any software upgrades will have to pass regulatory hurdles (i.e., extensive testing) before release.

One danger will be lawyers seeing manufacturers with deep pockets and bringing lawsuits for huge sums, much greater than would be claimed if a human caused a similar accident. AI cars should be held to a higher standard than human drivers, but they cannot be made 100% safe and remain useful. The "higher standard" for manufacturers will ultimately be determined by the volume of lawsuits and resulting judgements (same as today).

Fasteddious
November 1, 2018 at 02:12 PM PDT
VM, traffic density leaves a lot more room around aircraft than around cars, including on landing. Likewise, roads are pre-existing, inherently complex and dense environments. The issue is situational awareness and the ability to respond, which in our case is run by an exceedingly powerful processor. KF

kairosfocus
October 30, 2018 at 09:24 PM PDT
Um, we already have self-flying aeroplanes, and aeroplanes handle a LOT more complicated "driving situations" in THREE dimensions at speeds around Mach 0.9 (going Mach 1 or greater is horrendously wasteful even for military planes). In particular, self-landing planes are already available. Although there was the spectacular Airbus crash where it turned out that some silly human had entered the runway's height above sea level into the guidance system in Paris wrong by 50 feet or so. We need to get the sloppy humans out of the loop. Again, the reason we don't have truly self-flying planes (no humans in the cockpit) and self-driving cars is because the people who OWN the aeroplanes (e.g., airline companies) CAN sue either the owner or the manufacturer (or both) for gazillions of dollars for a "manufacturing defect". However, if there is a human in the cockpit, the crash can be blamed on "pilot error", which makes the DEAD GUY liable for any claims. If a self-driving car were to cause a multiple-car accident in rush hour (killing 3 or 4 people and injuring a dozen more), the liability claims would surely convince the manufacturer of the auto-car to get out of the business.

vmahuna
October 29, 2018 at 01:55 PM PDT
Law is based on the civil peace of justice, and therefore on morality.

kairosfocus
October 29, 2018 at 01:33 PM PDT
The biggest obstacle to full self-driving autonomy has to do with the concept of representationalism used in deep learning. Unlike deep neural nets, the brain can instantly see a new object it has never seen before. A neural net, by contrast, cannot see an object unless it has a prior representation of it in memory. So self-driving cars are essentially blind to new situations, a fatal flaw. This is why self-driving car companies train their cars over millions of miles, hoping to cover all possibilities, which is impossible. AI experts have absolutely no idea how the brain does it.

FourFaces
October 28, 2018 at 02:31 PM PDT
Moral responsibility doesn't matter because morality no longer influences business or government. Legal responsibility does matter. Lawsuits are expensive. One thing is certain: Companies like Uber and Tesla will displace all liabilities so they can continue creating evil and increasing share value without any expenses. Government will set up a risk pool to cover all disasters created by COOOOOL corporations. Taxpayers and UNCOOOL businesses will have to pay the tab.

polistra
October 28, 2018 at 02:25 AM PDT
The cries of the doom merchants and the cries of the utopians share some key commonalities: we've been hearing them be so wrong about so many things for so long.

ScuzzaMan
October 28, 2018 at 01:40 AM PDT
