
At Tech Xplore: Advancing human-like perception in self-driving vehicles


How can mobile robots perceive and understand the environment correctly, even if parts of the environment are occluded by other objects? This is a key question that must be solved for self-driving vehicles to safely navigate in large crowded cities. While humans can imagine complete physical structures of objects even when they are partially occluded, existing artificial intelligence (AI) algorithms that enable robots and self-driving vehicles to perceive their environment do not have this capability.

Robots with AI can already find their way around and navigate on their own once they have learned what their environment looks like. However, perceiving the entire structure of objects when they are partially hidden, such as people in crowds or vehicles in traffic jams, has been a significant challenge. A major step towards solving this problem has now been taken by Freiburg robotics researchers Prof. Dr. Abhinav Valada and Ph.D. student Rohit Mohan from the Robot Learning Lab at the University of Freiburg, a step they have presented in two joint publications.

The two Freiburg scientists have developed the amodal panoptic segmentation task and demonstrated its feasibility using novel AI approaches. Until now, self-driving vehicles have used panoptic segmentation to understand their surroundings.

This means that they can so far only predict which pixels of an image belong to which “visible” regions of an object such as a person or car, and identify instances of those objects. What they have lacked so far is the ability to predict the entire shape of an object even when it is partially occluded by other objects next to it. The new task of perception with amodal panoptic segmentation makes this holistic understanding of the environment possible.
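For readers unfamiliar with the terminology, the difference can be sketched in code. The following is only an illustrative toy example, not the researchers’ code or data format; all array names, classes, and shapes are hypothetical. Standard panoptic segmentation labels each pixel with a semantic class and an instance id for the visible region only, whereas amodal panoptic segmentation additionally predicts, for each instance, a mask covering the object’s full extent, including the part hidden behind other objects.

```python
import numpy as np

# Hypothetical toy scene: a 4x6-pixel image in which a car (instance 1)
# is partially occluded by a pedestrian (instance 2) standing in front of it.
H, W = 4, 6

# --- Standard panoptic segmentation output: visible regions only ---
semantic = np.zeros((H, W), dtype=np.int32)   # 0 = road/background
instance = np.zeros((H, W), dtype=np.int32)   # 0 = no instance

semantic[1:4, 1:5] = 1   # class 1 = car
instance[1:4, 1:5] = 1
semantic[0:4, 3:5] = 2   # class 2 = person, drawn over the car, so it
instance[0:4, 3:5] = 2   # overwrites the car's occluded pixels

# --- Amodal panoptic segmentation adds a full-extent mask per instance ---
# The car's amodal mask also covers the pixels hidden behind the person.
amodal_masks = {
    1: np.zeros((H, W), dtype=bool),  # car: visible + occluded extent
    2: np.zeros((H, W), dtype=bool),  # person: fully visible here
}
amodal_masks[1][1:4, 1:5] = True      # entire car footprint
amodal_masks[2][0:4, 3:5] = True      # entire person footprint

# Pixels that belong to the car but are invisible in the panoptic output.
occluded_car_pixels = amodal_masks[1] & (instance != 1)
print("Car pixels hidden by the pedestrian:", int(occluded_car_pixels.sum()))
```

Running the sketch reports how many car pixels are hidden behind the pedestrian, which is exactly the kind of information a purely “visible-only” panoptic output cannot provide.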

“Amodal panoptic segmentation will significantly help downstream automated driving tasks where occlusion is a major challenge such as depth estimation, optical flow, object tracking, pose estimation, motion prediction, etc. With more advanced AI algorithms for this task, visual recognition ability for self-driving cars can be revolutionized. For example, if the entire structure of road users is perceived at all times, regardless of partial occlusions, the risk of accidents can be significantly minimized.”

Full article at Tech Xplore.

This article highlights more of the remarkable nature of human ability. When engineers try to design robots to perform tasks that humans do as a matter of routine, it turns out that what can seem simple to us is not simple at all.

9 Replies to “At Tech Xplore: Advancing human-like perception in self-driving vehicles”

  1. martin_r says:

    published in 2021:

    Tesla boss Elon Musk admits autonomous tech is “a hard problem” and the “difficulty is obvious”

    Musk: Generalized self-driving is a hard problem, as it requires solving a large part of real-world AI. Didn’t expect it to be so hard, but the difficulty is obvious in retrospect.

    https://www.drive.com.au/news/tesla-boss-elon-musk-admits-autonomous-tech-is-a-hard-problem-and-the-difficulty-is-obvious/

    PS: Perhaps Elon Musk and his teams of engineers should talk to some biologists … ask Dawkins/Coyne/Lents for some advice … on how to design sophisticated, fully autonomous, self-navigating systems that can walk/run/swim/dive/fly … Because biologists know the secret -> no engineers are needed, no knowledge is needed, just be patient, give it some time and it will self-design …

  2. martin_r says:

    an ‘autonomous’ wildcat made by humans
    https://www.youtube.com/watch?v=wE3fmFTtP9g

    an autonomous wildcat made by God
    https://youtu.be/RylSyOMXNks?t=70

  3. Blastus says:

    Martin, that was excellent. You have me laughing out loud.

    But perhaps the man-made wildcat is self-replicating? Perhaps it has a functional immune system? Perhaps it can identify and respond appropriately to prey and predators? And don’t forget camouflage…

  4. relatd says:

    They can use all the fancy words they want. There will be more accidents. They should use closed tracks with real-world obstacles. With real-world accident reports in hand. Cheap and easy? No. Not this time. And not in the future.

    “panoptic segmentation” Oh yes. We ALL knew that was the wrong approach….

    “downstream automated driving tasks” Downstream? I’ll remember that next time I’m driving through a lake.

  5. asauber says:

    Where’s the override?

    Andrew

  6. chuckdarwin says:

    The cost of liability insurance to own a self-driving vehicle will be astronomical—only Musk will be able to afford one…..

  7. hnorman42 says:

    I’ll think about getting a self-driving vehicle when one passes the “I am not a robot” test and logs into this site.

  8. relatd says:

    Andrew at 5,

    Installing an override is too expensive. Think about it. Even if it cost one dollar, millions of cars times one dollar. No can do. Especially when you’re going for your next billion.

  9. doubter says:

    I think that AI driverless car technology will get there “one day”. But that day will be long delayed by a series of nearly intractable problems that will take a long time to resolve. One of them is the question of just how good the system needs to be before public perception takes off and people start buying the vehicles in large numbers (that is, just how low the accident rate has to be before the tradeoff of AI-caused accidents versus human-driver-caused accidents comes out in the AI’s favor). This will take a very long time. The first and main performance goal for AI driverless cars will be to achieve parity with the human driver accident rate; that milestone would theoretically and logically tip the equation for many buyers in favor of driverless cars, since the outbalancing positive factor is then the added convenience and utility of not having to drive the vehicle manually. All bets are off, however, on whether public psychology will go that way. People may insist on 100% reliability of the technology, and that will probably never be achievable.

    The other main confounding factor, I think, will also be societal: in the absence of proven 100% reliable, zero-accident-rate AI driverless cars, there will be a litigation morass whenever accidents occur, especially in determining who or what was at fault (whether the car or the driver caused the accident, and if it was the car, whether the software design company or the car company is responsible), and in pinpointing the actual flaw in the software design that caused the failure, so that it can be fixed. As has become apparent, with the deep learning technology being used, often the programmers themselves can’t determine how the AI system came up with its decision, whether that decision was good or bad.

    This is because the AI system itself developed the software design (which may be exceedingly complicated) over thousands or millions of training iterations of its deep learning algorithms, which is a fundamental limitation of the computer AI technology and its learning algorithms. The lawyers could even argue that the driver shares the fault even though the car AI was driving, because the owner/driver made the original decision to buy the AI driverless auto knowing that it might possibly cause an accident. Obviously new laws covering these situations will have to be developed, and any existing ones greatly changed. There are a few states now with no-fault laws on the books, but for obvious reasons the lawyers strongly resist adopting that approach, and many of the state lawmakers (politicians and bureaucrats especially) are already lawyers themselves and aren’t inclined to drastically reduce their revenue.
