Machine vision's difficulty in grasping what things should look like creates risks for traffic video safety systems, researchers say. "Frankensteins" — models of life forms distorted in some way — help researchers probe the limits of machine vision for safety-related tasks.
The fact that humans and other life forms "want" things may underlie the superiority of natural vision systems over machine vision systems. It will be interesting to see how easily the gap can be closed — if it can be closed at all.
You may also wish to read: Researchers: Deep Learning vision is very different from human vision. Mistaking a teapot shape for a golf ball, due to surface features, is one striking example from a recent open-access paper. The networks did "a poor job of identifying such items as a butterfly, an airplane and a banana," according to the researchers. The explanation they propose is that "Humans see the entire object, while the artificial intelligence networks identify fragments of the object." Also: mis-seeing can include mistaking a school bus for a snowplow.