From Mind Matters Today:
According to Science Magazine, research presented at a recent conference raises concerns that our machines may never be able to get wise to human deviancy. So-called “white hat” hackers who test the security of AI systems have found them surprisingly easy to fool. Matthew Hutson reports,
Last week, here at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of “adversarial attacks”—subtly altered images, objects, or sounds that fool AIs without setting off human alarm bells.
Impressive advances in AI—particularly machine learning algorithms that can recognize sounds or objects after digesting training data sets—have spurred the growth of living room voice assistants and autonomous cars. But these AIs are surprisingly vulnerable to being spoofed. More.
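The “subtly altered” inputs the excerpt describes can be illustrated with the fast gradient sign method (FGSM), one standard recipe for building adversarial examples: nudge every input feature slightly in the direction that most increases the classifier’s loss. The toy linear classifier below is a simplifying assumption made for this sketch, not the deep network attacked at ICML, but the mechanism is the same one that turns a turtle into a “rifle”:

```python
import numpy as np

# Minimal FGSM-style sketch (assumption: a linear "classifier" stands in
# for the deep network in the article). score = w . x; label = 1 if score > 0.
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # fixed "trained" weights (toy assumption)
x = 0.05 * w / np.linalg.norm(w)    # an input the model labels 1

def predict(v):
    return int(w @ v > 0)

# For a linear model the gradient of the score w.r.t. the input is just w,
# so the perturbation that pushes class 1 toward 0 is -eps * sign(w):
# a tiny, uniform-magnitude nudge to every feature.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(predict(x))                  # 1  -- original input
print(predict(x_adv))              # 0  -- label flips
print(np.max(np.abs(x_adv - x)))   # 0.01 -- each feature moved by at most eps
```

The point of the sketch: the per-feature change is capped at `eps`, far too small for a human to notice in an image, yet it is aimed precisely along the model’s gradient, so the predicted label flips.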
Also new at MMT: The driverless car: a bubble soon to burst? The author argues that journalists have been too gullible about high tech. Why do we constantly hear that driverless, autonomous vehicles will soon be sharing the road with us? Wolmar blames “gullible journalists who fail to look beyond the extravagant claims of the press releases pouring out of tech companies and auto manufacturers, hailing the imminence of major developments that never seem to materialise.”
See also: Bill Dembski on how AI can solve our problems… maybe by changing the landscape in ways we might not like.