Uncommon Descent: Serving the Intelligent Design Community
Topic: machine learning

Artificial intelligence: Machines do not see objects as wholes

Mistaking a teapot shape for a golf ball because of its surface features is one striking example from a recent open-access paper. The networks did “a poor job of identifying such items as a butterfly, an airplane and a banana,” according to the researchers. The explanation they propose is that “Humans see the entire object, while the artificial intelligence networks identify fragments of the object.”

News, “Researchers: Deep Learning vision is very different from human vision” at Mind Matters

“To see life steadily and see it whole”* doesn’t seem to be popular among machines.

*(Zen via Matthew Arnold)

See also: Can an algorithm be racist?
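The paper’s “fragments of the object” claim can be probed informally. Below is a minimal sketch of that kind of test, not the paper’s own experiment: shuffle an image’s patches, which destroys the global shape while keeping local texture, and check whether a pretrained network’s label survives. The file name teapot.jpg, the patch size, and the choice of a stock ResNet-50 are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): does a CNN's label survive
# patch shuffling, which destroys global shape but keeps local texture?
# Assumes torchvision >= 0.13; "teapot.jpg" is a placeholder file name.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop to 224x224, normalize

def top_label(img):
    # img: (3, 224, 224) tensor; returns the model's best guess and confidence
    with torch.no_grad():
        probs = model(img.unsqueeze(0)).softmax(dim=1)
    idx = int(probs.argmax())
    return weights.meta["categories"][idx], float(probs[0, idx])

def shuffle_patches(x, patch=56):
    # Cut the image into a grid of patch x patch tiles and permute them
    c, h, w = x.shape
    tiles = x.unfold(1, patch, patch).unfold(2, patch, patch)  # (3, gh, gw, p, p)
    gh, gw = tiles.shape[1], tiles.shape[2]
    flat = tiles.reshape(c, gh * gw, patch, patch)
    flat = flat[:, torch.randperm(gh * gw)]  # scramble the global arrangement
    return (flat.reshape(c, gh, gw, patch, patch)
                .permute(0, 1, 3, 2, 4)
                .reshape(c, h, w))

x = preprocess(Image.open("teapot.jpg").convert("RGB"))
print("intact:  ", top_label(x))
print("shuffled:", top_label(shuffle_patches(x)))
```

If the shuffled image keeps a similar label, the network is plausibly leaning on local fragments rather than the whole; a human shown the scrambled image would usually be lost.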

Can machines really learn? Neurosurgeon Michael Egnor offers a parable

At Mind Matters Today: “Machine learning” is a hot field, and tremendous strides are being made in programming machines to improve as they work. Such machines work toward a goal in a way that appears autonomous and seems eerily like human learning. But can machines really learn? What happens during machine learning, and is it the same thing as human learning? Because the algorithms that drive machine learning are complex, what really happens during the “learning” process is obscured both by the inherent difficulty of the subject and by the technical jargon of computer science. It is therefore useful to consider, in a simplified way, the principles that underlie machine learning, to see what we really mean by such “learning.” Read More ›
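As a toy illustration of the point, and not Egnor’s own parable, the sketch below strips machine “learning” to its mechanics: a single adjustable number is nudged repeatedly toward a goal by an update rule. Nothing more is happening, however autonomous the improvement looks from outside.

```python
# Toy sketch of what "machine learning" amounts to mechanically:
# fit y = w*x to data by plain gradient descent. The machine "improves
# as it works" only in the sense that a rule nudges w toward the goal.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0    # the machine's single adjustable "belief"
lr = 0.01  # learning rate: the size of each nudge

for step in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the entire "learning" step: move w downhill

print(f"learned w = {w:.3f}")  # converges near 2.0
```

Whether this mechanical nudging deserves the word “learning” in the human sense is exactly the question the parable raises.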

At Mind Matters Today: AI is not (yet) an intelligent cause

From Mind Matters Today: A recent conference raises concerns, according to Science Magazine, that our machines may never be able to get wise to human deviancy. So-called “white hat” hackers who test the security of AI systems have found them surprisingly easy to fool. Matthew Hutson reports:

Last week, here at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of “adversarial attacks”—subtly altered images, objects, or sounds… Read More ›
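For readers who want to see the flavor of such an attack in code, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), a simple digital adversarial attack; the 3D-printed turtle used a more elaborate physical-world variant. The file name baseball.jpg, the epsilon value, and the stock ResNet-50 are illustrative assumptions.

```python
# Minimal FGSM sketch (Goodfellow et al. 2015); not the turtle attack,
# which was a physical-world variant. Assumes torchvision >= 0.13;
# "baseball.jpg" is a placeholder file name.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

x = preprocess(Image.open("baseball.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)  # treat the model's top guess as "truth"

# One gradient step *up* the loss, keeping only the sign of the gradient
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03  # perturbation size (applied in normalized input space here)
x_adv = x + epsilon * x.grad.sign()  # the "subtle alteration"

names = weights.meta["categories"]
print("before:", names[int(label)])
print("after: ", names[int(model(x_adv).argmax(dim=1))])
```

The perturbation is typically invisible to a human, yet it can flip the model’s label entirely, which is what makes such attacks hard to guard against.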