From Gary Marcus at the Edge:
People get very excited every time there’s a tiny advance, but the tiny advances aren’t getting us closer. There was a Google captioning thing that got a lot of press. I think it was the front page of The Times. You could show it some pictures and it looked like it was great. You’d show it a picture of a dog, a person, and a Frisbee and it might be able to say, that’s a dog catching a Frisbee. It gives the illusion of understanding the language. But it’s very easy to break these systems. You’d show it a picture of a street sign with some stickers on it and it said, that’s a refrigerator with food in it. This is the kind of bizarre answer that used to send you to Oliver Sacks. It’s almost like a neurological deficit. The systems will be right on the cases that they have a lot of data for, and fall apart on the cases where they don’t have much data.
You can contrast this with a human being. You’ve never heard any of the sentences that I’ve said today—maybe one or two—and yet you can understand them. We’re very far from that. More.
The quest to create artificial intelligence that is exactly like human intelligence is like the quest to discover a simple origin of life: The only failure that matters is the loss of public interest and funding. Apart from that, new theories could be generated for centuries without any need to confront fundamental difficulties.
See also: Face it, your brain isn’t a computer (though Gary Marcus tells us it is)
See also: Pigeons, computers, and Picasso (Vincent Torley)
See also: ID, philosophy, and computer programming (Johnny Bartlett)
Follow UD News at Twitter!