Can AI become just like us?
|October 11, 2017||Posted by News under Artificial Intelligence, Mind, Naturalism|
We’ve been hearing a lot about that question lately. From Rodney Brooks, former director of the Computer Science and Artificial Intelligence Laboratory at MIT, in “The Seven Deadly Sins of AI Predictions” at Technology Review, featuring the fourth sin:
When people hear that machine learning is making great strides in some new domain, they tend to use as a mental model the way in which a person would learn that new domain. However, machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in, making rapid progress in a new domain without having to be surgically altered or purpose-built.
Likewise, when people hear that a computer can beat the world chess champion (in 1997) or one of the world’s best Go players (in 2016), they tend to think that it is “playing” the game just as a human would. Of course, in reality those programs had no idea what a game actually was, or even that they were playing. They were also much less adaptable. When humans play a game, a small change in rules does not throw them off. Not so for AlphaGo or Deep Blue.
Suitcase words mislead people about how well machines are doing at tasks that people can do. That is partly because AI researchers—and, worse, their institutional press offices—are eager to claim progress in an instance of a suitcase concept. The important phrase here is “an instance.” That detail soon gets lost. Headlines trumpet the suitcase word, and warp the general understanding of where AI is and how close it is to accomplishing more. More.
Show this to people who are alarmed by pop science claims about AI.
See also: Silicon Valley religion: “The final end of science is the revelation of the absurd”