In his new book, Pearl, now 81, elaborates a vision of how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning: instead of merely correlating fever and malaria, machines need the capacity to reason that malaria causes fever. Once such a causal framework is in place, machines can ask counterfactual questions, inquiring how the causal relationships would change under some intervention, which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to express this kind of thinking, a 21st-century counterpart of the Bayesian framework that allowed machines to think probabilistically.
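The contrast Pearl draws between association and causation can be sketched in code. Below is a minimal toy structural causal model (the variable names and probabilities are invented for illustration and are not from Pearl's book): observing a fever raises the estimated probability of malaria, but intervening to force a fever leaves that probability at its baseline, because in the model malaria causes fever and not the reverse.

```python
# A toy structural causal model, malaria -> fever, sketching the
# distinction between seeing (association) and doing (intervention).
# All probabilities here are made-up illustration numbers, not medical data.
import random

random.seed(0)

def sample(do_fever=None):
    """Draw one (malaria, fever) case. Passing do_fever simulates the
    intervention do(fever = value): it severs fever's dependence on malaria."""
    malaria = random.random() < 0.10          # assumed baseline malaria rate
    if do_fever is None:
        fever = random.random() < (0.90 if malaria else 0.05)
    else:
        fever = do_fever                      # forced by the intervention
    return malaria, fever

N = 200_000

# Association: observing fever makes malaria more likely.
obs = [sample() for _ in range(N)]
p_malaria = sum(m for m, _ in obs) / N
p_malaria_given_fever = sum(m for m, f in obs if f) / sum(f for _, f in obs)

# Causation runs one way: forcing a fever does not produce malaria.
inter = [sample(do_fever=True) for _ in range(N)]
p_malaria_do_fever = sum(m for m, _ in inter) / N

print(f"P(malaria)               ~ {p_malaria:.2f}")
print(f"P(malaria | fever)       ~ {p_malaria_given_fever:.2f}")  # well above baseline
print(f"P(malaria | do(fever=1)) ~ {p_malaria_do_fever:.2f}")     # near baseline
```

The `do(...)` idea, cutting a variable off from its normal causes before setting it, is the formal device Pearl's causal framework adds on top of ordinary conditional probability.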
Pearl believes that causal reasoning could give machines human-level intelligence. They would be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will, and for evil.
Pearl is recognized for developing a method that enables machines to make probabilistic calculations, but he feels the field is in a rut just now. He hopes to enable machines to build a model of reality, and he is quite convinced that they will have free will and the ability to do evil. We will know that a robot has chosen evil, he proposes, when it follows the advice of some software components while ignoring others: components that maintain norms of behavior programmed into the robot, or expected to be there on the basis of its past learning. The robot does evil when it stops following them.
This is heady talk for a field that, by Pearl's own assessment, is currently in a rut. Is it a rut or a gulf?
See also: Henry Kissinger: The End of the Enlightenment dawns, due to artificial intelligence