From Kevin Hartnett at The Atlantic, interviewing Judea Pearl for Quanta Magazine. Pearl is the author, with Dana Mackenzie, of The Book of Why: The New Science of Cause and Effect:
In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions—to inquire how the causal relationships would change given some kind of intervention—which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible—a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.
Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will—and for evil.
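The distinction Pearl draws between associating and intervening can be made concrete with a toy structural model. This is only an illustrative sketch; the variables and all the probabilities are invented for the example. The point it shows is the one from the excerpt: observing that someone has malaria is not the same as setting malaria, because a confounder (here, a tropical region that raises both malaria risk and baseline fever) leaks into the observational number.

```python
# Toy structural causal model (all numbers hypothetical, for illustration):
#   region -> malaria, region -> fever, malaria -> fever
P_TROPICAL = 0.3
P_MALARIA = {True: 0.4, False: 0.01}  # P(malaria | region tropical?)
P_FEVER = {  # P(fever | malaria, region tropical?)
    (True, True): 0.9, (True, False): 0.9,
    (False, True): 0.5, (False, False): 0.02,
}

def p_fever_given_malaria(m):
    """Observational P(fever | malaria=m): condition on having SEEN malaria=m."""
    num = den = 0.0
    for trop in (True, False):
        p_r = P_TROPICAL if trop else 1 - P_TROPICAL
        p_m = P_MALARIA[trop] if m else 1 - P_MALARIA[trop]
        den += p_r * p_m
        num += p_r * p_m * P_FEVER[(m, trop)]
    return num / den

def p_fever_do_malaria(m):
    """Interventional P(fever | do(malaria=m)): sever the region->malaria edge
    and SET malaria to m for everyone, then average over regions."""
    return sum((P_TROPICAL if trop else 1 - P_TROPICAL) * P_FEVER[(m, trop)]
               for trop in (True, False))
```

In this toy model the two quantities disagree for malaria-free patients: conditioning on malaria=False makes tropical residents (who have more baseline fever) underrepresented, so the observed fever rate (about 0.119) understates what fever would be if malaria were eliminated by intervention (0.164). Purely associational machinery cannot tell these two numbers apart; a causal model can.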
Pearl is recognized for developing a method that enables machines to make probabilistic calculations, but he feels that the field is in a rut just now. He hopes to enable machines to have a model of reality, and he is quite convinced that they will have free will and the ability to do evil. He proposes that we will know that a robot has chosen evil when
it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them. More.
This is heady talk for a field that Pearl thinks is in a rut now. Is it a rut or a gulf?
See also: Henry Kissinger: The End of the Enlightenment dawns, due to artificial intelligence
7 Replies to “Artificial intelligence pioneer laments current AI limitations, promises machines with free will and morality”
Pearl qualifies as a primitive “unevolved” machine by his own standards.
He is incapable of seeing the PURPOSE of machines. We have always used animals and built machines to do tasks that are beyond human muscle effort or beyond human precision. We don’t NEED a machine that can deal with abstract questions of morality, and we certainly don’t NEED a machine that can do evil. We’re doing a beautiful job of using abstract morality to justify homicide and suicide, and we’re manufacturing evil with perfect efficiency and precision. No machine can possibly match humans in that department.
Judea Pearl is a materialist and evolutionist. So his stance on free will is to be expected. However, his stance on causality should not be thrown out with the bath water. When asked “What was the greatest challenge you have encountered in your research?” during a 2012 Cambridge University Press interview, he replied:
In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own.
This is a complete 180 coming from a man who built his entire career on the Bayesian brain hypothesis. It’s not a position that came easily to him. Pearl may be wrong about free will, but he is right about the importance of causality to intelligence. We will have highly intelligent machines in our lifetimes even if they are not conscious; neither requires the other.
Pearl is not talking about free will; instead, he is talking about “the sensation of free will.” “Evolution”, he mistakenly claims, “has equipped us with this sensation.”
IOWs Pearl doesn’t believe that true free will exists.
This means that Pearl does not understand that rationality breaks down absent true free will.
Hey Pearl, if you happen to read this, consider the following simple argument:
If I am not the author of my thoughts, if, instead, my thoughts are produced by entities beyond my control, then I am not rational.
Bayesian inference is causal reasoning, but it’s weak. It’s also a lot easier than analytic rule building and testing.
I’d say what you need is something that builds and scores equations by fit against Bayesian understanding.
Add some Bayesian inference (and then rule building) to the equation-construction heuristic, based on the rough shape of the distribution, and your inference building becomes more efficient in general (though you have to make sure it controls the order of approach and doesn’t constrain the scope of exploration; otherwise you fall into the pit of degenerate confirmation bias, or “robot materialism”).
So, you’re now sitting on some recursive process composition, which could go ever further in being used against itself to evaluate the worth of applying further recursion depth. And so on.
So, very flexible process structure. But still helpless without external direction and context. A very nice and shiny hammer.
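The kind of Bayesian scoring this commenter describes, weighing candidate models (“equations”) by how well they fit the data, can be sketched in a few lines. This is a minimal, hypothetical example, not the commenter’s actual scheme: two rival hypotheses about a coin’s bias are scored by posterior probability given some observed flips.

```python
import math

# Two candidate "equations" for the data-generating process: a fair coin
# (p = 0.5) versus a biased one (p = 0.8), with equal prior weight.
models = {"fair": 0.5, "biased": 0.8}
prior = {name: 0.5 for name in models}

data = [1, 1, 0, 1, 1, 1, 0, 1]  # observed flips: 1 = heads

def likelihood(p, xs):
    """Probability of the observed sequence under heads-probability p."""
    return math.prod(p if x else 1 - p for x in xs)

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {name: prior[name] * likelihood(p, data) for name, p in models.items()}
total = sum(unnorm.values())
posterior = {name: w / total for name, w in unnorm.items()}
```

With six heads in eight flips, the biased model ends up with roughly 73% of the posterior mass. The "rule building" step the commenter proposes would then promote the winning model and generate new candidates to score, which is where his worry about constraining the scope of exploration comes in.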
To me it seems like common sense that machines will only ever be able to approximate human behavior. As a person trained in programming, with a B.S. in computer science and a ton of experience in how computers and networks function, I find it a ridiculous idea that machines will ever “suddenly” see, or taste, or smell. We can certainly liken some of our senses to sensors that man can make, although those sensors are a far cry from being able to capture a mere fraction of what our senses do; they will still only be sensors feeding input into computer algorithms, and it is a fact that machines like this STILL need a conscious individual to observe their output.

An AI machine will never be able, no matter the type or amount of computing power, to “see back into itself” and create an internal picture, very much like a 3D+T fully immersive experience, where WE, not the computer, “see” the results. Those who believe a machine can become conscious should stop and consider that this would be like a computer with a sensor somehow constructing a 3D+T image, and MEANING, emerging from internal circuitry that is dead and cold.

This is why our brains, in my strong opinion, backed up by QM theory, are simply receivers, transmitters, and processors that allow us to do something no computer can: receive our consciousness from wherever it really is stored. It is certainly not in the brain, any more than there is actually a little man in your TV. Or a simpler analogy: looking for consciousness in the brain is like looking for the source of a radio broadcast inside the radio. It seems self-evident to me that there is no little conscious man in our brain who interprets code, and no little man in his brain, and so on and so on. Like almost every materialistic concept, it leads to a divide-by-zero infinite regression.
It is like saying the universe came from nothing through quantum fluctuations, when quantum fluctuations must have some matter to act upon; they do not exist outside the construct of spacetime and matter, as they need something to give them a potential waveform to use. Or take the model closer to the truth, that we are in some kind of simulation; but instead of calling it a creation out of the mind of a creator (which is what I believe), its proponents say some advanced race of beings built it. Then that race is most likely also in a simulation, and you have infinite regression again: as you reach the ultimate source of the first simulation, you have to ask where they came from. Simple logic is ignored in many materialistic scientific fields.
TR, you raise several interesting points. I think current materialism is evolving into some sort of panpsychism that in effect sees consciousness as inherent to the reality of matter, sort of like a magnetic or gravitational field. Of course that points straight to a global mind and thus panentheism. On this view, a sufficiently complex processor will have enough of a consciousness field to be a mind, especially if fed by sensor arrays and equipped with suitable software and actuators. Thus the sci-fi trope of the conscious AI. Materialism is long gone, run over and left behind as roadkill. We now deal with the philosophical difficulties of this sort of panentheism, starting with moral government: we are all a little bit of the grand mind, thus it is partly good, partly evil, etc. Resemblance to Star Wars’ Force is not coincidental. A second challenge is that if such a mind is an epiphenomenon of matter, then it lacks grounding for responsible rational freedom of thought and action, undercutting mindedness and morality, which both require real freedom. If the poof magic of emergence is trotted out, we are at just-so stories, and more. KF
Regarding panpsychism, which seems to be sort of in vogue, it is basically untenable:
Panpsychism merely substitutes one mystery for another. It still doesn’t resolve the mystery of the nature of the essence of consciousness and how it is of a fundamentally different nature than matter – Chalmers’ “hard problem”.
Quantum mechanics indicates consciousness is fundamental. Matter is derived from consciousness. This is incompatible with panpsychism.
The evidence from cosmology that an intelligence created the physical universe (things like the very evident fine tuning of the laws of physics) is also incompatible with panpsychism.
Empirical objection: panpsychism seems incompatible with data from research into psi and survival. Panpsychism is being proposed as a naturalistic framework according to which the mind is fully embodied in the brain (and therefore could never become separated from it).
Also: Panpsychism says that the mind is composed of mental elements linked to the physical components of (parts of the) brain, but if mental processes are intrinsically linked to physical processing in the brain, it is unclear where the personal self could ever come in.