Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and in that interview Larson elucidates many of the key points in his book. The one place in the interview where I wish he had elaborated further was on the question of abductive inference (aka retroductive inference or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely never will, perform abductive inferences is the problem of the underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward.

For context, if you are going to get a computer to achieve anything like understanding in some subject area, it needs a lot of knowledge. That knowledge, in all the cases we know, needs to be painstakingly programmed. This is true even of machine learning, where the underlying knowledge framework must be explicitly programmed (for instance, even Go programs that achieve world-class playing status need many rules and heuristics programmed in by hand).
Humans, on the other hand, need none of this…

– William A. Dembski, “Why computers will likely never perform abductive inferences” at Mind Matters News
Takehome: Computers require complete data to come to a correct conclusion, but humans often work very well with incomplete data.
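To see why data underdetermine explanation, consider a toy illustration (our sketch, not Larson’s or Dembski’s): two different hypotheses that agree on every observation we have, yet diverge on cases we have not seen. The data alone cannot decide between them; picking the “best” explanation is exactly the abductive step the data do not dictate.

```python
# A minimal illustrative sketch (not from Larson or Dembski) of
# underdetermination: two hypotheses that fit the same finite data
# perfectly yet make different predictions about unseen inputs.

observations = [(0, 0), (1, 1), (2, 4)]  # finite data, consistent with y = x^2

def hypothesis_a(x):
    return x ** 2

def hypothesis_b(x):
    # Agrees with hypothesis_a at x = 0, 1, 2 because the extra term
    # x(x-1)(x-2) vanishes at those points, but diverges elsewhere.
    return x ** 2 + x * (x - 1) * (x - 2)

# Both hypotheses fit every observation exactly...
assert all(hypothesis_a(x) == y for x, y in observations)
assert all(hypothesis_b(x) == y for x, y in observations)

# ...yet they disagree about the very next unseen case.
print(hypothesis_a(3), hypothesis_b(3))  # 9 vs. 15
```

For any finite data set, infinitely many hypotheses fit it perfectly, so choosing among them requires a kind of judgment that humans exercise routinely and that, on this argument, cannot simply be read off the data.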
PS: By the way, we told you Dembski was back, didn’t we?
You may also wish to read:
Are we spiritual machines? Are we machines at all? Inventor Ray Kurzweil proposed in 1999 that, within the next thirty years, we would upload ourselves into computers as virtual persons, programs on machines. The themes and misconceptions about computers and artificial intelligence that made headlines in the late 1990s persist to this day.
A critical look at the myth of “deep learning” “Deep learning” is as misnamed a computational technique as exists. The phrase “deep learning” suggests that the machine is doing something profound and beyond the capacity of humans. That’s far from the case.
Artificial intelligence understands by not understanding The secret to writing a program for a sympathetic chatbot is surprisingly simple… We needed to encode grammatical patterns so that we could reflect back what the human wrote, whether as a question or statement. (A minimal sketch of this reflection technique appears after this list.)
Automated driving and other failures of AI How would autonomous cars manage in an environment where eye contact with other drivers is important? In cosseted and sanitized environments in the U.S., we have no clue what AI must achieve to truly match what humans can do.
Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.
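As noted above, here is a minimal, hypothetical sketch of the reflection technique the chatbot article describes (the word list and function are our illustration, not the article’s actual code): map first-person words to second-person words and echo the input back as a question.

```python
# Hypothetical ELIZA-style reflection sketch (our illustration, not
# the article's code): swap person-words, then echo the input back
# as a question so the bot appears to "understand".

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(sentence: str) -> str:
    words = sentence.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am worried about my future."))
# -> Why do you say you are worried about your future?
```

The program manipulates surface patterns only; nothing in it represents what worry or a future is, which is the point of understanding by not understanding.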