In connection with a new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021) by Erik J. Larson, Bill offers some thoughts on how human intelligence isn’t being — and can’t be — duplicated:
… it would be interesting to see what fully automated driving would look like in a place like Moldova. A U.S. friend of mine who happened to visit the country was surprised at how Moldovan drivers managed to avoid hitting each other despite a lack of clear signals and rules about when to take an opportunity and when to hold back. When he asked his Moldovan guide how the drivers managed to avoid accidents, the guide answered with two words: “eye contact.” Apparently, the drivers could tell from each other’s eyes who was willing to hold back and who was ready to move forward. Now that’s a happy prospect for fully automated driving. Perhaps we need “level 6” automation, at which AI systems learn to read the eyes of drivers to determine whether they are going to hold back or make that left turn into oncoming traffic.
This example suggests to me that AI is hopelessly behind the full range of human intellectual capabilities. It also suggests that we, in the cosseted and sanitized environments that we have constructed for ourselves in the U.S., have no clue what capabilities AI actually needs to achieve to truly match what humans can do. The shortfall facing AI is extreme.

William Dembski, “Automated driving and other failures of AI” at Mind Matters News
Takehome: In the cosseted and sanitized environments of the U.S., Dembski says, we have no clue what AI must achieve to truly match what humans can do.
You may also wish to read:
Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.