Uncommon Descent Serving The Intelligent Design Community

Bill Dembski on how a new book expertly dissects doomsday scenarios


The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021) is by AI researcher and tech entrepreneur Eric J. Larson. Dembski recalls earlier attempts to stem the tide of nonsense:

Back in 1998, I moderated a discussion at which Ray Kurzweil gave listeners a preview of his then-forthcoming book The Age of Spiritual Machines (1999), in which he described how machines were poised to match and then exceed human cognition, a theme he doubled down on in subsequent books (such as The Singularity Is Near (2005) and How to Create a Mind (2012)). For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the needed computational power to simulate our brains, after which the challenge will be for us to keep pace with machines. Kurzweil’s respondents at the discussion were John Searle, Thomas Ray, and Michael Denton, and they were all, to varying degrees, critical of his strong AI view. Searle recycled his Chinese Room thought experiment to argue that computers don’t and can’t actually understand anything. Denton made an interesting argument about the complexity and richness of individual neurons, how inadequate our understanding of them is, and how even more inadequate is our ability to model them realistically in computational terms. At the end of the discussion, however, Kurzweil’s overweening confidence in the glowing prospects for strong AI’s future was undiminished. And indeed, it remains undiminished to this day (I last saw Kurzweil at a Seattle tech conference in 2019 — age seemed to have mellowed his person but not his views).

William A. Dembski, “Unseating the Inevitability Narrative” at Amazon Customer Reviews

Were they too polite? Not thorough enough? Dembski sees Larson’s book as “far and away the best refutation” of the AI-overlords narrative we keep hearing. And he has followed the field for four decades:

In fact, I received an NSF graduate fellowship in the early 1980s to make a start at constructing an expert system for doing statistics… I witnessed in real time the shift from rule-based AI (common with expert systems) to the computational intelligence approach to AI (evolutionary computing, fuzzy sets, and neural nets) to what has now become big data and deep/machine learning. I saw the rule-based approach to AI peter out. I saw computational intelligence research, such as conducted by my colleague Robert J. Marks II, produce interesting solutions to well-defined problems, but without pretensions for creating artificial minds that would compete with human minds. And then I saw the machine learning approach take off, with its vast profits for big tech and the resulting hubris to think that technologies created to make money could also recreate the inventors of those technologies.

William A. Dembski, “Unseating the Inevitability Narrative” at Amazon Customer Reviews

More at Mind Matters News

You may also wish to read: Why Richard Dawkins thinks AI may replace us. He likes the idea because it is consistent with his naturalist philosophy. Dawkins does not advance an argument for why “anything that a human brain can do can be replicated in silicon,” apart from the fact that he is “committed to the view that there’s nothing in our brains that violates the laws of physics.”
