Uncommon Descent Serving The Intelligent Design Community

William Dembski: Artificial intelligence understands by not understanding

Erik J. Larson

Dembski continues to reflect on Erik J. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). He recalls his experiences learning to write boilerplate for a psychology chatbot back in 1982:

William Dembski: In 1982, I sat in on an AI course at the University of Illinois at Chicago where the instructor, a well-known figure in AI at the time by the name of Laurent Siklossy, gave us, as a first assignment, the task of writing an ELIZA program. Siklossy, a French-Hungarian, was a funny guy. He had written a book titled Let’s Talk LISP (that’s funny, no?), LISP being the key AI programming language at the time and the one in which we were to write our version of ELIZA. I still remember the advice he gave us in writing the program…

We needed to encode grammatical patterns so that we could reflect back what the human wrote, whether as a question or statement …

William Dembski, “Artificial intelligence understands by not understanding” at Mind Matters News
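The pattern-reflection trick Dembski describes — matching a grammatical template and echoing the user's own words back with pronouns swapped — is simple enough to sketch in a few lines. The original assignment was in LISP; here is a minimal, hypothetical Python sketch of the same idea (the patterns and word list are illustrative, not Weizenbaum's actual ELIZA script):

```python
import re

# Pronoun swaps used to "reflect" the user's words back at them.
# This is the whole trick: the program understands nothing; it only
# transforms grammatical patterns, which is Dembski's point.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few illustrative pattern -> response templates (hypothetical;
# a real ELIZA script has many more, with ranked keywords).
PATTERNS = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"(.+)\?$"), "What makes you ask that?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching template, filled with reflected text."""
    for pattern, template in PATTERNS:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."

print(respond("I need my own computer"))  # -> Why do you need your own computer?
```

Whether phrased as a question or a statement, the input is never interpreted, only rearranged — which is why a short program suffices.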

Well, spinning away from Adventures in Botworld for a moment, Larson’s book is doing quite well. Apparently, not everyone is a dummy for bots. Look at this:

If you’ve read the book, you might want to get in there and write a review before the trolls arrive. ‘Course, Amazon put in a new rule a little while back: reviewers have to buy the book before they can trash it. So maybe there’s a chance for a civilized discussion.

You may also wish to read:

Automated driving and other failures of AI. How would autonomous cars manage in an environment where eye contact with other drivers is important? In cosseted and sanitized environments in the U.S., we have no clue what AI must achieve to truly match what humans can do.


Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.

This part is really a critique of psychotherapy, not a critique of AI. If an official method of therapy is simpler than a short program, it's not an especially powerful skill. By contrast, janitors and housekeepers and gardeners still haven't been replaced by software. Their skill is on a much higher level.

— polistra
