Dembski continues to reflect on Erik J. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). He recalls his experiences learning to write boilerplate for a psychology chatbot back in 1982:
William Dembski: In 1982, I sat in on an AI course at the University of Illinois at Chicago where the instructor, a well-known figure in AI at the time by the name of Laurent Siklossy, gave us, as a first assignment, the task of writing an ELIZA program. Siklossy, a French-Hungarian, was a funny guy. He had written a book titled Let’s Talk LISP (that’s funny, no?), LISP being the key AI programming language at the time and the one in which we were to write our version of ELIZA. I still remember the advice he gave us in writing the program…
We needed to encode grammatical patterns so that we could reflect back what the human wrote, whether as a question or statement …
William Dembski, “Artificial intelligence understands by not understanding” at Mind Matters News
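For readers who haven’t seen how such a program works, here is a minimal sketch of the pattern-and-reflect trick Dembski describes, written in Python rather than the LISP of the original assignment. The patterns and the pronoun-swap table below are made up for illustration; Weizenbaum’s ELIZA used a much larger, ranked keyword script, but the principle is the same: match the user’s sentence against a template, swap the pronouns, and hand it back as a question.

```python
import re

# Hypothetical reflection table: swap first- and second-person forms
# so the bot can echo the user's statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few illustrative patterns; a real ELIZA script has many more,
# with ranked keywords and multiple responses per keyword.
PATTERNS = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\?$", re.I), "Why do you ask that?"),
]

DEFAULT = "Please tell me more."

def reflect(fragment: str) -> str:
    """Swap pronouns so 'my job' becomes 'your job', etc."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Match the utterance against each pattern and reflect the capture back."""
    for pattern, template in PATTERNS:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT

if __name__ == "__main__":
    print(respond("I need a vacation from my advisor"))
    # -> "Why do you need a vacation from your advisor?"
```

The point, as the title of Dembski’s piece suggests, is that the program “understands” nothing; it simply manipulates the surface grammar of whatever it is handed.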
Well, spinning away from Adventures in Botworld for a moment, Larson’s book is doing quite well. Apparently, not everyone is a dummy for bots. Look at this:
- Best Sellers Rank: #17,440 in Books (See Top 100 in Books)
- Customer Reviews: 5.0 out of 5 stars (5 ratings)
If you’ve read the book, you might want to get in there and write a review before the trolls arrive. ‘Course, Amazon put in a new rule a little while back where reviewers have to buy the book before they can trash it. So maybe there’s hope for a civilized discussion.
You may also wish to read:
Automated driving and other failures of AI. How would autonomous cars manage in an environment where eye contact with other drivers is important? In cosseted and sanitized environments in the U.S., we have no clue what AI must achieve to truly match what humans can do.
and
Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.