You can picture yourself eating a chocolate ice cream sundae:
John Searle’s Chinese Room scenario is the most famous argument against the “strong AI” presumption that computation-writ-large-and-fast will become consciousness: … His argument shows that computers work at the level of syntax, whereas human agents work at the level of meaning: …
I still find Searle’s argument persuasive, despite decades of attempts by other philosophers to poke holes in it.
But there’s another, shorter and more intuitive argument against a materialist account of the mind. It has to do with intentional states. Michael Egnor and others have offered versions of this argument here at Mind Matters and elsewhere, but I’d like to boil it down to its bare bones. Then you can commit it to memory and pull it out the next time your office mate starts to worry about Skynet or denies that he has free will.
Imagine a scenario where I ask you to think about eating a chocolate ice cream sundae, while a doctor does an MRI and takes a real-time scan of your brain state. We assume that the following statements are true: … More.
Readers? Thoughts? You can’t comment at Mind Matters but you can here. Does Richards’ argument work?
See also: Jay Richards asks, can training for an AI future be trusted to bureaucrats?
Will AI lead to mass joblessness and social unrest?