
Psychology prof Gregg Henriques argues that consciousness “plays by a different set of rules than the language game of science”:
“One problem is the ‘ontological problem’ of how it might be possible to engineer the felt experience of being. The other is the ‘epistemological problem’ of directly knowing another’s primary experience.”
So the first problem is: could we create a computer that, even though it is only a calculating machine and not a living being, is somehow conscious due to its massive RAM?
Second, none of us really knows for sure that anyone (except oneself!) is conscious. That’s the p-zombie problem: the p-zombie (the philosopher’s zombie) is your co-worker. What if he actually isn’t conscious but is merely programmed to follow carefully designed routines? How would you know?
News, “Consciousness is TWO hard problems, not one” at Mind Matters News
He’d like to see a scientific approach to this problem.

Meanwhile, a computer science prof counters the AI boosters with cogent explanations of why computers will never be conscious:
“Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell—which simply can’t be handled in the way today’s computers sense, process and store data.”
He highlights a number of additional problems with the idea that computers can be conscious…
News, “Computer science prof: Computers will never be conscious” at Mind Matters News