Most people who frequent these pages are familiar with the Turing Test. Turing proposed that a judge would evaluate text responses from a machine and a human. If the judge could not tell which was human, the machine would have passed the test. The Turing Test measures machine intelligence based on a communication metric. In other words, if the AI can talk like a human, it is as intelligent as a human.
Some researchers, like our own Robert Marks, think the Turing Test is too easy. They say creativity, not mere communication, is the real measure of human intelligence, and they have advanced the “Lovelace Test” as a superior alternative. An AI would pass the Lovelace Test by doing something “surprising.” For example, the AI could be asked to write a story, and the AI would pass if the programmer could not explain how the AI came up with the story. The AI would, itself, be considered creative, as opposed to an extension of its creator’s creativity (as is the case with chess- and Go-playing computers).
If an AI were able to pass both the Turing Test and the Lovelace Test, would we then know it is conscious in the same way humans are conscious? No. The reason for this conclusion, which might be surprising for some, is simple. We can’t “know” that even other humans are conscious; far less can we know that an AI is conscious.
Whoa, Barry. Get a grip. Are you suggesting that you do not know other humans are conscious? In a sense, yes, I am. By its very nature, consciousness, as evidenced by subjective self-awareness, can be known only by subjective experience. And I can have subjective experience only of my own self. I cannot be subjectively self-aware of any other self. It follows that I can be certain only of my own consciousness.
Of course, by no means am I denying that other humans are conscious. I feel confident they are. I am merely saying that I can experience only my own consciousness. My own experience of consciousness is the primary evidence of the fact that I am conscious. I cannot have primary evidence that any other person is conscious. I can infer to a very high degree of confidence that other humans are conscious, but that inference is based on secondary evidence. To use a crude example, I regard my own empathy as an attribute of my consciousness. When my wife cries at the end of Old Yeller, I infer from this outward reaction that she also has empathy. And from this I infer further that her empathy is an attribute of her consciousness just as mine is, and therefore she in fact is conscious. But I cannot know that she is in the same way that I know that I am. Conceivably, my wife is an AI programmed to shed tears when a beloved pet dies. I am very confident that is not the case, but I cannot know it for certain.
These are not original ideas. There is a large literature on the concept of the “philosophical zombie,” which rests on the insight that we cannot experience another person’s consciousness; we can only infer it. If we can only infer (and not know) that another human is conscious, it follows that no matter how sophisticated an AI is, we can never know that it is conscious. If an AI becomes so powerful that we cannot distinguish it from a human, we might infer that it is conscious, but we will never be able to know it for certain.