The Turing test and the Lovelace test are attempts to determine whether computers can show human-like intelligence. Holloway asks: what happens if researchers succeed in creating lifelike machines, machines that "want" things?
More research should be spent on a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence…
In the movie The Terminator, the humans use dogs to detect the terminators, but eventually the robots figure out how to use organic skin to fool the dogs. We can imagine the same thing happening with any external test of a terminator's appearance. So, to make a test that does not give us false positives, we need to look internally, to the fundamental limits of computers, and there are many of them.
An idealized computer is a Turing machine, and Turing machines have both logical limits and performance limits. The best-known limits are the halting problem and NP-completeness. The halting problem is provably unsolvable by any computer, and NP-complete problems become computationally intractable long before they reach useful sizes. Moreover, many problems that humans solve routinely fall into these categories. So, if we want a promising place to look for Terminator tests, this is where we should look.
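The performance limit mentioned above can be made concrete. Boolean satisfiability (SAT) is the canonical NP-complete problem, and the only method guaranteed to work in general is trying every assignment, which doubles in cost with each added variable. The sketch below (not from Holloway's article; the function name and clause encoding are my own illustration) shows the exhaustive search:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search for a satisfying assignment of a CNF formula.

    clauses: list of clauses; each clause is a list of ints, where
    k means variable k is true and -k means variable k is false.
    Returns (satisfying assignment or None, number of assignments tried).
    """
    tried = 0
    for bits in product([False, True], repeat=n_vars):
        tried += 1
        # A clause is satisfied if any of its literals is true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits, tried
    return None, tried

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2) is unsatisfiable,
# so the search must exhaust all 2**2 = 4 assignments.
model, tried = brute_force_sat([[1, 2], [-1, 2], [-2]], 2)
print(model, tried)  # None 4
```

At 2 variables this is 4 assignments; at 100 variables it is 2^100, far beyond any physical computer. That gap between "solvable in principle" and "solvable in practice" is the performance limit the quoted passage refers to.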
Yet, despite the threat of AI wiping out humanity, and the fecundity of possible applications, there is zero research into Terminator tests. So, move over, Turing and Lovelace tests. These will do nothing to save us. I challenge you, technically astute reader, to prevent the extinction of the human race and develop a Terminator test.

Eric Holloway, "Move Over Turing and Lovelace – We Need a Terminator Test" at Mind Matters News
Takehome: Holloway: If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. Thus, we need a Terminator test.
You may also like to read:
We Need a Better Test for True AI Intelligence. The difficulty is that intelligence, like randomness, is mathematically undefinable. The operation of human intelligence must be non-physical because it transcends Turing machines, which in turn transcend every physical mechanism. (Eric Holloway)
“Friendly” Artificial Intelligence Would Kill Us. Is that a shocking idea? Let’s follow the logic. We don’t want to invent a stupid god who accidentally turns the universe into grey glue or paperclips, but any god we create in our image will be just as incompetent and evil as we are. (Eric Holloway)
AI Is Not Nearly Smart Enough to Morph Into the Terminator. Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview. AI cannot, for example, handle ambiguities like flubbed headlines that can be read two different ways, Dr. Marks said.
A Scientific Test for True Intelligence. A scientific test should identify precisely what humans can do that computers cannot, avoiding subjective opinion. The “broken checkerboard” is not the ultimate scientific test for intelligence that we need, but it is a truly scientific test. (Eric Holloway)