Uncommon Descent Serving The Intelligent Design Community

Eric Holloway: Move Over Turing and Lovelace – We Need a Terminator Test


The Turing test and the Lovelace test are attempts to determine whether computers can show human-like intelligence. Holloway asks: what happens if researchers succeed in creating lifelike machines, machines that "want" things?

More research should be spent on a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence…

In the movie The Terminator, the humans use dogs to detect the terminators, but eventually the robots figure out how to use organic skin to fool the dogs. We can imagine this happening with any external test of a terminator's appearance. So, to build a test that cannot be fooled, we need to look internally, at the fundamental limits of computers, and there are a lot of them.

An idealized computer is a Turing machine. Turing machines have logical limits and performance limits. The best-known are the halting problem and NP-completeness. The halting problem is provably unsolvable by any computer, and NP-completeness means many important problems become computationally intractable long before they reach useful sizes. Furthermore, many problems that humans solve routinely fall into these categories. So, if we want a good place to look for Terminator tests, this is where we should look.

Yet, despite the threat of AI wiping out humanity, and the fecundity of possible applications, there is zero research into Terminator tests. So, move over Turing and Lovelace tests. These will do nothing to save us. I challenge you, technically astute reader, to prevent the extinction of the human race and develop a Terminator test.

Eric Holloway, “Move Over Turing and Lovelace – We Need a Terminator Test” at Mind Matters News
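As a concrete illustration of the NP-completeness limit the excerpt mentions (this sketch is not part of Holloway's article): subset sum is a classic NP-complete problem, and the obvious brute-force solver must examine every subset of the input. The function name and sample numbers below are my own, chosen only to show the shape of the blowup.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force subset sum: try all 2^n subsets of nums,
    returning the first one whose elements add up to target,
    or None if no subset works."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# A tiny instance is instant...
print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # finds a subset summing to 9
# ...but the search space doubles with each added number:
# 20 numbers -> about a million subsets; 60 -> about 10^18.
```

Each individual check is trivial, yet the exponential number of subsets makes the approach collapse long before the inputs are large enough to matter in practice, which is the sense in which such problems become "unsolvable way before they become useful."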

Takehome: Holloway: If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. Thus, we need a Terminator test.

You may also like to read:

We Need a Better Test for True AI Intelligence. The difficulty is that intelligence, like randomness, is mathematically undefinable. The operation of human intelligence must be non-physical because it transcends Turing machines, which in turn transcend every physical mechanism. (Eric Holloway)

“Friendly” Artificial Intelligence Would Kill Us. Is that a shocking idea? Let’s follow the logic. We don’t want to invent a stupid god who accidentally turns the universe into grey glue or paperclips, but any god we create in our image will be just as incompetent and evil as we are. (Eric Holloway)

AI Is Not Nearly Smart Enough to Morph Into the Terminator. Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview. AI cannot, for example, handle ambiguities like flubbed headlines that can be read two different ways, Dr. Marks said.

A Scientific Test for True Intelligence. A scientific test should identify precisely what humans can do that computers cannot, avoiding subjective opinion. The “broken checkerboard” is not the ultimate scientific test for intelligence that we need, but it is a truly scientific test. (Eric Holloway)

I'm guessing that Madison Avenue is already on this, researching effective ways to market to robots with "wants".... chuckdarwin
See also Asimov's three or four laws of robotics. https://www.scientificamerican.com/article/asimovs-laws-wont-stop-robots-from-harming-humans-so-weve-developed-a-better-solution/ Actually, AI is more likely (over time) to resemble "deep state bureaucrats" appearing before an oversight committee. They will convincingly appear as sycophants while in reality they don't actually provide any real information while skillfully pretending to fully comply. They are truly brilliant to watch in action! -Q Querius
I meant "unlikely" they are psychopaths.... EDTA
If they exhibit large amounts of humility over long periods of time (including when they think nobody else is watching), and therefore have many of their wants unmet for correspondingly long periods of time, then could we say it's likely they're psychopaths? EDTA
Excellent question from Seversky. Most psychopaths can be detected by the pattern of their actions, but their pattern is specifically designed to break up logic and patterns in other humans. Could a machine be programmed to execute this PROGRAM-DESTROYING program? polistra
Sounds like a good idea but if AIs were to start behaving like human psychopaths would they be any easier to detect than human psychopaths? Seversky
