Mind News

Turing Test: Chatbots flunk once again at being human


In “Chatbots fail to convince judges that they’re human” (New Scientist, 20 October 2011), Paul Marks recounts his experiences as a judge in Turing tests:

A chatbot called Rosette won the $4000 annual Loebner Prize in Artificial Intelligence at the University of Exeter yesterday – but once again none of the four chatbots that were competing managed to convince any of the judges that they were human.

After computer pioneer Alan Turing in 1950 posited the notion that machines might one day be thought of as “thinking,” the competition attempts to find a computer program whose chat responses are indistinguishable from a human’s. They are nowhere near it.

Not what was confidently predicted by consensus science two decades ago.

Every year since 1991, the prize’s founder, Hugh Loebner, has asked four judges to sit at computer terminals where they can talk to both a human (who’s hiding in another room) and a chatbot – but they are not told which is which. It’s up to the judges to decide which is the person and which is the software and then rate the chatbots on how good they are at human mimicry. A chatbot has only seemed more human than a human once in the competition’s history – but that, says Loebner, only occurred when one human volunteer decided to behave like an early chatbot, skewing the results.

So not only can’t machines stand in for humans, but – it gets worse – humans can mess up things trying to stand in for machines.

The trouble with machines is, they can’t sound brainless enough to stand in for a certain type of seat-beside-you bus passenger who wants you to know all that’s wrong with her hair, her clothes, her landlord, her dog, her job, and her boyfriend, all the way to Ottawa. And leaves you wondering, “Is the brain really an illusion?”
