Artificial intelligence: Conversing with computers? … or with their programmers?
October 6, 2008 | Posted by O'Leary under Intelligent Design
One reason the artificial intelligence fantasy ("Soon computers will think and feel just like people!") has enjoyed such a long shelf life is a fundamental misunderstanding: the belief that the computer is thinking.
Actually, the computer is not thinking. A programmer has developed a series of responses to our likely inputs. To the extent that the programmer can guess what we need, things work. One way of describing this is "thought, in the past tense."
Just yesterday, for example, I was trying to order ten copies of a book from an automated book ordering site. But the programmer apparently forgot to build in the option of ordering ten copies at once. Needless to say, I was hardly going to order one copy ten times. But it’s no use trying to talk to the computer. I e-mailed the office and asked to have someone phone me.*
That’s what I mean by “thought, in the past tense.” If the programmer didn’t think of it, the computer won’t either.
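A minimal sketch of "thought, in the past tense" (the request strings and replies here are hypothetical, invented purely for illustration): the program only "knows" the replies its programmer wrote down ahead of time, and anything unanticipated falls through to a dead end.

```python
# Hypothetical canned-response system: all the "thinking" happened
# in the past, when the programmer wrote this table.
CANNED_RESPONSES = {
    "order one copy": "One copy has been added to your cart.",
    "check order status": "Your order is being processed.",
}

def respond(request: str) -> str:
    """Return the pre-scripted reply for a request the programmer
    anticipated, or a dead-end message for anything else."""
    return CANNED_RESPONSES.get(
        request.lower().strip(),
        # The programmer's blind spot: no one thought of this request.
        "Sorry, I don't understand that request.",
    )
```

Ask it to "order ten copies" and it hits the fallback, not because the machine considered the request and declined, but because no one ever considered it at all.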
Now, fast forward to the Turing test (can a machine fool you into believing it is a person?), which is once again being conducted. David Smith, the Observer's technology correspondent, reports:
Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes.
In the “Turing test” a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking”. (“‘Intelligent’ computers put to the test. Programmers try to fool human interrogators,” October 5, 2008)
On October 12, the designers of six computer programs will compete for the Loebner Prize in Artificial Intelligence – an 18-carat gold medal and $100,000. Volunteers will sit at a computer with a split screen, one half operated by another human and the other by a program. After five minutes of text-based conversation, they must guess which is which. If 30% of the judges cannot tell, the computer is said to be "thinking."
I’ve always felt there was something pretty fishy about this “Turing test”, and I agree with philosopher A.C. Grayling, who points out:
‘The test is misguided. Everyone thinks it’s you pitting yourself against a computer and a human, but it’s you pitting yourself against a computer and computer programmer. AI is an exciting subject, but the Turing test is pretty crude.’
(Note: I think Grayling means that you are really pitting yourself against a computer programmer who has coded responses to possible questions in advance, on the one hand, and against a human who is simply generating responses in real time, on the other.)
I have no doubt that a programmer with Oscar Wilde’s gift for dialogue could program a computer to be a clever conversation partner. Most people will believe that the computer is human if it just sounds wittier or sexier than they do. In fact, the only reason this isn’t yesterday’s news is that so many computer nerds are inarticulate and wouldn’t have any idea what to program the computer to say.
My level of confidence in the Turing test did not improve when I read cyberneticist Kevin Warwick’s explanation that machines are in fact conscious:
I would say now that machines are conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat, which is different from a human. I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it’s more the appearance of it, and this test is an important aspect of appearance.
Professor Warwick, did you get the memo on the “hard problem of consciousness”?
It really is a hard problem. They’re not just making that up to get research funding.
Computers: Most engineers must have guessed that they are not robots
Artificial intelligence: A look at things that neither we nor computers can discover
Can a conscious mind be built out of software?
Also, just up at The Mindful Hack:
Altruism: Can mathematics, with a dash of faith, explain altruism?
Spirituality: Is this a trend? Guy tries Judaism “on spec” – discovers 7-day no-refund policy, ends as famous pulpit rabbi
Neuroscience: Getting past the “You are a computer made of meat” phase
Psychology: Picture yourself deciding you actually like the way you look!
*They have since phoned me. They are now trying to reconfigure the software. The software is not trying to reconfigure itself.