
Artificial intelligence: Conversing with computers? … or with their programmers?


One reason the artificial intelligence fantasy (“Soon computers will think and feel just like people!”) has enjoyed such a long shelf life is a fundamental misunderstanding: the belief that the computer is thinking.

Actually, the computer is not thinking. A programmer has developed a series of responses to our inputs. To the extent that the programmer can guess what we need, things will work. One way of seeing this is “thought, in the past tense.”

Just yesterday, for example, I was trying to order ten copies of a book from an automated book ordering site. But the programmer apparently forgot to build in the option of ordering ten copies at once. Needless to say, I was hardly going to order one copy ten times. But it’s no use trying to talk to the computer. I e-mailed the office and asked to have someone phone me.*

That’s what I mean by “thought, in the past tense.” If the programmer didn’t think of it, the computer won’t either.
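
To make the point concrete, here is a minimal sketch in Python of what “thought, in the past tense” looks like. The form, the function name, and the list of quantities are my own invention, not the actual site’s code; the point is only that every option the program offers was decided by the programmer ahead of time.

# A minimal sketch (invented order form, hypothetical names) of "thought
# in the past tense": every option the program accepts was decided by the
# programmer ahead of time, and anything not anticipated simply fails.

ANTICIPATED_QUANTITIES = {1, 2, 3, 4, 5}  # whatever the programmer thought of

def place_order(title, quantity):
    """Accept an order only if the programmer anticipated this quantity."""
    if quantity in ANTICIPATED_QUANTITIES:
        return f"Ordered {quantity} copies of {title!r}."
    # The program is not thinking about the request; it is replaying a
    # decision the programmer already made, or failed to make.
    return "Sorry, that option is not available. Please contact the office."

print(place_order("Some Book", 3))    # works: the programmer thought of it
print(place_order("Some Book", 10))   # fails: the programmer did not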

Now, fast forward to the Turing test (can a machine fool you into believing it is a person?), which is once again being staged. David Smith, the Observer’s technology correspondent, reports,

Can machines think? That was the question posed by the great mathematician Alan Turing. Half a century later six computers are about to converse with human interrogators in an experiment that will attempt to prove that the answer is yes. 

In the ‘Turing test’ a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be ‘thinking’. (“‘Intelligent’ computers put to the test. Programmers try to fool human interrogators,” October 5, 2008)

On October 12, the designers of six computer programs are competing for the Loebner Prize in Artificial Intelligence – an 18-carat gold medal and $100,000. Volunteers will sit at a computer, half of whose split screen is operated by another human and half by a program. After five minutes of text-based talk, they must guess which is which. If 30% of them are unsure, the computer is said to be “thinking.”
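
For what it is worth, the pass criterion described above is simple arithmetic. A rough sketch in Python (my own illustration, not the official Loebner scoring code) might look like this:

def passes_loebner_threshold(judge_fooled):
    """Return True if at least 30% of judges could not pick out the program."""
    fooled = sum(1 for verdict in judge_fooled if verdict)
    return fooled / len(judge_fooled) >= 0.30

# Example: 4 of 12 judges are unsure which side is the program (about 33%),
# so under this criterion the program would be said to be "thinking".
verdicts = [True, False, False, True, False, False,
            True, False, False, True, False, False]
print(passes_loebner_threshold(verdicts))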

I’ve always felt there was something pretty fishy about this “Turing test”, and I agree with philosopher A.C. Grayling, who points out, 

‘The test is misguided. Everyone thinks it’s you pitting yourself against a computer and a human, but it’s you pitting yourself against a computer and computer programmer. AI is an exciting subject, but the Turing test is pretty crude.’

 

(Note: I think Grayling means that you are really pitting yourself against two humans: a computer programmer who has coded responses to possible questions in advance, and a non-programmer who is simply generating responses in real time.)

I have no doubt that a programmer with Oscar Wilde’s dialogue skills could program a computer to be a clever conversation partner. Most people will believe that the computer is human if it just sounds wittier or sexier than they do. In fact, the only reason this isn’t yesterday’s news is that so many computer nerds are inarticulate, and wouldn’t have any idea what to program the computer to say.

My level of confidence in the Turing test did not improve when I read cyberneticist Kevin Warwick’s explanation that machines are in fact conscious:

I would say now that machines are conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat, which is different from a human. I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it’s more the appearance of it, and this test is an important aspect of appearance.

 

Professor Warwick, did you get the memo on the “hard problem of consciousness”?

It really is a hard problem. They’re not just making that up to get research funding.

See also:

Computers: Most engineers must have guessed that they are not robots

Artificial intelligence: A look at things that neither we nor computers can discover

Can a conscious mind be built out of software?

Also, just up at The Mindful Hack:

Altruism: Can mathematics, with a dash of faith, explain altruism?

Spirituality: Is this a trend? Guy tries Judaism “on spec” – discovers 7-day no-refund policy, ends as famous pulpit rabbi

Neuroscience: Getting past the “You are a computer made of meat” phase

Psychology: Picture yourself deciding you actually like the way you look!

*They have since phoned me. They are now trying to reconfigure the software. The software is not trying to reconfigure itself.

Comments
Another thought on this: The term "intelligent" has all sorts of expectations that go with it. However, whether or not a computer algorithm fulfills all those expectations is a matter of design, or of how well the design was implemented. Whether or not we have the computer program we think we do is always an issue in quality software development.

jjcassidy
October 7, 2008 at 3:24 PM PST
The most ironic thing about the Turing Test is that it rests on a high-level assertion about a gap in order to promote a high-level assertion as an accurate description. Now, the skeptical method of dealing with gaps suggests that the details will show the high-level assumption was premature or inaccurate: what we found here was not some fuzzy notion of "intelligence", just silicon and bits and bytes and instruction codes. And in contrast to the promise that the analogous development of "Future Science" will resolve knowledge gaps, an engineered application has no lack of actual details known by somebody in the here and now, as any software engineer can attest.

An example: right now this is a loose analogy, but I want to show that it has the same weakness to the dynamic of "discovery over time" that "God of the Gaps" arguments are said to have. To see how this fits with standard skepticism about gap theories, we can project a computer that fools--er, convinces--99% of humans into believing it is sentient. We pronounce the program "intelligent" with a 99% confidence level. But let's imagine that the confidence level of the individuals making this assessment is distributed around 50% to 60%, while a select few are able to throw the algorithm into very mechanical behavior. These people, in contrast, are nearly 100% convinced that it is most definitely NOT a sentient being, but a program.

First, we have a large group that is convinced the program is "intelligent", which motivated the idea that "intelligent" was an accurate assessment. But when shown this flaw, which they can repeat themselves, they realize that they judged prematurely. They then change their assessment and become fairly convinced that they are not talking to a personality.

So there is an analog to the "Future Science" resolution. Not only do we have all these mechanical charts that we've kind of thrown over our shoulder, but we also have the discovery over time of bugs that reverse the conviction of the people who pronounced the program "intelligent" in the first place. Thus the positive statement "This computer is intelligent per Turing's Test" is equivalent to the prediction that there is no bug which will reveal the inhuman nature of the algorithm. It is an assertion across the gap between the knowledge we have now of how the algorithm performs and all future knowledge.

It can be seen that even across the design barrier, knowledge of exactly how the program acts can grow over time, so "Future Testing" will soon "fill in" many gaps about how the program actually reacts to situations. Thus the Turing Test should be totally discredited by anybody who finds a problem with "God of the Gaps" arguments.

jjcassidy
October 7, 2008 at 3:00 PM PST
In fact, the only reason this isn’t yesterday’s news is that so many computer nerds are inarticulate, and wouldn’t have any idea what to program the computer to say.
Ouch.

reluctantfundie
October 6, 2008 at 11:25 PM PST
