Uncommon Descent Serving The Intelligent Design Community

Artificial intelligence: Getting computers to pretend to converse is an “extremely hard computational problem”


A computer engineer of some importance has written to say that he thinks me a bit off the mark here, where I deny that computers actually think. He writes,

modern computers are often programmed to be adaptive, in that rules are given for learning (e.g., generalizing or updating “beliefs”) based on experience. So in fact computers can be (and are) programmed to “learn” things that their programmers don’t know.
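For what it is worth, here is a minimal sketch of the sort of adaptivity he means (my own illustration, not his code, and the names are mine): a few lines of Python whose running estimate comes entirely from the data it is fed, not from anything the programmer typed in.

# Minimal sketch (my illustration): the program estimates the bias of a coin
# from observed flips. The programmer never writes the answer down; the
# estimate is "learned" from the incoming data alone.

def update_estimate(heads, flips, new_flip):
    """Fold one observed flip (1 = heads, 0 = tails) into the running counts."""
    heads += new_flip
    flips += 1
    return heads, flips, heads / flips

heads, flips = 0, 0
for observed in [1, 0, 1, 1, 0, 1]:      # data arriving "from experience"
    heads, flips, estimate = update_estimate(heads, flips, observed)

print(f"estimated bias: {estimate:.2f}")  # 0.67 for this particular sequence

Whether updating a number in that way deserves to be called “learning” is, of course, exactly the point in dispute.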

He argues that behaviorally, this is thinking, but that it does not include consciousness (which means that he disagrees with Kevin Warwick).

He does, however, say,

And the important part that you and I apparently agree on is that there is no compelling reason to believe that a computer program is doing something altogether like what a human being does.

Now, there, he must certainly be right. In the original post, I had discussed a problem I was having with a computer-based book order system that did not allow me to buy ten copies of a book (because no one had thought to program in the possibility of multiple orders). The stupidest human clerk would have understood immediately.
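Purely to illustrate the kind of oversight involved (a hypothetical reconstruction in Python, not the store’s actual code), the order routine simply never asks for a quantity:

# Hypothetical sketch of the oversight: quantity is hard-coded, so there is
# no way to order ten copies.
def place_order(title):
    return {"title": title, "quantity": 1}

# What the stupidest human clerk would have understood immediately:
def place_order_fixed(title, quantity=1):
    return {"title": title, "quantity": quantity}

print(place_order_fixed("The Spiritual Brain", quantity=10))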

He is really annoyed with my saying that “Most people will believe that the computer is human if it just sounds wittier or sexier than they do. In fact, the only reason this isn’t yesterday’s news is that so many computer nerds are inarticulate, and wouldn’t have any idea what to program the computer to say.”

He calls it “gratuitous” and “wrong!”.

Hey, I only said that to see who I could get a rise out of. Turned out to be him, imagine!

Anyway, he advises me that making computers respond convincingly to unscripted dialogue (natural language processing) is “an extremely hard computational problem.”

But I am hardly surprised. Dialogue is fiendishly difficult to write well. It is one reason why I went into non-fiction rather than fiction.

Also just up at The Mindful Hack …

Artificial intelligence: Computers do not think, they “shuffle bits”

Artificial intelligence: Getting computers to pretend to converse is an “extremely hard computational problem”

Spirituality: Churches nobody goes to any more vs. the “ancient and ever new” ones

Brain: The turtle really did beat the rabbit, you know …

Spirituality and the arts: High time someone said this

The Mindful Hack is my blog that supports The Spiritual Brain (Beauregard and O’Leary, 2007).

Comments
"modern computers are often programmed to be adaptive, in that rules are given for learning (e.g, generalizing or updating “beliefs”) based on experience. So in fact computers can be (and are) programmed to “learn” things that their programmers don’t know" I don't agree. They are programmed to react algoritmically to new inputs. There is no learning there. In the same line, I could say that a simple digital calculator operating sums is "learning" something new each time it does a new sum, because the programmer certainly did not know that specific input. Even if the input modifies something in the structure of the software, that happens according to the procedures established by the programmer. There is no learning, no understanding, no intelligence other than that inputted by the programmer, no consciousness. Experience of that kind is not experience, but only data input. Experience implies someone who experiences, in other words consciousness. "He argues that behaviorally, this is thinking, but that it does not include consciousness." He argues very badly. I am not aware of such a "behavioural" definition of thinking. In my world, thinking means thinking, that is a process of consciousness. Even Turing's test does not imply that any computer is thinking, but only that its behaviour could be "undistinguishable" from the behaviour of a thinking agent. The nature of the thinking experience in humas is based on two very different data: 1) The direct experience of our thinking in our own consciousness. That is an intuitive, direct fact, very strong and unique, indeed the basis for all other thoughts. 2) The inference of the same kind of reality in other people, who appear similar to us, and behave like us. That is an indirect inference, but a very strong one, if ever one existed. Please note that the strength of inference 2) critically depends on the intuitive certainty of experience 1). If, and I mean if, a computer ever were successful in a Turing test, and I mean not for 5 minutes and with trivial requests, but in such a way that no behavioural test could ever distinguish it from a living being (see Dick's "Do androids dream of electric sheep" for reference), then one could suggest an inference that it is conscious and that it is thinking. That inference would anyway be a very weak one, because the computer would anyway be very different from a human being, and human beings are the only category for which we have direct intuitive experience of the thinking event. So, a computer which is only "adapting" to an input according to the procedures established by its programmer, is definitely not thinking at all.gpuccio
October 11, 2008 at 08:56 AM PDT
OT, but I thought you might want to see this about PVS (Persistent Vegetative State) and new research showing that people with PVS may not be so dead after all. I would be interested in seeing what you and Mario have to say about this in relation to non-materialist neuroscience.

Jon Jackson
October 10, 2008 at 05:35 PM PDT
I've often thought that it's possible for computers to be intelligent of a kind, but only intelligent in relation to human intelligence as human intelligence is in relation to God's intelligence. God created us, and we have much lesser intelligence. God created us in his own image; therefore we have the ability to create things with much lesser intelligence than us.

tragicmishap
October 9, 2008 at 07:21 PM PDT
