Uncommon Descent Serving The Intelligent Design Community

At Mind Matters News: New AI learns to simulate common sense


The GPT-3 program can get through grammatical issues on which others stumble, says Robert J. Marks. It is a simulation because the AI can perform the task but does not “understand” what the concepts mean:

The classic test for AI common sense is the resolution of Winograd schemas. Winograd schemas contain vague, ambiguous pronouns; common sense resolves the ambiguity. An example is:

“John is afraid to get in a fight with Bob because he is so tall, muscular and ill-tempered.”

Does the vague pronoun “he” refer to John or Bob? Common sense says Bob is the tough guy and John is the scared dude. Another Winograd schema example is:

“John did not ask Bob to join him in prayer because he was an atheist.”

Common sense says that Bob was the atheist. Solving Winograd schemas requires common sense.

Can AI parse these and other Winograd schemas to identify the person behind the vague pronoun? Until recently, Winograd schema AI contests yielded accuracy not much better than a coin flip. But AI systems, led by OpenAI’s amazing GPT-3 program, are now scoring upwards of 90% accuracy. In testing, care was taken to avoid Winograd schemas whose resolution could be googled. These results are remarkable.

Robert J. Marks, “New: AI learns to simulate common sense” at Mind Matters News (December 30, 2021)

That doesn’t mean that AI has common sense. It means that clever programming can crack the problem of the correct reference back to a previous subject.
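The reference-resolution approach described here can be sketched in a few lines: substitute each candidate antecedent for the pronoun and keep the substitution the scoring model finds most plausible. This is a minimal illustration of the method only; the `score` function below is a hand-built toy stand-in (an assumption for demonstration), whereas a real system such as GPT-3 would return a language-model probability.

```python
def score(sentence: str) -> float:
    """Toy stand-in for a language-model plausibility score.

    A real system would return the log-probability of the sentence under a
    large language model; this stub simply rewards the readings a human
    would judge plausible, to keep the sketch self-contained.
    """
    plausible = [
        "Bob is so tall",        # the person John fears is the tough one
        "Bob was an atheist",    # the person not asked to pray is the atheist
    ]
    return 1.0 if any(p in sentence for p in plausible) else 0.0

def resolve_pronoun(template: str, pronoun: str, candidates: list) -> str:
    """Substitute each candidate for the pronoun; keep the best-scoring one."""
    return max(candidates, key=lambda c: score(template.replace(pronoun, c, 1)))

schema = ("John is afraid to get in a fight with Bob because "
          "{he} is so tall, muscular and ill-tempered.")
winner = resolve_pronoun(schema, "{he}", ["John", "Bob"])
print(winner)  # prints: Bob
```

Note that nothing in this procedure requires the scorer to “understand” the sentence; it only needs to rank substitutions, which is exactly the sense in which the article calls GPT-3’s common sense a simulation.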

Takehome: Unlike understanding, creativity, and sentience, common sense could be computable. There is no indication that common sense is non-algorithmic.

You may also wish to read: What did the computer learn in the Chinese Room? Nothing. Computers don’t “understand” things and they can’t handle ambiguity, says Robert J. Marks. Larry L. Linenschmidt interviews Robert J. Marks on the difference between performing a task and understanding the task, as explained in philosopher John Searle’s famous “Chinese Room” thought experiment.

Comments
It's almost impossible to measure common sense. Judges are inevitably biased. As with judging telepathy, positive judges will be too inclusive, and negative judges will be too intolerant. polistra
Can the AI explain how it "knows" the correct antecedent? Does its "common sense" go beyond correctly parsing pronouns? These tiny advances by AI programmers do not bode well for true artificial intelligence. Fasteddious
It didn’t even simulate it; it found a way to statistically cheat. AaronS1978