The GPT-3 program can resolve grammatical ambiguities on which other programs stumble, says Robert J. Marks. It is a simulation because the AI can perform the task but does not “understand” what the concepts mean:
The classic test for AI common sense is resolution of Winograd schemas. Winograd schemas contain vague, ambiguous pronouns. Common sense resolves the ambiguity. An example is:
“John is afraid to get in a fight with Bob because he is so tall, muscular and ill-tempered.”
Does the vague pronoun “he” refer to John or Bob? Common sense says Bob is the tough guy and John is the scared dude. Another Winograd schema example is
“John did not ask Bob to join him in prayer because he was an atheist.”
Common sense says that Bob was the atheist. Solving Winograd schemas requires common sense.
Can AI parse these and other Winograd schemas to identify the person behind the vague pronoun? Until recently, Winograd schema AI contests resulted in accuracy not much better than a coin flip. But AI innovators, led by OpenAI’s amazing GPT-3 program, are scoring upwards of 90% accuracy. In testing, care was taken to avoid Winograd schemas whose resolution could be googled. These results are remarkable.
Robert J. Marks, “New: AI learns to simulate common sense” at Mind Matters News (December 30, 2021)
That doesn’t mean that AI has common sense. It means that clever programming can crack the problem of tracing a pronoun’s correct reference back to a previous subject.
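To make the task concrete, here is a minimal sketch (not Marks’s or OpenAI’s code) of how a Winograd schema benchmark can be represented and scored. The two instances are the examples quoted above; the `coin_flip_resolver` baseline and the `accuracy` helper are illustrative names, not part of any real contest harness.

```python
# Winograd schema instances from the article: a sentence, an ambiguous
# pronoun, two candidate referents, and the answer common sense gives.
WINOGRAD_SCHEMAS = [
    {
        "sentence": ("John is afraid to get in a fight with Bob "
                     "because he is so tall, muscular and ill-tempered."),
        "pronoun": "he",
        "candidates": ("John", "Bob"),
        "answer": "Bob",
    },
    {
        "sentence": ("John did not ask Bob to join him in prayer "
                     "because he was an atheist."),
        "pronoun": "he",
        "candidates": ("John", "Bob"),
        "answer": "Bob",
    },
]

def coin_flip_resolver(schema):
    """Chance-level baseline: always pick the first candidate."""
    return schema["candidates"][0]

def accuracy(resolver, schemas):
    """Fraction of schemas whose pronoun the resolver assigns correctly."""
    correct = sum(resolver(s) == s["answer"] for s in schemas)
    return correct / len(schemas)

# The baseline names "John" both times; the answer is "Bob" both times.
print(accuracy(coin_flip_resolver, WINOGRAD_SCHEMAS))  # 0.0 on this tiny set
```

A language model like GPT-3 plays the role of `resolver` here: it is handed the sentence and asked which candidate the pronoun refers to, and the contest score is simply this accuracy computed over many such schemas.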
Takehome: Unlike understanding, creativity and sentience, common sense could be computable. There is no indication that common sense is non-algorithmic.
You may also wish to read: What did the computer learn in the Chinese Room? Nothing. Computers don’t “understand” things and they can’t handle ambiguity, says Robert J. Marks. Larry L. Linenschmidt interviews Robert J. Marks on the difference between performing a task and understanding the task, as explained in philosopher John Searle’s famous “Chinese Room” thought experiment.