
Infants beat AI at commonsense psychology


At ScienceDaily:

Infants outperform artificial intelligence in detecting what motivates other people’s actions, finds a new study by a team of psychology and data science researchers. Its results, which highlight fundamental differences between cognition and computation, point to shortcomings in today’s technologies and where improvements are needed for AI to more fully replicate human behavior. – New York University (February 21, 2023)

The human mind is hard, perhaps impossible, to replicate even in its infancy.

The paper is open access.

11 Replies to “Infants beat AI at commonsense psychology”

  1. 1
    bornagain77 says:

    🙂 Thanks News,,, besides putting a smile on my face first thing this morning, this one is definitely a keeper.

    And News, while I am at it, thank you for all the work you do in finding these interesting articles. You really are very good at bringing us “News”.

  2. 2
    EDTA says:

    I second what BornAgain77 says!

  3. 3
    PyrrhoManiac1 says:

    Neural nets are not even an attempt to replicate the human mind. The fact that infants out-perform neural nets is interesting, but not a surprise.

    What is interesting about this study is that it’s the first attempt to directly compare neural nets against infants, and that’s important for establishing a baseline for the development of explainable AI. The goal isn’t to replicate human intelligence (whatever that could mean) but to build an AI that we are capable of understanding.

  4. 4
    Seversky says:

    “Truly wonderful, the mind of a child is.”

    – Yoda, Attack of the Clones.

  5. 5
    bornagain77 says:

    Of semi related note:

A Type of Reasoning AI Can’t Replace
Abductive Reasoning Requires Creativity, in Addition to Computation
    News October 10, 2019
,,, Abductive reasoning, originally developed by the American philosopher Charles Sanders Peirce (1839–1914), is sometimes called an “inference to the best explanation,”,,,
    ,,, As you can see, abductive reasoning involves a certain amount of creativity because the suggested hypothesis must be developed as an idea, not just added up from existing pieces of information. And creativity isn’t something computers really do.

    The Human Skills AI Can’t Replace – William J. Littlefield II – 25 Sep 2019
    Excerpt: the history of AI can be broadly periodized based on which form of logical inference computer programs utilize: inductive or deductive.,,,
    For the foreseeable future, man will (via abductive reasoning) innovate, machine will toil, and The Terminator will remain science fiction.
    – William J. Littlefield II is a philosopher and professional software engineer.

  6. 6
    bornagain77 says:

    Also of semi-related note:

    Robert Marks: Some Things Computers Will Never Do: Nonalgorithmic Creativity and Unknowability – video

    Artificial Intelligence & Human Uniqueness by Robert J. Marks – 2019

    The Turing Test Is Dead. Long Live the Lovelace Test.
    Robert J. Marks II – July 3, 2014
Excerpt: Here are a few other statements expressing doubt about the computer’s ability to create Strong AI.
    “…no operation performed by a computer can create new information.”
    Douglas G. Robertson
    “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.”
    Leon Brillouin
    “Either mathematics is too big for the human mind or the human mind is more than a machine.”
– Kurt Gödel
and, of course, my favorite:
    “Computers are no more able to create information than iPods are capable of creating music.”
    – Robert J. Marks II
The limitations invoked by the law of conservation of information in computer programming have been a fundamental topic of investigation by Winston Ewert, William Dembski and me at the Evolutionary Informatics Lab. We have successfully and repeatedly debunked claims that computer programs simulating evolution are capable of generating information any greater than that intended by the programmer.

In short (as Dawkins’ Weasel program itself infamously demonstrated, and as the law of conservation of information proves), there is no computer algorithm that will ever generate a single sentence of Shakespeare, much less the complete works of Shakespeare, unless that algorithm already has that sentence (or the complete works) of Shakespeare programmed within itself somewhere.
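For readers unfamiliar with the Weasel program referenced above, here is a minimal Python sketch of the idea (not Dawkins’ original code; the population size and mutation rate are illustrative values). Note that the target sentence is hard-coded up front, which is exactly the point being made: the program converges on a phrase it was given, it does not originate one.

```python
import random

# The target phrase is programmed into the algorithm from the start.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(copies=100, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Start from a random string of the same length as the target.
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Breed mutated copies; keep the one closest to the target.
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(copies)
        ]
        parent = max(offspring, key=score)
        generations += 1
    return generations
```

Running `weasel()` reaches the target in a modest number of generations precisely because `score` measures closeness to a pre-specified answer; without `TARGET` the selection step has nothing to select toward.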

    Also of note

    Yes, “We’ve Been Wrong About Robots Before,” and We Still Are – Erik J. Larson – November 12, 2014
    Excerpt: Take this statement, originally from computer scientist Hector Levesque (it also appears in Nicholas Carr’s 2014 book about the dangers of automation, The Glass Cage):
    “The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?”
Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not “data” about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call “world knowledge” or “common sense knowledge.”,,
    Having real knowledge about the world and bringing it to bear on our everyday cognitive problems is the hallmark of human intelligence, but it’s a mystery to AI scientists, and has been for decades.,,,
    Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.

    What Is a Mind? More Hype from Big Data – Erik J. Larson – May 6, 2014
    Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, “Understanding Natural Language,” about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland’s article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required “holistic interpretation.” That is, the ambiguities weren’t resolvable except by taking a broader context into account. The words by themselves weren’t enough.
    Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal “test” to see if his claims were still valid today.,,,
    ,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland’s point about machine translation made afresh, in 2014.
    Erik J. Larson – Founder and CEO of a software company in Austin, Texas

    Wacky Jabber – Douglas Hofstadter – Sept 2022
    Conclusion: “Garbage”, I’m afraid, is almost too kind a word for this kind of output text. Each of the three machine “translations” is an unprecedented example of sheer, total meaninglessness. And yet they were all produced by sober, no-nonsense, deadpan, tone-deaf, and stone-dead programs that have nonetheless been trumpeted in many prestigious and influential publications—such as the New York Times, the Economist, and others—as being astonishingly powerful and supremely accurate translators. It’s as if each program were telling you: “Okay; here’s exactly what the paragraph I’ve just ‘read’ really means. Plus, I produced it in but a split second!” What a farce!
    Also, the vast mismatch between the three systems’ output is simply stunning. I guess I laugh so hard because that disparity reveals, like a flash of lightning on a pitch-dark night, the amazing lack of understanding on the part of these highly vaunted and often virtuosic programs. It so vividly brings out their true, zombie-ish nature. And so, these three pieces of crazy, useless, thoughtlessly produced, utterly meaningless garbage masquerading as sense give me immeasurable pleasure.
    If you share my feelings of joy and mirth, I’ll be delighted.
    If not, well, de gustibus…

  7. 7
    bornagain77 says:

    Hmm, interesting, Evolution News just so happened to put this up yesterday,

    Robert J. Marks Pours Cold Water on ChatGPT Hype – Evolution News – February 25, 2023
Excerpt: Dr. Robert J. Marks, director of Discovery Institute’s Walter Bradley Center, appeared on a segment of The Agenda recently to examine the hype surrounding artificial intelligence and ChatGPT. He was joined by Melanie Mitchell of the Santa Fe Institute and MIT’s Max Tegmark. Hosted by Steve Paikin, the three discussed the benefits and drawbacks of artificial intelligence and what it means to be human in a technological age, as well as the perennial question of consciousness. You can watch the entire conversation on YouTube:
    – Is ChatGPT Conscious? | The Agenda
    Dr. Marks had the opportunity to talk about some of the key themes he discusses in his book Non-Computable You: What You Do That Artificial Intelligence Never Will, contending that AI, while it has benefits, does not, and never will, have the creativity, empathy, and personal consciousness unique to human beings.

  8. 8
    Sandy says:

    Infants outperform artificial intelligence in detecting what motivates other people’s actions,

AI is like sugar in its prime: garbage presented as value.

  9. 9
    relatd says:

    There is no such thing as Artificial Intelligence. It’s a fake term. What does exist are sophisticated computer programs.

    ChatGPT is a toy for the bored. A toy. I work for a publishing company. Authors will have to guarantee: “No, ChatGPT did not write this story for me.” Little kids and bored adults will play with this for a while and then the novelty will wear off.

    The same with fake art. “No, I did not use Midjourney to create this art.”

This has nothing to do with “being human.” These computer programs are not human. They were designed to cut up pieces of art and writing and reassemble them. That’s it.

    As someone who works with genius level creative people, I know the difference between good storytelling and bad storytelling. And what has OpenAI been doing lately? Hiring lawyers. Why? They are using art and writing without permission.

  10. 10
    PyrrhoManiac1 says:

    That might be the first post of yours I’ve ever agreed with here.

  11. 11
    relatd says:

    PM1 at 10,

    Strange things do happen. 🙂
