
Google’s AI guru says AI must build on human intelligence

Image: controls for AI (credit: Pbroks13)

From Jamie Condliffe’s report on Demis Hassabis’s new paper, at Technology Review:

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.

In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.

First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. Second, lessons learned from building and testing cutting-edge AIs could help us better define what intelligence really is. More.

The problems will likely prove stubborn, perhaps intractable, for several reasons. First, it may not be possible to endow AI with the capacity to actually want anything, a capacity characteristic of life forms. In that case, it must always be supplied with motivation by the humans using the machines.
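A minimal sketch in Python (all names here are hypothetical, for illustration only) of what “supplied with motivation” means in practice: the machine optimizes, but the objective it optimizes was written down, in advance, by a human.

    # Illustrative only: the machine's "motivation" is a human-authored
    # objective function; the optimizer itself wants nothing.

    def human_supplied_objective(state: float) -> float:
        """The 'motivation': a reward chosen entirely by the programmer."""
        return -(state - 10.0) ** 2  # a human decided 10.0 counts as "good"

    def hill_climb(objective, state=0.0, step=0.5, iterations=50):
        """Greedy search: try small moves, keep whichever scores best."""
        for _ in range(iterations):
            state = max((state - step, state, state + step), key=objective)
        return state

    print(hill_climb(human_supplied_objective))  # settles at 10.0, the human's goal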

Second, it is not clear that the study of human language is even a science at present. We may be further off than we think from good answers to the AI experts’ questions.

Third, intelligence in general seems likely to present that sort of problem. Even animal intelligence, which is far less than human intelligence, presents puzzles (intelligence without a brain, for example).

But great sci-fi will likely result from the efforts.

See also: Selensky, Shallit, & Koza vs artificial life simulations

What to fear from intelligent robots. But how can a robot want anything?

From Aeon: Is the study of language a science?

Animal minds: In search of the minimal self

and

Does intelligence depend on a specific type of brain?

11 Replies to “Google’s AI guru says AI must build on human intelligence”

  1. Mung says:

    First they will need to define intelligence.

  2. Dionisio says:

    In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.

“only by better understanding human intelligence”?

    Good luck! 🙂

    but don’t hold your breath… it may take a little while… 🙂

    BTW, is this somehow related to Chalmers’ “hard problem” thing?

  3. Dionisio says:

    Mung:

    First they will need to define intelligence.

    Maybe first they have to determine how they are going to define it?

  4. doubter says:

The essence of the problem is intractable: how to build a machine (basically a glorified abacus running sophisticated algorithms on circuits switching at unimaginable speeds) with conscious awareness, one that has an inner experience and knows what it is like to perceive, imagine, and will. This is Chalmers’ famous “hard problem.” It is interesting that even Martin Gardner, that infamous closed-minded skeptic of the paranormal, considered this impossible.

  5. ppolish says:

Implanting machine learning into humans is the way to go. Imagine a Google chip implant – you search your own memory and Google’s memory too. Much faster than using your smartphone.

    Or a computation chip implant. Las Vegas wouldn’t like that at all. They’ll learn to defend against it – but not before some nerd cleans up.

  6. Dionisio says:

    @5:

    “you search your own memory and google’s memory too.”

    ‘you’?

    What’s that?

  7. EricMH says:

    Intelligence may be noncomputable. Funny that possibility is never considered.

  8. vmahuna says:

    There is also the problem of local optimization. In the late 19th century, the original Social Scientists told the world that all of our problems could be solved by eliminating the personal biases of greedy businessmen and corrupt politicians by turning over the day-to-day running of everything to professional “scientific” Managers.

    Initially this seemed to hold true, but as more and more decision making was turned over to lifelong (if not truly “professional”) bureaucrats, non-bureaucrats began to notice that what the bureaucrats were actually optimizing was the smooth running of their own offices without regard to how this affected the lives and fortunes of the people for whom their bureaucracy officially existed.

Even Communists stumbled on this problem, and as early as the Bolshevik Revolution, Lenin and company argued in private about how the emergence of the New Class (“managers”) affected Marxist theories based ENTIRELY on the concept of struggles between TWO naturally opposing classes (labor and capital). Communist theory could not resolve the problem, and Communist governments were universally taken over by professional bureaucrats.

    So, if you set an unthinking machine to work on “solving” a problem, there is a very good chance that the machine will also move toward optimizing things that allow the machine to claim (an oddly human concept) completion of the task within such narrow parameters as to render the solution worthless.

Alternatively, the classic result of letting a computer solve problems is that the computer seizes control of more and more of the resources (e.g., CPU cycles) in an attempt to solve the lack of response from the printer queue, etc. The computer “believes” that the printer queue is the reason for the existence of the computer system, and demands by insignificant carbon-based Users to do “work” other than printed output are part of the PROBLEM. (A toy sketch of this kind of narrow optimization appears after this comment.)

    This is EXACTLY the way human bureaucrats work. Read ANY of the stories about how the VA “services” patients needing actual medical treatment.
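    As a toy illustration of the narrow-optimization point above (the task and metric here are entirely hypothetical, not anyone’s actual system), a Python sketch: an optimizer told only to maximize one narrow score “completes” a summarization task in a way that renders the result worthless.

        # Hypothetical task: summarize a document. Narrow metric: brevity.
        # Nothing in the metric checks whether any information survives.

        def narrow_metric(summary: str) -> float:
            """Rewards shortness and nothing else."""
            return 1.0 / (1 + len(summary))

        def optimize_summary(document: str) -> str:
            # Candidate "summaries" are prefixes of the document; pick
            # whichever one scores highest on the narrow metric.
            candidates = [document[:n] for n in range(len(document) + 1)]
            return max(candidates, key=narrow_metric)

        print(repr(optimize_summary("AI must build on human intelligence.")))
        # prints '' -- a perfect score on the metric, a worthless summary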

  9. ppolish says:

    @6:

    you = ghost in the machine

    Or maybe you = housecat

    http://www.businessinsider.com.....ace-2016-6

  10. Dionisio says:

    .

  11. Dionisio says:

    Future studies need to confirm these findings and should evaluate long-term safety of cortical surface stimulation in humans […]

    Human perception of electrical stimulation on the surface of somatosensory cortex
    Shivayogi V. Hiremath, Elizabeth C. Tyler-Kabara, Jesse J. Wheeler, Daniel W. Moran, Robert A. Gaunt, Jennifer L. Collinger, Stephen T. Foldes, Douglas J. Weber, Weidong Chen, Michael L. Boninger, Wei Wang
    https://doi.org/10.1371/journal.pone.0176020
    PLOS ONE
    http://journals.plos.org/ploso.....ne.0176020
