
Google’s AI guru says AI must build on human intelligence

[Image: controls for AI. Credit: Pbroks13]

From Jamie Condliffe at Technology Review, reporting on comments by Demis Hassabis:

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.

In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.

First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. Second, lessons learned from building and testing cutting-edge AIs could help us better define what intelligence really is. More.

The problems will likely prove stubborn or intractable anyway, for several reasons. First, it may not be possible to endow AI with the capacity to actually want anything, a characteristic of life forms. In that case, it must always be supplied with motivation by the humans using the machines.

Second, it is not clear that the study of human language is even a science at present. We may be further off than we think from good answers to the AI experts’ questions.

Intelligence in general seems likely to present that sort of problem. Even animal intelligence, which is far less than human intelligence, raises hard questions of its own (intelligence without a brain, for example).

But great sci-fi will likely result from the efforts.

See also: Selensky, Shallit, & Koza vs artificial life simulations

What to fear from intelligent robots. But how can a robot want anything?

From Aeon: Is the study of language a science?

Animal minds: In search of the minimal self

and

Does intelligence depend on a specific type of brain?

Comments

Dionisio
July 25, 2017 at 07:39 PM PDT

“Future studies need to confirm these findings and should evaluate long-term safety of cortical surface stimulation in humans [...]”

Shivayogi V. Hiremath, Elizabeth C. Tyler-Kabara, Jesse J. Wheeler, Daniel W. Moran, Robert A. Gaunt, Jennifer L. Collinger, Stephen T. Foldes, Douglas J. Weber, Weidong Chen, Michael L. Boninger, and Wei Wang, “Human perception of electrical stimulation on the surface of somatosensory cortex,” PLOS ONE. https://doi.org/10.1371/journal.pone.0176020

Dionisio
July 25, 2017 at 05:28 PM PDT

.

ppolish
July 25, 2017 at 08:39 AM PDT

@6: you = ghost in the machine. Or maybe you = housecat. http://www.businessinsider.com/elon-musk-on-neural-lace-2016-6

vmahuna
July 24, 2017 at 11:05 PM PDT

There is also the problem of local optimization. In the late 19th century, the original Social Scientists told the world that all of our problems could be solved by eliminating the personal biases of greedy businessmen and corrupt politicians and turning over the day-to-day running of everything to professional “scientific” Managers. Initially this seemed to hold true, but as more and more decision making was turned over to lifelong (if not truly “professional”) bureaucrats, non-bureaucrats began to notice that what the bureaucrats were actually optimizing was the smooth running of their own offices, without regard to how this affected the lives and fortunes of the people for whom their bureaucracy officially existed.

Even Communists stumbled on this problem, and as early as the Bolshevik Revolution, Lenin and company argued in private about how the emergence of the New Class (“managers”) affected Marxist theories based ENTIRELY on the concept of struggles between TWO naturally opposing classes (labor and capital). Communist theory could not resolve the problem, and Communist governments were universally taken over by professional bureaucrats.

So, if you set an unthinking machine to work on “solving” a problem, there is a very good chance that the machine will also move toward optimizing things that allow the machine to claim (an oddly human concept) completion of the task within such narrow parameters as to render the solution worthless. Alternatively, the classic result of letting a computer solve problems is that the computer seizes control of more and more of the resources (e.g., CPU cycles) in an attempt to solve the lack of response from the printer queue, etc. The computer “believes” that the printer queue is the reason for the existence of the computer system, and demands by insignificant carbon-based Users to do “work” other than printed output are part of the PROBLEM.

This is EXACTLY the way human bureaucrats work. Read ANY of the stories about how the VA “services” patients needing actual medical treatment.
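
The “local optimization” failure described above can be made concrete with a toy sketch. This is purely illustrative (the scoring function, step size, and starting points are invented for the example): a greedy hill-climber stops at the first point where no small step improves its score, declaring the task complete at a nearby local peak even when a far higher peak exists.

    # Toy illustration of getting stuck in a local optimum.
    # The landscape has a small peak near x = 0 and a tall peak near x = 4.

    def score(x):
        return max(1.0 - x * x, 3.0 - (x - 4.0) ** 2)

    def hill_climb(x, step=0.1, max_iters=1000):
        for _ in range(max_iters):
            best = max((x - step, x, x + step), key=score)
            if best == x:  # no neighbor improves: the climber "declares victory"
                return x
            x = best
        return x

    print(hill_climb(0.5))  # stops near 0.0 (score 1.0): the local peak
    print(hill_climb(3.0))  # climbs to 4.0 (score 3.0): the global peak

The machine has no notion that the second peak exists; it optimizes exactly what it was told to measure, which is the commenter’s point about bureaucracies and printer queues.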

EricMH
July 24, 2017 at 08:04 PM PDT

Intelligence may be noncomputable. Funny that possibility is never considered.

Dionisio
July 24, 2017 at 05:25 PM PDT

@5: “you search your own memory and google’s memory too.” ‘you’? What’s that?

ppolish
July 24, 2017 at 04:32 PM PDT

Implanting machine learning into humans is the way to go. Imagine a Google chip implant - you search your own memory and google’s memory too. Much faster than using your smartphone. Or a computation chip implant. Las Vegas wouldn’t like that at all. They’ll learn to defend against it - but not before some nerd cleans up.

doubter
July 24, 2017 at 11:57 AM PDT

The essence of the problem is intractable: how to build a machine (basically a glorified abacus using sophisticated algorithms implemented by circuits switching at unimaginable speeds) with conscious awareness, one that has inner experience and knows what it is like to perceive, imagine, and will. This is Chalmers’ famous “hard problem”. It’s interesting that even that infamously closed-minded skeptic of the paranormal, Martin Gardner, considered this impossible.

Dionisio
July 24, 2017 at 10:20 AM PDT

Mung: “First they will need to define intelligence.”

Maybe first they have to determine how they are going to define it?

Dionisio
July 24, 2017 at 10:17 AM PDT

“In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.”

“only by better understanding human intelligence”? Good luck! :) But don’t hold your breath... it may take a little while... :) BTW, is this somehow related to Chalmers’ “hard problem” thing?

Mung
July 24, 2017 at 09:54 AM PDT

First they will need to define intelligence.
