From Demis Hassabis’s comments to Jamie Condliffe at Technology Review:
Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.
In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.
First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. Second, lessons learned from building and testing cutting-edge AIs could help us better define what intelligence really is. More.
The problems will likely prove stubborn, perhaps intractable, for several reasons. First, it may not be possible to endow AI with the capacity to actually want anything, a capacity characteristic of life forms. In that case, an AI must always be supplied with motivation by the humans using it.
Second, it is not clear that the study of human language is even a science at present. We may be further from good answers to the AI experts’ questions than we think.
Intelligence in general is likely to present the same sort of problem. Even animal intelligence raises difficulties (intelligence without a brain, for example), and it falls far short of human intelligence.
But great sci-fi will likely result from the efforts.
See also: Selensky, Shallit, & Koza vs artificial life simulations
What to fear from intelligent robots. But how can a robot want anything?
From Aeon: Is the study of language a science?
Animal minds: In search of the minimal self
Does intelligence depend on a specific type of brain?