From electrical engineering prof Michael I. Jordan, writing at Medium:
Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
More.

I had the curious experience the other day, while waiting to get my hair cut, of listening to a vacuous radio talk show host explain that by the late 2020s, artificial intelligence would be “socially intelligent.” I vaguely wondered, “What does being socially intelligent mean, if one is not a human being?” But then the hairdresser signalled me to climb into her chair, so…
I can be reasonably sure that the question I was asking myself had not occurred to the bubblacious host or to most of the people who say those things.
See also: Experts slam EU proposal to grant personhood to intelligent machines
Aw, Facebook, quit blaming AI for your goofs and shady practices
One thing to be said for granting personhood to intelligent machines is that we could then blame them for things that go wrong.
and
Why the brain still beats the computer, even from a naturalist perspective