
Researcher who hopes machines will think like humans draws flak for critiquing the field

Gary Marcus

From a new paper by AI researcher Gary Marcus at arXiv:

Although deep learning has historical roots going back decades, neither the term “deep learning” nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton’s now classic (2012) deep network model of Imagenet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.

From the Conclusion:

As a measure of progress, it is worth considering a somewhat pessimistic piece I wrote for The New Yorker five years ago, conjecturing that “deep learning is only part of the larger challenge of building intelligent machines” because “such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”

As we have seen, many of these concerns remain valid, despite major advances in specific domains like speech recognition, machine translation, and board games, and despite equally impressive advances in infrastructure and the amount of data and compute available.

Intriguingly, in the last year, a growing array of other scholars, coming from an impressive range of perspectives, have begun to emphasize similar limits… More.

Paper.

Apparently, his concerns triggered a backlash,

A day later, former AAAI Co-chair and NIPS Chair Thomas G. Dietterich countered Gary Marcus’ article with no less than 10 tweets, calling it a “disappointing article… DL learns representations as well as mappings. Deep machine translation reads the source sentence, represents it in memory, then generates the output sentence. It works better than anything GOFAI ever produced.” … Long-time deep learning advocate and Facebook Director of AI Research Yann LeCun backed Dietterich’s counter-arguments: “Tom is exactly right.” In a response to MIT Tech Review Editor Jason Pontin and Gary Marcus, LeCun testily suggested that the latter might have mixed up “deep learning” and “unsupervised learning”, and said Marcus’s valuable recommendations totalled “exactly zero.” More.
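
For readers unfamiliar with the encoder-decoder design Dietterich is describing, here is a minimal, untrained PyTorch sketch of the pattern: one network “reads” the source sentence into a memory vector, and a second generates the output sentence from that representation. This is our illustration, not code from Dietterich or Marcus; the vocabulary size, dimensions, and start-symbol id are toy assumptions.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128  # toy sizes, assumed for illustration

embed = nn.Embedding(VOCAB, EMB)
encoder = nn.GRU(EMB, HID, batch_first=True)
decoder = nn.GRU(EMB, HID, batch_first=True)
project = nn.Linear(HID, VOCAB)  # hidden state -> target-vocabulary logits

# "Read" a (batch of 1) source sentence of 5 token ids into memory.
src = torch.randint(0, VOCAB, (1, 5))
_, memory = encoder(embed(src))  # memory: the sentence representation

# "Generate": decode one token at a time, starting from the memory state.
token = torch.zeros(1, 1, dtype=torch.long)  # assume id 0 is a start symbol
hidden = memory
out_tokens = []
for _ in range(5):
    out, hidden = decoder(embed(token), hidden)
    token = project(out).argmax(dim=-1)  # greedy choice of next token
    out_tokens.append(token.item())
print(out_tokens)  # untrained, so gibberish; the structure is the point
```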

Marcus had struck a nerve. He countered,

Within a few days, thousands of people had weighed in over Twitter, some enthusiastic (“e.g., the best discussion of #DeepLearning and #AI I’ve read in many years”), some not (“Thoughtful… But mostly wrong nevertheless”).

Because I think clarity around these issues is so important, I’ve compiled a list of fourteen commonly-asked queries. Where does unsupervised learning fit in? Why didn’t I say more nice things about deep learning? What gives me the right to talk about this stuff in the first place? What’s up with asking a neural network to generalize from even numbers to odd numbers? (Hint: that’s the most important one). And lots more. I haven’t addressed literally every question I have seen, but I have tried to be representative. More.
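
The even-to-odd question refers to a simple experiment Marcus has described: train a network to copy binary-coded numbers using only even examples, and it typically fails on odd ones, because the lowest-order bit was always 0 during training. Below is a minimal sketch of that kind of test; the architecture, sizes, and training settings are our toy assumptions, not Marcus’s exact setup.

```python
import torch
import torch.nn as nn

BITS = 8

def to_bits(n: int) -> torch.Tensor:
    # Little-endian binary encoding of n in BITS bits.
    return torch.tensor([(n >> i) & 1 for i in range(BITS)], dtype=torch.float32)

# Train on even numbers only: the lowest-order bit is always 0.
train_x = torch.stack([to_bits(n) for n in range(0, 256, 2)])
# Test on odd numbers the network has never seen.
test_x = torch.stack([to_bits(n) for n in range(1, 256, 2)])

model = nn.Sequential(nn.Linear(BITS, 32), nn.ReLU(), nn.Linear(32, BITS))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Learn the identity function: output should equal input.
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(train_x), train_x)
    loss.backward()
    opt.step()

with torch.no_grad():
    train_ok = (model(train_x).round() == train_x).all(dim=1).float().mean()
    test_ok = (model(test_x).round() == test_x).all(dim=1).float().mean()
print(f"even (train) exact-match rate: {train_ok:.2f}")  # typically near 1.00
print(f"odd (test) exact-match rate:  {test_ok:.2f}")   # typically near 0.00
```

The failure is not a bug but the point: the network interpolates within the training distribution rather than inferring the abstract rule “output equals input.”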

Marcus should be thankful he still has a job. Not everyone who questions or offers thoughtful critiques is so lucky these days. Much depends on whether the claim being questioned has become so much a part of the belief system of Big Science that questions and conflicting data begin to sound like heresy.

Hat tip: Brendan Dixon

See also: Big data raises bigger questions re artificial intelligence (Gary Marcus)

Another academic freedom meltdown in science, this time re GMOs: Most harms to science today are coming from the science bureaucracy itself. Bureaucrats naturally favor policing on behalf of a consensus; that’s what a bureaucrat is. A pioneer or productive scientist is a different type of person. Control of the former over the latter is not a healthy sign for the future.

and

Bill Dembski on artificial intelligence’s homunculus problem Dembski: Now the point to realize is that this huge library of algorithms is not itself intelligent, to say nothing of being a genius. At best, such a library would pay homage to the programmers who wrote the algorithms and the people whose intelligent behaviors served to train them (a la machine learning). But a kludge of all these algorithms would not be intelligent. What would be required for true intelligence is a master algorithm that coordinates all the algorithms in this library. Or we might say, what’s needed is a homunculus.

Comments
News (June 16, 2018, 3:56 AM PDT):
Seversky at 2, it's probably just another community, stuck in a mess of its own making, that doesn't thank anyone for outlining the problems. All we ever say is, keep talking, keep talking. That's one part of our attachment to intellectual freedom. There are so many people that we desperately hope will just go on talking.
Seversky (June 15, 2018, 6:46 PM PDT):
Is the "deep learning community" the academic branch of the "deep state"?
FourFaces (June 15, 2018, 1:57 PM PDT):
The only reasons that Marcus's career has not been destroyed by the deep learning community are that 1) he is well off, having sold his AI company to Uber, and 2) he is a materialist/Darwinist and a flaming anti-Trump liberal. Bear in mind that Marcus is not offering any solution to the AI problem that is worth mentioning. He's just as clueless about intelligence as the people he is criticising. This is a guy who actually believes that the brain's design is haphazard and kludgy because, you guessed it, evolution.
