Uncommon Descent: Serving the Intelligent Design Community

Topic: Brendan Dixon

Biologic Institute’s Brendan Dixon asks: could an AI winter be looming?

Artificial intelligence crashes are historically common. What caused previous AI winters? There was one straightforward reason: the technology did not work. Expert systems weren’t experts. Language translators failed to translate. Even Watson, after winning Jeopardy, failed to provide useful answers in the real-world context of medicine. When technology fails, winters come. Nearly all of AI’s recent gains have come from massive increases in data and computing power that let old algorithms suddenly become useful. For example, researchers first conceived neural networks (the core idea powering much machine learning and most of AI’s notable advances) in the late 1950s. Worries of an impending winter arise because we are approaching the limits of what massive data combined with hordes of computers can Read More ›

Brendan Dixon: Even the skeptical Deep Learning researcher left out one AI myth

Readers may remember Dixon from the time MIT tried building a universal moral machine. Here are some of his thoughts on one overlooked aspect of the “superintelligent AI” myth: [Google AI researcher] Francois Chollet is right to recognize that we, like all animals, come pre-wired. Young deer stand, leap, and run within hours of birth. Birds build nests without prior instruction. Squirrels bury and find nuts. We speak and juggle abstract thoughts. But basic chemistry does not create language; while speaking may require chemical bonding and signaling, language rests on something more. Vision is another “chicken and egg” problem: The best human eye in the world is worthless without a nervous system to transmit the signals and a mind to interpret Read More ›

Once upon a time, MIT tried building a universal Moral Machine…

In an effort to program self-driving cars to make decisions in a crisis, MIT’s Moral Machine offered 2.3 million people worldwide a chance to crowdsource whom to kill and whom to spare in a road mishap… The project, aimed at building righteous self-driving cars, revealed stark differences in global values. People from China and Japan were more likely to spare the old than the young. But in Western cultures, numbers matter more: The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual. Karen Hao, “Should a self-driving car kill Read More ›