
From Matthew Hutson at Science:
Ali Rahimi, a researcher in artificial intelligence (AI) at Google in San Francisco, California, has charged that machine learning algorithms, in which computers learn through trial and error, have become a form of “alchemy.” Researchers, he says, do not know why some algorithms work and others don’t, nor do they have rigorous criteria for choosing one AI architecture over another. Now, in a paper presented on 30 April at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and his collaborators document examples of what they see as the alchemy problem and offer prescriptions for bolstering AI’s rigor. … (paywall)
Science, 04 May 2018: Vol. 360, Issue 6388, p. 478. DOI: 10.1126/science.360.6388.478
AI, meaning machines that think like people, is somewhat like the multiverse: it must be true, truer than any evidence available in this frame of reality. It is alchemy for sure, but alchemy from the days when people could believe in alchemy. Okay, so we can’t believe in alchemy today, but we can believe in AI taking over the world, right? Gotta believe in something.
See also: China: Using AI for social control
The AI revolution has not happened yet. Probably never will, actually.
Experts slam EU proposal to grant personhood to intelligent machines
Aw, Facebook, quit blaming AI for your goofs and shady practices
One thing to be said for granting personhood to intelligent machines is that we could then blame them for things that go wrong.
and
Why the brain still beats the computer, even from a naturalist perspective