Artificial Intelligence Mind

At Science: Is artificial intelligence alchemy?


From Matthew Hutson at Science:

Ali Rahimi, a researcher in artificial intelligence (AI) at Google in San Francisco, California, has charged that machine learning algorithms, in which computers learn through trial and error, have become a form of “alchemy.” Researchers, he says, do not know why some algorithms work and others don’t, nor do they have rigorous criteria for choosing one AI architecture over another. Now, in a paper presented on 30 April at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and his collaborators document examples of what they see as the alchemy problem and offer prescriptions for bolstering AI’s rigor. … (paywall)

Science 04 May 2018: Vol. 360, Issue 6388, pp. 478. DOI: 10.1126/science.360.6388.478

AI, meaning machines that think like people, is somewhat like the multiverse: it must be true, truer than any evidence available in this frame of reality. It is alchemy for sure, but alchemy from the days when people could believe in alchemy. Okay, so we can’t believe in alchemy today, but we can believe in AI taking over the world, right? Gotta believe in something.

See also: China: Using AI for social control

The AI revolution has not happened yet. Probably never will, actually.

Experts slam EU proposal to grant personhood to intelligent machines

Aw, Facebook, quit blaming AI for your goofs and shady practices. One thing to be said for granting personhood to intelligent machines is that we could then blame them for things that go wrong.

and

Why the brain still beats the computer, even from a naturalist perspective

3 Replies to “At Science: Is artificial intelligence alchemy?”

  1. polistra says:

    If the researchers don’t know why one setup works better, that’s a sign that AI is getting more like life. As long as the systems are fully understandable and calculable, they’re in a totally different department from real neurons.

  2. LocalMinimum says:

    I’m pretty sure we can fully automate middle management, and that’s good enough for me.

  3. jcfrk101 says:

    “If the researchers don’t know why one setup works better, that’s a sign that AI is getting more like life. As long as the systems are fully understandable and calculable …”

    I don’t think he is saying that the systems are incalculable or incomprehensible, but rather that people are not taking the time to determine why one algorithm is more efficient or effective than another. The problem is that they do not treat their work as something that can and should be understood, but rather as some form of voodoo that just happens. Alchemy takes what is clearly natural and attempts to make it into something mysterious and supernatural; it seems that many in the field of AI are beginning to treat AI in the same manner.
