This post went viral yesterday at Mind Matters:
The 2014 science fiction film Transcendence featured a scientist who uploaded his consciousness into an AI program. Many people talk as though things like that are just around the corner. But industry pros say it isn’t really possible. Why not?
François Chollet, author of Keras, a deep learning framework for the Python programming language, offers a list of reasons, but starts by pointing to an underlying misconception: that a super-AI could be developed that would go on creating more super-AIs until something vastly more intelligent than a human being arises. He points out that such a process has not actually happened in the universe of which we have knowledge:
An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred. François Chollet, “The Impossibility of Intelligence Explosion” at Medium
If we cannot design an intelligence, why do we think we can design a machine that can design an intelligence?
More at “Software pioneer says general superhuman artificial intelligence is very unlikely” at Mind Matters
Is the idea surprising or what?
Follow UD News at Twitter!
See also: Should robots run for office? A tech analyst sees a threat to democracy if they don’t
Too late to prevent rule by The Algorithm? Dilbert’s creator, Scott Adams, tells Ben Shapiro why he thinks politicians soon won’t matter.
How AI could run the world Its killer apps, in physicist Max Tegmark’s tale, include a tsunami of “message” films
Human intelligence as a halting oracle (Eric Holloway)
Meaningful information vs. artificial intelligence (Eric Holloway)