According to a new book, Superintelligence: Paths, Dangers, Strategies, by Swedish philosopher Nick Bostrom:
We are still far from real AI despite last month’s widely publicised “Turing test” stunt, in which a computer mimicked a 13-year-old boy with some success in a brief text conversation. About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of “superintelligence” faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.
The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass those of the human brain, Bostrom argues, because machines could improve their own capabilities far faster than human computer scientists could improve them.
“Machines have a number of fundamental advantages, which will give them overwhelming superiority,” he writes. “Biological humans, even if enhanced, will be outclassed.” He outlines various ways for AI to escape the physical bonds of the hardware in which it developed. More.
But, in the real world, how does one get a machine to want anything?
Follow UD News at Twitter!