Stephen Hawking was hardly the only one:
Along with Sir Martin Rees, Elon Musk, and Henry Kissinger, among many lesser-known figures, the late Stephen Hawking worried about an AI apocalypse (the “worst event in the history of our civilization”). Otherwise very bright people don’t seem to grasp the underlying situation. Let’s take just two examples:
1. What would we need to make machines “intelligent”? We don’t even understand animal intelligence clearly. Are seals really smarter than dogs? Plants can communicate to adjust to their circumstances without a mind or brain. Where does that place plants with respect to intelligence? And what about the importance of the brain? Humans with seriously compromised brains can have consciousness.

News, “Stephen Hawking and the AI Apocalypse” at Mind Matters
On the other hand, it keeps them in the media.
Follow UD News at Twitter!
See also: Noted astronomer envisions cyborg on Mars
AI machines taking over the world? It’s a cool apocalypse but does that make it more likely?
Software pioneer says general superhuman artificial intelligence is very unlikely. The concept, he argues, shows a lack of understanding of the nature of intelligence.