From biologist and computer scientist Arend Hintze at LiveScience:
We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production. More.
Hintze assumes that a superintelligent AI would have any desires at all. But that is far from clear. For one thing, we don’t really know why life forms seek to survive when non-living things don’t. It’s not a matter of intelligence, just of being alive.
The road may be bumpier than he thinks.
See also: Are robots a threat to democracy?
What can we hope to learn about animal minds?
Does intelligence depend on a specific type of brain?
How did HAL get endowed with a desire to stay alive despite it not being programmed in?