From sociologist Steve Fuller, who has studied ID, writing at The Telegraph:
Stephen Hawking summed up the thinking of many of the researchers and funders behind artificial intelligence this week when he launched the new Leverhulme Centre for the Future of Intelligence at Cambridge by claiming that AI is “either the best or worst thing to happen to humanity.”
Fuller argues for a different approach, making Hawking himself his example:
Indeed, we would do better to start with Stephen Hawking himself, universally acknowledged as one of the great intellects of our times. Near the start of his illustrious career in physics forty years ago he began to suffer from motor neurone disease, which eventually rendered him quadriplegic. The word “cyborg” probably best captures Hawking’s current state of being, since his capacity for intellectual expression is made possible by machines which allow his deteriorating nerves to communicate with computerised interfaces.
Hawking – not the Terminator – is the likely face of tomorrow’s AI. Of course, we won’t all acquire Hawking’s level of intelligence, nor is it safe to say no risks will be involved. Rather, we should move away from Asimov’s “us vs. them” mentality. No matter the hardware and software needed to keep Hawking functional, we treat him as one of “us”. In the not too distant future, we may be faced with a full spectrum of people who to varying degrees are “enhanced” by AI devices. More.
Of course, in one sense, humans have been cyborgs ever since false teeth, ear trumpets, and glasses were invented.
But all of this speculation leaves out the rapid growth of euthanasia across the developed world, which will sharply reduce the perceived need for assistive devices.
See also: Steve Fuller’s Dissent over Descent
and
Child euthanasia centre soon to open in the Netherlands
Follow UD News at Twitter!