
Scary predictions are a thriving business, but that does not make them a road map to the future:
In Salvo 46, we looked at the artificial intelligence doomsday scenarios prophesied by public figures like Stephen Hawking (1942–2018) and Elon Musk ("more dangerous than nukes").
Media do not, as a rule, examine such claims carefully. There is a market for them, after all; why damage the brand? Debunking thus falls to comparatively obscure sources.

But what a land of opportunity awaits! The AI industry faces major, open, unsolved problems in its quest to replicate human intelligence. Some insights from ID thinkers and sympathizers can help us unpack the breathless claims.
First, what is intelligence? Intelligence enables us to know things. But what does it mean to “know” something?
We know things in the sense that our "selves" are aware of them. When we talk about knowledge, we assume a "knower," a self to which the information is apparent. Absent a self as the subject of the experience of knowing, knowledge, in the sense in which we usually use the word, does not exist. As neurosurgeon Michael Egnor says, "Your computer doesn't know a binary string [of code] from a ham sandwich. . . . Your cell phone doesn't know what you said to your girlfriend this morning."

Denyse O'Leary, "It comes naturally" at Salvo
See also: Stephen Hawking and the AI Apocalypse. Noted astronomer envisions cyborg on Mars.

AI machines taking over the world? It's a cool apocalypse, but does that make it more likely?

Software pioneer says general superhuman artificial intelligence is very unlikely. The concept, he argues, shows a lack of understanding of the nature of intelligence.

and

Machines just don't do meaning. And that, says a computer science prof, is a key reason they won't compete with humans.