From Michael Byrne at Motherboard:
It’s not that computer scientists haven’t argued against AI hype, but an academic you’ve never heard of (all of them?) pitching the headline “AI is hard” is at a disadvantage to the famous person whose job description largely centers around making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it’s not human-like machine intelligence gone amok. Quite the opposite: the danger is instead shitty AI. Incompetent, bumbling machines.
Bundy notes that almost all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that’s nothing like general intelligence.
… As Aaron Sloman, for instance, has successfully argued, intelligence must be modeled using a multidimensional space, with many different kinds of intelligence and with AI progressing in many different directions. More.
Machine intelligence is nothing like general intelligence? But that suggests intelligence is bound up with the nature of the universe, not merely something that evolved for passing on selfish genes. We shall see.
Do these people realize the risks they are taking, unless someone at a science thinkmag can throw up a cloud of casuistries soon? Oh well, there is doubtless a budget for that.
See also: Would we give up naturalism to solve the hard problem of consciousness?
What great physicists have said about immateriality and consciousness
Follow UD News at Twitter!