One reason is “inference to the best explanation.” Computers, Erik J. Larson shows in his new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021), can’t do some things by their very nature. A big gap is with “abduction,” also known as “inference to the best explanation.”
With regard to inference, he shows that a form of reasoning known as abductive inference, or inference to the best explanation, is for now without any adequate computational representation or implementation. To be sure, computer scientists are aware of their need to corral abductive inference if they are to succeed in producing an artificial general intelligence.
True, they’ve made some stabs at it, but those stabs come from forming a hybrid of deductive and inductive inference. Yet as Larson shows, the problem is that neither deduction, nor induction, nor their combination is adequate to reconstruct abduction. Abductive inference requires identifying hypotheses that explain certain facts or states of affairs in need of explanation. The problem with such hypothetical or conjectural reasoning is that the range of hypotheses is virtually infinite. Human intelligence can, somehow, sift through these hypotheses and identify those that are relevant. Larson’s point, and one he convincingly establishes, is that we don’t have a clue how to do this computationally.

News, “No AI overlords?: What is Larson arguing and why does it matter?” at Mind Matters News
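To make the contrast concrete, here is a toy sketch (not from Larson's book; the rules, hypotheses, and scoring are all invented for illustration). Deduction mechanically applies fixed rules, so a computer handles it easily. The naive "abduction" below only works because the candidate list is small and hand-picked; in the real problem the hypothesis space is open-ended, which is exactly where the enumeration strategy breaks down.

```python
# Toy contrast between deduction and abduction.
# All rules, hypotheses, and the scoring heuristic are invented examples.

def deduce(rules, fact):
    """Deduction: conclusions follow necessarily from premises.
    Return every conclusion whose premise matches the given fact."""
    return {conclusion for premise, conclusion in rules if premise == fact}

def abduce(observation, hypotheses):
    """Naive 'abduction': score each candidate explanation by how many
    times it accounts for the observation and keep the best one.
    The catch: this only works because the candidate list is finite and
    hand-curated. Generating and ranking the relevant hypotheses from an
    open-ended space is the part nobody knows how to compute."""
    return max(hypotheses, key=lambda h: h["explains"].count(observation))

rules = [("rain", "wet grass")]
print(deduce(rules, "rain"))  # deduction yields a guaranteed conclusion

hypotheses = [
    {"name": "rain",      "explains": ["wet grass", "puddles"]},
    {"name": "sprinkler", "explains": ["wet grass"]},
    {"name": "prank",     "explains": []},
]
best = abduce("wet grass", hypotheses)
print(best["name"])
```

The sketch is deliberately unfair to the real problem: the hard part of abduction is not scoring a short list but arriving at a short, relevant list in the first place.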
Abductive reasoning is part of design theory. Interesting that computers can’t do it.
See also: New book massively debunks our “AI overlords”: Ain’t gonna happen AI researcher and tech entrepreneur Erik J. Larson expertly dissects the AI doomsday scenarios. Many thinkers have tried to stem the tide of hype but, as an information theorist points out, no one has done it so well.
Why Richard Dawkins thinks AI may replace us He likes the idea because it is consistent with his naturalist philosophy. Dawkins does not advance an argument for why “anything that a human brain can do can be replicated in silicon,” apart from the fact that he is “committed to the view that there’s nothing in our brains that violates the laws of physics.”
3 Replies to “Bill Dembski on why Erik Larson says there will be no AI overlords”
I think that what really scares people about AI is the prospect of one that is like us, because we know all too well how dangerous we can be. An AI that formed a concept of ‘self’, an instinct for the continued survival of that ‘self’, and the will and power to assert its interests over those of any other beings with which it found itself in competition could be a formidable threat to us. And Star Trek notwithstanding, trying to paralyze it by asking it to calculate the value of pi probably wouldn’t work.
Nor do I think we are superior at abductive reasoning, or that we can or do sift through a virtually infinite range of hypotheses to arrive at the best inference. Human beings are demonstrably poor at estimating probabilities and, while a computer might manage it, the human brain would be far too slow at such a “brute force” approach to finding the best inference for it to be practical. That’s not how we work. As a species, we use a combination of reasoning, intuition, instinct, “common sense” and anything else we guess might work as a short-cut to a solution. It’s a lot less efficient and often more costly than having an AI crunch the numbers, but if you don’t have one handy, and we haven’t had one until very recently, what else are you going to do? And since we’re still here, it must work, at least up to a point.
That said, I still look forward to having a HAL 9000 or a Lt Cdr Data to work with. I could even live with the Borg if they went light on the implants and fitted a cut-off switch to shut out the babble of other minds when they became too intrusive.
My own take on why AIs will not outpace us is here.
Seversky: An AI that formed a concept of ‘self’
How could that happen?
Likewise, if I said “a forest of trees that formed a concept of ‘self’”, one would be justified in asking how that could happen.