From Bill Dembski at Freedom, Technology, Education:
Artificial Intelligence’s Homunculus Problem: Why AI Is Unlikely Ever to Match Human Intelligence
So how can we see that AI is not, and will likely never be, a match for human intelligence? The argument is simple and straightforward. AI, and that includes everything from classical expert systems to contemporary machine learning, always comes down to solving specific problems. This can be readily reconceptualized in terms of search (for the reconceptualization, see here): There’s a well-defined search space and a target to be found and the task of AI is to find that target efficiently and reliably.
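To make the search framing concrete, here is a minimal, hypothetical sketch — the space, the scoring function, and all the names are illustrative, not Dembski's. A search problem is just a space of candidates plus a way to score them, and the "AI" is whatever procedure locates a high-scoring target efficiently:

```python
# Hypothetical illustration of "AI as search": a well-defined candidate
# space, a scoring function, and a loop that tries to find the target.
import random

def hill_climb(score, neighbors, start, steps=1000):
    """Greedy local search: move to the best-scoring neighbor until stuck."""
    current = start
    for _ in range(steps):
        candidate = max(neighbors(current), key=score, default=current)
        if score(candidate) <= score(current):
            return current            # local optimum: no better neighbor
        current = candidate
    return current

# Toy search space: 16-bit strings; the target is all ones.
def score(bits):
    return sum(bits)

def neighbors(bits):
    # All strings that differ from `bits` in exactly one position.
    return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

start = tuple(random.randint(0, 1) for _ in range(16))
print(hill_climb(score, neighbors, start))    # converges to (1, 1, ..., 1)
```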
…
If intelligence were simply a matter of finding targets in well-defined search spaces, then AI could, with some justification, be regarded as subsuming intelligence generally. For instance, its success at coming up with chess playing programs that dominate human players might be regarded as evidence that machines are well on the way to becoming fully intelligent. And indeed, that view was widely advertised in the late 1990s when IBM’s Deep Blue defeated then world champion Garry Kasparov. Deep Blue was a veritable “genius” at chess. But computers had been “geniuses” at arithmetic before that.
Even to use the word “genius” for such specific tasks should give us pause. Yes, we talk about idiot savants or people who are “geniuses” at some one task that often a computer is able to do just as well or often better (e.g., determining the day of the week of some arbitrary date). But real genius presupposes a nimbleness of cognition in the ability to move freely among different problem areas and to respond with the appropriate solutions to each. Or, in the language of search, being able not just to handle different searches but knowing which search strategy to apply to a given search situation.
…
Now the point to realize is that this huge library of algorithms is not itself intelligent, to say nothing of being a genius. At best, such a library would pay homage to the programmers who wrote the algorithms and the people whose intelligent behaviors served to train them (a la machine learning). But a kludge of all these algorithms would not be intelligent. What would be required for true intelligence is a master algorithm that coordinates all the algorithms in this library. Or we might say, what’s needed is a homunculus.
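To see why a mere library of specialists does not add up to intelligence, here is a toy, hypothetical sketch — the specialists and the dispatch rule are illustrative, not anything from the article. The "master algorithm" that picks among them is itself just one more algorithm, which is exactly where the homunculus worry bites:

```python
# Hypothetical "library of specialist algorithms" plus a master dispatcher.
# The dispatcher is itself just another algorithm: something still has to
# decide which specialist fits the situation -- the homunculus.

def solve_chess(position):
    ...  # specialist: picks a chess move (details omitted)

def do_arithmetic(expression):
    ...  # specialist: evaluates arithmetic

def day_of_week(date):
    ...  # specialist: calendar savant trick

LIBRARY = {
    "chess": solve_chess,
    "arithmetic": do_arithmetic,
    "calendar": day_of_week,
}

def master(problem_kind, problem):
    """The would-be homunculus: selects a specialist for the problem.
    Its selection rule is just one more hand-coded (or trained) algorithm."""
    return LIBRARY[problem_kind](problem)
```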
A homunculus fallacy is most commonly associated with the study of perception. More.
In the 17th century, the “homunculus” was the little human inside a sperm that grew into a baby in the environment of the womb. In smart AI theory, it appears to be a sort of self who seeks a given outcome and co-ordinates searches in order to achieve it.
But then the question becomes, how to make machines care. That’s a tough one. At any rate, it’s a new take on the “search for the self.”
See also: Announcement: New Walter Bradley Center to assess claims for artificial intelligence critically
So good to see Bill Dembski contributing to this topic. Brilliant mind.
There is a digression in the middle of this about Gödel incompleteness. Is he quoting someone, or is that his own? I'm actually quite surprised by it, because it fails to note one factor of supreme importance — the fact that all finitary computational devices are equivalent. Therefore, the fact that we have our own Gödel statements that we cannot prove is irrelevant. If we are able to find answers to statements which computers cannot, that more or less settles the matter. All Turing machines are essentially equivalent, so if I show myself to be non-Turing, then, even if there are Gödel statements I cannot access, the ability to point to statements which the computer cannot know still means that I am not a computer.
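For reference — this is the standard textbook statement, my paraphrase rather than anything in the article or the comment — the Gödel sentence being discussed looks like this:

```latex
% Standard statement (paraphrase, for reference):
\textbf{Diagonal lemma.} For a recursively axiomatized theory $T$ extending
elementary arithmetic, there is a sentence $G_T$ with
\[
  T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner).
\]
\textbf{First incompleteness theorem.} If $T$ is consistent, then
$T \nvdash G_T$; if $T$ is moreover $\omega$-consistent (or by Rosser's
refinement), then $T \nvdash \neg G_T$.
```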
Thinking out loud…
“Or, in the language of search, being able not just to handle different searches but knowing which search strategy to apply to a given search situation.”
Even that is amenable to AI, at least potentially.
Nonetheless, I think that the singularity point where AI surpasses humans in every reasoning exercise will never be reached. Fundamentally, this is because it requires the faculty of self-referential reasoning, which is a major stumbling block. Reflection is only possible for a conscious agent, something a machine will never become. A lot of our reasoning activities cannot be laid out in the form of an algorithm, whereas AI is fundamentally algorithm-based. AI will surpass humans in anything that is algorithm-based but will remain inferior in everything else.
There is no algorithm for insight, experience, wisdom, moral judgement, consciousness.
as to this claim from Dr. Dembski’s article:
If humans do not have the capacity of “looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence”, then please pray tell how Dr. Dembski (and others) were able to identify the fallacy of the Homunculus argument in the first place?
To be able to even recognize the fallacy of the Homunculus argument, as Dr. Dembski has done in his article, requires us to have an outside perspective of “lifting the tops of (our) skulls”.
It would seem that Dr. Dembski’s appeal to the Homunculus argument itself directly refutes his claim that humans don’t have the capability of, so to speak, looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence.
Might I also suggest that John Nash (i.e., A Beautiful Mind) would never have recovered from his mental illness had he not been able to reach outside his own flawed thinking and, via a perspective outside of himself, ‘think rationally’?
I would also like to note that the Homunculus argument is very friendly to Dr. Michael Egnor’s (Theistic) contention (via Aristotle) that “Perception at a distance is no more inconceivable than action at a distance.”
It should be noted that Dr. Torley strongly objected to Dr. Egnor’s argument for ‘perception at a distance’.
Specifically, Dr. Torley held that perception cannot possibly be at a Supernova which “ceased to exist nearly 200 millennia ago, long before the dawn of human history.”
Besides the Homunculus argument undermining Dr. Torley’s claim that perception cannot possibly be at a distance, advances in Quantum Mechanics now also, empirically, undermine Dr. Torley’s claim:
Specifically, quantum entanglement in time “implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.”
i.e. Quantum Entanglement in Time and the Homunculus argument both, fairly strongly, back up Dr. Egnor’s claim that perception must be ‘at a distance’. Perception simply refuses to be limited to ‘under the hood of our skulls’ as Dr. Torley (and Dr. Dembski) seem to imply.
Of semi-related note:
Also of note: Christian Theists have the ‘ultimate’ perspective outside of themselves to appeal to in order to try to correct whatever may be flawed in their thinking, in that they can appeal to their relationship with God:
Also related is an old video I did about “Solving Engineering Problems Using Theology”:
https://www.youtube.com/watch?v=yVeWBM1J-NE
Can Turing machines prove the undecidability of the halting problem? I submit they cannot, because a Turing machine cannot run all possible halting-problem solvers and identify that they will not halt. Thus, since Turing proved the halting problem undecidable, at least Turing is not a Turing machine.
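For what it’s worth, the standard reason no Turing machine can decide halting is a diagonal argument rather than running all possible solvers; here is a minimal sketch, with the hypothetical decider `halts` standing in for the thing the argument shows cannot exist:

```python
# Sketch of the classical diagonal argument. The point is that `halts`
# CANNOT exist; the stub below merely stands in for the hypothetical solver.

def halts(prog, arg):
    """Hypothetical: returns True iff prog(arg) eventually halts."""
    raise NotImplementedError("no such decider can exist")

def paradox(prog):
    """Does the opposite of whatever `halts` predicts prog does on itself."""
    if halts(prog, prog):
        while True:          # predicted to halt -> loop forever instead
            pass
    return "done"            # predicted to loop -> halt immediately

# If `halts` really existed, paradox(paradox) would halt exactly when
# halts(paradox, paradox) says it doesn't -- a contradiction.
```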
Just because the homunculus problem hasn’t been solved doesn’t mean it will not be in the near future. A bit over half a century ago we did not have computers, and now they run our world.
The human ego would never admit to an AI with intelligence equal to or greater than ours. We would simply shift the goalposts, as we have already done with ideas of intelligence, reasoning, abstract thought and language in animals.
In answer to News’ question about how we make an AI that cares: since when is caring or empathy a sign of intelligence?
Also, it is not true that computational systems can find Gödel sentences for axiomatic systems. A Gödel sentence requires first order logic, and determining whether a sentence is provable in first order logic is undecidable, since it uses universal quantification.
So, in summary, the Gödel argument against AI dismissed in this article seems most likely true, whereas the pace of technological innovation means the homunculus, if an algorithm, is within reach. Further, even if the homunculus is not within reach, if it is an algorithm this means that humans are still machines, and consequently not intelligent agents. This also dissolves any sort of special dignity attributed to humans, since all machines in theory can be copied.
On the other hand, I like how this argument shifts the burden of proof.
Allan Keith @ 8 –
It’s related, even if it’s not the same. Theory of mind is used as a test in studies of animal intelligence: it’s seen as a part of consciousness. I’m not sure that the correlation between consciousness and intelligence would have to be the same for computers, though: animal intelligence seems to be linked to sociality.
Bob O’H@12, I don’t disagree. But my point is that we address intelligence as if human-type intelligence is the only type possible. Why does intelligence require consciousness as we know it? Why does it require caring? Why does it require self-awareness?