From Bill Dembski at Freedom, Technology, Education:
Artificial Intelligence’s Homunculus Problem: Why AI Is Unlikely Ever to Match Human Intelligence
So how can we see that AI is not, and will likely never be, a match for human intelligence? The argument is simple and straightforward. AI, and that includes everything from classical expert systems to contemporary machine learning, always comes down to solving specific problems. This can be readily reconceptualized in terms of search (for the reconceptualization, see here): there’s a well-defined search space and a target to be found, and the task of AI is to find that target efficiently and reliably.
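The search framing above can be illustrated with a minimal sketch. This is not Dembski’s formalism, just a toy in Python: a finite, well-defined space of candidates, a target predicate, and a simple strategy that scans the space for the target. The space, target, and function names are all illustrative assumptions.

```python
from itertools import product

def exhaustive_search(space, is_target):
    """Scan a finite, well-defined search space; return the first target found."""
    for candidate in space:
        if is_target(candidate):
            return candidate
    return None  # target not in the space

# Toy example: the "space" is every 3-letter string over a tiny alphabet,
# and the "target" is one particular string the searcher must locate.
space = ("".join(p) for p in product("abc", repeat=3))
found = exhaustive_search(space, lambda s: s == "cab")
```

Chess engines and the like are, on this view, far more sophisticated versions of the same picture: a vast space, a target (winning positions), and a strategy for reaching it efficiently rather than exhaustively.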
If intelligence were simply a matter of finding targets in well-defined search spaces, then AI could, with some justification, be regarded as subsuming intelligence generally. For instance, its success at producing chess-playing programs that dominate human players might be regarded as evidence that machines are well on the way to becoming fully intelligent. And indeed, that view was widely advertised in the late 1990s when IBM’s Deep Blue defeated then world champion Garry Kasparov. Deep Blue was a veritable “genius” at chess. But computers had been “geniuses” at arithmetic before that.
Even to use the word “genius” for such specific tasks should give us pause. Yes, we talk about idiot savants, or people who are “geniuses” at some single task that a computer can often perform just as well or better (e.g., determining the day of the week of an arbitrary date). But real genius presupposes a nimbleness of cognition: the ability to move freely among different problem areas and to respond with the appropriate solution to each. Or, in the language of search, it means being able not just to handle different searches but to know which search strategy to apply to a given search situation.
Imagine, then, a huge library of algorithms, one for each such specific task. Now the point to realize is that this huge library of algorithms is not itself intelligent, to say nothing of being a genius. At best, such a library would pay homage to the programmers who wrote the algorithms and to the people whose intelligent behaviors served to train them (à la machine learning). But a kludge of all these algorithms would not be intelligent. What would be required for true intelligence is a master algorithm that coordinates all the algorithms in this library. Or, we might say, what’s needed is a homunculus.
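The library-plus-master-algorithm picture can be sketched in a few lines. Everything here is a hypothetical illustration: two narrow solvers, a lookup table standing in for the “library,” and a dispatcher whose selection rule is just hand-written bookkeeping, which is the point. Deciding which solver fits a genuinely novel problem is exactly the coordination the text argues would itself require intelligence.

```python
def sort_numbers(xs):
    """Narrow solver 1: competent at exactly one task."""
    return sorted(xs)

def reverse_text(s):
    """Narrow solver 2: competent at exactly one other task."""
    return s[::-1]

# The "library": a registry mapping task labels to specialized algorithms.
LIBRARY = {
    "sort": sort_numbers,
    "reverse": reverse_text,
}

def master(task, payload):
    """The would-be homunculus: pick the right solver for the task at hand.

    Note that the hard part -- recognizing which task a raw situation
    actually poses -- is assumed away by requiring a pre-labeled `task`.
    """
    solver = LIBRARY.get(task)
    if solver is None:
        raise ValueError(f"no solver in the library for task {task!r}")
    return solver(payload)
```

For example, `master("sort", [3, 1, 2])` delegates to `sort_numbers`, but only because a human already labeled the problem; the dispatcher contributes no understanding of its own.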
A homunculus fallacy is most commonly associated with the study of perception. More.
In the 17th century, the “homunculus” was the little human inside a sperm that grew into a baby in the environment of the womb. In smart AI theory, it appears to be a sort of self who seeks a given outcome and coordinates searches in order to achieve it.
But then the question becomes: how do we make machines care? That’s a tough one. At any rate, it’s a new take on the “search for the self.”
See also: Announcement: New Walter Bradley Center to assess claims for artificial intelligence critically