
Bill Dembski on artificial intelligence’s homunculus problem

Image: tiny person inside a sperm, drawing by Nicolaas Hartsoeker, 1695

From Bill Dembski at Freedom, Technology, Education:

Artificial Intelligence’s Homunculus Problem: Why AI Is Unlikely Ever to Match Human Intelligence

So how can we see that AI is not, and will likely never be, a match for human intelligence? The argument is simple and straightforward. AI, and that includes everything from classical expert systems to contemporary machine learning, always comes down to solving specific problems. This can be readily reconceptualized in terms of search (for the reconceptualization, see here): there's a well-defined search space and a target to be found, and the task of AI is to find that target efficiently and reliably.
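A minimal sketch of this "AI as search" framing, in Python; the names search, is_target, and strategy are illustrative assumptions, not anything from the article:

```python
# A minimal sketch of "AI as search": a well-defined space, a target
# predicate, and a strategy that decides the order candidates are tried.
# All names here are illustrative, not from the article.

def search(space, is_target, strategy):
    """Try candidates in the order the strategy proposes; stop at the target."""
    for candidate in strategy(space):
        if is_target(candidate):
            return candidate
    return None  # the space was exhausted without finding the target

# Example: brute force over a tiny, fully enumerable space.
found = search(range(100), is_target=lambda x: x * x == 49, strategy=iter)
print(found)  # 7
```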

If intelligence were simply a matter of finding targets in well-defined search spaces, then AI could, with some justification, be regarded as subsuming intelligence generally. For instance, its success at producing chess-playing programs that dominate human players might be regarded as evidence that machines are well on the way to becoming fully intelligent. And indeed, that view was widely advertised in the late 1990s when IBM's Deep Blue defeated then-world champion Garry Kasparov. Deep Blue was a veritable "genius" at chess. But computers had been "geniuses" at arithmetic before that.

Even to use the word "genius" for such specific tasks should give us pause. Yes, we talk about idiot savants, people who are "geniuses" at some one task that a computer can often do just as well or better (e.g., determining the day of the week of an arbitrary date). But real genius presupposes a nimbleness of cognition: the ability to move freely among different problem areas and to respond with the appropriate solutions to each. Or, in the language of search, being able not just to handle different searches but knowing which search strategy to apply to a given search situation.
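As a concrete instance of how cheap such narrow "genius" is for a machine, the weekday-of-a-date savant task above is a few lines of standard-library Python (a sketch; the sample date is arbitrary):

```python
import calendar
from datetime import date

def weekday_of(year, month, day):
    """Name the weekday of an arbitrary date (proleptic Gregorian calendar)."""
    return calendar.day_name[date(year, month, day).weekday()]

print(weekday_of(1695, 3, 15))  # Tuesday
```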

Now the point to realize is that this huge library of algorithms, one specialized routine per task, is not itself intelligent, to say nothing of being a genius. At best, such a library would pay homage to the programmers who wrote the algorithms and the people whose intelligent behaviors served to train them (à la machine learning). But a kludge of all these algorithms would not be intelligent. What would be required for true intelligence is a master algorithm that coordinates all the algorithms in this library. Or, we might say, what's needed is a homunculus.
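A hedged sketch of this point: the library below is nothing but a lookup table of specialists, and whatever looks like intelligence hides in choose(), the coordinating step the article calls a homunculus. Every name here is illustrative:

```python
# A library of specialist algorithms: each entry solves exactly one task.
LIBRARY = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "dedupe": lambda xs: list(dict.fromkeys(xs)),
}

def choose(task_description):
    # The "homunculus": something must map an open-ended situation onto the
    # right specialist. Faking it with an exact string match, as here, only
    # relocates the intelligence to whoever wrote the keys.
    return LIBRARY[task_description]

print(choose("sort")([3, 1, 2]))    # [1, 2, 3]
print(choose("dedupe")([1, 1, 2]))  # [1, 2]
```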

A homunculus fallacy is most commonly associated with the study of perception. More.

In the 17th century, the "homunculus" was the little human inside a sperm that grew into a baby in the environment of the womb. In theories of smart AI, it appears to be a sort of self that seeks a given outcome and coordinates searches in order to achieve it.

But then the question becomes: how do you make machines care? That's a tough one. At any rate, it's a new take on the "search for the self."

See also: Announcement: New Walter Bradley Center to assess claims for artificial intelligence critically

Comments
Bob O'H @ 12, I don't disagree. But my point is that we address intelligence as if human-type intelligence is the only type possible. Why does intelligence require consciousness as we know it? Why does it require caring? Why does it require self-awareness?
Allan Keith
June 18, 2018 at 6:01 AM PDT
Allan Keith @ 8 -
"Since when is caring or empathy a sign of intelligence?"
It's related, even if it's not the same. Theory of mind is used as a test in studies of animal intelligence: it's seen as a part of consciousness. I'm not sure that the correlation between consciousness and intelligence would have to be the same for computers, though: animal intelligence seems to be linked to sociality.
Bob O'H
June 18, 2018 at 2:52 AM PDT
On the other hand, I like how this argument shifts the burden of proof.
EricMH
June 18, 2018 at 1:41 AM PDT
So, in summary, the Gödel argument against AI dismissed in this article seems most likely true, whereas the pace of technological innovation means the homunculus, if it is an algorithm, is within reach. Further, even if the homunculus is not within reach, if it is an algorithm this means that humans are still machines, and consequently not intelligent agents. This also dissolves any special dignity attributed to humans, since all machines can in theory be copied.
EricMH
June 16, 2018 at 11:18 AM PDT
Also, it is not true that computational systems can find Gödel sentences for axiomatic systems. A Gödel sentence requires first-order logic, and determining whether a sentence is provable in first-order logic is undecidable, since it uses universal quantification.
EricMH
June 16, 2018 at 11:15 AM PDT
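A sketch of the operational content of the comment above: first-order provability is semi-decidable, so a proof search can confirm provable sentences but has no general way to reject unprovable ones. The helpers enumerate_proofs and conclusion_of are hypothetical placeholders, not a real library:

```python
def is_provable(sentence, enumerate_proofs, conclusion_of):
    """Halts with True if some proof derives the sentence; otherwise loops forever."""
    for proof in enumerate_proofs():  # countably many candidate proofs
        if conclusion_of(proof) == sentence:
            return True
    # For an infinite enumeration this point is never reached: there is no
    # general "return False" branch, which is what undecidability looks
    # like operationally.
```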
The human ego would never admit to an AI with intelligence equal to or greater than ours. We would simply shift the goalposts, as we have already done with ideas of intelligence, reasoning, abstract thought, and language in animals. In answer to News' question about how we make an AI that cares: since when is caring or empathy a sign of intelligence?
Allan Keith
June 16, 2018 at 10:39 AM PDT
Just because the homunculus problem hasn't been solved doesn't mean it won't be in the near future. A bit over half a century ago we did not have computers, and now they run our world.
EricMH
June 16, 2018 at 10:14 AM PDT
Can Turing machines prove the halting problem undecidable? I submit they cannot, because a Turing machine cannot run all possible halting-problem solvers and identify that they will not halt. Thus, since Turing proved the halting problem undecidable, at least Turing is not a Turing machine.
EricMH
June 16, 2018 at 10:04 AM PDT
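The standard diagonal argument behind the comment above, sketched in Python under the assumption (made only to be contradicted) that a perfect oracle halts() exists:

```python
def halts(prog, arg):
    """Hypothetical oracle: True iff prog(arg) halts. Assumed for contradiction."""
    raise NotImplementedError

def troll(prog):
    # Do the opposite of whatever the oracle predicts about prog run on itself.
    if halts(prog, prog):
        while True:  # predicted to halt, so loop forever
            pass
    return           # predicted to loop, so halt immediately

# troll(troll) halts if and only if it does not halt -- a contradiction,
# so no Turing machine can implement halts().
```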
Also related is an old video I did about "Solving Engineering Problems Using Theology": https://www.youtube.com/watch?v=yVeWBM1J-NE
johnnyb
June 15, 2018 at 3:05 PM PDT
as to this claim from Dr. Dembski's article:
I’ve never found this argument (even when not oversimplified, as I did above) persuasive. When humans identify Goedel sentences, it is for algorithmic systems that are separate from themselves, where they can see the entire logical structure and then use that structure against itself, as it were, to identify a Goedel sentence. But even if human intelligence is algorithmic, humans don’t have the capability of, so to speak, looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence.
If humans do not have the capacity of "looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence", then pray tell how Dr. Dembski (and others) were able to identify the fallacy of the Homunculus argument in the first place?
Homunculus argument Excerpt: The homunculus argument is a fallacy arising most commonly in the theory of vision. One may explain (human) vision by noting that light from the outside world forms an image on the retinas in the eyes and something (or someone) in the brain looks at these images as if they are images on a movie screen (this theory of vision is sometimes termed the theory of the Cartesian theater: it is most associated, nowadays, with the psychologist David Marr). The question arises as to the nature of this internal viewer. The assumption here is that there is a "little man" or "homunculus" inside the brain "looking at" the movie. The reason why this is a fallacy may be understood by asking how the homunculus "sees" the internal movie. The obvious answer is that there is another homunculus inside the first homunculus's "head" or "brain" looking at this "movie". But that raises the question of how this homunculus sees the "outside world". To answer that seems to require positing another homunculus inside this second homunculus's head, and so forth. In other words, a situation of infinite regress is created. The problem with the homunculus argument is that it tries to account for a phenomenon in terms of the very phenomenon that it is supposed to explain. https://en.wikipedia.org/wiki/Homunculus_argument
To be able even to recognize the fallacy of the Homunculus argument, as Dr. Dembski has done in his article, requires us to have an outside perspective, a "lifting the tops of (our) skulls". It would seem that Dr. Dembski's appeal to the Homunculus argument itself directly refutes his claim that humans don't have the capability of, so to speak, looking under the hood (lifting the tops of their skulls?) and therewith identifying their own Goedel sentence. Might I also suggest that John Nash (i.e., A Beautiful Mind) would never have recovered from his mental illness had he not been able to reach outside his own flawed thinking and, via a perspective outside of himself, 'think rationally'?
John Nash Excerpt: Slowly he regained engagement and lucidity. Others thought it was via drugs, but he quit taking drugs in 1970. Harold Kuhn: "I said, John, how in the devil have you recovered?" John: "I willed it. I decided I was going to think rationally." https://redefineschool.com/john-nash/
I would also like to note that the Homunculus argument is very friendly to Dr. Michael Egnor’s (Theistic) contention (via Aristotle) that “Perception at a distance is no more inconceivable than action at a distance.”
Perception and the Cartesian Theater – Michael Egnor – December 8, 2015 Excerpt: Perception at a distance is no more inconceivable than action at a distance. The notion that a perception of the moon occurs at the moon is “bizarre” (Torley’s word) only if one presumes that perception is constrained by distance and local conditions — perhaps perception would get tired if it had to go to the moon or it wouldn’t be able to go because it’s too cold there. Yet surely the view that the perception of a rose held up to my eye was located at the rose wouldn’t be deemed nearly as bizarre. At what distance does perception of an object at the object become inconceivable? http://www.evolutionnews.org/2015/12/perception_and101471.html
It should be noted that Dr. Torley strongly objected to Dr. Egnor's argument for 'perception at a distance'. Specifically, Dr. Torley held that perception cannot possibly be at a Supernova which “ceased to exist nearly 200 millennia ago, long before the dawn of human history.”
The Squid and the Supernova: A Reply to Professor Egnor - December 9, 2015 – vjtorley Excerpt: In February 1987, a supernova appeared in the Southern skies, and remained visible for several months. ,,, The problem is that the object itself ceased to exist nearly 200 millennia ago, long before the dawn of human history. Even if the squid that witnessed the explosion were capable of having perceptions which are located in intergalactic space, as Egnor contends, they are surely incapable of having perceptions which go back in time. ,,,perception is a bodily event, and that an event involving my body cannot take place at a point which is separate from my body. An event involving my body may occur inside my body, or at the surface of my body, but never separately from it. Thus it simply makes no sense to assert that I am here, at point X, but that my perceptions – or for that matter, my actions – are located at an external point Y. https://uncommondescent.com/intelligent-design/the-squid-and-the-supernova-a-reply-to-professor-egnor/
Besides the Homunculus argument undermining Dr. Torley's claim that perception cannot possibly be at a distance, advances in Quantum Mechanics now also, empirically, undermines Dr. Torley's claim: Specifically, quantum entanglement in time “implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.”
You thought quantum mechanics was weird: check out entangled time - Feb. 2018 Excerpt: Up to today, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?,,, The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted. What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old. https://aeon.co/ideas/you-thought-quantum-mechanics-was-weird-check-out-entangled-time
i.e. Quantum Entanglement in Time and the Homunculus argument both, fairly strongly, back up Dr. Egnor's claim that perception must be 'at a distance'. Perception simply refuses to be limited to 'under the hood of our skulls' as Dr. Torley (and Dr. Dembski) seem to imply. Of semi-related note:
Albert Einstein vs. Quantum Mechanics and His Own Mind – video: https://www.youtube.com/watch?v=vxFFtZ301j4
Double Slit, Quantum-Electrodynamics, and Christian Theism: https://www.youtube.com/watch?v=AK9kGpIxMRM
Also of note: Christian theists have the 'ultimate' perspective outside of themselves to appeal to in order to correct flawed thinking, namely their relationship with God:
Proverbs 15:3 The eyes of the LORD are in every place, beholding the evil and the good.
bornagain77
June 15, 2018 at 4:19 AM PDT
Thinking out loud... "Or, in the language of search, being able not just to handle different searches but knowing which search strategy to apply to a given search situation." Even that is amenable to AI, at least potentially. Nonetheless, I think that the singularity point where AI surpasses humans in every reasoning exercise will never be reached. Fundamentally, this is because it needs the faculty of self-referential reasoning, which is a big stopper. Reflection is only possible for a conscious agent, something a machine will never become. A lot of our reasoning activities cannot be laid out in the form of an algorithm, whereas AI is fundamentally algorithm-based. AI will surpass humans in anything that is algorithm-based but will remain inferior in everything else. There is no algorithm for insight, experience, wisdom, moral judgement, or consciousness.
Eugene S
June 15, 2018 at 3:13 AM PDT
There is a digression in the middle of this about Gödel incompleteness. Is he quoting someone, or is that his own? I'm actually quite surprised by it, because it fails to note one particular factor of supreme importance: the fact that all finitary computational devices are equivalent. Therefore, the fact that we have our own Gödel statements that we cannot prove is irrelevant. If we are able to find answers to statements which computers cannot, that more or less settles the matter. All Turing machines are essentially equivalent, so if I show myself to be non-Turing, then, even if there are Gödel statements I cannot access, the ability to point to statements which the computer cannot know does mean that I am not a computer.
johnnyb
June 14, 2018 at 10:16 PM PDT
So good to see Bill Dembski contributing to this topic. Brilliant mind.
Truth Will Set You Free
June 14, 2018 at 8:07 AM PDT