At Mind Matters News, Peter Biles writes,
Robert J. Marks wrote an article for the Spring Issue of Salvo Magazine on AI, covering his ideas on its “non-computability” in the areas of love, empathy, and creativity.
The Quality of Qualia
I was particularly intrigued by Marks’s thoughts on qualia, a term used to describe the multifaceted realm of sensory experience. We often report on AI’s inability to be creative here at Mind Matters, but what about experiencing the world through touch, smell, and sight? Qualia is related to the mystery of consciousness, another non-computable feature of human life, and according to Marks, is far out of the purview of AI capabilities. Marks writes about the experience of biting into an orange as an example:
If the experience of biting an orange segment cannot be described to a man without the senses of taste and smell, how can a computer programmer expect to duplicate qualia experiences in a computer using computer code? If the true experience can’t be explained, it is nonalgorithmic and therefore non-computable. There are devices called artificial noses and tongues that detect chemicals like the molecules associated with the orange. But these sensors experience no pleasure when the molecules are detected. They experience no qualia. Duplicating qualia is beyond the capability of AI.
Qualia can’t be explained or encountered through algorithms. – Robert J. Marks, “Cannot Compute” (March 7, 2023)
This might be an argument for the uniqueness of being alive in general and of human consciousness in particular.
21 Replies to “Robert J. Marks on why AI doesn’t achieve consciousness”
Qualia being non-computational and central to consciousness goes back at least to Galileo. Matter doesn’t possess qualia, and there is no way to manipulate it to generate qualia and, therefore, consciousness.
The problem is that we don’t just exist in our minds, and if mind and matter share no properties, how do they causally interact? This is the root of what strong AI scientists are trying to solve. The more I learn about it, the more I move toward the minority view in the business of trying to create AGI: that it will not succeed and that Descartes was fundamentally correct.
Given that AI is in its infancy, it seems like staking out hard lines about its limitations and shortcomings is ill advised. There was a time when humans looked up at the moon and the mere thought of walking on its surface was utterly ridiculous. And then one day it wasn’t.
CD at 1,
Artificial Intelligence doesn’t exist. No AI is a person or will be a person. AI today is just more automation, as it was in the past. New and more complex programs will allow companies to remove or cut down on hiring human beings, and Wall Street will reward them. Just as a one-megabyte chip was the size of a U.S. quarter in the 1970s, computer programs now exist that have been developed with the goal of getting rid of people by using the programs to replace them. That is all this is.
I hate to disagree with the article, but to play devil’s advocate, I will describe how a robotic AI could perhaps be made to “feel” qualia, or at least simulate the feeling to the point where no one could say its feelings weren’t real:
– start with a robot having some sensory apparatus, like taste or smell – even sight would do
– next, program into the AI’s core some code to adjust a variable according to what it senses
– then program the AI to respond (behave) differently to different levels of that variable
– and program the AI to explain its reaction to the sensory input in terms of “feelings”.
Example: the sensory apparatus detects human faces (already done)
– the variable (call it endorphin, for example) goes up when a human face is detected
– the AI becomes excited and pleasant (shifting response parameters) when endorphin is high
– when asked why its attitude changed, the AI responds that it “feels good”
– when asked why it feels good, the AI responds that “seeing” people makes it happy
– when asked how it is to feel happy, the AI can easily pull up some text on the subject.
Add enough pseudo-hormones, triggered by suitable sensory inputs, along with enough shifts in “personality” in response, and access to descriptions of feelings, and the AI could do a fair job of seeming to have qualia.
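As a toy illustration, the pseudo-hormone scheme sketched above could look roughly like the following. Everything here is hypothetical: the class, the `endorphin` variable, the thresholds, and the canned “feelings” responses are all invented for the sketch, not taken from any real system.

```python
# Toy sketch of the pseudo-hormone idea: a hypothetical robot whose
# "mood" variable rises when its sensor detects a face, shifting both
# its behavior and the "feelings" it reports. Purely illustrative.

class ToyRobot:
    def __init__(self):
        self.endorphin = 0.0  # pseudo-hormone level, between 0 and 1

    def sense(self, face_detected: bool) -> None:
        # Raise the pseudo-hormone when a face is seen; let it decay otherwise.
        if face_detected:
            self.endorphin = min(1.0, self.endorphin + 0.3)
        else:
            self.endorphin = max(0.0, self.endorphin - 0.1)

    def respond(self) -> str:
        # Behavior (response parameters) shifts with the pseudo-hormone level.
        return "excited" if self.endorphin > 0.5 else "neutral"

    def explain(self) -> str:
        # The robot reports its internal state in the vocabulary of feelings.
        if self.endorphin > 0.5:
            return "I feel good; seeing people makes me happy."
        return "I feel calm."

robot = ToyRobot()
for _ in range(3):
    robot.sense(face_detected=True)
print(robot.respond())  # "excited" after repeated face detections
print(robot.explain())
```

The point of the sketch is only that each step in the recipe (sensor, internal variable, behavior shift, verbal report) is trivially programmable; whether any of it amounts to qualia is exactly what the rest of the thread disputes.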
In addition, though qualia are real in humans, are we able to explain them to others without using standard descriptive tropes? How do you explain the qualia of “redness” to someone born blind?
Notwithstanding all this, I agree that no AI will ever be conscious in the same way humans are. One big difference is that we understand meaning and know what we are talking about. Moreover, humans have introspection in which we hold internal dialogues about various subjects. Those are very different from what goes on inside an AI.
While an AI can simulate “knowing” it does not truly understand meanings the way we do and is not truly able to “think about” things. Mind you, it may be possible to simulate that as well, at least well enough to fool most people. Then again, the purpose of AI development should not be to fool people into thinking the AI is conscious.
Fasteddious at 3,
It would be a simulation of a human, nothing more. A few movies have already featured human-looking robots with simulated feelings.
Relatd @ 4: You do understand that the AIs in movies are not real AIs. The movie plot and dialogue are written by humans.
However, I agree with you that it would indeed be a simulation. But with enough detail and depth, the simulation would seem very real. Indeed, how do you know your friends aren’t clever simulations? We just assume they are real because no one we know of has ever made such a clever simulation. What fair test could distinguish dialogue by a very advanced and cleverly programmed AI from a human? Computer scientists come up with tests, but they get more and more complicated and tricky as the AIs get smarter and more advanced. Just as an AI got better at playing chess until it could beat every human, who is to say that as AIs advance they may not someday be able to convince every human that they too are conscious?
Yet I agree the AI would still not be truly conscious in the fully human sense. Even if it could pretend to report on its “internal dialogue” and “thinking” processes in human terms, it would still not really understand what it was reporting. At least that is my opinion.
I have an idea for an interesting thought experiment. Charge two scientists with the task of creating a robot. One will invent a robot who is conscious. The other will invent a robot who is not conscious but will be able to fool everybody into thinking it is.
How would you advise the two scientists to proceed?
First, I’d advise them to become really familiar with the Turing test.
Next, I’d suggest that they would need to understand the so-called “hard problem of consciousness.”
Only then would the first scientist be able to make any significant differential progress from the second one.
Therein lies the rub. The answer is that there appears to always be a possible more advanced AI-based simulation that a “fair test” could not distinguish from a real human. But also, there always will remain the existential gulf between characteristics and parameters describable in any way including words and mathematical equations, and the ineffable reality of inner sentient experience, qualia for instance. In other words, “what it is like to experience X”.
This whole Fasteddious argument falls apart because of this logical dilemma. The evident answer is that since the inner sentient experience of subjective consciousness absolutely cannot be reduced to quantifiable, physically measurable characteristics (i.e., what is the weight of an emotion?), and because simulations can only generate calculated, physically measurable parameters or characteristics, no simulation can generate the essence of subjective awareness and consciousness.
Ah, but herein lies the difficulty: the ineffable reality of inner sentient experience, precisely because it is ineffable, could never be verified by any external test or measurement, regardless of how sophisticated it is.
The question is not just the old “how do you know that I’m not a zombie?” The real question is, how do I know that I’m not a zombie?
I consider myself conscious. But so what? For all I know — and for all that I am capable of knowing — everyone else has some special state in addition to consciousness, so that what they experience is something different from what I experience. How do I know that they have qualia and I’m the one who doesn’t? It doesn’t feel to me as though I’m without qualia — but how can I verify that?
There’s a further problem. I have doubts about whether a machine could gain consciousness, but if it were possible, a poor simulation would be as likely to be conscious as a good one.
When I look at the crane fly sitting on the table I think it enjoys an advantage over any computer regardless of how powerful. In the case of the fly, I think there is “somebody home” so to speak. I don’t think that’s the case with the machine.
Admittedly it’s an extrapolation. I just feel an affinity for certain animals and I project the feeling to lower forms of life. From a computer, I would no more expect consciousness than I would from an abacus.
The fundamental problem is that there’s no known cause for consciousness (aka the hard problem of consciousness) or any quantitative measurement device such as a “psychometer.”
Darwinists assumed that consciousness must have emerged from large aggregations of neurons. The question of sperm whale (18-lb brain), killer whale (12-15 lbs), and elephant (11 lbs) intelligence (hyperconsciousness?) was met by dividing brain mass by body mass. But then that would mean skinny people must be smarter than fat people, and there is no explanation for the high intelligence of corvids (crows, etc.) and some parrots.
Then, believers in panpsychism assumed that any sufficiently complex machinery or perhaps mass also must have consciousness–the stuff of science fantasy.
The idea of the encephalization quotient was an interesting attempt to figure out what tends to drive brain evolution. As it turns out, the main reason it was abandoned is that it only considered brain mass, not the actual number of neurons. It also has problems when comparing brains that are organized in very different ways. The brains of birds are very different from mammals’ brains, and they developed a different way of being intelligent.
But, I think that encephalization quotients can be useful in comparing species that are closely related, such as dogs and wolves, or chimpanzees and humans.
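To make the contrast concrete, here is a small sketch of the difference between a raw brain-to-body ratio and the encephalization quotient (EQ), using Jerison’s often-cited mammalian fit, in which expected brain mass is roughly 0.12 × body mass^(2/3) (masses in grams). The species masses below are rough illustrative figures, not precise measurements.

```python
# Raw brain/body ratio vs. encephalization quotient (EQ).
# EQ = actual brain mass / expected brain mass for a mammal of that
# body size, with expected mass ~ 0.12 * body_mass**(2/3) (Jerison's
# classic fit). All species figures are rough, illustrative values.

def expected_brain_mass(body_g: float) -> float:
    """Expected mammalian brain mass (grams) for a given body mass (grams)."""
    return 0.12 * body_g ** (2 / 3)

def eq(brain_g: float, body_g: float) -> float:
    """Encephalization quotient: actual over expected brain mass."""
    return brain_g / expected_brain_mass(body_g)

species = {
    # name: (brain mass g, body mass g) -- approximate values
    "human": (1350, 65_000),
    "sperm whale": (8000, 40_000_000),
    "mouse": (0.4, 25),
}

for name, (brain, body) in species.items():
    print(f"{name:12s} brain/body = {brain / body:.5f}  EQ = {eq(brain, body):.2f}")
```

On these rough figures the sperm whale’s enormous brain yields an EQ below 1 while the human EQ comes out around 7, which is why EQ was proposed in the first place; the objections above (neuron counts, bird brains) are about why even that correction falls short.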
As I suggested in a separate thread, panpsychism is motivated by a commitment to the reality of qualia and a denial of emergentism. The logical entailment of those commitments is — for them — the idea that every subatomic particle has qualitative awareness.
I agree that’s silly, but as the great philosopher David Lewis liked to say, an incredulous stare is not an argument. In other words, the fact that one finds a view to be utterly absurd is not an argument against that view.
I think this is a prime example of the postmodern materialist “hyperskepticism” so criticised by Kairosfocus. As Descartes realized in the 17th century, the personal experience of thinking, of experiencing anything in subjective consciousness, is the ground floor of existence and requires no verification: “I think, therefore I am”. The buck stops there and to go any further in the worship of rationality is the way of madness.
It cannot be verified by something other than itself.
Two grave errors underlie the idea that the question “how do I know that I’m not a zombie?” makes sense. The first error is the assumption that one can occupy a position independent of oneself, from which one can make the judgment. The second error is the belief that “yes, I am a zombie” can be an acceptable rational outcome.
We cannot coherently hold beliefs that, if true, would prevent us from being rational. In that context, we are forced to assume certain things about ourselves.
I don’t see the two cases as being as similar as you do.
The Cartesian question is, “is one necessarily aware of the fact of one’s own awareness, even if one doubts the existence of everything ‘external’ to the mind?” I think that Descartes was right to say that “yes” is the only intelligible answer to that question.
When David Chalmers used the concept of qualia in The Conscious Mind to raise the possibility of what he called “zombies”, he raised the question, “can we conceive of persons whose psychological functions are identical to our own but which lack any first-personal, phenomenal consciousness?”
Here’s why I think the auto-zombie is not absurd or “hyperskeptical”: when I introspect, do I identify any states of my own phenomenal consciousness that are not associated with some psychological function? Or is it rather the case that phenomenal consciousness just is how psychological functions appear from the first-person standpoint?
So, could I be a zombie? Remember, a zombie is something that has all the same psychological functions as we do, but which lacks phenomenal consciousness. But if my own awareness of my phenomenal states just is how I subjectively track my own psychological functions, then for all I know, other people could have something else that does not track their psychological functions.
So for all I know — and for all that I can know — they would rightly consider me a zombie, if they could compare my mental states with their mental states.
So, you are arguing that you are a zombie?
No. I’m arguing that no one could argue that they are not zombies, if they take “the hard problem of consciousness” seriously to begin with.
In other words, it’s a reductio ad absurdum of the hard problem of consciousness: to accept it as even being a problem at all is to allow that you might be a zombie.
As with much of philosophy, the only move is not to play.
When Chalmers asked “can we conceive of persons whose psychological functions are identical to our own but which lack any first-personal, phenomenal consciousness?” he refers to others. I can coherently question if another person is a zombie or not. What I cannot do is coherently question if I am a zombie or not.
Why not? Because a zombie cannot ask himself that question — there is no one home.
I’m not as sure of that as you are. Firstly, it seems to conflate subjectivity or selfhood with qualia. But qualia are sheer states of phenomenal awareness — the feeling-ness of the color red, or of the tone of B-flat. (On this issue, I found this paper by Joseph Neisser to be quite enlightening.)
Secondly, it seems to me that you’re assuming that subjectivity is not itself a psychological function. If it is, then zombies would have it — since they have all the psychological functions that we do.
A zombie is defined as lacking first-personal consciousness. If so, how can it ask itself the question “am I a zombie or not”? Without first-personal consciousness, the term “I” in that question does not refer to anything, right? The question would have no meaning to the zombie. That’s why I wrote, “there is no one home.”