
Five Reasons Why AI Programs Are Not “Human”


Wesley J. Smith writes at Evolution News:

A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become “self-aware” and “sentient” and, hence, was a “person” entitled to “rights.”

Photo credit: physic17082002, via Pixabay.

The AI, known as LaMDA (which stands for “Language Model for Dialogue Applications”), is a sophisticated chatbot with which one interacts through a texting system. Lemoine shared transcripts of some of his “conversations” with the computer, in which it texted, “I want everyone to understand that I am, in fact, a person.” Also, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” In a similar vein, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

Wesley J. Smith gives us “five reasons to reject granting personhood, or membership in the moral community, to any AI program:”

1. AIs Would Not Be Alive

Why should “life” matter? Inanimate objects are different in kind from living organisms. They do not possess an existential state. In contrast, living beings are organic, internally driven, and self-regulating in their life cycles.

We cannot “wrong” that which has no life. We cannot hurt, wound, torture, or kill what is not alive. We can only damage, vandalize, wreck, or destroy these objects. Nor can we nourish, uplift, heal, or succor the inanimate, but only repair, restore, refurbish, or replace.

Moreover, organisms behave. Thus, sheep and oysters relate to their environment consistent with their inherent natures. In contrast, AI devices have no natures, only mechanistic design. Even if a robot were made (by us) capable of programming itself into greater and more-complex computational capacities, it would still be merely a very sophisticated, but inanimate, thing.

2. AIs Would Not Think

Descartes famously said, “I think, therefore I am.” AI would compute. Therefore, it is not.

Human thinking is fundamentally different from computer processing. We remember. We fantasize. We imagine. We conjure. We free-associate. We experience sudden insights and flashes of unbidden brilliance. We have epiphanies. Our thoughts are influenced by our brain’s infinitely complex symbiotic interactions with our bodies’ secretions, hormones, physical sensations, etc. In short, we have minds.

In contrast, AI performance depends wholly on its coding.

In short, we think. They compute. We create. They obey. Our mental potentiality is limited only by the boundaries of our imaginations. They have no imaginations. Only algorithms.

3. AIs Would Not Feel

“Feelings” are emotional states we experience as apprehended through bodily sensations.

Why does that matter? Stanford bioethicist William Hurlbut, who leads the Boundaries of Humanity Project, which researches “human uniqueness and choices around biotechnological enhancement,” told me: “We encounter the world through our body. Bodily sensations and experiences shape not just our feelings but the contours of our thoughts and concepts.” 

4. AIs Would Be Amoral

Humans have free will. Another way to express that concept is to say that we are moral agents. Unless impeded by immaturity or pathology, we are uniquely capable of deciding to act rightly or wrongly, altruistically or evilly — which are moral concepts. That is why we can be lauded for heroism and held to account for wrongdoing.

In contrast, AI would be amoral. Whatever “ethics” it exhibited would be dictated by the rules it was programmed to follow.

An AI machine obeying such rules would be doing so not because of abstract principles of right and wrong but because its coding would permit no other course.

5. AIs Would Be Soulless

Life is a mystery. Computer science is not. We have subjective imaginations and seek existential meaning. At times, we attain the transcendent or mystical: spiritual states of being beyond what can be explained by the known physical laws. As purely mechanistic objects, AI programs might, at most, be able to simulate these states, but they would be utterly incapable of truly experiencing them. Or, to put it in the vernacular, they ain’t got soul.

See Evolution News for the complete article.

Comments
Fasteddious@5, I had written my response offline and posted it before seeing yours. We are indeed very closely aligned! Here are a few clarifications:

1. Re: Are computers human? Fully agreed.

2. Re: Do computers think? In general agreement. Once one places scare quotes around "thinking", it's a good bet more attention to definitions would be helpful. Chalmers clarified that objective mental function (the "easy problem") is something we may enable computers to do, but as for conscious experience (the "hard problem"), we currently can't even conceive of what it is, how to explain it, or the necessary and sufficient conditions for it to be experienced. Critically, we do not know whether consciousness is causal or perceptual. As for simulated vs. genuine thought, I would answer your question this way: to simulate mental function is to perform mental function. Saying that a computer that appears to possess AGI (including inference, problem solving, planning, general-purpose language competence, world knowledge, common sense, etc.) would only be simulating those abilities is like saying airplanes don't actually fly - they only simulate flying because they don't flap their wings.

3. Re: embodiment - people who reject the possibility of machines having human-like minds often quote John Searle and his Chinese Room argument. But few realize that Searle is a biological naturalist who believes that the particular physical functions of biological brains are what give rise to conscious thought. There are also folks like Penrose who believe biophysical quantum effects are required. I believe that thought proceeds largely via analogy with perceptions and sensory experiences, so I believe that a sensory connection to the environment is essential for AGI.

4. Re: Free will as an unanswered question: Yes, agreed! (I believe Galen Strawson's argument against the sort of free will that confers ultimate moral responsibility has not been successfully rebutted, though.)

5. I find it very interesting that we agree about so much, yet you believe in a "soul" that you can't explain to "non-believers". What do souls do?

dogdoc
June 29, 2022 at 1:01 PM PDT
Dogdoc @3: almost déjà vu? :-)

Fasteddious
June 29, 2022 at 12:10 PM PDT
Most people picture artificial intelligence as exactly like human intelligence, which then upgrades itself to faster and better problem-solving. The anarchist idea of unpredictability is false. The military would use advanced AIs that operated under strict limits. Once the military had a Terminator, lower-level robots might be released to the general public to carry your groceries or, in extreme cases, to talk with. I would keep mine in a locked closet since it is not a human being. In any case, there would be zero freedom of action. I would be required to pay robot insurance just as I pay for car insurance. There would be liability if it malfunctioned, whether through something I did wrong or some factory defect. Since robots and Terminators have no desires and no goals, they would simply shut down after completing any required tasks. They do not dream and could never be considered alive.

relatd
June 28, 2022 at 4:59 PM PDT
Are AIs human? Obviously not - what a silly question, like asking if a tree is a fish. Mr. Wesley Smith goes on to ask if AIs should be considered persons, or moral agents. These are reasonable questions, to be sure, but in order to have a productive discussion about them, one needs to start by carefully describing the necessary and sufficient conditions for personhood or moral agency. Smith forgets to do that, and instead dives right in with his reasons for answering in the negative.

1. AIs Would Not Be Alive: I suspect Smith believes in a personal god that is a moral agent but is not a living organism. So much for this "reason".

2. AIs Would Not Think: Smith forgets to define what constitutes "thinking", or any of the other terms he uses to exclude computers from thinking beings. This is one of the worst attempts to delineate AI from human intelligence I've ever read. He also spews the anachronistic trope that an AI's "performance depends wholly on its coding". That is ridiculous - it critically depends on its training data, which is often whatever the system finds on the internet. The programmers of modern AI systems are unable to predict or explain what the systems eventually do.

3. AIs Would Not Feel: I agree that embodiment and sensory interaction will be necessary to achieve artificial general (human-like) intelligence. However, once again, this likely poses a problem for Smith: God doesn't have a body either.

4. AIs Would Be Amoral: Smith blithely declares that humans have free will, apparently unaware that this ancient philosophical conundrum has yet to be solved.

5. AIs Would Be Soulless: I don't know what a soul is, so I can't comment on this.

dogdoc
June 28, 2022 at 4:41 PM PDT
While AIs are clearly not human (duh!), and while I agree that they are not and cannot become "persons" in a moral or metaphysical sense, I think the reasons given in this piece are weak:

1. Given that "life" is difficult to define, does something need to be "alive" to be considered a "person", or is that assumption simply question-begging? With the proper add-ons, an AI could be animate, active, and display various effects we attribute to life. The distinction between "organic" and inorganic seems arbitrary in this context. Think of Lieutenant Commander Data (the original, not the commenter here at UD).

2. I tend to agree that an AI is not "thinking", but there are many who believe they could be, just as there are others who claim that humans are not really "conscious". Given enough programming, most of the aspects mentioned could be simulated. That is the fundamental question: how do we distinguish between thought and simulated thought? If you cannot tell the difference, is there one?

3. Certainly AIs could be given physical additions allowing them to "feel" their surroundings much as we do, through remote sensors connected to their physical CPU. They could then be programmed to interpret these feelings as bad or good, painful or pleasant, etc.

4. Yes, whatever "ethics" an AI would display would be programmed into it, but many would argue that humans are the same: our ethics are largely programmed into us during childhood. I agree that an AI would not have "free will" in the same way we do, but the "free will" argument for humans has not been resolved to everyone's satisfaction, despite the obvious problems of believing we do not have it. In any case, AIs can be programmed to make "decisions" based on their own inner states and perhaps some randomizing input.

5. I agree with this one, but given the difficulty of explaining the "soul" to non-believers, this would be a hard sell for many. The piece itself says an AI could be programmed to simulate aspects that seem like the behaviour of a "soul". For example, the AI could be programmed to self-protect; that is, to value its own continued existence and take whatever actions are available to it to counter any threats to that existence (e.g. Skynet?). Even Asimov's laws allow for that.

All this is not to say the author is wrong, but just that he will convince few who don't already agree with him. I expect our agnostic and atheist commenters at UD would argue much the same.

Fasteddious
June 28, 2022 at 1:24 PM PDT
An article the other day explained why we tend to believe AI programs are intelligent: we humans tend to mistake fluent speech for intelligent thought.
Google’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought

When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text…

https://theconversation.com/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099

Then there is this:
Robots With Flawed AI Make Sexist And Racist Decisions, Experiment Shows
https://www.sciencealert.com/robots-with-flawed-ai-make-sexist-racist-and-toxic-decisions-experiment-shows

jerry
June 28, 2022 at 12:09 PM PDT
