Uncommon Descent Serving The Intelligent Design Community

Bob Marks Knocks it Out of the Park on AI


This is a great discussion about whether AI (1) is currently sentient and (2) can, in principle, be sentient. All three panelists agree that it is not currently sentient. It is 2 to 1 on whether it can, in principle, be sentient. As you might expect, how the materialists reach their conclusion follows more from metaphysical commitments than from evidence. Max and Melanie (the materialists) see no reason why, in principle, computers cannot in the future be conscious. Why not, they ask; we are all just material stuff. And if you accept their metaphysical premises, that is an unanswerable question. Max especially is committed to this view and thinks we should be more humble. He is so blinkered by his commitment to materialism that it does not seem to occur to him that there could be any reason to think machines cannot be conscious other than arrogance.

Bob is a dualist and reaches the opposite conclusion, and he gives some excellent reasons to question materialist premises. I commend this excellent discussion to you.

BTW, Bob Marks really knows his stuff, and he presents his arguments in a very winsome fashion. We should all follow his example.

Comments
PM1 @22 I appreciate your follow-up. There is a lot there. Some initial comments. What is Searle’s argument about exactly? He wrote:
“The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”
So, he intended his argument to be specifically about understanding. Perhaps it can be framed like this: (1) Syntax does not lead to understanding. (2) A computer program is purely syntactical. (3) So, no computer acts with understanding. In line with your reasoning: (1) Syntax does not lead to understanding. (2) Neural connections are purely syntactical. (3) So, no brain acts with understanding. It seems to me that Searle is the kind of materialist who would like to reject (3). As an aside, it is well known that Rosenberg has no problem with it; chapter 8 of his book is titled "The brain does everything without thinking about anything at all."
Origenes
February 28, 2023, 07:05 AM PDT
@10
Some thoughts: Searle’s CR argument shows that the computer does not understand anything, setting aside the issue of whether there is something available to engage in understanding. However, in my view, the same can be said of the brain. Arguably, the brain also manipulates symbols without understanding. Perhaps the Chinese Room argument applies not only to computers.
YES!!!! You have nicely identified what is really incoherent about Searle's position. The Chinese Room thought-experiment is supposed to be an intuition-pump for the following argument:
(1) Syntax is not sufficient for semantics. (2) A computer program is purely syntactical. (3) A mind is aware of semantics. (4) So, no computer program could ever be a mind.
What you've correctly identified is that one could also run the following argument:
(1) Syntax is not sufficient for semantics. (2') Neural connections are purely syntactical. (3) A mind is aware of semantics. (4') So, no brain could ever be a mind.
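The shared schema behind these two arguments can be made explicit. Below is a toy formalization in Lean; the predicate names are my own labels, and premise (1) is read in the usual way as "a purely syntactic system has no semantic content". Premise (2) or (2') then classifies programs or brains as purely syntactic, yielding (4) or (4') respectively:

```lean
-- Lean 4 sketch of the argument schema common to (1)-(4) and (1)-(4').
-- Predicate names are illustrative labels, not Searle's own notation.
variable {α : Type} (PurelySyntactic Semantic Mind : α → Prop)

/-- From (1) "purely syntactic systems have no semantic content" and
    (3) "minds have semantic content", nothing purely syntactic is a
    mind. Premise (2)/(2') classifies programs/brains as purely
    syntactic, giving conclusion (4)/(4'). -/
theorem purely_syntactic_not_mind
    (h1 : ∀ x, PurelySyntactic x → ¬ Semantic x)
    (h3 : ∀ x, Mind x → Semantic x) :
    ∀ x, PurelySyntactic x → ¬ Mind x :=
  fun x hsyn hmind => h1 x hsyn (h3 x hmind)
```

The formal point is exactly the one at issue: the derivation is indifferent to whether the purely syntactic system is made of silicon or neurons.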
The problem of course is that Searle himself would want to reject (4') -- he's a hardcore materialist. Yet he has no choice but to accept the premises. When pressed on this, he basically just says that brains are able to generate semantic content by virtue of their causal powers, and that it's the job of neuroscientists to tell us how this happens. But if causal powers are the special sauce, then the same could be true of computers, too: there's nothing in Searle's argument that prevents someone from saying that while a program qua written code is purely syntactical and has no semantic content, it becomes mysteriously endowed with semantic content when the program is actually run on a machine -- and that it's the job of computer scientists to tell us how that happens.

Given that objection, Searle's next move is to say that computers have only derived intentionality -- they have the causal powers necessary for generating semantic content because we have built them for that purpose. By contrast, brains have original intentionality -- they have the causal powers necessary for generating semantic content because that's how they evolved, and it's the job of evolutionary biologists to tell us how that trick was pulled off.

Needless to say, none of this is very convincing to anyone. John McDowell, who is (in my opinion) a thousand times the philosopher Searle wishes he were, once quipped that Searle is unusual among contemporary neo-Cartesians in thinking that the res cogitans can be identical with the brain and yet retain its extraordinary powers. I concur.
PyrrhoManiac1
February 28, 2023, 06:14 AM PDT
How does one engage those who have their fingers firmly stuck in their ears while claiming victory and shouting "La-la-la-I-can't-hear-you"?
Origenes
February 28, 2023, 05:12 AM PDT
This behavior, over and over and over again, is not the sign of a person who doesn’t “pretend to know”
He knows. He is a man in his mid-70s who was religiously educated all his school life and graduated from a prestigious university, and all he can do is be inane. And he is supercilious at the same time. Not unlike all the anti-ID commenters here. Best ignored. Aside: each anti-ID commenter in their own way validates ID. They have never contributed anything positive. They definitely know all the arguments and understand them… and probably agree with them, because they never refute them. This behavior is the best support for ID there is.
jerry
February 28, 2023, 05:07 AM PDT
Just a quick apology to Upright Biped for encouraging him to set up his website "complexity café" while expecting it (and his semiotic hypothesis) to end in obscurity.
Alan Fox
February 27, 2023, 11:02 PM PDT
. Not difficult at all, Chuck. This is an ID blog. I’ve tried several times to engage you in a discussion of ID, and every time you’ve ended up having some smug or smart-assed thing to say and then you run away from the discussion. You do it every time, over and over. Didn’t you once complain — as the entire centerpiece of your response — that I used the word “quiescent” (inactive) to describe the memory within the gene system? Yes, that was you. Of course, I used the word because that is the word Von Neumann himself used to describe it; plus the fact that Crick, Watson, Brenner, Hoagland, Zamecnik, and Nirenberg (and a thousand others) demonstrated his predictions to be true. I used the word because it is a critical detail that requires a specific type of organization in order to function. Details, Chuck, that is what you run from. This behavior, over and over and over again, is not the sign of a person who doesn’t “pretend to know”. It’s the sign of someone who wants to protect their beliefs from the empirical details of that belief — the details they don’t want to deal with. You’ve been here a good while, and you are clearly aware of the design inference I’ve described. You know it is valid. I can recite it to you front to back, going through the history of every detail along the way. I can then ask you if the design inference is scientifically valid (edit: this is a question not about your beliefs, but about the correct status of the argument). There will be no errors of fact, of experimental result, or of logic. Still, you will not be able to acknowledge it. You will then be the very definition of someone who “pretends to know”. It is the defense that is required from you. That is how you protect yourself.
Upright BiPed
February 27, 2023, 06:05 PM PDT
Knock yourself out…….
chuckdarwin
February 27, 2023, 04:14 PM PDT
. Yes, Chuck, you do. Would you like me to prove it to you?
Upright BiPed
February 27, 2023, 04:08 PM PDT
Upright, you mean I don't pretend to have the answers......
chuckdarwin
February 27, 2023, 03:34 PM PDT
. #11 And with the entire body of scientific and philosophical knowledge behind you - every drop of it - you have no idea whatsoever how that could happen. #12 It's hard to tell what scares Chuck more: that he is designed by an unknown intelligence, or that he’ll never have a scientifically-meaningful answer to those ghastly IDers. #13 You know the design inference at the OoL is valid. You have snappy comebacks, but no answers.
Upright BiPed
February 27, 2023, 03:31 PM PDT
Thanks for the link, Origenes. I'll bookmark it and get to it.
Alan Fox
February 27, 2023, 03:03 PM PDT
It's hard to tell what scares IDers more, the fact that they evolved from lower life forms or that they aren't as smart as their machines....
chuckdarwin
February 27, 2023, 02:49 PM PDT
I don’t think that many people are arguing that the AI we have now, or will have in the near future, can be classified as conscious, as we define it. But I also don’t see any valid reason why attaining this is impossible.
Ford Prefect
February 27, 2023, 02:48 PM PDT
~Searle's Chinese Room~
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. - - - - Searle goes on to say, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”
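The purely rule-governed manipulation the room performs can be sketched in a few lines of code. This is only a toy illustration under obvious assumptions: the two rulebook entries below are invented, and a rulebook good enough to pass the Turing Test would be astronomically larger. The point survives the simplification: the program relates symbols to symbols without any grasp of what they mean.

```python
# Toy "Chinese Room": a rulebook maps input symbol strings to output
# symbol strings. The entries are invented for illustration only.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates, understanding nothing.

    The fallback reply means "Please say that again".
    """
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # the room answers correctly without comprehension
```

Swapping in a bigger table (or a statistical model) would change the room's fluency, not the fact that it operates on uninterpreted symbols.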
Some thoughts: Searle's CR argument shows that the computer does not understand anything, setting aside the issue of whether there is something available to engage in understanding. However, in my view, the same can be said of the brain. Arguably, the brain also manipulates symbols without understanding. Perhaps the Chinese Room argument applies not only to computers.
Origenes
February 27, 2023, 02:02 PM PDT
Alan Fox @7
Ori: For a materialist, what exactly is the difference between a computer and a human being?
AF: One is intelligently designed, the other isn’t.
LOL
Could you remind me which thread it was where I offered to poll some scientists?
This thread: https://uncommondescent.com/origin-of-life/paul-davies-on-the-gap-between-life-and-non-life/
Origenes
February 27, 2023, 01:41 PM PDT
@6
For a materialist, what exactly is the difference between a computer and a human being?
That's one way of putting the question. But I prefer to work my way from more specific questions to more general ones. Rather than ask "given materialism, what follows for the differences between computers and humans?", I would prefer to ask "what is our best account of the relevant differences between humans and computers?", and only then ask whether that account -- whatever it turns out to be -- is compatible with materialism. Even the question "what is our best account of the difference between computers and humans?" would need to be vastly refined to be made useful. (I'm much heavier than my laptop and don't need to be plugged in, but is that the right kind of relevant difference?) I don't think there's any single correct way of asking the relevant, interesting questions here -- because there are different ways of evaluating what counts as relevant and interesting. I can say that, as I see it, writing as someone who researches philosophy of mind and philosophy of cognitive science and who is just now getting into philosophy of AI, I'm interested in the question:
Do our currently best theories of biological cognition help explain why we have so far failed to achieve AGI?
but I accept that most people don't think that's even the right question to begin with!
PyrrhoManiac1
February 27, 2023, 01:26 PM PDT
For a materialist, what exactly is the difference between a computer and a human being?
One is intelligently designed, the other isn't. ETA @ Origenes: Could you remind me which thread it was where I offered to poll some scientists?
Alan Fox
February 27, 2023, 12:26 PM PDT
For a materialist, what exactly is the difference between a computer and a human being?
Origenes
February 27, 2023, 10:45 AM PDT
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
— 2001: A Space Odyssey
chuckdarwin
February 26, 2023, 04:50 PM PDT
Artificial Intelligence does not exist. AI cannot think like a human being. That's the way most people think about AI. With all due respect to our atheist friends, AI promises the kind of "hands off" independence they desire. 'No one controls me, so no one can control an AI.' And "sentience" means that an AI can become a person, someday. Completely false. An AI with the ability to think like a human being would be a simulation ONLY. It would not be alive. It would not have an individual identity. It would have no desires, no goals, and no purpose except what human beings program into it. Let's go into the near future: Meet Bob, my humanoid, partial-AI housekeeper. He cleans my house, washes my clothes, and prepares my food. At the end of the work day, when he's done, he shuts down. He just stands there until his built-in programming reactivates him for another day of work. Bob has no identity. When I bought him, I gave him a name. I selected his work programs. He has no desires. The end.
relatd
February 26, 2023, 02:39 PM PDT
@2
Tegmark even called it “carbon chauvinism” for us to think that we are significantly different from silicon computers.
By "carbon chauvinism" Tegmark meant the assumption that computers could not possibly become sentient. He certainly did not mean or say that there are no significant differences between us and silicon-based computers. In any event, I was only indicating my agreement with that one specific argument that Tegmark made, not with the rest of what he said or his general viewpoint. I'm much more in agreement with Erik Larson as to why AGI is basically impossible. Or, perhaps better put, AGI is like faster-than-light (FTL) travel: we have no idea what it would take to develop a theory that would demonstrate how it is possible. We can speculate all we want, but we have no idea how to get from current science to a science which shows us how FTL is possible. AGI is to computer science as FTL is to physics.
PyrrhoManiac1
February 26, 2023, 12:42 PM PDT
As to: "I liked Tegmark’s point that we need to be very careful about which arguments we use in claiming that AIs lack (take your pick) consciousness, sentience, understanding, reasoning. There is a long and ugly history of using those arguments to deny moral standing to non-Europeans, to women and children, and to non-human animals." At one point, Tegmark even called it "carbon chauvinism" for us to think that we are significantly different from silicon computers. :) I guess that PM1, Tegmark, and others who actually believe their computers are now conscious will start giving their personal computers proper burials when they quit working? :) But before atheists all start running around mourning the loss of their computers, I guess it would be good to point out a few problems with their thinking. The first problem with Darwinian atheists appealing to objective morality (in this case, appealing to the objective moral claim that human souls are created equal before God) to try to make their case that we should not 'discriminate' against computers and say that computers are not truly conscious, is that God is the source of objective morality.
Premise 1: If God does not exist, then objective moral values and duties do not exist. Premise 2: Objective moral values and duties do exist. Conclusion: Therefore, God exists. The Moral Argument – drcraigvideos - video https://youtu.be/OxiAikEk2vU?t=276
Without God, the Darwinian atheist simply had no basis in which to ground objective morality. i.e. no basis in which to differentiate good from evil.
"In a universe of blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but blind, pitiless indifference. DNA neither knows nor cares. DNA just is. And we dance to its music.” - Richard Dawkins, River Out of Eden: A Darwinian View of Life - pg. 132 "Let me summarize my views on what modern evolutionary biology tells us loud and clear — and these are basically Darwin’s views. There are no gods, no purposes, and no goal-directed forces of any kind. There is no life after death. When I die, I am absolutely certain that I am going to be dead. That’s the end of me. There is no ultimate foundation for ethics, no ultimate meaning in life, and no free will for humans, either. What an unintelligible idea." - William Provine - the late Professor of Biological Sciences at Cornell University - quote as stated in a 1994 debate with Phil Johnson at Stanford University:
The second problem for atheists is that computers, although they have software composed of immaterial information, have, in and of themselves, no 'physically transcendent' component to their being; i.e., they have no immaterial souls that are capable of living past the 'death' of their hardware!
Can a Computer Think? - Michael Egnor - March 31, 2011 Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.,,, It's such irony that the first personal computer was an Apple. http://www.evolutionnews.org/2011/03/failing_the_turing_test045141.html
Much less do computers have immaterial conscious minds to truly 'know', and/or 'understand', anything.
"Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning. People know things. Devices like computers and books and Rolodexes and watches and cars and televisions and cell phones don’t know anything. They don’t have minds. They are artifacts — paper and plastic and silicon things designed and manufactured by people — and they provide people with the means to leverage their human knowledge. Computers (and books and watches and the like) are the means by which people leverage and express knowledge. Computers store and process representations of knowledge. But computers have no knowledge themselves." - Michael Egnor - 2015
An atheist might try to claim that, "So what if I have no actual scientific evidence that computers have souls and/or an immaterial conscious minds? The Christian Theist also has no scientific evidence for his claim that humans have souls and/or immaterial conscious minds that are capable of living past the death of their material body." On that count the atheist would be wrong. Advances in quantum biology have given us scientific evidence that we do indeed possess a transcendent component to our being, i.e. a 'soul', that is, in principle, capable of living past the death of our material bodies,
Oct. 2022 - So since Darwinian Atheists, as a foundational presupposition of their materialistic philosophy, (and not from any compelling scientific evidence mind you), deny the existence of souls/minds, (and since the materialist’s denial of souls/minds, (and God), has led (via atheistic tyrants) to so much catastrophic disaster on human societies in the 20th century), then it is VERY important to ‘scientifically’ establish the existence of these ‘souls’ that are of incalculable worth, and that are equal, before God. https://uncommondescent.com/off-topic/what-must-we-do-when-the-foundations-are-being-destroyed/#comment-768496
Verse:
John 11:25 Jesus said to her, “I am the resurrection and the life. The one who believes in me will live, even though they die; Luke 23:42-43 Then he said, “Jesus, remember me when You come into Your kingdom!” And Jesus said to him, “Truly I tell you, today you will be with Me in Paradise.”
bornagain77
February 26, 2023, 12:07 PM PDT
Interesting debate. I liked Tegmark's point that we need to be very careful about which arguments we use in claiming that AIs lack (take your pick) consciousness, sentience, understanding, reasoning. There is a long and ugly history of using those arguments to deny moral standing to non-Europeans, to women and children, and to non-human animals. I think Mitchell could have done a better job explaining her views about why AIs cannot understand anything. Her argument in Artificial Intelligence: A Guide for Thinking Humans is that AIs cannot cross what she calls "the barrier of meaning": they cannot understand what they are saying or doing. Though she considers it possible that an AI could understand the meaning of what it says and does, it would need to be fully embodied in order to interact with us. Needless to say, it would also need to be fully autonomous and capable of absorbing cultural information. It would need to be a child, not a child's toy. Mitchell's criticisms of current-day AI hype correspond nicely, I think, with the points that Larson raises in his The Myth of Artificial Intelligence. Larson draws upon Peirce's distinction between deductive, inductive, and abductive reasoning to argue that the main reason why AI cannot do what we do is that we have, at present, no way of automating abductive reasoning, because we lack a theory of abductive reasoning. I think that Larson is very slightly mistaken about this: we do have the basic foundations of a theory of abductive reasoning, and that theory shows why abductive reasoning cannot be automated. Cognitive science has finally matured into becoming a theory of abductive inference. (I might be in a minority in thinking about cognitive science this way.) And what we are learning from cognitive science is that abductive inference cannot be decomposed into an algorithmic process, because organisms are not machines.
As I see it, the great danger of AIs is not that they will somehow become sentient or rational -- let alone "superintelligent" in the way that Nick Bostrom or Yuval Harari predict -- but that we will allow ourselves to be fooled by high-tech versions of Clever Hans.
PyrrhoManiac1
February 26, 2023, 07:39 AM PDT