Uncommon Descent Serving The Intelligent Design Community

Michael Egnor: Is consciousness the sort of thing that could have evolved?


Researchers Simona Ginsberg and Eva Jablonka have written a book attempting to trace the evolution of consciousness. Neurosurgeon Michael Egnor responds:

In addition to the problem of intentionality, the capacity of human beings to reason and use intellect and will is an insurmountable obstacle for Darwinian theories of the evolution of consciousness. As Aristotle and scientists and philosophers who have followed his thinking have noted for millennia, the human capacity for abstract reasoning is inherently immaterial. No material explanation for the human capacity of reason is even conceivable.

For example, how can human beings contemplate “infinity” using physiological (material) processes in the brain? All material processes are finite and could not thereby account for thoughts about infinity. Nor can material processes explain the perfection inherent in certain mathematical concepts, such as triangularity. All material instantiations of triangularity are imperfect — lines aren’t perfectly straight and angles in actual (material) triangles don’t add up to exactly 180 degrees. Yet our abstract understanding of triangularity is perfect, in the sense that we understand triangularity as involving straight sides and 180 degree sums of angles.

Michael Egnor, “Is consciousness the sort of thing that could have evolved?” at Mind Matters News (June 28, 2022)

The book is Picturing the Mind (MIT Press, 2022). Here’s a free excerpt.

Takehome: Material processes cannot, for example, account for the power to grasp infinity or perfection — which are not material ideas.

Note: A common response among naturalists is to claim that such abstractions, like consciousness itself, are an illusion. Egnor would respond, “If your hypothesis is that your mind is an illusion, then you do not have a hypothesis.” That’s one reason that panpsychism is better tolerated in science than it used to be. The reality is, slowly but surely, sinking in.

You may also wish to read: Did minimal consciousness drive the Cambrian Explosion? Eva Jablonka’s team makes the daring case, repurposing Hungarian chemist Tibor Gánti’s origin of life studies. The researchers point out that life forms that show minimal consciousness have very different brains from each other. Behavior, not brain anatomy, is the signal to look for.

Comments
@ dogdoc I feel your frustration. I might suggest another word than materialism. Physicalism includes anything that impinges on observable reality: particles, fields, gravity, energy. But this is Uncommon Descent. Folks have other agendas besides understanding differing views. Other words to avoid are "qualia" and "consciousness". Both are philosophical conceits (not a typo) with not even a useful definition to argue about.
Fred Hickson
July 5, 2022 at 10:39 PM PDT
He says it better than I could. From an article about a new book by Robert J. Marks, The Non-Computable Human (https://evolutionnews.org/2022/06/the-no...ble-human/):
"Or consider another example (regarding qualia). I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different. To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain. Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic. By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do."
Qualia are the essence of sentient, subjective conscious awareness, which, as the above illustrates, can easily be shown to be non-computable and non-algorithmic. The same reasoning applies to thought, another essential but non-algorithmic property of consciousness related to qualia.
doubter
July 2, 2022 at 9:18 AM PDT
But without further human intervention, it demonstrably had learned to win.
:lol: Yep, demonstrably you have no clue about programming.
Lieutenant Commander Data
July 1, 2022 at 10:11 PM PDT
SA,
For the purposes of this blog and for ID in general, what you said there meets the criterion.
No, nothing I said should be taken to mean I agreed with "Intelligent Design" as it is portrayed on this site. I don't at all.
ID doesn’t go into the kind of intelligence...
That is true, and that is quite the problem. There is an active effort to avoid specifying what the word "intelligence" is even supposed to mean. In AI, the working definition for "intelligent system" has traditionally been this: "A system is artificially intelligent if it is able to perform tasks that, if a human were to perform them, we would consider to require intelligence." By that definition, saying that "whatever caused the human brain to exist" was intelligent simply means that if a human figured out how to do that we would say they were intelligent. It means nothing else. Not one single thing. You can't assume what else this thing could do, or couldn't do, besides somehow causing biological systems. You couldn't assume it could understand general-purpose natural language, or had conscious experiences, or could solve a crossword puzzle. The label "intelligence", applied in this context, means absolutely nothing beyond "was able to produce the phenomena in question".
ME: But even in this trivially simple example, you can see that even though the computer was just following a set of (very simple) instructions, it learned all by itself how to win at tic-tac-toe. SA: You’re making it sound like magic but it’s not.
Seriously? I made it sound like magic? I explained exactly how I built the program! Nothing but a few lines of code, so simple that just about anyone could understand how it works.
It’s actually just a very basic and simple set of instructions.
Well yes, that is just what I have been saying, over and over again, isn't it?!
There’s no learning,...
Of course there is learning!!!! When it began the system was not able to win at tic-tac-toe. But without further human intervention, it demonstrably had learned to win. If not "learn", what would you call it? "Developed the ability"? That just means the same thing.
there’s no thought, there’s no decision-making.
I've told you three or four times now: unless you define your terms, these discussions are useless. Yet you persist, proclaiming that "there's no thought" again without bothering to even try to explain what that word entails in your view. Sorry, this is a waste of time.
dogdoc
July 1, 2022 at 9:41 PM PDT
I let the program play itself
:) When you think that a program "decides" to do something you didn't program/code it to do, maybe it's time for you to stop using dope or combinations of dope with alcohol.
Lieutenant Commander Data
July 1, 2022 at 7:48 PM PDT
DD
But even in this trivially simple example, you can see that even though the computer was just following a set of (very simple) instructions, it learned all by itself how to win at tic-tac-toe.
You're making it sound like magic but it's not. It's actually just a very basic and simple set of instructions. There's no learning, there's no thought, there's no decision-making. It's all just simple IF-THEN statements: IF you WIN, then keep that. IF you LOSE, get rid of it. For each WIN, record the score. Rank the scores. For each game, assign the highest-scoring pattern. There is nothing to it. It's all just giving simple instructions to the computer. It seems like magic because of the magic of electricity, which enables processors to run through those basic logic commands many thousands of times and record results. The most advanced machine learning is doing nothing more than that when you break it down. Perhaps the only more advanced thing it does is "CHOOSE EVENT WITH HIGHEST CORRELATION", so it does some statistics. This is not real learning; it's just a set of instructions that a machine carries out.
Silver Asiatic
July 1, 2022 at 7:36 PM PDT
DD
I have never argued that there is no evidence of intelligent design. Whatever caused things like human brains – and life itself – to exist may be called “intelligent”, depending on what you mean by “intelligent”. But just as computers can perform tasks that we would call “intelligent” if a human did it, but do not have minds anything like human minds, whatever caused human brains to exist did something that we would call intelligent if a human did it, but it may not have a mind anything like a human mind either.
For the purposes of this blog and for ID in general, what you said there meets the criterion. ID doesn't go into the kind of intelligence or its source (although Stephen Meyer is saying something more about that lately, and I don't necessarily agree with him on an ID basis). I mean, at least you are willing to accept that there is evidence and that there is some way you can accept an ID inference. All the other questions regarding who the designer is, etc. are for a different debate beyond here.
Silver Asiatic
July 1, 2022 at 7:28 PM PDT
LCD@69,
It can't be too difficult to explain the internal processes that happen in the computer's "mind" that make you say that a computer "learns".
One of the very first programs I ever wrote was one that played tic-tac-toe. It is a very simple game with an optimum strategy that can be programmed explicitly. There are only eight rules I had to program, and the program could never lose - it could only win or tie.
Then I built another program that played tic-tac-toe - a different kind of program. I didn't program any rules into this one about what moves to make. It started off just taking any open square at random when it was its turn, so it was a terrible tic-tac-toe player, and frequently lost. Then I changed that program, so that for each (random) turn the computer took, it recorded the state of the board (where the Xs and Os were) along with which square it took. When the game was over, if the computer won, it went through all of these recorded moves and incremented a counter for each one. If the computer lost, it would decrement a counter associated with each move. That's all I programmed it to do. I let the program play itself many thousands of times (it only took a minute or so), recording all of the possible board states and the scores for each move made. I never told it where to move in any situation.
At that point, I changed the program again: Instead of picking each move at random, it looked up all the moves in its database matching the current state of the board, and picked the move with the highest score. It began playing perfect tic-tac-toe, never losing, only winning or tying.
This is the simplest example of machine learning I could think of. It bears no similarity to the state-of-the-art deep-learning neural networks being used all over the world, but it would be much too difficult to explain to you how those work. But even in this trivially simple example, you can see that even though the computer was just following a set of (very simple) instructions, it learned all by itself how to win at tic-tac-toe. QED.
dogdoc
July 1, 2022 at 7:02 PM PDT
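For concreteness, here is a minimal sketch in Python of the learning scheme dogdoc describes above. His original language and data structures aren't stated, so the score table, the function names, and the training count below are assumptions; only the logic follows his description: random self-play, a counter per (board state, move) pair incremented after wins and decremented after losses, then greedy play from the learned table.

import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

scores = defaultdict(int)  # (board string, move) -> counter built up by self-play

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board, use_scores):
    moves = [i for i, s in enumerate(board) if s == ' ']
    if use_scores:
        # Look up every legal move for this board state; take the highest score.
        return max(moves, key=lambda m: scores[(''.join(board), m)])
    return random.choice(moves)  # untrained play: any open square at random

def play(trained_side=None):
    board, history, side = [' '] * 9, {'X': [], 'O': []}, 'X'
    while True:
        move = choose(board, use_scores=(side == trained_side))
        history[side].append((''.join(board), move))  # record state + move taken
        board[move] = side
        win = winner(board)
        if win or ' ' not in board:
            return win, history
        side = 'O' if side == 'X' else 'X'

# Training: let the program play itself many thousands of times.
for _ in range(100_000):
    win, history = play()
    if win:
        loser = 'O' if win == 'X' else 'X'
        for key in history[win]:
            scores[key] += 1   # increment the counter for the winner's moves
        for key in history[loser]:
            scores[key] -= 1   # decrement the counter for the loser's moves

# After training, the greedy player (X) should win or tie against random play.
losses = sum(play(trained_side='X')[0] == 'O' for _ in range(1_000))
print('losses as X after training:', losses)

Whether the resulting behavior should be called "learning" is, of course, exactly what the rest of the thread disputes.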
SA, I believe your understanding of the fundamental operation and components of digital computers is lacking, as is your understanding of how machine learning and AI works. But I'm not going to belabor that; I don't think that is the important part of our discussion. Let me instead respond to this:
You’re very emphatic here and say that we’re not close at all.
Yes, absolutely. There are two aspects to the attempt to create an artificial mind. The first is to achieve sapience in the machine, what I would call thinking, and that includes perception, classifying, inference, logic, planning, applying world knowledge, goal setting, and other things. Again, I don't want to debate which aspects of thinking have been (or can be) implemented, but let me emphatically say that much remains to be done in order to approach general human intelligence, and those in AI who believe it's just a matter of scaling up the amount of processing power and the size of the datasets are mistaken. The second aspect of creating an artificial mind is to achieve sentience, or conscious awareness. It is my opinion that consciousness is deeply mysterious. We don't understand it, we can't say what the necessary or sufficient conditions are for consciousness to be experienced, and we don't know if it is causal or perceptual. Nobody knows even how to begin to create something that experiences sentience.
Doesn’t that make you pause for a moment?
I've been thinking about and studying this question for a very, very long time.
We’re not close to creating something that supposedly occurred through some DNA copy errors from an already existing brain-plan?
I don't understand how human brains came to exist.
All of the accumulated power and knowledge of human engineering and technology in our labs are not even close to creating a human mind.
True! Not even close!
Will you insist that there is no evidence of intelligent design present in the origin of the human mind in spite of this fact?
I have never argued that there is no evidence of intelligent design. Whatever caused things like human brains - and life itself - to exist may be called "intelligent", depending on what you mean by "intelligent". But just as computers can perform tasks that we would call "intelligent" if a human did it, but do not have minds anything like human minds, whatever caused human brains to exist did something that we would call intelligent if a human did it, but it may not have a mind anything like a human mind either.
dogdoc
July 1, 2022 at 6:58 PM PDT
"Your computer doesn’t know a binary string from a ham sandwich. "
"YOUR COMPUTER DOESN'T KNOW ANYTHING" AT EVOLUTION NEWS AND VIEWS (JANUARY 23, 2015). . JAN 25 . 2015 Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning. ¶ People know things. Devices like computers and books and Rolodexes and watches and cars and televisions and cell phones don’t know anything. They don’t have minds. They are artifacts — paper and plastic and silicon things designed and manufactured by people — and they provide people with the means to leverage their human knowledge. ¶ Computers (and books and watches and the like) are the means by which people leverage and express knowledge. Computers store and process representations of knowledge. But computers have no knowledge themselves. https://afterall.net/quotes/michael-egnor-on-what-your-computer-doesnt-know/
bornagain77
July 1, 2022 at 6:49 PM PDT
If a computer only executes an instruction (or chain of instructions) set by somebody else, how in the world can it learn?
You may want to update your understanding of computer science
You sound like you have a lot of understanding of computer science but somehow you hide it, right? I wonder why? It can't be too difficult to explain the internal processes that happen in the computer's "mind" that make you say that a computer "learns".
Lieutenant Commander Data
July 1, 2022 at 5:26 PM PDT
DD
Oh, no – I would not say that computers and humans have the same kind of mind! That question may arise if and when we ever manage to build a machine with AGI, but in my view (as opposed to that guy who got fired from Google for saying its AI computer was sentient) we are not close to that at all!
You're very emphatic here and say that we're not close at all. Doesn't that make you pause for a moment? We're not close to creating something that supposedly occurred through some DNA copy errors from an already existing brain-plan? All of the accumulated power and knowledge of human engineering and technology in our labs are not even close to creating a human mind. Will you insist that there is no evidence of intelligent design present in the origin of the human mind in spite of this fact?
Silver Asiatic
July 1, 2022 at 5:05 PM PDT
DD
Computers had an early life as tape-recorders. That is, they captured data and stored it on magnetic tape. Then they played it back. There's no thinking here, it's just a storage and retrieval mechanism. It's completely mindless and lifeless.
Here you seem to be saying that anything that thinks must be comprised of biological tissues. Care to provide an argument that shows that is true? And don’t you believe that things with no body at all are capable of thinking (e.g. gods, angels, dead people’s souls, etc)?
Only living beings can think. Angels and rational souls are living beings. Non-living beings like computers cannot think. Computers are just storage and retrieval. It's like saying a wooden chair learns to fit my body better over time. It takes in the data (my body's impression on the seat) and retains it each time, adjusting itself. Over time, the seat has conformed itself to my body better. That's a learning process. That's what software does. It takes in data, makes adjustments and gives an output. Then it takes in more data, based on that, adjusts again, and outputs. There's no thinking, no real learning. The computer can be programmed to do anything and it doesn't know or care. It does not even have a real memory.
As far as designing a test to show that creativity exists - the design of the test itself is proof of that. It's creating something for a purpose, in order to show the power of creativity. Then there is the interpretation of the test - again, an example of creativity. The goals and purposes of the test are not something the computer can come up with on its own. Humans, however, can do that.
Silver Asiatic
July 1, 2022 at 4:55 PM PDT
LCD@65, Well, you've put up quite a good argument, I must admit. You've made a lot of cogent points and supported them with an abundance of clear evidence. Still, after a great deal of thought, I believe I've come up with a rebuttal that may convince you my position is correct: Yup, a computer can be trained, can learn, can think, can "choose". A computer does not only execute instructions. AI stories are not sci-fi stories for idiots. If I haven't convinced you, let's just agree to disagree, shall we?
dogdoc
July 1, 2022 at 4:55 PM PDT
A computer system can be trained to recognize chairs.
Nope, a computer can't be trained, can't learn, can't think, can't "choose". A computer only executes instructions. AI stories are sci-fi stories for idiots. But you can debunk this on the spot. Please do it. :lol:
Lieutenant Commander Data
July 1, 2022 at 3:34 PM PDT
LCD@62,
the learning software
Except no such thing as learning software exists. Adding more words/pictures/patterns to a database doesn't mean that the database learns something.
You may want to update your understanding of computer science :-)
dogdoc
July 1, 2022 at 2:52 PM PDT
SA,
Yes, exactly. Even a 6 year old child would realize the problem with that.
A computer system can be trained to recognize chairs. Or tables, or apartment houses, or tumors, or stress fractures, or... You don't seem to be articulating what you're actually trying to get at here.
ME: Please tell me the salient, empirically distinguishable difference between a computer learning to recognize an apartment building and a computer doing the same thing. SA: I think you meant “human” in there for one of the computers.
Oops yes, sorry, thanks.
But you can give a simple example, that doesn’t say much at all.
You mean the example of recognizing chairs? I can absolutely give much, much more complex examples of the recognition abilities of deep learning systems, but I'm not sure that is what you're asking for.
What difference is there with a computer learning that 2+2=4 and with a human?
The answer will vary depending on what level of abstraction you are interested in. At the level of physical implementation, VLSI chips are completely different than brains. At the level of mathematical logic, the answer is actually a bit complicated. Calculators have circuits that are specially designed to perform mathematical operations; the hardware directly supports binary logic, which is all that is needed to build up more complex math functions. But the edge of AI research right now is trying to get learning systems to learn formal logic and math using the same basic methods that enable computers to recognize classes of objects. But let me answer a bit differently: We know exactly how a calculator computes 2+2. We know generally (but not exactly) how deep learning systems learn to recognize chairs and - recently - compute 2+2. But we do not know how humans learn and perform mathematical operations.
If none, then they have the same kind of mind?
Oh, no - I would not say that computers and humans have the same kind of mind! That question may arise if and when we ever manage to build a machine with AGI, but in my view (as opposed to that guy who got fired from Google for saying its AI computer was sentient) we are not close to that at all!
Computers do not know what truth or falsehood is.
I don't think this is a very meaningful thing to say. You're probably packing a lot into the word "know" here that you haven't made explicit. It is trivial for a computer to be trained to accurately tell you whether your statements ("This is a chair", "This is a table", etc.) are true or false. But that isn't what you mean.
They actually don’t know anything.
This statement is provocative but meaningless until you lay out exactly what you mean by "know". I suspect you mean "are consciously aware of" or something like that. Clearly computer systems represent, retrieve, manipulate, and output knowledge. It makes perfect sense to say things like "The autopilot doesn't know how to adjust the trim flaps in a crosswind" or "The question answering system knew all the president's names in order" and so on. These discussions deteriorate into nonsense unless you start making your definitions explicit!
They are mindless.
Definitions please! Again I suspect you mean they are not conscious, and probably other things too, but we can't discuss them unless you carefully explicate what you are talking about.
It’s like saying that a DVD player knows the works of Shakespeare because I can see them on my screen.
Nope, nothing like that.
ME: Please tell me the salient, empirically distinguishable difference between a computer learning to recognize an apartment building and a computer doing the same thing. SA: Perhaps a better example is the difference between a computer learning how to drive a car and a 16-year-old kid.
Okay.
Driving requires so much intuition and creativity – which is innate in human beings and not transferable to computers – that many say that self-driving cars are actually impossible.
Please describe an empirical test that reveals whether or not something has intuition, or creativity.
It [a scarecrow] is a good comparison because we imagine that a computer is “thinking” but a computer is the same sort of thing as a stick with some straw and some old clothes on it. It’s just non-living matter.
Here you seem to be saying that anything that thinks must be comprised of biological tissues. Care to provide an argument that shows that is true? And don't you believe that things with no body at all are capable of thinking (e.g. gods, angels, dead people's souls, etc)?
dogdoc
July 1, 2022 at 2:46 PM PDT
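As a concrete illustration of dogdoc's point above that a calculator computes 2+2 from hardware-level binary logic: the small Python sketch below (mine, not from the thread; the function names are invented for illustration) builds integer addition entirely out of AND, OR, and XOR operations, the way a ripple-carry adder circuit does.

def full_adder(a, b, carry_in):
    # A one-bit full adder built only from XOR, AND, and OR gates.
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

def add(x, y, bits=8):
    # Chain full adders bit by bit, as a ripple-carry circuit does in hardware.
    result, carry = 0, 0
    for i in range(bits):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(2, 2))  # prints 4, computed purely from gate-level logic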
the learning software
Except no such thing as learning software exists. Adding more words/pictures/patterns to a database doesn't mean that the database learns something.
Lieutenant Commander Data
July 1, 2022 at 11:02 AM PDT
Please tell me the salient, empirically distinguishable difference between a computer learning to recognize an apartment building and a computer doing the same thing.
Perhaps a better example is the difference between a computer learning how to drive a car and a 16-year-old kid. Driving requires so much intuition and creativity - which is innate in human beings and not transferable to computers - that many say that self-driving cars are actually impossible.
Silver Asiatic
July 1, 2022 at 11:02 AM PDT
DD
Please tell me the salient, empirically distinguishable difference between a computer learning to recognize an apartment building and a computer doing the same thing.
I think you meant "human" in there for one of the computers. But you can give a simple example, that doesn't say much at all. What difference is there with a computer learning that 2+2=4 and with a human? If none, then they have the same kind of mind? Computers do not know what truth or falsehood is. They actually don't know anything. They are mindless. It's like saying that a DVD player knows the works of Shakespeare because I can see them on my screen.
Silver Asiatic
July 1, 2022 at 10:59 AM PDT
Andrew
There would be a problem with convincing a human person that a table is suddenly now a chair. That’s where real recognition would happen. No such recognition in a computer.
Yes, exactly. Even a 6-year-old child would realize the problem with that.
Silver Asiatic
July 1, 2022 at 10:56 AM PDT
"We could program the computer to accept false statements and violate logic. But we can’t consistently do that with humans, no matter how much we may try to brainwash them." SA, I was headed here also. There would be a problem with convincing a human person that a table is suddenly now a chair. That's where real recognition would happen. No such recognition in a computer. Andrewasauber
July 1, 2022 at 10:54 AM PDT
SA,
We could program the computer to accept false statements and violate logic.
Again, I've been talking about learning systems rather than symbolic programming systems to make things a bit more clear. And obviously, learning systems that have trained on huge portions of the internet have learned to make all sorts of false and illogical conclusions (without being programmed to do so) - that's actually a big problem! But just as obviously, people who have read illogical, false conspiracy theories on the internet have had the exact same problem!
Not so with computers. They can never know that they’re illogical and false.
Actually, in formal systems (like math and formal logic), computers (symbolically programmed ones) are quite excellent at detecting logic errors! They never make mistakes!
dogdoc
July 1, 2022 at 10:52 AM PDT
SA,
When we say the “computer has learned” – the verb ‘to learn’ is different for a computer than for a human.
You would need to provide the different specific definitions for these different senses of the verb "to learn" in order to make this point.
We speak of “machine learning” but that’s a statistical process. There’s no intuition and no creativity – all necessary for true (as human) learning – in a computer.
Please tell me the salient, empirically distinguishable difference between a computer learning to recognize an apartment building and a computer doing the same thing.
dogdoc
July 1, 2022 at 10:48 AM PDT
I think it's more than the computer calling chairs tables. We could program the computer to accept false statements and violate logic. But we can't consistently do that with humans, no matter how much we may try to brainwash them. We have an aptitude and orientation to the truth and we always know it. Even if we're miseducated. Not so with computers. They can never know that they're illogical and false.
Silver Asiatic
July 1, 2022 at 10:47 AM PDT
LCD
Yep, and a scarecrow "learns" to scare crows.
It's a good comparison because we imagine that a computer is "thinking" but a computer is the same sort of thing as a stick with some straw and some old clothes on it. It's just non-living matter.
Silver Asiatic
July 1, 2022 at 10:44 AM PDT
Andrew@50, You are correct that if you labelled the chairs as "tables" the computer would label a new chair as a table. (If I were teaching a human being a new language and told them that chairs were called "tables" they would also call chairs "tables", of course.) You are also correct that the system I described would have no conception of how chairs are used, etc. Nor would there be any reason at all to suspect the computer was conscious of what it was doing - or anything else! Still, it seems to me that the ability to distinguish chairs from non-chairs is rightly called "the ability to recognize chairs". And while you refer to this as "pattern matching", it is actually much more involved when modern deep learning systems learn to recognize abstract things like chairs (or dogs or apartment buildings or anything else). No programmer is able to describe the "pattern" that these systems use to recognize these objects; rather, the system learns by itself to create intermediate-level abstractions (various shapes and relationships between forms) that enable it to categorize these abstract things accurately.
dogdoc
July 1, 2022 at 10:44 AM PDT
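For readers who want to see what dogdoc's "intermediate-level abstractions" look like in code, here is a minimal sketch of a small convolutional classifier, assuming PyTorch (this is illustrative only, not the deep learning systems dogdoc refers to). Nothing in the code describes what a chair looks like; the convolutional layers develop their own intermediate features during training.

import torch
import torch.nn as nn

# Two convolutional stages followed by a linear classifier head. During
# training, the early layers tend to develop edge-like detectors and the
# later layers combinations of shapes: the intermediate abstractions
# described above. None of that structure is written by the programmer.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: "chair" vs "not chair"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in batch of 64x64 RGB images; real training
# would loop over a labeled dataset of chair and non-chair photographs.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()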
When we say the "computer has learned" - the verb 'to learn' is different for a computer than for a human. We speak of "machine learning" but that's a statistical process. There's no intuition and no creativity - all necessary for true (as human) learning - in a computer.
Silver Asiatic
July 1, 2022 at 10:41 AM PDT
DD
To me, the fact that we can have a description in our heads of what a chair is, even though we are not thinking of a perceived instance that matches that description, says nothing at all about whether human thought is ontologically distinct from the rest of the world.
Agreed. It's just one component in a larger argument.
There is no connection to libertarian free will, to the possibility of conscious experience without physical embodiment, or any other aspect of mind/body dualism that people are typically motivated to defend.
There's an indirect connection to those aspects. It's really just establishing a pathway of thought. It's an attempt to move from a monism of whatever kind to an acceptance of duality in nature. You mentioned mind/body dualism but there are other necessary parts to that. If we have universals (abstract concepts) that are distinct from particulars, we have a basis for material/immaterial dualism. This refutes monism.
We can start with the Law of Identity. Already, our rational process requires a dualism: "This thing is one thing which is not all other things". That's the dualistic nature of rational thought. Monism would have to deny that. Additionally, people deny the dualism of truth vs falsehood. However, we align truth with "what is real". That's how we validate the idea. So, we have reality vs illusion - or truth vs falsehood. All of these are dualisms.
Again, this is just breaking down monism - either "all mind" monism or "all material", it doesn't matter. In an absolute monist system, you can't make distinctions. Everything is one. But that violates the Law of Identity and is thus irrational. A person could argue for absurdity by saying that logic does not correspond to reality, that truth and falsehood are equal, and that the Law of Non-Contradiction does not hold. If we said "but you're contradicting yourself" they can say "so what?" All of that's fine except nobody can communicate with that person, and the person has affirmed (which is a statement of truth) that rational thought has no value, etc. Basically that's just insanity. So instead, we affirm that rational distinctions are based on reality. Therefore there really are two apples, and when we have two more, we count them to be four apples. They are real.
I can accept that a person may reject this (as one IDist here does very strongly) and insist that all is mind and there is no physical reality (given quantum indeterminacy, etc.). The biggest logical problem I've found with that, however, is why all of humanity has intuitively felt that there is an external, material reality, and even the science we've used to discover quantum effects is based on that same ontology. In other words, there does not seem to be a good reason to reject our intuition about life and reality, especially considering that even if we thought that everything is mind, we'd still have to live and think and speak as if there is a reality outside of us and physical objects really exist.
Silver Asiatic
July 1, 2022 at 10:38 AM PDT
"Please tell me why you think it is erroneous to say that this computer system has learned to recognize chairs." Because it doesn't recognize anything. It matches patterns. You could tell it chairs are tables and it would match chair input to table patterns. Andrewasauber
July 1, 2022 at 10:34 AM PDT