Uncommon Descent Serving The Intelligent Design Community

Minds, brains and computing vs contemplation

One of the underlying debates linked to the design issue is the notorious mind-brain gap challenge.

It keeps coming up, and on both sides of the ID debate.

I would therefore like to spark a bit of discussion with a clip from a Scott Aaronson Physics course lecture:

>> . . . If we interpret the Church-Turing Thesis as a claim about physical reality, then it should encompass everything in that reality, including the goopy neural nets between your respective ears. This leads us, of course, straight into the cratered intellectual battlefield that I promised to lead you into.

As a historical remark, it’s interesting that the possibility of thinking machines isn’t something that occurred to people gradually, after they’d already been using computers for decades. Instead it occurred to them immediately, the minute they started talking about computers themselves. People like Leibniz and Babbage and Lovelace and Turing and von Neumann understood from the beginning that a computer wouldn’t just be another steam engine or toaster — that, because of the property of universality (whether or not they called it that), it’s difficult even to talk about computers without also talking about ourselves.

So, I asked you to read Turing’s second famous paper, Computing Machinery and Intelligence. Reactions?

What’s the main idea of this paper? As I read it, it’s a plea against meat chauvinism. Sure, Turing makes some scientific arguments, some mathematical arguments, some epistemological arguments. But beneath everything else is a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, then of course we could say the computer wasn’t “really” thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren’t really thinking, that they merely act as if they’re thinking. So what is it that entitles us to go through such intellectual acrobatics in the one case but not the other?

If you’ll allow me to editorialize (as if I ever do otherwise…), this moral question, this question of double standards, is really where Searle, Penrose, and every other “strong AI skeptic” comes up empty for me. One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they’re also arguments against the possibility of thinking brains!

So for example: one popular argument is that, if a computer appears to be intelligent, that’s merely a reflection of the intelligence of the humans who programmed it. But what if humans’ intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly. The “qualia” and “aboutness” of other people is simply taken for granted. It’s only the qualia of machines that’s ever in question.

But perhaps a skeptic could retort: I believe other people think because I know I think, and other people look sort of similar to me — they’ve also got five fingers, hair in their armpits, etc. But a robot looks different — it’s made of metal, it’s got an antenna, it lumbers across the room, etc. So even if the robot acts like it’s thinking, who knows? But if I accept this argument, why not go further? Why can’t I say, I accept that white people think, but those blacks and Asians, who knows about them? They look too dissimilar from me . . . >>

I think we can safely lay aside the attempted reduction-to-racism argument as unworthy.

What I find interesting here is the ease with which the challenge of finding the many islands of function in vast configuration spaces is brushed aside by simply imposing an evolutionary materialist assumption. But the implied search challenge is far more directly evident as sound than is an ideologically constrained, materialist reconstruction of a model past that we did not and cannot observe. So, just maybe, we have here a reason to reject the notion of writing the relevant algorithms through incremental, blind chance circumstances and variations filtered through culling on differential reproductive success.
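To give a feel for the scale of that search challenge, here is a minimal back-of-envelope sketch. The 500-bit threshold and the cosmological figures are illustrative assumptions of mine, in line with common FSCO/I discussion, not measurements:

```python
import math

# Back-of-envelope scale of the "islands of function" search challenge.
# Assumptions (illustrative, not measurements): a 500-bit functional
# specification; ~10^80 atoms in the observed cosmos; ~10^45 state changes
# per atom per second (a generous, Planck-ish rate); ~10^17 seconds of
# cosmic history.

CONFIG_BITS = 500
configs = 2 ** CONFIG_BITS                 # ~3.3 * 10^150 configurations

ATOMS, RATE, SECONDS = 10**80, 10**45, 10**17
max_searches = ATOMS * RATE * SECONDS      # upper bound: 10^142 search events

print(f"configurations      ~ 10^{len(str(configs)) - 1}")
print(f"max search events   ~ 10^{int(math.log10(max_searches))}")
print(f"fraction searchable ~ {max_searches / configs:.1e}")   # ~3e-9
```

On these generous assumptions, even an idealised cosmos-wide search samples only a few parts per billion of the configuration space, which is the force of the islands-of-function point.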

My second problem is the easy equation of brains and minds that seems to pervade the frame of thought.

Notice: meat chauvinism.

Dismissive, loaded rhetoric, comfortable in the dominance of a priori evolutionary materialism.

But the relevant division — never mind the obscurant Neil deGrasse Tysons with their dismissal of philosophy (which simply traps them blindly in ideologies) — is not between meat and silicon, etc., but between dust and spirit. Not for nothing does Genesis speak of God forming man's body from the dust of the ground and then breathing in the celestial fire of spirit, leading to a complex unity, the embodied living soul/self.

However we choose to read the genre, the essential ontological point stands.

Golems notwithstanding.

I raise, therefore, the issue that it is self-evident that rocks have no dreams (so also, no beliefs, and no knowledge, understood as warranted, credibly true belief); indeed, that at most they compute blindly, they do not contemplate insightfully:

If you doubt the distinction I am here underscoring, simply ask: why am I AWARE that I am in a state of doubt?

Would yon rock be able to convince me it is sufficiently self-aware to doubt like this?

To ask the question and contemplate a conversation with a rock is to answer it.

Nor is it materially different if the rock has been processed into a PC.

The transformation into gates, signal-flow paths and memory cells, backed up by magnetic storage etc., does not resolve the problem: in our actual observation and experience of computation, a material substrate is intelligently designed but blindly executes signal flows and operations based on its architecture and software.

For, Garbage IN, Garbage OUT.

And, as for the racism issue, it seems fairly evident — at minimum, on a common-sense basis (hence the issue of cruelty to animals . . . ) — that at least some other animals may have some degree of that sort of self-awareness, so the notion of drawing arbitrary lines between groups of humans is frankly silly.

Where does all of this point?

To the obvious: minds are not equal to brains, being radically different in core characteristics.

[Image: a bright red ball on a table]

To be appeared to redly on contemplating a bright red ball on a table is not at all the same as to have a sensor-processor suite that may detect, say, 680 nm radiation and then trigger a process that outputs on a screen, “red.”

Which is fairly obvious.
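To sharpen the contrast, here is a minimal toy sketch (my own illustration, not any real vision system; the wavelength bands are rough conventional ranges). It emits the glyphs “red” for a 680 nm reading by blind table lookup, with nothing that it is like to be the program:

```python
# Toy sensor-processor pipeline (illustrative only, not any real system):
# maps a measured wavelength in nanometres to a colour label. It shuffles
# numbers into strings by table lookup; nothing here is "appeared to redly".

def classify_wavelength(nm: float) -> str:
    # Rough conventional visible-spectrum bands.
    if 620 <= nm <= 750:
        return "red"
    if 570 <= nm < 620:
        return "yellow/orange"
    if 495 <= nm < 570:
        return "green"
    if 380 <= nm < 495:
        return "blue/violet"
    return "outside visible range"

print(classify_wavelength(680.0))   # -> "red", by rule, not by awareness
```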

Beyond such contemplation, of course lies the associated conceptual dichotomising of the world into red ball A and rest of world NOT-A:

{ A | NOT-A }

Pictorially:

[Diagram: the first principles of right reason: laws of identity, non-contradiction and excluded middle]

Which deploys the first principles of right reason, not as pre-loaded algorithms, but as insightful, contemplative reasoning on the import and correlates of distinct identity, namely the first-principle laws of identity, non-contradiction and excluded middle, which then point onwards to sufficient reason and causality, etc.

In our actual experience, no unaided rock will even output that chain as a set of glyphs, much less conceive it or grasp its self-evidence.
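To forestall a misreading: a programmed machine can of course be made to emit those glyphs and apply the partition, as in the toy sketch below (the names and the predicate are my own illustration); the claim is that it does so by rule-execution, not by contemplative insight into why the principles must hold.

```python
# A machine CAN be pre-loaded with the { A | NOT-A } partition as a rule.
# Illustrative toy: "is_red_ball" is a hypothetical predicate of mine.

def is_red_ball(thing: str) -> bool:
    return thing == "red ball"

world = ["red ball", "table", "rock", "PC"]
A     = [t for t in world if is_red_ball(t)]
NOT_A = [t for t in world if not is_red_ball(t)]

print("{ A | NOT-A } =", A, "|", NOT_A)
# Each item lands on exactly one side -- excluded middle and non-contradiction
# as encoded rules, executed without any insight into why they must hold.
```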

Where also, we are individually and directly self-aware, and have no good reason to skeptically dismiss that others like us are similarly self-aware. (FWIW, zombie arguments are in effect reductions to absurdity that underscore that we must distinguish minds and brains.)

Which brings to bear the force of Leibniz in his Monadology, on the telling analogy of the mill:

14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . .  16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . . 17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.

The logic that makes a mill a mill, and not just bits and pieces grinding against each other at random, is of course externally imposed by an intelligent agent. The mill reflects such functionally specific, complex organisation and associated information (FSCO/I) that, on experience and analysis alike, we have every good reason to treat it as a reliable sign of design. But it also reflects the problem of the rock: rocks have no dreams, no contemplations; at most they can be used to effect a computation, symbolically or as material forces and components moving against one another.

In short, the design inference divide is pointing to profound worldview level issues.

But, one last contributory point: how can we conceive of an intelligent, conscious hybrid that has mind and brain working together?

ANS: The Eng Derek Smith two-tier controller cybernetic model:

The Eng Derek Smith Cybernetic Model

Here, we see two tiers of control,

(i) an in-the-loop processor, and

(ii) a higher-order supervisory controller

. . . that directs the lower-order one, which serves as its input-output interface to the loop and the external world (a minimal code sketch follows).
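For concreteness, here is a minimal sketch of that two-tier structure. It is an illustration of mine under stated assumptions: the class names and toy control laws are hypothetical, not Eng Smith's actual model, and both tiers here remain wholly mechanical:

```python
# Minimal sketch of a two-tier, Smith-style controller arrangement.
# Hypothetical class names and toy control laws -- illustrative only.

class LoopProcessor:
    """Tier (i): the in-the-loop i/o processor; blindly applies its rule."""
    def __init__(self) -> None:
        self.gain = 1.0

    def step(self, error: float) -> float:
        return -self.gain * error            # fixed, mechanical control law


class Supervisor:
    """Tier (ii): higher-order controller; re-tunes the lower tier's rule."""
    def supervise(self, proc: LoopProcessor, error: float) -> None:
        proc.gain = 2.0 if abs(error) > 5.0 else 1.0   # crude mode switch


proc, sup = LoopProcessor(), Supervisor()
state = 8.0                                   # toy plant state; target is 0.0
for _ in range(4):
    sup.supervise(proc, state)                # tier (ii) directs tier (i) ...
    state += 0.5 * proc.step(state)           # ... which acts on the loop
    print(f"state = {state:+.3f}, gain = {proc.gain}")
```

Notice that both tiers in this sketch are algorithmic; the question pressed below is precisely what happens when the supervisory tier is not.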

I have no doubt that we may be able to create an artificial supervisory controller, programmed — by intelligent designers — at a higher level to take contingent alternative paths in algorithms of various sorts. But such a robotic entity is again a blend of mechanical processing not materially different from the mill, plus perhaps some chance injections that randomise. What it is not, is what some find it ever so hard to acknowledge, never mind their own experience from moment to moment: an insightful, contemplative, aware, rational and responsible, choosing self, even one that is morally governed by the force of OUGHT.

(Which points onwards to the need for a foundational entity . . . IS . . . in our world capable of bearing the weight of a genuine OUGHT. For which, across time and many prolonged debates here at UD, and over the centuries in the world of ideas, the only serious candidate is the inherently good Creator God, a necessary and maximally great being. Something a priori evolutionary materialists are loath to acknowledge, never mind how, even trivially, their system reduces to might and manipulation make right, for want of such a foundation. And if you would doubt that we are governed by ought, consider the point that it is self-evidently wrong to kidnap, torture, rape and murder a child. So much so, that if we see such in progress we will instinctively strain every sinew to save the child from the monster. OUGHT is real.)

How can such a brain-body and mind interface be bridged?

This is a significant onward question, though the need for such an entity has to first be acknowledged.

There are many answers. The hylemorphist points to a form to which the body etc. conform, which points beyond the merely material; the classic dualist conceives of a substance termed spirit that inhabits the body; some point to subtle quantum influences that may intervene in neural networks.

I am not even sure that these are in mutual opposition; they may be along the lines of the old story of the six blind men groping around as they encounter an elephant.

But in a sense that is an onward question. The key one is that we must firmly grasp that rocks have no dreams; it takes a mind to contemplate.

And from that, a whole world opens up for study, analysis, research and just maybe, onwards, synthesis. END

Comments
Mapou, I think the root problem is an equivocation of meanings. Blindly executing an algorithm, no matter how sophisticated, is just that -- blind and unintelligent. Insightfulness (which seems to be inherently non-algorithmic) is contemplative, NOT a computation delimited by algorithms. Creativity is likewise. So is genuine choice not based on canned algorithmic mechanical necessity and/or blind chance. I find JB's discussion of oracles, and how such may inject into a machine that is otherwise algorithmic, a helpful suggestion. Indeed, taking the Smith model in the OP, consider what happens if the higher-order supervisory controller is an oracular entity that interfaces informationally with the i/o processor. It would mean that, inherently, the system is irreducible to the algorithmic, pivoting on blind chance plus mechanical necessity. And its behaviour is prone to discontinuous, transformational changes that are probably unpredictable from a current state or pattern of the system. This strikes me along the lines of: here, at least, we have a way to articulate a pattern that is otherwise hard to represent. And of course information is inherently immaterial and independent of its expression in any particular material, medium or code. But information is a crucial entity for what we are dealing with. Oracles informationally interacting with an i/o system; oracles with consciousness, volition, creative imagination, insight, moral government, etc. Let's do a fresh think. KF
-- kairosfocus, May 17, 2014, 04:50 PM PDT
JB: The paper linked at 1 above is useful and interesting. KF
-- kairosfocus, May 17, 2014, 04:37 PM PDT
Dionisio, I don't believe in strong AI. Strong AI is synonymous with conscious machine intelligence. I am convinced it's nonsense. However, I do believe that future machines will be so intelligent that, unfortunately, many will swear, wrongly, that they have a soul/spirit. I believe intelligent machines will do exactly what we train them and tell them to do, nothing more. And they will do it as intelligently as you and I. I should say that I base my AI opinion on my own research. For a while now, I've been working on an intelligent speech recognizer that can learn to recognize speech in any language just by listening. Just like a baby. This includes animal sounds and languages I know nothing about. So it cannot be said to have been programmed beforehand to recognize anything in particular. I plan to publish my results in the not-too-distant future. Keep your ears and eyes open.
-- Mapou, May 15, 2014, 11:36 PM PDT
One thing we know for sure that 'strong AI' systems will always do well is understand evolution, because they are intelligent and only ignorant creationist IDiots don't understand it ;-) However, will different 'strong AI' systems have opposite, irreconcilable worldviews as we humans do? Will they have fears, anxiety, concerns? Will they worry about not hearing back from a child that hasn't come back home at the expected time? Will they feel sad when a loved one is very sick? Will they rejoice when a child is born or graduates from school? Will they enjoy listening to a melody that brings up pleasant memories? Will they have any sense of humor? Will they be able to love someone? Will they... ok, that's enough for now. Perhaps they will, but based on what set of rules, what algorithms?
-- Dionisio, May 15, 2014, 10:21 PM PDT
Computers can't think because nobody is there to make choices. It's all just memory operations; not one thing is not a memory operation. We are not just memory operations, though we do mostly use just our memories. We drive around almost entirely using our memory, except for choosing where to go. The flaw in all this is in how memory is used by us and by computers. It's the same, but with us, we are choosing for personal desires.
-- Robert Byers, May 15, 2014, 10:21 PM PDT
The 'strong AI' systems should be able to perform many complex tasks we humans can't do, or do things we could do but much more efficiently than we can. However, no matter how much the 'strong AI' system can do, it would not be able to think 'out of the box', because it will always do things based on the capacity of the software that runs its processors, even if it does parallel threading with fine granularity.
-- Dionisio, May 15, 2014, 09:25 PM PDT
Mapou @ 8 Would a 'strong AI' system, that runs a corporation, decide to help some people in the community around one of the corporate offices in a town far away from the headquarters? Why? Based on what rule or algorithm? Would the 'strong AI' system that can perform brain surgery decide to send medical help to poor countries that can't afford to have modern equipment or expensive medicine? Based on what set of rules or algorithm?
-- Dionisio, May 15, 2014, 09:13 PM PDT
fossil @ 12 I see your point and sorta-kinda agree too. The most powerful, super-advanced, futuristic computer would not be able to improvise or give an opinion based on feelings, on emotions, because the 'intelligence' of the AI system is a reflection of the intelligence of the people who designed the software that operates such a system. As of the last time I checked, I don't think we humans know how to program emotions, feelings, etc. If any of you knows how, please share it with the rest of us.
-- Dionisio, May 15, 2014, 08:00 PM PDT
Tim @ 9 & 10 Buddy, I could not have written my opinion on the subject better than you did. So all that's left for me to say is DITTO. Agree.
-- Dionisio, May 15, 2014, 07:10 PM PDT
I think Mapou makes a point. I think computers can be programmed to do some amazing things and on the surface mimic human intelligence, but to me there are limits. Kairosfocus hit on one of them, awareness, but I think there is another. Going back to the big red ball, with the computer spitting out data about it, which seems about all it can do: I suppose it could be programmed to analyze the situation in reference to possible dangers or benefits, but it can't do what I can do. I can look at it and say, “I don't like it.” Of course you would ask me why and I would answer, “I just don't like it.” “But why?” “Well for one I don't like that color of red, I think it should be more of a maroon.” Would the color change make a difference environmentally or biologically? “Nope, I just don't like it.” “Well,” you say, “I like it and I like its size.” But then I would respond, “It's too big, I think it needs to be smaller.” Any reason? “I just don't like big beach balls, never have.” Have you ever been hurt by one? “No, but I still don't like the size and the color stinks.” But, why? . . . No logic, just a matter of taste which just happens to be opposite of yours. Two brains probably equal in complexity but with totally different outcomes and views on the world – for no apparent reason.
-- fossil, May 15, 2014, 07:04 PM PDT
Folks: interesting discussion developing. I am distinguishing computational processing from active contemplative intelligence for many reasons, not least of which is the GIGO point and the blatant difference between a rock and a mind. Just to stir thought:
intelligence (ɪnˈtɛlɪdʒəns) n 1. (Psychology) the capacity for understanding; ability to perceive and comprehend meaning 2. good mental capacity: a person of intelligence. 3. news; information 4. (Military) military information about enemies, spies, etc 5. (Military) a group or department that gathers or deals with such information 6. (often capital) an intelligent being, esp one that is not embodied 7. (Military) (modifier) of or relating to intelligence: an intelligence network. [C14: from Latin intellegentia, from intellegere to discern, comprehend, literally: choose between, from inter- + legere to choose] intelligential adj -- Collins English Dictionary, Complete and Unabridged, HarperCollins Publishers 1991, 1994, 1998, 2000, 2003
I am not just asserting authority; I am highlighting that insight, choice, reception of meaning, discernment, etc. are bound up in intelligence, and those who would re-define it may just need to go get their own word. KF
-- kairosfocus, May 15, 2014, 02:27 PM PDT
Perhaps Mapou and I only differ on definitions. He, I believe, calls a computer's ability to accomplish tasks (see his list @8) intelligence. I don't. For me, those machines and all they can do will certainly appear intelligent, but the way I see it, those computers will never make a decision on their own. All decisions will be imported in the programming. If Mapou is comfortable with calling such importation (if he even agrees with this description!) artificial intelligence, he is certainly free to do so. I prefer advanced (then later, super-super advanced) technology, or perhaps, reflected intelligence.
-- Tim, May 15, 2014, 01:55 PM PDT
In the passage quoted by KF:
one popular argument is that, if a computer appears to be intelligent, that’s merely a reflection of the intelligence of the humans who programmed it. But what if humans’ intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly.
You can call me an AI skeptic. I have considered the parallels honestly. I find them wanting. The following is oversimplified, but if you don't agree, please give me some evidence! How about considering this:
1) computers depend on human programmers so that they can appear intelligent (or reflect intelligence);
2) (unstated) the inanimate depends specifically and only on the human creative mind to reflect creativity;
3) what if human creativity (intelligence) is a reflection of the evolutionary process;
4) (implied) if "3" is true, "1" is not strictly dependent on "2";
5) (implied) the inanimate plus (evolution, time, whatever) produces human intelligence.
My problem with the above is this: UTMs (and all their physical analogs) are incapable of creativity. My outcomes: I could be right, leaving those beings who demonstrate creativity as irreducible to entities that are solely physical; or I could be wrong, and purely physical entities are creative and don't merely reflect imported creativity. Of the two, the strict materialist finds the former abhorrent but has no evidence for the latter, and the counter-evidence is legion (every time I encounter something that is not human, it lacks creativity; every time I encounter a human, I find creativity, well, most of the time). I sure am glad I am not a strict materialist.
-- Tim, May 15, 2014, 01:33 PM PDT
I find the continued conflation of intelligence and consciousness rather disconcerting. Intelligence is not synonymous with consciousness. You can have intelligence without an ability to see red or to appreciate music and the arts. Logic and rationality are simply based on causality, and causality can be computed. The main ingredient of intelligence is the ability to make predictions based on learned patterns and sequences. This is perfectly computable. As I said elsewhere, based on my own research, there is no question that we will create highly intelligent machines that will surpass humans at most if not all intelligent tasks. There is no need to consciously experience colors in the same way as humans in order to behave highly intelligently with purpose. The promise of AI is the future construction of machines that can gain a thorough understanding of their environments and act accordingly. They'll wash the dishes, mop the floor, mow the lawn, fold the laundry, feed the baby, argue a case in court, perform brain surgeries, fight our enemies, design other machines, run a corporation, confound evolutionary biologists and design rockets. And much, much more. This will happen in your lifetimes.
-- Mapou, May 15, 2014, 01:03 PM PDT
Dionisio: “However, if one would have asked the same brilliant machine that humiliated the human chess champion, her opinion on the loser's personality, what could have been her response?” Exactly. That also reminds me of Rain Man, when the doctor asks Raymond to count the number of toothpicks on the floor, and he gives him the exact count in a matter of a second or two. Then the same doctor asks Raymond how much a candy bar costs: “About a dollar.” How much is a car? “About a dollar.” BTW, I am not comparing an autistic person to a computer by any means, but the reality is that raw computational ability is far, far, far from being a “mind”.
-- OldArmy94, May 15, 2014, 12:25 PM PDT
KF, thank you for this interesting OP! (BTW, a young veterinarian recently told me she doesn't understand how someone could spend time reading things like this.) Well, according to some opinions we can read out there, and even in this blog, the revolutionary field of 'strong AI' eventually will lead us, sooner or later, to 'thinking' machines. They claim it's just a matter of time. On at least one occasion, someone presented the chess-playing supercomputer as an example of thinking machines. However, if one would have asked the same brilliant machine that humiliated the human chess champion, her opinion on the loser's personality, what could have been her response? The only intelligence one can easily recognize in the 'strong AI' machines is the intelligence of the guys who design them. Hence, the ultimate supreme intelligence is in the mind that designed the intelligent people who design the 'AI' machines :)
-- Dionisio, May 15, 2014, 10:57 AM PDT
This reminds me of the Uncanny Valley phenomenon. I watch my son play his Xbox games, and though the realism is remarkable, there is a real sense that these pixels are just phantoms, and I just don't see that “valley” ever being bridged. There IS something very different between the computed intellect and the real mind that only man possesses.
-- OldArmy94, May 15, 2014, 06:48 AM PDT
Good work KF. When I contemplate these issues I often think about the movie Bicentennial Man, which was more materialist propaganda than art. It seems to me that the Chinese Room is unanswerable; at least, it has never been answered.
-- Barry Arrington, May 15, 2014, 06:10 AM PDT
JB: Thanks, the paper looks like a good read; gotta run now. KF
-- kairosfocus, May 15, 2014, 05:58 AM PDT
F/N: as a sparker for thought, here is the notorious Lewontin remark, which reveals the circularity of what we are addressing, and . . . inadvertently . . . its self-referential incoherence:
______________
>> . . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . . To Sagan, as to all but a few other scientists, it is self-evident [[--> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question, confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . ] that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [[--> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . . [Lewontin, NYRB 1997. In case you accept or want to use the common talking point that this is "quote-mined" kindly cf here.] >>
_________________
See where the mind collapses on such premises? KF
-- kairosfocus, May 15, 2014, 05:56 AM PDT
kairos - Good thoughts! For a specific argument about a specific aspect of consciousness (creativity), you should check out my paper on whether or not Turing machines can be creative.
-- johnnyb, May 15, 2014, 05:55 AM PDT
