Uncommon Descent: Serving the Intelligent Design Community

Neurosurgeon outlines why machines can’t think


From Michael Egnor at Mind Matters Today:

The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning. That is, in fact, what makes thought so remarkable and also what makes computation so useful. You can think about anything, and you can use the same computer to express your entire range of thoughts because computation is blind to meaning.

Thought is not merely not computation. Thought is the antithesis of computation. Thought is precisely what computation is not. Thought is intentional. Computation is not intentional.

[Image: iCub robot / Xavier Care (CC)]

A reader may object at this point that the output of computation seems to have meaning. After all, the essay was typed on a computer. Yes, but all of the meaning in the computation is put into it by a human mind. Computation represents thought in the same way that a pen and paper can be used to represent thought, but computation does not generate thought and cannot produce thought. Computation can, of course, distort thought, reveal thought, conceal thought, etc., just as pen and paper can. And that is the challenge we face in understanding how artificial intelligence works and how it will affect us.
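The "blindness to meaning" point can be illustrated with a toy sketch (the function name and data here are hypothetical, chosen only for illustration): the same bitwise operation executes identically whether a human takes its operands to encode text, sensor readings, or nothing at all. Any meaning lives entirely outside the computation.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings. The operation itself is
    indifferent to whatever the bytes are taken to mean."""
    return bytes(x ^ y for x, y in zip(a, b))

key = bytes([0x2A] * 5)

# One computation, three human interpretations:
as_text = xor_bytes(b"HELLO", key)                 # "a ciphertext"
as_nums = xor_bytes(bytes([1, 2, 3, 4, 5]), key)   # "masked sensor data"
as_none = xor_bytes(bytes(5), key)                 # "no interpretation at all"

# Applying the same operation again recovers the inputs,
# again with no regard to what they mean.
assert xor_bytes(as_text, key) == b"HELLO"
```

The machine performs the identical state transitions in all three cases; only the reader's labels differ.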

But to believe that machines can think or that human thought is a kind of computation is a profound error. More.

Note: Mind Matters Today is the blog of the Walter Bradley Center for Natural and Artificial Intelligence.

See also: Modern brain imaging techniques offer examples of a human mind with very little brain (Michael Egnor)

PS: I have too often seen redefinitions that materially alter the meaning of key terms, which are then inappropriately (and often unwittingly) projected back onto the original sense. At issue here is the gap between a blindly mechanical and/or stochastic computational substrate and self-aware, conscious perception, insight, evaluation, and reflective reasoning from ground to consequent -- a meaning-based logical relationship, not a causal one. Modern thought and analysis are too often tainted with an implicit evolutionary materialism, whose self-referential incoherence pivots on precisely this gap and undermines the credibility of conclusions drawn from it. While we are at it, there is a further gap: the organisation and information required to arrive at a functional computational substrate with interwoven hardware and software. The only actually observed cause of such functionally specific complex organisation and associated information -- and the only one supported by needle-in-a-haystack search-challenge analysis -- is intelligently directed configuration. Yet an implicit evolutionary materialism often infers that such FSCO/I must somehow have emerged by blind mechanisms; even where one disagrees, that assumption may still be shaping one's thought. kairosfocus
FF, kindly define for us learning and its products, knowledge and understanding. Compare, for example, the understanding that knowledge is warranted, credibly true (and so also reliable) belief, and the implications for responsibly and rationally free contemplation, insight, and inference. Then show us how the definitions you use bridge the gaps I pointed out. KF kairosfocus
kairosfocus @6, A learning algorithm that is programmed to learn like a baby will do all the things you mentioned. It's true that, as of this writing, we have only limited programs that can do a few of these things in isolation, but there is no reason to suppose that we cannot build learning machines that achieve a human-level ability to operate and accomplish complex tasks in the world as efficiently and intelligently as humans do. I have written an unsupervised audio learning program for my research that can learn to recognize sounds, including speech and the sounds of musical instruments, machinery, and animals, yet I did not explicitly program it with any knowledge about specific sounds. Causal reasoning is a physical process. There is a reason the brain is so complex, containing 100 billion neurons and trillions of synapses; they are not there just for grins and giggles. The brain is a magnificent machine designed by a magnificent designer, and we should not denigrate its amazing abilities just because we want to oppose atheists and materialists. FourFaces
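FourFaces does not share code, but the general kind of unsupervised audio learning described can be sketched in outline. Everything below is a hypothetical illustration, not the commenter's program: frame an audio signal, take log-magnitude spectra as features, and cluster them with plain k-means, so that recurring sound types are grouped without any labels being supplied.

```python
import numpy as np

def spectral_frames(signal, frame_len=256, hop=128):
    """Slice a 1-D signal into overlapping frames and return
    log-magnitude spectra, one feature vector per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(spectra)

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: no labels are given; clusters emerge
    from regularities in the data alone."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "recording": one second of a low tone, then one second of noise.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(1).normal(0.0, 0.5, sr)
X = spectral_frames(np.concatenate([tone, noise]))
labels, _ = kmeans(X, k=2)
# Tone frames and noise frames should typically land in different
# clusters, with no prior knowledge of either sound built in.
```

Whether such label-free clustering amounts to "learning to recognize sounds" in the sense the thread is debating is, of course, exactly the point in dispute.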
FF, no. Models embedded in code reflect the programmer's understanding, not the machine's. The machine blindly executes mechanical and/or stochastic causal chains of events on input and/or stored signals, per GIGO; it is not understanding meanings, inferring import, or drawing reasoned conclusions through ground-consequent reasoning or judgement of the degree of support for inductive conclusions. There is a categorical distinction at work. KF kairosfocus
Latemarch @2, There is no question that current AI is dumb, but that does not mean we cannot build intelligent machines. Nothing about understanding requires a spiritual entity: understanding simply means having a causal and predictive model of one's environment. ID supporters, and especially Christians, should not come out against the possibility of intelligent machines, in my opinion. On the contrary, we Christians invented modern science. We should be the ones to lead this field and show those Godless materialists that they are clueless about both intelligence and consciousness. FourFaces
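On FourFaces's definition, "understanding" reduces to having a causal and predictive model of the environment. A minimal sketch of that idea (entirely hypothetical, with a toy environment invented for illustration) is a frequency-based next-state predictor:

```python
from collections import Counter, defaultdict

class PredictiveModel:
    """Learns which state tends to follow which from observed
    transitions, and predicts the most likely successor -- a
    bare-bones 'causal and predictive model of one's environment'."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, state, next_state):
        self.transitions[state][next_state] += 1

    def predict(self, state):
        if not self.transitions[state]:
            return None  # never observed: no basis for prediction
        return self.transitions[state].most_common(1)[0][0]

# Toy environment: a repeating day/night cycle.
model = PredictiveModel()
cycle = ["dawn", "day", "dusk", "night"] * 10
for state, nxt in zip(cycle, cycle[1:]):
    model.observe(state, nxt)

assert model.predict("day") == "dusk"
assert model.predict("night") == "dawn"
```

Whether this kind of statistical anticipation deserves the word "understanding" is precisely what Egnor and kairosfocus deny, so the sketch illustrates the definition rather than settling the argument.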
Why thought is unique to human beings and impossible for AI: thought is brought about by, and is a property of, a conscious agent. Fundamental properties of conscious agents are experience, a sense of self ("I-ness"), and intentionality. These qualities of consciousness are closely related to the experience of qualia, or what it is like to perceive the color red, for instance. An AI system is fundamentally a complex machine, a computational system, and the essence of computation is the manipulation of data through the execution of algorithms by a computing machine following a logical plan or program. The execution of algorithms by a computing machine of any possible complexity and speed, the program being executed, and the computing machine itself are fundamentally in a different category of existence from the experience of qualia. This basically boils down to the famous "hard problem" of consciousness. The manipulation of 1s and 0s by logic gates in a computer simply does not have these qualities of conscious experience, and no complexification of such a machine will make any difference to this. doubter
blind, mechanical and/or stochastic computation is not insightful contemplation. kairosfocus
FF@1: There is a lot of blurring of the meaning of words in the Artificial Intelligence field. I am not sure I understand your use of the word "intelligence." Merriam-Webster offers this:
a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
While AI can learn, I'm pretty sure it can't understand. The last point, dealing with new or trying situations, is going to be a huge problem (see "Self-Driving Cars Hit an Unnoticed Pothole"). The generalization problem is why your robot will not be able to walk into just any kitchen and fix the breakfast you describe. Latemarch
Egnor is confusing conscious thought with intelligence. Words like "thinking" are confusing because they cannot be precisely defined. If a robot can walk into a generic kitchen and fix a breakfast of scrambled eggs, bacon, toast, and coffee, that robot is indeed intelligent. Conscious? No. But consciousness is not required for intelligence. PS. Such a robot will be built, probably within Egnor's lifetime. FourFaces
