Uncommon Descent Serving The Intelligent Design Community

There is no Reason to Believe Any Computer Will Ever be Conscious


On this date in 1944 one of the first computers, the Harvard Mark I (built by IBM as the Automatic Sequence Controlled Calculator), became operational.  See the Wiki article here.  From the article:

[The Mark I] could do 3 additions or subtractions in a second.  A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.

Now, here is the question for the class.  What is the difference, in principle, between the Mark I and the IBM Summit, which, as of late 2018, was the fastest supercomputer in the world, capable of performing calculations at the rate of 148.6 petaflops (one petaflop is one thousand million million floating-point operations per second)?

The answer, of course, is “absolutely nothing.” 

Both machines do nothing but calculate.  The Mark I calculated slowly (by today’s standards).  The Summit calculates very rapidly.  But there is no difference in principle between performing algorithms slowly as opposed to rapidly.

This should give pause to proponents of AI (at least proponents of AI in its “strong” conceptualization).  Unless one defines “consciousness” as “executing algorithms very quickly” (which would be absurd), there is no reason to believe that any computer will ever be conscious.  Decades from now, when people look back at the Summit the way we look back at the Mark I and marvel at how anyone could have thought it was “fast,” computers will still be, in principle, doing the same thing the Mark I was doing.  The argument I am making is practically identical to Searle’s Chinese Room argument, with the Mark I standing in for the room.
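For concreteness, the speed gap described above is a matter of simple arithmetic. The figures come from the post itself; this sketch merely divides one rate by the other:

```python
# Figures quoted in the post: the Mark I managed about 3 additions per
# second; Summit runs at 148.6 petaflops (1 petaflop = 1e15 operations
# per second).
mark_i_ops_per_sec = 3
summit_ops_per_sec = 148.6e15

# The ratio is enormous -- roughly 5 x 10^16 -- but it is still only a
# ratio of calculation speeds, which is the point of the argument.
ratio = summit_ops_per_sec / mark_i_ops_per_sec
print(f"Summit is roughly {ratio:.1e} times faster than the Mark I")
```

However many zeros the ratio carries, both numbers measure the same thing: operations per second.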

Biological learning curves outperform existing ones in artificial intelligence algorithms PeterA
Polistra @ 11. Your argument rests on an equivocation. A computer does not "decide" or "sort" in the same way a human decides or sorts. You are using the same words to describe very different things. Barry Arrington
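Barry's point about equivocation can be made concrete. At bottom, what a machine calls "deciding" or "sorting" reduces to arithmetic comparison. A minimal sketch (the function names are my own, purely illustrative):

```python
def greater(a, b):
    # What looks like a "decision" is a calculation: compute b - a and
    # test its sign. No judgment occurs here, only arithmetic.
    return (b - a) < 0

def sort(xs):
    # "Sorting" is nothing over and above repeated comparisons,
    # i.e., repeated calculations.
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if greater(out[i], out[j]):
                out[i], out[j] = out[j], out[i]
    return out

print(sort([3, 1, 2]))  # [1, 2, 3]
```

Whether this counts as "deciding" in the sense a human decides is exactly what is in dispute.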
Hazel at 9. KF answered this in the post I linked to:
BA, the issue pivots on what we mean by knowledge and by certainty. Both come in degrees; this is the issue of warrant. We are self evidently certain of our own consciousness, undeniably so. But we may be morally certain that others are similarly conscious, so that it would be irresponsible to treat them as mere empty zombie-bots. Turning to machines, why would we even entertain that such would be more than sophisticated calculating machines, apart from an implicit notion that matter, suitably arranged, becomes conscious? The empirical, observational evidence for such is nil. Where, if sufficiently clever programming and hardware were to compose a de novo story or the like, we may have a certain degree of artificial intelligence, but this is not to be equated to consciousness. Instead we have a problem that traces to the issue of presumed materialism, often lab coat clad. But evolutionary materialism is also self-referentially absurd in many ways, so despite its social power the presumption is ill-founded. Consciousness is yet another sign that reality is more than the materialists suppose. KF
Barry Arrington
Sort of devil's advocating: Obviously I can't know if anyone but me is conscious, so the whole question is null. But if we could know, the premise that "computers are just calculators" is wrong. From the start with Hollerith's 1890 census machine, computers are primarily DECIDERS or sorters. Calculation is a side gig. If "rational inference" is the criterion, computers have always been there. But "rational inference" is irrelevant. Awareness doesn't require rationality, and rationality doesn't require awareness. Totally disjunct concepts. polistra
Folks, Reppert's reminder:
. . . let us suppose that brain state A [--> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [--> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
Yes, we need to hear it over and over until it finally hits home: computationalism is simply barking up the wrong tree. Computation on a substrate is manifestly not the same kind of thing as rational inference is. KF kairosfocus
#5 is the key point. We take it "on faith", so to speak, that other people have the same type of internal conscious experience that we have, but we can't really know that. Therefore if a robot behaved exactly as a human, including describing internal experiences, we could easily believe that it didn't really have the same kind of experience we were having. So we could never really know whether a machine had consciousness or not. hazel
Barry: There Is No Reason To Believe Any Computer Will Ever Be Conscious. Sev: Do we know enough yet to be able to make such a determination? Yes, for the reasons pointed out in the OP. Barry Arrington
VMahuna @ 1, >“A jerk [defined somewhere] from your class [defined somewhere] is standing in front of you in the lunch line [defined somewhere]. He does not know you’re behind him. Should you sucker punch him in the kidneys?” That would certainly spice up the Turing test. Probably get more people interested in computer science and STEM too. 8-) EDTA
There Is No Reason To Believe Any Computer Will Ever Be Conscious
Do we know enough yet to be able to make such a determination? Seversky
New @ 3: To which we might add the fact that we cannot, in principle, "know" whether another person is conscious. I discuss this here: https://uncommondesc.wpengine.com/intelligent-design/we-cannot-in-principle-know-whether-a-machine-is-conscious/ Barry Arrington
This might be of related interest for you, Mr Arrington: Because of the computational requirement of merging two computation paths, computers will never achieve consciousness because they would be “continuously hemorrhaging information.”
Sentient robots? Not possible if you do the maths - 13 May 2014
Excerpt: Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. ,,, But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is "1" if A and B are different and "0" if A and B are the same. In this scenario, it is impossible to predict the output based on A or B alone – you need both. Crucially, this type of integration requires loss of information, says Maguire: "You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously hemorrhaging information." ,,, Based on this definition, Maguire and his team have shown mathematically that computers can't handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can't be conscious.
http://www.newscientist.com/article/dn25560-sentient-robots-not-possible-if-you-do-the-maths.html#.U3LD5ChuqCe

Mathematical Model Of Consciousness Proves Human Experience Cannot Be Modeled On A Computer - May 2014
Excerpt: The central part of their new work is to describe the mathematical properties of a system that can store integrated information in this way but without it leaking away. And this leads them to their central proof. “The implications of this proof are that we have to abandon either the idea that people enjoy genuinely [integrated] consciousness or that brain processes can be modeled computationally,” say Maguire and co. Since Tononi’s main assumption is that consciousness is the experience of integrated information, it is the second idea that must be abandoned: brain processes cannot be modeled computationally.
https://medium.com/the-physics-arxiv-blog/mathematical-model-of-consciousness-proves-human-experience-cannot-be-modelled-on-a-computer-898b104158d
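The information loss Maguire points to is just the elementary behaviour of the gate itself. A minimal sketch (illustrative only, not his proof): two bits go in, one comes out, so distinct inputs collide on the same output and cannot be recovered from the output alone.

```python
def xor(a, b):
    # XOR outputs 1 when the inputs differ, 0 when they match.
    return a ^ b

# Both (0, 1) and (1, 0) map to output 1, so seeing a 1 tells you the
# inputs differed but not which was which: one bit has been lost.
preimage_of_1 = [(a, b) for a in (0, 1) for b in (0, 1) if xor(a, b) == 1]
print(preimage_of_1)  # [(0, 1), (1, 0)]
```

Whether the brain "integrates" information in this lossy sense is, of course, the contested premise.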
Also of related interest:
Consciousness Does Not Compute (and Never Will), Says Korean Scientist - May 2015 (article based on 2008 paper)
Excerpt: "In his 2008 paper, "Non-computability of Consciousness," Daegene Song proves human consciousness cannot be computed. Song arrived at his conclusion through quantum computer research in which he showed there is a unique mechanism in human consciousness that no computing device can simulate. "Among conscious activities, the unique characteristic of self-observation cannot exist in any type of machine," Song explained. "Human thought has a mechanism that computers cannot compute or be programmed to do." "Non-computability of Consciousness" documents Song's quantum computer research into the technological singularity (TS), or strong artificial intelligence. Song was able to show that in certain situations, a conscious state can be precisely and fully represented in mathematical terms, in much the same manner as an atom or electron can be fully described mathematically. That's important, because the neurobiological and computational approaches to brain research have only ever been able to provide approximations at best. In representing consciousness mathematically, Song shows that consciousness is not compatible with a machine. Song's work also shows consciousness is not like other physical systems like neurons, atoms or galaxies. "If consciousness cannot be represented in the same way all other physical systems are represented, it may not be something that arises out of a physical system like the brain," said Song. "The brain and consciousness are linked together, but the brain does not produce consciousness. Consciousness is something altogether different and separate. The math doesn't lie."
Of note: Daegene Song obtained his Ph.D. in physics from the University of Oxford
http://www.33rdsquare.com/2015/05/consciousness-does-not-compute-says.html

Reply to (alleged) Mathematical Error in "Incompatibility Between Quantum Theory and Consciousness" - Daegene Song - 2008
http://www.neuroquantology.com/index.php/journal/article/download/176/176
As to the fact that, as Mr Arrington pointed out, computers, at their most foundational level, are superfast number crunchers, the following quotes from Gödel (and a video about Gödel) are also of related interest:
"Either mathematics is too big for the human mind, or the human mind is more than a machine." - Kurt Gödel, as quoted in Topoi: The Categorial Analysis of Logic (1979) by Robert Goldblatt, p. 13

Gödel’s philosophical challenge (to Turing) – Wilfried Sieg – lecture video
38 second mark: “The human mind infinitely surpasses any finite machine.”
http://www.youtube.com/watch?v=je9ksvZ9Av4

“Even if the finite brain cannot store an infinite amount of information, the spirit may be able to. The brain is a computing machine connected with a spirit. If the brain is taken to be physical and as [to be] a digital computer, from quantum mechanics [it follows that] there are then only a finite number of states. Only by connecting it [the brain] to a spirit might it work in some other way.” - Kurt Gödel, Section 6.2.14 from A Logical Journey by Hao Wang, MIT Press, 1996

Cantor, Gödel, & Turing: Incompleteness of Mathematics - video
https://www.facebook.com/philip.cunningham.73/videos/vb.100000088262100/1119397401406525/?type=2&theater
Barry, here are three reasons why computers can never be conscious: 1. There are no good theories of consciousness. At The Chronicle of Higher Education, the entire field was described last year as “bizarre.” Not my words, the writer’s. So when someone says “Computers can be conscious,” he has the great advantage that he needn’t mean anything specific. By comparison, if he had said, “Computers can be composed entirely of gases,” he might be right or wrong. But we could ask him to explain his view in science-based terms. We can’t ask for that when he says “Computers can be conscious.” But notice that he is allowed to parade as if he were making a science-based statement. 2. Michael Egnor has assembled a fair amount of the evidence that the human mind is an immaterial reality. For example:
I have argued that logic and science strongly point to the immateriality of man’s intellect and will. The research on free will by Wilder Penfield and Benjamin Libet, the remarkable unity of intellect despite disconnection of the hemispheres of the brain, the lack of brain localization for abstract thought as predicted by phrenologists, the fact that there are no intellectual seizures, the remarkable preservation of complex abstract thought in some patients with massive brain injury and persistent vegetative state, and the thus far intractable difficulty with human cloning despite the relative ease of animal cloning all point to an immaterial aspect to man’s soul.
It is probable that human consciousness is bound up with immateriality, and no aspect of a computer is considered to be immaterial. 3. Because computers are computational only, there are certain types of mental operations that they just do not do. Bigger computers will not help. Gary Smith explored these issues recently in his discussion of why an AI pioneer thinks that Watson, as marketed, is a fraud. If we think Darwinians have a big problem, the people touting conscious AI probably have a way bigger problem. But not enough people know how bad it is yet. News
Vmahuna, You are talking past the OP. No one questions that computers can do certain things better than humans through the execution of algorithms very, very quickly. The interesting issue is whether computers are (or in principle can be) conscious. Nothing you wrote speaks to that issue. Barry Arrington
OK, but what's clearly missing is taking the infant brain in the computer and teaching it enough customs and morality to pass as a 1st Grader. And so the question, "A jerk [defined somewhere] from your class [defined somewhere] is standing in front of you in the lunch line [defined somewhere]. He does not know you're behind him. Should you sucker punch him in the kidneys?" If the computer cannot answer this question, then it hasn't been properly programmed as AI. On the other hand, most of what a Combat Infantryman does once he leaves friendly territory is VERY tightly defined, and I would expect that a patrol where half of the "soldiers" are Kevlar-encased robots would perform better in average situations than an all-human patrol. They're also going to be better shots. And on the civilian side, big business has already decided that AI "Help" is better, faster, and cheaper than human Help. The low-IQ, poorly experienced humans who are the alternative are MUCH worse than even simple AI assistance. Note that the humans are working off a NARROWLY delimited script, and if your problem ain't in their script, then the human is gonna have to pass you to a TRAINED technician, the same way the AI would. On the other hand, there have been several airline crashes over the last 10 years that cut the other way. Most especially, Air France Flight 447, an Airbus 330, crashed into the Atlantic Ocean killing all onboard because the pitot tube, an ANCIENT mechanical sensor, had frozen over and the AI running the autopilot concluded that this MUST mean that airspeed (which was actually close to Mach 1) had fallen to ZERO mph and so the aircraft MUST be in a "stall", despite what the human aircrew could see outside the windows and feel with their bodies. So the AI SEIZED CONTROL of the aircraft and threw it into a SEVERE "nose up" attitude in order to "increase lift" (to correct the "stall"). The aircraft then fell 30,000 feet with all systems (except those run by the AI) functioning just fine.
It hit the ocean TAIL FIRST, still in an extreme "nose up" attitude. All of the humans onboard died, but the AI was probably still working when the batteries finally went dead... So, ya wanna have a WHOLE LOT of Manual Override on your AI, and ya wanna have a WHOLE LOT of 1 in a million kinda sub-sub-routines. On the other hand, except for legalistic kinda problems, I don't see why Catholics couldn't use AI to run Confessions. It's all VERY stylized, the vast bulk of the sins confessed are from a standard list, and the penance assigned is also pretty standard. Ya would wanna have some "alert human administrator" if the penitent mentions that he has a continuing urge to kill everyone on Earth and has finally worked out HOW... vmahuna
