On this date in 1944 one of the first computers, the Harvard Mark I (built by IBM as the Automatic Sequence Controlled Calculator), became operational. See the Wikipedia article here. From the article:
[The Mark I] could do 3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.
Now, here is the question for the class. What is the difference, in principle, between the Mark I and the IBM Summit, which, as of late 2018, became the fastest supercomputer in the world, capable of performing calculations at the rate of 148.6 petaflops (one petaflop is one thousand million million floating-point operations per second)?
The answer, of course, is “absolutely nothing.”
Both machines do nothing but calculate. The Mark I calculated slowly (by today’s standards). The Summit calculates very rapidly. But there is no difference in principle between performing algorithms slowly and performing them rapidly.
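For scale, here is a minimal back-of-the-envelope comparison using the figures quoted above (3 additions per second for the Mark I, 148.6 petaflops for Summit); it makes the OP’s point vivid without changing it, since the ratio is merely one of speed:

```python
# Back-of-the-envelope speed comparison using the figures quoted above
mark1_adds_per_sec = 3           # Mark I: 3 additions per second
summit_flops = 148.6e15          # Summit: 148.6 petaflops (10^15 flops each)

ratio = summit_flops / mark1_adds_per_sec
print(f"Summit is roughly {ratio:.1e} times faster than the Mark I")
# roughly 5 x 10^16 times faster -- a difference of degree, not of kind
```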
This should give pause to proponents of AI (at least proponents of AI in its “strong” conceptualization). Unless one defines “consciousness” as “executing algorithms very quickly” (which would be absurd), there is no reason to believe that any computer will ever be conscious. Decades from now, when people look back at the Summit the way we look back at the Mark I and marvel at how anyone could have thought it was “fast,” computers will still be, in principle, doing the same thing the Mark I was doing. The argument I am making is practically identical to Searle’s Chinese Room argument, with the Mark I standing in for the room.
OK, but what’s clearly missing is taking the infant brain in the computer and teaching it enough customs and morality to pass as a 1st Grader. And so the question, “A jerk [defined somewhere] from your class [defined somewhere] is standing in front of you in the lunch line [defined somewhere]. He does not know you’re behind him. Should you sucker punch him in the kidneys?”
If the computer cannot answer this question, then it hasn’t been properly programmed as AI.
On the other hand, most of what a Combat Infantryman does once he leaves friendly territory is VERY tightly defined, and I would expect that a patrol where half of the “soldiers” are Kevlar-encased robots would perform better in average situations than an all-human patrol. They’re also going to be better shots.
And on the civilian side, big business has already decided that AI “Help” is better, faster, and cheaper than human Help. The low-IQ, poorly experienced humans who are the alternative are MUCH worse than even simple AI assistance. Note that the humans are working off a NARROWLY delimited script, and if your problem ain’t in their script, then the human is gonna have to pass you to a TRAINED technician, the same way the AI would.
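The script-plus-escalation flow described here can be sketched in a few lines. Everything below is hypothetical illustration (the script entries and the escalation message are invented), but it captures the structure the commenter describes: answer if the problem is in the script, otherwise hand off, exactly as a human first-tier agent would:

```python
# Minimal sketch of a scripted help-desk flow (hypothetical entries)
SCRIPT = {
    "reset password": "Click 'Forgot password' on the login page.",
    "billing question": "Your invoice is under Account > Billing.",
}

def handle(problem: str) -> str:
    # In-script problems get the canned answer; anything else is
    # escalated -- the same behavior for AI or human first-tier help.
    return SCRIPT.get(problem.lower(), "Escalating to a trained technician...")
```

Usage: `handle("Reset Password")` returns the canned answer, while `handle("my modem exploded")` falls through to escalation.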
On the other hand, in several separate airline crashes over the last 10 years, most especially Air France Flight 447, an Airbus A330 crashed into the Atlantic Ocean, killing all onboard, because the pitot tube, an ANCIENT mechanical sensor, had frozen over, and the AI running the autopilot concluded that this MUST mean that airspeed (which was actually close to Mach 1) had fallen to ZERO mph, and so the aircraft MUST be in a “stall,” despite what the human aircrew could see outside the windows and feel with their bodies. So the AI SEIZED CONTROL of the aircraft and threw it into a SEVERE “nose up” attitude in order to “increase lift” (to correct the “stall”). The aircraft then fell 30,000 feet with all systems (except those run by the AI) functioning just fine. It hit the ocean TAIL FIRST, still in an extreme “nose up” attitude. All of the humans onboard died, but the AI was probably still working when the batteries finally went dead…
So, ya wanna have a WHOLE LOT of Manual Override on your AI, and ya wanna have a WHOLE LOT of 1 in a million kinda sub-sub-routines.
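A minimal sketch of the kind of override and sensor cross-check the commenter is calling for. All thresholds, values, and function names here are hypothetical (real avionics logic is vastly more involved); the point is only that a frozen pitot tube can be distinguished from a real stall by cross-checking an independent speed source, and that a manual override should trump everything:

```python
# Hypothetical sketch: manual override plus a redundant-sensor sanity
# check, so a frozen pitot tube is not mistaken for a stall.
def commanded_action(pitot_airspeed_kts: float,
                     gps_groundspeed_kts: float,
                     manual_override: bool) -> str:
    if manual_override:
        # The "WHOLE LOT of Manual Override": the crew always wins.
        return "defer to crew"
    # A pitot reading near zero while GPS groundspeed stays high means
    # a bad sensor, not zero airspeed -- flag it, don't "correct" it.
    if pitot_airspeed_kts < 50 and gps_groundspeed_kts > 300:
        return "suspect sensor failure: hold attitude, alert crew"
    if pitot_airspeed_kts < 50:
        return "stall recovery: nose down"   # nose DOWN, not up
    return "normal flight"
```

With a frozen pitot tube (`commanded_action(0.0, 450.0, False)`) this sketch flags the sensor instead of commanding a stall recovery.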
On the other hand, except for legalistic kinda problems, I don’t see why Catholics couldn’t use AI to run Confessions. It’s all VERY stylized, the vast bulk of the sins confessed are from a standard list, and the penance assigned is also pretty standard. Ya would wanna have some “alert human administrator” if the penitent mentions that he has a continuing urge to kill everyone on Earth and has finally worked out HOW…
Vmahuna,
You are talking past the OP. No one questions that computers can do certain things better than humans by executing algorithms very, very quickly. The interesting issue is whether computers are (or in principle can be) conscious. Nothing you wrote speaks to that issue.
Barry, here are three reasons why computers can never be conscious:
1. There are no good theories of consciousness. At the Chronicle of Higher Education, the entire field was described last year as “bizarre.” Not my words, the writer’s.
So when someone says “Computers can be conscious,” he has the great advantage that he needn’t mean anything specific. By comparison, if he had said, “Computers can be composed entirely of gases,” he might be right or wrong. But we could ask him to explain his view in science-based terms.
We can’t ask for that when he says “Computers can be conscious.” But notice that he is allowed to parade as if he were making a science-based statement.
2. Michael Egnor has assembled a fair amount of the evidence that the human mind is an immaterial reality. For example:
It is probable that human consciousness is related to immateriality, and no aspect of a computer is considered to be immaterial.
3. Because computers are computational only, there are certain types of mental operations that they just do not do. Bigger computers will not help. Gary Smith explored these issues recently in his discussion of why an AI pioneer thinks that Watson, as marketed, is a fraud.
If we think Darwinians have a big problem, the people touting conscious AI probably have a way bigger problem. But not enough people know how bad it is yet.
This might be of related interest for you Mr Arrington:
Because of the computational requirement of merging two computation paths, computers will never achieve consciousness: they would be “continuously hemorrhaging information.”
Also of related interest:
As to the fact that, as Mr Arrington pointed out, computers at their most foundational level are superfast number crunchers, the following quotes from Gödel (and a video about Gödel) are also of related interest:
New @ 3:
To which we might add the fact that we cannot, in principle, “know” whether another person is conscious. I discuss this here:
https://uncommondescent.com/intelligent-design/we-cannot-in-principle-know-whether-a-machine-is-conscious/
Do we know enough yet to be able to make such a determination?
VMahuna @ 1,
>“A jerk [defined somewhere] from your class [defined somewhere] is standing in front of you in the lunch line [defined somewhere]. He does not know you’re behind him. Should you sucker punch him in the kidneys?”
That would certainly spice up the Turing test. Probably get more people interested in computer science and STEM too. 😎
Barry: There Is No Reason To Believe Any Computer Will Ever Be Conscious.
Sev: Do we know enough yet to be able to make such a determination?
Yes, for the reasons pointed out in the OP.
#5 is the key point. We take it “on faith,” so to speak, that other people have the same type of internal conscious experience that we have, but we can’t really know that. Therefore, if a robot behaved exactly as a human does, including describing internal experiences, we could easily believe that it didn’t really have the same kind of experience we were having. So we could never really know whether a machine had consciousness or not.
Folks, Reppert’s reminder:
Yes, we need to hear it over and over until it finally hits home: computationalism is simply barking up the wrong tree. Computation on a substrate is manifestly not the same kind of thing as rational inference is.
KF
Sort of devil’s advocating:
Obviously I can’t know if anyone but me is conscious, so the whole question is null.
But if we could know, the premise that “computers are just calculators” is wrong. From the start, with Hollerith’s 1890 census machine, computers have been primarily DECIDERS or sorters. Calculation is a side gig. If “rational inference” is the criterion, computers have always been there.
But “rational inference” is irrelevant. Awareness doesn’t require rationality, and rationality doesn’t require awareness. Totally disjunct concepts.
Hazel at 9. KF answered this in the post I linked to:
Polistra @ 11. Your argument rests on an equivocation. A computer does not “decide” or “sort” in the same way a human decides or sorts. You are using the same words to describe very different things.
Biological learning curves outperform existing ones in artificial intelligence algorithms