Uncommon Descent Serving The Intelligent Design Community

At Technology Review: There is no clear path to giving computers the power to think

[Image: controls for AI / Pbroks13]

From tech reporter Brian Bergstein at Technology Review:

Is it possible to give machines the power to think, as John McCarthy, Marvin Minsky, and other originators of AI intended 60 years ago? Doing that, Levesque explains, would require imbuing computers with common sense and the ability to flexibly make use of background knowledge about the world. Maybe it’s possible. But there’s no clear path to making it happen. That kind of work is separate enough from the machine-learning breakthroughs of recent years to go by a different name: GOFAI, short for “good old-fashioned artificial intelligence.”

If you’re worried about omniscient computers, you should read Levesque on the subject of GOFAI. Computer scientists have still not answered fundamental questions that occupied McCarthy and Minsky. How might a computer detect, encode, and process not just raw facts but abstract ideas and beliefs, which are necessary for intuiting truths that are not explicitly expressed? More.

See also: Artificial intelligence is no smarter than a rat?


At LiveScience: Will AI become conscious?

We already have AI; it's just primitive, weak, and not comprehensive in any single package. Integrating calculators have been building rules out of rule sets for ages now. Google makes sense of poorly typed queries and finds things that you wanted but couldn't properly describe. Our AI is at the lighter-than-air-vehicle stage of human flight. While it's going to be far more useful and practical than space travel as envisioned 60 years ago, it is similarly bogged down by the practical details of reality: just as anything done in space has to be measured against enormous energy costs, anything AI does well requires vast investments of man-hours, minds, and CPU time. It isn't a magic wand; it doesn't escape entropy; it offers no free lunch. It may, however, offer a cheaper lunch, perhaps even of better quality, while being fed millions of lives' worth of career time and tons upon tons of coal converted into electricity. LocalMinimum
Consciousness is, of course, what is lacking. A conscious I, and all the subjective experiences, representations, and processes that are related to a conscious I. That's what makes the difference at the level of cognition, feeling, and action. A conscious being can understand meanings. A machine cannot. A conscious being can feel joy and pain. A machine cannot. A conscious being can initiate free actions. A machine cannot. "If you’re worried about omniscient computers," I am not. But I could be worried about conscious beings who can use powerful computers to do bad things. I have never been worried about a knife. I could definitely be worried about some hostile person holding a knife in front of me. gpuccio
harry, Excellent! Thanks. Dionisio
The following is from the perspective of an old man who spent most of his adult life working with technology, mostly writing software, but also troubleshooting hardware back in the days when one could use an oscilloscope to determine the particular diode, transistor, or other component that had failed. Most technology is far too miniaturized for that kind of troubleshooting precision these days. Now a faulty circuit pack, if not the entire device, is replaced, and nobody cares which microscopic-sized component went bad. I wrote some very low-level software over the years: software that simulated the instruction set of a computer's CPU, software that managed telephony switching machines, and software that enabled computers to communicate with robotic equipment in a modern factory. I mention all this first so the reader has some idea where I am coming from.

There is no such thing as artificial intelligence. A computer has no more intelligence than your electric can opener, or your hammer, or a box of rocks. A computer is just a tool, as are all machines, albeit an extremely intricate one.

One of the world's greatest scientists, Johannes Kepler, discoverer of the laws of planetary motion, remarked that he was merely thinking God's thoughts after Him. Computers don't even think man's thoughts after him. They don't think at all. They have no awareness whatsoever of what they are doing. They only mindlessly respond to input, sometimes amazingly complex input, in the way they were programmed to. That's it. They implement man's thoughts after him; they do not think them.

The "thinking" of computers is merely an illusion. A very persuasive one sometimes, but an illusion nonetheless. There is "nobody home" in the computer. Don't be fooled like a jungle savage convinced there is a little man inside the radio talking. Yet the savage is really hearing the thoughts of someone, just as believers in AI are really experiencing the thoughts of computer programmers.
It is difficult for programmers to think of a reasonable response to all possible inputs to even a moderately complex program. Mistakes happen. Driverless cars, amazing as they are, sometimes cause fender-benders due to faulty software. The virtually infinite variety of inputs and situations that an AI "god" tasked with directing world affairs would have to deal with is impossible for its programmers to foresee and pre-program a reasonable response to. Its mistakes would be far more catastrophic than a fender-bender, both for those who blindly follow it and for those who don't. And I haven't even mentioned the fact that the "morality" of an AI god's decisions would be based on rules programmed in by software engineers as directed by godless social engineers, not by people with degrees in theology. I half suspect that the anti-Christ will be an AI "god." harry
A talk I gave on the subject (and am giving again next week!): "Solving Engineering Problems Using Theology": https://www.youtube.com/watch?v=yVeWBM1J-NE johnnyb