Uncommon Descent Serving The Intelligent Design Community

Assisted intelligence vs. artificial intelligence


From software engineer Brendan Dixon at Evolution News & Views:

AI theorists consider what they call Artificial Generalized Intelligence (or AGI) the ultimate goal: The intelligence of an AGI would match or beat — if you believe Musk, Kurzweil, and the other true believers — human intelligence. For these theorists, AI’s recent successes, including Google’s DeepMind, IBM’s Watson, and Tesla’s self-driving cars, are no more than steps toward that end. Like all goals, however, the pursuit of AGI rests not just on a desire to see what we can accomplish, but on beliefs about what is.

The misguided goals, the bad aim, of so much AI (though not all) arise from dismissing human uniqueness. Such AI becomes, not a tool to assist humans, but one to replace them. Whether it replaces uniquely human abilities, such as making moral judgments, or squeezes humans out altogether, as some robotics proposals tend to assume, someone will get hurt. Re-aiming AI toward “Assisted Intelligence,” rather than replacement-directed “Artificial Intelligence,” would bring more benefit and remove the scariest scenarios. Our tools do not cause our problems; it is how we use them. More.

On the other hand, computers created us, right?

See also: Data basic

Follow UD News at Twitter!

Comments
Here is another article from Michael Egnor which goes well with the preceding:
Is Artificial Intelligence Possible? - Michael Egnor - July 6, 2015
Excerpt: The arguments against AI are several. I believe the most convincing is the argument about representation and instantiation of universals...

To see why a machine cannot think, consider the difference between universals and particulars. Particulars are specific things that exist in the world -- a particular apple, or pencil, or person. Universals are concepts which do not exist as particulars, but are real in some sense. Love, mercy, and justice are universals. Thought is the perception of particulars and the contemplation of universals...

Both particulars and universals can be represented in matter. I can take a photograph of an apple, and the apple is then represented on my camera's memory card. I can write a love sonnet to my wife, and my love for my wife is represented in the sonnet on my hard drive. It is in this sense that particulars and universals can be represented in computers. Specific arrays of electrons can represent apples and love and all sorts of things, according to the input and output of programmers and users of the machine. But representations are not the thing itself. My photo of my apple or my sonnet to my wife are not my actual apple or my actual love for my wife. They are representations of my apple and my love -- my apple and my love exist in some other way, but neither can meaningfully be said to be wholly in my computer...

Now you may ask, "If universals cannot be instantiated in matter, how can the brain, which is matter, think about a universal?" That's a good question. It is an argument -- an ancient argument -- for the immateriality of the intellect.
http://www.evolutionnews.org/2015/07/is_artificial_i097441.html
bornagain77
October 17, 2016, 06:24 AM PDT
That computers have no real comprehension of human language is made clear by Google Translate:
What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014
Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all-time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today...

...Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014.
Erik J. Larson - Founder and CEO of a software company in Austin, Texas
http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
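The lexical-ambiguity problem Haugeland and Larson describe can be seen in miniature. The following Python snippet is a deliberately naive word-for-word translator with a made-up mini-dictionary (a toy sketch, not a description of how Google Translate actually works); because every English word gets exactly one fixed gloss, it is structurally incapable of telling the financial "bank" from the river "bank", the kind of ambiguity that requires holistic, context-sensitive interpretation:

    # A deliberately naive word-for-word English-to-Spanish "translator".
    # (Toy sketch: the mini-dictionary below is invented for illustration.)
    # Each English word gets exactly one gloss, so a lexically ambiguous
    # word is forced into a single reading no matter the context.
    GLOSSES = {
        "i": "yo", "deposited": "deposité", "money": "dinero",
        "at": "en", "the": "el", "of": "de",
        "bank": "banco",   # financial sense only; a riverbank is "orilla"
        "we": "nosotros", "sat": "nos sentamos", "on": "en",
        "river": "río",
    }

    def translate(sentence: str) -> str:
        words = sentence.lower().rstrip(".").split()
        return " ".join(GLOSSES.get(w, w) for w in words)

    # Both sentences come out with "banco", but the second needs the
    # riverbank sense; nothing at the word level can detect that.
    print(translate("I deposited money at the bank."))
    print(translate("We sat on the bank of the river."))

Any system that maps words (or even whole phrases) to fixed outputs without modeling context inherits this failure, which is Haugeland's point made concrete.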
Since a computer has no free will with which to create new information, nor conscious awareness with which to take the overall context of language into consideration, one simple way of defeating the Turing test would be to tell, or invent, a new joke:
“(a computer) lacks the ability to distinguish between language and meta-language... As known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. (In the Turing test) The judge can (thus) reliably tell the human (from the computer by simply telling a new joke)” - niwrad
Turing Test Extra Credit - Convince The Examiner That He's The Computer - cartoon
http://imgs.xkcd.com/comics/turing_test.png
In the following article, written in response to Tom Wolfe's recent book exposing the fact that Darwinists have no real clue how human language could possibly have 'evolved', neurosurgeon Dr. Michael Egnor points out that there is an irreducible element to human language that will forever be beyond materialistic explanation (and thus, by default, that the ability to actually understand human language will forever be beyond our capacity to program into computers):
Language Is a Rock Against Which Evolutionary Theory Wrecks Itself - Michael Egnor - September 19, 2016
Excerpt: Wolfe provides a précis of his argument: "Speech is not one of man's several unique attributes -- speech is the attribute of all attributes!" And yet, as Wolfe points out, Darwinists are at an utter loss to explain how language -- the salient characteristic of man -- "evolved." None of the deep drawer of evolutionary just-so stories come anywhere close to explaining how man might have acquired the astonishing ability to craft unlimited propositions and concepts and subtleties within subtleties using a system of grammar and abstract designators (i.e. words) that are utterly lacking anywhere else in the animal kingdom...

I have argued before that the human mind is qualitatively different from the animal mind. The human mind has immaterial abilities -- the intellect's ability to grasp abstract universal concepts divorced from any particular thing -- and that this ability makes us more different from apes than apes are from viruses. We are ontologically different. We are a different kind of being from animals. We are not just animals who talk. Although we share much in our bodies with animals, our language -- a simulacrum of our abstract minds -- has no root in the animal world. Language is the tool by which we think abstractly. It is sui generis. It is a gift, a window into the human soul, something we are made with, and it did not evolve.

Language is a rock against which evolutionary theory wrecks, one of the many rocks -- the uncooperative fossil record, the jumbled molecular evolutionary tree, irreducible complexity, intricate intracellular design, the genetic code, the collapsing myth of junk DNA, the immaterial human mind -- that comprise the shoal that is sinking Darwin's Victorian fable.
http://www.evolutionnews.org/2016/09/language_is_a_r103151.html
Moreover, speech, or more particularly our ability to infuse information into material substrates, is what has made us 'masters of the planet', and that in spite of the fact that, on a Darwinian view of things, we are a 'sad case' as far as 'survival of the fittest' is concerned:
“Speech is 95 percent plus of what lifts man above animal! Physically, man is a sad case. His teeth, including his incisors, which he calls eyeteeth, are baby-size and can barely penetrate the skin of a too-green apple. His claws can’t do anything but scratch him where he itches. His stringy-ligament body makes him a weakling compared to all the animals his size. Animals his size? In hand-to-paw, hand-to-claw, or hand-to-incisor combat, any animal his size would have him for lunch. Yet man owns or controls them all, every animal that exists, thanks to his superpower: speech.”
—Tom Wolfe, in the introduction to his book, The Kingdom of Speech
https://books.google.com/books?id=NPslCwAAQBAJ&pg=PT5
Also of related interest: from quantum mechanics we find that physical reality ultimately reduces to an 'information theoretic' basis. As well, over the six decades since the discovery of DNA's structure, we find that life itself is 'information theoretic' in its foundational basis. It is hard to imagine a more convincing proof that we are made ‘in the image of God’ than finding that both the universe and life itself are ‘information theoretic’ in their foundational basis, and that we, of all the creatures on earth, uniquely possess the ability to understand and create information, and have come to ‘master the planet’ precisely because of our unique ability to infuse information into material substrates. Verses and Music:
Genesis 1:26
And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.

John 1:1-4
In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by Him, and without Him was not anything made that was made. In Him was life, and that life was the Light of men.

Casting Crowns - The Word Is Alive
https://www.youtube.com/watch?v=X9itgOBAxSc
Of humorous supplemental note: Artificial Intelligence debunked in one short paragraph:
Your Computer Doesn't Know Anything - Michael Egnor - January 23, 2015
Excerpt: Your computer doesn't know a binary string from a ham sandwich. Your math book doesn't know algebra. Your Rolodex doesn't know your cousin's address. Your watch doesn't know what time it is. Your car doesn't know where you're driving. Your television doesn't know who won the football game last night. Your cell phone doesn't know what you said to your girlfriend this morning.
http://www.evolutionnews.org/2015/01/your_computer_d_1092981.html
bornagain77
October 17, 2016, 05:13 AM PDT
As to this excerpt from the article:
AI theorists consider what they call Artificial Generalized Intelligence (or AGI) the ultimate goal: The intelligence of an AGI would match or beat — if you believe Musk, Kurzweil, and the other true believers — human intelligence. For these theorists, AI’s recent successes, including Google’s DeepMind, IBM’s Watson, and Tesla’s self-driving cars, are no more than steps toward that end...

That is, the hope for AGI begins by failing to appreciate human intelligence, assuming it to be the accidental by-product -- an emergent condition with the illusion of free will -- of random changes locked in a struggle for survival.
Methinks the demise of the uniqueness of human intelligence is greatly exaggerated. The fallacious belief that human intelligence is nothing but massive amounts of computational ability has been with us since Alan Turing laid the theoretical groundwork for the modern computer.
Alan’s brain tells his mind, “Don’t you blow it.”
Listen up! (Even though it’s inchoate.)
“My claim’s neat and clean.
I’m a Turing Machine!”
… ‘Tis somewhat curious how he could know it.
Ironically, Alan Turing, in demonstrating that Gödel's incompleteness results carry over from mathematics to computers (i.e. the undecidability of the infamous 'halting problem'), was himself instrumental in directly falsifying the belief that human intelligence could ever be programmed into computers. You can pick up that bit of history in the latter part of the following video:
Cantor, Gödel, & Turing: Incompleteness of Mathematics - video (excerpted from BBC's 'Dangerous Knowledge' documentary) https://www.facebook.com/philip.cunningham.73/videos/vb.100000088262100/1119397401406525/?type=2&theater
As to the implications of Gödel's incompleteness theorem as applied to computers, Gödel himself stated:
"Either mathematics is too big for the human mind, or the human mind is more than a machine." - Kurt Gödel As quoted in Topoi : The Categorial Analysis of Logic (1979) by Robert Goldblatt, p. 13
Here are a few quotes backing up Gödel's claim:
The mathematical world - James Franklin - 7 April 2014
Excerpt: the intellect (is) immaterial and immortal. If today’s naturalists do not wish to agree with that, there is a challenge for them. ‘Don’t tell me, show me’: build an artificial intelligence system that imitates genuine mathematical insight. There seem to be no promising plans on the drawing board...
James Franklin is professor of mathematics at the University of New South Wales in Sydney.
http://aeon.co/magazine/world-views/what-is-left-for-mathematics-to-be-about/

The danger of artificial stupidity - Saturday, 28 February 2015
“Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational”
http://machineslikeus.com/news/danger-artificial-stupidity

Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson
Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.
http://cires.colorado.edu/~doug/philosophy/info8.pdf
As well, amid all the hype surrounding AI, people tend to forget one crucial weakness of these programs: each of the AI programs that people get so excited about does just one specific task extremely well, one specific task that it was painstakingly programmed, i.e. intelligently designed, to do. On any task it was not programmed for, such a program is perfectly worthless.
For all the hoopla surrounding each amazing AI advance, from IBM's Watson to Google's more recent Go-conquering machine, AlphaGo, we forget one critical detail in our amazement: Each of these machines does just one thing. They may do it remarkably well and fast, but that is all they can do. Watson cannot dance, clap, or take a bow. It cannot write a book, play the piano, or sing a song. It cannot drive a car, mow the lawn, or weed the garden. It cannot tell jokes and it does not laugh. It cannot recognize pictures of cats or identify faces. It cannot play Go or Chess. IBM is hoping it can assist in answering medical questions. We know it can win at Jeopardy! Watson does what all AI systems do: It captures and replays just one human ability.
http://www.evolutionnews.org/2016/08/what_does_it_me_1103056.html

Stephen Hawking Overestimates the Evolutionary Future of Smart Machines - May 7, 2014
Excerpt: The methods of Big Data, which I referred to yesterday, all show performance gains for well-defined problems, achieved by adding more and more input data -- right up to saturation. "Model saturation," as it's called, is the eventual flattening of a machine learning curve into an asymptote or a straight line, where there's no further learning, no matter how much more data you provide. Russell (one would hope) knows this, but the problem is not even mentioned in the piece, let alone explained. Instead, front and center is Hawking's ill-defined worry about a future involving "super" intelligence. This is hype, at its best...
Adding more data won't help these learning problems -- performance can even go down. This tells you something about the prospects for the continual "evolution" of smart machines...
Norvig conceded in an article in The Atlantic last year: "We could draw this curve: as we gain more data, how much better does our system get?" he says. "And the answer is, it's still improving -- but we are getting to the point where we get less benefit than we did in the past." This doesn't sound like the imminent rise of the machines.
http://www.evolutionnews.org/2014/05/in_an_apocalypt085311.html

Yes, "We've Been Wrong About Robots Before," and We Still Are - Erik J. Larson - November 12, 2014
Excerpt: Nothing has happened with IBM's "supercomputer" Watson... Outside of playing Jeopardy -- in an extremely circumscribed only-the-game-of-Jeopardy fashion -- the IBM system is completely, perfectly worthless... IBM, by the way, has a penchant for upping their market cap by coming out with a supercomputer that can perform a carefully circumscribed task with superfast computing techniques. Take Deep Blue beating Kasparov at chess in 1997. Deep Blue, like Watson, is useless outside of the task it was designed for...
Self-driving cars are another source of confusion. Heralded as evidence of a coming human-like intelligence, they're actually made possible by brute-force data: full-scale replicas of street grids using massive volumes of location data...
Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all. Take this statement, originally from computer scientist Hector Levesque (it also appears in Nicholas Carr's 2014 book about the dangers of automation, The Glass Cage): "The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?" Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not "data" about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "common sense knowledge."...
Having real knowledge about the world and bringing it to bear on our everyday cognitive problems is the hallmark of human intelligence, but it's a mystery to AI scientists, and has been for decades... Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.
http://www.evolutionnews.org/2014/11/yes_weve_been_w091071.html
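The "model saturation" Larson describes is easy to reproduce on a toy problem. The following sketch (Python with numpy, using made-up overlapping Gaussian data, not any of the systems discussed above) trains a simple nearest-centroid classifier on progressively larger samples; accuracy climbs at first and then flattens toward a noise-limited ceiling of roughly 69 percent here, no matter how much more data is provided:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n):
        """Two overlapping 1-D Gaussian classes; the overlap caps accuracy."""
        y = rng.integers(0, 2, n)                          # labels 0/1
        x = rng.normal(loc=y.astype(float), scale=1.0, size=n)
        return x, y

    def accuracy(n_train, n_test=20000):
        xtr, ytr = sample(n_train)
        # "Train": estimate one centroid per class, predict by nearest centroid.
        c0, c1 = xtr[ytr == 0].mean(), xtr[ytr == 1].mean()
        xte, yte = sample(n_test)
        pred = (np.abs(xte - c1) < np.abs(xte - c0)).astype(int)
        return (pred == yte).mean()

    # Accuracy rises with training data, then saturates near the optimal
    # (Bayes) limit of about 0.69 -- more data stops helping.
    for n in [10, 100, 1_000, 10_000, 100_000]:
        print(f"n={n:>6}  accuracy={accuracy(n):.3f}")

The flattening curve this prints is exactly the asymptote the excerpt calls model saturation: once the model has absorbed what the data can tell it, additional data yields no further learning.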
As mentioned in the preceding article, besides AI programs being limited to just one task, AI has another severe limitation: its inability to genuinely understand human language.
AI’s Language Problem - Will Knight - August 9, 2016
Machines that truly understand language would be incredibly useful. But we don’t know how to build them.
Excerpt: Systems like Siri and IBM’s Watson can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use... “There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” ... “It’s one of the most obvious things that set human intelligence apart.”...
Basically, Le’s program has no idea what it’s talking about. It understands that certain combinations of symbols go together, but it has no appreciation of the real world. It doesn’t know what a centipede actually looks like, or how it moves. It is still just an illusion of intelligence, without the kind of common sense that humans take for granted...
Cognitive scientists like MIT’s Tenenbaum theorize that important components of the mind are missing from today’s neural networks, no matter how large those networks might be.
https://www.technologyreview.com/s/602094/ais-language-problem/?set=602129
bornagain77
October 17, 2016, 05:12 AM PDT
