
Is artificial intelligence taking over? (AlphaGo version)


From Ross Pomeroy’s ultimate list of Top Ten science stories at RealClearScience:

Artificial Intelligence Defeats Go World Champion

This year we witnessed artificial intelligence master a new game: Go. Lee Sedol, the reigning world champion, predicted victory at the outset, but by the end of the five-game series he had won only a single bout against Google’s AlphaGo computer program. Google technicians trained AlphaGo using 30 million positions from 160,000 games of Go played by human experts. They later made the program play games against itself to grow in skill even further. Programs like AlphaGo with an enormous potential to learn could one day be harnessed to solve real-world problems. More.

Physicist Rob Sheldon offers a different take:

There have been several nice articles (cf. Wired) about why Go was such a difficult game to teach computers to play. Each turn in chess has something like 15 possible moves, so to “look ahead” 12 moves involves examining about (15)^12 = 129,746,337,890,625 possibilities. Clever ways to abandon losing approaches (say, by assigning points to each piece and keeping a running total) can winnow this down to something a computer can do in 15 minutes, which illustrates why chess computers are better than the best grandmasters. This is known as the “brute force” approach.
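The “abandon losing approaches” trick Sheldon describes is classically implemented as alpha-beta pruning on top of minimax search. Here is a toy sketch (the game tree, scores, and function name are invented for illustration; a real chess engine is vastly more elaborate):

```python
# Minimal minimax with alpha-beta pruning on a toy game tree.
# Interior nodes are lists of children; leaves are ints (static position
# scores, e.g. a running total of piece points).

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:              # remaining siblings cannot matter: prune
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

tree = [[3, 5], [2, [9, 1]], [0, -4]]
print(alphabeta(tree, True))   # → 3
```

The pruning never changes the answer; it only skips branches that provably cannot affect the final choice, which is what winnows the astronomical move count down to something tractable.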


Go, on the other hand, has about 250 possibilities for each turn. Even six moves ahead is (250)^6 = 244,140,625,000,000 possibilities, which is more than chess at 12 moves. Unlike chess, clever strategies are based not on points but on position, which is a much harder thing to quantify. As a consequence, the state of Go software was pitiful, with even Go beginners able to beat the best software.
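Sheldon’s arithmetic checks out and is easy to verify directly:

```python
# Checking the branching-factor arithmetic quoted above.
chess = 15 ** 12   # ~15 moves per turn, looking 12 moves ahead
go = 250 ** 6      # ~250 moves per turn, only 6 moves ahead
print(chess)       # 129746337890625
print(go)          # 244140625000000
print(go > chess)  # True: six Go moves ahead already exceed twelve in chess
```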

So Google decided on a different approach. They coded up a neural net that is good at pattern recognition, trained it on the corpus of published games, and then had the computer play against itself another 30 million times. This gave it the power to use moves and patterns never before seen in published play.
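The self-play idea can be illustrated with a deliberately tiny toy. To be clear, this is nothing like AlphaGo’s deep networks plus Monte Carlo tree search; the game, the tabular “policy,” and the update rule below are all invented for illustration. A policy table tracks the running win rate of each move in a simple take-1-or-2-stones game and improves purely by playing against itself:

```python
import random

# Toy self-play learner. Game: players alternately take 1 or 2 stones
# from a pile of 10; whoever takes the last stone wins.

def play_game(policy, explore=0.2):
    """Self-play one game; return the move history and the winner (0 or 1)."""
    pile, player, history = 10, 0, []
    while pile > 0:
        legal = [m for m in (1, 2) if m <= pile]
        if random.random() < explore:            # occasionally try something new
            move = random.choice(legal)
        else:                                    # otherwise pick best-known move
            move = max(legal, key=lambda m: policy[(pile, m)])
        history.append((pile, move, player))
        pile -= move
        player ^= 1
    return history, player ^ 1                   # the last mover won

def train(games=5000, seed=0):
    random.seed(seed)
    policy = {(p, m): 0.0 for p in range(1, 11) for m in (1, 2) if m <= p}
    counts = {k: 1 for k in policy}              # phantom count keeps values in [0, 1]
    for _ in range(games):
        history, winner = play_game(policy)
        for pile, move, player in history:
            k = (pile, move)
            counts[k] += 1
            reward = 1.0 if player == winner else 0.0
            policy[k] += (reward - policy[k]) / counts[k]   # running win rate
    return policy

policy = train()
print(policy[(2, 2)], policy[(2, 1)])   # taking both of the last 2 stones wins
```

After training, the learned win rates favor moves no one “taught” it, purely from games against itself; that bare idea, scaled up enormously, is what the paragraph above describes.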


It was this software’s novel gameplay that stumped Lee Sedol. But as with most computer software, once you understand how it thinks, you can outwit it, which is what he did in the fourth game. My guess is that Lee would get better and better at defeating AlphaGo over the first five years, until Google’s programmers learned to expect his counter-attacks and built a neural net for that as well.

Does this mean that AI will take over the world? Far from it.

The tasks that humans find so difficult are easy-peasy for a computer, whereas things that humans learn at age 3 (social interactions, how to tie your shoes) are quite difficult for a computer. But AI will mean that our culture will need to re-evaluate what skills it rewards. There was a time when travel agents and accountants were highly paid; perhaps in the future we will highly pay script writers and socialites.

See also: The Singularity is unlikely

and

Claim: Humanity and AI inseparable by 2021. Most apocalypses actually can’t happen because they are competitive: subtractive, not additive. The TED talks will, however, assuredly happen.

5 Replies to “Is artificial intelligence taking over? (AlphaGo version)”

  1. bornagain77 says:

    A few related notes:

    Yes, “We’ve Been Wrong About Robots Before,” and We Still Are – Erik J. Larson – November 12, 2014
    Excerpt: Nothing has happened with IBM’s “supercomputer” Watson,,, Outside of playing Jeopardy — in an extremely circumscribed only-the-game-of-Jeopardy fashion — the IBM system is completely, perfectly worthless.,,, IBM, by the way, has a penchant for upping their market cap by coming out with a supercomputer that can perform a carefully circumscribed task with superfast computing techniques. Take Deep Blue beating Kasparov at chess in 1997. Deep Blue, like Watson, is useless outside of the task it was designed for,,,
    Self-driving cars are another source of confusion. Heralded as evidence of a coming human-like intelligence, they’re actually made possible by brute-force data: full-scale replicas of street grids using massive volumes of location data.,,,
    Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all. Take this statement, originally from computer scientist Hector Levesque (it also appears in Nicholas Carr’s 2014 book about the dangers of automation, The Glass Cage):
    “The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?”
    Watson would not perform well in answering this question, nor would Deep Blue. In fact there are no extant AI systems that have a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world. Not “data” about word frequencies in languages or GPS coordinates or probability scoring of next-best chess moves or canned questions to canned answers in Jeopardy. It requires what AI researchers call “world knowledge” or “common sense knowledge.”,,
    Having real knowledge about the world and bringing it to bear on our everyday cognitive problems is the hallmark of human intelligence, but it’s a mystery to AI scientists, and has been for decades.,,,
    Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.
    http://www.evolutionnews.org/2.....91071.html

    Stephen Hawking Overestimates the Evolutionary Future of Smart Machines – May 7, 2014
    Excerpt: The methods of Big Data, which I referred to yesterday, all show performance gains for well-defined problems, achieved by adding more and more input data — right up to saturation. “Model saturation,” as it’s called, is the eventual flattening of a machine learning curve into an asymptote or a straight line, where there’s no further learning, no matter how much more data you provide. Russell (one would hope) knows this, but the problem is not even mentioned in the piece, let alone explained. Instead, front and center is Hawking’s ill-defined worry about a future involving “super” intelligence. This is hype, at its best.,,,
    Adding more data won’t help these learning problems — performance can even go down. This tells you something about the prospects for the continual “evolution” of smart machines.,,,
    Norvig conceded in an article in The Atlantic last year:
    “We could draw this curve: as we gain more data, how much better does our system get?” he says. “And the answer is, it’s still improving — but we are getting to the point where we get less benefit than we did in the past.”
    This doesn’t sound like the imminent rise of the machines.
    http://www.evolutionnews.org/2.....85311.html

    AI’s Language Problem
    Machines that truly understand language would be incredibly useful. But we don’t know how to build them.
    by Will Knight August 9, 2016
    Excerpt: Systems like Siri and IBM’s Watson can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use.,,,
    “There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” ,,,
    “It’s one of the most obvious things that set human intelligence apart.”,,,
    Basically, Le’s program has no idea what it’s talking about. It understands that certain combinations of symbols go together, but it has no appreciation of the real world. It doesn’t know what a centipede actually looks like, or how it moves. It is still just an illusion of intelligence, without the kind of common sense that humans take for granted.,,,
    Cognitive scientists like MIT’s Tenenbaum theorize that important components of the mind are missing from today’s neural networks, no matter how large those networks might be.
    https://www.technologyreview.com/s/602094/ais-language-problem/?set=602129

    What Is a Mind? More Hype from Big Data – Erik J. Larson – May 6, 2014
    Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, “Understanding Natural Language,” about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland’s article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required “holistic interpretation.” That is, the ambiguities weren’t resolvable except by taking a broader context into account. The words by themselves weren’t enough.
    Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal “test” to see if his claims were still valid today.,,,
    ,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland’s point about machine translation made afresh, in 2014.
    Erik J. Larson – Founder and CEO of a software company in Austin, Texas
    http://www.evolutionnews.org/2.....85251.html

    Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson
    Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.
    http://cires.colorado.edu/~dou...../info8.pdf

    The mathematical world – James Franklin – 7 April 2014
    Excerpt: the intellect (is) immaterial and immortal. If today’s naturalists do not wish to agree with that, there is a challenge for them. ‘Don’t tell me, show me’: build an artificial intelligence system that imitates genuine mathematical insight. There seem to be no promising plans on the drawing board.,,,
    James Franklin is professor of mathematics at the University of New South Wales in Sydney.
    http://aeon.co/magazine/world-.....-be-about/

    Evolutionary Computing: The Invisible Hand of Intelligence – June 17, 2015
    Excerpt: William Dembski and Robert Marks have shown that no evolutionary algorithm is superior to blind search — unless information is added from an intelligent cause, which means it is not, in the Darwinian sense, an evolutionary algorithm after all. This mathematically proven law, based on the accepted No Free Lunch Theorems, seems to be lost on the champions of evolutionary computing. Researchers keep confusing an evolutionary algorithm (a form of artificial selection) with “natural evolution.” ,,,
    Marks and Dembski account for the invisible hand required in evolutionary computing. The Lab’s website states, “The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.” So yes, systems can evolve, but when they appear to solve a problem (such as generating complex specified information or reaching a sufficiently narrow predefined target), intelligence can be shown to be active. Any internally generated information is conserved or degraded by the law of Conservation of Information.,,,
    What Marks and Dembski (mathematically) prove is as scientifically valid and relevant as Gödel’s Incompleteness Theorem in mathematics. You can’t prove a system of mathematics from within the system, and you can’t derive an information-rich pattern from within the pattern.,,,
    http://www.evolutionnews.org/2.....96931.html

    Artificial Intelligence debunked in one short paragraph:

    Your Computer Doesn’t Know Anything – Michael Egnor – January 23, 2015
    Excerpt: Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning.
    http://www.evolutionnews.org/2.....92981.html

    Since a computer has no free will with which to create new information, nor consciousness so as to take the overall context of information into consideration, one simple way of defeating a computer in a Turing test is simply to tell, or to invent, a new joke:

    Such as this joke:

    Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon
    http://imgs.xkcd.com/comics/turing_test.png

  2. Dionisio says:

    […] things that humans learn at age 3 (social interactions, how to tie your shoes) are quite difficult for a computer.

    I don’t think it would be too difficult for a robot to tie shoes. That’s a 4D algorithmic procedure.

    But I don’t think any strong AI system will ever be able to resolve the situation described in the comments posted in the following thread:

    http://www.uncommondescent.com.....ent-619696

    The hard problem of consciousness, but much harder.

  3. Dionisio says:

    Forget Chess and Go games. Let’s get serious.

    Maybe AI could help scientists resolve the problem linked below much sooner than the author of this insightful paper predicts?

    http://www.uncommondescent.com.....ent-622846

    Strong AI is nonsense hogwash for the credulous masses.
    Test everything and hold what is good.
    Any comments on this?

  4. bornagain77 says:

    A few more notes:

    For Artificial Intelligence, Humor Is a Bridge Too Far – November 13, 2014
    Excerpt: Thoughtful reader Paul comments on Erik Larson’s post “Yes, ‘We’ve Been Wrong About Robots Before,’ and We Still Are”:
    “The article reminded me of an exercise in one of my first programming books that made me aware of the limits of computers and AI. I’ve forgotten the author of the book, but the problem was something like the following: “Write a program that takes in a stream of characters that represent a joke, reads the input and decides whether it’s funny or not.”
    It’s a perfect illustration of Erik’s statement, “Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all.” Even when my grandchildren were very young I marveled at how they grasped the humor of a joke, even a subtle one.”
    Yes, when a computer can identify, tell, or — even better — come up with a good joke, I’ll look a little less skeptically on claims of machines soon surpassing us other than in, as Erik Larson writes, “brute-force computation of circumscribed tasks.”
    http://www.evolutionnews.org/2.....91211.html

    Natural forces, intelligently designed computers, and Darwinian processes in particular, simply do not create new functional information:

    LIFE’S CONSERVATION LAW – William Dembski – Robert Marks – Pg. 13
    Excerpt: (Computer) Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
    http://evoinfo.org/publication.....ation-law/

    “systems can evolve, but when they appear to solve a problem (such as generating complex specified information or reaching a sufficiently narrow predefined target), intelligence can be shown to be active. Any internally generated information is conserved or degraded by the law of Conservation of Information.,,,”
    http://www.evolutionnews.org/2.....96931.html

    In regards to learning about the ‘brick wall’ limitation on unguided material processes ever creating even trivial levels of functional information, I highly recommend Wiker & Witt’s book “A Meaningful World”, in which they show, using the “Methinks it is like a weasel” phrase (a phrase that Richard Dawkins infamously borrowed from Shakespeare’s play Hamlet in order to try to illustrate the feasibility of evolutionary algorithms), that the ‘information problem’ is much worse for Darwinists than just finding the “Methinks it is like a weasel” phrase by an unguided search.

    Basically, this ‘brick wall’ limitation exists because the “Methinks it is like a weasel” phrase doesn’t make any sense unless the entire context of the play Hamlet is taken into consideration.

    Computers simply ‘can’t do context’! A subjective mind is required in order to take an overall context into consideration.

    Moreover, the context in which the weasel phrase finds its meaning is derived from several different levels of the play: the entire play, who said it, why it was said, where it was said, and even nuances of Elizabethan culture must all be taken into consideration to provide proper context for the phrase.
    Dawkins’s infamous Weasel phrase simply does not make sense without its proper context.

    A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature – Book Review
    Excerpt: They focus instead on what “Methinks it is like a weasel” really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the “it” refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part.
    http://www.thinkingchristian.n.....821202417/

    In fact, it is interesting to note what the overall context for the “Methinks it is like a weasel” phrase is.

    Richard Dawkins’s Weasel Program Is Bad in Ways You Never Dreamed – Jonathan Witt – September 23, 2016
    Excerpt: “METHINKS IT IS LIKE A WEASEL.”,,,
    The whole scene and the wider tension between the two men, in other words, actually involves Polonius’s refusal to see intelligent design where it actually exists — namely, in the designed death, the murder, of old King Hamlet. Polonius attributes the old king’s death to purely blind, material causes when in fact the king’s death was intelligently designed — that is, foul play.
    Richard Dawkins Is Polonius
    One parallel to the origins science debate, then, is that Richard Dawkins is a modern day Polonius: He ignores the evidence of intelligent design that should be abundantly clear to him.
    And the moral, if we’re willing to draw a line so far afield from the original play to our present context: Don’t be Richard Dawkins. Don’t mistake an intelligent cause for a natural one. Don’t miss the wider context:
    http://www.evolutionnews.org/2.....03162.html

    Moreover, the specific context in which the phrase is used also illustrates the spineless nature of one of the play’s characters, i.e. just how easily Polonius can be led around by the nose to say anything that Hamlet wants him to say:

    Ham. Do you see yonder cloud that ’s almost in shape of a camel?
    Pol. By the mass, and ’t is like a camel, indeed.
    Ham. Methinks it is like a weasel.
    Pol. It is backed like a weasel.
    Ham. Or like a whale?
    Pol. Very like a whale.
    http://www.bartleby.com/100/138.32.147.html

    i.e. The phrase, when taken in its proper context, reveals deliberate, nuanced deception and manipulation of another person.
    After realizing what the actual context of the ‘Methinks it is like a weasel’ phrase was, I remember thinking to myself that it was perhaps the worst possible phrase that Dawkins could have chosen to illustrate his point.
    I’m sure deception and manipulation of other people is hardly the point that Dawkins was trying to convey with his infamous ‘Weasel’ program.

    Verse:

    2 Corinthians 3:3
    being made manifest that ye are an epistle of Christ, ministered by us, written not with ink, but with the Spirit of the living God; not in tables of stone, but in tables that are hearts of flesh.

  5. Dionisio says:

    BA77 @4:

    Write a program that takes in a stream of characters that represent a joke, reads the input and decides whether it’s funny or not

    Bingo! Excellent illustration. Actually, the same story could be a joke for someone but not for everyone who hears, reads, or watches it. In some cases what somebody interprets as a joke could be offensive to somebody else. There are situations where what is supposed to be a joke could instead produce a feeling of pity toward those who laugh at the alleged ‘joke’, because it reveals their spiritual blindness. A scene, a picture, a song, a story may produce different reactions in different persons.
    Somehow related to the problem referenced @2.

    The hard problem of consciousness, but much harder. 🙂
