
Computer beats humans at Go: so what?


The news that a computer program has beaten Go master Lee Se-dol in a best-of-five competition may have shocked some readers. In this post, I’d like to explain why I don’t think it matters much at all, by telling a little story about three guys named Tom, Sam and Al.

Tom has a brilliant mind. He puts his perspicacious intellect to good use by playing mentally challenging games, and he always wins. Tom’s freakish ability to win games by performing astonishing leaps of mental intuition leaves many spectators baffled. “How on earth do you do it?” they ask him, whenever he chalks up a victory against yet another hapless opponent. “It’s a strange gift I have,” Tom answers modestly. “I can’t really explain it, even to myself. I just have these mysterious intuitions that come to me out of the blue, and that enable me to win.”

Tom’s reputation spreads far and wide. Those who witness his spectacular triumphs are amazed and dumbfounded. After a while, people start calling him the world’s best game player.

How Sam beat Tom

One day, a stranger named Sam shows up in town. Sam walks up to Tom (who is sitting in a bar) and says to him in a loud voice, “I can beat you!”

“No, you can’t,” answers Tom, “but you’re welcome to try anyway. Name your game.”

“Chess,” says Sam. “You know what they say: it’s the game of kings.”

“Good choice,” replies Tom. “I love that game.”

“I have a question,” says Sam. “Do you mind if I get some assistants to help me choose my moves?” “Not at all,” answers Tom. “I’m quite willing to be generous. Bring as many assistants as you like.”

Sam has one more question. “Since I have a very large number of assistants, do you mind if I contact them via email while I play, instead of bringing them all here?”

“Not at all,” replies Tom. “That’s fine by me.”

“That’s a big relief,” says Sam. “Actually, I have millions and millions of assistants. And it’s a good thing that they’re helping me, because I really don’t know much about chess. Nor do they, for that matter. But together, we’ll beat you.”

Now Tom looks puzzled. “How are you going to beat me,” he asks, “if you don’t really know the game?”

“By brute force,” answers Sam. “Each of my assistants is good at just one thing: evaluating a chess position. Thanks to my army of assistants, who are extremely well-organized and who are also very good at rapidly evaluating positions and sharing information with one another via email, I am effectively capable of evaluating hundreds of millions of chess positions in just a few seconds. I’ve also compiled a list of good opening and endgame moves, as well as good moves in various tricky situations, by studying some past games played by chess experts.”
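
(An aside for programmers: Sam’s method is essentially classical game-tree search, a fast static evaluation applied to an enormous number of positions, plus an opening book. Below is a minimal sketch of minimax search with alpha-beta pruning, the standard form of that idea; the `evaluate`, `legal_moves` and `apply_move` functions are placeholder stubs, not a real chess engine.)

```python
# Minimal sketch of Sam's brute-force play: minimax search with alpha-beta
# pruning. The three stubs stand in for the army of assistants, each of whom
# can do just one thing: quickly judge a single position.

def evaluate(position):
    """Static score of a position from the searching player's view (stub)."""
    raise NotImplementedError

def legal_moves(position):
    """All moves playable from this position (stub)."""
    raise NotImplementedError

def apply_move(position, move):
    """The position reached after playing `move` (stub)."""
    raise NotImplementedError

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(apply_move(position, m), depth - 1,
                                       alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for m in moves:
            best = min(best, alphabeta(apply_move(position, m), depth - 1,
                                       alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```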

“Well, that sounds like an interesting way to play,” says Tom. “But speed and a list of good moves are no substitute for intuition. You and your assistants lack the ability to see the big picture. You can’t put it all together in your head, like I can.”

“We may lack your intuition,” responds Sam, “but because we’re fast, we can evaluate many moves that would never occur to you, and what’s more, we can see further ahead than you can. Do you want to try your luck against us?”

“Game on!” says Tom.

After just 20 minutes, it’s game over for Tom. For the first time in his life, he has been soundly defeated. He and Sam shake hands in a gentlemanly fashion after the game, and return to Tom’s favorite bar, where they both order a beer.

Tom is quiet for a while. Suddenly, he muses aloud, “I think I finally understand, Sam. What you’ve taught me is that the game of chess is fundamentally no different from a game of noughts and crosses, or Tic-Tac-Toe. It’s a game which yields to brute force calculations. My intuition enables me to see ahead, and identify lots of good moves that my opponents can’t see, because they’re not as far-sighted as I am. But your brute-strength approach is more than a match for my intuition. I’m limited by the fact that I can’t see all of the good moves I could make. You and your army of assistants can do that. No wonder you won, when you played me. Still, it’s taught me a valuable lesson about the limits of human intuition. Congratulations on your victory.”

“So you’re going to give up calling yourself the world’s best game player?” asks Sam.

“Not quite,” answers Tom. “From now on, I’m going to call myself the world’s best player of interesting games. By an ‘interesting game,’ I mean one that doesn’t yield to brute-strength calculations – in other words, one that requires a certain degree of intuition in order to be played well.”

“Would you care to nominate a game that fits that description?” inquires Sam.

“My nomination is the game of Go, which has been called the most complex of games,” replies Tom. “The number of possible positions on a 19 x 19 Go board is about 10^170, which is far greater than the number of atoms in the observable universe. There’s no way that you and your army of assistants can evaluate that many positions. Admit it: you don’t have a hope of beating me at Go.”

“You’re right; we don’t,” acknowledges Sam. “But I know another man who I think can beat you. His name is Al. Remember that name. At the moment, he’s perfecting his game, but he’s improving by leaps and bounds. You’ll probably see him a few years from now.”

“I look forward to the challenge,” replies Tom. “Farewell, Sam, and take care.”

Al arrives in town

A few years later, Sam’s prophecy comes to pass. A peculiar-looking man in a dazzling purple cloak rides into town, and asks to see Tom. “Hi, Tom. I’m Al,” he says. “I’d like to challenge you to a game of Go. Although I have none of your brilliant intuition, I’m quite confident that I can win.”

“I really don’t see how you can,” answers Tom. “Even if you had an entire universe full of people helping you to choose your next move, there’s no way you could possibly see far enough ahead to properly evaluate all possible moves you could make. Without a brute strength approach, you really need intuition, in order to win.”

“Oh no you don’t,” Al replies. “It turns out that the game of Go has a long, long history which you know nothing about. On Earth, it first appeared in China, more than 2,500 years ago. But it was brought to Earth by aliens. I’ve been in contact with them: in fact, it was they who gave me this colorful cloak, which can instantly turn any color I tell it to, as well as turning invisible.”

“Wait a minute,” interrupts Tom. “Forget about the cloak. You mean to say I’ll be playing against a bunch of aliens?”

“By no means,” replies Al. “You’ll be playing against me, and I can promise you, I won’t be talking to any aliens, either. But I should tell you that aliens have been playing the game of Go for billions of years: in fact, there’s even an inter-galactic Go club. However, they play it in a very different way from you, Tom. They don’t rely on intuition at all.”

“How do they play, then?” asks Tom, perplexed.

“They play incrementally, by gradually building up a set of smart and successful moves in various situations,” answers Al. “A long time ago, the list of smart moves was fairly short: you could fit them all in one book. Now, after billions of years, the list is much bigger. When aliens play Go, they do so by following the rule book up to a certain point, and then trying out something a little bit new and different. It doesn’t make for very exciting games, but it does make for smart tactics. Recently, the aliens were kind enough to give me their list of moves. However, it’s so big that I’ll require an army of earthly assistants to help me search through the list, in order to keep within the time limits of the game. None of these assistants knows anything about the game of Go, but they’ll be communicating with me via email. I have to say that I know very little about the game of Go myself, but I’m going to be playing by the aliens’ rules. Is that all right with you?”
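
(Another aside for programmers: in machine-learning terms, the aliens’ style of play amounts to exploiting a huge store of accumulated moves while occasionally exploring something new. Here is a hypothetical sketch of that policy; the `rule_book` dictionary and the move functions are illustrative, not any real Go program.)

```python
import random

EPSILON = 0.05  # how often to deviate from the accumulated rule book

def choose_move(position, rule_book, all_moves):
    """Follow the book of known-good moves, but occasionally try something new."""
    known_good = rule_book.get(position)
    if known_good and random.random() > EPSILON:
        return random.choice(known_good)        # exploit accumulated wisdom
    return random.choice(all_moves(position))   # explore a slightly new move

def record_outcome(position, move, won, rule_book):
    """Grow the book: moves that led to wins become 'smart moves'."""
    if won:
        rule_book.setdefault(position, []).append(move)
```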

“Certainly,” replies Tom. “The aliens’ way of playing sounds rather dull to me. I’m going to spice up the game with some human intuition. You’ll soon see that nothing can beat intuition, in an interesting game like Go, where the best move can’t possibly be calculated.”

They sit down to play. After about an hour, Tom is forced to resign. In a dazed tone of voice, he asks, “How did you do it, Al?”

“I think I can explain, although I’m no Go expert,” answers Al. “Essentially, what I did was to pool the combined wisdom of billions of players who came before me. You were playing against that. What my victory means is that a sufficient amount of experience can beat human intuition, in a tactical game. But that’s hardly surprising, is it?”

Tom reflects for a while and finally replies, “No, Al, it isn’t. I was wrong to think that I could defeat the combined wisdom of so many players. I’ve come to appreciate the limits of human intuition. What I’m wondering now is: are there any situations where intuitions are indispensable?”

Tom reflects on the nature of human intuition, and where it might prove indispensable

Tom ponders again for a while. After a long silence, he announces, “I think I can see two kinds of cases where intuitions are indeed irreplaceable. One is in a game where the goal cannot be described in objective, ‘third-person’ language; it can only be described in subjective terms which refer to the beliefs, desires and intentions of the other players. To win the game, you have to be able to put yourself in other people’s shoes. While a list of ‘smart moves’ might serve you well up to a point, it won’t help you in novel or unexpected situations, where you can only figure out what you should do by asking yourself what the other person would want you to do in that situation. Experience can never trump empathy.”

Tom continues: “The other case I can think of where intuition would be needed is in a situation where trying out incremental improvements won’t help you to get from A to B, simply because there are too many improvements to try out, making the search for the best move like searching for a needle in a haystack. Experience won’t help here, because there isn’t enough time to narrow down the search. Without a flash of insight, you’ll never be able to spot the right move to make, in moving towards your desired goal.”

Al is curious. “Would you care to offer any examples of these two cases you’ve proposed?” he asks.

“Happy to oblige,” answers Tom. “Right now, in the United States, there’s a presidential election going on. Politics is a game, and the U.S. presidential election is a winner-take-all game. But it’s not enough for the successful candidate to be a policy wonk, who knows how to fix the American economy, or even a ‘steady pair of hands,’ capable of handling any domestic or international crisis that might come up. You need more than intelligence and experience to win a presidential election. You need to be a good speaker, who is capable of inspiring people. You also need to be capable of leadership, so it definitely helps if you have a commanding presence and ‘sound presidential.’ It helps, too, if you have excellent networking skills, to help you raise lots of money, which you’ll need to finance your campaign. In addition to that, you need to be a fairly likable person: nobody wants to elect a curmudgeon, no matter how clever, experienced or commanding he or she may be. On top of that, you need to be capable of empathy: you need to be able to show the public that you are genuinely capable of feeling other people’s pain, or people will spot you for a phony and dismiss you as cold and uncaring. Oh – and you’d better be at least as ethical as your opponents, or people will perceive you as a liar and a crook, and they probably won’t vote for you. As you can see, many of these skills require the ability to identify with other people. You simply can’t bluff your way through a presidential campaign with a catalogue of smart moves or canned responses. It’s too unpredictable. Let me put it another way. You could design a robot that could beat a human at the tactical games I’ve practiced playing, over the years. But you could never design a robot that could win an American presidential election. Only a human being who is capable of genuine empathy and of intuiting the right thing to do when interacting with other people could win a contest like that.”

“Interesting,” says Al. “What about your other case?”

“Protein design would be an excellent example of a challenge requiring leaps of human intuition,” answers Tom. “Very short proteins might arise naturally, but once you get to proteins that are more than 150 amino acids in length, the space of possibilities is simply too vast to explore, as Dr. Douglas Axe demonstrates in his 2010 paper, The Case Against a Darwinian Origin of Protein Folds. In his own words:

The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.

“I’d say a situation like that calls for the intuitive insight of an intelligent designer, wouldn’t you?” asks Tom.

“If Dr. Axe’s premises are correct, then it’s difficult to avoid that conclusion,” concedes Al. “But I’m not a biochemist, so I can’t really say. Still, I can at least see what you mean, now. One thing troubles me, though.”

“What’s that?” asks Tom.

“The two kinds of cases you’ve described are quite different in character,” replies Al. “One requires the ability to put yourself in other people’s shoes, while the other requires the ability to make a mental leap that surpasses the power of any computer or any trial-and-error process. What I’d like to know is: what is it that ties these two kinds of cases together?”

“That’s a very good question,” answers Tom. “I really don’t know. What I do know, however, is that all my life, the games I’ve been playing are only a tiny subset of the vast range of games that people play in real life, let alone the truly enormous set of games played by the Creator of the cosmos, when designing Nature. I’ve now come to realize that losing at chess and Go doesn’t matter very much, in the scheme of things. There are far more interesting games to play. And now, I’m off.”

“Where are you off to?” asks Al.

“Washington,” answers Tom. “I’m going to try my hand at political forecasting. Maybe I’ll succeed, or maybe fall flat on my face. But you’ve given me a lot to think about, Al. I’m going to try out some of the new ideas you’ve given me, and put them to the test. Wish me luck!”

I shall end my story there. I wonder if any of my readers can shed some light on the question posed by Al on human intuition, at the end of my story. What, if anything, unifies the two kinds of cases I have described?

Before I finish, I’d like to quote a short passage from philosopher David Oderberg, who is now professor of philosophy at the University of Reading, England. Writing in the Australian magazine Quadrant (Vol. 42, No. 3, 1998: 5-10), he said:

“…[T]he game of chess, in itself, is nothing more than glorified noughts and crosses. Sure, it can be played with finesse, ingenuity, artistry and so on; but that is peripheral. In essence, chess is a formal system of well-defined axioms and rules, with a well-defined goal. No wonder a computer can play it at all. We should be amazed if it couldn’t.”

Food for thought. And now, over to you.

P.S. Perceptive readers will have noticed some similarities between my story and philosopher John Searle’s Chinese room thought experiment. My intention here, however, is not to address the question of whether computers think, or whether they are conscious, but rather, to explore the strengths and weaknesses of human intuition.

88 Replies to “Computer beats humans at Go: so what?”

  1. 1
    daveS says:

    The news that a computer program has beaten Go master Lee Se-dol in a best-of-five competition may have shocked some readers. In this post, I’d like to explain why I don’t think it matters much at all, by telling a little story about three guys named Tom, Sam and Al.

    For me the shocking thing is that this is happening quite a bit sooner than expected. Perhaps Fan Hui will turn out to be the last human to beat the leading computer program at Go.

  2. 2
    hrun0815 says:

    Ah yes, the standard commentary (albeit long in this case) to every advancement in AI research: “Yes, sure, but is it intelligent?”

  3. 3
    hrun0815 says:

    And this: “But you could never design a robot that could win an American presidential election.” sounds very much like “But you could never design a robot that could successfully beat a Turing test.”

  4. 4
    Aleta says:

    Go is a wonderful game, and the kind of strategic decisions one has to make are very different from the more linear thinking involved in chess. I first learned to play when I was 14, using thumbtacks pushed into the top of a cardboard box, and I still have the pieces I bought in Chinatown in 1967.

    This comment has absolutely nothing to do with the OP – I’m just glad the game itself is getting some publicity, and it’s fun for me to reminisce.

  5. 5
    hrun0815 says:

    And just one last point: In your analogy you are reasonably well describing how Deep Blue worked when playing chess, but it completely fails to describe how AlphaGo plays the game.

  6. 6
    Mapou says:

    AlphaGo uses a combination of AI techniques to play Go. First, its deep neural network was trained on thousands of patterns from pre-recorded games played by humans. This is called the “policy network”. Second, it uses a combination of the Monte Carlo tree-search algorithm and reinforcement learning to build a value network. It was further trained by playing against other instances of itself millions of times.
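
    In skeleton form, the Monte Carlo tree-search part can be sketched in a few dozen lines of Python. This is a greatly simplified, hypothetical toy (pure MCTS with the UCB1 selection rule and random playouts), not AlphaGo’s actual code, which also consults its policy and value networks at every step:

```python
import math
import random

# Greatly simplified Monte Carlo tree search: descend the tree by the UCB1
# rule, expand one new node, play a random game to the end, and back the
# result up the path. For brevity this ignores the alternation of player
# perspective; AlphaGo's real search also biases selection with a policy
# network and scores leaves with a value network.

class Node:
    def __init__(self, position):
        self.position = position
        self.children = {}   # move -> Node
        self.visits = 0
        self.wins = 0.0

def ucb1(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(root, n_sims, legal_moves, apply_move, random_playout):
    for _ in range(n_sims):
        node, path = root, [root]
        while True:                      # selection / expansion
            moves = legal_moves(node.position)
            if not moves:
                break
            fresh = [m for m in moves if m not in node.children]
            if fresh:
                m = random.choice(fresh)
                node.children[m] = Node(apply_move(node.position, m))
                path.append(node.children[m])
                break
            parent = node
            node = max(parent.children.values(),
                       key=lambda ch: ucb1(parent, ch))
            path.append(node)
        result = random_playout(path[-1].position)  # 1.0 win, 0.0 loss (stub)
        for n in path:                   # backpropagation
            n.visits += 1
            n.wins += result
    # Best move from the root = the most-visited child.
    return max(root.children, key=lambda m: root.children[m].visits)
```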

    So is it intelligent? Yes, of course, but it’s a dumb, brittle, domain-specific type of intelligence. It’s not the kind of intelligence you would want to drive you and your car to work in the morning. Why? Only because it’s helpless if it encounters a situation it has never seen before. And with all of its prowess, AlphaGo could not tell the difference between Go and tic-tac-toe and could not even tell you what a board game is.

    The problem with current AI technologies is that, in spite of claims to the contrary, they are still doing GOFAI (good old fashioned AI). Supervised deep learning is still the same “baby boomer” symbolic AI of the last century but with a little lipstick on. One only needs to ask oneself the following question: Which is harder, beating a Go Grandmaster or doing the laundry (loading, washing, drying, ironing, folding, etc.)? We humans have no trouble doing the laundry and all types of other routine chores around the house. Current AI systems would be stumped right out of their GPUs.

    In conclusion, truly general AI is coming but it’s not coming from the mainstream AI community. Those guys are totally out to lunch. They’re not even in the same galaxy as the right approach, let alone the same ballpark.

  7. 7
    hrun0815 says:

    It’s not the kind of intelligence you would want to drive you and your car to work in the morning.

    Funny you bring that up…

  8. 8
    Mapou says:

    hrun0815:

    Funny you bring that up…

    Yes, it is funny, considering all the buzz about self-driving cars from Google and the like. But guess what? Fully autonomous self-driving cars are still many years away. Their makers will have to spend many more years training them before they can be confident enough to let them loose on their own. The current crop of autonomous vehicles is completely lost when faced with the hand signals of a road construction worker. And forget about any detour that is not registered in their internal GPS-enabled maps.

  9. 9
    Origenes says:

    The BBC article states: “Unlike the real world, a closed system of fixed rules suits computing” and I hold that this is spot on. Computers can only excel when there is a fixed set of rules — a fixed context, as with chess and Go. However computers are helpless when a new context is in place. Why is this? Because in order to calculate a computer needs a correct “atomization of meaning”, which breaks down when contexts are changing.

    Professor Edward F. Kelly wrote about his initial optimism about Computational Theory of the Mind [CTM], before he understood this problem:

    I must also acknowledge here that I myself initially embraced the computational theory practically without reservation. It certainly seemed an enormous step forward at the time. Fellow graduate students likely remember my oft-repeated attempts to assure them that the CTM would soon solve this or that fundamental problem in psychology. But all was not well.

    Later on it hit him:

    Any scheme based on atomization of meaning would necessarily fail to capture what to me had become the most characteristic property of word-meaning, a felt Gestalt quality or wholeness, at a level of generality that naturally supports extensions of usage into an indefinite variety—indeed whole families—of novel but appropriate contexts. The existing proposals could only represent the content of a general term such as “line” by some sample of its possible particularizations, and in so doing rendered themselves systematically unable to distinguish between metaphorical truth and literal falsehood.

    I also noted with a certain degree of alarm that this crucial property of generality underlying the normal use of words seemed continuous with developmentally earlier achievements. Skilled motor acts and perceptual recognition, for example, require similar on-the-fly adaptations of past learning to present circumstance.
    The importance of incorporating more general knowledge of the world into language-processing models, for example, had already begun to be recognized, and new formal devices were being introduced to represent what the computer needed to know (what we ourselves know) about various sorts of “typical” situations it might encounter. But it seemed clear to me that all of these knowledge-representation devices, such as “frames” (Minsky, 1975), “scripts” (Schank & Colby, 1973), and “schemata” (Neisser, 1976), suffered essentially the same problems I had identified in the Katz and Fodor account of word meaning. Specifically, they required the possible scenarios of application to be spelled out in advance, in great but necessarily incomplete detail, and as a result ended up being “brittle,” intolerant of even minor departures from the preprogrammed expectations.

    Many of the themes just sounded have been confirmed and amplified in more recent work. On the positive side, our knowledge of the content, organization, and development of the human conceptual system has increased enormously. The old Socratic idea that concepts must be defined in terms of necessary and sufficient features has given way to a recognition of the role of perceptual-level examples, prototypes, and family resemblances in the content of real human concepts (Medin & Heit, 1999; Rosch & Lloyd, 1978; E. E. Smith & Medin, 1981). The contributions of a fundamental human capacity for metaphorizing at levels ranging from everyday language to the highest flights of creativity, are also now more widely appreciated (Gentner, Holyoak, & Kokinov, 2001; Hofstadter & FARG, 1995; Holyoak & Tha-gard, 1995; Lakoff, 1987, 1995; see also our Chapter 7). Computer language-processing systems have tended to move, as I predicted, toward a relatively simplified and “surfacy” syntax coupled with richer representations of the lexicon, and they sometimes attempt to resolve residual ambiguities with the aid of statistical data derived from large databases.
    Hubert Dreyfus (1972) systematically questioned both the progress and the prospects of CS and AI. He began by reviewing the early work in game-playing, problem-solving, language translation, and pattern recognition. Work in each domain was characterized by a common pattern consisting of encouraging early success followed by steadily diminishing returns.

    E. F. Kelly, Irreducible Mind, 2007
    [My emphasis]

  10. 10
    GaryGaulin says:

    At least we agree on this one. I also say: so what?

  11. 11
    Me_Think says:

    The fact that AlphaGo won 3 games against Lee Se-dol without brute force number crunching is a testament to an ingenious AI. I applaud the bunch of human software engineers who made this AI.

  12. 12
    SteRusJon says:

    AlphaGo didn’t learn to play Go because it was looking for entertainment. Just sayin’.

    Stephen

  13. 13
    Mapou says:

    The fact that AlphaGo won 3 games against Lee Se-dol without brute force number crunching is a testament to an ingenious AI. I applaud the bunch of human software engineers who made this AI.

    This is not true, of course. AlphaGo was trained on many thousands of pre-recorded games played by human professionals over many decades. This gave the program an initial evaluation function that it could not have otherwise. It then used this evaluation function in conjunction with tree-searching and a simple reinforcement learning algorithm to play millions of games against copies of itself. This allowed it to improve its evaluation function even further. In addition, it used 1200 CPUs and 170 superfast GPUs for parallel processing.
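
    In outline, that two-stage pipeline looks something like the sketch below. This is hypothetical pseudocode for the idea only; the method names are illustrative stand-ins, not DeepMind’s actual API:

```python
# Hypothetical outline of the two-stage training described above.
# All method names (update_towards, reinforce, play_one_game) are
# illustrative stand-ins, not real library calls.

def train_alphago_style(policy_net, value_net, human_games, n_selfplay):
    # Stage 1: supervised learning from recorded human games, giving the
    # policy network an initial evaluation function.
    for position, expert_move in human_games:
        policy_net.update_towards(position, expert_move)

    # Stage 2: reinforcement learning by self-play. The program plays copies
    # of itself; moves on the winning side are reinforced, and the game
    # outcomes train the value network to score positions directly.
    for _ in range(n_selfplay):
        trajectory, winner = play_one_game(policy_net, policy_net)
        for position, move, player in trajectory:
            reward = 1.0 if player == winner else -1.0
            policy_net.reinforce(position, move, reward)
            value_net.update_towards(position, reward)
    return policy_net, value_net
```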

    This is about as brute force as it gets. I doubt that Lee Sedol played millions of games in his lifetime. He’s not that old. A few hundred at the most. I’m much more impressed with Sedol.

  14. 14
    hrun0815 says:

    This gave the program an initial evaluation function that it could not have otherwise.

    And you can prove this how? AlphaGo could have just learned its evaluation function from scratch. It would just have taken longer.

    In addition, [AlphaGo] used 1200 CPUs and 170 superfast GPUs for parallel processing. […] I’m much more impressed with Sedol.

    Because Sedol was able to play by using only his ~90 billion neurons after about a quarter of a century of training?

  15. 15
    Mapou says:

    hrun0815:

    And you can prove this how?

    I don’t have to prove it. It was plainly stated by the programming team. Look it up.

    Because Sedol was able to play by using only his ~90 billion neurons after about a quarter of a century of training?

    I doubt that Sedol uses more than 1% of his neurons to play Go. As the late Marvin Minsky once wrote, the mind is a society of agents. The “easy” stuff in life, such as doing chores around the house, is many orders of magnitude more complex than Go or chess. Otherwise, deep neural networks would be cooking meals and cleaning up the house instead of playing Go and Atari games. And, regardless of how long Lee Sedol trained, he never played more than a few hundred professional-level games in his lifetime. A thousand at the most, maybe. Furthermore, he does not have fast, dedicated parallel GPUs crunching Monte-Carlo based, tree-searching algorithms to work with. He only has his super slow neurons. So yes, I am impressed with Sedol.

  16. 16
    Aleta says:

    But the computer doesn’t enjoy playing, and I do (did, as I don’t play anymore), so there’s a difference! 🙂

  17. 17

    GaryGaulin at #10:

    At least we agree on this one. I also say: so what?

    Oh, nothing important Gary!

    Maybe just a minor reinforced conclusion: that the human mind is supernatural.

    And maybe humans and all living organisms are supernatural.

    And maybe everything has a supernatural origin.

    Just think a little bit, Gary! The materialist believes that matter (call it evolution, emergence, constructal theory, whatever) created everything, including humans and their minds. But the human mind is such an intangible, incomprehensible artifact that even humans cannot figure out how it works.

    Pretending that matter (again call it evolution, emergence, constructal theory, whatever) created even the simplest form of life is just an incredibly absurd and ridiculous belief.

  18. 18
    vjtorley says:

    hrun0815,

    Thank you for your posts. You write:

    And this: “But you could never design a robot that could win an American presidential election.” sounds very much like “But you could never design a robot that could successfully beat a Turing test.”

    Funny you should mention the Turing test. Evidently you haven’t read my Uncommon Descent article, Why no 13-year-old boy talks like Eugene Goostman (June 10, 2014) or my more recent article, Can intelligence be operationalized? (March 11, 2015), which defends the validity of the Turing test.

    You also write that my story “completely fails in describing how AlphaGo plays the game.” I was quite clear in my story that AlphaGo’s triumph in the game of Go (as exemplified by the character named Al) was due not to a brute-strength approach but to its ability to “pool the combined wisdom of billions of players.” (OK, maybe “billions” was an exaggeration.) I added that AlphaGo’s (or Al’s) victory means that “a sufficient amount of experience can beat human intuition, in a tactical game.” I stand by that description. As Mapou points out, “AlphaGo was trained on many thousands of pre-recorded games played by human professionals over many decades.”

    I think Origenes made a very telling point when he wrote:

    The BBC article states: “Unlike the real world, a closed system of fixed rules suits computing” and I hold that this is spot on. Computers can only excel when there is a fixed set of rules — a fixed context, as with chess and Go. However computers are helpless when a new context is in place. Why is this? Because in order to calculate a computer needs a correct “atomization of meaning”, which breaks down when contexts are changing.

    You might object that a computer could be programmed to learn new rules in a changing situation. You’re quite right, but Origenes’ point is that at any given point in time, the rules themselves are clear-cut, with no room for ambiguity.

    Finally, you disparage Mapou’s observation that Lee Se-dol’s feat is much more impressive because he became a Go master after playing just a few hundred games, whereas AlphaGo required many times more than that, by pointing out that Se-dol had “a quarter of a century of training.” I ask: training at what? Learning how to walk as a baby? Learning how to read as a child? Why should that count? By any fair reckoning, Se-dol has indeed had far less training than AlphaGo.

    There’s one thing I do agree with you about, however. AlphaGo’s 1200 CPUs and 170 superfast GPUs are no match for the 90 billion neurons in Lee Se-dol’s brain. I explain why in my post, Could the Internet ever be conscious? Definitely not before 2115, even if you’re a materialist. Computers might be faster at particular tasks, such as calculating, but there are many things a human being can routinely do, which a computer cannot do at all.

  19. 19
    hrun0815 says:

    Just real quick:

    I did not claim that the Turing test was passed by Eugene.

    I still don’t believe that you accurately describe what AlphaGo does (pool the combined wisdom of billions of players). That pooling or training with human games is not what allows AlphaGo to mop the floor with a 9 Dan player.

    And I do believe that you are wrong about AIs not being able to deal with ambiguities but require a closed system. The rapid improvement of autonomous vehicles, as an example, belies that assertion.

    And I don’t disparage Mapou. I simply point out that it is difficult to draw a comparison between two completely different systems based on hardware, rather than output. Humans are clearly better at a huge number of things in this world, but the list of things where AIs outperform even the best humans is only getting longer.

    And finally, the discussion about whether or not the internet or computers could ever be conscious I’ll leave to you. It seems such a silly thing to attempt to predict, since it is so poorly defined. That’s nearly as silly as attempting to predict at what point computers might be ‘intelligent’.

  20. 20
    hrun0815 says:

    I don’t have to prove it. It was plainly stated by the programming team. Look it up.

    I did find that this is what happened, but not that it couldn’t have happened otherwise. Clearly you are better informed than I am and should be easily able to post the relevant note from the programming team.

    I doubt that Sedol […]

    Or, in other words: “Yes, sure, but is it intelligent?”

  21. 21
    hnorman5 says:

    I am very happy that computers can play Go. To play Go you have to think about Go. To write a program that plays Go, you have to think about how we think. I wonder though — how long until we make a computer that feels bad when it loses?

  22. 22
    Origenes says:

    hnorman5: I am very happy that computers can play Go. To play Go you have to think about Go.

    Nope. Nice try but no cigar. Computers don’t think, but instead manipulate symbols without understanding the meaning of the sentences/formulas that are constituted by words/symbols.
    See Searle’s Chinese Room argument.

  23. 23
    hnorman5 says:

    You are correct. I meant that for a human to play Go he has to think about Go. To program a computer to play a game the programmer has to reflect on the process of making decisions. In both cases it’s the human doing the thinking.

  24. 24
    tragic mishap says:

    lol. Nice story.

    Here’s a list of game complexities from Wikipedia.

    https://en.wikipedia.org/wiki/Game_complexity#Complexities_of_some_well-known_games

    I haven’t looked at it in a while, but according to some criteria, Go exceeds the UPB (universal probability bound), whereas chess does not.

  25. 25
    SteRusJon says:

    Aleta,

    AlphaGo does not enjoy the game. More, or is it less, than that, it isn’t even aware that it is a game of Go that it is playing.

    I think the word “intelligence” is being wildly equivocated here. There is a vast difference between an “intelligence” that can “calculate” a result where the end is calculable and an “intelligence” that goes about inventing (designing) the game of Go for no other reason than to occupy/utilize its circuits for enjoyment.

    Stephen

  26. 26
    Zachriel says:

    vjtorley: “I think I can explain, although I’m no Go expert,” answers Al. “Essentially, what I did was to pool the combined wisdom of billions of players who came before me.”

    All good players study the play of previous masters. However, AlphaGo did more than that. AlphaGo learned from the experience of playing the game; again something all good players do.

  27. 27
    Mung says:

    So now computers have experiences.

  28. 28
    hrun0815 says:

    I think the word “intelligence” is being wildly equivocated here. There is a vast difference between an “intelligence” that can “calculate” a result where the end is calculable and an “intelligence” that goes about inventing (designing) the game of Go for no other reason than to occupy/utilize its circuits for enjoyment.

    And yet another “Ah, yes, but is it intelligent?”

    I really wonder what is behind this desire to figure out if playing Go means the machine is intelligent, if the internet will be conscious by 2100, if a brain or 2500 processors are more impressive hardware, or if AlphaGo can fold laundry?

  29. 29
    Mapou says:

    News Flash!

    Lee Sedol won game 4 of the 5-game match. He did it by doing something that he would have never considered doing against a human opponent. He played an unconventional move that broke AlphaGo out of its comfort zone. Sedol could have won all the games if he had adopted this strategy from the beginning.

    After all is said and done, AlphaGo is just a brittle rule follower, a dumb automaton. Bravo to human intelligence and bravo to Sedol. This is not to say that machines can never in principle have human-level intelligence, but AlphaGo is certainly not there yet. Not by a long shot.

  30. 30
    hrun0815 says:

    Lee Sedol won game 4 of the 5-game match. He did it by doing something that he would have never considered doing against a human opponent. He played an unconventional move that broke AlphaGo out of its comfort zone. Sedol could have won all the games if he had adopted this strategy from the beginning.

    So you think that AlphaGo can not learn from this game and then adapt to such moves?

    And of course AlphaGo will not have human-level intelligence. But currently AlphaGo is able to beat one of the best Go players in a game of Go and if chess is an indication, relatively soon your cell phone is going to be able to perform the same feat. AlphaGo, however, will never be able to fold your laundry or drive you to work.

  31. 31
    Mapou says:

    hrun0815:

    So you think that AlphaGo can not learn from this game and then adapt to such moves?

    Maybe it could have but the AlphaGo team decided to freeze the program’s “brain” and disable its learning ability during the entire match. The reason was to prevent the possibility of introducing a catastrophic bug.

    Having said that, I still think that, even if AlphaGo’s learning ability had been enabled, it uses a type of learning that depends on a huge number of samples to work properly. Sedol would still be able to exploit this weakness in the machine.

  32. 32
    Aleta says:

    Lee Sedol won game 4 of the 5-game match

    Woot! Go humans!

  33. 33
  34. 34
    EvilSnack says:

    The only intelligence involved so far in game-playing AIs is in the determination of what the computer should do in any given situation. In every case so far, that determination has been made by a human being. The intelligence does not lie in the program, but in the process of making the program.

    When an AI can develop a winning strategy, based on no input other than a statement of the rules, call me.

  35. 35
    hrun0815 says:

    Having said that, I still think that, even if AlphaGo’s learning ability had been enabled, it uses a type of learning that depends on a huge number of samples to work properly. Sedol would still be able to exploit this weakness in the machine.

    Wait, are you suggesting that AlphaGo’s learning ability managed to get it to take three games off a 9-dan Go player, but that’s as far as it is going to go? If so, then you should hit the bookies next time AlphaGo or its successor plays a world champion again. You could become a made man.

  36. 36
    hrun0815 says:

    When an AI can develop a winning strategy, based on no input other than a statement of the rules, call me.

    Does that count for the human opponent as well?

  37. 37
    Mapou says:

    Wait, are you suggesting that AlphaGo’s learning ability managed to get it to take three games off a 9-dan Go player, but that’s as far as it is going to go?

    Not at all. Eventually, after months or years of further training, I expect that no human will stand a chance against AlphaGo.

    What I’m saying is that deep neural nets do not adapt as fast as we do. They must be trained on huge numbers of samples in order to learn a particular pattern. They don’t generalize easily. They are slow learners. So even if learning was enabled during the match, it would not make much difference. Here’s what someone wrote in another forum on this very topic:

    To make any appreciable change in the neural net preferences, you need a high number of games. Millions, according to the DeepMind guy in the post-game conference. So: AlphaGo does not adjust to individual players, because there aren’t millions of games for it to look at from any one player.

  38. 38
    hrun0815 says:

    Re #37: Thanks for the clarification. That I agree with completely.

  39. 39
    Me_Think says:

    For all you know, Google engineers might have let Sedol win the game!
    Let’s see the final game when Sedol plays black instead of white.

  40. 40
    Robert Byers says:

    I love that computer! I cheer for the computer.
    I never heard of this game called Go. I guess it’s like chess.
    First, no board games EVER were about intellectual thought processes. It’s a fable.
    What they are about is simple memory operation.
    That’s why a dumb non-thinking computer can beat a human.
    The computer only has a memory working for it. No decisions other than memorized ones can happen.
    The human also.
    It’s a myth that board games are about smart people.
    It just might be that smart people tend to have better memory ability.
    So it’s funny to see human pride tweaked by a machine. YET it’s a bigger error of these things.

  41. 41
    hrun0815 says:

    Re #40: Now that’s some funny stuff right there.

  42. 42
    vjtorley says:

    Hi Zachriel,

    Thank you for your post. You write:

    All good players study the play of previous masters. However, AlphaGo did more than that. AlphaGo learned from the experience of playing the game; again something all good players do.

    I don’t deny for a moment that AlphaGo learned from the games it played, as well as the games played by previous masters. All that proves is that a sufficient amount of experience will trump intuition, in a tactical game where it’s possible to advance in skill via incremental improvements, and where the players don’t have to put themselves in the other players’ shoes. Are we agreed on that point?

  43. 43
    Mapou says:

    Re #40: Now that’s some funny stuff right there.

    ROTFL

  44. 44
    kairosfocus says:

    VJT, back in the real world of dynamic environments prone to kairos moments demanding fresh departures that change the game through imposing strategic surprise, the real world goes on. Have we forgotten Guderian’s and Rommel’s Panzers in May 1940 and how the vaunted French army was decisively put on the wrong foot and defeated? (And it seems the current US Presidential election cycle may be seeing a similarly game changing though very dangerous player breaking in. What truly gives me pause is the price that may be paid to defeat him, and what sort of figure may then emerge as an increasingly polarised and alienated electorate reacts to the suggested victory of business as usual elite cliques.) KF

    PS: Bottom line: full-bore true intelligence shows itself through unlimited strategic creativity, especially at genius level. For good or ill. Ponder John Paul II on his first visit to Poland after becoming Pope: http://www.wsj.com/articles/SB122479408458463941

    PPS: I am not comfortable with the suggestion of learning above. Especially in a sense that may suggest the idea of novel insightful creative innovative understanding towards new approaches.

  45. 45
    Zachriel says:

    EvilSnack: The only intelligence involved so far in game-playing AIs is in the determination of what the computer should do in any given situation.

    You seem to be confusing neural networks with symbolic computation. Neural networks learn from experience.

    EvilSnack: When an AI can develop a winning strategy, based on no input other than a statement of the rules, call me.

    Neural networks can do that now.

    hrun0815: Does that count for the human opponent as well?

    Sure. Why not? Humans and neural nets rarely play well at first.

    vjtorley: All that proves is that a sufficient amount of experience will trump intuition,

    Tom learned from experience as well.

    vjtorley: in a tactical game where it’s possible to advance in skill via incremental improvements, and where the players don’t have to put themselves in the other players’ shoes. Are we agreed on that point?

    AlphaGo presumably does put itself in the other player’s shoes in order to look at counter-moves. It doesn’t consider motivation or other psychological factors, though. Do you think computers can play well at Poker?

  46. 46
    hrun0815 says:

    I am not comfortable with the suggestion of learning above. Especially in a sense that may suggest the idea of novel insightful creative innovative understanding towards new approaches.

    Yes, that much is obvious. Just like people are apparently uncomfortable in admitting that winning a game of go is a feat of intelligence. Or admitting that AlphaGo does not actually play moves that are ‘taught’, but acts in ways that go way beyond the understanding of the programmers (and soon probably beyond analysis by the best Go players in the world).

    One thing is quite telling to me: A high level Go player suggested that soon AlphaGo will be used by professional players for training AND for help in analyzing the quality of their moves or value of their positions.

  47. 47
    Zachriel says:

    hrun0815: One thing is quite telling to me: A high level Go player suggested that soon AlphaGo will be used by professional players for training AND for help in analyzing the quality of their moves or value of their positions.

    Cheating by computer is already a problem in chess.

  48. 48
    vjtorley says:

    Hi Zachriel,

    You ask: “Do you think computers can play well at Poker?” Personally, I doubt that they’d do very well, and even if they did, I certainly wouldn’t expect them to do much better than the best poker players.

    Here’s why. First of all, in order to be a good poker player, you need to be good at spotting a liar. Now, perhaps you might think that a suitably programmed computer, fed with thousands and thousands of observations of liars and truth-tellers making the same series of statements, could train itself to spot the telltale signs of a liar – including some signs that we humans haven’t identified yet. That idea is based on the assumption that the best way to spot a liar is through their body language. Not so, according to a recent BBC article:

    Study after study has found that attempts – even by trained police officers – to read lies from body language and facial expressions are more often little better than chance. According to one study, just 50 out of 20,000 people managed to make a correct judgement with more than 80% accuracy. Most people might as well just flip a coin.

    Over the last few years, deception research has been plagued by disappointing results. Most previous work had focused on reading a liar’s intentions via their body language or from their face – blushing cheeks, a nervous laugh, darting eyes… Even if we think we have a poker face, we might still give away tiny flickers of movement known as “micro-expressions” that might give the game away, they claimed.

    Yet the more psychologists looked, the more elusive any reliable cues appeared to be. The problem is the huge variety of human behaviour. With familiarity, you might be able to spot someone’s tics whenever they are telling the truth, but others will probably act very differently; there is no universal dictionary of body language. “There are no consistent signs that always arise alongside deception,” says Ormerod, who is based at the University of Sussex. “I giggle nervously, others become more serious, some make eye contact, some avoid it.” Levine agrees: “The evidence is pretty clear that there aren’t any reliable cues that distinguish truth and lies,” he says. And although you may hear that our subconscious can spot these signs even if they seem to escape our awareness, this too seems to have been disproved.

    The best way to spot a liar, it seems, is to employ a cognitive approach, and probe their story – something a computer would not be good at, since it lacks the ability to converse naturally, as we do (recall my remarks above about the Turing Test):

    Ormerod and his colleague Coral Dando at the University of Wolverhampton identified a series of conversational principles that should increase your chances of uncovering deceit:

    Use open questions…

    Employ the element of surprise…

    Watch for small, verifiable details…

    Observe changes in confidence…

    The aim is a casual conversation rather than an intense interrogation. Under this gentle pressure, however, the liar will give themselves away by contradicting their own story, or by becoming obviously evasive or erratic in their responses…

    Officers trained in Ormerod and Dando’s interviewing technique were more than 20 times more likely to detect these fake passengers than people using the suspicious signs, finding them 70% of the time.

    Another reason why a computer would make a terrible poker player is that a skilled player has to have the ability to put themselves in the other person’s shoes and imagine how they are feeling and what they would probably want to do. In other words, a subjective approach is required. You write that AlphaGo “presumably does put itself in the other player’s shoes in order to look at counter-moves.” But the strategic computations performed by AlphaGo don’t require it to adopt a subjective, first-person approach; an objective, third-person approach will do the job just as well.

    For these reasons, I wouldn’t expect a computer to win poker tournaments any time soon.

  49. 49
    hrun0815 says:

    Re #48:

    Personally, I doubt that they’d do very well, and even if they did, I certainly wouldn’t expect them to do much better than the best poker players.

    Surely, you are entitled to your opinion. However, it looks like the current crop of poker programs is mopping the floor with amateur poker players and getting to a level where they can realistically compete with the pros. Considering the trajectory of advance for AIs in the realm of playing games, I wouldn’t put much money on the pros keeping the upper hand for long. But hey, I’m not a gambler, so what do I know.

  50. 50
    Zachriel says:

    vjtorley: I wouldn’t expect a computer to win poker tournaments any time soon.

    Polaris 2.0 defeated a team of human competitors in a series of duplicate limit hold’em matches…

    the computer did not employ similar tactics against all of the humans, but followed different strategies against each, making it harder for the humans to adjust to the computer’s changing strategies …

    Polaris 2.0 also learned from its own mistakes, employing an algorithm intriguingly named “counter-factual regret”

    http://www.pokernews.com/news/.....poker-.htm
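
    The “counter-factual regret” idea has a simple core called regret matching: track how much better each alternative action would have done than the action actually played, then choose actions in proportion to accumulated positive regret. A minimal, hypothetical sketch of that core (full CFR applies this update at every information set of the game tree):

```python
import random

# Minimal sketch of regret matching, the update at the heart of
# counterfactual regret minimization (CFR). Full CFR runs this at every
# information set; this toy version handles a single decision point.

class RegretMatcher:
    def __init__(self, actions):
        self.actions = list(actions)
        self.regret = {a: 0.0 for a in self.actions}

    def strategy(self):
        positive = {a: max(r, 0.0) for a, r in self.regret.items()}
        total = sum(positive.values())
        if total == 0:                  # no regrets yet: play uniformly
            return {a: 1.0 / len(self.actions) for a in self.actions}
        return {a: p / total for a, p in positive.items()}

    def act(self):
        strat = self.strategy()
        return random.choices(self.actions,
                              weights=[strat[a] for a in self.actions])[0]

    def observe(self, payoffs, chosen):
        """payoffs[a]: what action a would have earned this round."""
        for a in self.actions:
            self.regret[a] += payoffs[a] - payoffs[chosen]
```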

  51. 51
    hrun0815 says:

    The poker bot Claudico, by the way, did exactly what EvilSnack in #34 was calling for: Claudico taught itself the game of poker by just being told the rules. All strategies for the game come from playing itself within the rules and learning from that.

  52. 52
    Virgil Cain says:

    Everything the computer does can be traced back to the humans who designed the computer and wrote the programs. That is just a basic fact of life.

  53. 53
    hrun0815 says:

    Yes, Virgil, that is correct and completely irrelevant to the points being made here. Nobody in this thread has argued that the AIs sprung into being without the aid of human intelligence and intervention.

  54. 54
    Virgil Cain says:

    LoL! Of course it is relevant. The computer does as it was designed to do. As the OP says- it doesn’t matter much at all and that is why.

  55. 55
    Mapou says:

    As a Christian, it bothers me that most Christians believe that high intelligence requires a spirit or consciousness. The truth is that intelligence is a mechanical, cause-effect phenomenon. Programs like AlphaGo prove this. It is only a matter of time before intelligent machines are able to perform almost all human-level tasks and jobs. One thing machines will not have is an independent sense of beauty. They will only learn what humans consider beautiful by observing us.

    But what bothers me the most, as a hard core dualist, is the stupid materialist notion that a machine can, by some unspoken magic, have conscious sensations. This is not even wrong.

  56. 56
    hrun0815 says:

    Re #54: You mean AlphaGo is the product of intelligent design, just like humans are, so all his actions (just like ours) are simply the product of that intelligent design? And therefore, AlphaGo (just like humans) cannot be intelligent? That’s an odd thing to assert.

  57. 57
    hrun0815 says:

    Re #55: Totally agree about the consciousness. Probably not for the same reason, but the notion in itself is silly.

    I wonder, did anybody ever propose how one could distinguish theoretically between ‘AI is pretending to be conscious’ and ‘AI is conscious’?

  58. 58
    vjtorley says:

    Hi Zachriel,

    Interesting news on the poker-playing computer. I suppose if it tracked individual players and employed a different strategy against each, based on what it could “read” about them, it might perform quite well.

    My guess is that a computer following such a strategy could make it into the professional league, but I see no reason to think it would do better than the pros.

  59. 59
    Zachriel says:

    vjtorley: My guess is that a computer following such a strategy could make it into the professional league, but I see no reason to think it would do better than the pros.

    Polaris 2.0 beat the pros at duplicate. There’s little doubt that computers will only get better as time goes on.

  60. 60
    EvilSnack says:

    Zachriel:

    EvilSnack:

    When an AI can develop a winning strategy, based on no input other than a statement of the rules, call me.

    Neural networks can do that now.

    Is there a neural network that has actually done this?

  61. 61
    Virgil Cain says:

    hrun:

    You mean AlphaGo is the product of intelligent design, just like humans are, so all his actions (just like ours) are simply the product of that intelligent design? And therefore, AlphaGo (just like humans) cannot be intelligent?

    Nice meaningless rant. Humans can actually choose and play many more games than just the one. AlphaGo is locked in on one game. It cannot choose to play any other game. It cannot tell how beautiful a sunset is.

  62. 62
    Mapou says:

    Final Match tonight. You can watch it live on YouTube starting at 8:30 PM Pacific time:

    Match 5 – Google DeepMind Challenge Match: Lee Sedol vs AlphaGo

  63. 63
    hrun0815 says:

    Nice meaningless rant. Humans can actually choose and play many more games than just the one. AlphaGo is locked in on one game. It cannot choose to play any other game. It cannot tell how beautiful a sunset is.

    Ah, so once AlphaGo or another AI can play multiple games it is clear that both AI and humans are not intelligent, but only do what they are designed to do? Or is it the other way round? Who knows, but I am sure your objections are logically rigorous.

  64. 64
    Robert Byers says:

    hruno815
    Go is not a game of intelligence. It’s just a game of memory, like skateboarding.
    That’s why kids and computers can do it and prevail.
    Why do you think Go is a game of intelligence? Why do you think a computer can win at it? Could the computer have invented it, and the motive for it as a game? NO!!
    It’s not a thinking thing. Just an adding machine.
    Board games are not operations of human intelligence. HOWEVER it might be that smarter people prevail at it, but that’s just because they are more attentive or use their memories more.
    The same Go curve of winners would probably do better at home budgets also.
    I never heard of Go but it doesn’t matter. Just another dumb board game.

  65. 65
    hrun0815 says:

    My guess is that a computer following such a strategy could make it into the professional league, but I see no reason to think it would do better than the pros.

    Turns out that Claudico, as far as I know, adjusts its play for every player. So it remains to be seen if what you propose is necessary.

    I am, however, somewhat puzzled by your belief that a computer would not get better than a pro. Looking at the trajectory of virtually everything else computers and AIs do, they have gotten exponentially better at it. Why do you think it’s going to be different for poker, and that the best an AI can attain is to be nearly as good as the best pros we currently have around, but no better?

  66.
    hrun0815 says:

    Re #64: Wow. That’s getting even funnier! So I bet AlphaGo just memorized all the best moves for every possible position in the game. Tell us more, Robert.

  67.
    Smidlee says:

    A computer has the intelligence of a rock. The “intelligence” you see in a computer program is the product of the human mind. A computer program with the brute force of trial and error, running code written by human minds, can be a powerful TOOL, but it is still nothing but a tool.
    It’s the same as a love letter to my wife: it has no intelligence of itself. It’s just ink and paper (or just dots on a screen) until the eye and brain process the image so my wife can read my thoughts. I have thoughts which my brain processes into a code (English) that has absolutely no meaning until that code is processed back into thoughts by the person who receives it. There are no thoughts outside the mind. Only code and information sent out in the form of energy or matter.
    Since we can program a computer to mimic some human behavior, it can mislead some people into believing it’s actually thinking.
    Anything that works totally mechanically cannot reason.

  68.
    hrun0815 says:

    Re #67: So if we were to rebuild AlphaGo as a biological rather than a mechanical neural network, would it then not be intelligent?

  69.
    Aleta says:

    Robert’s never heard of Go, but it’s just another dumb board game that’s just a game of memory, not intelligence.

    That is a breathtakingly ignorant comment.

  70.
    Smidlee says:

    #68
    Biological machines are still machines and can’t reason either. Poisons kill you for the simple fact that they screw up the mechanics of our cells and/or brain. Our brain can process digital data just like a computer, but then comes the mystery … somehow the physical becomes something totally non-physical which has the ability to reason.

  71.
    hrun0815 says:

    Re #70: I’m not going to go into it. The points are just too obvious.

    Just this: How would one distinguish, theoretically, between ‘AI is pretending to be conscious’ and ‘AI is conscious’? If you like, replace the word ‘conscious’ with ‘intelligent’ or ‘able to reason’.

  72.
    Mapou says:

    @66:

    Tell us more, Robert.

    Please don’t.

    @71:

    How would one distinguish, theoretically, between ‘AI is pretending to be conscious’ and ‘AI is conscious’?

    IMO, when we finally have truly intelligent machines, many people will swear that they are conscious. This is not unlike the way many people swear that animals are conscious now. We are already beginning to see people feeling sorry for robots that are being kicked by a human being. The tendency to anthropomorphise seems to be a strong part of human nature. I guess it’s a form of empathy.

    To determine whether or not an intelligent machine is conscious, I propose what I call the Beauty Test. It consists of presenting a fairly large number of random patterns to both humans and a machine under test and letting them score the patterns according to their beauty or ugliness on a 1 to 10 scale. If the machine responds similarly to humans, I would consider it conscious.

    The problem with this test is that an intelligent machine that has been raised among humans will figure out what kind of things humans consider beautiful. So the trick is to come up with patterns that the machine has never seen before. It’s not an easy test.
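
    A minimal sketch, in Python, of how the scoring side of such a test might be tallied. Everything below is a placeholder: the rating lists stand in for a real human panel and a real machine under test, and the 0.8 agreement cutoff is an arbitrary assumption. Nothing here detects consciousness; it only measures agreement:

      import math
      import random

      def pearson(xs, ys):
          # Plain Pearson correlation between two equal-length lists of scores.
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy) if sx and sy else 0.0

      # Placeholder scores: 100 novel random patterns, each rated 1 to 10.
      human_panel = [random.uniform(1, 10) for _ in range(100)]  # mean human ratings
      machine = [random.uniform(1, 10) for _ in range(100)]      # machine's ratings

      r = pearson(human_panel, machine)
      print(f"agreement r = {r:.2f}")
      print("responds similarly to humans" if r > 0.8 else "responds differently")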

    If you like, replace the word ‘conscious’ with ‘intelligent’ or ‘able to reason’.

    Reasoning is a cause-effect process. Machines can already do this to a limited extent. It has nothing to do with consciousness, IMO.

  73.
    hrun0815 says:

    To determine whether or not an intelligent machine is conscious, I propose what I call the Beauty Test. […]

    Hmm, interesting, but I think you already identified a major problem: beauty, at least to a certain degree, must be a learned behavior (see, for example, cultural differences in beauty ideals). So if we allow a computer to learn from humans what is beautiful and what is not, my guess is it won’t take long for an AI to pass the Turing version of the Beauty Test. Google and FB are putting massive effort into object recognition in images, and deep learning on imaging features is a standard method in biology these days (it works even when it is completely unknown what those features actually are).
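
    As a purely illustrative sketch of that learned route: a tiny convolutional network trained to predict mean human beauty ratings straight from pixels. The tensors below are random stand-ins for real images and real panel scores, and the architecture is an arbitrary assumption:

      import torch
      import torch.nn as nn

      # Tiny CNN mapping an image to a single predicted beauty score.
      model = nn.Sequential(
          nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          nn.Linear(16, 1),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      # Placeholder data: 64 random 32x32 "patterns" with random 1-10 ratings.
      images = torch.rand(64, 3, 32, 32)
      ratings = torch.rand(64, 1) * 9 + 1

      for step in range(200):  # learn the ratings from the pixels alone
          opt.zero_grad()
          loss = loss_fn(model(images), ratings)
          loss.backward()
          opt.step()

      print("training MSE:", loss.item())
      # With real data, the test would be on held-out, never-seen patterns.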

    Reasoning is a cause-effect process. Machines can already do this to a limited extent. It has nothing to do with consciousness, IMO.

    My contention is that we actually have no idea what exactly these different actions entail in our brains. So, at least for now, I’d say the only thing we can use to judge if a machine possesses certain features (like consciousness or intelligence or ability to reason) is if we can train it to act as such. Granted, that is a somewhat unsatisfying operational definition, but I think it’s the best we can do at this time.

    Everything else just turns into endless fuzziness of unclear terminology and non-testable definitions.

  74.
    GaryGaulin says:

    I think that the test for consciousness is whether a machine, on its own, becomes as obsessed as we are over where consciousness comes from and tries to explain it. Where the intelligences are at the human level, they would end up with a branch of science for its study. In their case, though, there is no question that they live in a virtual world inside a machine, and they would have that to discover.

  75.
    Virgil Cain says:

    hrun:

    Ah, so once AlphaGo or another AI can play multiple games it is clear that both AI and humans are not intelligent, but only do what they are designed to do? Or is it the other way round? Who knows, but I am sure your objections are logically rigorous.

    And another meaningless rant that just proves hrun has reading comprehension issues.

  76.
    hrun0815 says:

    Sedol also lost game five, after seemingly having been able to adapt to AlphaGo in game four. I guess maybe this means he hadn’t quite figured things out yet. That being said, considering AlphaGo’s ability to learn so rapidly, I predict that the only way for a human to beat AlphaGo in the future will be by being spotted several stones in advance. The real question is: how many will be needed to level the playing field?

  77.
    hrun0815 says:

    And another meaningless rant that just proves hrun has reading comprehension issues.

    Yes. It is pretty meaningless. And it is also true that with many of your posts (like KF’s and BA’s) I have some serious reading comprehension issues. I guess the only thing I don’t agree with is the term rant.

    And don’t bother answering this meaningless post. 🙂

  78.
    Zachriel says:

    EvilSnack: Is there a neural network that has actually done this?

    hrun0815 already pointed to Claudico.

  79.
    Mapou says:

    hrun0815 @73,

    Any test for consciousness must target purely subjective aspects of the mind. The idea behind the Beauty Test is to eliminate any possibility of confusing consciousness with intelligence. The premise is that, since beauty is not a physical property of matter, it can only come from our consciousness.

    Experiments have shown that babies are innately attracted to patterns that we, as adults, consider beautiful. Animals do not have this attraction. It is impossible for a machine to have such a subjective inclination, aka free will. Unless, of course, we figure out a way to trap a “ghost in the machine.” This is highly unlikely. 🙂

  80.
    hrun0815 says:

    Re #79: At first glance the Beauty Test simply looks like a variation of a common (yet difficult) pattern recognition problem. I am quite certain that relatively soon this will be achievable by a well-trained machine.

    However, you are bringing in the idea of innately picking out patterns we find beautiful. That is of course where things get hairy. I don’t think there is any way to replicate this in a machine. What would the right test parameters be if the machine, like a neural network, is based on learning? On the other hand, if we managed to encode this pattern recognition in hardware (which seems theoretically possible), would that still count?

  81.
    Robert Byers says:

    hrun #66
    Let’s think about this.
    I insist it’s just memorized moves.
    You say it couldn’t have memorized every move!
    I agree. It’s memorized the concepts for general moves.
    It’s not figuring out anything that it hasn’t already figured out.
    Why do you think its moves must all independently be memorized, as opposed to BASIC MOVE CONCEPTS?
    It’s just a search engine for types of moves for types of responses.
    There is no intelligence going on. Just dumb memory operations. Just a slot machine.

  82.
    PaV says:

    Francis Crick:

    What is found in biology is mechanisms, mechanisms built with chemical components and that are often modified by other, later, mechanisms added to the earlier ones. While Occam’s razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research. While DNA could be claimed to be both simple and elegant, it must be remembered that DNA almost certainly originated fairly close to the origin of life when things were necessarily simple or they would not have got going.
    Biologists must constantly keep in mind that what they see was not designed, but rather evolved.

    May I paraphrase a little:

    What is found in AI is algorithms, algorithms built with primary coding language and that are often modified by other, later, algorithms added to the earlier ones. . . . While ‘machine intelligence’ could be claimed to be both simple and elegant, it must be remembered that ‘machine intelligence’ almost certainly originated fairly close to the origin of machines when algorithms were necessarily simple or they would not have got going. AI programmers must constantly keep in mind that what they see was not designed, but rather evolved.

    If AlphaGo’s “intelligence” “evolved” over time with experience, that is, as it became “more fit”, then obviously we must conclude that if we go back in time, mere physics and blind forces can explain everything. There was NO Designer. End of story.

  83.
    Robert Byers says:

    PaV
    Well done. I get it. That was cool.
    Old man Crick was wrong and didn’t know what he was talking about. He only examined the real-time details of biology; the origins of biology are invisible.
    It was none of his business to use his prestige to opine on origins as if they were his field of study.

  84.
    Aleta says:

    hrun writes,

    That being said, considering AlphaGo’s ability to learn so rapidly, I predict that the only way for a human to beat AlphaGo in the future will be by being spotted several stones in advance. The real question is: how many will be needed to level the playing field?

    In Go, even just a few handicap stones, which are placed on specific points, can make a large difference, which makes it possible to have good games between players of fairly different abilities. For instance, giving someone four handicap stones might be somewhat like having a chess player remove a knight and a rook at the start of the game. (However, analogies between chess and Go are unlikely to be very accurate, because the games are so different.)

  85.
    hrun0815 says:

    Re #81: Ah man, Robert. You have to stop at some point. Memorizing basic move concepts? You are slaying us.
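
    For what it’s worth, AlphaGo stores no list of moves at all: it searches positions as they arise, guided by learned evaluations, using Monte Carlo tree search. Here is a toy sketch of that kind of search, plain UCT on noughts and crosses, with random playouts standing in for AlphaGo’s learned networks; this is nothing like DeepMind’s actual code:

      import math
      import random

      # Noughts and crosses board: a tuple of 9 cells, each 'X', 'O', or None.
      WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

      def winner(board):
          for a, b, c in WIN_LINES:
              if board[a] is not None and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      def moves(board):
          return [i for i, cell in enumerate(board) if cell is None]

      def play(board, move, player):
          new = list(board)
          new[move] = player
          return tuple(new)

      def rollout(board, player):
          # Random playout to the end; returns 'X', 'O', or None for a draw.
          while winner(board) is None and moves(board):
              board = play(board, random.choice(moves(board)), player)
              player = 'O' if player == 'X' else 'X'
          return winner(board)

      class Node:
          def __init__(self, board, player):
              self.board, self.player = board, player   # player = side to move here
              self.children = {}                        # move -> Node
              self.visits, self.wins = 0, 0.0

      def best_child(node):
          # UCB1: balance the win rate against how rarely a child was tried.
          return max(node.children.values(),
                     key=lambda c: c.wins / c.visits
                                   + math.sqrt(2 * math.log(node.visits) / c.visits))

      def search(root, iterations=20000):
          for _ in range(iterations):
              node, path = root, [root]
              # 1. Selection: walk down while every move has already been tried.
              while node.children and len(node.children) == len(moves(node.board)):
                  node = best_child(node)
                  path.append(node)
              # 2. Expansion: try one new move, unless the game is over.
              untried = [m for m in moves(node.board) if m not in node.children]
              if untried and winner(node.board) is None:
                  m = random.choice(untried)
                  node.children[m] = Node(play(node.board, m, node.player),
                                          'O' if node.player == 'X' else 'X')
                  node = node.children[m]
                  path.append(node)
              # 3. Simulation: random playout from the new position.
              result = rollout(node.board, node.player)
              # 4. Backpropagation: score each node for the player who moved into it.
              for n in path:
                  n.visits += 1
                  mover = 'O' if n.player == 'X' else 'X'
                  n.wins += 1.0 if result == mover else 0.5 if result is None else 0.0
          # Nothing was looked up; every value above was computed by search.
          return max(root.children, key=lambda m: root.children[m].visits)

      root = Node((None,) * 9, 'X')
      print("UCT opens on square", search(root))   # almost always the centre, 4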

  86.
    hrun0815 says:

    Re #84: Yup. I do realize the massive advantage a few stones will give you. That’s why I am so curious just how far the next generation of AlphaGo might be able to push things.

    Have people tried to see how much you can handicap a chess program and have it still win against a grandmaster?

  87.
    PaV says:

    Robert Byers:

    I’m glad you saw through my subtlety. All of this can easily be argued to death.

    But my real point is that if Crick wants to see “evolving” mechanisms telling us that everything reduces down to raw physics and molecules, then, mutatis mutandis, the same applies to AI, “artificial intelligence.” And we know this to be completely wrong.

  88.
    Trumper says:

    A little late to the party here, but the bottom line is just a big yawn… nothing unexpected about a programmed application being able to function by performing faster calculations than a human. Humans are inundated with thousands more input points than a rote-learning chip. After all, I seem to recall that humans invented the game, something AI was incapable of doing. Does this mean that if some box of sand someday invents a novel game that humans like to play, that equates to intelligence? Maybe you would then consider a traffic light intelligent. LOL, pretty sad if so.
