Uncommon Descent Serving The Intelligent Design Community

Computer beats humans at Go: so what?


The news that a computer program has beaten Go master Lee Se-dol in a best-of-five competition may have shocked some readers. In this post, I’d like to explain why I don’t think it matters much at all, by telling a little story about three guys named Tom, Sam and Al.

Tom has a brilliant mind. He puts his perspicacious intellect to good use by playing mentally challenging games, and he always wins. Tom’s freakish ability to win games by performing astonishing leaps of mental intuition leaves many spectators baffled. “How on earth do you do it?” they ask him, whenever he chalks up a victory against yet another hapless opponent. “It’s a strange gift I have,” Tom answers modestly. “I can’t really explain it, even to myself. I just have these mysterious intuitions that come to me out of the blue, and that enable me to win.”

Tom’s reputation spreads far and wide. Those who witness his spectacular triumphs are amazed and dumbfounded. After a while, people start calling him the world’s best game player.

How Sam beat Tom

One day, a stranger shows up in town, named Sam. Sam walks up to Tom (who is sitting in a bar) and says to him in a loud voice, “I can beat you!”

“No, you can’t,” answers Tom, “but you’re welcome to try anyway. Name your game.”

“Chess,” says Sam. “You know what they say: it’s the game of kings.”

“Good choice,” replies Tom. “I love that game.”

“I have a question,” says Sam. “Do you mind if I get some assistants to help me choose my moves?” “Not at all,” answers Tom. “I’m quite willing to be generous. Bring as many assistants as you like.”

Sam has one more question. “Since I have a very large number of assistants, do you mind if I contact them via email while I play, instead of bringing them all here?”

“Not at all,” replies Tom. “That’s fine by me.”

“That’s a big relief,” says Sam. “Actually, I have millions and millions of assistants. And it’s a good thing that they’re helping me, because I really don’t know much about chess. Nor do they, for that matter. But together, we’ll beat you.”

Now Tom looks puzzled. “How are you going to beat me,” he asks, “if you don’t really know the game?”

“By brute force,” answers Sam. “Each of my assistants is good at just one thing: evaluating a chess position. Thanks to my army of assistants, who are extremely well-organized and who are also very good at rapidly evaluating positions and sharing information with one another via email, I am effectively capable of evaluating hundreds of millions of chess positions in just a few seconds. I’ve also compiled a list of good opening and closing moves, as well as good moves in various tricky situations, by studying some past games played by chess experts.”
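Sam's method, an evaluation function applied to every position reachable within a few moves, is essentially what chess programmers call minimax search. Here is a minimal sketch in Python, played on a toy take-away game rather than chess; the game, the scoring, and all the names below are illustrative assumptions, not anything from an actual chess engine:

```python
# Toy illustration of Sam's brute-force approach: minimax search.
# Game: a pile of n stones; players alternate removing 1-3 stones;
# whoever takes the last stone wins. Scores are from the first
# player's point of view: +1 = win, -1 = loss.

def minimax(n, maximizing):
    if n == 0:
        # No stones left: the player to move has already lost,
        # because the previous player took the last stone.
        return -1 if maximizing else 1
    scores = [minimax(n - take, not maximizing)
              for take in (1, 2, 3) if take <= n]
    # The maximizer picks the best outcome for itself; the
    # minimizer picks the worst outcome for the maximizer.
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))   # -1: a pile of 4 is lost for the player to move
print(minimax(5, True))   #  1: a pile of 5 is won (take 1, leave 4)
```

Real chess programs add alpha-beta pruning, a depth cutoff, and a heuristic evaluation at the cutoff, but the skeleton is the same: exhaustively scoring positions rather than intuiting them.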

“Well, that sounds like an interesting way to play,” says Tom. “But speed and a list of good moves are no substitute for intuition. You and your assistants lack the ability to see the big picture. You can’t put it all together in your head, like I can.”

“We may lack your intuition,” responds Sam, “but because we’re fast, we can evaluate many moves that would never occur to you, and what’s more, we can see further ahead than you can. Do you want to try your luck against us?”

“Game on!” says Tom.

After just 20 minutes, it’s game over for Tom. For the first time in his life, he has been soundly defeated. He and Sam shake hands in a gentlemanly fashion after the game, and return to Tom’s favorite bar, where they both order a beer.

Tom is quiet for a while. Suddenly, he muses aloud, “I think I finally understand, Sam. What you’ve taught me is that the game of chess is fundamentally no different from a game of noughts and crosses, or Tic-Tac-Toe. It’s a game which yields to brute force calculations. My intuition enables me to see ahead, and identify lots of good moves that my opponents can’t see, because they’re not as far-sighted as I am. But your brute-strength approach is more than a match for my intuition. I’m limited by the fact that I can’t see all of the good moves I could make. You and your army of assistants can do that. No wonder you won, when you played me. Still, it’s taught me a valuable lesson about the limits of human intuition. Congratulations on your victory.”

“So you’re going to give up calling yourself the world’s best game player?” asks Sam.

“Not quite,” answers Tom. “From now on, I’m going to call myself the world’s best player of interesting games. By an ‘interesting game,’ I mean one that doesn’t yield to brute-strength calculations – in other words, one that requires a certain degree of intuition in order to be played well.”

“Would you care to nominate a game that fits that description?” inquires Sam.

“My nomination is the game of Go, which has been called the most complex of games,” replies Tom. “The number of possible positions on a 19 x 19 Go board is about 10^170, which is far greater than the number of atoms in the observable universe. There’s no way that you and your army of assistants can evaluate that many moves. Admit it: you don’t have a hope of beating me at Go.”
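The figure Tom cites is easy to check, at least as an order of magnitude. Each of the 361 points on a 19 x 19 board can be empty, black, or white, which gives 3^361 raw configurations; the commonly quoted figure of roughly 2 x 10^170 counts only the legal positions, a small fraction of that total. A quick back-of-the-envelope check:

```python
# Each of the 361 intersections of a 19 x 19 Go board is empty,
# black, or white: 3**361 raw board configurations.
raw = 3 ** 361
print(len(str(raw)) - 1)  # order of magnitude: 172

# Only a small fraction of these are legal positions (no stones
# without liberties), which is where the commonly cited figure of
# roughly 2 x 10**170 comes from. Either number dwarfs the
# estimated ~10**80 atoms in the observable universe.
```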

“You’re right; we don’t,” acknowledges Sam. “But I know another man who I think can beat you. His name is Al. Remember that name. At the moment, he’s perfecting his game, but he’s improving by leaps and bounds. You’ll probably see him a few years from now.”

“I look forward to the challenge,” replies Tom. “Farewell, Sam, and take care.”

Al arrives in town

A few years later, Sam’s prophecy comes to pass. A peculiar-looking man in a dazzling purple cloak rides into town, and asks to see Tom. “Hi, Tom. I’m Al,” he says. “I’d like to challenge you to a game of Go. Although I have none of your brilliant intuition, I’m quite confident that I can win.”

“I really don’t see how you can,” answers Tom. “Even if you had an entire universe full of people helping you to choose your next move, there’s no way you could possibly see far enough ahead to properly evaluate all possible moves you could make. Since a brute-strength approach won’t work here, you really need intuition in order to win.”

“Oh no you don’t,” Al replies. “It turns out that the game of Go has a long, long history which you know nothing about. On Earth, it first appeared in China, more than 2,500 years ago. But it was brought to Earth by aliens. I’ve been in contact with them: in fact, it was they who gave me this colorful cloak, which can instantly turn any color I tell it to, as well as turning invisible.”

“Wait a minute,” interrupts Tom. “Forget about the cloak. You mean to say I’ll be playing against a bunch of aliens?”

“By no means,” replies Al. “You’ll be playing against me, and I can promise you, I won’t be talking to any aliens, either. But I should tell you that aliens have been playing the game of Go for billions of years: in fact, there’s even an inter-galactic Go club. However, they play it in a very different way from you, Tom. They don’t rely on intuition at all.”

“How do they play, then?” asks Tom, perplexed.

“They play incrementally, by gradually building up a set of smart and successful moves in various situations,” answers Al. “A long time ago, the list of smart moves was fairly short: you could fit them all in one book. Now, after billions of years, the list is much bigger. When aliens play Go, they do so by following the rule book up to a certain point, and then trying out something a little bit new and different. It doesn’t make for very exciting games, but it does make for smart tactics. Recently, the aliens were kind enough to give me their list of moves. However, it’s so big that I’ll require an army of earthly assistants to help me search through the list, in order to keep within the time limits of the game. None of these assistants knows anything about the game of Go, but they’ll be communicating with me via email. I have to say that I know very little about the game of Go myself, but I’m going to be playing by the aliens’ rules. Is that all right with you?”
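The aliens’ style of play can be caricatured in a few lines of code: a lookup table of known-good moves, consulted first, with a small experiment whenever the table has nothing to say. (The position and move names below are made-up placeholders, not real Go notation or any actual program’s interface.)

```python
import random

# Caricature of the aliens' rule-book play: follow the book of
# known-good moves where it applies, otherwise try something a
# little new, and grow the book when the experiment succeeds.

rule_book = {}  # position -> best move found so far

def choose_move(position, legal_moves):
    if position in rule_book:
        return rule_book[position]      # follow the book...
    return random.choice(legal_moves)   # ...or experiment

def record_result(position, move, won):
    if won:
        rule_book[position] = move      # the book grows incrementally

record_result("empty corner", "play 3-3", won=True)
print(choose_move("empty corner", ["play 3-3", "play 4-4"]))  # play 3-3
```

After billions of such increments, “following the book” does nearly all of the work, which is Al’s point: accumulated experience standing in for intuition.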

“Certainly,” replies Tom. “The aliens’ way of playing sounds rather dull to me. I’m going to spice up the game with some human intuition. You’ll soon see that nothing can beat intuition, in an interesting game like Go, where the best move can’t possibly be calculated.”

They sit down to play. After about an hour, Tom is forced to resign. In a dazed tone of voice, he asks, “How did you do it, Al?”

“I think I can explain, although I’m no Go expert,” answers Al. “Essentially, what I did was to pool the combined wisdom of billions of players who came before me. You were playing against that. What my victory means is that a sufficient amount of experience can beat human intuition, in a tactical game. But that’s hardly surprising, is it?”

Tom reflects for a while and finally replies, “No, Al, it isn’t. I was wrong to think that I could defeat the combined wisdom of so many players. I’ve come to appreciate the limits of human intuition. What I’m wondering now is: are there any situations where intuitions are indispensable?”

Tom reflects on the nature of human intuition, and where it might prove indispensable

Tom ponders again for a while. After a long silence, he announces, “I think I can see two kinds of cases where intuitions are indeed irreplaceable. One is in a game where the goal cannot be described in objective, ‘third-person’ language; it can only be described in subjective terms which refer to the beliefs, desires and intentions of the other players. To win the game, you have to be able to put yourself in other people’s shoes. While a list of ‘smart moves’ might serve you well up to a point, it won’t help you in novel or unexpected situations, where you can only figure out what you should do by asking yourself what the other person would want you to do in that situation. Experience can never trump empathy.”

Tom continues: “The other case I can think of where intuition would be needed is in a situation where trying out incremental improvements won’t help you to get from A to B, simply because there are too many improvements to try out, making the search for the best move like searching for a needle in a haystack. Experience won’t help here, because there isn’t enough time to narrow down the search. Without a flash of insight, you’ll never be able to spot the right move to make, in moving towards your desired goal.”

Al is curious. “Would you care to offer any examples of these two cases you’ve proposed?” he asks.

“Happy to oblige,” answers Tom. “Right now, in the United States, there’s a presidential election going on. Politics is a game, and the U.S. presidential election is a winner-take-all game. But it’s not enough for the successful candidate to be a policy wonk, who knows how to fix the American economy, or even a ‘steady pair of hands,’ capable of handling any domestic or international crisis that might come up. You need more than intelligence and experience to win a presidential election. You need to be a good speaker, who is capable of inspiring people. You also need to be capable of leadership, so it definitely helps if you have a commanding presence and ‘sound presidential.’ It helps, too, if you have excellent networking skills, to help you raise lots of money, which you’ll need to finance your campaign. In addition to that, you need to be a fairly likable person: nobody wants to elect a curmudgeon, no matter how clever, experienced or commanding he or she may be. On top of that, you need to be capable of empathy: you need to be able to show the public that you are genuinely capable of feeling other people’s pain, or people will spot you for a phony and dismiss you as cold and uncaring. Oh – and you’d better be at least as ethical as your opponents, or people will perceive you as a liar and a crook, and they probably won’t vote for you. As you can see, many of these skills require the ability to identify with other people. You simply can’t bluff your way through a presidential campaign with a catalogue of smart moves or canned responses. It’s too unpredictable. Let me put it another way. You could design a robot that could beat a human at the tactical games I’ve practiced playing, over the years. But you could never design a robot that could win an American presidential election. Only a human being who is capable of genuine empathy and of intuiting the right thing to do when interacting with other people could win a contest like that.”

“Interesting,” says Al. “What about your other case?”

“Protein design would be an excellent example of a challenge requiring leaps of human intuition,” answers Tom. “Very short proteins might arise naturally, but once you get to proteins that are more than 150 amino acids in length, the space of possibilities is simply too vast to explore, as Dr. Douglas Axe demonstrates in his 2010 paper, The Case Against a Darwinian Origin of Protein Folds. In his own words:

The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.

“I’d say a situation like that calls for the intuitive insight of an intelligent designer, wouldn’t you?” asks Tom.

“If Dr. Axe’s premises are correct, then it’s difficult to avoid that conclusion,” concedes Al. “But I’m not a biochemist, so I can’t really say. Still, I can at least see what you mean, now. One thing troubles me, though.”

“What’s that?” asks Tom.

“The two kinds of cases you’ve described are quite different in character,” replies Al. “One requires the ability to put yourself in other people’s shoes, while the other requires the ability to make a mental leap that surpasses the power of any computer or any trial-and-error process. What I’d like to know is: what is it that ties these two kinds of cases together?”

“That’s a very good question,” answers Tom. “I really don’t know. What I do know, however, is that all my life, the games I’ve been playing are only a tiny subset of the vast range of games that people play in real life, let alone the truly enormous set of games played by the Creator of the cosmos, when designing Nature. I’ve now come to realize that losing at chess and Go doesn’t matter very much, in the scheme of things. There are far more interesting games to play. And now, I’m off.”

“Where are you off to?” asks Al.

“Washington,” answers Tom. “I’m going to try my hand at political forecasting. Maybe I’ll succeed, or maybe fall flat on my face. But you’ve given me a lot to think about, Al. I’m going to try out some of the new ideas you’ve given me, and put them to the test. Wish me luck!”

I shall end my story there. I wonder if any of my readers can shed some light on the question posed by Al on human intuition, at the end of my story. What, if anything, unifies the two kinds of cases I have described?

Before I finish, I’d like to quote a short passage from an article by philosopher David Oderberg, who is now professor of philosophy at the University of Reading, England. In an article in the Australian magazine Quadrant (Vol. 42, No. 3, 1998: 5-10), he wrote:

“…[T]he game of chess, in itself, is nothing more than glorified noughts and crosses. Sure, it can be played with finesse, ingenuity, artistry and so on; but that is peripheral. In essence, chess is a formal system of well-defined axioms and rules, with a well-defined goal. No wonder a computer can play it at all. We should be amazed if it couldn’t.”

Food for thought. And now, over to you.

P.S. Perceptive readers will have noticed some similarities between my story and philosopher John Searle’s Chinese room thought experiment. My intention here, however, is not to address the question of whether computers think, or whether they are conscious, but rather, to explore the strengths and weaknesses of human intuition.

Comments
I think the word “intelligence” is being wildly equivocated here. There is a vast difference between an “intelligence” that can “calculate” a result where the end is calculable and an “intelligence” that goes about inventing (designing) the game of Go for no other reason than to occupy/utilize its circuits for enjoyment.
And yet another "Ah, yes, but is it intelligent?" I really wonder what is behind this desire to figure out if playing Go means the machine is intelligent, if the internet will be conscious by 2100, if a brain or 2500 processors are more impressive hardware, or if AlphaGo can fold laundry?
hrun0815
March 13, 2016, 09:37 AM PDT
So now computers have experiences.
Mung
March 13, 2016, 09:29 AM PDT
vjtorley: “I think I can explain, although I’m no Go expert,” answers Al. “Essentially, what I did was to pool the combined wisdom of billions of players who came before me.”
All good players study the play of previous masters. However, AlphaGo did more than that. AlphaGo learned from the experience of playing the game; again, something all good players do.
Zachriel
March 13, 2016, 07:30 AM PDT
Aleta, AlphaGo does not enjoy the game. More, or is it less, than that: it isn't even aware that it is a game of Go that it is playing. I think the word "intelligence" is being wildly equivocated here. There is a vast difference between an "intelligence" that can "calculate" a result where the end is calculable and an "intelligence" that goes about inventing (designing) the game of Go for no other reason than to occupy/utilize its circuits for enjoyment. Stephen
SteRusJon
March 13, 2016, 07:11 AM PDT
lol. Nice story. Here's a list of game complexities from Wikipedia: https://en.wikipedia.org/wiki/Game_complexity#Complexities_of_some_well-known_games I haven't looked at it in a while, but according to some criteria, Go exceeds the UPB, whereas chess does not.
tragic mishap
March 13, 2016, 05:36 AM PDT
You are correct. I meant that for a human to play Go he has to think about Go. To program a computer to play a game, the programmer has to reflect on the process of making decisions. In both cases it's the human doing the thinking.
hnorman5
March 13, 2016, 05:31 AM PDT
hnorman5: I am very happy that computers can play Go. To play Go you have to think about Go.
Nope. Nice try but no cigar. Computers don't think, but instead manipulate symbols without understanding the meaning of the sentences/formulas that are constituted by words/symbols. See Searle's Chinese Room argument.
Origenes
March 13, 2016, 04:41 AM PDT
I am very happy that computers can play Go. To play Go you have to think about Go. To write a program that plays Go, you have to think about how we think. I wonder though -- how long until we make a computer that feels bad when it loses?
hnorman5
March 12, 2016, 11:05 PM PDT
I don’t have to prove it. It was plainly stated by the programming team. Look it up.
I did find that this is what happened, but not that it couldn't have happened otherwise. Clearly you are better informed than I am and should easily be able to post the relevant note from the programming team.
I doubt that Sedol [...]
Or, in other words: "Yes, sure, but is it intelligent?"
hrun0815
March 12, 2016, 10:31 PM PDT
Just real quick: I did not claim that the Turing test was passed by Eugene. I still don't believe that you accurately describe what AlphaGo does (pool the combined wisdom of billions of players). That pooling, or training with human games, is not what allows AlphaGo to mop the floor with a 9-dan player. And I do believe that you are wrong about AIs not being able to deal with ambiguities but instead requiring a closed system. The rapid improvement of autonomous vehicles, as an example, belies that assertion.

And I don't disparage Mapou. I simply point out that it is difficult to draw a comparison between two completely different systems based on hardware, rather than output. Humans are clearly better at a huge number of things in this world, but the list of things where AIs outperform even the best humans is only getting longer.

And finally, the discussion about whether or not the internet or computers could ever be conscious I'll leave to you. It seems such a silly thing to attempt to predict, since it is so poorly defined. That's nearly as silly as attempting to predict at what point computers might be "intelligent".
hrun0815
March 12, 2016, 09:23 PM PDT
hrun0815, Thank you for your posts. You write:
And this: “But you could never design a robot that could win an American presidential election.” sounds very much like “But you could never design a robot that could successfully beat a Turing test.”
Funny you should mention the Turing test. Evidently you haven't read my Uncommon Descent article, Why no 13-year-old boy talks like Eugene Goostman (June 10, 2014) or my more recent article, Can intelligence be operationalized? (March 11, 2015), which defends the validity of the Turing test. You also write that my story "completely fails in describing how AlphaGo plays the game." I was quite clear in my story that AlphaGo's triumph in the game of Go (as exemplified by the character named Al) was due not to a brute-strength approach but to its ability to "pool the combined wisdom of billions of players." (OK, maybe "billions" was an exaggeration.) I added that AlphaGo's (or Al's) victory means that "a sufficient amount of experience can beat human intuition, in a tactical game." I stand by that description. As Mapou points out, "AlphaGo was trained on many thousands of pre-recorded games played by human professionals over many decades." I think Origenes made a very telling point when he wrote:
The BBC article states: “Unlike the real world, a closed system of fixed rules suits computing” and I hold that this is spot on. Computers can only excel when there is a fixed set of rules — a fixed context, as with chess and Go. However computers are helpless when a new context is in place. Why is this? Because in order to calculate a computer needs a correct “atomization of meaning”, which breaks down when contexts are changing.
You might object that a computer could be programmed to learn new rules in a changing situation. You're quite right, but Origenes' point is that at any given point in time, the rules themselves are clear-cut, with no room for ambiguity.

Finally, you disparage Mapou's observation that Lee Se-dol's feat is much more impressive because he became a Go master after playing just a few hundred games, whereas AlphaGo required many times more than that, by pointing out that Se-dol had "a quarter of a century of training." I ask: training at what? Learning how to walk as a baby? Learning how to read as a child? Why should that count? By any fair reckoning, Se-dol has indeed had far less training than AlphaGo.

There's one thing I do agree with you about, however. AlphaGo's 1200 CPUs and 170 superfast GPUs are no match for the 90 billion neurons in Lee Se-dol's brain. I explain why in my post, Could the Internet ever be conscious? Definitely not before 2115, even if you're a materialist. Computers might be faster at particular tasks, such as calculating, but there are many things a human being can routinely do which a computer cannot do at all.
vjtorley
March 12, 2016, 08:59 PM PDT
GaryGaulin at #10:
At least we agree on this one. I also say: so what?
Oh, nothing important, Gary! Maybe just a minor reinforced conclusion: that the human mind is supernatural. And maybe humans and all living organisms are supernatural. And maybe everything has a supernatural origin. Just think a little bit, Gary! The materialist believes that matter (call it evolution, emergence, constructal theory, whatever) created everything, including humans and their minds. But the human mind is such an intangible, incomprehensible artifact that even humans cannot figure out how it works. Pretending that matter (again, call it evolution, emergence, constructal theory, whatever) created even the simplest form of life is just an incredibly absurd and ridiculous belief.
InVivoVeritas
March 12, 2016, 08:42 PM PDT
But the computer doesn't enjoy playing, and I do (did, as I don't play anymore), so there's a difference! :-)
Aleta
March 12, 2016, 08:36 PM PDT
hrun0815:
And you can prove this how?
I don't have to prove it. It was plainly stated by the programming team. Look it up.
Because Sedol was able to play by using only his ~90 billion neurons after about a quarter of a century of training?
I doubt that Sedol uses more than 1% of his neurons to play Go. As the late Marvin Minsky once wrote, the mind is a society of agents. The "easy" stuff in life, such as doing chores around the house, is many orders of magnitude more complex than Go or chess. Otherwise, deep neural networks would be cooking meals and cleaning up the house instead of playing Go and Atari games. And, regardless of how long Lee Sedol trained, he never played more than a few hundred professional-level games in his lifetime. A thousand at the most, maybe. Furthermore, he does not have fast, dedicated parallel GPUs crunching Monte Carlo-based tree-searching algorithms to work with. He only has his super-slow neurons. So yes, I am impressed with Sedol.
Mapou
March 12, 2016, 08:33 PM PDT
This gave the program an initial evaluation function that it could not have otherwise.
And you can prove this how? AlphaGo could have just learned its evaluation function from scratch. It would just have taken longer.
In addition, [AlphaGo] used 1200 CPUs and 170 superfast GPUs for parallel processing. [...] I’m much more impressed with Sedol.
Because Sedol was able to play by using only his ~90 billion neurons after about a quarter of a century of training?
hrun0815
March 12, 2016, 08:05 PM PDT
The fact that AlphaGo won 3 games against Lee Se-dol without brute force number crunching is testament to the ingenious AI. I applaud the bunch of human software engineers who made this AI.
This is not true, of course. AlphaGo was trained on many thousands of pre-recorded games played by human professionals over many decades. This gave the program an initial evaluation function that it could not have had otherwise. It then used this evaluation function in conjunction with tree searching and a simple reinforcement learning algorithm to play millions of games against copies of itself. This allowed it to improve its evaluation function even further. In addition, it used 1200 CPUs and 170 superfast GPUs for parallel processing. This is about as brute force as it gets. I doubt that Lee Sedol played millions of games in his lifetime. He's not that old. A few hundred at the most. I'm much more impressed with Sedol.
Mapou
March 12, 2016, 07:51 PM PDT
AlphaGo didn't learn to play Go because it was looking for entertainment. Just sayin'. Stephen
SteRusJon
March 12, 2016, 07:18 PM PDT
The fact that AlphaGo won 3 games against Lee Se-dol without brute force number crunching is testament to the ingenious AI. I applaud the bunch of human software engineers who made this AI.
Me_Think
March 12, 2016, 06:32 PM PDT
At least we agree on this one. I also say: so what?
GaryGaulin
March 12, 2016, 04:52 PM PDT
The BBC article states: “Unlike the real world, a closed system of fixed rules suits computing” and I hold that this is spot on. Computers can only excel when there is a fixed set of rules — a fixed context, as with chess and Go. However computers are helpless when a new context is in place. Why is this? Because in order to calculate a computer needs a correct “atomization of meaning”, which breaks down when contexts are changing. Professor Edward F. Kelly wrote about his initial optimism about Computational Theory of the Mind [CTM], before he understood this problem:
I must also acknowledge here that I myself initially embraced the computational theory practically without reservation. It certainly seemed an enormous step forward at the time. Fellow graduate students likely remember my oft-repeated attempts to assure them that the CTM would soon solve this or that fundamental problem in psychology. But all was not well.
Later on it hit him:
Any scheme based on atomization of meaning would necessarily fail to capture what to me had become the most characteristic property of word-meaning, a felt Gestalt quality or wholeness, at a level of generality that naturally supports extensions of usage into an indefinite variety—indeed whole families—of novel but appropriate contexts. The existing proposals could only represent the content of a general term such as “line” by some sample of its possible particularizations, and in so doing rendered themselves systematically unable to distinguish between metaphorical truth and literal falsehood. I also noted with a certain degree of alarm that this crucial property of generality underlying the normal use of words seemed continuous with developmentally earlier achievements. Skilled motor acts and perceptual recognition, for example, require similar on-the-fly adaptations of past learning to present circumstance. The importance of incorporating more general knowledge of the world into language-processing models, for example, had already begun to be recognized, and new formal devices were being introduced to represent what the computer needed to know (what we ourselves know) about various sorts of “typical” situations it might encounter. But it seemed clear to me that all of these knowledge-representation devices, such as “frames” (Minsky, 1975), “scripts” (Schank & Colby, 1973), and “schemata” (Neisser, 1976), suffered essentially the same problems I had identified in the Katz and Fodor account of word meaning. Specifically, they required the possible scenarios of application to be spelled out in advance, in great but necessarily incomplete detail, and as a result ended up being “brittle,” intolerant of even minor departures from the preprogrammed expectations. Many of the themes just sounded have been confirmed and amplified in more recent work. 
On the positive side, our knowledge of the content, organization, and development of the human conceptual system has increased enormously. The old Socratic idea that concepts must be defined in terms of necessary and sufficient features has given way to a recognition of the role of perceptual-level examples, prototypes, and family resemblances in the content of real human concepts (Medin & Heit, 1999; Rosch & Lloyd, 1978; E. E. Smith & Medin, 1981). The contributions of a fundamental human capacity for metaphorizing at levels ranging from everyday language to the highest flights of creativity, are also now more widely appreciated (Gentner, Holyoak, & Kokinov, 2001; Hofstadter & FARG, 1995; Holyoak & Thagard, 1995; Lakoff, 1987, 1995; see also our Chapter 7). Computer language-processing systems have tended to move, as I predicted, toward a relatively simplified and “surfacy” syntax coupled with richer representations of the lexicon, and they sometimes attempt to resolve residual ambiguities with the aid of statistical data derived from large databases. Hubert Dreyfus (1972) systematically questioned both the progress and the prospects of CS and AI. He began by reviewing the early work in game-playing, problem-solving, language translation, and pattern recognition. Work in each domain was characterized by a common pattern consisting of encouraging early success followed by steadily diminishing returns. E. F. Kelly, Irreducible Mind, 2007 [My emphasis]
Origenes
March 12, 2016 at 04:13 PM PDT
hrun0815:
Funny you bring that up…
Yes, it is funny, considering all the buzz about self-driving cars from Google and the like. But guess what? Fully autonomous self-driving cars are still many years away. Engineers will have to spend many more years training them before they can be trusted to run loose on their own. The current crop of autonomous vehicles is completely lost when faced with the hand signals of a road construction worker. And forget about any detour that is not registered in their internal GPS-enabled maps.
Mapou
March 12, 2016 at 03:08 PM PDT
It’s not the kind of intelligence you would want to drive you and your car to work in the morning.
Funny you bring that up...
hrun0815
March 12, 2016 at 02:57 PM PDT
AlphaGo uses a combination of AI techniques to play Go. First, its deep neural network was trained on thousands of patterns from pre-recorded games played by humans; this is called the "policy network". Second, it uses a combination of the Monte Carlo tree-search algorithm and reinforcement learning to build a value network. It was further trained by playing against other instances of itself millions of times.
So is it intelligent? Yes, of course, but it's a dumb, brittle, domain-specific type of intelligence. It's not the kind of intelligence you would want to drive you and your car to work in the morning. Why? Only because it's helpless if it encounters a situation it has never seen before. And for all its prowess, AlphaGo could not tell the difference between Go and tic-tac-toe, and could not even tell you what a board game is.
The problem with current AI technologies is that, in spite of claims to the contrary, they are still doing GOFAI (good old-fashioned AI). Supervised deep learning is still the same "baby boomer" symbolic AI of the last century, but with a little lipstick on. One only needs to ask oneself the following question: which is harder, beating a Go grandmaster or doing the laundry (loading, washing, drying, ironing, folding, etc.)? We humans have no trouble doing the laundry and all sorts of other routine chores around the house. Current AI systems would be completely stumped.
In conclusion, truly general AI is coming, but it's not coming from the mainstream AI community. Those guys are totally out to lunch. They're not even in the same galaxy as the right approach, let alone the same ballpark.
Mapou
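For readers curious what the Monte Carlo tree search mentioned above actually looks like, here is a toy sketch in Python: plain UCB1-guided search with random rollouts, applied (fittingly) to tic-tac-toe, with no neural networks at all. Everything here — the `Node` class, the exploration constant, the iteration count — is an illustrative assumption and has nothing to do with DeepMind's actual implementation.

```python
import math
import random

# Tic-tac-toe board: a tuple of 9 cells, each 'X', 'O', or ' '.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def play(board, i, player):
    b = list(board)
    b[i] = player
    return tuple(b)

class Node:
    def __init__(self, board, player, parent=None):
        self.board, self.player = board, player  # player = side to move
        self.parent, self.children = parent, {}
        self.visits, self.wins = 0, 0.0

def ucb1(parent, child, c=1.4):
    # Exploitation term plus exploration bonus for rarely-visited children.
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def rollout(board, player):
    # Random playout to a terminal position; returns 'X', 'O', or None (draw).
    while winner(board) is None and moves(board):
        board = play(board, random.choice(moves(board)), player)
        player = 'O' if player == 'X' else 'X'
    return winner(board)

def mcts(root_board, root_player, iterations=2000):
    root = Node(root_board, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.board)):
            node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
        # 2. Expansion: add one untried move, unless the game is over.
        untried = [m for m in moves(node.board) if m not in node.children]
        if untried and winner(node.board) is None:
            m = random.choice(untried)
            nxt = 'O' if node.player == 'X' else 'X'
            child = Node(play(node.board, m, node.player), nxt, node)
            node.children[m] = child
            node = child
        # 3. Simulation: random playout from the new node.
        result = rollout(node.board, node.player)
        # 4. Backpropagation: credit each node from the perspective of the
        #    player who just moved into it; draws count half.
        while node:
            node.visits += 1
            mover = 'O' if node.player == 'X' else 'X'
            if result == mover:
                node.wins += 1
            elif result is None:
                node.wins += 0.5
            node = node.parent
    # The most-visited child of the root is the recommended move.
    return max(root.children, key=lambda m: root.children[m].visits)

best = mcts((' ',) * 9, 'X')
print(best)  # an index 0-8; with enough iterations, often the center (4)
```

AlphaGo's contribution was to replace the random rollouts and uniform move choices here with the trained policy and value networks, which is exactly the hybrid of search and learning described above.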
March 12, 2016 at 02:38 PM PDT
And just one last point: your analogy describes reasonably well how Deep Blue worked when playing chess, but it completely fails to describe how AlphaGo plays the game.
hrun0815
March 12, 2016 at 02:12 PM PDT
Go is a wonderful game, and the kind of strategic decisions one has to make are very different from the more linear thinking involved in chess. I first learned to play when I was 14, using thumbtacks pushed into the top of a cardboard box, and I still have the pieces I bought in Chinatown in 1967. This comment has absolutely nothing to do with the OP - I'm just glad the game itself is getting some publicity, and it's fun for me to reminisce.
Aleta
March 12, 2016 at 02:11 PM PDT
And this: "But you could never design a robot that could win an American presidential election." sounds very much like "But you could never design a robot that could successfully pass a Turing test."
hrun0815
March 12, 2016 at 01:53 PM PDT
Ah yes, the standard commentary (albeit long in this case) to every advancement in AI research: "Yes, sure, but is it intelligent?"
hrun0815
March 12, 2016 at 01:49 PM PDT
The news that a computer program has beaten Go master Lee Se-dol in a best-of-five competition may have shocked some readers. In this post, I’d like to explain why I don’t think it matters much at all, by telling a little story about three guys named Tom, Sam and Al.
For me the shocking thing is that this is happening quite a bit sooner than expected. Perhaps Fan Hui will turn out to be the last human to beat the leading computer program at Go.
daveS
March 12, 2016 at 12:41 PM PDT