
Artificial Intelligence and the Game of Checkers


I was going to post this as a comment in Salvador’s thread (https://uncommondescent.com/index.php/archives/1481) but it became too long, so I’m posting it as a new topic.

On the subject of computer checkers I have some observations. I hate to brag (okay, I lied!) but I am one of the world’s leading authorities on this subject.

I know David Fogel. He is a very bright researcher and a really cool, nice guy. (Plus, he is a very talented pianist and musician. We have shared our piano CDs. Is there some correlation between music, mathematics and computer science?)

There are three essential elements in an artificial-intelligence (AI) games-playing computer program:

1) A move generator. The board and pieces must be represented in the computer’s memory, either as arrays or bit-maps. An algorithm must be devised to generate moves based on the rules of the game.

2) A tree-search algorithm. This is even more complex and requires sophisticated software engineering. The game tree explodes exponentially, so an alpha-beta minimax algorithm is employed. Other sophisticated algorithms must be used to curtail the exponential explosion: quickly accessible hash tables that contain previously-seen positions, iterative deepening with best-move prediction accessed from the hash tables, use of killer moves and move sorting based on a history heuristic, and much more.

3) A terminal-node evaluation function which attempts to ascertain the value of a position statically, since the tree search must terminate at some point. These values are backed up through the search tree with the alpha-beta minimax algorithm.
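
To make elements 2 and 3 concrete, here is a minimal sketch of a depth-limited alpha-beta search (in negamax form) sitting on top of a static evaluator. This is illustrative only, not code from any actual checkers engine; the legal_moves, apply, and evaluate functions are hypothetical stand-ins for a real move generator and terminal-node evaluation function.

# Minimal depth-limited alpha-beta search in negamax form (illustrative sketch only).
# Hypothetical helpers, standing in for elements 1 and 3:
#   legal_moves(pos) -> list of legal moves for the side to play
#   apply(pos, move) -> the new position after making the move
#   evaluate(pos)    -> static score from the mover's point of view
#                       (assumed to return a large negative score when the mover has no moves)
import math

def alphabeta(pos, depth, legal_moves, apply, evaluate, alpha=-math.inf, beta=math.inf):
    moves = legal_moves(pos)
    if depth == 0 or not moves:           # leaf node, or no moves left (a loss in checkers)
        return evaluate(pos)
    best = -math.inf
    for move in moves:                    # a real engine sorts moves first (killers, history)
        score = -alphabeta(apply(pos, move), depth - 1,
                           legal_moves, apply, evaluate, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                 # cutoff: the opponent will never allow this line
            break
    return best

A production engine layers the hash tables, iterative deepening, and move-ordering heuristics mentioned above on top of this core; they speed up the search but do not change the minimax value it computes.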

Most of the playing strength of these programs comes from the tree search (which is not possible without the move generator), since humans have a hard time keeping track of this much detail.

Fogel and his team of software engineers programmed elements 1 and 2, without which element 3 (the product of the genetic algorithm) would have been completely useless. The fitness function was the definition of a win at the end of the tree search: no moves left for the opponent. This was a goal defined by the programmers. The leaf-node evaluation function is not essentially strategic; “strategy” is mostly a byproduct of the tree search.

This is not to say that Fogel’s research is meaningless. It is intriguing and worthwhile. What all this means is that a cleverly designed trial-and-error process, combined with a lot of even more clever software engineering and human-devised algorithms, can produce interesting and productive results.

It should be noted that human-crafted (i.e., intelligently designed) terminal-node evaluation functions trounce Blondie’s eval function with ease.

The really cool stuff in games-playing programs is the development of endgame databases. The massive brute force and memory of modern computers, and highly efficient retrograde analysis algorithms, have produced mind-boggling results.
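
For readers curious how retrograde analysis proceeds, here is a rough conceptual sketch (not our actual database builder) of computing win/loss/draw values by working backwards from terminal positions. The names positions, successors, and has_no_moves are hypothetical placeholders, and the sketch assumes every successor lies within the slice being solved; a real build also consults smaller, already-solved databases.

# Conceptual sketch of retrograde analysis for a win/loss/draw (WLD) endgame database.
# Hypothetical inputs:
#   positions          - every position in the slice being solved
#   successors(pos)    - positions reachable in one legal move
#   has_no_moves(pos)  - True if the side to move has no legal move (a loss in checkers)
WIN, LOSS, DRAW, UNKNOWN = "W", "L", "D", "?"

def build_wld(positions, successors, has_no_moves):
    value = {p: (LOSS if has_no_moves(p) else UNKNOWN) for p in positions}
    changed = True
    while changed:                              # propagate values until a fixpoint
        changed = False
        for p in positions:
            if value[p] != UNKNOWN:
                continue
            succ_values = [value[s] for s in successors(p)]
            if LOSS in succ_values:             # some move leaves the opponent lost
                value[p] = WIN
                changed = True
            elif all(v == WIN for v in succ_values):   # every move lets the opponent win
                value[p] = LOSS
                changed = True
    for p in positions:                         # whatever never resolves is a draw
        if value[p] == UNKNOWN:
            value[p] = DRAW
    return value

Roughly speaking, a perfect-play lookup database additionally records enough distance information at each position to read off a best move directly, rather than just the game-theoretic value.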

My colleague and I computed both win-loss-draw (WLD) databases and perfect-play lookup (PPL) databases for the game of checkers. (We are the only ones to have computed PPL databases for checkers.) The PPL databases required nearly two months of 24-hour-per-day computation on my dual-processor machine equipped with four gigabytes of RAM. I kept both CPUs and all four gigs busy for that entire time.

You can download a copy of our paper on checkers endgame databases here. This paper was published in the premier international peer-reviewed publication for games-playing AI programs, the ICGA Journal, and was reproduced in the hardcover book, Advances in Computer Games, Proceedings of the 10th Advances in Computer Games Conference.

Alex Moiseyev, the current human checkers world champion, after playing the PPL databases, exclaimed in his thick Russian accent: “This play is INHUMAN!”

You can download a free small version of my checkers program at http://WorldChampionshipCheckers.com. You can get a copy of the massive and really cool PPL and WLD databases for the cost of disk duplication and shipping ($25) from bobnewell@bobnewell.net. The highly compressed data require 12 gigabytes of disk space.

There is an interesting aspect to this massive computing capability. I call it “the elegance of brute force.” The PPL databases have corrected classic human play that has been around for centuries. Was any new information produced?

I suggest that the answer to this question is both yes and no. Yes: solutions were found that no human could ever have found without the power of modern computation. No: the results were implicit in the rules of the game all along, waiting to be ground out by brute force. After all, what human or team of humans could compute pi to a billion decimal places?

Comments
Music appreciation is most definitely subjective. The undergirding structure of compositions is not.
Scott
August 24, 2006 at 08:38 AM PDT
Music appreciation is intensely subjective. And since the appreciation is subjective, so too must be the composition; otherwise everyone could be a wildly successful composer just by following formulaic procedures. I think the mathematical connection pretty much starts and ends with simple relationships between notes on the scale and also the way the notes combine to form harmonic sounds.
DaveScot
August 24, 2006 at 08:03 AM PDT
I am terribly sorry if I strayed ever so little from the subject of the thread. It was the same Pythagoreans who discovered that the frequency of the octave is twice that of the lower note as the ones who discovered that A squared plus B squared equals C squared, known among carpenters as the 3,4,5 rule. Mathematics and the rules of music are obviously closely related. They even originated together, and neither is subjective.
"A past evolution is undeniable, a present evolution undemonstrable."
John A. Davison
August 24, 2006 at 07:26 AM PDT
You are not strange, Scott, I feel your pain... I'm a bassist too (currently in a Death Metal band) and I have a bad time when it comes to math. Is there some correlation between bass playing and poor mathematical skills? =P
Marcos
August 24, 2006 at 07:20 AM PDT
I play 4 & 5 string bass and guitar. I have been in a couple of "touring" bands, 2 of which almost signed major label deals (Warner Bros. & Capricorn Records). I can't solve math equations beyond basic polynomials to save my life. Yet, I do computer programming every day. I am strange.
Scott
August 24, 2006 at 06:54 AM PDT
John Davison: If you read the part of the article asking about music you'll see a couple of things. First of all, the question was asking about a correlation between music, math, and computer science, not just music and math. Second, in context, the music portion of the question regarded talent at playing piano. I have CDs of Gil playing piano. Then Gil, because he is strong in math, computer science, and piano playing and ran into someone else with the same set of talents, wondered if the three tended to run in groups. I answered no (based on anecdotal evidence) because I know a lot of people, including myself, who are strong in math and computer science but play no musical instruments, or at least don't play them well.
DaveScot
August 24, 2006 at 06:47 AM PDT
Tom English: "Along those lines, I should mention that several years ago David and associates evolved new parameter values for the 'intelligently designed' static evaluator of a public-domain chess program."

"Fitness among Competitive Agents: A Brief Note," by William A. Dembski, should probably be considered in the discussion: http://www.designinference.com/documents/2005.10.fitness_among_competitive_agents.pdf
j
August 24, 2006 at 04:50 AM PDT
Of course there are many similarities between music and mathematics. My God, the Pythagoreans used music as an example when they founded mathematics. You may recall that on another thread I suggested that music, like mathematics, had already been composed before the presumed composers "discovered" it. I notice that suggestion did not evoke much response, but I was deadly serious. Surely no one is so weak-minded as to suggest that mathematics was the invention of the human mind, are they? Perhaps some are, but I am not one of them. I believe that all of science, which is of course based on mathematics, has always been there just waiting to be discovered. I have simply tentatively extended this notion to include all of man's experiences. What we have dismissed as "subjective" may not be that at all, but just as "prescribed" as every other aspect of the universe.

Just as mathematics proceeded through a series of ascending levels of complexity, so did music, literature and the arts generally. In my opinion our cultural life has reached what Schindewolf called the evolutionary stage of "typolysis," or degeneration followed by extinction. An interesting question might be: is that happening to mathematics as well? I can't answer that question except to say that much of recent mathematical and cosmological theory is utterly beyond my comprehension, but then so is most of modern music and art as well. Perhaps we have finally blown our intellectual and cultural wad. Euclid's Elements in several volumes may be little different in origin from Beethoven's 9 symphonies. Both may have been simply discovered and, of course, subsequently published. They are now both for eternity, for which we should be very grateful.

I know this will not sit well with many, but doesn't it remain in fundamental agreement with Einstein's unqualified dictum, "Everything is determined... by forces over which we have no control"? I say it does. What say others, and why?

"A past evolution is undeniable, a present evolution undemonstrable."
John A. Davison
August 24, 2006 at 04:02 AM PDT
I think backgammon is a rather more interesting subject of study than e.g. chess and checkers. The latter games are deterministic, and thus a single blunder by the AI can be exploited endlessly by a human. Also, the "branching factor" of these two games is sufficiently low to allow for a systematic tree search. (Which is not to say that these games as a whole are simple!) The remaining problem, then, is simply increasing the search depth by whatever means.

In backgammon, the average branching factor is above 200, which makes a tree search infeasible. (There are 21 different rolls, and more than 10 valid moves for each roll, on average.) Because the available moves for each position are limited by the rolled dice, the game is nondeterministic. A single mistake usually only *decreases* your chance to win, but does not guarantee a deterministic victory to a perfectly-playing opponent. Also, even if a human opponent detects slightly weak spots in the AI, it will be hard to steer a nondeterministic game to a position exploiting a weakness in every game. Thus, it is sufficient for the AI to perform well *on average*. Most biologists would probably agree that this kind of statistical strength, as opposed to perfection, is typical of the behavior resulting from any genotype.

The revolutionary backgammon AIs of the 90s concentrated the bulk of intelligence in the *evaluation* function. (A function which in a chess AI might be as simple as a basic "material strength" calculation, e.g. Q=9/R=5/B=3/N=3/P=1.) This evaluation function doesn't just "rate" each position by giving it so-and-so many points. It outputs the direct probability of e.g. black winning the game. (In practice, winning "gammons" and perhaps even "backgammons" is also taken into account, but let's keep it simple here.) I.e., the evaluation function approximates an extremely deep and complicated search. Yes, it's "only" an estimate, but it greatly impressed grandmasters initially, and it continues to do so today.

The method for creating the evaluation function was neural networks trained with temporal-difference learning. A randomly initialized NN learned by playing against itself. Training the network with corpora of grandmaster-level games proved less successful than pure self-learning. Including extra "tactical data" in the network's inputs, instead of just the "raw encoding" of the game position, has proved more or less a waste of time. (Increasing hidden nodes often works well!) Thus the "core engine" of a backgammon AI is a neural network which simply takes in a game position, outputs an expert estimate of the winning chance, and learns by simply playing against itself.

The current boom in backgammon seems to be GAs. I think the "core engine" remains the same, but instead of reinforcement learning (including the backpropagation method), they are mutating the network randomly. The results are rather good, it seems. But I believe that the training process must take much, much more time than with more traditional NN training.
caligula
August 24, 2006 at 01:31 AM PDT
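
As a rough illustration of the self-play temporal-difference idea caligula describes above, here is a minimal sketch of a TD(0) update for a tiny one-hidden-layer value network. The sizes, encoding, and names are assumptions for illustration, not TD-Gammon's or any real program's code.

# Minimal sketch of a value network trained by TD(0) self-play (illustrative only).
# The board encoding and move generation are assumed to exist elsewhere.
import numpy as np

class ValueNet:
    def __init__(self, n_inputs=198, n_hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.W2 = rng.normal(0.0, 0.1, n_hidden)

    def forward(self, x):
        h = np.tanh(self.W1 @ x)                        # hidden layer
        v = 1.0 / (1.0 + np.exp(-(self.W2 @ h)))        # estimated probability of winning
        return v, h

    def td0_update(self, x, target, alpha=0.1):
        # Nudge V(x) toward `target`: V of the next position during play,
        # or the actual outcome (1 for a win, 0 for a loss) at the end of the game.
        v, h = self.forward(x)
        delta = target - v
        grad_out = v * (1.0 - v)                        # derivative of the sigmoid output
        grad_W2 = grad_out * h
        grad_W1 = np.outer(grad_out * self.W2 * (1.0 - h * h), x)
        self.W2 += alpha * delta * grad_W2
        self.W1 += alpha * delta * grad_W1

# Self-play loop (in outline): for each move, choose the move whose resulting position
# has the highest forward() value, then call net.td0_update(x_current, net.forward(x_next)[0]);
# when the game ends, call net.td0_update(x_last, outcome).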
Gil,

The AI community has terribly mixed feelings about two-adversary games of perfect information. Although computer systems are now better than all human players in checkers and chess, they don't succeed by applying any of the principles of AI that have emerged over the past 50 years. And experimentation with the top-flight AI systems for checkers and chess has taught us precious little that we can apply in other domains. Most of the cleverness in playing these games derives from two old warhorses, minimax game tree search and alpha-beta pruning.

"The game tree explodes exponentially, so an alpha-beta minimax algorithm is employed."

Actually, the growth in number of nodes from level to level in the game tree is exponential even with alpha-beta pruning of branches. (In the worst case, there is no pruning at all. In the best case, the exponent is halved.) Heuristics used in conjunction with the basic methods give some speed-up and improvement in quality of play, but selecting moves by minimax game tree evaluation is fundamentally a brute-force proposition. Playing out the endgame according to stored solutions in a massive database is also brute force. Worse yet, it took several decades of fairly intensive work with checkers and chess for all the pieces to come together. Let me put it another way. When you plot the rating of the world computer chess champion as a function of processor speed and get something close to a straight line, it strongly suggests that programs have not so much gotten smarter as computers have gotten faster.

"Most of the playing strength of these programs comes from the tree search (which is not possible without the move generator), since humans have a hard time keeping track of this much detail."

I disagree, and I have an important point to make. What you are trying to say, I think, is that most of the "playing strength" of a program is attributable to how many moves ahead in the game it can look (i.e., how deep the game tree search goes). Researchers have tried to make the static board evaluator (your "terminal-node evaluation function") contribute more to the quality of play. In other words, they have tried to make the program do better at looking at pieces on the board and assessing how good or bad the situation is for the computer. But this is another failure of checkers and chess research. Programs with quick-and-dirty board evaluation have consistently beaten programs with more accurate (slow) evaluation. In other words, adding "intelligence" slows down the evaluator and keeps the program using it from looking as far ahead in the game as it might have.

"Fogel and his team of software engineers programmed elements 1 and 2, without which element 3 (the product of the genetic algorithm) would have been completely useless."

I think all of the programming was done by one person. The program was an evolutionary algorithm -- the simulation is at the level of phenotypes, not genotypes. The population essentially contained strategies (programs) with static board evaluators implemented as artificial neural networks. Only the static board evaluators evolved.

"The fitness function was the definition of a win at the end of the tree search: no moves left for the opponent. This was a goal defined by the programmers. The leaf-node evaluation function is not essentially strategic; 'strategy' is mostly a byproduct of the tree search."

The program has no fitness function. The approach is one of coevolution, in which the fitness of strategies is assessed by putting them in direct competition with one another. Each strategy plays games against randomly selected opponents in the population. For each game, both players are awarded points at the end. The payoff is 3 points for a win, -2 points for a loss, and 0 points for a draw (I think). After all of the games are played, the strategies with fewer points are culled from the population, and those with more points engage in reproduction-with-variation. The only goal defined by the programmer is to get points. Everything else is given by the game of checkers.

"It should be noted that human-crafted (i.e., intelligently designed) terminal-node evaluation functions trounce Blondie's eval function with ease."

After close to one thousand generations of evolution, David took the best strategy from the population and began using it to play against humans on the Internet. Blondie's opponents did not know they were playing against a computer. Despite very shallow searches of the game tree, Blondie established an expert rating and played better than 99% of human players. In other words, her static evaluator is intelligent.

Now we get to the heart of things. You're going to have to prove that to me in a controlled study. See if David will share the Blondie code with you. Create a copy and do nothing to it but plug in a hand-crafted static evaluator you think will beat Blondie. Then immediately test your version against David's. Both versions of the program must search to the same ply in the search tree. And here is why I hypothesize that your version loses: your intelligently designed static evaluator will not be as intelligent as the evolved one in Blondie, and your program will be able to beat Blondie only by brute force.

Along those lines, I should mention that several years ago David and associates evolved new parameter values for the "intelligently designed" static evaluator of a public-domain chess program. In the beginning, the program was rated a master. After two days of evolution, it was rated a grandmaster. If memory serves, the gain in rating was 389 points. Furthermore, David did not use the asymmetric payoff function that Bill Dembski claimed was a source of CSI in Blondie. Evolving strategies got 1 point for a win, -1 point for a loss, and 0 points for a draw.
Tom English
August 24, 2006 at 12:46 AM PDT
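
A minimal sketch of the coevolutionary scoring Tom English describes above, with the payoffs he gives (+3 win, -2 loss, 0 draw); play_game and mutate are hypothetical stand-ins for the actual checkers engine and the neural-net variation operator.

# Sketch of one generation of coevolution: no explicit fitness function,
# only points earned in games against randomly chosen peers.
import random

PAYOFF = {"win": 3, "loss": -2, "draw": 0}
FLIP = {"win": "loss", "loss": "win", "draw": "draw"}

def one_generation(population, play_game, mutate, games_per_strategy=5):
    points = [0] * len(population)
    for i, strategy in enumerate(population):
        for _ in range(games_per_strategy):
            j = random.choice([k for k in range(len(population)) if k != i])
            result = play_game(strategy, population[j])   # "win", "loss", or "draw" for strategy
            points[i] += PAYOFF[result]                   # both players are scored each game
            points[j] += PAYOFF[FLIP[result]]
    # cull the lower-scoring half; survivors reproduce with variation
    order = sorted(range(len(population)), key=lambda k: points[k], reverse=True)
    survivors = [population[k] for k in order[: len(population) // 2]]
    return survivors + [mutate(parent) for parent in survivors]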
Salvador,

"The answer in Dembski's book is, No."

Boldface "no." Inquisitive minds want to know. Why did he say no?
Tom English
August 23, 2006 at 10:16 PM PDT
I wrote a cribbage AI 20 years ago that people swore cheated. It's still on the internet, available for download at cardandboardgames.com. It doesn't cheat. I simply wrote an expert system that made the same decisions that I would make in any given situation. That alone made it a good competitor. I then improved on mother nature by leveraging what a computer is good at: calculating odds precisely and quickly. As each card was exposed I calculated the odds of where the remaining cards would be. This would not be possible for a human unless he were some kind of savant like Rainman, but it's certainly not cheating. Think of it like card counting at blackjack in Vegas, only more complicated. I didn't take any card into consideration until it had been legally exposed during normal gameplay. This made the program virtually invincible after playing it enough times for luck to average out so skill level can become evident. I could still whip the snot out of it, but that's because I knew exactly what it was thinking, and that's enough of an advantage to nullify the card counting.
DaveScot
August 23, 2006 at 09:14 PM PDT
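
A tiny sketch of the card-counting idea DaveScot describes above: track every card legally exposed so far and compute the chance that the opponent's unseen cards include a given rank. Illustrative only; the four-card hand size and function name are assumptions, not the actual program's logic.

# Probability that at least one card of `rank` sits among the opponent's unseen cards,
# given the ranks of all cards legally exposed so far (a simple hypergeometric calculation).
from math import comb

def chance_opponent_holds(rank, seen_ranks, opponent_unseen=4):
    remaining_of_rank = 4 - seen_ranks.count(rank)       # four cards of each rank in the deck
    unseen_total = 52 - len(seen_ranks)
    if remaining_of_rank <= 0:
        return 0.0
    # 1 minus the chance that none of the opponent's unseen cards have that rank
    none = comb(unseen_total - remaining_of_rank, opponent_unseen) / comb(unseen_total, opponent_unseen)
    return 1.0 - none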
The relevance of computer checkers is tied to pages 221-223 of Dembski's seminal work, No Free Lunch. David Fogel is mentioned:
Perhaps the most subtle example I know of an evolutionary algorithm that appears to generate specified complexity for free is the evolutionary checker program of Kumar Chellapilla and David Fogel.... What is remarkable about this program is that it attained such a high level of play without having to be explicitly programmed with expert knowledge like the world champion chess program Deep Blue or the world champion checker program Chinook. But did the evolutionary checker program of Chellapilla and Fogel find its expert checker-playing neural nets without commensurate input from prior intelligence?
William Dembski
The answer in Dembski's book is, No.
scordova
August 23, 2006 at 09:05 PM PDT
"Is there some correlation between music, mathematics and computer science?" No.DaveScot
August 23, 2006 at 09:00 PM PDT
A brief history of computer checkers
There was Checkers by Gil Dodgen, and Colossus by Martin Bryant. Bryant traded his opening book against Chinook’s endgame database, and in the early 90’s, Colossus was probably the best PC program, but it was never developed further. Gil Dodgen went on to create World Championship Checkers with Ed Trice, …. Murray Cash and the Trice/Dodgen team both computed the 8-piece database by the end of 2001, and after I also finished computing the 8-piece database in early 2002, Schaeffer finally released the 8-piece Chinook database to the public. Nemesis is the current computer world champion. …. On high-end machines, it should be no big problem to compute the 10-piece database, and there are rumours that the Trice/Dodgen team have already done this.
scordova
August 23, 2006 at 08:58 PM PDT