Bob Levinson sent me these four chess games as examples of some of Morph’s better play against gnuchess. Gnuchess is searching three or four ply in these games, so it is not playing strongly. Morph is restricted to one-ply search, relying on its pattern-matching abilities. The hope is that Morph’s automatically learned knowledge will balance gnuchess’s handmade search algorithm.
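To make the contrast concrete, here is a minimal sketch of what a one-ply, evaluation-driven move choice like Morph's amounts to. Everything here is illustrative, not Morph's actual code: the toy "positions" are just integers and the evaluation function is invented. The point is that with no lookahead beyond a single ply, all playing strength has to come from the evaluation.

```python
def one_ply_best_move(position, legal_moves, make_move, evaluate):
    """Choose the move whose successor position evaluates highest.
    One ply only: no search tree, so the evaluation carries all the load."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves(position):
        score = evaluate(make_move(position, move))
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Toy stand-in domain: positions are integers, moves add an offset,
# and the "evaluation" prefers positions close to 10.
legal_moves = lambda p: [-2, -1, 1, 2]
make_move = lambda p, m: p + m
evaluate = lambda p: -abs(10 - p)

print(one_ply_best_move(7, legal_moves, make_move, evaluate))  # prints 2
```

A program like gnuchess instead searches three or four plies of this tree recursively, so it can partly compensate for a crude evaluation; Morph cannot.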
These games aren’t representative of Morph’s strength. The 1992 “Experience-based adaptive search” paper reported (page 17) that Morph had won two and drawn forty-some games against gnuchess out of thousands played.
These chess games are in PGN format (Portable Game Notation). If you don’t have a PGN reader, you can view them as text.
Game 1 - Apparently early in training, Morph is defeated soundly.
Game 2 - Like an ambitious but weak human, Morph tries an unsound sacrificial attack. Gnuchess falls for it, but Morph does not understand how to finish off its victim and settles for a draw by repetition.
Game 3 - Maneuvering for an attack, Morph suddenly threatens mate, and there is no defense. Gnuchess, as traditional programs do, throws away its queen to delay the mate. But Morph doesn’t see the win, and makes error after error. Again, the attack peters out to a draw by repetition.
Game 4 - Gnuchess goes on a material-hunting spree, leaving its king in the center. It is ahead by a rook and two pieces when Morph’s queen intrudes, delivering mate.
What do the games say about Morph’s learning algorithms?
Morph’s style is strikingly different from a traditional chess program’s. Morph plays directly for mate, freely giving up material if it can threaten the enemy king. In game 3, for example, it sacrificed a pawn with the apparent aim of keeping the opposing king in the center (though I don’t know whether that was the program’s actual reason).
It’s refreshing to see, but what it means is that Morph tries to win by the most direct method. It successfully learned to endanger the enemy king, usually with its queen. The games show that it was less successful at learning indirect methods of winning, such as controlling the center of the board and maintaining good pawn structure (methods that traditional programs such as gnuchess excel at). Learning some of these indirect methods is definitely within the state of the art (see “Discovering complex othello strategies through evolutionary neural networks” by Moriarty and Miikkulainen, under marker-based neurogenetic algorithm), so somewhere Morph is missing an ability it needs.
What is the missing ability? I suspect that the problem may lie in two weaknesses of Morph’s design. First, some important chess knowledge is difficult or impossible to represent in Morph’s pattern language. Naturally it can’t learn that knowledge. Second, Morph combines pattern weights into an evaluation in one global step, ignoring interactions between the patterns. Patterns are likely to interact strongly (see “Some studies in machine learning using the game of checkers ii: recent progress” by Arthur Samuel, 1967, and “A pattern classification approach to evaluation function learning” by Lee and Mahajan, 1988, complete references later when I reach them in my queue).
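The second weakness can be illustrated with a small sketch. The pattern names and weights below are invented for illustration; the structure is the point: a single global combining step is just a weighted sum, so any interaction between patterns is invisible to it.

```python
def linear_eval(patterns_present, weights):
    """Combine pattern weights in one global step: a plain weighted sum.
    Any interaction between patterns (say, two patterns that are only
    dangerous together) cannot be expressed by this evaluation."""
    return sum(weights[p] for p in patterns_present if p in weights)

# Invented pattern weights, in centipawn-like units.
weights = {"open_h_file": 30, "queen_near_king": 50, "weak_back_rank": 40}

# Each pattern contributes independently...
print(linear_eval({"open_h_file"}, weights))                     # prints 30
# ...even though an open h-file plus a queen near the king might be worth
# far more than 80 together to a program that modeled their interaction.
print(linear_eval({"open_h_file", "queen_near_king"}, weights))  # prints 80
```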
Morph is poor at tactics, even poorer than it should be with a one-ply search. Sometimes it overlooks mate in one, or loses material for no discernible reason. This may be because it does not generalize over patterns, but learns each pattern's weight independently, no matter how similar one pattern may be to another. That slows down learning and makes the weights less reliable.
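Here is a sketch of what that independence costs. The update rule and pattern names are hypothetical stand-ins, not Morph's actual learning rule: each pattern's weight is nudged toward an observed outcome separately, so a near-duplicate pattern shares nothing with what its twin has already learned.

```python
def update_weights(weights, patterns_in_position, target, lr=0.1):
    """Nudge each present pattern's weight toward the observed outcome,
    independently per pattern. Similar patterns share nothing, so every
    minor variant of a pattern must be learned from scratch."""
    for p in patterns_in_position:
        old = weights.get(p, 0.0)
        weights[p] = old + lr * (target - old)
    return weights

weights = {}
# Two nearly identical mating patterns (invented names) learn separately:
update_weights(weights, {"Qh7_mate_pattern_A"}, target=1.0)
print(weights["Qh7_mate_pattern_A"])           # prints 0.1
# Pattern B, differing by one irrelevant pawn, knows nothing of A's lesson:
print(weights.get("Qh7_mate_pattern_B", 0.0))  # prints 0.0
```

A learner that generalized over similar patterns would let pattern B start from something like pattern A's weight instead of zero.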
Gnuchess, though it is much stronger than Morph, makes mistakes that Morph can occasionally exploit. MorphII is supposed to fix all the flaws of Morph that I mentioned above, and has other improvements besides, so it should do better.