rational search

Rational search is founded on decision theory. The idea is to mathematically analyze the decisions that a search algorithm makes, so that they can be made as well as possible with the information on hand. For example, if searching more deeply under a given node is estimated to be likely to affect the search result, that node is a good candidate for deeper search.
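That selection rule can be sketched in a toy form. This is purely illustrative and not any of the algorithms in the papers below: leaf evaluations are treated as noisy estimates with a known standard deviation, and the leaf chosen for deeper search is the one most likely, under a normal-error assumption, to change which move looks best at the root.

```python
import math

def change_probability(value, best_value, sigma):
    """Rough probability that a leaf's true value exceeds the current best,
    using a normal approximation for the evaluation error."""
    if sigma <= 0:
        return 0.0
    z = (value - best_value) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

def pick_leaf_to_expand(leaves):
    """leaves: list of (name, estimated_value, sigma) under the root.
    Returns the name of the leaf whose deeper search is most likely
    to change the decision at the root."""
    best_value = max(v for _, v, _ in leaves)
    best_name, best_p = None, -1.0
    for name, value, sigma in leaves:
        if value == best_value:
            # Searching the current best matters if it might fall below
            # the runner-up, so use the runner-up as the threshold.
            others = [v for _, v, _ in leaves if v != value] or [value]
            p = 1.0 - change_probability(value, max(others), sigma)
        else:
            p = change_probability(value, best_value, sigma)
        if p > best_p:
            best_p, best_name = p, name
    return best_name
```

Note what falls out: a clearly inferior leaf with a wildly uncertain evaluation can be a better search candidate than a close runner-up whose value is already well pinned down, which is exactly the kind of judgment alpha-beta never makes.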

Rational search algorithms require learning for search control, and they can also use learned evaluation functions. The domain-independent search algorithm promises generality, and the domain-dependent learned search control knowledge promises power, so this is a crucial research area. Though I’d also guess that, once we’ve made enough progress, these general and powerful search algorithms will look very different from our current ideas.

To be rational in the classical sense, you have to know all of the implications of your beliefs. For example, if you believe a few axioms, then to be classically rational you have to also believe all the theorems that can be proved from them. That’s not remotely realistic. The “rational” in rational search means limited rationality, rationality taking into account the computational limitations of the reasoning agent. A related term is “bounded rationality”.

Rational search is the future, but it’s not all there yet. Rational search algorithms are complex, and it’s difficult to make them efficient. Be cautious of comparisons between rational algorithms and alpha-beta. Alpha-beta may be operating at a handicap, without all the sophisticated features of a top performance program.

- Do the right thing: Studies in limited rationality (1991)
Stuart Russell and Eric Wefald
A book. Includes an inspiring discussion of what rational search is and why it makes sense. The book also proposes and analyzes specific algorithms. The algorithms are no longer state of the art—in fact, they are not useful in themselves—but you can learn a lot from the analyses.

- Extended abstract: Learning search strategies (1999, 5 pages)
Daishi Harada and Stuart Russell
Postscript. Treating search as a real-time control problem. Tested with the game of Tetris.

- A Bayesian approach to relevance in game playing (1997 Citeseer)
Eric Baum and Warren Smith
Constructing a more capable algorithm, BP (for “Bayesian Program”). It’s highly sophisticated, and somewhat hard to understand. I believe this is an updated version of the 1996 paper “A Bayesian approach to game playing” by the same authors.

- Experiments with a Bayesian game player (1996 Citeseer)
Warren Smith, Eric Baum, Charles Garrett, Rico Tudor
Experiments with BP from the above paper.

- Propagating distributions up directed acyclic graphs (1998 Citeseer)
Eric Baum and Warren Smith
Efficiently handling the case where the same position can occur more than once in the game “tree” (which is of course really a directed acyclic graph). In other words, adding transposition tables to the search.
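For ordinary point values, the transposition idea is just memoization: cache each position so a state reached by several move orders is searched once. Here is a minimal sketch on an invented toy game (states are integers, each move adds 1 or 2, and whoever reaches 10 wins); the hard part the paper addresses, propagating full value distributions, is not shown, because shared subgraphs make the values of "independent" siblings dependent.

```python
def winner_score(state, maximizing, table):
    """Minimax over the toy count-to-10 game with a transposition table.
    Scores are from the maximizing player's perspective: +1 win, -1 loss."""
    key = (state, maximizing)
    if key in table:          # transposition: position already searched
        return table[key]
    if state >= 10:
        # The player who just moved reached 10 and won.
        score = -1 if maximizing else 1
    else:
        children = [winner_score(state + d, not maximizing, table)
                    for d in (1, 2)]
        score = max(children) if maximizing else min(children)
    table[key] = score
    return score

result = winner_score(0, True, {})  # first player wins with correct play
```

In a plain tree search the same counts would be re-searched once per path reaching them; with the table each (state, side-to-move) pair is evaluated once, which is the whole point of treating the "tree" as the DAG it really is.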

- Advances in decision-theoretic AI: Limited rationality and abstract search (1994 Citeseer)
Michael P. Frank
A master’s thesis (as of May 2012 Citeseer offers misinformation about the author). This surveys the field of rational search. It also explains abstract search and makes a start on understanding it.

updated 17 June 2012 (added the sentence about future algorithms likely being different)