
lazy learning

Today’s post is AI background info. To use a neural network, or a decision tree, or most machine learning methods, you train it first and query it afterward. The system includes a model, represented as a data structure with code to interpret its meaning, and the learning algorithm adjusts the model to fit the training data. Training may be slow, but the model is usually designed so that answering queries is fast.

Some bots, including Steamhammer, do their opening learning differently, without persistent model data: They store only the training data, records of what happened in past games. When querying the system—deciding what opening to play based on experience—they look through the game records to find any relevant information and decide on the fly, not keeping any other permanent representation. That is called lazy learning, as opposed to eager learning, because it computes its model lazily on demand rather than eagerly up front. It’s a special case of lazy evaluation. There is also the closely related instance-based learning, which often means the same thing.
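The idea can be sketched in a few lines. This is a minimal illustration, not Steamhammer's actual scheme: the record format, the keying by opponent name, and the pick-the-best-win-rate policy are all assumptions made up for the example. The point is that "training" is just appending a record, and the model is recomputed from raw records at query time.

```python
import random
from collections import defaultdict

class LazyOpeningLearner:
    def __init__(self, openings):
        self.openings = openings
        self.games = []  # the only stored "model": raw game records

    def record_game(self, opponent, opening, won):
        # Learning is trivial: just store what happened.
        self.games.append((opponent, opening, won))

    def choose_opening(self, opponent):
        # Querying does the real work: scan the records on demand.
        stats = defaultdict(lambda: [0, 0])  # opening -> [wins, games]
        for opp, opening, won in self.games:
            if opp == opponent:
                stats[opening][0] += won
                stats[opening][1] += 1
        if not stats:
            # No experience against this opponent yet.
            return random.choice(self.openings)
        # Hypothetical policy: best observed win rate.
        return max(stats, key=lambda o: stats[o][0] / stats[o][1])
```

An eager learner would instead maintain the win-rate table as a persistent data structure and update it after each game; here that table exists only transiently, inside the query.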

As an aside, this is analogous to the difference between an interpreter and a compiler. Partial evaluation of a lazy learning algorithm produces an equivalent eager learning algorithm, in exactly the same way that the appropriate Futamura projection converts an interpreter into a compiler. Not that that’s important, I just think it’s fun.

The prototypical lazy learning algorithm is k-nearest neighbors: Represent your training data as points in some space, each associated with a value. To query for some point in the space, you find the k closest points whose values you know and fit some function to them (say, take their average, which amounts to fitting a constant function). Choose k > 1 to average out any noise in individual values. I like to think of it in more general terms: Find whatever instances you can that are similar to your query, and generalize or synthesize the information in the instances into an answer to your query. That is a description of case-based reasoning. You can see k-nearest neighbors as a lightweight example and case-based reasoning as a heavyweight example of the same underlying idea.
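Here is a minimal k-nearest-neighbors regressor along the lines just described, averaging the values of the k closest points (that is, fitting a constant function). The flat data layout and brute-force sort are for clarity; a serious implementation would use a spatial index.

```python
import math

def knn_predict(data, query, k=3):
    """data: list of (point, value) pairs, where each point is a
    coordinate tuple; query: a point in the same space."""
    # Brute force: sort all training points by distance to the query.
    by_distance = sorted(data, key=lambda pair: math.dist(pair[0], query))
    nearest = by_distance[:k]
    # Average the k nearest values to smooth out noise.
    return sum(value for _, value in nearest) / len(nearest)
```

Note that nothing is precomputed: all the work happens at query time, over the stored instances.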

Lazy learning does not mean that the learning system has no model; it only means that there is no explicit model stored as a data structure. There is still code to answer queries, and the code embodies an implicit model of how to turn training data into answers.

Lazy learning has advantages and disadvantages. Learning is trivial, since it amounts to storing records of your experience, so it is easy to keep up to date. A deep learning model, by contrast, is hard to update online during regular games, because training needs so much data. On the other hand, if your training data is large, then with lazy learning you may not be able to use all of it to answer any given query. You'll need a way to pick out the relevant records. That still works if the relevant records are few enough, but by restricting the data you use you might be dropping useful information.
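One common way to pick out relevant records is to index them up front by a relevance key, so a query only ever touches a small subset. A tiny sketch, with keying by opponent name as an assumed relevance criterion:

```python
from collections import defaultdict

class IndexedRecords:
    """Store records grouped by a relevance key so queries stay cheap
    even when the full data set is large."""

    def __init__(self):
        self.by_opponent = defaultdict(list)

    def add(self, opponent, record):
        self.by_opponent[opponent].append(record)

    def relevant(self, opponent):
        # A query examines only this opponent's records; everything
        # else is ignored, possibly dropping useful information
        # (say, games against similar opponents).
        return self.by_opponent[opponent]
```

The index itself is a step toward eagerness: some structure is built at storage time so less scanning happens at query time.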

Lazy learning methods are natural for opponent modeling. I wouldn’t count them out for strategic uses, though. I can imagine a case base of strategic situations with reactions and evaluations of how the reactions worked out. That could produce a bot which keeps up with the meta by learning it from experience.


Comments

Dan on :

I find re-training for each game to be very advantageous. You can debug the model from inputs, and you can change the model without losing all your data. When doing a large-scale test run I can just glance at the latest training data to get a glimpse of what's happening in each matchup (and have a script to repackage the training data into a win-loss per opponent and per map). If re-training is fast and deterministic it's all advantage.
