
Rob Bogie’s MaasCraft has map problems

What is it with Rob Bogie’s SSCAIT version of MaasCraft (originally written by Dennis Soemers)? Why does it do so much better on some maps and so much worse on others, with far more variation than the rest of the bots?

I looked up MaasCraft’s results in the AIIDE 2014 and CIG 2014 competitions (it didn’t play in other years). Both map pools overlap with the SSCAIT map pool. CIG 2014 had an extra-wide and diverse map pool, including some nonstandard maps, so it overlaps only a little. The overall row covers all maps in that competition, not only the ones in the table.

map               | CIG 2014 win % | AIIDE 2014 win % | SSCAIT Elo diff
Andromeda         |       -        |       63%        |      -361
Benzene           |       -        |       55%        |       +42
Circuit Breaker   |       -        |       52%        |      +246
Destination       |       -        |       59%        |      -313
Empire of the Sun |       -        |       61%        |      -333
Fighting Spirit   |      50%       |        -         |      -418
Heartbreak Ridge  |       -        |       61%        |      -193
Icarus            |      50%       |        -         |      +291
Python            |       -        |       60%        |      -306
Tau Cross         |       -        |       57%        |      +365
overall           |      55%       |       59%        |         0

I’m comparing raw win percentages with Elo differences, but the pattern is clear—I mean, the lack of pattern. The 2014 tournament results look normal, the SSCAIT results look extreme, and the two are nothing alike.

CIG 2014 and AIIDE 2014 both released source code. The MaasCraft source is exactly the same in both. It looks as though some difference or bug is affecting only the SSCAIT version under Rob Bogie.

Later today: First thoughts on the AIIDE 2016 results.

humans don’t understand bots

Igor Dimitrijevic’s comment on yesterday’s post reminded me: It’s difficult to understand much of a bot’s behavior by watching it.

Krasi0 is a good example. In the last several months I’ve watched the old veteran bot grow much stronger, returning to the top ranks. I can describe in general terms some things Krasi0 has improved at: It is more aggressive, it is better at protecting its workers from danger, it is smarter about where it sieges its tanks. (It also fixed crashing bugs.) But I feel sure that the points that I’ve noticed are only the tip of the iceberg. There must be not only details but whole classes of behaviors that I did not pick up on at all—otherwise it could not have improved so much.

I guess humans don’t have the perceptual bandwidth to take it all in, at least not without the experience or prior knowledge to know what to look for. Starcraft play is too complicated for us to follow! I’m sure I could understand more if I studied replays closely.

I’ll take it as a reminder not to be too glib in drawing conclusions.

Speaking of glib conclusions about bot behavior, MaasCraft looks more interesting when it plays against more interesting opponents. I concluded earlier from watching 2014 replays that it mostly moved its army forward and back along the path between bases. Well, that’s what its opponents were doing too, so it may not have had much choice. Today’s bots try more complicated maneuvers, and today’s MaasCraft reacts with its own more complicated maneuvers. I’ve seen it (seemingly intentionally) split its army to trap stray units, for example. It reacts sensibly to multi-prong attacks.

MaasCraft is still scoring poorly, but now its tactical search is showing sparks of promise—I suspect due to changes in its opponents, not itself. As a reminder, LetaBot has a search descended from the same code, turned off in some versions but likely to be turned on in final versions.

Iron vs LetaBot

Games of Iron against different versions of LetaBot have been entertaining me. Iron has learned the same trick as Tscmoo of killing tanks by laying mines next to them. LetaBot often suffers unnecessary mine hits when it sieges up just as a mine triggers. On the other hand, LetaBot also has micro skillz and likes to kill mines after they trigger—I’ve seen it kill a mine with two fast-reacting SCVs!

The games go back and forth, but Iron has the upper hand for now. Iron has become strong against other bots. LetaBot could improve by sieging more cautiously, which is part of the terran technique of inching units forward to force a minefield. Or it could scan for mines. Scanning intelligently for mines is not that easy because the bot has to keep track of where mines are likely or possible, which seems like a lot of coding work to support a single skill.
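To give a feel for the bookkeeping involved, here’s a purely illustrative sketch of tracking suspected mine positions. It’s not taken from Iron, LetaBot, or any other bot; every name and number in it is made up:

    // Illustrative only: remember tiles where enemy vultures lingered and
    // treat them as suspected minefields until something clears them.
    #include <map>
    #include <utility>

    using Tile = std::pair<int, int>;   // map tile coordinates

    struct MineBelief {
        std::map<Tile, double> suspicion;   // tile -> chance a mine is there

        void vultureLingeredAt(Tile t)  { suspicion[t] = 0.5; }  // made-up prior
        void mineConfirmedAt(Tile t)    { suspicion[t] = 1.0; }
        void clearedAt(Tile t)          { suspicion.erase(t); }  // scanned or detonated

        // Is this spot worth a scan before sieging there?
        bool risky(Tile t) const {
            auto it = suspicion.find(t);
            return it != suspicion.end() && it->second >= 0.5;
        }
    };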

in other news

Rob Bogie has uploaded MaasCraft (originally written by Dennis Soemers) to SSCAIT, and it plays like the replays I watched last month. MaasCraft is scoring poorly—the opposition is a lot stronger than in 2014. He also re-uploaded it, which sounds like an update. Is MaasCraft going to see major updates under its new ownership? I haven’t yet noticed any changes to its play.

PS Making slow progress on the website improvements. I chose a hard road for myself.

MaasCraft’s play

Does MaasCraft’s search make its play more interesting? I grabbed its replays from the Starcraft AI Data Archive and watched some.

Short answer: No. It mostly moves its army forward and back along a path between its base and the enemy’s (exactly what its author Dennis Soemers hoped search would avoid) and rarely pulls any interesting maneuvers. It bears out what the author said about the appropriateness of the algorithm to the strategy.

I did notice one advantage MaasCraft has over many bots: It doesn’t chase after fast units that it can’t catch. I suppose the search sees that they get away.

tactical search in MaasCraft

MaasCraft is a protoss bot by Dennis Soemers which does a tactical search to decide how to maneuver its army. MaasCraft looks slightly above average: it finished 8th of 18 with a 59% win rate in AIIDE 2014 and 7th of 13 with 55% in CIG 2014, and as far as I can tell it hasn’t played since.

Soemers’s paper on it is Tactical Planning Using MCTS in the Game of StarCraft (pdf BSc thesis from Games and AI Group, Maastricht University). You can get the code from the Starcraft AI Data Archive.

So, tactical search. “Tactical” here means maneuvering squads around the map and deciding when to engage or retreat. At the most basic level, search means finding alternatives and comparing them to pick one that is probably better. The comparison itself might involve a further search—and that is the root of all search algorithms. Search done right ought to outperform any scripted behavior, so our mission (should we choose to accept it) is to figure out how to do search right. This is the most interesting try I’ve seen so far.

The search nodes for tactical decisions have to include some representation of squads and squad positions on the map. In a chess program a search node represents the whole game state, but a specialized search in Starcraft should only represent the details it cares about. MaasCraft keeps it super abstract. The map is simplified to a graph with “nodes at each chokepoint and each potential base location,” the paper says, so that every location is just a node in the graph. A squad has a location, a speed, a hit point total, and a total damage rate (and presumably a number of units). A base is treated like a squad that can’t move (its static defense provides a damage rate). The paper doesn’t mention any provision for air units, though the code does refer to them.

The moves for a squad that is not in combat are to stand pat or to move to an adjacent node in the location graph (where it may get into a fight). A squad that is already in combat can either keep fighting or retreat to an adjacent location. Moves take different amounts of time, and since each side can have any number of squads, many moves can be ongoing at once.
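To make that concrete, here’s a minimal sketch of what the abstract state and the move generation could look like. It is my own reconstruction from the paper’s description, not MaasCraft’s actual code, and every name in it (Location, Squad, legalMoves, and so on) is hypothetical:

    // My reconstruction of the abstract tactical state, based on the paper's
    // description. All names and fields here are guesses, not MaasCraft's code.
    #include <vector>

    struct Location {                  // one node of the simplified map graph
        std::vector<int> neighbors;    // adjacent chokepoints / base locations
        std::vector<double> distance;  // edge length to each neighbor
    };

    struct Squad {
        int owner;          // which player controls it
        int location;       // index into the location graph
        double speed;       // how fast it traverses edges
        double hitPoints;   // summed over the squad
        double damageRate;  // summed damage per second
        bool inCombat;      // currently fighting at its location?
    };
    // A base is modeled as a squad that can't move; its static defense
    // supplies the damage rate.

    enum class MoveType { Wait, MoveTo, KeepFighting, Retreat };

    struct Move {
        MoveType type;
        int target;   // destination node for MoveTo / Retreat
    };

    // Moves as the paper describes them: out of combat a squad stands pat or
    // moves to an adjacent node; in combat it keeps fighting or retreats.
    std::vector<Move> legalMoves(const Squad& s, const std::vector<Location>& graph)
    {
        std::vector<Move> moves;
        if (!s.inCombat) {
            moves.push_back({MoveType::Wait, s.location});
            for (int n : graph[s.location].neighbors)
                moves.push_back({MoveType::MoveTo, n});
        } else {
            moves.push_back({MoveType::KeepFighting, s.location});
            for (int n : graph[s.location].neighbors)
                moves.push_back({MoveType::Retreat, n});
        }
        return moves;
    }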

Finding the next state when you know the current state and the ongoing moves amounts to simulating part of the game. Movement is easy: A squad has a speed and the location graph has a distance. The course of a battle is estimated by Lanchester’s Square Law, which is a good estimate only for ranged fire and works better if all units in a squad are alike. I’m sure it’s less accurate than a battle simulator but it must be faster.
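For a sense of how a square-law estimate works, here’s a small illustration (my own sketch, not the thesis’s battle model): each side’s effective strength is its unit count squared times its per-unit fighting power, and the survivor count of the stronger side follows from the square-law invariant.

    // A square-law battle estimate (my illustration, not the thesis code).
    // Strength = per-unit fighting power * (unit count)^2; the stronger side
    // wins, and its survivors follow from the square-law invariant.
    #include <cmath>
    #include <cstdio>

    struct Outcome {
        bool sideAWins;
        double survivors;   // units left on the winning side
    };

    Outcome lanchesterSquare(double unitsA, double powerA,
                             double unitsB, double powerB)
    {
        double strengthA = powerA * unitsA * unitsA;
        double strengthB = powerB * unitsB * unitsB;
        if (strengthA >= strengthB)
            return { true,  std::sqrt(unitsA * unitsA - (powerB / powerA) * unitsB * unitsB) };
        return { false, std::sqrt(unitsB * unitsB - (powerA / powerB) * unitsA * unitsA) };
    }

    int main()
    {
        // 12 dragoons versus 9 equal dragoons: the bigger force should win
        // with about sqrt(144 - 81) = 7.9 dragoons left, not 3.
        Outcome o = lanchesterSquare(12, 1.0, 9, 1.0);
        std::printf("%s wins with %.1f survivors\n", o.sideAWins ? "A" : "B", o.survivors);
    }

Since MaasCraft’s squads are summarized as hit point totals and damage rates rather than unit counts, its actual battle model presumably plugs in those quantities instead.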

The search algorithm is one of the coolest bits. It is a variation of MCTS (Monte Carlo tree search) adjusted to cope with simultaneous moves of different durations. The next move down a branch is whichever move comes up next for whichever player. A squad that’s moving or retreating gets a new move when its current move finishes. If I understand it right, a squad that’s waiting or fighting gets an opportunity to change its mind when some other move finishes.
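If my reading is right, the bookkeeping amounts to an event queue ordered by move finish times. Here’s a tiny sketch of that idea (mine, not the bot’s code; the names are made up):

    // My interpretation of the durative-move bookkeeping, not MaasCraft's code:
    // an event queue ordered by finish time decides which squad chooses the
    // next move down a branch of the tree.
    #include <queue>
    #include <vector>
    #include <functional>

    struct Event {
        double finishTime;   // game time at which this squad's move ends
        int squadId;
        bool operator>(const Event& other) const { return finishTime > other.finishTime; }
    };

    using EventQueue =
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>>;

    // Advance the abstract game to the next decision point and report which
    // squad acts there. Squads that are waiting or fighting would also get a
    // chance to reconsider at this moment.
    int nextDecisionPoint(EventQueue& events, double& gameTime)
    {
        Event e = events.top();
        events.pop();
        gameTime = e.finishTime;   // simulate movement and combat up to here
        return e.squadId;
    }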

Algorithms in the MCTS family traditionally evaluate leaves of the tree by playing out one line of the game to the end. MaasCraft plays out up to one minute ahead and then applies a heuristic evaluation (not unlike AlphaGo). The evaluator gives points for destroying bases and harming armies, plus a tiny bonus to encourage moving toward the enemy.
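Something like the following is how I imagine the cutoff evaluation is shaped. The ingredients match the description above (bases, army damage, a tiny advance bonus), but the weights and field names are entirely hypothetical:

    // A guess at the shape of the cutoff evaluation; weights and field names
    // are hypothetical, only the ingredients come from the description above.
    struct EvalState {
        int myBasesLost, enemyBasesLost;
        double myArmyValueLost, enemyArmyValueLost;
        double avgDistanceToEnemy;   // over my squads, in map-graph distance
    };

    double evaluate(const EvalState& s)
    {
        const double BASE_WEIGHT   = 100.0;   // made-up weights
        const double ARMY_WEIGHT   = 1.0;
        const double ADVANCE_BONUS = 0.01;    // tiny, so it mostly breaks ties

        double score = 0.0;
        score += BASE_WEIGHT * (s.enemyBasesLost - s.myBasesLost);
        score += ARMY_WEIGHT * (s.enemyArmyValueLost - s.myArmyValueLost);
        score -= ADVANCE_BONUS * s.avgDistanceToEnemy;   // closer to the enemy is better
        return score;
    }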

How well does it work? Search should make play much more flexible and adaptive. A tactical search can plan sandwich maneuvers, or deliberately sacrifice a base to a strong enemy squad as long as it can defeat a weaker squad and take an enemy base in compensation. Looking into the future is what intelligence is. But this look into the future is simplified, so the question stands: How well does it work?

In the form interview, Dennis Soemers wrote: “I suspect the algorithm may be better suited for a more defensive and late-game strategy which is less reliant on perfect execution of the early-game. I also believe the algorithm would still be able to run sufficient simulations with a more fine-grained map representation and a more accurate Battle Model, since the bot is currently running on the bots ladder using less time per frame (I believe I use 35ms per frame in the competitions and only 20ms per frame on the ladder), with similar performance.”

That makes sense to me. In the early game, details matter; if you’re chasing the enemy scout, or fighting the first zealots with your first marines, it seems to me that you want to represent the situation in detail and search as exactly as you can. As armies get bigger, approximations are good enough and may be needed for speed.

By the way, code under the name “MCTSDennis” appears in LetaBot. Switches control whether it is active. The two bot authors are both from Maastricht University.