blog software

It’s time to get down to serious work on the blog software. You people deserve at least an RSS feed. The software I picked fits into a plan for my whole website and will take time to set up, so I expect to post less often here until it’s ready. Stand by!

popular articles

Here are the 4 best articles aimed at the lay public. They’re all worth reading, for different reasons.

Wall Street Journal article

Computers That Crush Humans at Games Might Have Met Their Match: StarCraft. 22 April 2016.

With emphasis on deception as an element of difficulty in the game. My opinion: Deception is an interesting challenge but not a leading one.

Artosis

Infinite APM? Artosis on Deepmind and StarCraft - Part 1 and Part 2, March 2016.

This two-part article is interesting for representing the common reactions of Starcraft players yet showing more thought. Part 1 argues that the high speed of computers essentially breaks the design of Starcraft as a game, so that computers are unfair opponents. Part 2 argues that if that problem were eliminated, then it becomes difficult for computers to match human creativity and deception, though he concludes it should be possible eventually. Both parts take the view that Starcraft is an essentially human game.

My opinion: BWAPI programs do play a different Starcraft than human players. BWAPI provides a higher-bandwidth interface to the game than a screen, keyboard, and mouse, and not only in terms of APM. It’s not remotely a problem in practice so far, but maybe it will become one someday. If so, technical solutions don’t seem hard. Part 2 I see as showing some of the arrogance and human exceptionalism that chess players, backgammon players, and go players (among others) showed before they were defeated. These experts learned that computers are not less creative in game play than humans but more creative. And deception is a game theory problem that we understand in principle; it is easier to solve than other aspects of Starcraft.

LetaBot in Rock Paper Shotgun

StarCraft: Building A Brilliant Brood War Bot, January 2016.

In the form of an interview with Martin Rooijackers, author of LetaBot, with a bunch of interesting details. My opinion: It’s good!

the classic Ars Technica article

Skynet meets the Swarm: how the Berkeley Overmind won the 2010 StarCraft AI competition, January 2011.

A great old classic. I think this article inspired more than a few to enter the BWAPI world. Martin Rooijackers once offered the opinion that the Berkeley Overmind would be the best bot today if they had kept developing it. Makes sense to me.

hitting the wall

In a game last month, Tyr by Simon Prins was winning until this happened.

screenshot of game in which Tyr blocked itself into its own base

Pro tip: Don’t do that. The Great Wall doesn’t keep barbarians out, it keeps you in. The protoss bot by Roman Danielis dropped a dark templar inside the base from a shuttle and cleaned house. (You thought that a dark templar carried a warp blade? This one had a mop.)

I shouldn’t pick on Tyr—I think Tyr is above average in placing its depots in tight clumps along the edge of its base. It hit a bug this time, and I think the bug has been fixed. Building placement is hard, and it’s especially hard if you are terran, because terran bases need a lot of room for production buildings and supply depots. Some bots space all buildings apart from each other so that there’s a gap between any two and blocking is impossible. Terran bots that do that end up running out of room and having to build depots and factories outside the base in vulnerable open areas. Here ICEbot spaced out the buildings in its main and its construction had to spill into the map.

screenshot of terran buildings spilling out of the base

The fancy solution is to do a path analysis when deciding on building placement. Does it block units off from anywhere they need to get to? Can fresh units get to the front without threading a maze? The analysis will be trivial sometimes, but not when the base starts filling up. PerfectBot will analyze both its own and its opponent’s building placement, for both attack and defense. “Oh, the opponent built an obstacle course. The enemy army is here so I’ll drop there.”
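
Here is a minimal sketch of the kind of check I mean, assuming BWAPI-style tile coordinates; the function and its choice of endpoints are my invention, not from any bot. The idea: temporarily treat the building footprint as unwalkable, then flood-fill to see whether one point can still reach another (say, the rally point to the choke).

#include <BWAPI.h>
#include <queue>
#include <set>

// Sketch: would placing 'type' at 'buildTile' disconnect 'from' from 'to'?
// Tile resolution only, so it's coarse, but it catches self-walling.
bool placementBlocksPath(BWAPI::TilePosition buildTile, BWAPI::UnitType type,
                         BWAPI::TilePosition from, BWAPI::TilePosition to)
{
	// Pretend the building's footprint is unwalkable.
	std::set<BWAPI::TilePosition> blocked;
	for (int x = 0; x < type.tileWidth(); ++x)
		for (int y = 0; y < type.tileHeight(); ++y)
			blocked.insert(buildTile + BWAPI::TilePosition(x, y));

	// Flood fill from 'from'; if we reach 'to', the path survives.
	std::queue<BWAPI::TilePosition> frontier;
	std::set<BWAPI::TilePosition> seen;
	frontier.push(from);
	seen.insert(from);
	while (!frontier.empty()) {
		BWAPI::TilePosition t = frontier.front();
		frontier.pop();
		if (t == to) return false;          // still connected
		const int dx[4] = { 1, -1, 0, 0 };
		const int dy[4] = { 0, 0, 1, -1 };
		for (int i = 0; i < 4; ++i) {
			BWAPI::TilePosition n(t.x + dx[i], t.y + dy[i]);
			if (n.isValid() && !seen.count(n) && !blocked.count(n) &&
			    BWAPI::Broodwar->isWalkable(BWAPI::WalkPosition(n)))   // checks the tile's corner only
			{
				seen.insert(n);
				frontier.push(n);
			}
		}
	}
	return true;                            // 'to' became unreachable
}

A flood fill per candidate tile adds up, so in practice you would cache results or restrict the check to tiles near existing buildings.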

Knowing when the opponent has built a wall is a valuable skill—notice how well LetaBot’s ramp wall-in works against many opponents. If you don’t want to do real pathfinding around buildings, you could try to recognize a wall when your units fail to make progress toward their goal. I’m not sure it’s easier and I’m pretty sure it works less well, but monitoring progress toward goals is a good idea regardless. Units that find themselves behind a wall may want to destroy a building in the wall, without caring whether the building is enemy or friendly. Tyr could have won if it had opened its own wall.
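
To make the progress idea concrete, here is a toy monitor, assuming each squad knows its goal position; everything here is hypothetical.

#include <BWAPI.h>

// Toy progress monitor: if a squad hasn't gotten closer to its goal
// in ~10 seconds, suspect a wall and consider attacking a building in it.
struct ProgressMonitor {
	double bestDistance;    // closest we have ever been to the goal
	int framesStuck;

	ProgressMonitor() : bestDistance(1e18), framesStuck(0) {}

	// Call once per frame with the squad's position and its goal.
	bool seemsWalledOff(const BWAPI::Position & squadPos, const BWAPI::Position & goal)
	{
		double d = squadPos.getDistance(goal);
		if (d + 16.0 < bestDistance) {      // progress of at least half a tile
			bestDistance = d;
			framesStuck = 0;
			return false;
		}
		return ++framesStuck > 24 * 10;     // ~10 seconds at fastest game speed
	}
};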

Most bots probably rely on heuristics to place buildings so that they rarely cause problems. Nothing wrong with that; it should work well enough and it will be easy, and there’s a lot to do so ease counts. But I hope a few authors will go for more ambitious solutions.

tournament design

If you design a tournament differently, a different bot may be favored to win.

AIIDE 2015 was an example. As pointed out in the tournament results, UAlbertaBot finished fourth even though it had a plus score against every other bot, because compared to the top 3 it was less consistent in defeating weaker bots. AIIDE runs on a round-robin design, all-play-all, so UAlbertaBot could defeat the top finishers and still be ranked behind them. In a progressive elimination tournament in which weaker competitors were dropped over time, UAlbertaBot would likely have finished first.

If you’ve seen the math of tournament design, or of related stuff like voting system design, then you know there’s no such thing as a fair tournament in which the best competitor always has the best chance to win, because there isn’t always such a thing as a best competitor. If A > B and B > C but C > A, then which is the “best”? That’s called intransitivity. A more complicated kind of intransitivity happened in AIIDE 2015.

Rating systems in the Elo tradition have the same issue (and their designers know all about it). They assume—they have to assume, to be what they are—that players have a “true skill” in a mathematical sense, putting players into a smooth mathematical model that doesn’t correspond exactly with bumpy reality. It’s a good approximation; Elo ratings are mostly accurate in predicting future results. (The small mismatch with reality has inspired a lot of variations of Elo ratings, Glicko and TrueSkill and so on, that try to do a little better.)
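
For reference, the model’s whole assumption fits in one line, the standard Elo expected-score formula:

#include <cmath>

// Standard Elo expected score for player A against player B.
// A 200-point favorite is expected to score about 0.76 per game.
double eloExpectedScore(double ratingA, double ratingB)
{
	return 1.0 / (1.0 + std::pow(10.0, (ratingB - ratingA) / 400.0));
}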

Given any big enough set of games (games that link up the competitors into a connected graph), you can find Elo ratings for the players. The ratings may have big uncertainties, but you can rank the players. You can use virtually any tournament design with almost any kind of random or biased pairings, and get a ranking.

To me this is an intuitive way to think about tournament design: Players play games which we take as evidence of skill, and the key question is: With a given amount of time to play games, how do you want to distribute the evidence? If you want to rank all the competitors as well as possible, then distribute the evidence equally in a round-robin. That’s the idea behind AIIDE’s design—I approve. If you want to pick out the one winner, or the top few winners, as clearly as possible, then let potential winners play more games. If Loser1 and Loser2 are out of the running, then games between them produce little evidence of who the top winner will be. A game between Winner1 and Loser1 produces less evidence than a game between Winner1 and Winner2. Because of intransitivity you may get a different winner than the round robin, but you have more evidence that your winner is the “best.” It’s a tradeoff, ask and it shall be given you.

You might also care about entertaining the spectators. That’s the idea behind SSCAIT’s elimination format for the “mixed competition.” I approve of that too; it’s poor evidence but good fun.

As a corollary, the kind of tournament you want to win could make a difference in what you want to work on. In a round robin, beating the weak enemies more consistently, like ZZZKBot, counts as much as clawing extra points from the strong enemies, like UAlbertaBot.

MaasCraft’s play

Does MaasCraft’s search make its play more interesting? I grabbed its replays from the Starcraft AI Data Archive and watched some.

Short answer: No. It mostly moves its army forward and back along a path between its base and the enemy’s (exactly what its author Dennis Soemers hoped search would avoid) and rarely pulls any interesting maneuvers. It bears out what the author said about the appropriateness of the algorithm to the strategy.

I did notice one advantage MaasCraft has over many bots: It doesn’t chase after fast units that it can’t catch. I suppose the search sees that they get away.

predictions

PredictionBook members have 64% confidence that a bot will defeat a top-10 ranked human in at least one game by 2023. That’s based on the judgment of 6 people who don’t all seem well-informed, but maybe some of us will offer our insight.

There are two predictions with different dates. The confidence as of today:

by 1 January 2019: 49%
by 1 January 2023: 64%

tactical search in MaasCraft

MaasCraft is a protoss bot by Dennis Soemers which does a tactical search to decide how to maneuver its army. MaasCraft looks slightly above average. It finished 8th of 18 with a 59% win rate in AIIDE 2014 and 7th of 13 in CIG 2014 with 55% wins and hasn’t played since as far as I can tell.

Soemers’s paper on it is Tactical Planning Using MCTS in the Game of StarCraft (pdf BSc thesis from Games and AI Group, Maastricht University). You can get the code from the Starcraft AI Data Archive.

So, tactical search. “Tactical” here means maneuvering squads around the map and deciding when to engage or retreat. At the most basic level, search means finding alternatives and comparing them to pick one that is probably better. The comparison itself might involve a further search—and that is the root of all search algorithms. Search done right ought to outperform any scripted behavior, so our mission (should we choose to accept it) is to figure out how to do search right. This is the most interesting try I’ve seen so far.

The search nodes for tactical decisions have to include some representation of squads and squad positions on the map. In a chess program a search node represents the whole game state, but a specialized search in Starcraft should only represent the details it cares about. MaasCraft keeps it super abstract. The map is simplified to a graph with “nodes at each chokepoint and each potential base location,” the paper says, so that every location is just a node in the graph. A squad has a location, a speed, a hit point total, and a total damage rate (and presumably a number of units). A base is treated like a squad that can’t move (its static defense provides a damage rate). The paper doesn’t mention any provision for air units, but MaasCraft does mention air units in its code.
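
In code, the squad state might look something like this; the fields follow the paper’s description, but the names are mine, not from the MaasCraft source.

// Roughly the per-squad state, as I read the paper.
struct SquadState {
	int locationNode;      // node in the choke/base-location graph
	double speed;          // presumably the slowest unit's speed
	int hitPoints;         // summed over the squad
	double damageRate;     // summed damage per frame
	int unitCount;
};
// A base is represented like a squad that can't move, with its
// static defense contributing the damage rate.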

The moves for a squad that is not in combat are to stand pat or to move to an adjacent node in the location graph (where it may get into a fight). A squad that is already in combat can either keep fighting or retreat to an adjacent location. Moves take different amounts of time, and since each side can have any number of squads, many moves can be ongoing at once.

Finding the next state when you know the current state and the ongoing moves amounts to simulating part of the game. Movement is easy: A squad has a speed and the location graph has a distance. The course of a battle is estimated by Lanchester’s Square Law, which is a good estimate only for ranged fire and works better if all units in a squad are alike. I’m sure it’s less accurate than a battle simulator but it must be faster.
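
For the curious, here is the square law as a tiny function; the per-unit effectiveness constants are illustrative stand-ins, not anything from MaasCraft.

#include <cmath>

// Lanchester's Square Law: fighting strength is per-unit effectiveness
// times the square of unit count. If A is stronger, it wins with
// sqrt((alphaA*a0^2 - alphaB*b0^2) / alphaA) survivors.
double lanchesterSurvivors(double a0, double alphaA, double b0, double alphaB)
{
	double strengthA = alphaA * a0 * a0;
	double strengthB = alphaB * b0 * b0;
	if (strengthA <= strengthB) return 0.0;            // A loses (or draws)
	return std::sqrt((strengthA - strengthB) / alphaA); // A's surviving count
}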

The search algorithm is one of the coolest bits. It is a variation of MCTS (Monte Carlo tree search) adjusted to cope with simultaneous moves of different durations. The next move down a branch is whichever move comes up next for whichever player. A squad that’s moving or retreating gets a new move when its current move finishes. If I understand it right, a squad that’s waiting or fighting gets an opportunity to change its mind when some other move finishes.
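
If I read the scheme right, the scheduling reduces to an event queue: the next decision point is the earliest finish time among ongoing moves. A sketch, with every type hypothetical:

#include <vector>
#include <climits>
#include <algorithm>

struct OngoingMove {
	int squadId;
	int finishFrame;    // frame when this move completes
};

// The next decision point in the search is the earliest finishing move.
// The finishing squad must choose again; waiting or fighting squads get
// a chance to change their minds at the same moment.
int nextDecisionFrame(const std::vector<OngoingMove> & moves)
{
	int earliest = INT_MAX;
	for (size_t i = 0; i < moves.size(); ++i) {
		earliest = std::min(earliest, moves[i].finishFrame);
	}
	return earliest;
}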

Algorithms in the MCTS family traditionally evaluate leaves of the tree by playing out one line of the game to the end. MaasCraft plays out up to one minute ahead and then applies a heuristic evaluation (not unlike AlphaGo). The evaluator gives points for destroying bases and harming armies, plus a tiny bonus to encourage moving toward the enemy.

How well does it work? Search should make play much more flexible and adaptive. A tactical search can plan sandwich maneuvers, or deliberately sacrifice a base to a strong enemy squad as long as it can defeat a weaker squad and take an enemy base in compensation. Looking into the future is what intelligence is. But this look into the future is simplified, so the question stands: How well does it work?

In an interview, Dennis Soemers wrote: “I suspect the algorithm may be better suited for a more defensive and late-game strategy which is less reliant on perfect execution of the early-game. I also believe the algorithm would still be able to run sufficient simulations with a more fine-grained map representation and a more accurate Battle Model, since the bot is currently running on the bots ladder using less time per frame (I believe I use 35ms per frame in the competitions and only 20ms per frame on the ladder), with similar performance.”

That makes sense to me. In the early game, details matter; if you’re chasing the enemy scout, or fighting the first zealots with your first marines, it seems to me that you want to represent the situation in detail and search as exactly as you can. As armies get bigger, approximations are good enough and may be needed for speed.

By the way, code under the name “MCTSDennis” appears in LetaBot. Switches control whether it is active. The two bot authors are both from Maastricht University.

Iron meteor

Today Iron learned how to keep expanding. I watched it starve out ICEbot, which is exactly how a contain is supposed to work. It’s moving fast, and may yet go over 90% win rate.

Update: I also watched a tense close game against XIMP. For half the game XIMP had no probes and 30 minerals but 8 carriers, while Iron had many bases that it could not defend. Would the carriers have to stay home to resist the pressure, or could they relieve it enough to go marauding? XIMP won, but I was unsure until near the end.

recent developments

SSCAIT has binary bot downloads on the Bots & Results page. Binaries are of course most safely executed inside a virtual machine.

screenshot showing download links

Iron has continued to make progress but hasn’t yet lived up to its author’s hopes. It can now expand to its natural, though not beyond. Its game plan amounts to setting an elastic containment outside the enemy base, but its unit mix is not adaptive. An enemy that finds the right counter-mix and keeps pressure up can push the contain back and either break through or starve Iron out. Aggressive tscmoo zerg can’t be contained at all.

Tscmoo terran now knows how to kill tanks by laying mines next to them. The technique seems deadly effective. I also saw it chase down overlords with Napoleonic vigor, though only after the zerg was already toast.

Krasi0 and especially Tyr have both been playing more strongly after updates. I can’t pin down what they’re doing better, though.

Hmm, 4 terrans. Are we in for a terran renaissance?

precomputing a catalog of build orders

Today is an idea from the future work section of Dave Churchill’s thesis “Heuristic Search Techniques for Real-Time Strategy Games.” I don’t talk about it using the same words and I toss out a few side ideas that are not in the thesis, but the important points were all written down by Dave Churchill, author of UAlbertaBot. He even wrote up a preliminary experiment.

The idea, he wrote, is to “Perform a build-order search which does not attempt to achieve a given goal set of units, but instead attempts to maximize a given army value.” The BOSS build order search from UAlbertaBot can find near-optimal build orders if you tell it what units you want. It works in a general way, and with the right mods it can find build orders that, say, “make my army as big as possible at a given time, who cares what the units are” or “make my army beat this unit mix” or anything else you can turn into a number.

Of course if you ask something complicated it will take longer and you may not be able to run it in real time during a game. I think the natural course is to calculate a large catalog of good build orders (or even a catalog per map). Then on one side you can play the build orders, and on the other side match the opponent’s play to the catalog.

I’ll suggest a few recipes. You may think of tastier ones. Before we get cooking, here are the ingredients.

Ingredient: Build order search. You’ll have to insert your own evaluation function into the code. You may want to read the BOSS paper first and check whether any of BOSS’s optimizations and shortcuts gets in your way. Besides BOSS, I understand that tscmoo calculates build orders, though I haven’t looked into how it works and whether it would be good for cataloging.

Ingredient: Army comparison. An army does not have a strength in itself, it is strong or weak relative to the enemy. If you want to find counter-builds, not only builds, you need to compare armies. One way is with a battle simulator like SparCraft: Set the two armies down in some formation and have them wargame it out. MaasCraft used to do the same job faster but less accurately with a classic military formula. To estimate whether an advantage is enough to win, you might try a second war game giving the weak side a 20-second head start on its build, to represent that it is defending its base and doesn’t have to cross the map.
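
Here is the counter test in outline; simulateBattle() is a hypothetical wrapper around a simulator like SparCraft (positive means the first army wins), and Army and advancedBy() are hypothetical ways to represent a build and give it extra time.

// Does 'attacker' beat 'defender' convincingly enough to call it a counter?
bool armyBeats(const Army & attacker, const Army & defender)
{
	// 1. Straight-up wargame.
	if (simulateBattle(attacker, defender) <= 0) {
		return false;
	}
	// 2. Rerun with the defender's build given ~20 seconds of grace,
	//    to stand in for the attacker having to cross the map.
	Army reinforcedDefender = defender.advancedBy(24 * 20);
	return simulateBattle(attacker, reinforcedDefender) > 0;
}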

Recipe: Tech rushes, the kind of attack where you arrange for your army to be particularly strong at a given time, such as when you’ve teched up to a new unit type or when research finishes. Tech rushes are keystones of the build catalog because they are the strongest instantaneous attacks.

Step 1. List the unit combinations you care about: Zerglings, hydralisks, hydraling, lurkling, hydralurk, hydralurkling, mutaling—that might be all for zerg, maybe a few more. I don’t know any reason to precompute build orders into the late game. You can skip unit combinations that are too gas-heavy, like muta+lurker or tank+wraith—surely it’s better to include at least one unit that costs mostly minerals, so that you can spend efficiently.

Step 2. For each unit in one of your combinations, list the research and upgrades that you care about. For hydralisks you’ll list speed, range, missile attack, and carapace. +3 upgrades are for late game, and even +2 might be too late to be worth precomputing. Combine the units and upgrades into a longer list: Slowlings, speedlings, +1 speedlings, etc. Each is a target army that you might aim for in a tech rush.

Step 3. Brute force it. For each target army in the list, do the build order search to find out how fast you can build it. That’s the earliest timing you can hit. Then, for that time (fast tech rush) and later times (every n frames) until some time limit (slow but heavy tech rushes), search to find the largest target army you can make at that time, counting only the target military units and not the workers. That will give a curve of possible tech rushes versus time, from 4-pool to whatever upper time and tech limits you set.
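
In outline, the brute force might look like this; TargetArmy and CatalogEntry are hypothetical types, and earliestTiming() and largestArmyAt() are hypothetical stand-ins for calls into a BOSS-style search with a custom evaluation.

// Build the tech rush catalog: for each target army, find its fastest
// timing, then the biggest version of it at each later step.
void buildTechRushCatalog(const std::vector<TargetArmy> & targets,
                          int stepFrames, int frameLimit,
                          std::vector<CatalogEntry> & catalog)
{
	for (size_t i = 0; i < targets.size(); ++i) {
		int t0 = earliestTiming(targets[i]);    // fastest possible rush
		for (int t = t0; t <= frameLimit; t += stepFrames) {
			CatalogEntry entry;
			entry.target = targets[i];
			entry.attackFrame = t;
			entry.build = largestArmyAt(targets[i], t);  // military units only, not workers
			catalog.push_back(entry);
		}
	}
}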

Recipe: Timing attacks, where you recognize what the enemy is doing and strike while they are weak. I expect that strong tech rushes are likely to risk strong timing attacks.

For each tech rush that you want to counter, find timings when the build is weaker: Before units come out or before research finishes. For each potentially vulnerable timing, pull tech rushes from the catalog and for each one do the army comparison to see whether it’s a counter, in which case it is a timing attack too. And of course label the counters in the catalog.

The same if you want to counter any other known strategy, like what your opponent just beat you with. In that case you may be able to calculate a counter in game—after it’s hopeless and before the gg should be enough time.

Recipe: Universally safe builds are those safe against most or all tech rushes. Brute force it! Do you need static defense? The best safe build is the one that gives you the most workers. Some tech rushes may be safe, but maybe you should also consider how many workers are cut and seek out more economic builds. For universal safety you only have to fear early attacks, because later you can get practical safety.

Recipe: Practically safe builds are those safe against tech rushes that are consistent with your scouting info (ones the opponent could still play). One way to precompute them might be to mine replays for typical scouting info and brute force a range of safe builds so that you can switch into one from where you are. Again, the best safe build is the one that gives you the most workers.

More is possible, and by now you should get the idea. The catalog is a tree of builds, of course, in one representation. The branching structure of the tree tells you when you have to scout for what information.

I expect that PerfectBot will have some kind of knowledgebase created by large-scale search of strategy space. That doesn’t mean we should have it today, or that this way of making one will work well. It’s super simplified compared to real play, after all. I feel confident that some build catalog could be made useful, but I dunno how many new ideas it might hunger for.

oversiege

I promised strategy search today, but it’s taking longer than planned. Instead, here’s a rant about indecisiveness.

Terran bots that make siege tanks go on to siege and unsiege the tanks too often. Knowledgeable ICEbot does it, micro-aware LetaBot and WOPR do it. Iron does it less often, because it likes to stay in tank mode as much as possible. The tanks can’t fire while sieging and unsieging and end up shooting less than they should—and tanks are the backbone; with a weak backbone the army falls. I keep seeing tanks siege up, fire a shot, unsiege, then without moving a pixel siege up to fire a second shot. Even politicians don’t flip-flop that often.

No taunt intended. It’s tough to know when to siege. But sieging up 6 tanks to annihilate a single probe that wanders into view—OK, now I’m taunting. Bots can definitely do better than that.

I’m going to throw out a few easy ideas. I can almost promise that all of these have been tried, and I can promise for sure that none of them solves the problem. But maybe they’ll help people improve?

If you decide to siege up based on a condition, the condition probably should not be “a target is in sight.” Otherwise GarmBot will stall your army in the middle of the map with one zergling at a time. Try a tighter condition, at least “a target the army can’t destroy in one volley is in sight.”

Or you could try putting a time delay on unsieging. Siege up immediately, but when you want to unsiege, instead set a flag that says “unsiege after n frames.” Clear the flag if the siege condition is met in the meantime. Then if another target appears a moment later, which I see a lot, you’ll be ready.

If you siege based on a measurement, a “need to siege,” you could put hysteresis on it. If you siege when needToSiege reaches 10, maybe you shouldn’t unsiege until needToSiege falls below 5, or falls to 0, or whatever.
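
A toy sketch combining the delay and the hysteresis, one controller per tank. The thresholds are made up, and needToSiege is whatever measurement you like—it could encode “a target the army can’t destroy in one volley is in sight.”

#include <BWAPI.h>

// Per-tank siege controller with hysteresis and a delayed unsiege.
struct SiegeControl {
	int unsiegeAtFrame;

	SiegeControl() : unsiegeAtFrame(-1) {}

	void update(BWAPI::Unit tank, int needToSiege)
	{
		const int siegeThreshold = 10;
		const int unsiegeThreshold = 5;   // hysteresis: well below 10
		const int delayFrames = 48;       // ~2 seconds at fastest speed

		if (needToSiege >= siegeThreshold) {
			unsiegeAtFrame = -1;          // cancel any pending unsiege
			if (!tank->isSieged()) tank->siege();
		} else if (tank->isSieged() && needToSiege < unsiegeThreshold) {
			int now = BWAPI::Broodwar->getFrameCount();
			if (unsiegeAtFrame < 0) {
				unsiegeAtFrame = now + delayFrames;   // start the countdown
			} else if (now >= unsiegeAtFrame) {
				tank->unsiege();
				unsiegeAtFrame = -1;
			}
		}
	}
};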

If you siege a tank based on orders to its squad, have it follow orders. If the squad is on the move, it’s “go! go! go!” until battle is joined (after that it gets complicated). If the squad’s orders are to stand guard at the choke, tanks should find good positions and stay sieged until their orders or the situation changes (“oops, a dark templar reached me”). Or something like that; guarding against drops is trickier.

If you decide by cost-benefit analysis, be sure to include a fat term for the cost of sieging or unsieging. The cost is not only the time spent immobile and unfiring, it is the risk accepted that the best decision will change suddenly.

Sieging has two advantages, higher range and higher damage rate (including higher damage from splash). Whatever decision method you use, maybe it makes sense to separate the two as different concerns. If you’re sieging for the range, stay sieged as long as the goal you need the range for remains: If you’re destroying static defense from a safe distance, until the target is destroyed; if you’re defending an approach, until you are ordered to do something else. If you’re sieging for the damage rate, try to stay sieged long enough to make up for the shots you would have fired in tank mode. Well, the considerations are complicated, but that’s a start on them.

So much for easy ideas. Maybe the only good solutions are hard solutions. But there is definitely a problem to solve.

And y’know, this is top secret and maybe I shouldn’t say, but the indecision between tank mode and siege mode is only one example. Bots suffer a lot of indecisiveness. Should the lurkers burrow, should I attack or run away (nah, I’ll just move back and forth under fire until I die), where does this army want to go (this way, that way, this way, why go anywhere?).... Any agent making a big decision based on small considerations is at risk of vacillating.

high-level view of strategy inference

PerfectBot 1.0, which deploys every processor cycle optimally to play the best possible Brood War, is scheduled for release at the end of time. I doubt it will be later. How will PerfectBot figure out its enemy’s opening plan?

Saa, who knows? But we can guess part of it.

Needless to say, PerfectBot picks up on more cues than bots today. A few possibilities:

  • direct cues
    • what is the map? (different strategies will be good)
    • what are the respective base positions?
    • what buildings did we see? (tech, production, expansion...)
    • what units did we see?
    • what minerals has the enemy mined?
    • does the enemy have gas? still mining?
    • what information did the enemy collect about what I’m doing?
  • effort cues (how much should we have seen?)
    • where and when and how long did we scout?
    • could there be unfound hidden buildings?
    • do we have units forward to put pressure on or to see the enemy moving out?
  • behavior cues (what is the enemy army doing?)
    • when did the enemy scout?
    • are the first enemy units hanging back or out on the map?
    • what goals does the enemy building placement support? (e.g. a wall-in is better for some strategies than others)
    • is the enemy trying extra hard to prevent scouting?

PerfectBot does at least two kinds of inference. First, it figures out what is physically possible under the rules of the game. “Mutalisks can be out at time t, not earlier.” Second, it recognizes cues associated with strong strategies. By combining the two, PerfectBot can narrow down the situation accurately.

How do strategy cues work? The enemy has three choices about the message it sends with every visible action it takes:

  1. play a straight move that supports the chosen strategy
  2. pay a strategic cost to take a deceptive action
  3. play badly with disadvantage

Straight moves tend to be strategy cues, but they also make your strategy effective. Bad moves can be ignored in strategy inference; in theory they only decrease the hazard that PerfectBot faces in the range of possible futures.

Deceptive moves come with a more difficult theory. Think of a deceptive move as a move to confuse the opponent: A move that looks strong for a different strategy than you are playing (researching air weapons to look like dragoon range while you prepare a reaver drop), or that goes above and beyond to hide information (blocking the ramp with drones to keep the scout out). First, a deceptive move always incurs a cost, meaning that it makes the strategy in some way less likely to succeed. If it had no cost, it would be a straight move that happened to give little information away. Second, the payback for the cost of deception does not occur in the game in which you play a deceptive move (not against an opponent as smart as PerfectBot). The theory is related to the theory of bluffing in poker: Game theory says you should bluff a certain amount of the time, varying by situation, and the payback for bluffing is the opponent’s higher uncertainty all the time. PerfectBot knows its opponent may be pulling a trick, and (since it’s perfect) it knows what the available tricks are, but it still has to prepare for a wider range of future events (could be dragoons, could be reaver drop).

Based on the combination of cues and analysis of possibilities, PerfectBot can do something like assign probabilities to future events (“overlords hanging around outside my base, high chance of slowdrop around time t”) and play to maximize its odds of winning.

You can’t deceive today’s bots because they’re not smart enough to take the bait. So first we should concentrate on the strategy cues of straight moves: If you know what the good strategies are and how to play them, you can find the straight cues. Knowing the good strategies, for example with a database of builds like LetaBot’s, is a strong start.

Instead of relying on human knowledge of good strategy, we could try to compute good strategies with a search through strategy space. I imagine that’s how PerfectBot gets its knowledge. I’ll write about that tomorrow—it’s easy enough that I can imagine trying it this year.

I’m also looking forward to ideas for combining physics knowledge of what’s possible with cues. If the opponent is going mutas, when will they come out? (Ever notice experts building turrets literally at the last second?) Maybe try a build order search along the lines of UAlbertaBot’s BOSS, but applied to the opponent.

High level theory like this is super vague but I hope it’s helpful for thinking about concrete ideas so that tomorrow’s bots can become smart enough to be tricked.

LetaBot’s build order inference

LetaBot does an elaborate inference of its opponent’s opening build order. A couple posts ago I found no scouting inferences by LetaBot—I missed it. Martin Rooijackers had to tip me off. Thanks! It’s already there in the AIIDE 2015 version. There’s a ton of code in all the bots and I didn’t read most of it, so I may have missed many inferences in other bots too. This one is kind of fancy and breaks my earlier conclusion that no inferences were “deep or clever.”

AIIDE 2015 LetaBot declares terran and zerg build orders that an opponent may play. I imagine that the current version also knows protoss build orders. But in any case, at least for terran and zerg opponents LetaBot can compare what it sees against the different build orders and find the best match.

A build order declaration looks like this. A zerg build order from BuildOrderManager.cpp:

void BuildOrderManager::NinePoolSpeed(){
	BuildOrder newBo;
	newBo.YourRace = BWAPI::Races::Zerg;
	newBo.RaceEnemy = BWAPI::Races::Terran;
	newBo.Creator = "Dennis Soemers Text mining. Manual adjustments by Martin Rooijackers";
	newBo.Name = "9 Pool Speed";

	BOpiece addPiece;
	addPiece.supply = 9;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Spawning_Pool;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 8;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Drone;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 9;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Extractor;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 8;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Overlord;
	newBo.BOpieces.push_back( addPiece );

	newBo.WeakCounterBy.push_back( "2 Rax FE" );
	newBo.StrongCounterTo.push_back("14 CC");
	newBo.StrongCounterTo.push_back("BBS");
	newBo.WeakCounterTo.push_back("1 Rax FE");

	BOs.push_back(newBo);
}

Some counter-builds are declared near the bottom. The AIIDE 2015 version does not use them.

LetaBot draws the build order inference in BuildOrderManager.onFrame() so that the inference can change from moment to moment as more information turns up.

	for (int j = 0; j < BOs.size(); j++) {
		// Copy the enemy buildings seen so far, skipping the enemy's
		// starting hatchery (only hatcheries away from the main count).
		std::vector<BWAPI::UnitType> EnemyBuildingsCopy;
		for (int i = 0; i < InfoMan->EnemyBuildings.size(); i++) {
			if (InfoMan->EnemyBuildings[i].building->getType() == BWAPI::UnitTypes::Zerg_Hatchery) {
				if (InfoMan->EnemyBase != BWAPI::TilePositions::Unknown) {
					if (InfoMan->EnemyBuildings[i].position.getDistance(BWAPI::Position(InfoMan->EnemyBase)) > 4 * 32) {
						EnemyBuildingsCopy.push_back(InfoMan->EnemyBuildings[i].type);
					}
				}
			} else {
				EnemyBuildingsCopy.push_back(InfoMan->EnemyBuildings[i].type);
			}
		}

		// Match the build order's buildings against what has been seen.
		int BuildAvailable = 0;
		int BuildMissing = 0;
		for (int k = 0; k < BOs[j].BOpieces.size(); k++) {
			if (BOs[j].BOpieces[k].Building.isBuilding()) {
				bool isThere = false;
				for (int m = 0; m < EnemyBuildingsCopy.size(); m++) {
					if (EnemyBuildingsCopy[m] == BOs[j].BOpieces[k].Building) {
						isThere = true;
						EnemyBuildingsCopy.erase(EnemyBuildingsCopy.begin() + m);
						break;
					}
				}
				if (isThere) {
					BuildAvailable++;
				} else {
					BuildMissing++;
				}
			}
		}

		// The best-scoring build order is the inferred one.
		int score = BuildAvailable - BuildMissing;
		if (BestScore < score) {
			BestScore = score;
			EnemyBo = BOs[j].Name;
		}
	}

Compute a score for each build order. Add to the score (BuildAvailable) for each part of the build order that LetaBot has seen, and subtract from the score (BuildMissing) for each part it has not seen. The enemy is (LetaBot concludes) following the build order that gets the highest score. In case of ties, the first high score gets it.

It’s simple and probably jumps to conclusions sometimes, but if the build orders are chosen and ordered correctly then I think it could avoid jumping to dangerous conclusions that make LetaBot do something foolish.

In the AIIDE 2015 version, LetaBot uses the inferred build order to decide how many marines to train for its marine-SCV rush, but not for any other purpose. Openings that give zerg a stronger early army need more marines. From TwoRaxSCVRush.cpp:

	if (Broodwar->enemy()->getRace() == BWAPI::Races::Zerg) {
		// Choose how many marines to build based on the inferred BO.
		if (BOMan->EnemyBo == "9 Pool") {
			MarinesToBuild = 8;
		}
		if (BOMan->EnemyBo == "OverPool") {
			MarinesToBuild = 6;
		}
		if (BOMan->EnemyBo == "12 Hatch") {
			MarinesToBuild = 6;
		}
		if (BOMan->EnemyBo == "9 Pool Speed") {
			MarinesToBuild = 10;
		}
	}

The latest version of LetaBot surely has many more uses for the inferred build order.

Bottom line: It looks to me like an excellent idea for adapting your opening strategy to the opponent’s, as long as the opponent is playing a recognized good strategy. If the opponent plays a bad strategy, why worry? Against zerg there should be few problems caused by unscouted buildings. Even against terran or protoss, I have the feeling that there are ways to manage the uncertainty from unscouted buildings or ambiguous build orders. For example, for build orders that branch you can declare one build order up to the branching point and further builds along each possible branch.

Who’s coming in with the next tip-off?

Update: If you catch LetaBot playing on the SSCAIT live stream you may be able to watch the build order inference in action. LetaBot reports its conclusion at the top center of the screen; at the start of the game it is “Unknown”. For some reason it doesn’t seem to show up every game (maybe only games versus zerg?). The conclusion can be way off when the opponent plays something LetaBot doesn’t know, like “1 Hatch Lurker” when it has seen two hatcheries and no gas. (I guess it should subtract from the score when it sees buildings that are outside the build order, or some other cleverness increment.)
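
The subtraction would be a small change to the scoring loop above—something like this, my tweak rather than LetaBot’s code:

	// Leftovers in EnemyBuildingsCopy are enemy buildings the build order
	// never calls for. Counting them against the match would rule out
	// builds like "1 Hatch Lurker" when two hatcheries have been seen.
	int BuildUnexpected = EnemyBuildingsCopy.size();
	int score = BuildAvailable - BuildMissing - BuildUnexpected;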

bots on Big Game Hunters

Today is April Fools, and the SSCAIT live stream has bots playing on the map BGH.

Some bots are struggling on the highly non-standard map, which has 8 starting locations, huge resources so that expanding has a different dynamic, and strangely laid-out bases and paths. For example XIMP, in the game I saw, misplaced its cannon wall and got run over since there is no standard natural expansion. Other bots make other uncharacteristic blunders, such as pathing errors or self-blocking in the narrow walkways.

It’s funny, but the serious lesson is that scripted behaviors are fragile. And also that a lot of Starcraft strategy is specific to given maps, and some commonalities that hold across maps come about because the maps have standardized aspects.