
Overkill’s new learning 6 - how it fits with the rest of the program

The learning is done by StrategyManager::strategyChange(). During the game, the production manager is in one of two states, depending on whether Overkill is still in its opening build order.

	enum ProductionState {fixBuildingOrder, goalOriented};

	ProductionState				productionState;

When productionState is goalOriented, ProductionManager::update() calls onGoalProduction() to check whether the goal (zerglings, hydralisks, mutalisks) should be changed. The key parts:

	bool isAllUpgrade = true;
	for (int i = 0; i < int(queue.size()); i++)
	{
		//if remain unit type is just worker/upgrade/tech, check the strategy
		if (... a long condition here ...)
			isAllUpgrade = false;
	}
	if (isAllUpgrade)
	{
		if (BWAPI::Broodwar->getFrameCount() > nextStrategyCheckTime)
		{
			std::string action = StrategyManager::Instance().strategyChange(0);
			BWAPI::Broodwar->printf("change strategy to %s", action.c_str());
			setBuildOrder(StrategyManager::Instance().getStrategyBuildingOrder(action), false);

			nextStrategyCheckTime = BWAPI::Broodwar->getFrameCount() + 24 * 10;
			curStrategyAction = action;
		}
		...
	}

In other words, if the production queue is empty of combat units, and at least 10 seconds have passed since the last strategyChange(), then call strategyChange() again (with reward 0 because we’re in the middle of the game). Overkill changes its choice at most once every 10 seconds.

At the end of the game, Overkill calls strategyChange() one last time, giving a reward of 100 for a win and -100 for a loss. From Overkill::onEnd():

	if (frameElapse >= 86000 && BWAPI::Broodwar->self()->supplyUsed() >= 150 * 2)
	{
		win = true;
	}
	else
	{
		win = isWinner;
	}

	int reward = win == true ? 100 : -100;
	
	StrategyManager::Instance().strategyChange(reward);

There’s something curious here. isWinner is the flag passed into onEnd(). If Overkill makes it to the late game, it sets win = true no matter what; otherwise it believes the isWinner flag. For purposes of its learning algorithm, Overkill tells itself that making it to the late game counts as a win in itself! Isn’t that an optimistic way to think?

Every call to strategyChange() produces 1 data point for the learning algorithm. The reward comes in only at the end of the game, but Overkill needs the other data points to fill in the Q values for states during the game. The model has a lot of values to learn, so the more data points the better.
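
For orientation, here is a generic sketch of the kind of update that fills in those values: one step of Q-learning with a linear model. This is my illustration, not necessarily the exact update Overkill uses; alpha (the learning rate) and gamma (the discount factor) are assumed names.

	#include <map>
	#include <string>

	// Generic sketch of a linear Q-learning update, not Overkill's code.
	// weights maps feature name -> weight; presentFeatures are the binary features
	// active in the state-action pair being updated (values 0 or 1).
	void linearQUpdate(std::map<std::string, double> &weights,
	                   const std::map<std::string, int> &presentFeatures,
	                   double reward, double bestNextQ, double currentQ,
	                   double alpha = 0.001, double gamma = 0.95)
	{
		// At the end of the game there is no next state, so bestNextQ would be 0.
		double tdError = reward + gamma * bestNextQ - currentQ;
		for (const auto &f : presentFeatures)
			weights[f.first] += alpha * tdError * f.second;
	}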

exploration

When Overkill is in Release mode, it explores 10% of the time. The code is in StrategyManager::strategyChange(). So, 1 time in 10 when Overkill makes a decision, it’s a random exploratory decision instead of the best decision it knows.
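
That is standard epsilon-greedy selection. A minimal sketch of the idea, assuming the caller already knows the best action (this is my code, not Overkill’s):

	#include <cstdlib>
	#include <string>
	#include <vector>

	// Minimal sketch of 10% epsilon-greedy selection over Overkill's three actions.
	// bestAction is whatever the model currently rates highest. Not Overkill's code.
	std::string chooseAction(const std::string &bestAction, double epsilon = 0.1)
	{
		static const std::vector<std::string> actions = {"zerglingBuild", "hydraBuild", "mutaBuild"};
		if (std::rand() / double(RAND_MAX) < epsilon)
			return actions[std::rand() % actions.size()];   // explore: a uniformly random choice
		return bestAction;                                   // exploit: the best known choice
	}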

On the one hand, if Overkill wants to learn an opponent model, it has to explore. On the other hand, making that many random decisions can’t be good for its play! Since the AIIDE 2016 tournament was not long enough for the opponent model to become accurate, Overkill might have done better to skip the opponent modeling.

Next: Other changes to Overkill.

Overkill’s new learning 5 - the full list of features

I changed my mind about what to cover today. Here is an overview of the features in Overkill’s model. I decided it was interesting to see what they are.

Yesterday we saw that features are kept in a map of maps, so names have two levels. The variable featureNames declares the top-level groups of feature names and some of the lower-level names. It looks like this (leaving out most of it):

	featureNames = decltype(featureNames) {
		//state feature
		{"state_general_feature", { 
			{ "time_", { "6", "12", "20", "30", "max" } },
			{ "enemyRace_", { "z", "t", "p", "unknow" } },
			{ "ourSupplyUsed_", { "9", "18", "30", "50", "80", "120", "150", "max" } },
		...
		{ "state_raw_combine_feature", {} },
		{ "action_battle_combine_state_battle_feature", {} }
	};

Features that originally take on a range of values have to be made into binary features. Overkill does it by breaking the range into coarse pieces. time_ looks to be game time in minutes. state_raw_combine_feature and action_battle_combine_state_battle_feature have their lower-level names filled in by code rather than declared directly. Those last two are the majority of features.
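
To make the bucketing concrete, here is a small sketch of how a raw count or time can be turned into one-hot binary features in the style of the names above. The helper is mine, not Overkill’s; the thresholds are the ones from the ourSupplyUsed_ entry.

	#include <map>
	#include <string>
	#include <vector>

	// Sketch: set exactly one binary feature for the bucket a raw value falls into,
	// in the style of ourSupplyUsed_9 / _18 / ... / _max. Not Overkill's code.
	void setBucketFeature(std::map<std::string, int> &features,
	                      const std::string &prefix, int value,
	                      const std::vector<int> &thresholds)
	{
		for (int t : thresholds)
		{
			if (value <= t)
			{
				features[prefix + std::to_string(t)] = 1;
				return;
			}
		}
		features[prefix + "max"] = 1;
	}

	// Example: setBucketFeature(features, "ourSupplyUsed_", 42, {9, 18, 30, 50, 80, 120, 150})
	// sets ourSupplyUsed_50 = 1 and nothing else.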

Here are top-level names and what it looks like they cover. Don’t trust my interpretations too much. Not all features in the code end up in the I/O files. I wrote down the number of features that the code includes, but apparently some of the features are never present in actual games.

name | # of features | my interpretation
state_general_feature | 53 | Game time, map name, distance between bases, enemy race, etc.
state_tech_feature | 7 | Overkill’s tech level.
state_building_feature | 30 | Overkill’s tech buildings, the enemy’s tech and production buildings.
state_economy_feature | 46 | Minerals and gas for Overkill, expansions and workers for both sides.
state_our_army | 40 | Counts of Overkill’s static defense and units of each kind.
state_battle_feature | 168 | Counts of enemy units of each kind.
state_raw_combine_feature | 6723 | state_our_army crossed with state_battle_feature, that is, every combination of our units and the enemy’s, plus 3 extra features.
action_battle_combine_state_battle_feature | 6754 | Copies of state_raw_combine_feature and state_building_feature and the one state_tech_feature feature ourKeyUpgrade_zerglingsAttackSpeed.

We know from yesterday that the features in action_battle_combine_state_battle_feature are exactly the ones that matter in calculating the Q value—they are the ones which get the action appended to their names. The others are only along for the ride; it’s an implementation quirk that they are kept in the same data structure. A number of features seem to be declared, evaluated, learned and remembered, but never effectively used.

So if I counted right (which I question), then there are 6754 binary features in total, though it would be strange for all of them to be useful against any one opponent.

Next: How it fits into the rest of the program, for real.

Overkill’s new learning 4 - the model and its features

What does it mean to have a linear model with binary features? “Linear” means that each feature comes with a number, its weight, so that with binary features you find Q(s,a) by adding up the weights for each feature that is present. Usually only a small proportion of all the features are present, so it’s not as crazy as it may sound.
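
Concretely, evaluating Q(s,a) for such a model is a lookup-and-add loop over the features that are present. This is my own minimal illustration, not Overkill’s code (the real routine appears below):

	#include <map>
	#include <set>
	#include <string>

	// Minimal illustration: Q(s,a) for a linear model with binary features is the
	// sum of the weights of the features that are present. Not Overkill's code.
	double linearQ(const std::map<std::string, double> &weights,
	               const std::set<std::string> &presentFeatures)
	{
		double q = 0.0;
		for (const std::string &name : presentFeatures)
		{
			std::map<std::string, double>::const_iterator it = weights.find(name);
			if (it != weights.end())
				q += it->second;    // features with no learned weight contribute nothing
		}
		return q;
	}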

Overkill gives its features long multi-part names, which it implements throughout as strings accessed via maps. (I was surprised to see that in a real-time program, but it’s probably easier.) The feature names are written out plainly in the I/O files. Here are a few scattered samples from the file feature_valueAiur, which lists 9638 features altogether:

action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_robotics_facility*hydraBuild:0.13396
action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_robotics_facility*mutaBuild:0.07588
action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_robotics_facility*zerglingBuild:0.06963
action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_stargate*hydraBuild:0.05439
action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_stargate*mutaBuild:0.10049
action_battle_combine_state_battle_feature:enemyKeyBuilding_hasP_stargate*zerglingBuild:0.26210

state_raw_combine_feature:enemyP_cannon_1*ourHydra_6:-0.21410
state_raw_combine_feature:enemyP_cannon_1*ourHydra_12:-0.43786
state_raw_combine_feature:enemyP_cannon_1*ourHydra_18:-0.08806
state_raw_combine_feature:enemyP_cannon_1*ourHydra_24:0.24174
state_raw_combine_feature:enemyP_cannon_1*ourHydra_36:0.42465
state_raw_combine_feature:enemyP_cannon_1*ourHydra_48:0.39939
state_raw_combine_feature:enemyP_cannon_1*ourHydra_60:0.52629
state_raw_combine_feature:enemyP_cannon_1*ourHydra_max:0.59403

state_tech_feature:ourKeyUpgrade_zerglingsAttackSpeed:2.33542
state_tech_feature:ourTechLevel_hatchery:2.28803
state_tech_feature:ourTechLevel_lair:0.25170
state_tech_feature:ourTechLevel_hive:1.48611

You can guess what the feature names mean: Enemy has 1 cannon and we have up to 6 hydralisks, for example. That’s how it got so many features!

Each opponent’s file seems to list a different number of features, probably leaving out features that never came up, so 9638 is not the total number of features. But there’s something here I don’t understand. 9638 is not divisible by 3. Each line gives one weight—shouldn’t there be 3 weights for each state, so that the 3 actions can all be evaluated?

Here’s the routine that calculates Q(s,a). Its arguments are reversed—it puts the action before the state.

double StrategyManager::calActionFeature(std::string curAction, std::map<std::string, std::map<std::string, int>>& features)
{
	for (auto categoryStateFeature : features)
	{
		if (categoryStateFeature.first == "state_raw_combine_feature" || categoryStateFeature.first == "state_building_feature")
		{
			for (auto stateFeature : categoryStateFeature.second)
			{
				std::string combineFeatureName = stateFeature.first + "*" + curAction;
				features["action_battle_combine_state_battle_feature"][combineFeatureName] = 1;
			}
		}
	}

	if (features["state_tech_feature"].find("ourKeyUpgrade_zerglingsAttackSpeed") != features["state_tech_feature"].end())
	{
		std::string combineFeatureName = std::string("ourKeyUpgrade_zerglingsAttackSpeed") + "*" + curAction;
		features["action_battle_combine_state_battle_feature"][combineFeatureName] = 1;
	}

	double curQValue = 0;
	for (auto categoryFeature : features)
	{
		for (auto curfeature : categoryFeature.second)
		{
			int curfeatureValue = curfeature.second;
			if (parameterValue.find(categoryFeature.first) != parameterValue.end() && parameterValue[categoryFeature.first].find(curfeature.first) != parameterValue[categoryFeature.first].end())
			{
				double curParameterValue = parameterValue[categoryFeature.first][curfeature.first];
				curQValue += curParameterValue * curfeatureValue;
			}
		}
	}
	return curQValue;

}

parameterValue holds the model. curAction is the action and the features map with its nested type is the state. Having read this, I still don’t understand. The action name is coded into some feature names and not others, which we see above as + curAction. The list of actions:

	stateActions = {"zerglingBuild", "hydraBuild", "mutaBuild"};

Here’s the call, the bit of code which chooses the action with the highest Q value. (Below this is another bit where it changes the action if it feels like exploring.)

		for (auto action : stateActions)
		{
			std::map<std::string, std::map<std::string, int>> actionFeatureValue = featureValue;
			double curQValue = calActionFeature(action, actionFeatureValue);

			if (curQValue > maxQValue)
			{
				maxQValue = curQValue;
				maxAction = action;
				maxFeatureValue = actionFeatureValue;
			}
		}

The call does nothing to differentiate actions. As far as I can tell, only the features which include the action in their names can be used to tell actions apart, and the other features are irrelevant constants that happen to be added in.
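
To spell it out (the notation is mine): write the contribution of the action-independent features as a constant for the state, so that

	Q(s,a) = C(s) + \sum_{f \in F_a(s)} w_f
	\arg\max_a Q(s,a) = \arg\max_a \sum_{f \in F_a(s)} w_f

where F_a(s) is the set of features whose names include the action a. C(s) is identical for zerglingBuild, hydraBuild, and mutaBuild, so it cannot change which action wins the comparison.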

$ grep hydraBuild feature_valueAiur | wc -l
    2176
$ grep mutaBuild feature_valueAiur | wc -l
    2267
$ grep zerglingBuild feature_valueAiur | wc -l
    2403

So 2176+2267+2403 = 6846 features out of 9638 encode the build name in the I/O file for AIUR. As far as I can tell, the other 2792 features are irrelevant. And those 2792 features include some that look important. Surely you want to pay attention to what upgrades you have when you choose which units to make!

The number of features is different for each action. That means two things. 1. The fact that the total number of features is not divisible by 3 is meaningless. 2. Not all actions have been explored in the different states. As expected, the games played against AIUR were not enough to fill in the model.

Either I’ve misunderstood something, or Overkill’s learning has flaws (I wouldn’t go so far as to say bugs, since it’s a loss of effectiveness rather than an outright error). Can anybody correct me? I’ll contact Sijia Xu.

Next: How it fits into the rest of the program.

Overkill’s new learning 3 - one model or many?

My first question about Overkill’s model is: One model for all opponents, or one model for each opponent? It turns out that the answer is: It depends. It checks curMode.

enum developMode { Develop, Release };
extern developMode	curMode;

Here is an example. Bits of code like this show up for each of the 3 data files used to store the model and learning information.

	if (curMode == Develop)
	{
		filePath = "./bwapi-data/write/RL_data";
	}
	else
	{
		string enemyName = BWAPI::Broodwar->enemy()->getName();
		filePath = "./bwapi-data/write/RL_data";
		filePath += enemyName;
	}

In the AIIDE 2016 competition, curMode is set to Release. It looks as though each opponent gets its own model, learned independently. But not learned from scratch!

My idea that Overkill has a general model turned out true. (I may have read it somewhere and forgotten where.) When it plays an opponent for the first time, it uses a model defined in file ModelWeightInit.h as the initial model, and learns starting from there. I don’t see any information about how the initial model was created. It may have been trained by playing against a variety of opponents, in Develop mode.
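
The general pattern is easy to sketch. This is my illustration, not Overkill’s exact I/O code: initialWeights stands in for the model compiled in from ModelWeightInit.h, and filePath is the per-opponent weight file (like feature_valueAiur) written after earlier games; the name:weight line format is an assumption based on that file.

	#include <cstdlib>
	#include <fstream>
	#include <map>
	#include <string>

	// Sketch of the general pattern, not Overkill's exact code: start from the
	// built-in initial model and overwrite with per-opponent weights if any exist.
	std::map<std::string, double> loadModel(const std::string &filePath,
	                                        const std::map<std::string, double> &initialWeights)
	{
		std::map<std::string, double> weights = initialWeights;   // the general "how to play" model
		std::ifstream in(filePath.c_str());
		std::string line;
		while (std::getline(in, line))
		{
			std::string::size_type sep = line.rfind(':');          // the weight follows the last colon
			if (sep == std::string::npos)
				continue;
			weights[line.substr(0, sep)] = std::atof(line.substr(sep + 1).c_str());
		}
		return weights;    // if the file does not exist, this is simply the initial model
	}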

You could say that the initial model is “how to play Starcraft” and the refined model made for each opponent is “how to beat this opponent.” The same learning system can be used both offline to learn the game and online to model the opponent.

How well did opponent modeling work? We can look at Overkill’s graph of win rate over time in the AIIDE 2016 results. Its winning rate after 20 rounds was 0.59 and after 90 rounds was 0.62. The curve shows a fairly steady rise, and visually it’s convincing that Overkill learned something about its opponents, but it improved its win rate only a little. The unsteady learning curve we saw in the description suggests that Overkill might have eventually learned more if the tournament had gone on long enough—maybe.

Next: The features of Overkill’s model.

LetaBot’s wall 5 - discussion

I’m done looking at code, but I have a bunch of final observations.

limitations

As written, LetaBot’s wall code can only plan walls with a barracks and two supply depots. In theory it might create a wall with a barracks and one depot, or with only the barracks, but the map would have to have a strange layout of unbuildable spots so that the later buildings fail to be placed during the search. Normally it will squeeze all buildings in, all in contact with each other, even if some are unneeded, because it places all buildings before checking whether it has a wall.

It looks easy to reconfigure to use another fixed set of buildings. SpaceArray already includes data for the command center (wall until you lift it off) and the academy. Walls including a factory are more common, and it looks simple to add the data for that. If you have the time for the extra calls to WalledOffAStar, you could do a more extensive search that considered different possible buildings and stopped as soon as it found a wall, so that no unnecessary buildings are included. Of course the choice of buildings interacts with the build order. If your build order has the factory late but you need the wall early....

If you know the size and form of the choke, maybe there’s a way to figure out the right set of buildings to wall it off, so that you don’t have to try different sets until you find one that works. It’s an idea, anyway. For protoss, simple heuristics would do pretty well: To close a wide choke as tightly as possible, start with gateway + forge. To narrow a wide choke without closing it, use pylons. To wall a ramp, use pylons. Of course LetaBot doesn’t have code for protoss building restrictions.

reusing the wall code

The wall code is complicated. It’s also fairly messy, showing the scars of a lot of surgery. Some of the rework may be due to changing goals as the bot’s strategy shifted, but I suspect that most of it is from the sheer difficulty of getting the complex calculation to work properly in real play. If I wanted the code for a project of my own, I would first take time to clean and tighten it, comb for bugs, and make the efficiency improvements that belong in a second draft. It’s unpolished, but I judge the code quality overall as within the normal range for an academic project done under the usual heavy time constraints.

Legalities. I didn’t find any copyright license info in the CIG 2016 version of LetaBot. Of course it’s not required! But authors who actively want their code to be reused should think about conditions. For example, academic authors probably want credit and might choose a license that allows reuse on condition that the author is credited. Creative Commons offers choices. Copyright laws are different everywhere, and international copyright is even more confusing, so it’s helpful to make things as easy as possible for those who want to get by within the legal system.

more uses for WalledOffAStar

WalledOffAStar takes a start and end location and looks for a path between them for a unit of a given size, staying within a bounding box, taking into account known buildings. If there is no path, then the way is blocked by buildings or terrain (there may exist a path that goes outside the bounding box). It is coded to work only for the buildings that LetaBot uses in its wall, but it doesn’t seem difficult to generalize. You could use it to check for walls that you suspect or fear might exist.

If one of your units is trying to move somewhere, and it hasn’t made expected progress, and it is near your own buildings, then you might want to use code like this to check whether you have accidentally walled it in. If yes, you can also use it to pick a building to lift or destroy to break the wall.

If you see enemy buildings near a choke point that you want to attack through, you can use code like this to see which of your units fit through, or how much the choke is narrowed. The result can inform whether you should try to destroy buildings as the first stage in the attack.

spawning inside or outside

I don’t see any attempt to judge whether units will spawn inside or outside the wall. Terran and protoss units as well as zerg larvae spawn in predictable locations relative to the building that creates them, so I think it would be a comparatively easy extension. For a wall that includes a production building, it seems important to know. On the other hand, there’s usually not much you can do about it when units spawn outside.

how I expected the wall code to work

When I heard that LetaBot uses A* to plan its wall, I imagined a different algorithm than it actually uses. LetaBot uses brute force: It places buildings in different arrangements and figures out whether the buildings form a wall by looking for a path through them. Humans plan a wall by spatial reasoning, not brute force.

I imagined an algorithm that’s sort of closer to the human method. First, identify a flat area that you want to wall and find the two edges that the wall should run between. Let’s say the edges are the top and bottom. We’ll use A* to place buildings in a best-first way. When you use A* to find a path, the cost of a partial path is g(x) the length of the path so far plus h(x) a heuristic estimate of the remaining path length. The wall we want is basically a path of buildings from the top edge to the bottom edge! Our h(x) will be the distance from the bottommost point of the bottommost building in the wall to the bottom edge. g(x) is a measure of how long the path is so far: It will be the sum of the heights of the buildings placed.

As LetaBot does, decree a maximum gap size and don’t allow any larger gaps between buildings. Place the first building against the top edge, respecting the gap size. The A* formula of g(x) + h(x) tells you which placement to try first. Place the second building against the first one, and A* again tells you which placement to try first, etc. You’re done when the distance to the bottom respects the gap size.

This is a more expensive version of A* than WalledOffAStar, but you only run it once so apparently wall-finding would take less work overall. Though you shouldn’t trust human intuition when it comes to run times! It’s likely that g(x) and h(x) would have to be tweaked for best results—why should my first idea be the best idea? As always, it’s just an idea.

LetaBot’s wall 4 - placing buildings to form the wall

Continuing from yesterday, here is how RecursiveWall places buildings to try to create a wall. It is of course recursive. It wants a vector of building types and a current search depth.

void BuildingManager::RecursiveWall(std::vector<int> Buildings, int depth){

The call is like this. (The three bits of code here are in different places in the source.) The numbers representing buildings are used as indexes into the SpaceArray mentioned yesterday, which has the information needed to compute the gaps between buildings. I’m not sure why the two depots get different indexes into SpaceArray (which has the same data for both). Is it so they can be told apart in maps, so that building adjacency can be found? Anyway, depth is initially 0.

	  WallSound = false;
	  
	  std::vector<int> Buildings;
	  Buildings.push_back(2);//Barracks
	  Buildings.push_back(3);//Supply depot
	  Buildings.push_back(4);//Supply depot

	  RecursiveWall(Buildings,0);

The code is full of implementation details that don’t matter at this level of discussion (and includes some redundant code), so here is pseudocode. CanWall() is a utility function to figure out whether a building can be placed at a given location. mapWall() maintains a 2D map of building locations. I should also mention, though I didn’t realize it until today, that WalledOffAStar() returns false if a wall exists and true if you can walk through, despite its name and some comments. It’s confusing.

RecursiveWall (vector Buildings, int depth) :
  if (WallSound) {                    // we found a wall
    return;                           // so stop the search
  }
  if (depth == Buildings.size()) {    // all buildings are placed
    walkable = WalledOffAStar (...)   // can we walk through?
    if (not walkable) {               // no, try lifting the barracks
      gate = WalledOffAStar (..., ignoring barracks)
      if (gate) {                     // can we walk through now?
        WallSound = true;             // yes, we found a wall
        BarracksWall = ...;           // record the building placements
      }
    }  
    return;                           // either way, we're done
  } else {
    int BuildType = Buildings[depth]; // next building to place
    for (int x, y inside the bounding box) {
      if (CanWall (BuildType, x, y)) {
        mapWall (x, y, BuildType, BuildType); // update data structures
        BWAPI::TilePosition thisTile = BWAPI::TilePosition (x, y);
        BuildingPlace.push_back (thisTile);
        RecursiveWall (Buildings, depth+1);   // recursive call
        BuildingPlace.pop_back();             // roll back updates
        mapWall (x, y, BuildType, BUILDABLE);
      }
    }
  }

Wow, it’s a little hard to follow even boiled down this much. WallSound is initialized to false before the call. So WallSound ends up true only if both the wall is sound and lifting the barracks opens it (so that the barracks is a functional part of the wall and not an appendix to a wall of supply depots). Older versions of LetaBot sometimes got that wrong and walled themselves permanently into their bases.

RecursiveWall does not know which of the buildings in its vector are the same, so if one building fails to place at any location it simply continues on to the next. As far as it knows, a good wall might still be possible with the remaining buildings. An optimization might be to prune the search appropriately when building placements fail. Is it too rare to matter?

Here’s what CanWall does, again in pseudocode. It is given the building’s type and proposed location and returns whether the location is good to build on. Buildings are required to be adjacent to an already placed building. It knows that the barracks is first and lets it be “adjacent to itself.”

CanWall (buildingType, buildingX, buildingY) :
  nextToB = false;       // adjacent to an existing building?
  for (int x, y in the building's footprint from SpaceArray) {
    if (outside the map) return false;
    if (not buildable) return false;
    if (covers Closest or Furthest) return false;
    if (useHighGround and this is not high ground) return false;
    if (buildingType == barracks) nextToB = true;
    if (adjacent to an already placed building) nextToB = true;
  }
  return nextToB;

The code runs the “adjacent to an existing building” loop even for a barracks. For efficiency, it could skip adjacency checks (the inner loop and thus the most expensive part of the code) once nextToB turns true, since it never goes false again. The compiler may be smart enough to lift the “is this building a barracks?” test above the outer loop, since it doesn’t depend on x and y, but it would be tidier to code it that way from the start.

Next: Discussion of walling code.

LetaBot’s wall 3 - low-level utilities

Today, a few low-level parts that RecursiveWall() relies on. As mentioned yesterday, LetaBot plans its wall to fit inside a bounding box around the center of the choke to be walled off. It finds the Closest and Furthest tiles from the original command center that are in the box.

First, the SpaceArray that stores the sizes of buildings. The information can be found under List of Unit and Building Sizes in Liquipedia. The top, bottom, left, and right numbers are pixel gaps that the building leaves along each side. It’s the basic data you need to figure out how tight a wall is. With the gap sizes of two adjacent buildings, you can find the total gap between the buildings and see which units fit through the gap. Here is a sample of the initialization code:

ExtraSpace SpaceArray[10];

	ExtraSpace Barracks;
	Barracks.Top = 8;
	Barracks.Bottom = 15;
	Barracks.Left = 16;
	Barracks.Right = 7;
	Barracks.width = 4;
	Barracks.heigth = 3;

	SpaceArray[2] = Barracks;
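
With two of these entries, the basic arithmetic the post describes is simple. A hedged sketch (mine, not LetaBot’s MaxGap, which handles all the relative placements):

	// If building A sits immediately to the left of building B, the pixel gap between
	// them is the space A leaves along its right edge plus the space B leaves along
	// its left edge. A unit fits through only if its pixel width is less than that gap.
	int horizontalGapPixels(const ExtraSpace &left, const ExtraSpace &right)
	{
		return left.Right + right.Left;
	}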

MaxGap() is the function that uses the SpaceArray to find the gap. You pass in x and y and it looks to see what’s there. If the location is at the edge of a building, it looks for a neighboring building, figures out the relative locations of the buildings, and does a bogglingly complex calculation to find the horizontal and vertical gaps between the buildings, in pixels. I didn’t try to understand the details.

// a 2x2 part of the walkable array
//check for gaps between each of the tiles
int BuildingManager::MaxGap(int x, int y){

And WalledOffAstar() is the function that determines whether a wall has been created inside the bounding box. For some reason you pass in Startx and Starty, which are always at the Closest location. WalledOffAstar() uses the A* algorithm to look for a path between Closest and Furthest, a path which a unit with horizontal size unitH and vertical size unitV can follow. If no such path exists, then there’s a wall across the choke!

bool BuildingManager::WalledOffAstar(int Startx, int Starty, int Ignore, int unitH, int unitW){

When I heard that LetaBot used A* in its wall planning, I imagined that it used A* directly to search for building placements. But now we can see the real strategy. RecursiveWall() tries a building placement and then calls WalledOffAstar() to see whether that placement constitutes a wall. No? OK, try the next building placement.

Next: Finally, RecursiveWall() proper, finding the wall.

LetaBot’s wall 2 - the outline

Yesterday LetaBot found the center tile of the location it wanted to wall. Here is the outline of the rest:

  1. Precalculate some stuff.
  2. Initialize useHighGround to true.
  3. Call RecursiveWall() to calculate a wall or fail.
  4. If it failed, set useHighGround to false and call RecursiveWall() again.
  5. Do a final check: Is the wall in time?

The precalculation is surprisingly complicated. It looks like the main results are Closest and Furthest tiles in a bounding box that the wall must fit into. Closest is the tile in the bounding box closest to the original command center by ground and Furthest is the farthest reachable tile in the box. Both are required to be walkable and buildable.

If LetaBot can, it builds the wall on high ground. On many competitive maps, the main is on high ground with a ramp down, and the wall definitely belongs on high ground. On some the main is on level ground and the wall can be placed freely. On a few, like Jade, the main is on low ground with a ramp up. In that case, LetaBot will look to build its wall outside the main if it can. I’m not sure whether that’s good or bad; my Brood War knowledge is lacking. Conventional wisdom is that an up ramp means you should expand early so you don’t have to fight up your own ramp, so maybe any wall is a mistake. And I have seen LetaBot lose on Jade when its wall proved easy to attack and hard to defend. Can some terran expert tell us more?

The final check looks to see whether LetaBot has already started to build without waiting for the wall calculation to finish. Wall planning is done in a separate thread because it may take a long time. LetaBot is prepared for the case that wall planning takes too long and the bot has pre-empted the wall by starting to build elsewhere.

	  //also check if the producmanager isn't already building
	  //because if this is the case then don't go for a wall
	  bool StopWall = false;
	  //check if supply depot already build
	  BOOST_FOREACH( BWAPI::Unit* building , Broodwar->self()->getUnits() ){
		  if( building->getType().isBuilding() &&
			  building->getType() != BWAPI::UnitTypes::Terran_Command_Center ){
              StopWall = true;
		  }			  
	  }

	  if( ProdMan != NULL ){
	    if( ProdMan->TotalTypeInQueue( BWAPI::UnitTypes::Terran_Supply_Depot ) > 0
		    || ProdMan->TotalTypeInQueue( BWAPI::UnitTypes::Terran_Barracks ) > 0){
              StopWall = true;
	    }
	  }

	  if( StopWall == true ){
	     WallSound == false;
	     WallCalculated = false;
	  }

The first check looks for any existing building other than a command center, and the second check looks into the production manager’s queue for either a depot or a barracks. If either check finds that it is too late, the bot tells itself that no wall is possible after all.

Next: Starting on the details of RecursiveWall().

LetaBot’s wall 1 - locating the wall

LetaBot’s wall code is hard to follow, so I’ll take it in bite-size pieces. Today, deciding where the wall should go. It’s likely to be a good location for other defenses, too.

LetaBot can calculate how to wall off its main base with a barracks and 2 depots. It’s the classic terran wall. Unfortunately it doesn’t seem to recognize cases where fewer buildings are enough. The call to calculate the building locations for the wall is BuildingManager::WallOff(). If successful, it stores the calculated locations in the object for later use. From the BuildingManager class declaration in BuildingManager.h:

	bool WallCalculated;//True when the map calculation is done
	bool WallSound;
	BWAPI::TilePosition BarracksWall;
	BWAPI::TilePosition SupplyWall1;
	BWAPI::TilePosition SupplyWall2;

Below, I left out some confusing redundant code plus code with unrelated purposes.

void BuildingManager::WallOff(){

	BWAPI::TilePosition TileChoke;
		
	TileChoke = BWAPI::TilePosition( InfoMan->PosMainChoke );

It’s looking up a location recorded in the InformationManager. Here’s how InformationManager calculates it, at first naming the target choke BunkerChoke.

	BWTA::Region* natRegion = BWTA::getRegion( PosOurNat );
	BWTA::Region* mainRegion = BWTA::getRegion( PosOurBase );

	BWTA::Chokepoint* BunkerChoke = NULL;
	BOOST_FOREACH( BWTA::Chokepoint*  choke, BWTA::getChokepoints() ){
	   if( (choke->getRegions().first == natRegion && choke->getRegions().second == mainRegion) ||
		   (choke->getRegions().second == natRegion && choke->getRegions().first == mainRegion) ){
			BunkerChoke = choke;
	   }
   }

It walks through the choke points as reported by the Brood War Terrain Analysis library BWTA until it finds one which is between the main and the natural. In theory there could be more than one, but it doesn’t worry about that. The natural (at PosOurNat) is calculated earlier as the gas base which is closest to the main by ground distance. That should be fine for standard competitive maps, including maps with an internal mineral-only like Andromeda and Electric Circuit.
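
As a sketch of that natural-finding idea (my code, not LetaBot’s), using the BWTA calls that appear elsewhere in this file; the assumption that getGroundDistance() is negative when there is no ground path is mine:

	// Sketch: the natural is the gas base closest to the main by ground distance.
	// mainTile is the tile position of our start location. Needs BWTA.h.
	BWTA::BaseLocation* findNatural(BWAPI::TilePosition mainTile)
	{
		BWTA::BaseLocation *natural = NULL;
		double bestDistance = 1e9;
		const std::set<BWTA::BaseLocation*> &bases = BWTA::getBaseLocations();
		for (std::set<BWTA::BaseLocation*>::const_iterator it = bases.begin(); it != bases.end(); ++it)
		{
			BWTA::BaseLocation *base = *it;
			if (base->isMineralOnly())                   // a natural should have gas
				continue;
			if (base->getTilePosition() == mainTile)     // skip the main itself
				continue;
			double d = BWTA::getGroundDistance(mainTile, base->getTilePosition());
			if (d >= 0 && d < bestDistance)              // assumption: negative means unreachable
			{
				bestDistance = d;
				natural = base;
			}
		}
		return natural;
	}

Back to LetaBot’s code.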

	if( BunkerChoke == NULL && mainRegion != NULL && natRegion != NULL ){
		int closestNat = 99999;
		BOOST_FOREACH( BWTA::Chokepoint*  choke, natRegion->getChokepoints() ){
		   if( BWTA::getGroundDistance(  BWAPI::TilePosition(choke->getCenter() ), OurBase ) < closestNat ){
			   BunkerChoke = choke;
			   closestNat = BWTA::getGroundDistance(  BWAPI::TilePosition(choke->getCenter() ), OurBase );
		   }
	   }
   }

If that doesn’t succeed because the map has no choke between the main and natural, it tries again. It could happen if the natural is internal to the main’s region or if the closest gas natural is farther and there are intervening regions (which is likely if the closest next base is mineral-only, as on Nostalgia or Korhal of Ceres). (It also fails on an island or semi-island map, but presumably LetaBot is not intended to play on those.) The code considers the chokes to the natural. It walks through these chokes and picks the one that is closest to the start location (OurBase) by ground distance. I’m not sure how well it works, but that’s what it says. It may mess up if there is an intervening region between the main and natural and the intervening region has another entrance. My first thought is to consider the chokes out of the main and do a path analysis: Go through the choke to the next region; from there, compute shortest ground paths to the possible enemy base locations; if none of the paths involves returning back through the main, then that choke is toward the enemy and may be worth walling.

   if( BunkerChoke != NULL){
     PosMainChoke = BunkerChoke->getCenter();
   }

If one of the two methods worked, then PosMainChoke is the center of BunkerChoke, all done.

Discussion. It looks like some maps will confuse the wall locating code, especially non-standard and old-school maps. If there is an internal natural, or an entrance to the main other than through the natural, LetaBot may do something unhelpful. For example, Alchemist (as played in CIG 2016) has two entrances to each base. It’s a 3-player map arranged as a big circle, so that the enemy is closer to one of your base entrances than to the other. This code will choose the natural base with its wall location sometimes toward the enemy and sometimes away—it probably should put the wall toward and the natural away (there are bases outside both doors). In another example, the old tournament map Bifrost has a front entrance to the main plus an internal mineral-only with a back entrance. Hypothetically, if we gave the internal base a geyser, then this code would build an unhelpful wall between the main and the internal natural, allowing enemy units that came through the back entrance access to the ledge above your mineral line. The right plan is to block the front and back entrances separately.

Even choosing a wall location can be tricky! LetaBot is optimized for the usual case, and nothing is wrong with that. It would be extra work for little gain to handle all cases well.

The name BunkerChoke is a hint: A wall is only one form of defense. The same choke may be good for other defenses, whether static defense or units standing guard. Even smart bots sometimes misplace their early defense. For example, watch Zia on Andromeda place early zerglings on guard well back from the ramp in the “choke to the main,” because it treats the high-ground mineral-only as a separate region. Or watch Skynet on Benzene misplace its early zealots, because there are two entrances to the main and Skynet does not realize that one of the entrances is blocked by map buildings. It may seem like an easy decision, but it’s hard.

Skynet’s LatencyTracker

Back when I was analyzing Skynet’s code, a class LatencyTracker came up. It sounds like it might have to do with latency in network games, doesn’t it? Latency is a confusing issue for bots, and Krasi0 asked me to look into it. It’s been on my to-do list ever since.

It turns out that LatencyTracker has nothing to do with network latency. It keeps track of the location and duration of storm and stasis. Oh well. :-(

It’s used in only a couple places in code like this. mUnit is of type BWAPI::Unit*. isStormInRange does what its name says: Is a storm calculated to be in range of the current unit? I’m not at all sure what the practical effect of this test is, since it looks redundant. Maybe it is working around a bug in some version of BWAPI, or maybe it is used with hypothetical units that BWAPI doesn’t know about. Or something.

bool UnitClass::isUnderStorm()
{
	if(exists())
		if(mUnit->isUnderStorm())
			return true;

	return LatencyTracker::Instance().isStormInRange(shared_from_this());
}

Skynet - avoiding splash damage

Skynet tries to avoid some area-of-effect spells and splash damage from some attacks. This is the fanciest skill that I’m writing up.

You can only avoid stuff that you can see coming. Skynet tracks threats in class AOEThreatTracker (where AOE stands for Area Of Effect), which creates and stores AOEThreat objects. Here is the key code from AOEThreatTracker::update(), which is called once per frame from the main onFrame() in Skynet.cpp:

	for each(Unit unit in UnitTracker::Instance().selectAllEnemy())
	{
		const BWAPI::UnitType &type = unit->getType();
		if((type == BWAPI::UnitTypes::Protoss_Scarab || type == BWAPI::UnitTypes::Terran_Vulture_Spider_Mine) && mUnitThreats.count(unit) == 0)
		{
			AOEThreat newThreat = AOEThreat(new AOEThreatClass(unit));
			mAllThreats.insert(newThreat);
			mUnitThreats[unit] = newThreat;
		}
	}

	for each(BWAPI::Bullet* bullet in BWAPI::Broodwar->getBullets())
	{
		const BWAPI::BulletType &type = bullet->getType();
		if((type == BWAPI::BulletTypes::Psionic_Storm || type == BWAPI::BulletTypes::EMP_Missile) && mBulletThreats.count(bullet) == 0)
		{
			AOEThreat newThreat = AOEThreat(new AOEThreatClass(bullet));
			mAllThreats.insert(newThreat);
			mBulletThreats[bullet] = newThreat;
		}
	}

Skynet recognizes enemy reaver scarabs and spider mines as splash damage threats. (A bot that might lay mines of its own should also recognize its own spider mines as threats. Skynet does not use mind control.) It also recognizes psionic storm and EMP as area effect spells to try to dodge.

What does it do with the information it’s tracking? Here’s the public interface from AOEThreatTracker.h:

	void update();

	AOEThreat getClosestGroundThreat(const Position &pos) const;
	AOEThreat getClosestAirThreat(const Position &pos) const;
	AOEThreat getClosestEnergyThreat(const Position &pos) const;
	AOEThreat getClosestThreat(Unit unit) const;

	bool isTargetOfThreat(Unit unit) const;

Psionic storm is both an air threat and a ground threat. Scarabs and mines are ground threats. EMP is the only energy threat. But of the 5 informational methods, only getClosestThreat() and isTargetOfThreat() are used in the rest of the code; the other 3 get methods serve no purpose. getClosestThreat() takes a unit and considers whether it is an air or ground unit to find the closest threat. It counts EMP as a threat to spellcasters only, even though all protoss units will lose their shields to it. It’s a little inconsistent; corsairs count as spellcasters, but Skynet never researches disruption web.

The results of all this tracking work are used in only one place, in BasicUnitAction::update(), which is a micro action like the others we’ve seen in the last days. The BasicUnitAction of course has very low priority among micro actions, so if the unit under consideration is already doing something else (like dragging the mine that is the threat), then none of this happens.

	const bool isTargetOfThreat = AOEThreatTracker::Instance().isTargetOfThreat(mUnit);
	if(!isTargetOfThreat)
	{
		const AOEThreat &closestThreat = AOEThreatTracker::Instance().getClosestThreat(mUnit);
		if(closestThreat)
		{
			const int distanceToThreat = mUnit->getDistance(closestThreat->getPosition());
			if(distanceToThreat < closestThreat->getRadius()+32)
			{
				stayAtRange(mUnit, closestThreat->getPosition(), closestThreat->getRadius() + 64, distanceToThreat);
				return true;
			}
		}
	}

First it remembers whether the current unit mUnit is the target of the threat, for later use. The explicit target does not dodge the threat (sometimes it can’t). If the current unit is not the target, but it is within range of at least one threat, then it finds the closest threat and tries to get out of its range. stayAtRange() calculates a direction and moves that way. There’s no attempt to avoid multiple threats, which are likely for spider mines; Skynet is happy to flee one and run into another. Also, I notice that AOEThreatTracker keeps careful track of each threat radius, but this code ignores it and flees to a fixed radius. I assume that’s good enough? Anyway, this limitation may explain why Skynet does not try to dodge lurker spines, which affect a long narrow area.
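
The geometry behind a stayAtRange-style move is simple enough to sketch. This is my illustration of the idea, not Skynet’s code: step along the line from the threat’s center through the unit’s position, out to the desired range.

	#include <algorithm>
	#include <cmath>

	// Sketch (not Skynet's stayAtRange): pick a point at desiredRange from the threat,
	// in the direction from the threat toward the unit, and move there.
	struct Point { double x, y; };

	Point fleePoint(Point unit, Point threat, double desiredRange)
	{
		double dx = unit.x - threat.x;
		double dy = unit.y - threat.y;
		double len = std::max(1.0, std::sqrt(dx * dx + dy * dy));   // avoid dividing by zero
		return Point{ threat.x + dx / len * desiredRange,
		              threat.y + dy / len * desiredRange };
	}

In the call above, the desired range is the threat radius plus 64 pixels.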

If the current unit is the target, the unit stays put so that the other units around it know what to do to avoid the threat. currentTargetUnit is whatever the current unit is already shooting at. This is not best if under attack by a reaver scarab, but it does help nearby units know which way to flee.

	// If the target of a threat, dont do anything to cause it to move
	if(isTargetOfThreat)
	{
		if(currentTargetUnit)
			mUnit->attack(currentTargetUnit);
		else
			mUnit->stop();
		return true;
	}

Skynet has other interesting skills. See BlockedPathManager and MineBlockingMineralTask, for example. But that’s enough for now.

Tomorrow: Wrapup and lessons learned.

Skynet - arbiter control

Arbiters are tough to use well. You want to maneuver your arbiter to cloak as many of your units as possible while staying safe. You want to stasis or recall at critical times and places. Occasionally you want to do damage with the pea-shooter. The goals pull in different directions, and you have to decide what’s best.

Here’s how Skynet decides. Skynet has two arbiter skills: It knows how to cloak units efficiently and how to stasis. Skynet does not use recall, and it (understandably) doesn’t put any emphasis on targeting enemies with the puny gun.

Arbiter control is in ArbiterAction::update(), which is like the other micro actions, set up by Behaviour::createDefaultActions(). First it considers stasis:

	if(mUnit->getEnergy() >= 100 && BWAPI::Broodwar->self()->hasResearched(BWAPI::TechTypes::Stasis_Field))
	{
		const int stasisSize = 48;

		bool stasisUrgently = false;
		if(UnitInformation::Instance().getUnitsTargetting(mUnit).size() >= 6 || (UnitInformation::Instance().getUnitsTargetting(mUnit).size() >= 1 && mUnit->totalHitPointFraction() < 0.3))
			stasisUrgently = true;

First, if stasis is possible and the arbiter is in danger of being lost, Skynet sets stasisUrgently to true.

		UnitGroup stasisChoices;
		for each(Unit enemy in UnitTracker::Instance().selectAllEnemy())
		{
			if(!UnitHelper::isArmyUnit(enemy->getType()))
				continue;

			if(enemy->isUnderStorm() || enemy->isStasised())
				continue;

			const int distance = mUnit->getDistance(enemy);
			if(distance > 250 || distance < stasisSize)
				continue;

			stasisChoices.insert(enemy);
		}

Then it comes up with a list of candidate enemies to stasis. Enemies under psionic storm are not candidates—let them suffer!

		if(stasisChoices.size() > 4 || (!stasisChoices.empty() && stasisUrgently))
		{
			UnitGroup stasisTargets = stasisChoices.getBestFittingToCircle(stasisSize);

			if(stasisTargets.size() > 4 || (!stasisTargets.empty() && stasisUrgently))
			{
				const Position &stasisLocation = stasisTargets.getCenter();
				if(mUnit->getDistance(stasisLocation) <= BWAPI::TechTypes::Stasis_Field.getWeapon().maxRange())
				{
					mUnit->useTech(BWAPI::TechTypes::Stasis_Field, stasisLocation);
					LatencyTracker::Instance().placingStasis(mUnit, stasisLocation);
					return true;
				}
				else
				{
					mUnit->move(stasisLocation);
					return true;
				}
			}
		}
	}

And finally Skynet decides whether and where to stasis. If it can stasis at least 5 enemy units, or at least 1 unit and stasisUrgently is set, then it does. UnitGroup::getBestFittingToCircle() finds a circle of the given size which covers as many units in the UnitGroup as possible, and returns the smaller UnitGroup which it covers.

It’s a simple heuristic. An arbiter will happily stasis 5 vultures uselessly when it reaches stasis energy after the army it was covering is destroyed by tanks and vultures. The “right” way to do this would be with a pay-me-now-or-pay-me-later tradeoff that compares the situation now with possible future situations, which would of course be far more complicated. And, since that’s not hard enough yet, it should also consider the effect on the battle: Stasis on a ramp to block reinforcements? Stasis the tanks that are doing the most damage, or the vessel that is detecting the cloaked army? And so on.
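
For reference, here is a naive approximation of what a getBestFittingToCircle-style routine does. This is my sketch, not Skynet’s implementation, which may be smarter: center a candidate circle on each unit and keep the subset that covers the most units.

	#include <cmath>
	#include <vector>

	struct P { double x, y; };

	// Naive sketch: try each unit's position as the circle center and return the
	// largest subset that fits inside the radius. Not Skynet's code.
	std::vector<P> bestFittingToCircle(const std::vector<P> &units, double radius)
	{
		std::vector<P> best;
		for (std::size_t c = 0; c < units.size(); ++c)
		{
			std::vector<P> covered;
			for (std::size_t i = 0; i < units.size(); ++i)
			{
				double dx = units[i].x - units[c].x;
				double dy = units[i].y - units[c].y;
				if (std::sqrt(dx * dx + dy * dy) <= radius)
					covered.push_back(units[i]);
			}
			if (covered.size() > best.size())
				best = covered;
		}
		return best;
	}

That is quadratic in the group size, which is cheap for the handful of nearby units an arbiter cares about.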

Next is maneuvering the arbiter to cloak units:

	UnitGroup unitsToCloak;
	for each(Unit unit in squadUnitGroup)
	{
		if(!UnitHelper::isArmyUnit(unit->getType()))
			continue;

		if(unit->getType() == BWAPI::UnitTypes::Protoss_Arbiter || unit->getType().isBuilding())
			continue;

		if(mUnit->getDistance(unit) > 250)
			continue;

		unitsToCloak.insert(unit);
	}

	if(!unitsToCloak.empty())
	{
		unitsToCloak = unitsToCloak.getBestFittingToCircle(136);
		if(!unitsToCloak.empty())
		{
			Position cloakLocation = unitsToCloak.getCenter();
			if(mUnit->getDistance(cloakLocation) > 110)
			{
				mUnit->move(cloakLocation);
				return true;
			}
		}
	}

Candidate units to cloak are basically military units in the squadUnitGroup which can be cloaked and are near enough. Skynet never tries to protect probes or observers with the cloaking field (UnitHelper::isArmyUnit() returns false for both), but it will try to cloak dark templar that are already cloaked (a simple omission from the above code and trivial to fix) and units that are already covered by another arbiter (only a little harder to fix). It does the getBestFittingToCircle() thing to find the largest circle of units that it can cloak, and moves toward the circle’s center if it’s not close enough. If it’s already close enough then the micro action does not occur, which I assume frees the arbiter to fire its gun when possible, flee danger within limits, and so on.

Tomorrow: Avoiding splash damage, finishing the series.

Skynet - mine dragging

Spider mines do a ton of splash damage when they explode, and they don’t discriminate. Everything in splash range suffers, enemy or friendly. So it’s popular to intentionally trigger enemy mines while running up to enemy units, dragging the mine into the enemy to cause damage. Mines can sometimes follow a unit for quite a distance before going off. Protoss usually drags mines with zealots, though dark templar are also good. Zerg usually uses speedy zerglings in small groups, because they’re cheap and fast. Terran can sometimes trigger enemy mines with drops, but doesn’t have an appropriate unit to drag mines otherwise.

Skynet knows how to drag mines with a zealot. It uses the same system of micro actions that I touched on yesterday. In Behaviour::createDefaultActions() it records a possible mine drag micro action for every zealot:

	if(unitType == BWAPI::UnitTypes::Protoss_Zealot)
		mMicroActions.push_back(MicroAction(new MineDragAction(mUnit)));

By the way, the micro actions are evaluated in Behaviour::update(), which checks them in order. Each unit can only execute one micro action at a time, and the first action to return true is what happens. Skynet’s only prioritization is to record the most important micro actions first in createDefaultActions(). Mine dragging is recorded near the end, as a low-priority action. Yesterday’s archon-zealot undetected unit attack is higher priority.
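
The priority scheme is simple to sketch: try the recorded actions in order and stop at the first that fires. This is my illustration of the pattern, not Skynet’s Behaviour::update():

	#include <functional>
	#include <vector>

	// Sketch of a first-true-wins priority list. Each entry issues an order and
	// returns true if it chose to act this frame. Not Skynet's code.
	bool runMicroActions(const std::vector<std::function<bool()>> &microActions)
	{
		for (std::size_t i = 0; i < microActions.size(); ++i)
			if (microActions[i]())
				return true;    // this action acted; skip the lower-priority ones
		return false;           // nothing fired; default behaviour takes over
	}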

Here is the entirety of MineDragAction::update():

bool MineDragAction::update(const Goal &squadGoal, const UnitGroup &squadUnitGroup)
{
	for each(Unit unit in UnitInformation::Instance().getUnitsTargetting(mUnit))
	{
		if(unit->getType() == BWAPI::UnitTypes::Terran_Vulture_Spider_Mine)
		{
			int distance = std::numeric_limits<int>::max();
			Unit closestUnit;

			for each(Unit enemyUnit in UnitTracker::Instance().selectAllEnemy())
			{
				if(enemyUnit->getType().isFlyer() || enemyUnit->isLifted() || enemyUnit->getType().isBuilding() || enemyUnit->getType() == BWAPI::UnitTypes::Terran_Vulture_Spider_Mine)
					continue;

				int thisDistance = mUnit->getDistance(enemyUnit);
				if(thisDistance < distance)
				{
					distance = distance;
					closestUnit = enemyUnit;
				}
			}
			
			if(closestUnit && distance < 32*5)
			{
				mUnit->attack(closestUnit);
				return true;
			}
		}
	}

	return false;
}

This seems to only notice spider mines which have already popped up and are targeting the current zealot. If one is found, then the code checks for suitable enemy units (not stuff in the air, for example) that are close enough. If any is found, then the zealot targets the nearest for attack, end of story.

That’s basic mine dragging. A stronger version would also consider detected or remembered mines: If a mine in the ground is detected or remembered to be near valuable enemies, then check whether it can be activated and dragged into the enemies. You can remember a mine location if you detected it in the past, or if you saw it being laid or moving and reburrowing. That’s more complicated, but human players do it as a matter of course. Humans will even try to drag mines that they expect or hope are there, without being sure.

Tomorrow: Arbiter control.

Skynet - killing undetected units

Archons do splash damage that affects all enemy units, including undetected units. That means that you can use an archon to, for example, kill a burrowed lurker that you can’t detect, by attacking one of your own units. The usual way is to park a sacrificial zealot on top of the lurker and attack the zealot with the archon; the archon will do full damage to the zealot and splash damage to the lurker.

Archon splash affects both ground and air. An archon attacking a zealot will damage a cloaked wraith that is in splash range. An archon attacking a floating ebay should clear spider mines in a small area (I haven’t tested to make sure).

Here’s how Skynet does it. Behaviour::createDefaultActions() recognizes the unit types and records possible micro actions to test and execute later:

	if(unitType == BWAPI::UnitTypes::Protoss_Zealot || unitType == BWAPI::UnitTypes::Protoss_Archon)
	{
		firstTargets.insert(BWAPI::UnitTypes::Terran_Siege_Tank_Siege_Mode);
		mMicroActions.push_back(MicroAction(new ArconZealotKillUnDetected(mUnit)));
	}

When the micro action comes up for testing, it runs ArconZealotKillUnDetected::update(), which either fails the test and returns false or succeeds, executes the micro, and returns true. Given the original archon or zealot in mUnit, it looks for the closest enemy unit that meets the conditions:

	Unit lurker;
	int lurkerDistance = std::numeric_limits<int>::max();
	for each(Unit unit in UnitTracker::Instance().selectAllEnemy())
	{
		if(unit->exists() && !unit->isDetected() && (unit->isCloaked() || unit->getType().hasPermanentCloak() || (unit->getType() == BWAPI::UnitTypes::Zerg_Lurker && unit->isBurrowed())))
		{
			int thisDistance = mUnit->getDistance(unit);
			if(thisDistance < lurkerDistance)
			{
				lurkerDistance = thisDistance;
				lurker = unit;
			}
		}
	}

	if(!lurker || lurkerDistance > minDistance)
		return false;

The variable is called “lurker” but it looks for any unit that is undetected and is temporarily cloaked (like a wraith or a unit under an arbiter), permanently cloaked (like a dark templar), or is a burrowed lurker. I skipped over the next bit of code: It checks for the other unit (of type typeToFind) and if it’s close enough puts it in other; if it has a zealot it looks for an archon, or if an archon it looks for a zealot. If it finds the needed other unit, it executes the micro:

	if(typeToFind == BWAPI::UnitTypes::Protoss_Archon)
	{
		if(mUnit != UnitTracker::Instance().selectAllUnits(BWAPI::UnitTypes::Protoss_Zealot).getClosestUnit(lurker))
			return false;

		mUnit->move(lurker->getPosition());
		return true;
	}
	else
	{
		if(otherDistance <= 14)
		{
			mUnit->attack(other);
			return true;
		}
		else
		{
			mUnit->move(other->getPosition());
			return true;
		}
	}

If the unit is a zealot (because typeToFind is archon) and it’s not the closest zealot, it bails so that the target doesn’t get swarmed by zealots—one volunteer at a time, please! The zealot moves to the position of the target. If the unit is the archon, it moves toward the zealot (not toward the target; this corrects for miscoordination that could occur if there’s more than one target around) or, if the zealot is within splash range of the target, attacks the zealot.

The bottom line is that Skynet has the basic skill, but its implementation is not sophisticated. It knows all the possible targets, but can only use a zealot as the sacrifice—pros in a pinch will use anything that works as the sacrifice. It doesn’t know that it could use a reaver instead of an archon. It doesn’t consider whether the sacrifice is worth the damage to the target (you’ll hurt your zealot and the cloaked wraith will probably escape). It does no preparation (keep the archon and zealot near each other if they may need to work together) and minimal coordination (if there’s an observer just out of range and approaching, you shouldn’t have bothered; if you have high templar around maybe you should storm it). If the target evades, Skynet may keep microing both units, leaving them out of the fight. But of course most bots don’t have even the basic skill.

Reminder: Other units that do splash can also kill undetected enemies this way. Reavers, sieged tanks, and lurkers are the prime splash-dealing candidates. Corsairs, firebats, valkyries, and infested terrans also have occasional uses. (Mutalisk bounces are technically not splash and do not hit undetected enemies.) And, of course, spell effects like nuke, irradiate, psionic storm, ensnare, and plague affect units whether they are detected or not. Ensnare and plague are particularly useful because they make cloaked units visible.

Near the end of this 2008 game, Stork fires on his own carriers with corsairs to hold off cloaked wraiths.

Tomorrow: Mine dragging.

squad formation - Nova

Nova is from 2011 and was a mediocre performer at the time, so I figure that a current bot ought to do at least as well. Nova’s page points to source code and to Alberto Uriarte Pérez’s thesis “Multi-Reactive Planning for Real-Time Strategy Games,” among other stuff.

The thesis gives this diagram but is vague in explaining what Nova actually does. At least we can see that the centroid of the squad is important.

[diagram from the thesis: the squad formation, showing the centroid and some circles around it]

The source code knows all, tells all, if we have but time to listen as it talks on and on. In SquadAgent::onFrame() I see this, freely leaving out irrelevant bits:

	switch (_state) {
		...
		case GetPosition:
			if (isSquadBio()) {
			 ...
				checkSpread(); // check to compact squad
			} ...
    }

_state == GetPosition means that the squad’s orders are to move to a position. isSquadBio() is true if the squad contains at least one marine or medic. So checkSpread() is doing the job (I deleted commented-out code below).

void SquadAgent::checkSpread()
{
	// if we are close to our base, don't checkSpread
	if (informationManager->home->getCenter().getApproxDistance(_center) < 30*TILE_SIZE) return;

	double maxSpread = MAX_SPREAD_BASE + _squadUnits.size() + _squadMaxSpread;
	double minSpread = MIN_SPREAD_BASE + _squadMaxSpread;

	if ( _movement == SquadAgent::Normal && _spread >  maxSpread) {
		// get nearest chokepoint
		BWTA::Chokepoint* bestChokepoint = BWTA::getNearestChokepoint(getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit->getPosition());
		// get unit closest to chokepoint
		Position location;
		if (bestChokepoint != NULL) {
			location = getClosestUnitTo(bestChokepoint->getCenter(), UnitTypes::None, true)->_unit->getPosition();
		} else {
			location = getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit->getPosition();
		}
		for(CombatUnitSet::const_iterator i=this->_squadUnits.begin();i!=this->_squadUnits.end();++i) {
			if ((*i)->_unit->getType() == UnitTypes::Terran_Dropship || (*i)->_unit->getType() == UnitTypes::Terran_Science_Vessel) continue; //ignore special micro
			(*i)->_unit->move(location);
		}
		_movement = SquadAgent::Cohesion;
	}
	if ( _movement == SquadAgent::Cohesion && _spread < minSpread ){
		for(CombatUnitSet::const_iterator i=this->_squadUnits.begin();i!=this->_squadUnits.end();++i) {
			if ((*i)->_unit->getType() == UnitTypes::Terran_Dropship || (*i)->_unit->getType() == UnitTypes::Terran_Science_Vessel) continue; //ignore special micro
			(*i)->_unit->attack(_positionTarget);
		}
		_movement = SquadAgent::Normal;
	}

	Unit* closest = getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit;
	if ( (closest->getOrder() == Orders::AttackMove || closest->getOrder() == Orders::AttackTile || closest->getOrder() == Orders::AttackUnit) && _movement == SquadAgent::Cohesion ) {
		_movement = SquadAgent::Normal;
	}

	Broodwar->drawCircleMap(_center.x(), _center.y(), (int) maxSpread, Colors::Red, false);
	Broodwar->drawCircleMap(_center.x(), _center.y(), (int) minSpread, Colors::Green, false);
}

That’s clear enough: a little state machine. It calculates a maximum, a minimum, and an actual spread, as drawn in the diagram from the thesis (the drawCircleMap calls at the end draw the red and green circles). If the squad’s movement is Normal and the actual spread is greater than the maximum, it picks a location to gather the units, moves them there, and sets the movement state to Cohesion. The gather location is the position of the squad unit closest to the chokepoint that is nearest the squad’s most advanced unit. The Cohesion state waits until the spread falls below the minimum, then restores the regular orders and resets the state to Normal. There are exceptions for dropships and science vessels, plus an escape from cohesion mode that depends on the orders of the unit nearest the destination, which I take to cancel cohesion when the squad comes under attack: regular orders are attack-move and cohesion orders are plain move, so cohesion has to be canceled when a fight starts.
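The _center and _spread members are maintained elsewhere and I didn’t chase that code down, but a plausible version (my guess, not Nova’s actual implementation) is the centroid of the squad and the distance from it to the farthest squad member:

	// My guess at how _center and _spread might be computed; not Nova's code.
	// updateSpread() is a hypothetical name.
	void SquadAgent::updateSpread()
	{
		if (_squadUnits.empty()) return;

		// centroid of the squad
		int sumX = 0, sumY = 0;
		for (CombatUnitSet::const_iterator i = _squadUnits.begin(); i != _squadUnits.end(); ++i) {
			sumX += (*i)->_unit->getPosition().x();
			sumY += (*i)->_unit->getPosition().y();
		}
		int n = int(_squadUnits.size());
		_center = Position(sumX / n, sumY / n);

		// spread = distance from the centroid to the farthest squad member
		_spread = 0.0;
		for (CombatUnitSet::const_iterator i = _squadUnits.begin(); i != _squadUnits.end(); ++i) {
			double d = _center.getDistance((*i)->_unit->getPosition());
			if (d > _spread) _spread = d;
		}
	}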

I assume the gather location calculation is fancy for a good reason, but a simpler gather location could be the position of a squad unit that is farther ahead, so that the leaders pause while the stragglers catch up. As written, the calculation looks like it will sometimes send the squad backward to gather up.
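Concretely, that would mean dropping the chokepoint lookup and gathering on whichever squad member is already closest to the destination. A sketch reusing Nova’s own names, not a tested change:

	// Gather on the most advanced squad member; dropships and vessels keep their special micro.
	Position gatherLocation = getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit->getPosition();
	for (CombatUnitSet::const_iterator i = _squadUnits.begin(); i != _squadUnits.end(); ++i) {
		if ((*i)->_unit->getType() == UnitTypes::Terran_Dropship ||
			(*i)->_unit->getType() == UnitTypes::Terran_Science_Vessel) continue;
		(*i)->_unit->move(gatherLocation);
	}
	_movement = SquadAgent::Cohesion;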

I've already seen a related idea with flocking, and potential fields could produce a similar effect. Nova’s contribution is to explicitly gather the squad when it scatters too much.

LetaBot’s build order inference

LetaBot does an elaborate inference of its opponent’s opening build order. A couple posts ago I found no scouting inferences by LetaBot—I missed it. Martin Rooijackers had to tip me off. Thanks! It’s already there in the AIIDE 2015 version. There’s a ton of code in all the bots and I didn’t read most of it, so I may have missed many inferences in other bots too. This one is kind of fancy and breaks my earlier conclusion that no inferences were “deep or clever.”

AIIDE 2015 LetaBot declares terran and zerg build orders that an opponent may play. I imagine that the current version also knows protoss build orders. But in any case, at least for terran and zerg opponents LetaBot can compare what it sees against the different build orders and find the best match.

A build order declaration looks like this. A zerg build order from BuildOrderManager.cpp:

void BuildOrderManager::NinePoolSpeed(){
	BuildOrder newBo;
	newBo.YourRace = BWAPI::Races::Zerg;
	newBo.RaceEnemy = BWAPI::Races::Terran;
	newBo.Creator = "Dennis Soemers Text mining. Manual adjustments by Martin Rooijackers";
	newBo.Name = "9 Pool Speed";

	BOpiece addPiece;
	addPiece.supply = 9;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Spawning_Pool;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 8;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Drone;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 9;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Extractor;
	newBo.BOpieces.push_back( addPiece );
	addPiece.supply = 8;
	addPiece.Building = BWAPI::UnitTypes::Zerg_Overlord;
	newBo.BOpieces.push_back( addPiece );

	newBo.WeakCounterBy.push_back( "2 Rax FE" );
	newBo.StrongCounterTo.push_back("14 CC");
	newBo.StrongCounterTo.push_back("BBS");
	newBo.WeakCounterTo.push_back("1 Rax FE");

	BOs.push_back(newBo);
}

Some counter-builds are declared near the bottom. The AIIDE 2015 version does not use them.

LetaBot draws the build order inference in BuildOrderManager::onFrame(), so the inference can change from moment to moment as more information turns up.

	for(int j=0; j<BOs.size(); j++){
		std::vector<BWAPI::UnitType> EnemyBuildingsCopy;
		for(int i=0; i<InfoMan->EnemyBuildings.size(); i++){
			if( InfoMan->EnemyBuildings[i].building->getType() == BWAPI::UnitTypes::Zerg_Hatchery){
				if( InfoMan->EnemyBase != BWAPI::TilePositions::Unknown ){
					if( InfoMan->EnemyBuildings[i].position.getDistance( BWAPI::Position(InfoMan->EnemyBase) ) > 4*32 ){
						EnemyBuildingsCopy.push_back( InfoMan->EnemyBuildings[i].type );
					}
				}
			} else {
				EnemyBuildingsCopy.push_back( InfoMan->EnemyBuildings[i].type );
			}
		}

		int BuildAvailable = 0;
		int BuildMissing = 0;
		for(int k=0; k<BOs[j].BOpieces.size(); k++){
			if( BOs[j].BOpieces[k].Building.isBuilding() ){
				bool isThere = false;
				for( int m=0; m<EnemyBuildingsCopy.size(); m++){
					if( EnemyBuildingsCopy[m] == BOs[j].BOpieces[k].Building ){
						isThere = true;
						EnemyBuildingsCopy.erase( EnemyBuildingsCopy.begin() + m);
						break;
					}
				}
				if( isThere ){
					BuildAvailable++;
				} else {
					BuildMissing++;
				}
			}
		}

		int score = BuildAvailable - BuildMissing;
		if( BestScore < score){
			BestScore = score;
			EnemyBo = BOs[j].Name;
		}
	}

Compute a score for each build order. Add to the score (BuildAvailable) for each building in the build order that LetaBot has seen, and subtract from the score (BuildMissing) for each building it has not seen. Only pieces that are buildings count, so the drone and overlord in the declaration above are ignored, and any hatchery close to the enemy’s main is filtered out so the starting hatchery isn’t mistaken for part of the build. The enemy is (LetaBot concludes) following the build order that gets the highest score. In case of ties, the first high score gets it.

It’s simple and will probably jump to conclusions sometimes, but if the build orders are chosen and ordered carefully, I think it can avoid the dangerous conclusions that would make LetaBot do something foolish.

In the AIIDE 2015 version, LetaBot uses the inferred build order to decide how many marines to train for its marine-SCV rush, but not for any other purpose. Openings that give zerg a stronger early army need more marines. From TwoRaxSCVRush.cpp:

	if( Broodwar->enemy()->getRace() == BWAPI::Races::Zerg){
		//get marines to build based on BO
		if( BOMan->EnemyBo == "9 Pool"){
			MarinesToBuild = 8;
		}
		//get marines to build based on BO
		if( BOMan->EnemyBo == "OverPool"){
			MarinesToBuild = 6;
		}
		//get marines to build based on BO
		if( BOMan->EnemyBo == "12 Hatch"){
			MarinesToBuild = 6;
		}
		//get marines to build based on BO
		if( BOMan->EnemyBo == "9 Pool Speed"){
			MarinesToBuild = 10;
		}
	}
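The repeated ifs could also be written as a small lookup table. A sketch, assuming the same members (BOMan, MarinesToBuild) are in scope; this is my rewrite, not LetaBot’s code:

	// Table-driven version of the same mapping. Needs <map> and <string>.
	std::map<std::string, int> marinesByBuild;
	marinesByBuild["9 Pool"] = 8;
	marinesByBuild["OverPool"] = 6;
	marinesByBuild["12 Hatch"] = 6;
	marinesByBuild["9 Pool Speed"] = 10;

	if (Broodwar->enemy()->getRace() == BWAPI::Races::Zerg) {
		std::map<std::string, int>::const_iterator it = marinesByBuild.find(BOMan->EnemyBo);
		if (it != marinesByBuild.end()) {
			MarinesToBuild = it->second;
		}
	}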

The latest version of LetaBot surely has many more uses for the inferred build order.

Bottom line: It looks to me like an excellent idea for adapting your opening strategy to the opponent’s, as long as the opponent is playing a recognized good strategy. If the opponent plays a bad strategy, why worry? Against zerg there should be few problems caused by unscouted buildings. Even against terran or protoss, I have the feeling that there are ways to manage the uncertainty from unscouted buildings or ambiguous build orders. For example, for build orders that branch you can declare one build order up to the branching point and further builds along each possible branch.
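To illustrate the branching idea, here is a sketch in the style of LetaBot’s declarations. The build names, supply numbers, helper function, and method names are all made up, and fields like RaceEnemy, Creator, and the counter lists are left out to keep it short:

	// My sketch of declaring a shared prefix plus one branch; not LetaBot's code.
	static void addTwelvePoolPrefix(BuildOrder& bo)   // pieces up to the branch point
	{
		BOpiece piece;
		piece.supply = 12;
		piece.Building = BWAPI::UnitTypes::Zerg_Spawning_Pool;
		bo.BOpieces.push_back(piece);
	}

	void BuildOrderManager::TwelvePool()              // the prefix as its own build order
	{
		BuildOrder newBo;
		newBo.YourRace = BWAPI::Races::Zerg;
		newBo.Name = "12 Pool";
		addTwelvePoolPrefix(newBo);
		BOs.push_back(newBo);
	}

	void BuildOrderManager::TwelvePoolLair()          // one branch: the prefix plus a lair
	{
		BuildOrder newBo;
		newBo.YourRace = BWAPI::Races::Zerg;
		newBo.Name = "12 Pool into Lair";
		addTwelvePoolPrefix(newBo);

		BOpiece piece;
		piece.supply = 18;
		piece.Building = BWAPI::UnitTypes::Zerg_Lair;
		newBo.BOpieces.push_back(piece);
		BOs.push_back(newBo);
	}

With the scoring loop above, the prefix wins until a branch-specific building shows up, and then the branch declaration takes over.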

Who’s coming in with the next tip-off?

Update: If you catch LetaBot playing on the SSCAIT live stream you may be able to watch the build order inference in action. LetaBot reports its conclusion at the top center of the screen; at the start of the game it is “Unknown”. For some reason it doesn’t seem to show up every game (maybe only in games versus zerg?). The conclusion can be way off when the opponent plays something LetaBot doesn’t know, like “1 Hatch Lurker” when it has seen two hatcheries and no gas. (I guess it should subtract from the score when it sees buildings that fall outside the build order, or add some other bit of cleverness.)
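Since EnemyBuildingsCopy holds whatever was left unmatched after the scoring loop, that penalty would be a one-line change. My sketch, not LetaBot’s code:

	// Penalize enemy buildings that the candidate build order does not explain.
	int score = BuildAvailable - BuildMissing - int(EnemyBuildingsCopy.size());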