
Skynet - arbiter control

Arbiters are tough to use well. You want to maneuver your arbiter to cloak as many of your units as possible while staying safe. You want to stasis or recall at critical times and places. Occasionally you want to do damage with the pea-shooter. The goals pull in different directions, and you have to decide what’s best.

Here’s how Skynet decides. Skynet has two arbiter skills: It knows how to cloak units efficiently and how to stasis. Skynet does not use recall, and it (understandably) doesn’t put any emphasis on targeting enemies with the puny gun.

Arbiter control is in ArbiterAction::update(), which, like the other micro actions, is set up by Behaviour::createDefaultActions(). First it considers stasis:

	if(mUnit->getEnergy() >= 100 && BWAPI::Broodwar->self()->hasResearched(BWAPI::TechTypes::Stasis_Field))
	{
		const int stasisSize = 48;

		bool stasisUrgently = false;
		if(UnitInformation::Instance().getUnitsTargetting(mUnit).size() >= 6 || (UnitInformation::Instance().getUnitsTargetting(mUnit).size() >= 1 && mUnit->totalHitPointFraction() < 0.3))
			stasisUrgently = true;

First, if stasis is possible and the arbiter is in danger of being lost, Skynet sets stasisUrgently to true.

		UnitGroup stasisChoices;
		for each(Unit enemy in UnitTracker::Instance().selectAllEnemy())
		{
			if(!UnitHelper::isArmyUnit(enemy->getType()))
				continue;

			if(enemy->isUnderStorm() || enemy->isStasised())
				continue;

			const int distance = mUnit->getDistance(enemy);
			if(distance > 250 || distance < stasisSize)
				continue;

			stasisChoices.insert(enemy);
		}

Then it comes up with a list of candidate enemies to stasis. Enemies under psionic storm are not candidates—let them suffer!

		if(stasisChoices.size() > 4 || (!stasisChoices.empty() && stasisUrgently))
		{
			UnitGroup stasisTargets = stasisChoices.getBestFittingToCircle(stasisSize);

			if(stasisTargets.size() > 4 || (!stasisTargets.empty() && stasisUrgently))
			{
				const Position &stasisLocation = stasisTargets.getCenter();
				if(mUnit->getDistance(stasisLocation) <= BWAPI::TechTypes::Stasis_Field.getWeapon().maxRange())
				{
					mUnit->useTech(BWAPI::TechTypes::Stasis_Field, stasisLocation);
					LatencyTracker::Instance().placingStasis(mUnit, stasisLocation);
					return true;
				}
				else
				{
					mUnit->move(stasisLocation);
					return true;
				}
			}
		}
	}

And finally Skynet decides whether and where to stasis. If it can stasis at least 5 enemy units, or at least 1 unit and stasisUrgently is set, then it does. UnitGroup::getBestFittingToCircle() finds a circle of the given size which covers as many units in the UnitGroup as possible, and returns the smaller UnitGroup which it covers.
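The post doesn’t show getBestFittingToCircle() itself, but the idea is easy to sketch: try each unit’s position as a candidate circle center and keep the best-covered subset. Everything below (the Pos struct, the function body) is my own illustration of the idea, not Skynet’s actual code, which may well be smarter about choosing the center.

```cpp
#include <cmath>
#include <vector>

struct Pos { double x, y; };

static double dist(const Pos &a, const Pos &b)
{
	return std::hypot(a.x - b.x, a.y - b.y);
}

// Greedy guess at getBestFittingToCircle(): try each unit's position
// as the circle center and return the subset covered by the best one.
std::vector<Pos> bestFittingToCircle(const std::vector<Pos> &units, double radius)
{
	std::vector<Pos> best;
	for (const Pos &center : units)
	{
		std::vector<Pos> covered;
		for (const Pos &u : units)
			if (dist(center, u) <= radius)
				covered.push_back(u);
		if (covered.size() > best.size())
			best = covered;
	}
	return best;
}
```

A refinement would recenter the circle on the covered group’s centroid and re-check coverage, but the greedy version already captures the “cover as many units as possible” behavior the text describes.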

It’s a simple heuristic. An arbiter will happily stasis 5 vultures uselessly when it reaches stasis energy after the army it was covering is destroyed by tanks and vultures. The “right” way to do this would be with a pay-me-now-or-pay-me-later tradeoff that compares the situation now with possible future situations, which would of course be far more complicated. And, since that’s not hard enough yet, it should also consider the effect on the battle: Stasis on a ramp to block reinforcements? Stasis the tanks that are doing the most damage, or the vessel that is detecting the cloaked army? And so on.

Next is maneuvering the arbiter to cloak units:

	UnitGroup unitsToCloak;
	for each(Unit unit in squadUnitGroup)
	{
		if(!UnitHelper::isArmyUnit(unit->getType()))
			continue;

		if(unit->getType() == BWAPI::UnitTypes::Protoss_Arbiter || unit->getType().isBuilding())
			continue;

		if(mUnit->getDistance(unit) > 250)
			continue;

		unitsToCloak.insert(unit);
	}

	if(!unitsToCloak.empty())
	{
		unitsToCloak = unitsToCloak.getBestFittingToCircle(136);
		if(!unitsToCloak.empty())
		{
			Position cloakLocation = unitsToCloak.getCenter();
			if(mUnit->getDistance(cloakLocation) > 110)
			{
				mUnit->move(cloakLocation);
				return true;
			}
		}
	}

Candidate units to cloak are basically military units in the squadUnitGroup which can be cloaked and are near enough. Skynet never tries to protect probes or observers with the cloaking field (UnitHelper::isArmyUnit() returns false for both), but it will try to cloak dark templar that are already cloaked (a simple omission from the above code and trivial to fix) and units that are already covered by another arbiter (only a little harder to fix). It does the getBestFittingToCircle() thing to find the largest circle of units that it can cloak, and moves toward the circle’s center if it’s not close enough. If it’s already close enough then the micro action does not occur, which I assume frees the arbiter to fire its gun when possible, flee danger within limits, and so on.

Tomorrow: Avoiding splash damage, finishing the series.

Skynet - mine dragging

Spider mines do a ton of splash damage when they explode, and they don’t discriminate. Everything in splash range suffers, enemy or friendly. So it’s popular to intentionally trigger enemy mines while running up to enemy units, dragging the mine into the enemy to cause damage. Mines can sometimes follow a unit for quite a distance before going off. Protoss usually drags mines with zealots, though dark templar are also good. Zerg usually uses speedy zerglings in small groups, because they’re cheap and fast. Terran can sometimes trigger enemy mines with drops, but doesn’t have an appropriate unit to drag mines otherwise.

Skynet knows how to drag mines with a zealot. It uses the same system of micro actions that I touched on yesterday. In Behaviour::createDefaultActions() it records a possible mine drag micro action for every zealot:

	if(unitType == BWAPI::UnitTypes::Protoss_Zealot)
		mMicroActions.push_back(MicroAction(new MineDragAction(mUnit)));

By the way, the micro actions are evaluated in Behaviour::update(), which checks them in order. Each unit can only execute one micro action at a time, and the first action to return true is what happens. Skynet’s only prioritization is to record the most important micro actions first in createDefaultActions(). Mine dragging is recorded near the end, as a low-priority action. Yesterday’s archon-zealot undetected unit attack is higher priority.
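The priority scheme described above amounts to a first-true-wins loop. Here is a minimal sketch of that shape; the names (MicroActionSketch, runMicroActions) are illustrative, not Skynet’s.

```cpp
#include <functional>
#include <vector>

// Micro actions stored in priority order; the first one whose update()
// returns true acts for this frame, and the rest are skipped.
struct MicroActionSketch
{
	std::function<bool()> update;  // returns true if it issued an order
};

bool runMicroActions(const std::vector<MicroActionSketch> &actions)
{
	for (const MicroActionSketch &action : actions)
		if (action.update())
			return true;   // unit acted; lower-priority actions don't run
	return false;          // nothing fired; default behavior applies
}
```

This is why recording order matters: an action recorded early in createDefaultActions() can starve everything after it whenever its conditions hold.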

Here is the entirety of MineDragAction::update():

bool MineDragAction::update(const Goal &squadGoal, const UnitGroup &squadUnitGroup)
{
	for each(Unit unit in UnitInformation::Instance().getUnitsTargetting(mUnit))
	{
		if(unit->getType() == BWAPI::UnitTypes::Terran_Vulture_Spider_Mine)
		{
			int distance = std::numeric_limits<int>::max();
			Unit closestUnit;

			for each(Unit enemyUnit in UnitTracker::Instance().selectAllEnemy())
			{
				if(enemyUnit->getType().isFlyer() || enemyUnit->isLifted() || enemyUnit->getType().isBuilding() || enemyUnit->getType() == BWAPI::UnitTypes::Terran_Vulture_Spider_Mine)
					continue;

				int thisDistance = mUnit->getDistance(enemyUnit);
				if(thisDistance < distance)
				{
					distance = thisDistance;
					closestUnit = enemyUnit;
				}
			}
			
			if(closestUnit && distance < 32*5)
			{
				mUnit->attack(closestUnit);
				return true;
			}
		}
	}

	return false;
}

This seems to only notice spider mines which have already popped up and are targeting the current zealot. If one is found, then the code checks for suitable enemy units (not stuff in the air, for example) that are close enough. If any is found, then the zealot targets the nearest for attack, end of story.

That’s basic mine dragging. A stronger version would also consider detected or remembered mines: If a mine in the ground is detected or remembered to be near valuable enemies, then check whether it can be activated and dragged into the enemies. You can remember a mine location if you detected it in the past, or if you saw it being laid or moving and reburrowing. That’s more complicated, but human players do it as a matter of course. Humans will even try to drag mines that they expect or hope are there, without being sure.
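The remembered-mine bookkeeping for the stronger version could be as simple as a set of tile positions: record a mine when it is detected or seen burrowing, forget it when the tile is seen clear under detection. This MineMemory class is a hypothetical sketch of mine, not anything in Skynet.

```cpp
#include <cstddef>
#include <set>
#include <utility>

// Hypothetical remembered-mine store. Mines don't move once laid, so a
// remembered position stays valid until we see the tile empty while we
// have detection there.
class MineMemory
{
public:
	void sawMine(int x, int y)                { mMines.insert({x, y}); }
	void sawTileClear(int x, int y)           { mMines.erase({x, y}); }
	bool mineRememberedAt(int x, int y) const { return mMines.count({x, y}) != 0; }
	std::size_t size() const                  { return mMines.size(); }

private:
	std::set<std::pair<int, int>> mMines;
};
```

With a store like this, a drag action could check remembered mines near valuable enemies instead of waiting for a mine to pop up and target the zealot first.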

Tomorrow: Arbiter control.

Skynet - killing undetected units

Archons do splash damage that affects all enemy units, including undetected units. That means that you can use an archon to, for example, kill a burrowed lurker that you can’t detect, by attacking one of your own units. The usual way is to park a sacrificial zealot on top of the lurker and attack the zealot with the archon; the archon will do full damage to the zealot and splash damage to the lurker.

Archon splash affects both ground and air. An archon attacking a zealot will damage a cloaked wraith that is in splash range. An archon attacking a floating ebay should clear spider mines in a small area (I haven’t tested to make sure).

Here’s how Skynet does it. Behaviour::createDefaultActions() recognizes the unit types and records possible micro actions to test and execute later:

	if(unitType == BWAPI::UnitTypes::Protoss_Zealot || unitType == BWAPI::UnitTypes::Protoss_Archon)
	{
		firstTargets.insert(BWAPI::UnitTypes::Terran_Siege_Tank_Siege_Mode);
		mMicroActions.push_back(MicroAction(new ArconZealotKillUnDetected(mUnit)));
	}

When the micro action comes up for testing, it runs ArconZealotKillUnDetected::update(), which either fails the test and returns false or succeeds, executes the micro, and returns true. Given the original archon or zealot in mUnit, it looks for the closest enemy unit that meets the conditions:

	Unit lurker;
	int lurkerDistance = std::numeric_limits<int>::max();
	for each(Unit unit in UnitTracker::Instance().selectAllEnemy())
	{
		if(unit->exists() && !unit->isDetected() && (unit->isCloaked() || unit->getType().hasPermanentCloak() || (unit->getType() == BWAPI::UnitTypes::Zerg_Lurker && unit->isBurrowed())))
		{
			int thisDistance = mUnit->getDistance(unit);
			if(thisDistance < lurkerDistance)
			{
				lurkerDistance = thisDistance;
				lurker = unit;
			}
		}
	}

	if(!lurker || lurkerDistance > minDistance)
		return false;

The variable is called “lurker” but it looks for any unit that is undetected and is temporarily cloaked (like a wraith or a unit under an arbiter), permanently cloaked (like a dark templar), or is a burrowed lurker. I skipped over the next bit of code: It checks for the other unit (of type typeToFind) and, if it’s close enough, puts it in other; if mUnit is a zealot it looks for an archon, and if an archon, a zealot. If it finds the needed other unit, it executes the micro:

	if(typeToFind == BWAPI::UnitTypes::Protoss_Archon)
	{
		if(mUnit != UnitTracker::Instance().selectAllUnits(BWAPI::UnitTypes::Protoss_Zealot).getClosestUnit(lurker))
			return false;

		mUnit->move(lurker->getPosition());
		return true;
	}
	else
	{
		if(otherDistance <= 14)
		{
			mUnit->attack(other);
			return true;
		}
		else
		{
			mUnit->move(other->getPosition());
			return true;
		}
	}

If the unit is a zealot (because typeToFind is archon) and it’s not the closest zealot, it bails so that the target doesn’t get swarmed by zealots—one volunteer at a time, please! The zealot moves to the position of the target. If the unit is the archon, it moves toward the zealot (not toward the target; this corrects for miscoordination that could occur if there’s more than one target around) or, if the zealot is within splash range of the target, attacks the zealot.

The bottom line is that Skynet has the basic skill, but its implementation is not sophisticated. It knows all the possible targets, but can only use a zealot as the sacrifice—pros in a pinch will use anything that works as the sacrifice. It doesn’t know that it could use a reaver instead of an archon. It doesn’t consider whether the sacrifice is worth the damage to the target (you’ll hurt your zealot and the cloaked wraith will probably escape). It does no preparation (keep the archon and zealot near each other if they may need to work together) and minimal coordination (if there’s an observer just out of range and approaching, you shouldn’t have bothered; if you have high templar around maybe you should storm it). If the target evades, Skynet may keep microing both units, leaving them out of the fight. But of course most bots don’t have even the basic skill.

Reminder: Other units that do splash can also kill undetected enemies this way. Reavers, sieged tanks, and lurkers are the prime splash-dealing candidates. Corsairs, firebats, valkyries, and infested terrans also have occasional uses. (Mutalisk bounces are technically not splash and do not hit undetected enemies.) And, of course, spell effects like nuke, irradiate, psionic storm, ensnare, and plague affect units whether they are detected or not. Ensnare and plague are particularly useful because they make cloaked units visible.

Near the end of this 2008 game, Stork fires on his own carriers with corsairs to hold off cloaked wraiths.

Tomorrow: Mine dragging.

Skynet skills

I meant to write another post or two about how particular bots keep their squads in formation, but after grepping source code for every keyword I could think of, I didn’t turn any up. We already know that OpprimoBot keeps its squads compact with a flocking algorithm, so my search was definitely not comprehensive... but I still gave up.

Instead I’ll write a short series about interesting skills coded into Skynet. Of bots whose source I have, the three candidates for Bot With the Most Hand-Coded Skills are ICEbot, Skynet, and Tscmoo, and Skynet is the easiest to read.

As an appetizer, here is a comment from TaskManagerClass::getPriorityList, which sorts tasks related to build order, research and production into priority order:

	// If I am vulnerable to counter attack / have no map control, place defense higher
	// If I am not behind on army size but its not safe to attack, tech
	// If I am not behind on army size but its safe to attack, produce
	// If I'm behind on army supply, produce

The ideas in the comment are not implemented. Skynet is already one of the cleverest bots in adapting its build to the game situation. Imagine how much cleverer it could be if Andrew Smith had had enough time to implement these ideas.

Tomorrow: Killing undetected lurkers.

squad formation - Nova

Nova is from 2011 and was a mediocre performer at the time, so I figure that a current bot ought to do at least as well. Nova’s page points to source code and to Alberto Uriarte Pérez’s thesis “Multi-Reactive Planning for Real-Time Strategy Games,” among other stuff.

The thesis gives this diagram but is vague in explaining what Nova actually does. At least we can see that the centroid of the squad is important.

[diagram: squad formation showing centroid and some circles around it]

The source code knows all, tells all, if we have but time to listen as it talks on and on. In SquadAgent::onFrame() I see this, freely leaving out irrelevant bits:

	switch (_state) {
		...
		case GetPosition:
			if (isSquadBio()) {
			 ...
				checkSpread(); // check to compact squad
			} ...
	}

_state == GetPosition means that the squad’s orders are to move to a position. isSquadBio() is true if the squad contains at least one marine or medic. So checkSpread() is doing the job (I deleted commented-out code below).

void SquadAgent::checkSpread()
{
	// if we are close to our base, don't checkSpread
	if (informationManager->home->getCenter().getApproxDistance(_center) < 30*TILE_SIZE) return;

	double maxSpread = MAX_SPREAD_BASE + _squadUnits.size() + _squadMaxSpread;
	double minSpread = MIN_SPREAD_BASE + _squadMaxSpread;

	if ( _movement == SquadAgent::Normal && _spread >  maxSpread) {
		// get nearest chokepoint
		BWTA::Chokepoint* bestChokepoint = BWTA::getNearestChokepoint(getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit->getPosition());
		// get unit closest to chokepoint
		Position location;
		if (bestChokepoint != NULL) {
			location = getClosestUnitTo(bestChokepoint->getCenter(), UnitTypes::None, true)->_unit->getPosition();
		} else {
			location = getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit->getPosition();
		}
		for(CombatUnitSet::const_iterator i=this->_squadUnits.begin();i!=this->_squadUnits.end();++i) {
			if ((*i)->_unit->getType() == UnitTypes::Terran_Dropship || (*i)->_unit->getType() == UnitTypes::Terran_Science_Vessel) continue; //ignore special micro
			(*i)->_unit->move(location);
		}
		_movement = SquadAgent::Cohesion;
	}
	if ( _movement == SquadAgent::Cohesion && _spread < minSpread ){
		for(CombatUnitSet::const_iterator i=this->_squadUnits.begin();i!=this->_squadUnits.end();++i) {
			if ((*i)->_unit->getType() == UnitTypes::Terran_Dropship || (*i)->_unit->getType() == UnitTypes::Terran_Science_Vessel) continue; //ignore special micro
			(*i)->_unit->attack(_positionTarget);
		}
		_movement = SquadAgent::Normal;
	}

	Unit* closest = getClosestUnitTo(_positionTarget, UnitTypes::None, true)->_unit;
	if ( (closest->getOrder() == Orders::AttackMove || closest->getOrder() == Orders::AttackTile || closest->getOrder() == Orders::AttackUnit) && _movement == SquadAgent::Cohesion ) {
		_movement = SquadAgent::Normal;
	}

	Broodwar->drawCircleMap(_center.x(), _center.y(), (int) maxSpread, Colors::Red, false);
	Broodwar->drawCircleMap(_center.x(), _center.y(), (int) minSpread, Colors::Green, false);
}

That’s clear enough, a little state machine. It calculates a maximum, minimum, and actual spread as drawn in the diagram from the thesis. If the squad’s movement is Normal and the actual spread is greater than the maximum, then figure out a location to gather the units and move them there, and set the movement state to Cohesion. The gather location is the location of the squad unit which is closest to the chokepoint which the most advanced unit is closest to. The cohesion state waits until the spread falls below the minimum, when it restores the regular orders and resets the state to Normal. There are exceptions for dropships and vessels, and an escape from cohesion mode, depending on the orders of the nearest unit to the destination, which I assume cancels cohesion mode when the squad comes under attack. Regular orders are to attack-move and cohesion orders are to move, so you have to cancel cohesion mode when under attack.
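The core of that state machine is hysteresis: two thresholds instead of one, so the squad doesn’t flip-flop between gathering and advancing at a single boundary. Stripped of the BWAPI details, a sketch of the transition logic looks like this (names are mine, modeled on Nova’s):

```cpp
// Nova-style cohesion hysteresis: Normal until spread exceeds the
// maximum, then Cohesion until it falls back below the minimum.
enum class Movement { Normal, Cohesion };

Movement updateMovement(Movement state, double spread,
                        double minSpread, double maxSpread)
{
	if (state == Movement::Normal && spread > maxSpread)
		return Movement::Cohesion;   // too strung out: gather up
	if (state == Movement::Cohesion && spread < minSpread)
		return Movement::Normal;     // compact again: resume attack-move
	return state;
}
```

Between the two thresholds the state is sticky, which is exactly what prevents the squad from oscillating between move-to-gather and attack-move orders every frame.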

I assume that the gather location calculation is fancy for a good reason, but maybe a simpler gather location could be some squad unit which is farther ahead. Then the effect would be to make the leaders pause while stragglers caught up. This calculation looks like it will sometimes have the squad moving backward to gather up.

I've already seen a related idea with flocking, and you could implement other related ideas with potential fields too. Nova’s contribution is explicit gathering of the squad when it scatters too much.

blocking one path of two

Here’s a perfect example of why path analysis and tactical analysis should go together.

This map has two paths through the center. Johan Kayser’s bot has thoroughly blocked one path. WuliBot knew better than to confront the bunker farm head on, but never seemed to conceive the idea of bypassing it by taking the other path through the center. WuliBot could have walked unopposed into the terran base.

Instead, terran eventually won.

keep your squads together

Left to itself, a group of units sent across the map will tend to string out into a line, often with gaps. If that line runs headlong into waiting enemies, it will be eaten alive. The enemy “crossed your T,” as in battleship tactics. The whole enemy force can fire, while only the front units of the approaching line can engage, so you get defeated in detail.

[tactics diagram: crossing the T]

Some bots keep their squads in compact formation as they move. They stay ready to engage any enemy that may suddenly appear. Other bots don’t care. They let their units string out and take the risk of being caught off guard.

AIUR is an example of a bot that doesn’t care about formation. It will happily attack in single-file and be ground up like sausage. In fact, I think that’s one of the main ways that it loses won games: Instead of gathering up its superior force and attacking with a coherent army, it fights with dribs and drabs as they arrive.

A number of terran bots that train marines and medics let the two get separated. The marines and medics form up into a squad and set out for a distant destination. Along the way, the marines find something to shoot at and stop to do that, while the medics continue on to the original destination. Separated, of course, they all die.

Exceptions. Letting units string out into a line is not always bad. If you’re in a hurry, then run run run at full speed. Staying compact would mean slowing down.

The line formation can be good.

[tactics diagram: equals sign]

Run your line of units parallel to the enemy formation, and when you’re in position, converge on them. In human play, this tactic is common with zerglings and zealots, whose melee attacks are most effective when they approach in shallow formation across the breadth of the enemy army. I haven’t seen any bot do this skillfully, even though it’s basic micro. For example, lining up zealots is key to breaking a terran tank push.

Tomorrow: The old terran bot Nova keeps its squads compact while moving. I’ll look into that.

the maximum shooting distance

I like Johan Hagelbäck’s writings (see this post), but one point bugged me: he kept repeating the idea that units should engage at their maximum shooting distance. He talks about various kinds of unit micro that you can achieve with potential fields, and most of it is simplified from the ideal, but this simplification in particular nagged at me.

Keeping near your maximum shooting distance is often a good idea. You’re close enough to harm the enemy, but being at the maximum range increases your options (you could run away or kite out of range during cooldown) and decreases the enemy’s (maybe they have to approach to hit you at all). If you have only one behavior, then staying near maximum shooting distance is a good one.

But look at the giant exceptions:

• If you have a group of ranged units, you want as many as possible to fire. If there are too many to fit into a single arc (you have a large number, or you’re fighting in a narrow space), then some of them must come closer. If you’re going to stand and fight, then bring the most possible firepower to bear. When your reinforcements join the battle, they have to be able to reach the front line—it’s a kind of coordination and hard to implement, but it’s critical to good micro. The front line can make gaps to allow reinforcements through (a skill also useful for reaver scarabs), or can move forward to let reinforcements into range.

• Sieged tanks can shoot farther than they can see, so you have to consider vision range too.

• Sieged tanks have a minimum range, so if you’re attacking them you may want to snuggle up. Also, if the enemy has sieged tanks on the field, any enemy unit you snuggle up to is at risk of taking splash damage from its own side. Few bots seem to know.

• If you’re blasting down static defense (cannons, say) with siege tanks, don’t siege just inside tank range like most bots, siege just outside cannon range. You’ll probably be able to destroy more cannons before you have to unsiege and move, so you’ll clear the cannons faster and be more likely to win overall (carriers may be coming, yes?). This advice will usually be right when the enemy front line is static defense, but might be wrong if the enemy has other units in wait. Humans commonly siege tanks at different ranges, partly to save micro but also because it’s more robust against surprises from different directions.

• If you have ranged units and there’s an obstacle between the armies, then your only reason to keep your distance is so that fewer enemies can shoot at you. Dragoons on a cliff can take potshots all day at zerglings down below, and the same for guardians uphill from marines.

• If you’re going to win the battle, then staying at maximum shooting distance gives the enemy better chances to escape. If you can’t surround, then keep close so you can pick off more escapees.

In full generality, the distance at which one unit should seek to engage another (given that you want to) depends on the range, damage, cooldown, speed, and acceleration of both units, plus the terrain, with consideration of special abilities (psionic storm, mine laying, medic healing...), and if you want to be utterly thorough, the full game situation too. For most unit pairs the basic calculation is easy enough. For example, if you have more speed and more range (speed vulture versus slow zealot, ranged dragoon versus unupgraded marine), who cares how close you get, as long as you don’t get hurt?
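To make the “basic calculation is easy enough” claim concrete, here is a toy rule for one unit pair that uses only ranges, speeds, and a minimum range. The function and its thresholds are my own illustration; a real implementation would also weigh cooldown, acceleration, and terrain.

```cpp
#include <algorithm>

// Toy pairwise engagement distance. Distances in pixels, speeds in
// pixels per frame; enemyMinRange > 0 marks a sieged-tank-like unit.
int desiredEngageDistance(int myRange, double mySpeed,
                          int enemyRange, double enemySpeed,
                          int enemyMinRange = 0)
{
	if (enemyMinRange > 0)
		return std::min(myRange, enemyMinRange - 1); // snuggle inside minimum range
	if (myRange >= enemyRange && mySpeed > enemySpeed)
		return myRange;                              // kite at our maximum range
	return std::max(myRange / 2, 1);                 // otherwise press close so
	                                                 // more units can bring fire
}
```

A speed vulture against a slow zealot gets the kiting branch; anything against a sieged tank gets the snuggle branch, matching the exceptions listed above.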

For army on army, the formation and distance you want depend on the sizes and unit mixes of both armies, plus the terrain, plus the game situation. The maximum shooting distance is a good base to start at, but you’ll need to expand beyond to win.

search versus knowledge

I want to elaborate on something I passed over lightly yesterday. Making decisions in a complicated situation needs both search and knowledge, and you can trade off how much you rely on each. You can search few options with knowledgeable selection and evaluation of the options, or you can search many options with less understanding of each. The left column opposes the right column:

	search          knowledge
	planning        reaction
	processor time  data space

Humans are slow (neural operations in milliseconds) and have to act in real time, so humans rely heavily on knowledge for fast reactions. In a fast-moving Starcraft game, humans are primarily reactive and knowledge-driven (do what you practiced), and don’t have time to think through plans in detail.

Computers are fast (processor operations in nanoseconds), so computers should use more search than humans. (Also our learning algorithms are slower than human learning, at least in terms of data volume, so gathering knowledge is harder.) AI has a history of discovering how well search works. It makes sense.

Starcraft is real-time, so Starcraft bots should use more knowledge and less search than (say) chess or go programs that have time to think. On general principles, Starcraft bots should be more search-oriented than human Starcraft players, but more knowledge-oriented than chess programs.

Now consider the abstraction hierarchy, strategy, tactics, unit control. Unit control decisions happen the most often (since there are many units and each may need new orders frequently), so unit control has the strongest real-time constraint. Unit control, by the same general principles, should be as reactive and knowledge-based as possible, to free up time (allowing faster reactions and more time to think about tactics and strategy). Tactical and strategy decisions don’t have to be made as often and are based on a smaller amount of more abstract information, which favors searches that compare alternatives.

That doesn’t mean you have to do it that way, it’s just what the high-level view of the situation suggests. A strategy catalog (a knowledge base of strategies) makes perfect sense to me. Depending on what you count as “strategy” it may be that you can encode all your openings and counters and deceptions as knowledge and make strategy decisions mostly by lookup (where the lookup index includes an opponent model and a dash of randomness). Or maybe not, I don’t know! Tactics seem more varied and call more loudly for comparison of alternatives by search and evaluation.

pathing 9 - the Pathfinder Almighty

I’ll outline vaguely more or less how a bot could handle movement in all circumstances, taking everything into account. You knew from the start this wasn’t going to be anything practical, right? PerfectBot might have this kind of system; it’s not something to sit down and implement, it’s a destination to approach over time (if we can find the path). “Pathfinder Almighty” is of course a funny name, since if you manage to create it, it won’t be either of those things. It will be a tactical analysis system that can sometimes achieve its goals!

design wishes

  • Understand blocking by buildings.
    • Don’t lock yourself in by mistake.
    • Know what to do about your own and enemy walls.
    • Know about narrowing chokes to restrict runbys.
    • Know how buildings increase path lengths.
    • Know how many units still fit in the space (for moving groups).
  • Understand blocking by units.
    • Pass through narrow chokes smoothly (at least don’t get clogged up).
    • Don’t (say) siege on a ramp if other units want to move up.
    • Use units to physically block (say) zealots from reaching tanks.
    • Handle units in stasis, or under maelstrom, or locked down.
  • Account for possible enemy vision, enemy fire, enemy movement.
  • Account for uncertainty.
  • Minimum effort. Spend time planning only what benefits from being planned.

As I mentioned last time, uncertainty is small in the immediate future and grows with time.

I haven’t noticed any bot that seems to understand blocking with units. It would allow a lot of valuable skills, such as:

  • probe body-blocks the scouting SCV to slow it so the zealot can hit
  • block the ramp to delay attacks
    • with zealots until dark templar defense is up
    • with dark templar
    • with eggs
    • with stasis or maelstrom or lockdown
  • medics or SCVs block zerglings from reaching marines
  • vultures lay mines among dragoons and physically block their escape
  • vultures physically block zealots from reaching tanks

modeling the world

By the minimum effort and uncertainty management principles, we want to model the immediate situation (where you start from now, the stuff you can see) in detail and the distant situation (where you want to end up) only roughly, and probably with measures of uncertainty. “Enemy army is not in sight, might be close, but is most likely guarding their choke.”

So model the enemy locations that you can see as units in exact positions, and those in distant or future places maybe as an influence map or a risk map.
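One cheap way to represent that rough, uncertain picture is a coarse risk map: each observed enemy adds risk to its cell, and a smoothing pass bleeds risk into neighboring cells to stand in for uncertainty about where the enemy may move. This RiskMap is an illustrative sketch, not taken from any bot.

```cpp
#include <vector>

// Coarse grid of risk values; diffuse() spreads a fraction of each
// cell's risk to its 4-neighbors, modeling growing positional uncertainty.
struct RiskMap
{
	int width, height;
	std::vector<double> cells;

	RiskMap(int w, int h) : width(w), height(h), cells(w * h, 0.0) {}

	double &at(int x, int y) { return cells[y * width + x]; }

	void addThreat(int x, int y, double risk) { at(x, y) += risk; }

	void diffuse(double fraction)
	{
		std::vector<double> next = cells;
		for (int y = 0; y < height; ++y)
			for (int x = 0; x < width; ++x)
			{
				double spill = cells[y * width + x] * fraction / 4.0;
				int neighbors = 0;
				if (x > 0)          { next[y * width + x - 1] += spill; ++neighbors; }
				if (x + 1 < width)  { next[y * width + x + 1] += spill; ++neighbors; }
				if (y > 0)          { next[(y - 1) * width + x] += spill; ++neighbors; }
				if (y + 1 < height) { next[(y + 1) * width + x] += spill; ++neighbors; }
				next[y * width + x] -= spill * neighbors;
			}
		cells = next;
	}
};
```

Running diffuse() once per planning step for each step into the future gives exactly the behavior wanted: sharp knowledge about the present, an increasingly smeared-out guess about the future.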

decision making

Here’s what seems to me a good way to divide up the work. Let there be a tactics boss whose job it is to give orders to squads, and a unit boss whose job it is to carry out those orders by issuing detailed orders to units. Orders to a squad are from some list similar to this: Scout a location, move safely to a location (e.g., transfer workers), attack an area/units (go for the enemy), defend an area (e.g. your own mineral line from harassment), hold a line against attack (stop the enemy at a choke), drop a location, assume a formation. Or something along those lines. The tactics boss should be responsible for ordering physical blocking (zealots on the ramp, medics in front of marines, etc.), whether with a formation order or with a “hold position here” order.

In general, the tactics boss should be responsible for coordination of forces: These units in front, these units behind. Or: This squad holds, that squad comes in behind for the sandwich. Or: This squad pokes at the front lines to draw defenders forward, then the drop goes in. The tactics boss should only issue orders with a reasonable chance of success. The unit boss is responsible for executing the orders so they have the best local chances of success, but doesn’t take responsibility for any coordination above the squad level; if the holding squad can’t hold or the sandwich squad gets intercepted, that’s the fault of the tactics boss.

The unit boss has to account for a lot of details in a short time, so it should work by a reactive algorithm. It can’t be as simple as the potential fields and flocking examples I wrote up, but something in the same vein. It can rely on data prepared by the tactics boss. In the spirit of reacting to the future, every friendly unit could have an intended course for the immediate future and every visible enemy unit a projected course, so that units coordinate more smoothly.

The tactics boss has to make complex decisions and, it seems to me, should work by search. (I think a knowledge-based tactics boss is possible, but I don’t see how to create one without first creating a search-based tactics boss.) I think the key decisions in starting to design the search are: 1. Can it be a flat search like MaasCraft’s, or does it want another level of abstraction, making it a hierarchical search? 2. How should it account for uncertainty?

A flat search says: I have these actions, the enemy has those actions, let’s look at sequences of actions for both sides and try to find good ones. A hierarchical search might say: I have these goals. For each goal, what actions (allowing for enemy reactions) might help me achieve it? Is the goal feasible? What combinations of goals can be satisfied? It’s hierarchical because it introduces a new level of abstraction (goals) and searches across both goals and actions. The hierarchical search is more complicated and may seem to do more work, but knowing your goals may also speed up the action search because you can prune away actions that don’t make sense. I don’t know which way is better in this case. It likely depends on implementation choices and stuff.
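Here is one way the goal-based pruning might look, as a toy sketch. The goals, actions, and the relevance table are all invented for illustration; a real bot would compute relevance from the game state:

```cpp
#include <vector>

// Sketch of goal-based pruning in a hierarchical search: actions that
// serve none of the active goals are dropped before the action search.
enum class Goal   { TakeExpansion, DefendMain, HarassWorkers };
enum class Action { AttackFront, DropMain, RecallHome, PatrolBase };

// Placeholder relevance test; entirely made up.
inline bool serves(Action a, Goal g) {
    if (g == Goal::HarassWorkers) return a == Action::DropMain;
    if (g == Goal::DefendMain)
        return a == Action::RecallHome || a == Action::PatrolBase;
    return a == Action::AttackFront;   // TakeExpansion: clear the front
}

// Keep only actions relevant to at least one active goal.
inline std::vector<Action> pruneActions(const std::vector<Action>& all,
                                        const std::vector<Goal>& goals) {
    std::vector<Action> kept;
    for (Action a : all)
        for (Goal g : goals)
            if (serves(a, g)) { kept.push_back(a); break; }
    return kept;
}
```

The gain is exactly the one described above: with one active goal, the action search sees a fraction of the branching factor that a flat search would.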

MaasCraft, as I understand it (I hope somebody will correct me if I make a mistake), doesn’t truly account for uncertainty. Its search allows for the enemy to maneuver squads that MaasCraft knows about; MaasCraft does nothing to protect itself against unscouted squads. In estimating how hard it will be to destroy a base, MaasCraft accounts for enemy reinforcements that may be produced during the attack, and that’s the extent of it.

It may be good enough for all I know, at least if scouting is thorough. But I suspect it would be better to consider the uncertainty about the enemy’s army: Number, locations, unit mix, what upgrades will finish this side of the search horizon. You don’t want to miss the point at which the first few high templar have storm energy. A few more ultralisks can turn a battle, and the enemy may have a few more than you expect, or be about to produce them.

The natural kind of search to use is something in the MCTS family, as MaasCraft does. Algorithms in this family work by drawing a statistical sample of action sequences for both sides and letting the statistics pick out which sequences are better. Given that, a natural way to account for uncertainty is to draw a sample of enemy army details too: We don’t know whether zerg concentrated on drones or hydras; if hydras, they may have gone here or over there.
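Concretely, sampling the hidden enemy army details before each playout might look something like this sketch. The belief model here, a uniform split of a known larva count between drones and hydralisks, is purely illustrative:

```cpp
#include <random>

// Sketch of sampling an enemy-army "determinization" for an MCTS-style
// search: before each playout, draw one concrete guess of what we
// cannot see, and play the simulation out against that guess.
struct EnemyGuess {
    int hydralisks;
    int drones;
};

// Suppose we believe zerg spent roughly `larvae` larvae, split between
// drones and hydralisks in unknown proportions: sample a split.
inline EnemyGuess sampleEnemy(int larvae, std::mt19937& rng) {
    std::uniform_int_distribution<int> split(0, larvae);
    int hydras = split(rng);
    return EnemyGuess{ hydras, larvae - hydras };
}
```

Averaging playout results over many such guesses makes the search prefer plans that are robust across the range of armies the enemy might actually have, rather than plans that only beat the single most likely army.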

I think a smart evaluation function will be a key ingredient. I find MaasCraft’s evaluation much too primitive and low-level. A smart evaluator, even if it’s slower than you think you can get away with, will make search much more effective. I know from experience. A smart evaluator can reduce the number of nodes a search needs to examine by orders of magnitude.

And that’s about as far as I can work it out without starting to make arbitrary implementation decisions. Have fun with it if you can!

Anyway, this is what makes sense to me. You should do what makes sense to you. But whatever else this is, it’s rather a lot for something I started out calling pathfinding!

Zia’s evolution

Zia has followed a wandering course. The early versions opened 12-hatch and went mass mutalisks. Then it switched to a 9-pool opening. Then a fast rush opening. Now it is back to an economic opening, this time with mass lurkers and lings.

I suspect that Zia is collecting strategies. Some future version may have all the strategies available as choices. If it selects well, it will become a force to reckon with.

pathing 8 - what is pathfinding?

Or rather, what should we want pathfinding to be? The term “pathfinding” prejudices the issue: It means finding a path from here to a given goal. The term presupposes that selecting the goal and figuring out how to get there can be separated. Picking the goal depends on the game situation and on what the enemy might do. Choosing how to get there also depends on the game situation and on what the enemy might do. The goal you want may depend on which paths the enemy is likely to block or to have vision over, so goal finding and path finding ought to feed into each other. Maybe we should use the word “movement” instead.

Following a static path to move blindly to a goal is a way to make terrible mistakes.

Consider some use cases:

Scouting. The scout doesn’t mind being seen, as long as it gets to see too, but it wants to live as long as possible to see more. Later in the game you may see the most by reconnaissance in force, that is, scouting with the army. The most interesting targets to scout can change with new information: “There’s an expansion here, there shouldn’t be one over there too.” Or: “These units are screening a base that used to be empty; is it still empty?” The best scouting takes into account possible enemy intentions and tells you actual enemy intentions.

Defending against an ongoing attack. It’s not enough to arrive at the goal, you have to arrive in position to help. It’s a tactical calculation about how reinforcements can best coordinate with existing defenders. Do I need to retreat until I can gather enough units to hold the attack? Can I hold long enough for the reinforcements to make a sandwich?

Reinforcing an army with new production. Sometimes you can send new units to the front and it’s a pathfinding operation. Sometimes the enemy intercepts your reinforcements and it becomes a tactical calculation.

Targets of opportunity. When a squad spots a target it can attack with little risk, it has to decide whether the original goal or the new target is better. Maybe other units should be rerouted to join in.

Air units. Air units can fly anywhere and don’t need pathfinding to reach a goal, but they still prefer some paths over others. If the enemy is near, air units often want an escape route: A nearby obstacle or friendly force to retreat behind when in danger. Mutalisks, corsairs, and wraiths can rely on their speed to escape, but they still want to watch enemy movements to make sure they don’t get cut off and trapped. Overlords want to be close enough to see and detect as necessary, but safe and ideally unseen themselves.

Dropping. The transports would like to stay unobserved for as long as possible, to reduce the enemy’s reaction time: Plan a path over friendly territory, above cliffs, at the extreme edge of the map, and so on. If there is no good path, then the drop target may be a poor one. After being seen, the transports have to circle around or thread through static defenses and moving defenders. If the defense is too strong, the drop should turn back, maybe unloading some units early to save what can be saved. It’s a matter of weighing risk and benefit, both of which are uncertain and depend on the overall game situation and on what the transports see immediately around them.

I’m beating a dead horse by now. Whether the goal is good depends on whether the path is good, and you have to reevaluate the goal when new information comes up en route. Tactical analysis and path analysis have to be integrated one way or another. And the enemy gets a vote, so A* is not a natural fit; A* thinks it is the only agent in the world. If you plan movement by search, it’s natural to use some kind of adversary search.

MaasCraft’s tactical search is one way of integrating tactical analysis and path planning. It’s too coarse-grained to catch all the nuances, but I think it’s an excellent start.

Another consideration is information. You do sometimes have to plan long paths, even if you don’t always follow them to the end. You have good information about the start of the path, because you can see it. You have less information about the end of the path, because a lot may have changed by then. It may make sense to plan the near segment of the path in detail (if you don’t handle details with a reactive algorithm), and distant segments only coarsely.
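A sketch of the coarse-far, fine-near idea: keep every waypoint in the near segment of a planned path and only every few waypoints after that, since the distant segments will be replanned anyway. The cutoff and stride here are arbitrary illustrative choices:

```cpp
#include <utility>
#include <vector>

using Waypoint = std::pair<int, int>;

// Mixed-resolution path: full detail near the unit, thinned far away.
// `keepDetailed` waypoints stay at full resolution; after that, keep
// every `stride`-th waypoint, plus always the final goal.
inline std::vector<Waypoint> thinPath(const std::vector<Waypoint>& path,
                                      std::size_t keepDetailed,
                                      std::size_t stride) {
    std::vector<Waypoint> out;
    for (std::size_t i = 0; i < path.size(); ++i) {
        if (i < keepDetailed
            || (i - keepDetailed) % stride == 0
            || i + 1 == path.size())
            out.push_back(path[i]);
    }
    return out;
}
```

The same effect can be had structurally, for example by planning the far segment at region granularity and only the near segment at tile granularity, which is closer to what a chokepoint-level library gives you for free.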

Next: The Pathfinder Almighty, one last post to wrap up the pathing series.

pathing 7 - terrain analysis libraries

Historically, the BWTA library (for Brood War Terrain Analysis) was the solution. Here’s what I learned about the situation nowadays.

roll your own

Of the 13 bots whose source I had handy, these do not use BWTA but apparently roll their own map analysis: MaasCraft, Skynet, Tscmoo. In some cases it may be a vote of no confidence in the BWTA library; in some it may mean that the author wanted to do it on their own. UAlbertaBot uses BWTA for some tasks and its own map analysis for others, and likely some other bots do too.

Ratiotile has a sequence of 6 posts from 2012 analyzing region and choke detection in Skynet and BWTA, and reimplementing Skynet’s algorithm with tweaks.

BWTA2

The BWTA2 library in C++ is the maintained version of the abandoned original BWTA. I assume this is the version that BWMirror for Java “mirrors”.

I see complaints online that BWTA was slow and crashed on some maps. BWTA2 should at least reduce those problems.

It looks like BWTA2 does pathfinding using a navigation mesh, one of the fancy algorithms that I left for later consideration in this post. I’m thinking that I won’t return to the fancy algorithms because they’re not necessary for Starcraft. It’s easy to guess how a complex data structure leads to a slow and buggy map analysis.

BWEM

The BWEM C++ library (for Brood War Easy Map) by Igor Dimitrijevic is an alternative to BWTA. It claims to provide comparable information and to be more reliable and much faster. It also lets you send events to register that a map building or mineral patch has been removed, and updates its information to match.

I’m not aware of any bot that uses BWEM other than Igor Dimitrijevic’s own Stone and Iron, but I wouldn’t be surprised to find out that a few new bots do. BWEM 1.0 was released in September 2015, and BWEM 1.2 in February.

BWEM calculates ground distances and similar info in O(1) time using grids of data that are written by simple flood-fill algorithms. It’s similar to how Skynet does it (I haven’t looked into MaasCraft or Tscmoo). It makes sense that it should be fast and reliable.
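The flood-fill idea is simple enough to sketch: a breadth-first fill over walkable tiles writes a distance grid once, and afterward every ground-distance query from the seed is an O(1) lookup. This is the general technique, not BWEM’s actual code, and it uses 4-connectivity with unit cost for simplicity:

```cpp
#include <queue>
#include <utility>
#include <vector>

// Fill a grid with BFS ground distances from (seedX, seedY).
// Unreachable or unwalkable tiles stay at -1.
std::vector<std::vector<int>> distanceGrid(
        const std::vector<std::vector<bool>>& walkable,
        int seedX, int seedY) {
    const int h = (int)walkable.size();
    const int w = (int)walkable[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int, int>> frontier;
    dist[seedY][seedX] = 0;
    frontier.push({ seedX, seedY });
    const int dx[] = { 1, -1, 0, 0 };
    const int dy[] = { 0, 0, 1, -1 };
    while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                walkable[ny][nx] && dist[ny][nx] == -1) {
                dist[ny][nx] = dist[y][x] + 1;   // one step farther
                frontier.push({ nx, ny });
            }
        }
    }
    return dist;
}
```

One fill per seed of interest (each base, say) and the grids answer distance queries for the rest of the game, which is why the approach is both fast and hard to get wrong.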

For pathfinding specifically, BWEM returns a high-level path as a list of chokepoints to pass through. Paths are precomputed when the map is analyzed, so the operation is a lookup. If you want low-level pathfinding too, you need to do it another way.

BWEM knows about mineral patches which block the building of a base, usually a base on an island. Base::BlockingMinerals returns a list of blocking minerals for a given base, so you can mine them out first.

what to do?

Take advice about what library to use from people who have used them, not from me. But if I were evaluating them, I’d start with BWEM as the more modern choice.

Next: Really now, what is pathfinding?

pathing 6 - flocking

Until the 1980s, most scholars believed that in a flock of birds, or a school of fish, or a herd of antelopes, one animal was the leader and the group moved together because they followed the leader—even though nobody could figure out which animal it was.

Today we know that animal groups commonly self-organize by simple rules. Craig Reynolds’ boids flocking rules from 1986 are still the prototype today: Each boid looks at its local flockmates (and nothing else) and aims for separation from them to avoid crowding, alignment with the direction they’re heading, and cohesion, or keeping toward the center of mass so the flock holds together.

Flocking algorithms are reactive algorithms for local collision avoidance and group behavior, not very different in idea from potential fields. From what I read, it seems common in games to use A* for pathfinding plus something in the flocking family for steering along the way, where “steering” means detailed movement in reaction to the immediate surroundings.

OpprimoBot can be configured to use a boids algorithm as an alternative to potential fields, and the boids algorithm is turned on for SSCAIT, supposedly because it performs better. So I looked at the source in NavigationAgent::computeBoidsMove() which is called to move one unit a step toward a chosen destination.

Lotsa stuff in there! It’s a long sweep of straight-line code and easy to read. It computes dx and dy relative to the current position of a unit and decides whether to move or attack, and then records the order it computed: From the current position (x, y) to (x+dx, y+dy). I see these steps, in order:

  1. add a delta in the direction of the specified goal
  2. add a delta toward the center of mass of the squad (cohesion)
  3. add a delta tending to keep squad units moving in the same direction (alignment)
  4. for ground units only, add a delta away from the closest squadmates if they’re too close (separation)
  5. make a more complicated check about whether to retreat from enemies; if so, add a delta for that
  6. add a delta away from terrain obstacles that are too close
  7. if retreating, record a move to the new location, otherwise an attack move

I think the retreat implements kiting for ranged units and escape for units that have no attack. I couldn’t find any case where one delta depended on earlier ones, so I think they’re independent and the order doesn’t matter.
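The structure of the delta computation can be sketched like this, covering the goal, cohesion, alignment, and separation steps. The weights and thresholds are invented for illustration; OpprimoBot’s real code is longer and tuned, and also handles retreat and terrain:

```cpp
#include <cmath>
#include <vector>

struct Vec { double x = 0, y = 0; };

inline Vec add(Vec a, Vec b) { return { a.x + b.x, a.y + b.y }; }

// Sum independent (dx, dy) contributions; the unit is then ordered
// from (self.x, self.y) to (self.x + delta.x, self.y + delta.y).
Vec boidsStep(Vec self, Vec goal, const std::vector<Vec>& mates,
              const std::vector<Vec>& mateHeadings) {
    Vec delta;
    // 1. toward the specified goal
    delta = add(delta, { (goal.x - self.x) * 0.1, (goal.y - self.y) * 0.1 });
    // 2. cohesion: toward the squad's center of mass
    if (!mates.empty()) {
        Vec center;
        for (const Vec& m : mates) center = add(center, m);
        center.x /= mates.size();
        center.y /= mates.size();
        delta = add(delta, { (center.x - self.x) * 0.05,
                             (center.y - self.y) * 0.05 });
    }
    // 3. alignment: follow the squad's average heading
    for (const Vec& h : mateHeadings)
        delta = add(delta, { h.x * 0.05, h.y * 0.05 });
    // 4. separation: push away from squadmates that are too close
    for (const Vec& m : mates) {
        double dx = self.x - m.x, dy = self.y - m.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d > 0 && d < 32.0)            // closer than one tile
            delta = add(delta, { dx / d * 8.0, dy / d * 8.0 });
    }
    return delta;
}
```

Because the deltas are independent, as noted above, the order of the steps doesn’t matter: the result is just a weighted vector sum.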

That seems simpler overall than potential fields, and OpprimoBot’s author Johan Hagelbäck seems to believe it works better. It certainly doesn’t have the local optimum problem.

OpprimoBot’s boids movement algorithm produces only the simplest attack-move micro. If you want something fancier, it seems to me that a flocking algorithm could be adapted to create movement toward weakly-defended targets, flight from strong enemies, and focus-fire to kill chosen enemies efficiently. Or you could say that’s no longer pathfinding and code it separately.

Next: The terrain analysis libraries.

drawn games and timeouts

Sorry, too tired today to do flocking. Here’s a rule proposal instead.

When a bot game goes too long, over an hour or an hour and a half with no winner, the game is stopped and automatically adjudicated by score. Pro games have the advantage of referees. When a pro game reaches a point where neither player can win, or where neither player is trying to win, the referees pause the game, talk to the players, call it a draw, and order a regame. When a technical problem interrupts a game, a pro game is either replayed or adjudicated as a win for one player.

The main purpose of the pro rules is to guarantee a decisive result, which is entertaining for spectators and keeps the tournament system simple. Draws are rare; there are no cases in history of a regame also being drawn. The main purpose of the bot rules is to save time by stopping the game when the winning bot is confused and doesn’t know how to finish the enemy off, or takes too long to do it. Game timeouts are common. Both rule sets want a fair result too, of course.

I think we’ve come far enough that we can update the bot rules. The rules we use today are good for yesterday’s bots, not for today’s stronger bots.

I propose that when a game times out, it should be declared lost by both players. You draw, you lose. The better bots (half the table, I estimate) are fully capable of winning won games, and this push from the rules would encourage them to be aggressive and thorough about it. Fewer games will time out and tournaments will run faster overall.

Alternatively, we could give each player 1/2 point for a draw, as in chess. Giving points for a draw offers the loser more reason to lift its command center and hide it and the winner less reason to be aggressive, but I don’t expect it would differ much overall from you-draw-you-lose.

Is it unfair to lose because you can’t win fast enough? It seems fair to me, if the timeout period is long enough, or is conditional on lack of progress by some measure (units destroyed, minerals mined, resources spent). Some individual results will be unfair, of course, as always when adjudicating games instead of finishing them. The current system also produces unfair results, where the side that would have won given more time is adjudicated the loser by current score—I’ve seen that happen a number of times, and it always bugs me. If both sides lost, I would feel the result less unfair.