
what’s next for Steamhammer: the decision

I have decided which tactical skills to work on. My list included skills for specific units: mutalisks, the most important; lurkers, which I’m most interested in for now; scourge, which Steamhammer spends heavy gas on and doesn’t always use well; and defilers, because Steamhammer often reaches the late game. But those are only single unit types. There are also unit coordination skills, like storm dodging, scarab dodging, mine clearing and mine dragging, and making the best use of whatever dark swarm is on the map: all needed, all narrow and specific. And there is tactical analysis, my initial favorite. I have an algorithm in mind, which calls for a fast combat evaluator. MaasCraft’s tactical search also uses a fast combat evaluator, but my idea is different, and I’m not satisfied with MaasCraft’s evaluator. Thinking through what’s needed, I concluded that the first draft would be easy to write but would produce poor results. It likely needs a sophisticated combat evaluator to work well. I have an AI algorithm in mind for that too, but I fear I can’t finish it in time for SSCAIT in December.

To make the most progress before SSCAIT, I decided to work on the next level of pathfinding skills. Steamhammer currently calculates terrain paths without regard to where the enemy may be. On an empty map, ground units reach their destinations without getting stuck on terrain. When a unit is trying to reach its destination safely despite the enemy, a scouting unit or a drone transferring to another base, the unit reacts to dangers by leaving its path and running away from the enemy. It is not able to figure out a way around (though it may blunder into one), and it is not able to tell when its path is completely blocked and it should give up. So overlords scout less safely and less efficiently than they could, and worse, drones trying to transfer may end up burrowed all over the map, wasting supply and risking their lives to achieve nothing.

Steamhammer needs true safe pathfinding. It has to recalculate safe paths when the enemy is sighted. That opens the door to a lot of more specific skills.

• Don’t send drones to a place you know they can’t reach. This alone would save many games.
• Don’t even spawn extra drones inside a tight contain. They won’t get out.
• Better scouting, from maneuvering the early scouting worker to moving overlords and the Recon squad.
• Calculate least-danger paths for harassment. You can take hits as long as you escape.
• Similarly for drops.
• Reach safe spots to attack walls or other stuff from outside enemy range.
• Enemy vision is a kind of danger too. Find sneaky paths.
• Path through nydus canals. Nydus canals are part of my plan to support islands.

I don’t know how many of these I’ll get to by SSCAIT. There is a lot to it: Ground units and air units have different needs, safe paths and least-danger paths are different, sneaky paths are different. Safe drone transfers are the biggest weakness and have top priority. Part of the solution is to spread hatcheries out more, rather than putting all macro hatcheries in the main.

The first part of the job was to create a task manager to run background tasks. It’s simple, I wrote it yesterday. The idea is that pathfinding tasks will update safe pathfinding data structures behind the scenes, so that the calculation load is spread out and the data is reasonably up-to-date. Over time, I expect to add a lot of other kinds of tasks. Steamhammer runs fast, and for now there is little risk of overstepping the frame time limit. (Even in the late game when maxed, most frames take a handful of milliseconds, and spikes above 20ms are rare.) But I have thought up plenty of complicated tasks, and it seems likely to become an issue someday. I want the infrastructure to be ready, so that I can implement a principled solution instead of refactoring a lot of code when the day arrives.
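A minimal sketch of the idea, assuming nothing about Steamhammer's actual classes (`TaskManager`, `addTask`, and `update` are made-up names), might look like this:

```cpp
#include <functional>
#include <queue>

// Hypothetical sketch of a background task manager. Each frame it runs
// queued tasks up to a limit; unfinished work waits for the next frame,
// so the calculation load is spread out over time.
class TaskManager
{
    std::queue<std::function<void()>> tasks;

public:
    void addTask(std::function<void()> task)
    {
        tasks.push(std::move(task));
    }

    // Run at most maxTasksPerFrame tasks this frame.
    void update(int maxTasksPerFrame)
    {
        for (int i = 0; i < maxTasksPerFrame && !tasks.empty(); ++i)
        {
            tasks.front()();   // execute the oldest task
            tasks.pop();
        }
    }

    size_t pending() const { return tasks.size(); }
};
```

A real version would spend a per-frame millisecond budget rather than a fixed task count, but the shape is the same: tasks queue up, and only so much work happens per frame.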

updated paper on sneaky dropship paths

The CoG 2021 proceedings include one Starcraft paper. “Sneak-Attacks in StarCraft using Influence Maps with Heuristic Search” by Lucas Critch and Dave Churchill is an updated version of a paper from last year, “Combining Influence Maps with Heuristic Search for Executing Sneak-Attacks in RTS Games” by the same authors, which I wrote up as paper on dropship paths.

The dropship paths themselves appear identical to last year’s. The main news is an experiment to test how well the sneaky paths work against the built-in AI, as compared to straight-line paths. Somewhere in the world, hidden from our mortal eyes, there now exists a version of UAlbertaBot which does sneaky drops—at least in one restricted case, the fast four-zealot drop by protoss. (The bot AIUR plays fast four-zealot drop too, but it doesn’t have the fancy shuttle pathfinding. Perhaps the authors will give us a chance to play against this UAlbertaBot version?) The pathfinder replans the path on a fixed schedule, once per second, and stops replanning when it nears the drop point, because it has probably been seen by then anyway and it’s time to stop dodging and get on with it. The shuttle does not react to being shot at (though if the enemy wasn’t seen before, it may change course when the path is replanned). It’s clear that improvements are possible.

Results: A shuttle taking the direct path was somewhat more likely to be seen along the way, and was substantially more likely to be shot down before unloading (versus terran, way more likely to be shot down by marines). The direct path was faster, which had countervailing advantages in some situations. It’s not clear to me how well the results will generalize to other opponents, because they depend on details of the built-in AI’s play. A bot author, as opposed to a researcher, would rather find out whether the sneaky drops produce more wins in a tournament against a range of opponents.

Steamhammer’s skill kit system makes it relatively easy to record data about specific opponents, and to use the data for decisions. It wouldn’t be hard to record the kind of data shown in the paper—survival, damage taken, moving into enemy sight—and use the information to decide whether to drop and what category of path to follow if so. Steamhammer could figure out during a tournament what works against each opponent. I plan to eventually collect at least enough data to decide whether and when to drop.

A new idea in the paper is to use the sneaky pathfinder defensively, to place your buildings to prevent enemy sneak attack: If you find a good path for the enemy to attack you along, place a building that has vision of the path so you see the attack coming. This seems like overkill to me. I don’t think you gain much from a fancy pathfinding algorithm to place buildings for vision. If you want vision all around the edges of your base, then by the time drops are possible, you should have enough pylons, or supply depots, or zerg overlords, to have all the vision you can use. And besides, you have to weigh vision against the risk to the buildings, especially if you have a low-ground main base. I would prefer to spend the effort to react correctly to an incoming attack that you do spot.

If you want source code, you could try contacting the authors.

Steamhammer’s pathfinding

UAlbertaBot comes with distance maps: Give a destination tile position, get back a grid with the approximate ground distance from each tile on the map to the destination, or -1 if you can’t get there from here. Since forking Steamhammer from UAlbertaBot, I’ve added to and refined the subsystem, and used it for more purposes. For example, Steamhammer uses the ground distance to decide when to transfer workers to a new base so that they arrive just after it finishes. Pathfinding is an obvious use of distance maps, so maybe it’s surprising that I didn’t implement it until now.

the implementation

The low-level implementation is simple. The move command the.micro.Move() accepts an optional distance map. If provided, it calls on a simple algorithm that searches the distance map and finds a shortest-distance path segment 8 to 12 tiles ahead of the moving unit. MoveNear() and MoveSafely() also accept the optional distance map and pass it on to Move(), which does the work.
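In outline, following a distance map is just walking downhill along the distances. This is a simplified stand-in, not Steamhammer's actual code; the grid type and `pathStep` name are invented:

```cpp
#include <utility>
#include <vector>

// Simplified sketch of following a precomputed distance map: from the
// unit's tile, repeatedly step to the adjacent tile with the smallest
// remaining distance, up to a lookahead limit (8 to 12 tiles in the
// text), and return the tile at the end of that segment as the move
// target. -1 in the grid means unreachable.
typedef std::vector<std::vector<int>> DistanceGrid;

std::pair<int,int> pathStep(const DistanceGrid & dist, int x, int y, int lookahead)
{
    const int h = (int)dist.size(), w = (int)dist[0].size();
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    for (int step = 0; step < lookahead; ++step)
    {
        int bestX = x, bestY = y, bestD = dist[y][x];
        for (int i = 0; i < 4; ++i)
        {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                dist[ny][nx] >= 0 && dist[ny][nx] < bestD)
            {
                bestX = nx; bestY = ny; bestD = dist[ny][nx];
            }
        }
        if (bestD == dist[y][x]) break;   // at the goal, or stuck
        x = bestX; y = bestY;
    }
    return std::make_pair(x, y);          // the tile to move toward
}
```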

Plumbing the distance map through the abstraction hierarchy is the hard part. I chose to allow the squad’s order (class SquadOrder) to optionally keep a distance map. Basically, at any given time a squad may either have a distance map to its target, or not. If it has a map, then it is passed along on every movement order for the squad’s units. CombatCommander decides on the squad orders and generally gives a distance map to ground squads, and none to air squads which can fly straight to their destinations. Most squad targets are bases, and it happens that Steamhammer precalculates a distance map for each base. So a squad order to target a base uses the base’s distance map for free, while a squad order to target an arbitrary location calculates a new distance map which belongs directly to the squad order. I’ve been planning that detail of the implementation for years.

Pathfinding is used for squads with ground units, for the scouting worker, and for sending a worker to construct a new base. It is not used for constructing other buildings (which are usually nearby though not always) or for transferring workers between bases.

If the debug flag to draw squad information is turned on, a squad which has a distance map is listed with a “#” symbol before its name. The “#” represents the grid of distances.


Pathfinding works well in test games, though not perfectly. The paths are not true shortest paths (see “disadvantages” below), but close enough for now. In test games on Medusa, units of the Ground, Recon, and Watch squads navigate well, no longer getting stuck in the dead end back doors of the bases. On the other hand, I commonly saw a small number of transferred drones stuck there. One oversight is that some units still tend to get stuck behind the neutral buildings on Benzene. Benzene is not in the AIST S4 map pool, so I didn’t notice.

I also need to update the precalculated maps when map obstacles are destroyed. I’m sure I’ll iron out the details in time. It didn’t seem vital for now.

advantages

Once a distance map is calculated, it’s cheap to follow not just one shortest path for one unit, but a shortest path for every unit. If you like one shortest path better than another for any reason, you can choose it on the fly.

A distance map need not be calculated from a single destination tile. It can start from any set of destination tiles, so that you can do things like find the shortest path to enter a region, or choose the closest of 2 equal goals. A distance map need not record literal distance; it can pretend that a dangerous tile is larger than others so that you tend to path around it when you can. Distance maps of different kinds have many potential uses.

  • Ground unit pathing through terrain, the implemented feature.
  • Ground or air unit safe paths, to avoid all known risk for scouting, or to discover containments.
  • Ground or air unit least-danger paths, to reach a defended target with minimal damage.
  • Ground or air paths which are out of sight range of known enemies, for sneaking around.
  • Paths to safe firing zones, so we can attack a defended target from outside its defences.
  • Maintaining a constant distance from an objective, such as the convex hull of a group of enemy units, for arcs and surrounds.

disadvantages

As implemented, the distances mapped are not true unit movement distances, but Manhattan distances. Units moving diagonally, say from bottom right to top left, tend to move not diagonally, but first to the left and then upward. I intend to correct it, but haven’t decided on a plan yet.

Distance maps are a bit bulky and slow, and usually involve calculating a lot of paths that you won’t go anywhere near, so you don’t want to calculate too many maps each frame. It’s better to reuse them as much as possible, which adds complexity. UAlbertaBot comes with a distance map cache, but its replacement policy is “periodically throw out all entries in case some may be unused.” Something a little more sophisticated is called for, especially if you want to store different kinds of distance maps in the same cache.

If you want to map something that changes frequently, like detailed enemy firing zones, you can either recalculate the map frequently—which is fine up to a point—or accept that it will often be a little outdated—also usually fine.

The disadvantages don’t seem severe to me. Distance maps gain their flexibility by doing more calculation than is necessary for any one use case—guess what, work harder, earn more. It does call for a little attention to efficiency.

Next: Combat sim smoothing.

the story of a trapped drone

In a game Steamhammer - McRaveZ on La Mancha, McRave pulled drones to try to survive.

unsupported drone

But what happened to that one drone that seems to be leaning over the chasm? It collided with other units and was pushed across unwalkable terrain to a new position where it was trapped.

zerglings can’t reach the drone

Steamhammer’s zerglings desperately want to get at that drone, but they can’t... quite... reach it. The zerglings were nearly useless for the rest of the game, distracted by the stuck drone every time they reached the enemy base. Steamhammer floundered until it finally remembered that mutalisks are also a unit it can make.

Here is the walkability map. Steamhammer actually checks that it can reach the drone target, but the check is incorrect in this case: There is a narrow corridor, one walk tile or 8 pixels across, leading to the little platform where the drone waits. No unit is narrow enough to walk there (even a ghost is 15 pixels wide), but Steamhammer doesn’t know that.

8x8 walk tiles

The imprisoned drone illustrates a flaw in the map design: Accidents should not push units into places where they will be permanently trapped. Freakling would never allow such a blunder! Of course it also illustrates a bug in Steamhammer. The bug makes a regular appearance on a few maps, like Fortress, but this is the first time I’ve seen it affect play on a regular SSCAIT map.

Next: Steamhammer status. Soon: Some of Steamhammer’s best recent games.

Steamhammer 2.2 status update

It looks as though Steamhammer 2.2 with regions and without BWTA will likely not be ready right after SSCAIT finishes, as I had planned. It will take longer because more stuff needs fixing. For one example, today Steamhammer lost versus UITtest because of a bug where drones entirely stopped mining minerals. I currently have WorkerManager instrumented to the gills to trace it. For another, yesterday I fixed several causes of rare production freezes and also added a workaround for the serious macro glitch that happens when Steamhammer wants to spend a lot of gas and is collecting gas slowly. Game-losing bugs like this are second on my priority list after crashes.

Also on my list is a bug that can cause infinite creep colonies to be built in the main base, instead of a limited number in the natural for defense. And I want to get the second building construction delay bug; fixing the first one got the spawning pool to start earlier, but the spire seems to start later, and that loses games too.

Plus of course I have added a few openings and made the usual varied fixes and changes. I always do.

high and low ground

In my current implementation, regions are defined entirely by chokes, which are themselves defined as narrow places in the terrain. I’m considering whether to change the definition of chokes to include changes in ground level. A narrow ramp counts as a choke no matter what. I’m thinking that maybe a wide ramp should also count as a choke. It has a similar property of defensibility; the difference is that the defensibility is asymmetric: It is easier to hold the top than the bottom. Since it’s defensible you care about it and may want to treat it as a region edge, no matter which side you’re on.


I’m not going to do pathfinding right away (though I’ll likely take initial steps soon), but here is my current plan. I want to separate high-level and low-level pathfinding. A high-level path from A to B is a sequence of chokes to go through, basically the same as BWEM provides. If you’re already in the same region, the sequence is empty. A low-level path is a sequence of positions to go through. A high-level path can be planned fast and if necessary replanned fast. A low-level path has to attend to details, so it saves time if you never have to plan farther ahead than the next choke, especially if you may change your mind.
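As a sketch of the high-level half: a high-level path is just breadth-first search on the graph of regions connected by chokes. The types here are invented stand-ins, not BWEM's or Steamhammer's:

```cpp
#include <algorithm>
#include <map>
#include <queue>
#include <utility>
#include <vector>

// Sketch: regions are graph nodes, chokes are edges, and a high-level
// path is the sequence of chokes between two regions, found by BFS.
// If source and destination are the same region, the path is empty.
struct Choke { int regionA, regionB; };

std::vector<int> highLevelPath(const std::vector<Choke> & chokes,
                               int fromRegion, int toRegion)
{
    // adjacency: region -> list of (choke index, neighboring region)
    std::map<int, std::vector<std::pair<int,int>>> adj;
    for (int i = 0; i < (int)chokes.size(); ++i)
    {
        adj[chokes[i].regionA].push_back({i, chokes[i].regionB});
        adj[chokes[i].regionB].push_back({i, chokes[i].regionA});
    }

    std::map<int, std::pair<int,int>> cameFrom;  // region -> (choke, previous region)
    std::queue<int> frontier;
    frontier.push(fromRegion);
    cameFrom[fromRegion] = { -1, fromRegion };
    while (!frontier.empty())
    {
        int r = frontier.front(); frontier.pop();
        if (r == toRegion) break;
        for (auto & edge : adj[r])
        {
            if (!cameFrom.count(edge.second))
            {
                cameFrom[edge.second] = { edge.first, r };
                frontier.push(edge.second);
            }
        }
    }

    std::vector<int> path;                       // choke indexes, in order
    if (!cameFrom.count(toRegion)) return path;  // unreachable
    for (int r = toRegion; r != fromRegion; r = cameFrom[r].second)
        path.push_back(cameFrom[r].first);
    std::reverse(path.begin(), path.end());
    return path;
}
```

A real version would skip closed chokes and weight the search by threat, but the graph is small enough that replanning is cheap either way.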

For high-level pathfinding, Steamhammer needs to know all the chokes and whether each one is open or closed at last report. Therefore chokes must be found without destructible map obstacles (not how my current implementation works), and then each choke analyzed to see whether it is blocked. I also want to know the choke width, in case not all units fit through, to play on maps like Third World with narrow ramps and Blue Storm with a narrow entrance. For threat-aware high-level paths, it needs a measure of threat for each region and/or choke.

For low-level pathfinding, I want to work at the 32x32 tile level as much as possible, because the 8x8 walk tiles are 16 times as much data. I thought of scaling the “how much room is around this walk tile?” data to tile size, taking the max of the walk tile room values within the tile and filling in that as the room around the tile, perhaps with forcing to zero if the value is small or other adjustments. Other grids will also be at tile scale: This tile is blocked by a building, the threat at this tile is such-and-such. A path could be found at tile level, and any tricky parts double checked at finer resolution if necessary.
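The scaling step might look like this. Names are made up; the 4x4 block size follows from 32-pixel build tiles over 8-pixel walk tiles:

```cpp
#include <algorithm>
#include <vector>

// Sketch of scaling walk-tile "room" data to build-tile resolution: each
// build tile covers a 4x4 block of walk tiles, the tile's room is the max
// of the block's walk-tile values, forced to zero if below a threshold.
std::vector<std::vector<int>>
scaleRoomToTiles(const std::vector<std::vector<int>> & walkRoom, int minRoom)
{
    const int tileH = (int)walkRoom.size() / 4;
    const int tileW = (int)walkRoom[0].size() / 4;
    std::vector<std::vector<int>> tileRoom(tileH, std::vector<int>(tileW, 0));

    for (int ty = 0; ty < tileH; ++ty)
    {
        for (int tx = 0; tx < tileW; ++tx)
        {
            int best = 0;
            for (int wy = 4 * ty; wy < 4 * ty + 4; ++wy)
                for (int wx = 4 * tx; wx < 4 * tx + 4; ++wx)
                    best = std::max(best, walkRoom[wy][wx]);
            tileRoom[ty][tx] = best >= minRoom ? best : 0;   // force small values to 0
        }
    }
    return tileRoom;
}
```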

For air units, regions don’t matter and probably all path planning can be done at tile scale. Then detailed movement decisions related to firing and cooldown will be made frame by frame at pixel scale as now.

zones? areas?

In related news, I’m thinking of changing the name “regions”. Starcraft has its own idea of regions, represented by BWAPI::Region objects. They work differently and Steamhammer currently doesn’t use them, but they could be useful for some purposes. It would be nice to avoid a name clash, so that when you see a variable named region you don’t have to look up the type to find out what it is.

I like short names, so I’m thinking Zone or Area. Any other suggestions?

potential path analysis

Suppose you want to place static defense, or immobile defense like sieged tanks or burrowed lurkers, to guard your natural. Or maybe you want to place spider mines, or other scout units, in a picket line to spot movement. Where should you put your stuff? Well, it depends on the terrain and on what you are guarding against. To protect against ground attacks on the natural base, you normally want defenses which cover the choke. Many bots do it directly: Here is the base, there is the choke, set defenses in between.

In the general case, you want to analyze the paths your opponent must take, or is likely to take. If you can place defenses to cover every approach (out to beyond the enemy’s range of fire), then an attack must engage the defenses before it can reach the defended place—stand and fight, or siege them down from range, or run by while taking damage, no matter what there will be shooting.

This is true even against air units. If a pro terran suspects a drop early in the game, they may build 2 turrets widely spaced across the main (leaving most of the army in the natural for all contingencies). The turrets don’t close down every path, and the drop can still happen, but they make life difficult for the shuttle. The drop might come in at any angle and protoss may try to be tricky, but protoss has to find the turrets and either try to thread through (risky) or stop to take one out (slow). Bots are not sharp enough at defense to get by with so few turrets, but can still obey the principle that follows from path analysis: To defend against a drop with a small number of transports, widely spaced turrets do the most to limit the enemy freedom of action.

Compare late game defense against arbiter recall. The arbiter cannot come in at just any angle—there’s a big terran army in position to catch it! And at this stage in the game terran has the minerals to afford fences of turrets across any vulnerable approaches to a mineral line—covering the direct paths. In addition, depending on the terrain, many terrans will build turrets right up against the edge of the map to guard against the arbiter sliding in along the edge where the army won’t see it. That is an example of covering the likely path.

Path analysis can also tell where it may make sense to place pylon walls. I would love to be able to automatically discover the possibility of pylon walls to protect the mineral line at the mineral only bases on Circuit Breaker. Pylons there between the nexus and cliff can keep vultures far enough away that the probes can evade some of their fire.

The advantage of path analysis over choke analysis is that it is more general. It automatically covers all cases. You don’t need special case code to figure out how to defend your natural on Moon Glaive, which has 2 entrances, or to defend a mineral only on Python, which is not behind a choke at all. You don’t have to calculate separately whether a photon cannon’s range is enough to cover a wide choke, or leaves a path open; the analysis tells you. If there is a backdoor entrance, you don’t have to go looking for it, a thorough path analysis will remind you. But how do you do the analysis?

shortest path method

Steamhammer inherited distance maps from UAlbertaBot (originally DistanceMap, more recently renamed GridDistances). UAlbertaBot may have borrowed the idea from Skynet—nothing new here, and I’m sure many bots use this idea. Anyway, the algorithm is trivial: Specify a tile as the start, and give it distance 0. Tiles adjacent to 0 get distance 1, tiles adjacent to 1 (and not already marked) get distance 2, and so on. If you only mark walkable tiles, then you get something reasonably close to a map saying how far away everything is by ground from the start tile.

You can use the distance map to effortlessly find shortest ground paths either to or from the start tile. It’s efficient if you find many paths to or from a specific place (such as a base you want to defend or attack). In fact, it’s more general: In principle, you can specify any subset of tiles as start tiles, even tiles that are far away from each other. Specify the set of tiles where your workers are, and you find shortest paths to any part of your mineral line, for example. You can also use tricks like adding fake extra distance to tiles which are under fire, to route around dangerous places as much as possible.
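Here is the flood fill in sketch form, written multi-source from the start. Simplified types, not the real GridDistances class:

```cpp
#include <queue>
#include <utility>
#include <vector>

// The algorithm described above: breadth-first flood fill from one or
// more start tiles over a walkability grid. Start tiles get distance 0,
// their neighbors 1, and so on; unreachable tiles stay at -1.
std::vector<std::vector<int>>
makeDistanceMap(const std::vector<std::vector<bool>> & walkable,
                const std::vector<std::pair<int,int>> & starts)   // (x, y) pairs
{
    const int h = (int)walkable.size(), w = (int)walkable[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int,int>> frontier;
    for (auto & s : starts)
    {
        dist[s.second][s.first] = 0;
        frontier.push(s);
    }

    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    while (!frontier.empty())
    {
        auto t = frontier.front(); frontier.pop();
        for (int i = 0; i < 4; ++i)
        {
            int nx = t.first + dx[i], ny = t.second + dy[i];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                walkable[ny][nx] && dist[ny][nx] < 0)
            {
                dist[ny][nx] = dist[t.second][t.first] + 1;
                frontier.push({nx, ny});
            }
        }
    }
    return dist;
}
```

The fake-extra-distance trick for dangerous tiles turns this into a weighted search (a priority queue instead of a plain queue), but the plain version is all you need for terrain.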

If you’re only worried about shortest paths, there’s an easy algorithm to find the set of all tiles that are on a shortest path to a given place, your origin (which might be a tile or an area). First, make a distance map starting from the origin. Then pick a point outside—say, the center of the map if it’s reachable—and look up its distance. Let’s pretend it’s 100. Mark all tiles that have distance 100, or perhaps all tiles with distance 100 which are not in a base you control (since the enemy won’t start out from there). Then walk inward along the distance map: Mark all adjacent tiles with distance 99, then 98, and so on. When you’re done, you know all the tiles that are on a shortest path from anywhere at distance 100 to your origin.

The analysis only includes shortest paths, but it is thorough. It counts shortest paths in every direction, including through back doors.
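The inward-marking walk can be sketched the same way, taking a finished distance map and the chosen outer distance as inputs (invented names again):

```cpp
#include <vector>

// Sketch of the marking algorithm above: given a distance map from the
// origin and an outer distance D, mark every tile on some shortest path
// from distance D down to the origin. Walk inward one ring at a time; a
// tile is kept only if it is adjacent to a marked tile one ring out.
std::vector<std::vector<bool>>
markShortestPathTiles(const std::vector<std::vector<int>> & dist, int outerD)
{
    const int h = (int)dist.size(), w = (int)dist[0].size();
    std::vector<std::vector<bool>> onPath(h, std::vector<bool>(w, false));
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    for (int d = outerD; d >= 0; --d)
    {
        for (int y = 0; y < h; ++y)
        {
            for (int x = 0; x < w; ++x)
            {
                if (dist[y][x] != d) continue;
                if (d == outerD) { onPath[y][x] = true; continue; }
                for (int i = 0; i < 4; ++i)   // adjacent to marked ring d+1?
                {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        dist[ny][nx] == d + 1 && onPath[ny][nx])
                    {
                        onPath[y][x] = true;
                        break;
                    }
                }
            }
        }
    }
    return onPath;
}
```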

region and choke method

BWTA and BWEM both calculate chokes and regions, creating a graph of the overall map structure. BWEM does pathfinding on this graph, rather than directly on the map, and if you use BWTA it’s easy enough to implement something similar yourself. You can enumerate high-level paths through the graph, then work out low-level path details inside each region using the distance map method above (make distance maps from each choke).

random method

Instead of calculating all short paths, you might prefer to take a random sample from a larger set of paths. Start with a random point outside (not in a base you control), and create a path using a distance map toward your origin—but don’t always choose a direction that decreases the distance. If the distance decreases on average, the path will eventually lead to the origin. You can also generate random paths with the region and choke method.
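A sketch of the biased walk, with made-up names and a crude RNG:

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Sketch of the random-path idea above: follow a distance map toward the
// origin, but only take a distance-decreasing step with probability
// "bias"; otherwise step to a random walkable neighbor. As long as the
// distance decreases on average, the path eventually reaches the origin.
// Capped at maxSteps as a safety net.
std::vector<std::pair<int,int>>
randomPath(const std::vector<std::vector<int>> & dist,
           int x, int y, double bias, int maxSteps)
{
    const int h = (int)dist.size(), w = (int)dist[0].size();
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    std::vector<std::pair<int,int>> path;
    path.push_back({x, y});

    for (int step = 0; step < maxSteps && dist[y][x] > 0; ++step)
    {
        bool goDown = (rand() / (double)RAND_MAX) < bias;
        int bestX = x, bestY = y;
        for (int start = rand() % 4, i = 0; i < 4; ++i)
        {
            int j = (start + i) % 4;           // rotate to vary direction
            int nx = x + dx[j], ny = y + dy[j];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h || dist[ny][nx] < 0)
                continue;
            if (!goDown || dist[ny][nx] < dist[y][x])
            {
                bestX = nx; bestY = ny;
                if (goDown) break;             // first downhill neighbor is fine
            }
        }
        // If no acceptable neighbor was found, stay put this step.
        x = bestX; y = bestY;
        path.push_back({x, y});
    }
    return path;
}
```

With bias 1.0 this degenerates to an ordinary shortest path; lower bias gives wigglier, more diverse samples.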

To make sure the random paths are diverse enough, you could use a multi-scale method. Start with a coarse path whose steps are far apart. It can be done directly using a distance map; pick random angles and find the tile in that direction at a given Euclidean distance (of, say, a randomly chosen 10 plus or minus 5 tiles), and accept it if it has a ground distance that is not too great (this choice can be random too). Once you have a coarse path, you can fill in the detailed path between the coarse path tiles using the same distance map.

waypoints

Steamhammer inherits from UAlbertaBot the use of waypoint navigation in 2 narrow cases, for moving the scouting worker around the edge of the enemy base, and for moving the transport around the edge of the map when dropping. The 2 implementations are completely separate.

Today I am fixing yesterday’s bug related to drop. Following my minimum energy trajectory, I naturally clean up any code that I work on. The drop navigation turns out to be extravagantly overcomplex; I imagine it was copy-pasted from the scout navigation with minimal changes, even though following the map edge is a far easier problem. In cleaning it up, while keeping the same general plan I am eliminating 1 persistent data structure (dropping _waypoints and keeping _mapEdgeVertices which I will rename to _waypoints) and 2 temporary data structures used unnecessarily to find the edge of the map. You seriously don’t need complex calculations to lay waypoints around the edge of the map. I decided to keep the overall waypoint plan because it provides flexibility for future uses, even though the flexibility isn’t needed now.

Back when I first worked on drop, I treated this code as a black box and touched it as little as possible while marveling at its opacity. Now it is becoming much shorter (and incidentally faster), and getting commented so it will be understandable. I also intend to decide accurately which direction to take around the edge. Steamhammer too often takes the long way around.

Waypoints seem like a good navigation tool whenever you want to plan movement several steps in advance. If you figure out a safe path to get an overlord to a good observation post, or to route vultures around the sunken to reach the mineral line from the far side, then you can represent the path as a sequence of waypoints. Executing the plan seems easy. I guess if you’re moving a substantial ground army, you probably don’t want to send all units to a single point, but I can see ways to work with that (either plan ahead with “way zones” or react on the fly to keep to safe locations within some tolerance). Of course the plan may run into difficulties so that you have to replan, but that is always true.

If you sometimes plan only one step ahead, then you don’t need waypoints—but you can still use them. Waypoints seem like a general purpose representation of paths from A to B.

I’ve noticed the duplicated waypoint code before, between scouting and dropping, and thought of factoring it out into a waypoint class for wider use. I may do that. It would be part of the solution to the other and bigger bug, getting stuck on map obstacles.

Do people have opinions?

map analysis plans

One of my goals for Steamhammer is to remove the dependency on BWTA, a large, slow and troublesome library. Its startup time to analyze a new map for the first time is ridiculous, and I want full power to analyze map blocks. I could go to BWEM, and I may yet change my mind and do that. But I’m not 100% satisfied; I might have to modify the library to simplify hypothetical reasoning and add features like pathfinding for units which are pushed through minerals. Another idea is to start with UAlbertaBot’s existing distance maps. It calculates ground distances at build tile resolution (32 x 32 pixel tiles) rather than the walk tile resolution of the map (8 x 8 pixel tiles), but this doesn’t cause any obvious problems and is easy to change if necessary. UAlbertaBot doesn’t actually rely on many BWTA features.

I was leaning toward the native distance map solution already. It calls for writing more code, but not that much, and the final solution would end up simpler and better tailored. Now Dave Churchill has made exactly that change to UAlbertaBot. It smooths my path since I can borrow code, and I plan to follow.

Dave Churchill removed BWTA big-bang style from UAlbertaBot, dropping the dependency and replacing its major uses in one step. I think the largest piece was to add a BaseLocation class—not a large piece at all. My implementation strategy will be the opposite. I’ll replace uses of BWTA item by item, testing as I go, and when I’m satisfied I’ll drop BWTA. The development process should be gradual and stable.

As a first step I taught Steamhammer to pay attention to map blocks in calculating ground distance maps. It doesn’t notice when blocks are removed, or route units around blocks, it only calculates distances more accurately based on the map’s initial state. Even this small step improves play, since Steamhammer decides on expansion locations partly based on ground distance. Since I reflexively clean up any code that I touch, I also fixed a bug in checking whether a tile is walkable and reduced the unnecessarily large size of the data structures. Distance maps are better, faster, and cheaper—there should be less memory traffic in calculating a map, and lookups will be more compact in cache.

coping with map blocks

On 2-player maps, some games of Steamhammer against strong bio terrans, LetaBot and Tyr by Simon Prins, have gone like this: Steamhammer opens with mutalisks and, seeing defenses at the front and not knowing how to fly around them, falls into a standoff. Both sides build up for a while. The terran moves out with an infantry force. That’s a head-to-head fight that the zerg flyers understand, and they clean up the terrans before serious damage is done to zerg bases—not without losses, of course. Meanwhile, Steamhammer has long since sent out a drone to take a 3rd base, but the drone was confused by the blocking buildings on Benzene, or the blocking minerals on Destination or Heartbreak Ridge, and is still wandering back and forth looking for the way. Steamhammer has a mineral excess but does not make macro hatcheries because it is waiting for the new base to start. Zerg can’t keep up in production and loses to the next attack.

Map blocks are causing too many losses. I have to do something soon. Most bots stronger than Steamhammer already take measures (I think only Bereaver still suffers losses to map block issues). What are the options?

  1. Fix pathing: Go around the blocks.
    A. Goal monitoring: Detect units which seem stuck and reroute them.
    B. Detect that paths may go through blocks and (somehow) take extra care.
    C. Switch to BWEM. I want to, but maybe later?
  2. Clear the blocks.
    A. Clear blocks near our bases unconditionally. I think some bots do.
    B. Clear blocks which are adjacent to the main, or to an expansion we want.
    C. Goal monitoring: Clear blocks near stuck units. The units themselves can often do it.
    D. Hypothetical path analysis: Clear a block if it makes a path that we want to take shorter.
    E. Strategic analysis: Open paths which are more valuable to us than to the enemy.
  3. Work around it in some cheesy way.
    A. Take bases in an order that doesn’t run into map blocks.
    B. Hardcode, or add to the config file, a list of maps with blocks and what to do about them.
    C. Get vision of the base first, and then mineral walk to it, passing through blocks.
Obviously the ideal is to both fix pathing and clear the blocks, as in 1C plus 2E. Goal monitoring has some value, but not much if the goals hardly fail.

To analyze from general principles whether it’s a good idea to clear a given block, you need hypothetical reasoning: If I clear this block, then that path opens, or that shorter route becomes available. I don’t notice direct support for hypothetical reasoning in either BWTA2 or BWEM. It doesn’t seem too difficult to cobble something together if you can find out which choke each blocking mineral or building blocks, and the choke-to-choke distances.

For now I don’t want an ideal solution, I want something simple that solves enough of the problem. I expect I’ll start with 2B and adjust as needed.

What do others do? Are there tricks or pitfalls?

Update: I should have talked about mineral walking. Quite a few competitive maps are designed with small mineral patches on one or both sides of a block. A worker at the block can see the minerals on the other side, right-click on them, and pass through the block, but another kind of unit cannot. Electric Circuit in the SSCAIT map pool has the feature, but the map is not used, because too many bots get units trapped behind the blocks. Of course, as long as the map is not used there is no incentive to fix the bugs....

And that suggests an idea for zerg bots: First send an overlord to any base that you want to take. With vision of the minerals, you can mineral walk the expansion drone to the base. Having vision also gives some assurance that the base is safe and worth taking, and it lets you deal with spider mines. I’ve added the idea to the post as 3C. I hadn’t thought of it before, but now I’m seriously considering it.

LetaBot’s wall 3 - low-level utilities

Today, a few low-level parts that RecursiveWall() relies on. As mentioned yesterday, LetaBot plans its wall to fit inside a bounding box around the center of the choke to be walled off. It finds the Closest and Furthest tiles from the original command center that are in the box.

First, the SpaceArray that stores the sizes of buildings. The information can be found under List of Unit and Building Sizes in Liquipedia. The top, bottom, left, and right numbers are pixel gaps that the building leaves along each side. It’s the basic data you need to figure out how tight a wall is. With the gap sizes of two adjacent buildings, you can find the total gap between the buildings and see which units fit through the gap. Here is a sample of the initialization code:

ExtraSpace SpaceArray[10];

	ExtraSpace Barracks;
	Barracks.Top = 8;
	Barracks.Bottom = 15;
	Barracks.Left = 16;
	Barracks.Right = 7;
	Barracks.width = 4;
	Barracks.heigth = 3;

	SpaceArray[2] = Barracks;

MaxGap() is the function that uses the SpaceArray to find the gap. You pass in x and y and it looks to see what’s there. If the location is at the edge of a building, it looks for a neighboring building, figures out the relative locations of the buildings, and does a bogglingly complex calculation to find the horizontal and vertical gaps between the buildings, in pixels. I didn’t try to understand the details.

// a 2x2 part of the walkable array
//check for gaps between each of the tiles
int BuildingManager::MaxGap(int x, int y){

And WalledOffAstar() is the function that determines whether a wall has been created inside the bounding box. For some reason you pass in Startx and Starty, which are always at the Closest location. WalledOffAstar() uses the A* algorithm to look for a path between Closest and Furthest, a path which a unit with dimensions unitH and unitW can follow. If no such path exists, then there’s a wall across the choke!

bool BuildingManager::WalledOffAstar(int Startx, int Starty, int Ignore, int unitH, int unitW){

When I heard that LetaBot used A* in its wall planning, I imagined that it used A* directly to search for building placements. But now we can see the real strategy. RecursiveWall() tries a building placement and then calls WalledOffAstar() to see whether that placement constitutes a wall. No? OK, try the next building placement.

Next: Finally, RecursiveWall() proper, finding the wall.

blocking one path of two

Here’s a perfect example of why path analysis and tactical analysis should go together.

This map has two paths through the center. Johan Kayser’s bot has thoroughly blocked one path. WuliBot knew better than to confront the bunker farm head-on, but never seemed to conceive the idea of bypassing it by taking the other path through the center. WuliBot could have walked unopposed into the terran base.

Instead, terran eventually won.

pathing 9 - the Pathfinder Almighty

I’ll outline, more or less, how a bot could handle movement in all circumstances, taking everything into account. You knew from the start this wasn’t going to be anything practical, right? PerfectBot might have this kind of system; it’s not something to sit down and implement, it’s a destination to approach over time (if we can find the path). “Pathfinder Almighty” is of course a funny name, since if you manage to create it, it won’t be either of those things. It will be a tactical analysis system that can sometimes achieve its goals!

design wishes

  • Understand blocking by buildings.
    • Don’t lock yourself in by mistake.
    • Know what to do about your own and enemy walls.
    • Know about narrowing chokes to restrict runbys.
    • Know how buildings increase path lengths.
    • Know how many units still fit in the space (for moving groups).
  • Understand blocking by units.
    • Pass through narrow chokes smoothly (at least don’t get clogged up).
    • Don’t (say) siege on a ramp if other units want to move up.
    • Use units to physically block (say) zealots from reaching tanks.
    • Handle units in stasis, or under maelstrom, or locked down.
  • Account for possible enemy vision, enemy fire, enemy movement.
  • Account for uncertainty.
  • Minimum effort. Spend time planning only what benefits from being planned.

As I mentioned last time, uncertainty is small in the immediate future and grows with time.

I haven’t noticed any bot that seems to understand blocking with units. It would allow a lot of valuable skills, such as:

  • probe body-blocks the scouting SCV to slow it so the zealot can hit
  • block the ramp to delay attacks
    • with zealots until dark templar defense is up
    • with dark templar
    • with eggs
    • with stasis or maelstrom or lockdown
  • medics or SCVs block zerglings from reaching marines
  • vultures lay mines among dragoons and physically block their escape
  • vultures physically block zealots from reaching tanks

modeling the world

By the minimum effort and uncertainty management principles, we want to model the immediate situation (where you start from now, the stuff you can see) in detail and the distant situation (where you want to end up) only roughly, and probably with measures of uncertainty. “Enemy army is not in sight, might be close, but is most likely guarding their choke.”

So model the enemy locations that you can see as units in exact positions, and those in distant or future places maybe as an influence map or a risk map.

decision making

Here’s what seems to me a good way to divide up the work. Let there be a tactics boss whose job it is to give orders to squads, and a unit boss whose job it is to carry out those orders by issuing detailed orders to units. Orders to a squad are from some list similar to this: Scout a location, move safely to a location (e.g., transfer workers), attack an area/units (go for the enemy), defend an area (e.g. your own mineral line from harassment), hold a line against attack (stop the enemy at a choke), drop a location, assume a formation. Or something along those lines. The tactics boss should be responsible for ordering physical blocking (zealots on the ramp, medics in front of marines, etc.), whether with a formation order or with a “hold position here” order.

In general, the tactics boss should be responsible for coordination of forces: These units in front, these units behind. Or: This squad holds, that squad comes in behind for the sandwich. Or: This squad pokes at the front lines to draw defenders forward, then the drop goes in. The tactics boss should only issue orders with a reasonable chance of success. The unit boss is responsible for executing the orders so they have the best local chances of success, but doesn’t take responsibility for any coordination above the squad level; if the holding squad can’t hold or the sandwich squad gets intercepted, that’s the fault of the tactics boss.

The unit boss has to account for a lot of details in a short time, so it should work by a reactive algorithm. It can’t be as simple as the potential fields and flocking examples I wrote up, but something in the same vein. It can rely on data prepared by the tactics boss. In the spirit of reacting to the future, every friendly unit could have an intended course for the immediate future and every visible enemy unit a projected course, so that units coordinate more smoothly.

The tactics boss has to make complex decisions and, it seems to me, should work by search. (I think a knowledge-based tactics boss is possible, but I don’t see how to create one without first creating a search-based tactics boss.) I think the key decisions in starting to design the search are: 1. Can it be a flat search like MaasCraft’s, or does it want another level of abstraction, making it a hierarchical search? 2. How should it account for uncertainty?

A flat search says: I have these actions, the enemy has those actions, let’s look at sequences of actions for both sides and try to find good ones. A hierarchical search might say: I have these goals. For each goal, what actions (allowing for enemy reactions) might help me achieve it? Is the goal feasible? What combinations of goals can be satisfied? It’s hierarchical because it introduces a new level of abstraction (goals) and searches across both goals and actions. The hierarchical search is more complicated and may seem to do more work, but knowing your goals may also speed up the action search because you can prune away actions that don’t make sense. I don’t know which way is better in this case. It likely depends on implementation choices and stuff.

MaasCraft, as I understand it (I hope somebody will correct me if I make a mistake), doesn’t truly account for uncertainty. Its search allows for the enemy to maneuver squads that MaasCraft knows about; MaasCraft does nothing to protect itself against unscouted squads. In estimating how hard it will be to destroy a base, MaasCraft accounts for enemy reinforcements that may be produced during the attack, and that’s the extent of it.

It may be good enough for all I know, at least if scouting is thorough. But I suspect it would be better to consider the uncertainty about the enemy’s army: Number, locations, unit mix, what upgrades will finish this side of the search horizon. You don’t want to miss the point at which the first few high templar have storm energy. A few more ultralisks can turn a battle, and the enemy may have a few more than you expect, or be about to produce them.

The natural kind of search to use is something in the MCTS family, as MaasCraft does. Algorithms in this family work by drawing a statistical sample of action sequences for both sides and letting the statistics sort out which sequences are better. Given that, a natural way to account for uncertainty is to draw a sample of enemy army details too: We don’t know whether zerg concentrated on drones or hydras; if hydras, they may have gone here or over there.

I think a smart evaluation function will be a key ingredient. I find MaasCraft’s evaluation much too primitive and low-level. A smart evaluator, even if it’s slower than you think you can get away with, will make search much more effective. I know from experience: a smart evaluator can reduce the number of nodes a search needs to examine by orders of magnitude.

And that’s about as far as I can work it out without starting to make arbitrary implementation decisions. Have fun with it if you can!

Anyway, this is what makes sense to me. You should do what makes sense to you. But whatever else this is, it’s rather a lot for something I started out calling pathfinding!

pathing 8 - what is pathfinding?

Or rather, what should we want pathfinding to be? The term “pathfinding” prejudices the issue: It means finding a path from here to a given goal. The term presupposes that selecting the goal and figuring out how to get there can be separated. Picking the goal depends on the game situation and on what the enemy might do. Choosing how to get there also depends on the game situation and on what the enemy might do. The goal you want may depend on which paths the enemy is likely to block or to have vision over, so goal finding and path finding ought to feed into each other. Maybe we should use the word “movement” instead.

Following a static path to move blindly to a goal is a way to make terrible mistakes.

Consider some use cases:

Scouting. The scout doesn’t mind being seen, as long as it gets to see too, but it wants to live as long as possible to see more. Later in the game you may see the most by reconnaissance in force, that is, scouting with the army. The most interesting targets to scout can change with new information: “There’s an expansion here, there shouldn’t be one over there too.” Or: “These units are screening a base that used to be empty; is it still empty?” The best scouting takes into account possible enemy intentions and tells you actual enemy intentions.

Defending against an ongoing attack. It’s not enough to arrive at the goal, you have to arrive in position to help. It’s a tactical calculation about how reinforcements can best coordinate with existing defenders. Do I need to retreat until I can gather enough units to hold the attack? Can I hold long enough for the reinforcements to make a sandwich?

Reinforcing an army with new production. Sometimes you can send new units to the front and it’s a pathfinding operation. Sometimes the enemy intercepts your reinforcements and it becomes a tactical calculation.

Targets of opportunity. When a squad spots a target it can attack with little risk, it has to decide whether the original goal or the new target is better. Maybe other units should be rerouted to join in.

Air units. Air units can fly anywhere and don’t need pathfinding to reach a goal, but they still prefer some paths over others. If the enemy is near, air units often want an escape route: A nearby obstacle or friendly force to retreat behind when in danger. Mutalisks, corsairs, and wraiths can rely on their speed to escape, but they still want to watch enemy movements to make sure they don’t get cut off and trapped. Overlords want to be close enough to see and detect as necessary, but safe and ideally unseen themselves.

Dropping. The transports would like to stay unobserved until as late as possible, to reduce the enemy’s reaction time: Plan a path over friendly territory, above cliffs, at the extreme edge of the map, and so on. If there is no good path, then the drop target may be a poor one. After being seen, the transports have to circle around or thread through static defenses and moving defenders. If the defense is too strong, the drop should turn back, maybe unloading some units early to save what can be saved. It’s a matter of weighing risk and benefit, both of which are uncertain and depend on the overall game situation and on what the transports see immediately around them.

I’m beating a dead horse by now. Whether the goal is good depends on whether the path is good, and you have to reevaluate the goal when new information comes up en route. Tactical analysis and path analysis have to be integrated one way or another. And the enemy gets a vote, so A* is not a natural fit; A* thinks it is the only agent in the world. If you plan movement by search, it’s natural to use some kind of adversary search.

MaasCraft’s tactical search is one way of integrating tactical analysis and path planning. It’s too coarse-grained to catch all the nuances, but I think it’s an excellent start.

Another consideration is information. You do sometimes have to plan long paths, even if you don’t always follow them to the end. You have good information about the start of the path, because you can see it. You have less information about the end of the path, because a lot may have changed by then. It may make sense to plan the near segment of the path in detail (if you don’t handle details with a reactive algorithm), and distant segments only coarsely.

Next: The Pathfinder Almighty, one last post to wrap up the pathing series.

pathing 7 - terrain analysis libraries

Historically, the BWTA library (for Brood War Terrain Analysis) was the solution. Here’s what I learned about the situation nowadays.

roll your own

Of the 13 bots whose source I had handy, these do not use BWTA but apparently roll their own map analysis: MaasCraft, Skynet, Tscmoo. In some cases it may be a vote of no confidence in the BWTA library; in some it may mean that the author wanted to do it on their own. UAlbertaBot uses BWTA for some tasks and its own map analysis for others, and likely some other bots do too.

Ratiotile has a sequence of 6 posts from 2012 analyzing region and choke detection in Skynet and BWTA, reimplementing Skynet’s algorithm with tweaks.


BWTA2

The BWTA2 library in C++ is the maintained version of the abandoned original BWTA. I assume this is the version that BWMirror for Java “mirrors”.

I see complaints online that BWTA was slow and crashed on some maps. BWTA2 should at least reduce those problems.

It looks like BWTA2 does pathfinding using a navigation mesh, one of the fancy algorithms that I left for later consideration in this post. I’m thinking that I won’t return to the fancy algorithms because they’re not necessary for Starcraft. It’s easy to guess how a complex data structure can lead to a slow and buggy map analysis.


BWEM

The BWEM C++ library (for Brood War Easy Map) by Igor Dimitrijevic is an alternative to BWTA. It claims to provide comparable information and to be more reliable and much faster. It also lets you send events to register that a map building or mineral patch has been removed, and updates its information to match.

I’m not aware of any bot that uses BWEM other than Igor Dimitrijevic’s own Stone and Iron, but I wouldn’t be surprised to find out that a few new bots do. BWEM 1.0 was released in September 2015, and BWEM 1.2 in February.

BWEM answers ground distance queries and similar questions in O(1) time, using grids of data that are precomputed by simple flood-fill algorithms. It’s similar to how Skynet does it (I haven’t looked into MaasCraft or Tscmoo). It makes sense that it should be fast and reliable.

For pathfinding specifically, BWEM returns a high-level path as a list of chokepoints to pass through. Paths are precomputed when the map is analyzed, so the operation is a lookup. If you want low-level pathfinding too, you need to do it another way.

BWEM knows about mineral patches which block the building of a base, usually a base on an island. Base::BlockingMinerals returns a list of blocking minerals for a given base, so you can mine them out first.

what to do?

Take advice about what library to use from people who have used them, not from me. But if I were evaluating them, I’d start with BWEM as the more modern choice.

Next: Really now, what is pathfinding?

pathing 6 - flocking

Until the 1980s, most scholars believed that in a flock of birds, or a school of fish, or a herd of antelopes, one animal was the leader and the group moved together because they followed the leader—even though nobody could figure out which animal it was.

Today we know that animal groups commonly self-organize by simple rules. Craig Reynolds’ boids flocking rules from 1986 are still the prototype today: Each boid looks at its local flockmates (and nothing else) and aims for separation from them to avoid crowding, alignment with the direction they’re heading, and cohesion, or keeping toward the center of mass so the flock holds together.

Flocking algorithms are reactive algorithms for local collision avoidance and group behavior, not very different in idea from potential fields. From what I read, it seems common in games to use A* for pathfinding plus something in the flocking family for steering along the way, where “steering” means detailed movement in reaction to the immediate surroundings.

OpprimoBot can be configured to use a boids algorithm as an alternative to potential fields, and the boids algorithm is turned on for SSCAIT, supposedly because it performs better. So I looked at the source of NavigationAgent::computeBoidsMove(), which is called to move one unit a step toward a chosen destination.

Lotsa stuff in there! It’s a long sweep of straight-line code and easy to read. It computes dx and dy relative to the current position of a unit and decides whether to move or attack, and then records the order it computed: From the current position (x, y) to (x+dx, y+dy). I see these steps, in order:

  1. add a delta in the direction of the specified goal
  2. add a delta toward the center of mass of the squad (cohesion)
  3. add a delta tending to keep squad units moving in the same direction (alignment)
  4. for ground units only, add a delta away from the closest squadmates if they’re too close (separation)
  5. make a more complicated check about whether to retreat from enemies; if so, add a delta for that
  6. add a delta away from terrain obstacles that are too close
  7. if retreating, record a move to the new location, otherwise an attack move

I think the retreat implements kiting for ranged units and escape for units that have no attack. I couldn’t find any case where one delta depended on earlier ones, so I think they’re independent and the order doesn’t matter.

That seems simpler overall than potential fields, and OpprimoBot’s author Johan Hagelbäck seems to believe it works better. It certainly doesn’t have the local optimum problem.

OpprimoBot’s boids movement algorithm produces only the simplest attack-move micro. If you want something fancier, it seems to me that a flocking algorithm could be adapted to create movement toward weakly-defended targets, flight from strong enemies, and focus-fire to kill chosen enemies efficiently. Or you could say that’s no longer pathfinding and code it separately.

Next: The terrain analysis libraries.

pathing 5 - potential fields

Pathfinding algorithms come in two kinds, planning algorithms and reactive algorithms. The planning algorithms are the A* gang, which rely on predictions of future events to find good ways forward. When something unpredictable happens (and the enemy will try to make sure that it does), planners have to spend time replanning. Because A* doesn’t try to predict enemy actions, the original plan may have been poor in the first place.

Potential fields are a family of reactive algorithms, or more precisely a data structure and framework for making up reactive navigation algorithms. Reactive algorithms, popularized in robotics by Rod Brooks, look at the immediate situation and react to it. They don’t know or care what happens next, they just try to do something sensible in the moment.

Potential fields seem popular; a lot of bots say they use them. My favorite sources are by Johan Hagelbäck. Ratiotile pointed out the thesis in this comment (thanks!).

A Multi-Agent Potential Field based approach for Real-Time Strategy Game Bots - 2009 thesis with a simpler game than Starcraft
Potential-Field Based navigation in StarCraft - 2012 paper covering OpprimoBot

The idea is to define a potential for each map location (x, y). For its next move, the unit seeks the highest potential in its immediate neighborhood (or, if you think like a physicist, rolls downhill toward the lowest potential). The goal generates an attractive potential that reaches across the map. Obstacles generate a short-range repulsive potential. And so on—the Starcraft paper has a table of OpprimoBot’s sources of potential. To get the overall potential, the separate potentials from each origin are combined, by adding or with max depending on the effect.

For efficiency, you don’t calculate the full potential field across the map. For each unit, calculate the potential for a small selection of points where you may want to move next. Also, use a bounding box (or something) to prune away sources of potential which are too far away to matter.

Advantages. The strengths of potential fields cover the weaknesses of A*. They naturally handle collisions and destructible obstacles and dynamic hazards and even combat behavior. The thesis suggests a number of tricks for different situations.

It’s easy to think about and easy to implement. The problem shifts from “what behavior do I want in this complex situation?” to “what potential should each object generate?” The problem decomposition is given for you, if you like. You still have to solve the problem, giving attractive fields to goals (“attack the enemy here”) and repulsive fields to hazards (“combat simulator says you’re going to lose, run away”).

Simple micro is easy. To get kiting, give enemies an attractive potential field (up to just inside your firing range) when you’re ready to fire and a repulsive one during cooldown. Some bots overdo it and kite, for example, dragoons against overlords; there’s no need to avoid units which can’t hurt you (and it can reduce your firing rate).

Simple cooperative behavior falls out. If your tanks avoid each other as obstacles and are attracted to within siege range of enemies, then they’ll form up into a line at the right range.

Complicated behavior is possible. Nothing says that a reactive algorithm has to be 100% reactive—a planner can calculate potential fields too, any kind of potential field for any purpose. You can, say, scan the enemy base and use a planner to decide where each dropship should go, and generate a potential field to pull them to their targets. The dropships will dodge unexpected enemies on their way, if the enemies generate their own fields. Potential fields are general-purpose.

Problems. The famous problem is that the fields may combine in a way that creates a local optimum where a unit gets stuck permanently. OpprimoBot reduces but does not solve the local optimum problem with a “pheromone trail,” meaning that each unit’s own backtrail generates a repulsive field that keeps the unit moving. IceBot uses a different solution, called potential flow, which mathematically eliminates the possibility of local optimum points. One paper is Potential Flow for Unit Positioning During Combat in StarCraft (pdf) by IceBot team members, 2013.

It may not be easy to tune the potential fields to get good overall behavior. OpprimoBot apparently has hand-tuned fields. The Berkeley Overmind used machine learning to tune its potential fields.

In general, the weakness of potential fields is that they don’t foresee what happens next. The space between these enemy groups is safe—until you get sandwiched. The Berkeley Overmind used to compute the convex hull of the threats to avoid getting caught inside (and then fly around the outside looking for weak points). But that’s only one example of a way to get into trouble by ignoring what’s next. The Starcraft paper gives a simpler example: If the base exit is to the north and the enemy is to the south, units won’t get out.

The bottom line is that potential fields look great for local decision-making, and not so great for larger decisions like “which choke do I go through?” Which immediately suggests that you could use A* to plan a path at the map region level (“go through this sequence of chokes”) and potential fields along the way. OpprimoBot’s documentation says that it uses Starcraft’s built-in navigation for units which have no enemy in sight range, and otherwise switches to potential fields or flocking.

I see obvious ways to combine potential fields with forward search to gain advantages from both, and that may go into the final Pathfinder Almighty. I think we knew from the start that the Pathfinder Almighty, which takes all available information into account, was neither a pure planning algorithm (since it expects surprises) nor a pure reactive algorithm (since it has to foresee the range of enemy reactions).

Tomorrow: Flocking.