CIG 2016 and the Terran Renaissance

Looking at the entrants to CIG 2016, I think the Terran Renaissance is confirmed. The authors of terran bots have been pushing hard to get into the forefront, and I think they’ve passed the other races and succeeded. Protoss and zerg have not been putting in the same effort to reach the serrated leading edge.

I think terrans Iron and Letabot have the best chances to come in #1. Tscmoo terran can never be counted out, especially since it seems to have gotten a last-minute update (the neural network diagram got bigger; apparently it can remember more in its long short-term memory). I judge that zerg 4-pooler ZZZKBot still has a chance to make it into the top 3. Random UAlbertaBot and zerg Overkill haven’t been updated and seem a cut below. If Krasi0 were playing, I would forecast a terran sweep, though not with full confidence.

Protoss XelnagaII boldly gives itself a new version number, so I consider it an unknown. Protoss MegaBot has mixed results in early going on SSCAIT, but its description emphasizes strategy so maybe it’s an opponent-modeling bot that will do better in a long tournament (I can hope, anyway). And there’s no telling what other bots may have updates that I don’t know about.

We learned that Sungguk Cha’s bot is called Navinad, and Johan Kayser’s bot is called SRbotOne. OpprimoBot is (as in past tourneys) listed as playing terran, not random—I assume that’s its best race, though I would have guessed zerg.

Salsa has the best shot at the bottom of the score chart, with Bonjwa runner-up for the caboose. Not that I would discourage either of them. Salsa learned to play on its own from scratch—that’s an achievement in itself, just not the kind that the tournament is trying to measure.

pathing 5 - potential fields

Pathfinding algorithms come in two kinds, planning algorithms and reactive algorithms. The planning algorithms are the A* gang, which rely on predictions of future events to find good ways forward. When something unpredictable happens (and the enemy will try to make sure that it does), planners have to spend time replanning. Because A* doesn’t try to predict enemy actions, the original plan may have been poor in the first place.

Potential fields are a family of reactive algorithms, or more precisely a data structure and framework for making up reactive navigation algorithms. Reactive algorithms, popularized in robotics by Rod Brooks, look at the immediate situation and react to it. They don’t know or care what happens next, they just try to do something sensible in the moment.

Potential fields seem popular; a lot of bots say they use them. My favorite sources are by Johan Hagelbäck. Ratiotile pointed out the thesis in this comment (thanks!).

  • A Multi-Agent Potential Field based approach for Real-Time Strategy Game Bots - 2009 thesis with a simpler game than Starcraft
  • Potential-Field Based navigation in StarCraft - 2012 paper covering OpprimoBot

The idea is to define a potential for each map location (x, y). For its next move, the unit seeks the highest potential in its immediate neighborhood (or, if you think like a physicist, rolls downhill toward the lowest potential). The goal generates an attractive potential that reaches across the map. Obstacles generate a short-range repulsive potential. And so on—the Starcraft paper has a table of OpprimoBot’s sources of potential. To get the overall potential, the separate potentials from each origin are combined, by adding or with max depending on the effect.

For efficiency, you don’t calculate the full potential field across the map. For each unit, calculate the potential for a small selection of points where you may want to move next. Also, use a bounding box (or something) to prune away sources of potential which are too far away to matter.
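As a minimal sketch of the scheme just described (the weights, radii, and function names here are my own inventions, not taken from OpprimoBot or the papers):

```python
import math

def attraction(pos, goal, weight=1.0):
    # Long-range pull: potential rises as the goal gets closer.
    return -weight * math.dist(pos, goal)

def repulsion(pos, obstacle, radius=3.0, weight=10.0):
    # Short-range push: zero beyond the obstacle's radius.
    d = math.dist(pos, obstacle)
    return -weight * (radius - d) if d < radius else 0.0

def potential(pos, goal, obstacles):
    # Combine the separate potentials by adding; max() is the other common choice.
    return attraction(pos, goal) + sum(repulsion(pos, o) for o in obstacles)

def next_move(pos, goal, obstacles):
    # Evaluate only the unit's immediate neighborhood, never the whole map.
    x, y = pos
    candidates = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(candidates, key=lambda p: potential(p, goal, obstacles))
```

With no obstacles, next_move((0, 0), (10, 0), []) returns (1, 0), a straight step toward the goal; repulsive sources bend the step away from trouble.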

Advantages. The strengths of potential fields cover the weaknesses of A*. They naturally handle collisions and destructible obstacles and dynamic hazards and even combat behavior. The thesis suggests a number of tricks for different situations.

It’s easy to think about and easy to implement. The problem shifts from “what behavior do I want in this complex situation?” to “what potential should each object generate?” The problem decomposition is given for you, if you like. You still have to solve the problem, giving attractive fields to goals (“attack the enemy here”) and repulsive fields to hazards (“combat simulator says you’re going to lose, run away”).

Simple micro is easy. To get kiting, give enemies an attractive potential field (up to just inside your firing range) when you’re ready to fire and a repulsive one during cooldown. Some bots overdo it and kite, for example, dragoons against overlords; there’s no need to avoid units which can’t hurt you (and it can reduce your firing rate).
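The kiting rule might be sketched as a single potential function over distance to the enemy (the thresholds and shape are made up for illustration):

```python
def kite_potential(dist, weapon_range, on_cooldown, enemy_can_hit_us):
    # Invented sketch of the kiting rule; higher potential = better spot.
    if not enemy_can_hit_us:
        # A harmless target (dragoon vs. overlord): no dancing,
        # just close to firing range and keep shooting.
        return -max(0.0, dist - weapon_range)
    if on_cooldown:
        # Repulsive while the weapon reloads: farther is better.
        return dist
    # Ready to fire: attracted to just inside our own range.
    sweet_spot = weapon_range - 1.0
    return -abs(dist - sweet_spot)
```

A unit sliding uphill on this potential holds range while ready, backs off during cooldown, and never dances against units that can't shoot back.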

Simple cooperative behavior falls out. If your tanks avoid each other as obstacles and are attracted to within siege range of enemies, then they’ll form up into a line at the right range.

Complicated behavior is possible. Nothing says that a reactive algorithm has to be 100% reactive—a planner can calculate potential fields too, any kind of potential field for any purpose. You can, say, scan the enemy base and use a planner to decide where each dropship should go, and generate a potential field to pull them to their targets. The dropships will dodge unexpected enemies on their way, if the enemies generate their own fields. Potential fields are general-purpose.

Problems. The famous problem is that the fields may combine in a way that creates a local optimum where a unit gets stuck permanently. OpprimoBot reduces but does not solve the local optimum problem with a “pheromone trail,” meaning that each unit’s own backtrail generates a repulsive field that keeps the unit moving. IceBot uses a different solution, called potential flow, which mathematically eliminates the possibility of local optimum points. One paper is Potential Flow for Unit Positioning During Combat in StarCraft (pdf) by IceBot team members, 2013.

It may not be easy to tune the potential fields to get good overall behavior. OpprimoBot apparently has hand-tuned fields. The Berkeley Overmind used machine learning to tune its potential fields.

In general, the weakness of potential fields is that they don’t foresee what happens next. The space between these enemy groups is safe—until you get sandwiched. The Berkeley Overmind used to compute the convex hull of the threats to avoid getting caught inside (and then fly around the outside looking for weak points). But that’s only one example of a way to get into trouble by ignoring what’s next. The Starcraft paper gives a simpler example: If the base exit is to the north and the enemy is to the south, units won’t get out.

The bottom line is that potential fields look great for local decision-making, and not so great for larger decisions like “which choke do I go through?” Which immediately suggests that you could use A* to plan a path at the map region level (“go through this sequence of chokes”) and potential fields along the way. OpprimoBot’s documentation says that it uses Starcraft’s built-in navigation for units which have no enemy in sight range, and otherwise switches to potential fields or flocking.

I see obvious ways to combine potential fields with forward search to gain advantages from both, and that may go into the final Pathfinder Almighty. I think we knew from the start that the Pathfinder Almighty, which takes all available information into account, was neither a pure planning algorithm (since it expects surprises) nor a pure reactive algorithm (since it has to foresee the range of enemy reactions).

Tomorrow: Flocking.

pathing 4 - up-to-date A* stuff

For a view of the A* family today, I liked this site the best. Maybe I should have skipped the last two posts and simply said “go there!” But I don’t regret stopping at a couple of points along the road to get an idea of the historical development.

Amit’s A* Pages

There’s more stuff here than ought to fit into one post, but I’ll keep to the highlights.

Practical techniques. Amit talks about different choices of heuristic, ways of adjusting the “actual” sunk cost g(x) and/or the heuristic future cost h(x) to get different behavior, and other little tricks. Worth reading, not worth repeating.

Implementation details. Amit has a long section on how to make A* efficient. For the priority queue at the heart of A*, he examines the tradeoffs of too many algorithms and then says that he’s never needed anything other than a binary heap. OK, done!

Various fancier data structures. This seems like a key section. If you represent a Starcraft map as a grid of tiles, then A* will have to do a ton of work to step over each tile that might be in a path. Amit offers these choices that seem relevant to Starcraft.

  • visibility graph - draw polygons around obstacles and navigate from corner to corner
  • navigation mesh - break walkable areas into convex polygons and navigate from edge to edge
  • hierarchical representations - where each hierarchy level may be of a different type
  • skip links - add extra graph edges to take long paths in one A* search step (cheap imitation hierarchical representation)

And there are choices about how to use each representation. It’s an important topic, but I don’t know enough to judge which ideas are worth going into in detail. I’ll come back to it when I’ve gotten further along.

Path recalculation when the map changes on you, ditto: I’ll revisit if necessary when I know more.

Islands. If the map has areas you can’t reach by walking but you may want to go to, you’d better mark them as only reachable by air. Do a preprocessing step, I guess, or at least cache the results. You don’t want to have to repeatedly search the whole map during the game to find out that you can’t walk there. That includes not only islands to expand to but cliffs you might want to drop on. Obvious, but I hadn’t thought of it.

Group movement more or less says “see flocking.” I’ll do flocking after potential fields, since they’re related. Is it OpprimoBot that claims to use a flocking algorithm? Some bot does; I’ll check it out.

Coordinated movement, like moving in formation or moving through a narrow passage in order, earns a paragraph. It’s potentially interesting for Starcraft. If you want to do that coordination in top-down planning style, a better source is the article pair Coordinated Unit Movement and Implementing Coordinated Movement by Dave Pottinger on Gamasutra.

Tomorrow: Potential fields.

pathing 3 - hierarchical algorithms based on A*

As the next step in tracing the development of A* methods, I picked this older paper from 2004 by Adi Botea, Martin Müller, and Jonathan Schaeffer of the games group at the University of Alberta:

Near Optimal Hierarchical Path-Finding

Links to code in the paper are expired, unsurprisingly. The presented algorithm seems OK to me (it’s about saving time by accepting a slightly worse path), but I was most interested in the literature review from pages 5-8.

They talk about a bunch of algorithms based on A* but in some way more efficient. These fancier algorithms are hierarchical, with at least two levels of abstraction. If there are two levels, then the higher level is something like “go through these map regions” (maybe giving border points to pass through), and the lower level is more like “go through these points within a region.” Finding a path means doing a hierarchical search to find a high-level path (“go through these regions”) and a low-level path (“go through these map grid points”). Each level may be an A* search itself.

Note: This is not the same kind of hierarchical search that I promised to talk about, though it’s related. Pathfinding hierarchies have only a simple form of abstraction.

My conclusions:

The hierarchical search, if it has two levels, goes something like this: The top-level search plans a path through map regions. To find the cost of traversing a region (“go from this choke to that one”), it calls on the low-level search. The low-level search should cache answers, because it’s going to be asked the same questions frequently. Details vary by algorithm but seem easy enough to figure out.
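A toy version of the two-level idea, with the region graph and crossing costs invented for illustration and the cached low-level search reduced to a lookup table:

```python
import heapq
from functools import lru_cache

REGION_GRAPH = {                      # region -> walkable neighbor regions
    "main": ["natural"],
    "natural": ["main", "center"],
    "center": ["natural", "enemy_nat"],
    "enemy_nat": ["center"],
}

@lru_cache(maxsize=None)
def crossing_cost(a, b):
    # Stand-in for the cached low-level search from one region border to
    # the next; in a real bot this would be a grid search inside the region.
    costs = {("main", "natural"): 10,
             ("center", "natural"): 25,
             ("center", "enemy_nat"): 25}
    return costs.get(tuple(sorted((a, b))), 999)

def high_level_path(start, goal):
    # Plain Dijkstra over regions (A* with h = 0), asking the low level
    # only about the crossings it actually considers.
    frontier = [(0, start, [start])]
    done = set()
    while frontier:
        cost, region, path = heapq.heappop(frontier)
        if region == goal:
            return cost, path
        if region in done:
            continue
        done.add(region)
        for nxt in REGION_GRAPH[region]:
            if nxt not in done:
                heapq.heappush(
                    frontier, (cost + crossing_cost(region, nxt), nxt, path + [nxt]))
    return None
```

Here high_level_path("main", "enemy_nat") gives the region sequence; each region crossing would then be expanded into grid moves on demand.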

Maps are dynamic. Not everything stays put. In Starcraft, blocking minerals can be mined out and map buildings can be destroyed. Your own and the opponent’s buildings and units act as physical obstacles too. For full accuracy, the low-level search has to take everything into account. Ideas I’ve seen so far for coping with this within the A* family seem to amount to “replan if the map changes,” which is OK for occasional changes like the destruction of map obstacles.

Be lazy. Not only maps are dynamic, goals are dynamic too; before you reach the end of your path, you may change your mind and want to go somewhere else after all. So don’t spend cpu time to plan the whole path in full accuracy. Make sure the high-level path is good and plan only your immediate moves in full accuracy. One idea is to have a quick low-level search that only takes into account map features and a full-accuracy low-level search that takes everything into account.

Group movement is important in Starcraft, and this paper doesn’t talk about it. You usually want your units together (not straggling separately past obstacles, or taking different bridges when they might find trouble before joining up again), and if the enemy is around then you care about good formation. That deserves another post.

Next: One or two posts about the latest in the A* family. I still have reading to do, so probably not tomorrow.

pathing 2: the classic A* algorithm

The A* algorithm (pronounced “A star”) is famous in AI. It’s a general-purpose best-first graph search algorithm, and finding paths is only one of its uses (though the biggest).

The Wikipedia article makes it sound more complicated than it is. If you need an introduction from zero, I thought a better source was A* Pathfinding for Beginners by Patrick Lester, from 2005 (though some further reading links are broken now).

The situation: You start at a node in a graph. In the graph, every arc between nodes gives the distance between them. Somewhere out there in the graph are goal nodes. You want to find the closest goal node, or at least one of them. Luckily you don’t have to search blindly, because you have a heuristic function h(x) which gives you a guess, for each node x, telling how close it may be to a goal. Of course h(x) = 0 when x is a goal node.

The algorithm: You keep an “open list” of nodes that are in line to be searched. For each node x in the open list you remember its distance from the start, which is traditionally called g(x). The open list starts out containing the start node, with distance 0 from itself. Each search step is: From the open list, pick the node x with the lowest value g(x) + h(x), the one which is estimated to be closest to a goal (that’s what makes it a best-first algorithm). g is how far you have come, h is how far you estimate you have to go, their sum is the estimated total distance. If x is a goal, done. Otherwise remove x from the open list and add any of x’s neighbors that have not already been visited, giving each neighbor the distance g(x) plus the length of the arc from x (a node that is or ever was on the open list has been visited). That’s all there is to it; code may be filled out with special-case improvements or implementation details, but the idea is that simple.

Because you always take the best node from the open list, the open list can be implemented as a priority queue. Different ways of breaking ties in the priority queue give different search behavior. All the variants are called A*.
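Here is a compact sketch along these lines. One refinement over the bare description: instead of never revisiting a node, it re-queues a node when a shorter route to it turns up, which keeps the answer optimal even with an imperfect heuristic.

```python
import heapq
import math

def a_star(graph, start, is_goal, h):
    # graph: node -> {neighbor: arc distance}. Returns (cost, path) or None.
    open_list = [(h(start), 0, start, [start])]   # entries are (g+h, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if is_goal(node):
            return g, path
        if g > best_g.get(node, math.inf):
            continue                     # stale duplicate entry, skip it
        for neighbor, dist in graph[node].items():
            g2 = g + dist
            if g2 < best_g.get(neighbor, math.inf):
                best_g[neighbor] = g2    # shorter route found, (re)queue it
                heapq.heappush(open_list,
                               (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None                          # no goal reachable
```

With h always 0 this degenerates into Dijkstra’s algorithm; for example, on graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}} it finds the cost-4 path A, B, C, D rather than the direct but longer A, B, D.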

What is the mysterious heuristic function h(x)? If h is the exact distance to the goal, then A* wastes no time on side paths and proceeds straight to the goal. But if you had that, you wouldn’t need A*. If h never overestimates the distance to the goal, then A* is guaranteed to find the shortest path: It took what it thought was shortest at each step, and may have made optimistic mistakes but never overshot, so it could not have overlooked a shorter path.

So for pathfinding on a map, it generally works to set h(x) = the straight-line distance from x to the goal. The actual walking distance may be longer than the straight line, but never shorter, so you’ll find the best path. There may be smarter heuristics that know about map regions or obstacles, but they’re not obvious (and not for this post).

A* is mathematically optimal in a certain sense; you can call it “the fastest” algorithm that solves its exact problem. But don’t be fooled. You may be able to do better than A* by solving a different problem. If you imagine pathfinding on a featureless grid with no obstacles, you can calculate a straight-line path from the start to the goal without examining any nodes in between, because you already know all about them—you’re solving an easier problem and you can do it in one step. There may be ways to cast Starcraft pathfinding as an easier problem than graph search (I’m pretty sure there are), so A* is not necessarily optimal for us.

A* is not able to solve pathfinding in full generality. Make each map tile a graph node. A tile blocked by a building or destructible map obstacle can be given a larger distance from its neighbors to represent the time it takes to clear the obstacle. You can even represent that tiles which are under enemy fire are unsafe to travel over by making the distance to those tiles longer, so that other paths are preferred when available. But A* assumes that the graph is static. It can’t cope with other units moving around or with buildings being built or lifting off. Starcraft is too dynamic for A* to solve the whole range of pathfinding problems. A* in its basic form can only do part of the job.

It looks like there’s a ton of stuff in the A* space, so it needs at least two more posts before I move on to the next pathfinding topic. Tomorrow: Hierarchical algorithms based on A*.

Tscmoo terran apparent neural network output

I was watching the new Tscmoo terran with its reputed neural networks.

screenshot showing what looks like neural network output

Hmm, what are those red and blue dots?

detail of apparent neural network output

I read that as the output of the neural network. The dot diagram is incomprehensible unless we know about the network layout. The text is the interpretation; it looks like strategy instructions or hints to the rest of the program. I timed a couple of updates and found them 15 seconds apart, which fits with strategy information.

I can’t tell what the details mean. How can the army composition be tank-vulture if you open with two starports (see those wraiths on the screen)? Is that a prediction for the opponent, maybe? What does “support_wraiths” mean, since I didn’t notice the wraiths seeming to support or be supported by anything?

crazy new Tscmoo protoss strategy

Whoa, did y’all see that? Tscmoo protoss has a hilarious new strategy: Cannon contain into mass dark archons with mind control!

The cannon contain may win a lot of games against unprepared bots, but the dark archons—I’ve never seen that many at once.... This is even wilder than Tscmoo terran’s nuke strategy.

dark archons and zealots

Update: An even funnier picture: Mass dark archons chasing after a floating engineering bay.

dark archons chase an ebay

pathing 1: intro

I don’t know much about pathfinding, so I decided to write about it.

I wrote earlier about threat-aware pathing, but I didn’t know how to implement it efficiently. Bots are real time and have a lot to think about, so faster is better.

I’ll write about classic A* pathing and its descendants, about potential fields, and about any other interesting ideas my shovel turns up (I’ve seen the corners of a couple other things that might be cool). I’ll look into the pathfinding features of libraries like BWEM. I hope to learn something about how it all works at the low level with BWAPI and how it might interact with Starcraft’s built-in pathfinding. In the end I’ll try to put pieces together to outline how to find paths in full generality, taking into account strategic and tactical goals, obstructions like terrain, destructible map objects, buildings and units, and fields of vision and fields of fire for both sides. I’ll try, but I don’t promise to succeed!

This should be old hat for old hands, but a lot of it is new to me. Maybe I’ll get some help in the comments.

Tomorrow: The classic A* algorithm.

local preponderance of force

I want to borrow a term from the real life military again, an organizing principle for thinking about many kinds of tactical maneuvers.

Remember Lanchester’s Square Law as used by MaasCraft? It is an approximation which says that the power of a force of ranged units is proportional to the square of the number of units. According to this, a force of 6 dragoons is not twice as strong as 3 dragoons, it is 6^2 / 3^2 = 4 times as strong.

In other words, outnumbering the enemy by a little more can be a lot better. 6 dragoons versus 5 is 36/25 ~ 1.4:1 power ratio by the square law, and 7 dragoons versus 5 is 49/25 or nearly 2:1 power ratio. Though it’s good to remember that there are simplifying assumptions behind the derivation of Lanchester’s Square Law. It breaks down if, say, the rear dragoons have to maneuver to get into position. Even in the ideal case it’s not exact; it’s a continuous approximation of a discrete situation.
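In code, the square-law arithmetic is one line (and carries all the caveats above):

```python
def power_ratio(ours, theirs):
    # Lanchester's Square Law: fighting power ~ (number of units) squared.
    # A continuous approximation; it ignores positioning, range, and micro.
    return (ours ** 2) / (theirs ** 2)

print(power_ratio(6, 3))             # 4.0: twice the dragoons, four times the power
print(round(power_ratio(7, 5), 2))   # 1.96: two extra units, nearly 2:1
```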

Having a local preponderance of force is the technical military term for outnumbering the other side. You’ve got more oomph in the fight. If you have a local preponderance of force then you usually want to join battle, because it will help you pull ahead globally and eventually win. That’s the thinking behind UAlbertaBot’s tactical decision making, which amounts to “if it looks like I’ll win then fight, else run away.”

Preponderance of force is why you often want to keep your army together. If you get into a battle, you’ll have the biggest force you could have.

• Killerbot, tscmoo zerg, and Overkill retreat zerglings to their sunkens when faced with a bigger force. Zerglings and sunkens together are more than their sum.

• LetaBot can spread out its army when defending, but when attacking usually tries to concentrate it into a single force for the strongest possible strike. Sometimes LetaBot misbuilds its wall or otherwise leaves some units behind in its base. When that happens, it’s plain to see that the divided army is vastly weaker.

But preponderance of force is also why you often want to split your army, on the principle “hit ‘em where they ain’t” or “fight the base, not the army”.

• Fast units like vultures and mutalisks go harassing on their own because they can race to undefended spots, where they have local preponderance of force, and cause trouble until defenders catch up.

• Ranged units can achieve preponderance of force over units with shorter range by standing back out of reach: They can shoot you and you can’t shoot them because of a cliff, or intervening forces, or whatever. This is the idea behind tank drops on a cliff (which I’ve seen only from IceBot), and it is why LetaBot puts its infantry in front of its tanks.

• Overlord hunting and depot sniping are extreme cases of local preponderance of force. Shoot stuff that can’t shoot back.

• Air units of all kinds often split from the ground army because they are not hindered by terrain. They can achieve local preponderance of force by outmaneuvering ground units, for example using cliffs.

• Drops work best when the drop lands far from defenders and close to juicy targets like workers. It’s true both for harassment drops and doom drops.

Anyway, preponderance of force is a key organizing principle for tactics, an idea that you can use to understand many kinds of tactical choices.

A bot with a strong enough understanding of preponderance of force (maybe from a combat simulator) could theoretically figure out for itself all the uses above, and more besides.

Notes about breaking the general rule: 1. Often it’s correct to fight immediately when you have a preponderance of force, but not always. You may do better by waiting until your advantage is bigger. “I could break this static defense, but I’ll have more left over and end up stronger overall if I wait for the reavers.” Or: “This is a good angle for wraith harass, but now I see a better one.”

2. Sometimes you can come out ahead in a fight from behind, even without superior micro, provided you can engage and disengage at will. Suppose it’s mutas versus marines and the marines win a stand-up fight. The mutas may still be able to poke in and pick off a marine or two before they dance back out of range, taking damage but no losses. The mutas are willing to fight (briefly) because they can come out ahead. It works because the mutas are fast and have more hit points. A similar idea is to send in battlecruisers to fight until they start to take too much damage, then retreat and repair.

3. When you’re ahead in income, it may be faster and safer to win by attrition even if you lose more in every fight. Keep attacking so the enemy must make units and not workers, and can’t catch up in income. If your ratio of income (3:1) beats your ratio of losses (2:1), you’re making progress. And of course the reverse thinking goes for the other side: Fight only the most advantageous battles; you have to win battles by a wide margin to have a chance.
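The attrition arithmetic in concrete numbers (all invented): income 3:1 in your favor, losses 2:1 against you in every fight.

```python
my_income, enemy_income = 300, 100          # minerals earned per exchange
my_losses, enemy_losses = 200, 100          # minerals of units lost per exchange

my_net = my_income - my_losses              # +100 banked each exchange
enemy_net = enemy_income - enemy_losses     # 0: all income goes into replacements

print(my_net - enemy_net)   # 100: the gap widens every fight you force
```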

humans don’t understand bots

Igor Dimitrijevic’s comment on yesterday’s post reminded me: It’s difficult to understand much of a bot’s behavior by watching it.

Krasi0 is a good example. In the last several months I’ve watched the old veteran bot grow much stronger, returning to the top ranks. I can describe in general terms some things Krasi0 has improved at: It is more aggressive, it is better at protecting its workers from danger, it is smarter about where it sieges its tanks. (It also solved crashing bugs.) But I feel sure that the points that I’ve noticed are only the tip of the iceberg. There must be not only details but whole classes of behaviors that I did not pick up on at all—otherwise it could not have improved so much.

I guess humans don’t have the perceptual bandwidth to take it all in, at least not without the experience or prior knowledge to know what to look for. Starcraft play is too complicated for us to follow! I’m sure I could understand more if I studied replays closely.

I’ll take it as a reminder not to be too glib in drawing conclusions.

Speaking of glib conclusions about bot behavior, MaasCraft looks more interesting when it plays against more interesting opponents. I concluded earlier from watching 2014 replays that it mostly moved its army forward and back along the path between bases. Well, that’s what its opponents were doing too, so it may not have had much choice. Today’s bots try more complicated maneuvers, and today’s MaasCraft reacts with its own more complicated maneuvers. I’ve seen it (seemingly intentionally) split its army to trap stray units, for example. It reacts sensibly to multi-prong attacks.

MaasCraft is still scoring poorly, but now its tactical search is showing sparks of promise—I suspect due to changes in its opponents, not itself. As a reminder, LetaBot has a search descended from the same code, turned off in some versions but likely to be turned on in final versions.

react to the future

Humans don’t react to what they see in front of them—not as such. It would be too slow. Humans react to what they expect based on what they see.

Watch a bot chase the enemy scout. The chasing units line up behind the fleeing scout. If there’s more than 1 chaser, then the others are likely wasting their time. Bots react to what they see, and it’s slow.

Watch a progamer chase the scout. The pro maneuvers back and forth, trying to cut off the scout’s line of escape and limit its choices. The pro is not reacting to the scout’s current position, but to its possible future paths.
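A bare-bones version of reacting to the future is to chase an extrapolated position instead of the current one (the lookahead and names here are my own):

```python
def predict(pos, velocity, frames):
    # Linear extrapolation of a unit's position `frames` from now.
    return (pos[0] + velocity[0] * frames, pos[1] + velocity[1] * frames)

def chase_point(scout_pos, scout_vel, lookahead=24):
    # Head for where the scout will be, not where it is. A real bot would
    # clamp the prediction to walkable ground and, like the pro, consider
    # the scout's possible escape paths rather than one straight line.
    return predict(scout_pos, scout_vel, lookahead)
```

A second chaser could aim at a different predicted point to cut off the escape route instead of joining the conga line.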

No pro sees a moving dropship without taking into account “Where is it going?” No pro storms hydralisks without considering “Which way could they run?” And so on. The pro is constantly assessing the opponent’s intentions and reacting to the future, not to the immediate situation.

I keep harping on search as The Cure For All Ills, and search does bring the future into view. But here the underlying issue is goal inference, or recognizing the opponent’s intentions. I expect that little packages of heuristic rules could recognize intentions in a lot of interesting cases without needing search, so that bots could also react to the future that they expect. I also expect that the heuristics would have to be robust or adaptive, though, so that the bot can’t be tricked too easily. Or this: Deep learning should be great at figuring out how to recognize the opponent’s goals, though it will need training data brought in by supertankers.

I gave examples of micro intentions (which way can the scout run?) and tactical intentions (where is the dropship going?), but the same works for strategic intentions: Do I expect harassment or mass attack? Do I need air defense, drop defense, detection? What weaknesses does the enemy plan leave open for me to exploit? Reading the enemy’s goals helps at all levels of abstraction.

object permanence

When I saw IceBot destroy a pylon in its base and kill the probe, and then repeatedly scout the rest of its base as if looking for more proxies, I realized: Bots do not have object permanence (certainly most bots don’t). IceBot had its choke locked up, and no second probe could have gotten in. A 3-year-old knows that objects don’t simply appear and disappear but must move or be moved from place to place, and Brood War bots do not. To be sure, a lot of them aren’t that old yet.

Maybe it’s time for bots to start doing simple reasoning about moving objects: They came from somewhere, they have a goal, they have to pass through points in between. Drop, nydus canals, and recall are the only tricks, and they have limits. “Oh, that SCV train is probably going to a new expansion—there or there.” Or: “That army might be aiming for my natural. I should siege up before it gets in range.” It’s a kind of goal inference.

Or do we still have more important things to take care of first?

you need more than 1 strategy

Martin Rooijackers aka LetaBot read my posts about Zia and wrote to point out that a zerg bot facing terran wants both mutalisk and lurker options. The reason is that terran may counter the mutas. He mentioned 5 barracks with +1, which should hard counter mutas. He also called out valkyrie and goliath possibilities, specifically pointing out that valkyries force mutas to spread out, which reduces their potential. Zerg needs to scout the build and react before overcommitting to mutalisks—at the latest when the first fliers arrive at the terran base and see what’s up.

Zerg can’t stick with tier 1 units (zerglings and hydralisks) because any likely terran midgame army will walk over them. And hive tech takes time. Lair units are key to the middlegame.

If zerg always goes mutas, any terran with strategy learning will find a way to counter the mutas and gain an advantage every game. I think this has already happened with Zia and Tscmoo terran. If zerg sometimes opens mutas and sometimes lurkers, then terran faces a risk trying to counter mutas with marines—the lurkers counter marines. Terran’s best play becomes less committal and more cautious, and that favors zerg.

Mainline pro play has the zerg starting with a limited number of mutas and using the time they buy with cautious harassment to get lurkers and rapidly tech to hive. But pros of course are totally comfortable with adaptation and tech switches. Not all games follow the main line. Today’s game of Flash (T) vs. Zero (Z) was a great example: Flash opened 14 CC, Zero responded logically with 3 hatcheries before pool and went lurkers while Flash prepared for mutalisks.

Any bot with only one strategy stands at a disadvantage against bots with opponent modeling. It’s true for all matchups. Today’s simple strategy learning will find a counter-strategy within a dozen games, usually fewer. Humans, and tomorrow’s sophisticated opponent modeling bots, may counter the strategy of the first game in the second, and should quickly find strong counters to most fixed strategies.

To beat humans, or to beat opponent modeling bots, you’ll need strategy flexibility plus either learning or a dose of randomness, ideally both. I promise. If sophisticated opponent modeling doesn’t arrive fast enough for me, I’ll provide it myself. It will make bots much more interesting to watch and to play against.
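The flexibility-plus-learning-plus-randomness combination can be sketched in a few lines. This is a hypothetical picker (the strategy names and win-rate scheme are mine, not from any bot): it keeps per-opponent results, usually plays the strategy with the best smoothed win rate, and with some probability picks at random so a fixed counter can never fully lock on.

```python
import random

# Illustrative strategy names; a real bot would have its own repertoire.
STRATEGIES = ["mutalisks", "lurkers", "hydra_bust"]

def choose_strategy(history, epsilon=0.2, rng=random):
    """history: list of (strategy, won) pairs against this opponent,
    where won is 1 or 0. With probability epsilon, explore at random;
    otherwise exploit the best smoothed win rate so far."""
    if rng.random() < epsilon or not history:
        return rng.choice(STRATEGIES)
    def win_rate(s):
        games = [won for strat, won in history if strat == s]
        # Laplace smoothing: unplayed strategies still look worth trying.
        return (sum(games) + 1) / (len(games) + 2)
    return max(STRATEGIES, key=win_rate)
```

Epsilon-greedy is about the crudest bandit method there is, but even this much denies the opponent a stationary target to model.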

Zia and mutalisk micro

Zia’s mutalisk cloud is scary when it gets big. Eventually the mutas not only one-shot the units that they target, but their bounces instantly kill nearby units. The mutalisks sweep a path of destruction. But think about it—is that efficient? If mutalisk bounces at 1/3 power kill instantly, then the main attack must usually be gross overkill. Most of the firepower is wasted.

The idea of individual mutalisk control, as introduced by the Berkeley Overmind and copied by other zergs since, is to waste no firepower. Each flier independently dances in and out for safety and ideally attacks at near its maximum rate. But watch how Tscmoo zerg implements this: Its mutalisk cloud is also scary when it gets large, but usually not as scary as it could be, because it spreads out too much. Sometimes half the mutas are posing for pictures with the ground army while half are on the job. And the attackers often pick some targets over here, some over there, and don’t kill either as fast as they should. Tscmoo doesn’t focus its fire enough; it’s the opposite mistake from Zia.

Causing damage does not win games. Maximizing your damage output is not the winning move. You want to balance between killing the most important enemies and staying alive.

Try to imagine PerfectBot’s muta micro. Even PerfectBot can’t truly play perfectly, because calculating optimal micro is infeasible. But surely PerfectBot focuses fire efficiently, switching mutas fluidly between targets and weighing importance and time to kill (based on distance, damage rate, and expected losses), to reduce overkill to near zero, waste less time flying between targets, and strike a good balance between killing the most important stuff fast and staying alive. “This takes 5 more shots to kill, 12 are shooting, might lose 1, so switch 6 to new targets.” Zia and Tscmoo zerg are no competition for Jaedong, but I think Jaedong would boggle at PerfectBot’s mutalisks.

How close can we get to PerfectBot micro today? 1. Given a set of targets in priority order, calculating how to focus them down efficiently with minimal waste seems intricate but ultimately not that hard. 2. Folding in a desire to also minimize losses makes optimal decisions computationally infeasible. Even approximations seem tough. 3. Prioritizing the targets depends on the total game situation and will have to be done heuristically. For now I guess we’ll have to settle for a simplified algorithm.
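As an illustration of point 1, here is a greedy anti-overkill allocator, a sketch of the idea rather than any bot’s real code: hand out shots in target-priority order, and stop adding shooters to a target once it has enough incoming damage to die. Points 2 and 3 (expected losses, travel time, prioritization) are deliberately ignored.

```python
def assign_shots(shooters, targets, damage_per_shot):
    """Greedy anti-overkill allocation. targets is a priority-ordered list
    of (target_id, remaining_hp); returns {target_id: shots_assigned}.
    Armor, bounce damage, and flight time are all ignored in this sketch."""
    assignment = {}
    free = shooters
    for tid, hp in targets:
        if free == 0:
            break
        # Shots needed to finish this target, rounded up.
        need = -(-hp // damage_per_shot)
        n = min(need, free)
        assignment[tid] = n
        free -= n
    return assignment
```

With illustrative numbers, a mutalisk’s main hit doing 9 damage to a 40 HP marine, a marine needs 5 shots, so 12 mutalisks split 5/5/2 over three targets instead of all 12 dumping into one, which is exactly the “switch 6 to new targets” arithmetic from the previous paragraph in miniature.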

Watching Zia last week, I thought it picked targets usually one at a time (simple 3) and once the target was chosen ignored damage taken while chasing it down (very simple 2), so the intricate-but-not-hard efficient killing calculation by itself should be a big improvement. Zia-this-week has been updated and has fancier micro than Zia-last-week, so I’m already behind the times! I got the impression that Zia-this-week is better about picking targets and switching targets and avoiding damage, but that it still wastes shots with too much overkill.

drop idea 4: ferry

A ferry drop is when you use the same transport (or transports) repeatedly to bring more units over. If it’s across a single cliff, you can also call it an elevator. A ferry drop is usually over a short distance, of course.

See the video of Oriol defeating Krasi0 in 2010 (!) by constantly ferrying units on Python from 6 main to 9 main. Oriol is a zerg player and went off-race with protoss this game. The bot lost because it kept units at the front of its natural instead of defending its main with enough forces. I’m sure Krasi0 today would put up a tougher fight, but bots still don’t try to interpret their opponent’s intentions, so they are easy to catch off guard.

Bots that invest heavily in static defense at their natural are likely to be vulnerable to early ferry drops into their main. XIMP is the obvious example. Also Killerbot seems vulnerable in ZvT before its lurkers are out, and in ZvP before its mutas are out. Where ferry drop works in the early game, other early flying tricks like zerg slow drop and terran factory float are also likely to work.

A ferry drop is more likely to succeed if you can keep it unscouted, whether by distraction or by force or by knowing where your opponent can’t see. Of course those are advanced skills.

Ferry drops are especially menacing if you can ferry a dangerous army into the enemy main before it is noticed. I could be wrong, but my guess is that the stratagem is more likely to succeed against zerg bots. Protoss and terran have spread-out buildings which give vision over more terrain in their bases. Zerg should have no trouble monitoring its borders with overlords, but most bots don’t.

drop idea 3: push up the cliff

Consider a terran bot which is on low ground under the cliff of its enemy’s base and pushing toward the enemy natural. Some bots bring vision along and fire on buildings in the enemy base. Because terrans are strong in defense, defeating the push needs either a greatly superior force, or a coordinated push-break which today’s bots can’t pull off. Looking at it from the other side, the attacker needs to move the push forward to tighten the screws, but every movement introduces some risk—the push is weaker while tanks are repositioning.

Instead of pushing the long way toward the natural, terran could bring a dropship and push up the cliff, the short way into the enemy main. Pros don’t do that—they go for more dynamic dropship play. But players who are at the level of bots often like the cliff push. It’s hard to defeat at that level. The units dropped on high ground are somewhat cut off but are supported from below, and if they are lost the cliff protects the low-ground units and the cliff push can be resumed with reinforcements.