
pathing 3 - hierarchical algorithms based on A*

As the next step in tracing the development of A* methods, I picked this older paper from 2004 by Adi Botea, Martin Müller, and Jonathan Schaeffer of the games group at the University of Alberta:

Near Optimal Hierarchical Path-Finding

Links to code in the paper are expired, unsurprisingly. The presented algorithm seems OK to me (it’s about saving time by accepting a slightly worse path), but I was most interested in the literature review from pages 5-8.

They talk about a bunch of algorithms based on A* but in some way more efficient. These fancier algorithms are hierarchical, with at least two levels of abstraction. If there are two levels, then the higher level is something like “go through these map regions” (maybe giving border points to pass through), and the lower level is more like “go through these points within a region.” Finding a path means doing a hierarchical search to find a high-level path (“go through these regions”) and a low-level path (“go through these map grid points”). Each level may be an A* search itself.

Note: This is not the same kind of hierarchical search that I promised to talk about, though it’s related. Pathfinding hierarchies have only a simple form of abstraction.

My conclusions:

The hierarchical search, if it has two levels, goes something like this: The top-level search plans a path through map regions. To find the cost of traversing a region (“go from this choke to that one”), it calls on the low-level search. The low-level search should cache answers, because it’s going to be asked the same questions frequently. Details vary by algorithm but seem easy enough to figure out.
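The caching idea above can be sketched in a few lines. This is my own illustration, not code from the paper: `low_level_cost` stands in for the low-level A* inside one region (here faked with straight-line distance), and memoization means repeated questions are answered for free.

```python
# Sketch of the two-level idea: the high-level search asks for region
# traversal costs, and the low-level search memoizes its answers.
# Function names are illustrative assumptions, not from the paper.
from functools import lru_cache

@lru_cache(maxsize=None)
def low_level_cost(entry, exit):
    # In a real bot this would run A* on the grid inside one region.
    # Straight-line distance stands in as a placeholder here.
    (x1, y1), (x2, y2) = entry, exit
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def high_level_cost(border_points):
    # Cost of a high-level path through the chosen border points.
    return sum(low_level_cost(a, b)
               for a, b in zip(border_points, border_points[1:]))
```

The second time any (entry, exit) pair comes up, `lru_cache` returns the stored answer without redoing the search.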

Maps are dynamic. Not everything stays put. In Starcraft, blocking minerals can be mined out and map buildings can be destroyed. Your own and the opponent’s buildings and units act as physical obstacles too. For full accuracy, the low-level search has to take everything into account. Ideas I’ve seen so far for coping with this within the A* family seem to amount to “replan if the map changes,” which is OK for occasional changes like the destruction of map obstacles.
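"Replan if the map changes" pairs naturally with the caching above: when a tile changes, throw away any cached path that crosses it and recompute on demand. A minimal sketch, with names of my own invention:

```python
# Sketch: cached paths invalidated when a changed tile lies on them.
# Class and method names are illustrative, not from any bot.
class PathCache:
    def __init__(self):
        self.paths = {}   # (start, goal) -> list of tiles on the path

    def store(self, start, goal, path):
        self.paths[(start, goal)] = path

    def on_map_change(self, changed_tile):
        # Drop every cached path that passes through the changed tile;
        # untouched paths stay valid and stay cached.
        stale = [k for k, p in self.paths.items() if changed_tile in p]
        for k in stale:
            del self.paths[k]
```

For occasional events like a destroyed map obstacle, this keeps replanning cheap: only the affected paths are recomputed.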

Be lazy. Not only maps are dynamic, goals are dynamic too; before you reach the end of your path, you may change your mind and want to go somewhere else after all. So don’t spend cpu time to plan the whole path in full accuracy. Make sure the high-level path is good and plan only your immediate moves in full accuracy. One idea is to have a quick low-level search that only takes into account map features and a full-accuracy low-level search that takes everything into account.

Group movement is important in Starcraft, and this paper doesn’t talk about it. You usually want your units together (not straggling separately past obstacles, or taking different bridges when they might find trouble before joining up again), and if the enemy is around then you care about good formation. That deserves another post.

Next: One or two posts about the latest in the A* family. I still have reading to do, so probably not tomorrow.

pathing 2: the classic A* algorithm

The A* algorithm (pronounced “A star”) is famous in AI. It’s a general-purpose best-first graph search algorithm, and finding paths is only one of its uses (though the biggest).

The Wikipedia article makes it sound more complicated than it is. If you need an introduction from zero, I thought a better source was A* Pathfinding for Beginners by Patrick Lester, from 2005 (though some further reading links are broken now).

The situation: You start at a node in a graph. In the graph, every arc between nodes gives the distance between them. Somewhere out there in the graph are goal nodes. You want to find the closest goal node, or at least one of them. Luckily you don’t have to search blindly, because you have a heuristic function h(x) which gives you a guess, for each node x, telling how close it may be to a goal. Of course h(x) = 0 when x is a goal node.

The algorithm: You keep an “open list” of nodes that are in line to be searched. For each node x in the open list you remember its distance from the start, which is traditionally called g(x). The open list starts out containing the start node, with distance 0 from itself. Each search step is: From the open list, pick the node x with the lowest value g(x) + h(x), the one which is estimated to be closest to a goal (that’s what makes it a best-first algorithm). g is how far you have come, h is how far you estimate you have to go, and their sum is the estimated total distance. If x is a goal, done. Otherwise remove x from the open list and add each of x’s neighbors y that has not already been visited, recording its distance g(y) = g(x) plus the length of the arc from x to y (a node that is or ever was on the open list has been visited). That’s all there is to it; code may be filled out with special-case improvements or implementation details, but the idea is that simple.
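The description above fits in a short function. This is a minimal sketch, not any particular bot's implementation: `neighbors(x)` yields (neighbor, arc cost) pairs, `h` is the heuristic, and `is_goal` tests for a goal node.

```python
# Minimal A* over a generic graph, following the description above.
import heapq

def a_star(start, neighbors, h, is_goal):
    open_list = [(h(start), 0, start)]   # entries are (g + h, g, node)
    g = {start: 0}                        # best known distance from start
    parent = {start: None}
    while open_list:
        f, gx, x = heapq.heappop(open_list)
        if gx > g[x]:
            continue                      # stale entry; a shorter way was found
        if is_goal(x):
            path = []
            while x is not None:          # walk the parent links back to start
                path.append(x)
                x = parent[x]
            return list(reversed(path))
        for y, cost in neighbors(x):
            gy = gx + cost
            if y not in g or gy < g[y]:   # first visit, or a shorter way to y
                g[y] = gy
                parent[y] = x
                heapq.heappush(open_list, (gy + h(y), gy, y))
    return None                           # no goal is reachable
```

The heap is the priority queue mentioned below; pushing a fresh entry instead of updating an old one is one of those implementation details that doesn't change the idea.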

Because you always take the best node from the open list, the open list can be implemented as a priority queue. Different ways of breaking ties in the priority queue give different search behavior. All the variants are called A*.

What is the mysterious heuristic function h(x)? If h is the exact distance to the goal, then A* wastes no time on side paths and proceeds straight to the goal. But if you had that, you wouldn’t need A*. If h never overestimates the distance to the goal, then A* is guaranteed to find the shortest path: It took what it thought was shortest at each step, and may have made optimistic mistakes but never overshot, so it could not have overlooked a shorter path.

So for pathfinding on a map, it generally works to set h(x) = the straight-line distance from x to the goal. The actual distance to the goal may be longer than the straight-line distance, but never shorter, so you’ll find the best path. There may be smarter heuristics that know about map regions or obstacles, but they’re not obvious (and not for this post).

A* is mathematically optimal in a certain sense; you can call it “the fastest” algorithm that solves its exact problem. But don’t be fooled. You may be able to do better than A* by solving a different problem. If you imagine pathfinding on a featureless grid with no obstacles, you can calculate a straight-line path from the start to the goal without examining any nodes in between, because you already know all about them—you’re solving an easier problem and you can do it in one step. There may be ways to cast Starcraft pathfinding as an easier problem than graph search (I’m pretty sure there are), so A* is not necessarily optimal for us.

A* is not able to solve pathfinding in full generality. Make each map tile a graph node. A tile blocked by a building or destructible map obstacle can be given a larger distance from its neighbors to represent the time it takes to clear the obstacle. You can even represent that tiles which are under enemy fire are unsafe to travel over by making the distance to those tiles longer, so that other paths are preferred when available. But A* assumes that the graph is static. It can’t cope with other units moving around or with buildings being built or lifting off. Starcraft is too dynamic for A* to solve the whole range of pathfinding problems. A* in its basic form can only do part of the job.

It looks like there’s a ton of stuff in the A* space, so it needs at least two more posts before I move on to the next pathfinding topic. Tomorrow: Hierarchical algorithms based on A*.

Tscmoo terran apparent neural network output

I was watching the new Tscmoo terran with its reputed neural networks.

screenshot showing what looks like neural network output

Hmm, what are those red and blue dots?

detail of apparent neural network output

I read that as the output of the neural network. The dot diagram is incomprehensible unless we know about the network layout. The text is the interpretation; it looks like strategy instructions or hints to the rest of the program. I timed a couple of updates and found them 15 seconds apart, which fits with strategy information.

I can’t tell what the details mean. How can the army composition be tank-vulture if you open with two starports (see those wraiths on the screen)? Is that a prediction for the opponent, maybe? What does “support_wraiths” mean, since I didn’t notice the wraiths seeming to support or be supported by anything?

crazy new Tscmoo protoss strategy

Whoa, did y’all see that? Tscmoo protoss has a hilarious new strategy: Cannon contain into mass dark archons with mind control!

The cannon contain may win a lot of games against unprepared bots, but the dark archons—I’ve never seen that many at once.... This is even wilder than Tscmoo terran’s nuke strategy.

dark archons and zealots

Update: An even funnier picture: Mass dark archons chasing after a floating engineering bay.

dark archons chase an ebay

pathing 1: intro

I don’t know much about pathfinding, so I decided to write about it.

I wrote earlier about threat-aware pathing, but I didn’t know how to implement it efficiently. Bots are real time and have a lot to think about, so faster is better.

I’ll write about classic A* pathing and its descendants, about potential fields, and about any other interesting ideas my shovel turns up (I’ve seen the corners of a couple other things that might be cool). I’ll look into the pathfinding features of libraries like BWEM. I hope to learn something about how it all works at the low level with BWAPI and how it might interact with Starcraft’s built-in pathfinding. In the end I’ll try to put pieces together to outline how to find paths in full generality, taking into account strategic and tactical goals, obstructions like terrain, destructible map objects, buildings and units, and fields of vision and fields of fire for both sides. I’ll try, but I don’t promise to succeed!

This should be old hat for old hands, but a lot of it is new to me. Maybe I’ll get some help in the comments.

Tomorrow: The classic A* algorithm.

local preponderance of force

I want to borrow a term from the real life military again, an organizing principle for thinking about many kinds of tactical maneuvers.

Remember Lanchester’s Square Law as used by MaasCraft? It is an approximation which says that the power of a force of ranged units is proportional to the square of the number of units. According to this, a force of 6 dragoons is not twice as strong as 3 dragoons, it is 6^2 / 3^2 = 4 times as strong.

In other words, outnumbering the enemy by a little more can be a lot better. 6 dragoons versus 5 is 36/25 ~ 1.4:1 power ratio by the square law, and 7 dragoons versus 5 is 49/25 or nearly 2:1 power ratio. Though it’s good to remember that there are simplifying assumptions behind the derivation of Lanchester’s Square Law. It breaks down if, say, the rear dragoons have to maneuver to get into position. Even in the ideal case it’s not exact; it’s a continuous approximation of a discrete situation.
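The arithmetic above, plus the UAlbertaBot-style fight-or-flee rule mentioned below, fits in two lines. The function names are mine, not from MaasCraft or UAlbertaBot:

```python
# Lanchester's Square Law as a power ratio, plus the simple decision rule
# "if it looks like I'll win then fight, else run away."
def square_law_ratio(mine, theirs):
    # power of m ranged units vs n is approximated as (m/n)^2
    return (mine / theirs) ** 2

def should_fight(mine, theirs, margin=1.0):
    # fight only when the estimated power ratio clears the margin
    return square_law_ratio(mine, theirs) > margin
```

So 6 dragoons versus 3 gives a ratio of 4, and 7 versus 5 gives 1.96—with all the caveats about the approximation noted above.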

Having a local preponderance of force is the technical military term for outnumbering the other side. You’ve got more oomph in the fight. If you have a local preponderance of force then you usually want to join battle, because it will help you pull ahead globally and eventually win. That’s the thinking behind UAlbertaBot’s tactical decision making, which amounts to “if it looks like I’ll win then fight, else run away.”

Preponderance of force is why you often want to keep your army together. If you get into a battle, you’ll have the biggest force you could have.

• Killerbot, tscmoo zerg, and Overkill retreat zerglings to their sunkens when faced with a bigger force. Zerglings and sunkens together are more than their sum.

• LetaBot can spread out its army when defending, but when attacking usually tries to concentrate it into a single force for the strongest possible strike. Sometimes LetaBot misbuilds its wall or otherwise leaves some units behind in its base. When that happens, it’s plain to see that the divided army is vastly weaker.

But preponderance of force is also why you often want to split your army, on the principle “hit ‘em where they ain’t” or “fight the base, not the army”.

• Fast units like vultures and mutalisks go harassing on their own because they can race to undefended spots, where they have local preponderance of force, and cause trouble until defenders catch up.

• Ranged units can achieve preponderance of force over units with shorter range by standing back out of reach: They can shoot you and you can’t shoot them because of a cliff, or intervening forces, or whatever. This is the idea behind tank drops on a cliff (which I’ve seen only from IceBot), and it is why LetaBot puts its infantry in front of its tanks.

• Overlord hunting and depot sniping are extreme cases of local preponderance of force. Shoot stuff that can’t shoot back.

• Air units of all kinds often split from the ground army because they are not hindered by terrain. They can achieve local preponderance of force by outmaneuvering ground units, for example using cliffs.

• Drops work best when the drop lands far from defenders and close to juicy targets like workers. It’s true both for harassment drops and doom drops.

Anyway, preponderance of force is a key organizing principle for tactics, an idea that you can use to understand many kinds of tactical choices.

A bot with a strong enough understanding of preponderance of force (maybe from a combat simulator) could theoretically figure out for itself all the uses above, and more besides.

Notes about breaking the general rule: 1. Often it’s correct to fight immediately when you have a preponderance of force, but not always. You may do better by waiting until your advantage is bigger. “I could break this static defense, but I’ll have more left over and end up stronger overall if I wait for the reavers.” Or: “This is a good angle for wraith harass, but now I see a better one.”

2. Sometimes you can come out ahead in a fight from behind, even without superior micro, provided you can engage and disengage at will. Suppose it’s mutas versus marines and the marines win a stand-up fight. The mutas may still be able to poke in and pick off a marine or two before they dance back out of range, taking damage but no losses. The mutas are willing to fight (briefly) because they can come out ahead. It works because the mutas are fast and have more hit points. A similar idea is to send in battlecruisers to fight until they start to take too much damage, then retreat and repair.

3. When you’re ahead in income, it may be faster and safer to win by attrition even if you lose more in every fight. Keep attacking so the enemy must make units and not workers, and can’t catch up in income. If your ratio of income (3:1) beats your ratio of losses (2:1), you’re making progress. And of course the reverse thinking goes for the other side: Fight only the most advantageous battles; you have to win battles by a wide margin to have a chance.

humans don’t understand bots

Igor Dimitrijevic’s comment on yesterday’s post reminded me: It’s difficult to understand much of a bot’s behavior by watching it.

Krasi0 is a good example. In the last several months I’ve watched the old veteran bot grow much stronger, returning to the top ranks. I can describe in general terms some things Krasi0 has improved at: It is more aggressive, it is better at protecting its workers from danger, it is smarter about where it sieges its tanks. (It also solved crashing bugs.) But I feel sure that the points that I’ve noticed are only the tip of the iceberg. There must be not only details but whole classes of behaviors that I did not pick up on at all—otherwise it could not have improved so much.

I guess humans don’t have the perceptual bandwidth to take it all in, at least not without the experience or prior knowledge to know what to look for. Starcraft play is too complicated for us to follow! I’m sure I could understand more if I studied replays closely.

I’ll take it as a reminder not to be too glib in drawing conclusions.

Speaking of glib conclusions about bot behavior, MaasCraft looks more interesting when it plays against more interesting opponents. I concluded earlier from watching 2014 replays that it mostly moved its army forward and back along the path between bases. Well, that’s what its opponents were doing too, so it may not have had much choice. Today’s bots try more complicated maneuvers, and today’s MaasCraft reacts with its own more complicated maneuvers. I’ve seen it (seemingly intentionally) split its army to trap stray units, for example. It reacts sensibly to multi-prong attacks.

MaasCraft is still scoring poorly, but now its tactical search is showing sparks of promise—I suspect due to changes in its opponents, not itself. As a reminder, LetaBot has a search descended from the same code, turned off in some versions but likely to be turned on in final versions.

react to the future

Humans don’t react to what they see in front of them—not as such. It would be too slow. Humans react to what they expect based on what they see.

Watch a bot chase the enemy scout. The chasing units line up behind the fleeing scout. If there’s more than 1 chaser, then the others are likely wasting their time. Bots react to what they see, and it’s slow.

Watch a progamer chase the scout. The pro maneuvers back and forth, trying to cut off the scout’s line of escape and limit its choices. The pro is not reacting to the scout’s current position, but to its possible future paths.
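The simplest version of "react to the expected future" is a linear lead: chase the point where the scout will be, not where it is. A minimal sketch, with all names my own:

```python
# Aim at the scout's predicted position, assuming it keeps its current
# velocity for the next few frames. Purely illustrative.
def lead_target(scout_pos, scout_vel, lookahead_frames):
    x, y = scout_pos
    vx, vy = scout_vel
    return (x + vx * lookahead_frames, y + vy * lookahead_frames)
```

A second chaser could use a different lookahead, so the two converge on the scout's path from different points instead of lining up single file behind it.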

No pro sees a moving dropship without taking into account “Where is it going?” No pro storms hydralisks without considering “Which way could they run?” And so on. The pro is constantly assessing the opponent’s intentions and reacting to the future, not to the immediate situation.

I keep harping on search as The Cure For All Ills, and search does bring the future into view. But here the underlying issue is goal inference, or recognizing the opponent’s intentions. I expect that little packages of heuristic rules could recognize intentions in a lot of interesting cases without needing search, so that bots could also react to the future that they expect. I also expect that the heuristics would have to be robust or adaptive, though, so that the bot can’t be tricked too easily. Or this: Deep learning should be great at figuring out how to recognize the opponent’s goals, though it will need training data brought in supertankers.

I gave examples of micro intentions (which way can the scout run?) and tactical intentions (where is the dropship going?), but the same works for strategic intentions: Do I expect harassment or mass attack? Do I need air defense, drop defense, detection? What weaknesses does the enemy plan leave open for me to exploit? Reading the enemy’s goals helps at all levels of abstraction.

object permanence

When I saw IceBot destroy a pylon in its base and kill the probe, and then repeatedly scout the rest of its base as if looking for more proxies, I realized: Bots do not have object permanence (certainly most bots don’t). IceBot had its choke locked up, and no second probe could have gotten in. A 3-year-old knows that objects don’t simply appear and disappear but must move or be moved from place to place, and Brood War bots do not. To be sure, a lot of them aren’t that old yet.

Maybe it’s time for bots to start doing simple reasoning about moving objects: They came from somewhere, they have a goal, they have to pass through points in between. Drop, nydus canals, and recall are the only tricks, and they have limits. “Oh, that SCV train is probably going to a new expansion—there or there.” Or: “That army might be aiming for my natural. I should siege up before it gets in range.” It’s a kind of goal inference.

Or do we still have more important things to take care of first?

you need more than 1 strategy

Martin Rooijackers aka LetaBot read my posts about Zia and wrote to point out that a zerg bot facing terran wants both mutalisk and lurker options. The reason is that terran may counter the mutas. He mentioned 5 barracks with +1, which should hard counter mutas. He also called out valkyrie and goliath possibilities, specifically pointing out that valkyries force mutas to spread out, which reduces their potential. Zerg needs to scout the build and react before overcommitting to mutalisks—at the latest when the first fliers arrive at the terran base and see what’s up.

Zerg can’t stick with tier 1 units (zerglings and hydralisks) because any likely terran midgame army will walk over them. And hive tech takes time. Lair units are key to the middlegame.

If zerg always goes mutas, any terran with strategy learning will find a way to counter the mutas and gain an advantage every game. I think this has already happened with Zia and Tscmoo terran. If zerg sometimes opens mutas and sometimes lurkers, then terran faces a risk trying to counter mutas with marines—the lurkers counter marines. Terran’s best play becomes less committal and more cautious, and that favors zerg.

Mainline pro play has the zerg starting with a limited number of mutas and using the time they buy with cautious harassment to get lurkers and rapidly tech to hive. But pros of course are totally comfortable with adaptation and tech switches. Not all games follow the main line. Today’s game of Flash (T) vs. Zero (Z) was a great example: Flash opened 14 CC, Zero responded logically with 3 hatcheries before pool and went lurkers while Flash prepared for mutalisks.

Any bot with only one strategy stands at a disadvantage against bots with opponent modeling. It’s true for all matchups. Today’s simple strategy learning will find a counter-strategy within a dozen games, usually less. Humans, and tomorrow’s sophisticated opponent modeling bots, may counter the strategy of the first game in the second, and should quickly find strong counters to most fixed strategies.

To beat humans, or to beat opponent modeling bots, you’ll need strategy flexibility plus either learning or a dose of randomness, ideally both. I promise. If sophisticated opponent modeling doesn’t arrive fast enough for me, I’ll provide it myself. It will make bots much more interesting to watch and to play against.

Zia and mutalisk micro

Zia’s mutalisk cloud is scary when it gets big. Eventually the mutas not only one-shot the units that they target, but their bounces instantly kill nearby units. The mutalisks sweep a path of destruction. But think about it—is that efficient? If mutalisk bounces at 1/3 power kill instantly, then the main attack must usually be gross overkill. Most of the firepower is wasted.

The idea of individual mutalisk control, as introduced by the Berkeley Overmind and copied by other zergs since, is to waste no firepower. Each flier independently dances in and out for safety and ideally attacks at near its maximum rate. But watch how Tscmoo zerg implements this: Its mutalisk cloud is also scary when it gets large, but usually not as scary as it could be, because it spreads out too much. Sometimes half the mutas are posing for pictures with the ground army while half are on the job. And the attackers often pick some targets over here, some over there, and don’t kill either as fast as they should. Tscmoo doesn’t focus its fire enough; it’s the opposite mistake from Zia.

Causing damage does not win games. Maximizing your damage output is not the winning move. You want to balance between killing the most important enemies and staying alive.

Try to imagine PerfectBot’s muta micro. Even PerfectBot can’t truly play perfectly, because calculating optimal micro is infeasible. But surely PerfectBot focuses fire efficiently, switching mutas fluidly between targets, taking into account importance and time to kill based on distance and damage rate and expected losses, to reduce overkill to near zero and spend less time flying between targets and strike a good balance between killing the most important stuff fast and staying alive. “This takes 5 more shots to kill, 12 are shooting, might lose 1, so switch 6 to new targets.” Zia and Tscmoo zerg are no competition for Jaedong, but I think Jaedong would boggle at PerfectBot’s mutalisks.

How close can we get to PerfectBot micro today? 1. Given a set of targets in priority order, calculating how to focus them down efficiently with minimal waste seems intricate but ultimately not that hard. 2. Folding in a desire to also minimize losses makes optimal decisions computationally infeasible. Even approximations seem tough. 3. Prioritizing the targets depends on the total game situation and will have to be done heuristically. For now I guess we’ll have to settle for a simplified algorithm.
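Point 1, the overkill-free allocation, can be sketched greedily: walk the targets in priority order, assign each only as many shooters as it needs to die, and spill the rest onto the next target. In a real bot `shots_to_kill` would come from hit points and damage; here it is given directly, and all names are illustrative.

```python
# Greedy shot allocation to minimize overkill (point 1 above).
def allocate_shots(shooters, targets):
    # targets: list of (target_id, shots_to_kill), highest priority first
    assignment = {}
    remaining = shooters
    for target_id, need in targets:
        take = min(need, remaining)   # no more shooters than the kill needs
        if take > 0:
            assignment[target_id] = take
            remaining -= take
        if remaining == 0:
            break                     # every shooter has a job
    return assignment
```

This captures "this takes 5 more shots to kill, 12 are shooting, so switch the extras to new targets," though it ignores travel time and losses (points 2 and 3).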

Watching Zia last week, I thought it picked targets usually one at a time (simple 3) and once the target was chosen ignored damage taken while chasing it down (very simple 2), so the intricate-but-not-hard efficient killing calculation by itself should be a big improvement. Zia-this-week has been updated and has fancier micro than Zia-last-week, so I’m already behind the times! I got the impression that Zia-this-week is better about picking targets and switching targets and avoiding damage, but that it still wastes shots with too much overkill.

drop idea 4: ferry

A ferry drop is when you use the same transport (or transports) repeatedly to bring more units over. If it’s across a single cliff, you can also call it an elevator. A ferry drop is usually over a short distance, of course.

See the video of Oriol defeating Krasi0 in 2010 (!) by constantly ferrying units on Python from 6 main to 9 main. Oriol is a zerg player and went off-race with protoss this game. The bot lost because it kept units at the front of its natural instead of defending its main with enough forces. I’m sure Krasi0 today would put up a tougher fight, but bots still don’t try to interpret their opponent’s intentions so they are easy to catch off guard.

Bots that invest heavily in static defense at their natural are likely to be vulnerable to early ferry drops into their main. XIMP is the obvious example. Also Killerbot seems vulnerable in ZvT before its lurkers are out, and in ZvP before its mutas are out. Where ferry drop works in the early game, other early flying tricks like zerg slow drop and terran factory float are also likely to work.

A ferry drop is more likely to succeed if you can keep it unscouted, whether by distraction or by force or by knowing where your opponent can’t see. Of course those are advanced skills.

Ferry drops are especially menacing if you can ferry a dangerous army into the enemy main before it is noticed. I could be wrong, but my guess is that the stratagem is more likely to succeed against zerg bots. Protoss and terran have spread-out buildings which give vision over more terrain in their bases. Zerg should have no trouble monitoring its borders with overlords, but most bots don’t.

drop idea 3: push up the cliff

Consider a terran bot which is on low ground under the cliff of its enemy’s base and pushing toward the enemy natural. Some bots bring vision along and fire on buildings in the enemy base. Because terrans are strong in defense, defeating the push needs either a greatly superior force, or a coordinated push-break which today’s bots can’t pull off. Looking at it from the other side, the attacker needs to move the push forward to tighten the screws, but every movement introduces some risk—the push is weaker while tanks are repositioning.

Instead of pushing the long way toward the natural, terran could bring a dropship and push up the cliff, the short way into the enemy main. Pros don’t do that—they go for more dynamic dropship play. But players who are at the level of bots often like the cliff push. It’s hard to defeat at that level. The units dropped on high ground are somewhat cut off but are supported from below, and if they are lost the cliff protects the low-ground units and the cliff push can be resumed with reinforcements.

drop idea 2: zealot and other bombs

Tanks have long range. Tanks cannot shoot up. Dropping stuff on tanks is a standard way to fight them without getting blown to bits on the way in.

Protoss usually drop zealots: “zealot bombing.” Dark templar are also okay, and they’re especially okay against bots which are quick to scan when a dark templar walks into range. Zerg usually drop zerglings or ultralisks. (Fancy twist: Add a defiler for swarm. Drop other units first to soak up shots.) Terran can go with whatever mech units are convenient. But any units threaten to wreak havoc—probe bombing can be effective.

Here are IceBot’s undefended cliff tanks all but calling out to be bombed.

cliff tanks ripe for bombing

Sieged tanks have a minimum range, so drop right on them, ideally spreading your units to drop one or a few on top of each tank. The terran will be faced with a dilemma: Unsiege and lose defensive power, or stay put and let the tanks inflict splash damage on each other. Either way, this would be a good time for the rest of your army to rush in and break the push (a more advanced skill).

Terran has defensive options, of course. Anti-air can make drops impossible (you could go drop on an expansion somewhere instead). If the terran mix is tanks and vultures, then it may be inefficient to drop zerglings, but you could try ultralisks. If the terran mix is tanks and infantry, as LetaBot often favors, then dropping on the army may be a bad idea. But notice: Even the threat of bombing the tanks constrains the terran’s play. Anti-air and tanks must stay together for the defense to work. Whatever gives the terran enemy fewer choices gives you more.

The idea behind the protoss bulldog opening is to defeat rear tanks with zealot bombs while dragoons overrun frontal defenses. Bulldog would be a difficult opening to code because of the army coordination it calls for, but it would be cool to see a bot play it. Bulldog counters LetaBot’s siege-expand opening.

drop idea 1: take the islands

Drop has countless uses. A lot of the uses are complex and tough to code, though (I dare you to do reaver drop with all the trimmings). Right now, a few bots do simple harassment drops. No bot has much fun with other drops. I’m going to spend a few posts on drop ideas that seem to me like cool next steps. Who knows, I might even be right!

If you have a macro bot, why not take island expansions?

The SSCAIT map pack, as distributed, includes 3 maps with island expansions, out of 15 maps total (20% of the maps have islands). In all 3 cases, to take an island, you have to drop a worker, mine out the blocking minerals, and then build or float in the expansion. It makes a pretty small state machine. Maps without blocking minerals are thought to favor terran because command centers can float, so they are little used nowadays.
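That "pretty small state machine" might look like this. The state and transition names are my own illustration, not from any bot:

```python
# Sketch of the island-taking state machine: drop a worker, mine out the
# blocking minerals, build (or float in) the expansion, transfer workers.
from enum import Enum, auto

class IslandState(Enum):
    DROP_WORKER = auto()
    CLEAR_MINERALS = auto()
    BUILD_EXPANSION = auto()
    TRANSFER_WORKERS = auto()
    MINING = auto()

NEXT = {
    IslandState.DROP_WORKER: IslandState.CLEAR_MINERALS,
    IslandState.CLEAR_MINERALS: IslandState.BUILD_EXPANSION,
    IslandState.BUILD_EXPANSION: IslandState.TRANSFER_WORKERS,
    IslandState.TRANSFER_WORKERS: IslandState.MINING,
}

def advance(state):
    # Move to the next step when the current one completes; MINING is terminal.
    return NEXT.get(state, state)
```

Each transition fires when the bot observes the current step is done (worker landed, minerals gone, expansion finished), which is the easy part; the scheduling around it is the hard part, as noted below.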

The SSCAIT maps with islands:

  • Andromeda
  • Empire of the Sun
  • Python
Andromeda island
Empire of the Sun island
Python island

Probably the hard part is not taking the island, but populating it efficiently. You want to be able to schedule a task to transfer workers when the expansion is ready to mine—whether by drop, nydus, or recall. (And if you’re thorough, you might want to be able to transfer the workers elsewhere when the island mines out. Nydus and islands are in love and belong together.)

Since no bots take islands yet, surely few bots are capable of attacking islands—probably only bots that already go air. Against many opponents you can get one or two invulnerable expansions, which ought to be a winning edge.

Of course most maps don’t include islands, and if opponents do attack then islands are harder to defend. But a chance at a big macro edge on 20% of maps is nothing to sneeze at. How many opponents will even scout the island?

new zerg bot Zia

Speaking of zerg being easiest, I’m pleased with the aggressive new zerg bot Zia (games cast by Nepeta). Zia’s description says it’s “just for fun.”

Zia goes for middle game play with mutalisks and zerglings, complete with upgrades that eventually reach 2-2. (One game in the video also has a hydralisk switch which did not seem advisable.) Other mutalisk bots follow the lead of the Berkeley Overmind and spread out their mutas for (what is advertised as) maximum overall damage rate. Zia doesn’t seem to care about bot fashion. Zia concentrates its mutas into a bunch more like a human player, which may be less efficient in creating damage from an operations research point of view but is aggressive and can be more efficient at actually killing targets—and that’s how you win games.

Attack-oriented opponents, even strong ones like Iron and tscmoo terran, seemed unable to defend against so many mutas flying so aggressively and backed by lings. Defense-oriented bots like LetaBot (which holds its base with active defense and saves up its strength for mighty strikes) didn’t have much trouble, though. I also saw Krasi0, a hardened veteran that learned mutalisk defense facing the Berkeley Overmind, adapt effortlessly and win.

I like Zia, but by now you must realize that I’m not psychologically capable of finishing a post without asking for more. Mutalisk targeting looks a little haphazard—too much zipping about, hitting the wrong enemies, and taking incidental damage. When you concentrate your force, it becomes much more important to concentrate it in the right place! One thing the Berkeley Overmind did right was to fly around the perimeter of the defended area looking for weak points. Also the lings are not attacking as much as they should, as if they were sometimes ordered to move while next to a target. Would that be fun stuff to improve?

Update: Zia starts +1 attack for its mutalisks before spending on the first mutalisk. That’s dedication!