
Zia and mutalisk micro

Zia’s mutalisk cloud is scary when it gets big. Eventually the mutas not only one-shot the units that they target, but their bounces instantly kill nearby units. The mutalisks sweep a path of destruction. But think about it—is that efficient? If mutalisk bounces at 1/3 power kill instantly, then the main attack must usually be gross overkill. Most of the firepower is wasted.

The idea of individual mutalisk control, as introduced by the Berkeley Overmind and copied by other zergs since, is to waste no firepower. Each flier independently dances in and out for safety and ideally attacks at near its maximum rate. But watch how Tscmoo zerg implements this: Its mutalisk cloud is also scary when it gets large, but usually not as scary as it could be, because it spreads out too much. Sometimes half the mutas are posing for pictures with the ground army while half are on the job. And the attackers often pick some targets over here, some over there, and don’t kill either as fast as they should. Tscmoo doesn’t focus its fire enough; it’s the opposite mistake from Zia.

Causing damage does not win games. Maximizing your damage output is not the winning move. You want to balance between killing the most important enemies and staying alive.

Try to imagine PerfectBot’s muta micro. Even PerfectBot can’t truly play perfectly, because calculating optimal micro is infeasible. But surely PerfectBot focuses fire efficiently, switching mutas fluidly between targets, taking importance and time to kill (based on distance, damage rate, and expected losses) into account to reduce overkill to near zero, spend less time flying between targets, and strike a good balance between killing the most important stuff fast and staying alive. “This takes 5 more shots to kill, 12 are shooting, might lose 1, so switch 6 to new targets.” Zia and Tscmoo zerg are no competition for Jaedong, but I think Jaedong would boggle at PerfectBot’s mutalisks.

How close can we get to PerfectBot micro today?

  1. Given a set of targets in priority order, calculating how to focus them down efficiently with minimal waste seems intricate but ultimately not that hard.
  2. Folding in a desire to also minimize losses makes optimal decisions computationally infeasible. Even approximations seem tough.
  3. Prioritizing the targets depends on the total game situation and will have to be done heuristically.

For now I guess we’ll have to settle for a simplified algorithm.
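To show that the intricate-but-not-hard part really is tractable, here is a minimal sketch of greedy shot allocation: walk the targets in priority order and peel off just enough attackers to kill each one, so little firepower is wasted on overkill. The flat per-volley damage and all names are my own simplifying assumptions, not any bot’s real code.

```python
def assign_attackers(targets, attackers, damage):
    """Assign attackers to targets in priority order, allocating just
    enough shots to kill each target, which minimizes overkill.
    targets: list of hit points, sorted by priority (most important first).
    Returns a dict: attacker index -> target index."""
    assignments = {}
    free = list(range(len(attackers)))
    for t, hp in enumerate(targets):
        shots_needed = -(-hp // damage)   # ceiling division
        for _ in range(shots_needed):
            if not free:                  # out of attackers: done
                return assignments
            assignments[free.pop(0)] = t
    return assignments

# 12 mutalisks at 9 damage per volley, three targets in priority order.
plan = assign_attackers([41, 20, 60], list(range(12)), 9)
```

A real implementation would recompute this every few frames as hit points and distances change, but the shape of the calculation stays this simple.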

Watching Zia last week, I thought it picked targets usually one at a time (simple 3) and once the target was chosen ignored damage taken while chasing it down (very simple 2), so the intricate-but-not-hard efficient killing calculation by itself should be a big improvement. Zia-this-week has been updated and has fancier micro than Zia-last-week, so I’m already behind the times! I got the impression that Zia-this-week is better about picking targets and switching targets and avoiding damage, but that it still wastes shots with too much overkill.

hitting the wall

In a game last month, Tyr by Simon Prins was winning until this happened.

screenshot of game in which Tyr blocked itself into its own base

Pro tip: Don’t do that. The Great Wall doesn’t keep barbarians out, it keeps you in. The protoss bot by Roman Danielis dropped a dark templar inside the base from a shuttle and cleaned house. (You thought that a dark templar carried a warp blade? This one had a mop.)

I shouldn’t pick on Tyr—I think Tyr is above average in placing its depots in tight clumps along the edge of its base. It hit a bug this time, and I think the bug has been fixed. Building placement is hard, and it’s especially hard if you are terran, because terran bases need a lot of room for production buildings and supply depots. Some bots space all buildings apart from each other so that there’s a gap between any two and blocking is impossible. Terran bots that do that end up running out of room and having to build depots and factories outside the base in vulnerable open areas. Here ICEbot spaced out the buildings in its main and its construction had to spill into the map.

screenshot of terran buildings spilling out of the base

The fancy solution is to do a path analysis when deciding on building placement. Does it block units off from anywhere they need to get to? Can fresh units get to the front without threading a maze? The analysis will be trivial sometimes, but not when the base starts filling up. PerfectBot will analyze both its own and its opponent’s building placement, for both attack and defense. “Oh, the opponent built an obstacle course. The enemy army is here so I’ll drop there.”

Knowing when the opponent has built a wall is a valuable skill—notice how well LetaBot’s ramp wall-in works against many opponents. If you don’t want to do real pathfinding around buildings, you could try to recognize a wall when your units fail to make progress toward their goal. I’m not sure it’s easier and I’m pretty sure it works less well, but monitoring progress toward goals is a good idea regardless. Units that find themselves behind a wall may want to destroy a building in the wall, without caring whether the building is enemy or friendly. Tyr could have won if it had opened its own wall.
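A progress monitor of the kind described is simple to sketch: remember each unit’s best distance to its goal so far, and flag the unit when that hasn’t improved for a while. The frame threshold and all names here are arbitrary illustrations, not any bot’s real API.

```python
STALL_FRAMES = 120   # ~5 seconds at 24 fps before we suspect a wall

class ProgressMonitor:
    def __init__(self):
        # unit id -> (best distance to goal so far, frame it last improved)
        self.best = {}

    def update(self, unit_id, dist_to_goal, frame):
        """Call every frame. Returns True when the unit has made no
        progress toward its goal for STALL_FRAMES frames."""
        best, since = self.best.get(unit_id, (float("inf"), frame))
        if dist_to_goal < best:
            self.best[unit_id] = (dist_to_goal, frame)
            return False
        return frame - since >= STALL_FRAMES
```

A stalled unit standing next to a building, friendly or enemy, is a candidate for the “open the wall” reaction: attack the building and reassess.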

Most bots probably rely on heuristics to place buildings so that they rarely cause problems. Nothing wrong with that; it should work well enough and it will be easy, and there’s a lot to do so ease counts. But I hope a few authors will go for more ambitious solutions.

precomputing a catalog of build orders

Today is an idea from the future work section of Dave Churchill’s thesis “Heuristic Search Techniques for Real-Time Strategy Games.” I don’t talk about it using the same words and I toss out a few side ideas that are not in the thesis, but the important points were all written down by Dave Churchill, author of UAlbertaBot. He even wrote up a preliminary experiment.

The idea, in Churchill’s words, is to “Perform a build-order search which does not attempt to achieve a given goal set of units, but instead attempts to maximize a given army value.” The BOSS build order search from UAlbertaBot can find near-optimal build orders if you tell it what units you want. It works in a general way, and with the right mods it can find build orders that, say, “make my army as big as possible at a given time, who cares what the units are” or “make my army beat this unit mix” or anything else you can turn into a number.

Of course if you ask something complicated it will take longer and you may not be able to run it in real time during a game. I think the natural course is to calculate a large catalog of good build orders (or even a catalog per map). Then on one side you can play the build orders, and on the other side match the opponent’s play to the catalog.

I’ll suggest a few recipes. You may think of tastier ones. Before we get cooking, here are the ingredients.

Ingredient: Build order search. You’ll have to insert your own evaluation function into the code. You may want to read the BOSS paper first and check whether any of BOSS’s optimizations and shortcuts gets in your way. Besides BOSS, I understand that tscmoo calculates build orders, though I haven’t looked into how it works and whether it would be good for cataloging.

Ingredient: Army comparison. An army does not have a strength in itself, it is strong or weak relative to the enemy. If you want to find counter-builds, not only builds, you need to compare armies. One way is with a battle simulator like SparCraft: Set the two armies down in some formation and have them wargame it out. MaasCraft used to do the same job faster but less accurately with a classic military formula. To estimate whether an advantage is enough to win, you might try a second war game giving the weak side a 20-second head start on its build, to represent that it is defending its base and doesn’t have to cross the map.
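For the fast-but-rough end of the scale, something in the spirit of Lanchester’s square law (a classic military formula of the kind MaasCraft used) takes only a few lines. The unit stats below are made up for illustration.

```python
def army_strength(units):
    """Lanchester square-law score: strength scales with the square of
    numbers times average per-unit fighting power (dps * hp).
    units: list of (dps, hp) pairs."""
    n = len(units)
    if n == 0:
        return 0.0
    avg_power = sum(dps * hp for dps, hp in units) / n
    return n * n * avg_power

def compare(army_a, army_b):
    """Positive means A is favored, negative means B is favored."""
    return army_strength(army_a) - army_strength(army_b)

# Six identical units against four of the same kind:
six = [(10.0, 100.0)] * 6
four = [(10.0, 100.0)] * 4
advantage = compare(six, four)   # positive: numbers count quadratically
```

The 20-second head start for the defender translates naturally into this framework too: run the build order forward 20 extra seconds for the weak side before scoring.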

Recipe: Tech rushes, the kind of attack where you arrange for your army to be particularly strong at a given time, such as when you’ve teched up to a new unit type or when research finishes. Tech rushes are keystones of the build catalog because they are the strongest instantaneous attacks.

Step 1. List the unit combinations you care about: Zerglings, hydralisks, hydraling, lurkling, hydralurk, hydralurkling, mutaling—that might be all for zerg, maybe a few more. I don’t know any reason to precompute build orders into the late game. You can skip unit combinations that are too gas-heavy, like muta+lurker or tank+wraith—surely it’s better to include at least one unit that costs mostly minerals, so that you can spend efficiently.

Step 2. For each unit in one of your combinations, list the research and upgrades that you care about. For hydralisks you’ll list speed, range, missile attack, and carapace. +3 upgrades are for late game, and even +2 might be too late to be worth precomputing. Combine the units and upgrades into a longer list: Slowlings, speedlings, +1 speedlings, etc. Each is a target army that you might aim for in a tech rush.

Step 3. Brute force it. For each target army in the list, do the build order search to find out how fast you can build it. That’s the earliest timing you can hit. Then, for that time (fast tech rush) and later times (every n frames) until some time limit (slow but heavy tech rushes), search to find the largest target army you can make at that time, counting only the target military units and not the workers. That will give a curve of possible tech rushes versus time, from 4-pool to whatever upper time and tech limits you set.
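Step 3 boils down to a double loop. Here is a sketch with the build order search stubbed out; in practice `search_earliest` and `search_largest` would wrap something like BOSS, and the function names are my own.

```python
def build_tech_rush_catalog(target_armies, search_earliest, search_largest,
                            time_limit, step):
    """For each target army, record (time, largest buildable army) pairs
    from its earliest possible timing up to time_limit, every `step`
    frames. Returns target army -> curve of possible tech rushes."""
    catalog = {}
    for army in target_armies:
        earliest = search_earliest(army)       # fastest possible timing
        curve = []
        t = earliest
        while t <= time_limit:
            curve.append((t, search_largest(army, t)))
            t += step
        catalog[army] = curve
    return catalog
```

The catalog is computed offline, so the loop can afford slow, high-quality searches that would never fit in a frame during a game.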

Recipe: Timing attacks, where you recognize what the enemy is doing and strike while they are weak. I expect that strong tech rushes are likely to risk strong timing attacks.

For each tech rush that you want to counter, find timings when the build is weaker: Before units come out or before research finishes. For each potentially vulnerable timing, pull tech rushes from the catalog and for each one do the army comparison to see whether it’s a counter, in which case it is a timing attack too. And of course label the counters in the catalog.
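Sketched in code, with the army comparison and the build simulation stubbed out (a real version would call a SparCraft-style wargame; every name here is hypothetical):

```python
def find_timing_attacks(enemy_build, vulnerable_timings, catalog,
                        beats, army_at):
    """Label catalog builds that beat enemy_build at one of its weak
    timings. beats(a, d) is the army comparison; army_at(build, t) gives
    a build's army at time t. Returns qualifying (build, time) pairs."""
    counters = []
    for t in vulnerable_timings:
        defender = army_at(enemy_build, t)   # enemy at its weak moment
        for build in catalog:
            attacker = army_at(build, t)
            if beats(attacker, defender):
                counters.append((build, t))
    return counters
```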

The same goes if you want to counter any other known strategy, like the one your opponent just beat you with. In that case you may be able to calculate a counter in game—after it’s hopeless and before the gg should be enough time.

Recipe: Universally safe builds are those safe against most or all tech rushes. Brute force it: check each candidate build against every tech rush in the catalog. Do you need static defense? The best safe build is the one that gives you the most workers. Some tech rushes may themselves count as safe, but they cut workers, so you should also seek out more economic builds. For universal safety you only have to fear early attacks, because later on practical safety is enough.

Recipe: Practically safe builds are those safe against tech rushes that are consistent with your scouting info (ones the opponent could still play). One way to precompute them might be to mine replays for typical scouting info and brute force a range of safe builds so that you can switch into one from where you are. Again, the best safe build is the one that gives you the most workers.
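The selection at the end can be sketched as a filter chain: throw out tech rushes that the scouting info rules out, keep only builds safe against everything left, and take the one with the most workers. The data shapes and names are invented for illustration.

```python
def pick_safe_build(builds, tech_rushes, consistent, is_safe_against,
                    workers):
    """builds: candidate builds from the catalog.
    tech_rushes: catalog of enemy tech rushes.
    consistent(rush): does the rush fit what scouting has seen?
    is_safe_against(build, rush): survival check (e.g. a wargame).
    workers(build): worker count, the tiebreaker we maximize."""
    threats = [r for r in tech_rushes if consistent(r)]
    safe = [b for b in builds
            if all(is_safe_against(b, r) for r in threats)]
    return max(safe, key=workers, default=None)   # None: nothing is safe
```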

More is possible, and by now you should get the idea. The catalog is a tree of builds, of course, in one representation. The branching structure of the tree tells you when you have to scout for what information.

I expect that PerfectBot will have some kind of knowledgebase created by large-scale search of strategy space. That doesn’t mean we should have it today, or that this way of making one will work well. It’s super simplified compared to real play, after all. I feel confident that some build catalog could be made useful, but I dunno how many new ideas it might hunger for.

high-level view of strategy inference

PerfectBot 1.0, which deploys every processor cycle optimally to play the best possible Brood War, is scheduled for release at the end of time. I doubt it will be later. How will PerfectBot figure out its enemy’s opening plan?

Saa, who knows? But we can guess part of it.

Needless to say, PerfectBot picks up on more cues than bots today. A few possibilities:

  • direct cues
    • what is the map? (different strategies will be good)
    • what are the respective base positions?
    • what buildings did we see? (tech, production, expansion...)
    • what units did we see?
    • what minerals has the enemy mined?
    • does the enemy have gas? still mining?
    • what information did the enemy collect about what I’m doing?
  • effort cues (how much should we have seen?)
    • where and when and how long did we scout?
    • could there be unfound hidden buildings?
    • do we have units forward to put pressure on or to see the enemy moving out?
  • behavior cues (what is the enemy army doing?)
    • when did the enemy scout?
    • are the first enemy units hanging back or out on the map?
    • what goals does the enemy building placement support? (e.g. a wall-in is better for some strategies than others)
    • is the enemy trying extra hard to prevent scouting?

PerfectBot does at least two kinds of inference. First, it figures out what is physically possible under the rules of the game. “Mutalisks can be out at time t, not earlier.” Second, it recognizes cues associated with strong strategies. By combining the two, PerfectBot can narrow down the situation accurately.
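The combination can be sketched as a small Bayesian filter: strike out strategies that are physically impossible right now, then reweight the survivors by how well they explain the observed cues. All priors and likelihood numbers below are invented for illustration.

```python
def infer_strategy(priors, possible, cue_likelihood, cues):
    """priors: strategy -> prior probability.
    possible(s): the physics filter ("mutas can't be out yet").
    cue_likelihood(cue, s): P(cue | strategy s).
    Returns normalized posterior probabilities over strategies."""
    weights = {}
    for s, p in priors.items():
        if not possible(s):
            continue                  # ruled out by the rules of the game
        for cue in cues:
            p *= cue_likelihood(cue, s)
        weights[s] = p
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()} if total else {}
```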

How do strategy cues work? The enemy has three choices about the message it sends with every visible action it takes:

  1. play a straight move that supports the chosen strategy
  2. pay a strategic cost to take a deceptive action
  3. play badly with disadvantage

Straight moves tend to be strategy cues, but they also make your strategy effective. Bad moves can be ignored in strategy inference; in theory they only decrease the hazard that PerfectBot faces in the range of possible futures.

Deceptive moves come with a more difficult theory. Think of a deceptive move as a move to confuse the opponent: A move that looks strong for a different strategy than you are playing (researching air weapons to look like dragoon range while you prepare a reaver drop), or that goes above and beyond to hide information (blocking the ramp with drones to keep the scout out). First, a deceptive move always incurs a cost, meaning that it makes the strategy in some way less likely to succeed. If it had no cost, it would be a straight move that happened to give little information away. Second, the payback for the cost of deception does not occur in the game in which you play a deceptive move (not against an opponent as smart as PerfectBot). The theory is related to the theory of bluffing in poker: Game theory says you should bluff a certain amount of the time, varying by situation, and the payback for bluffing is the opponent’s higher uncertainty all the time. PerfectBot knows its opponent may be pulling a trick, and (since it’s perfect) it knows what the available tricks are, but it still has to prepare for a wider range of future events (could be dragoons, could be reaver drop).

Based on the combination of cues and analysis of possibilities, PerfectBot can do something like assign probabilities to future events (“overlords hanging around outside my base, high chance of slowdrop around time t”) and play to maximize its odds of winning.

You can’t deceive today’s bots because they’re not smart enough to take the bait. So first we should concentrate on the strategy cues of straight moves: If you know what the good strategies are and how to play them, you can find the straight cues. Knowing the good strategies, for example with a database of builds like LetaBot’s, is a strong start.

Instead of relying on human knowledge of good strategy, we could try to compute good strategies with a search through strategy space. I imagine that’s how PerfectBot gets its knowledge. I’ll write about that tomorrow—it’s easy enough that I can imagine trying it this year.

I’m also looking forward to ideas for combining physics knowledge of what’s possible with cues. If the opponent is going mutas, when will they come out? (Ever notice experts building turrets literally at the last second?) Maybe try a build order search along the lines of UAlbertaBot’s BOSS, but applied to the opponent.

High level theory like this is super vague but I hope it’s helpful for thinking about concrete ideas so that tomorrow’s bots can become smart enough to be tricked.