
what inferences do bots draw from scouting?

Yesterday I wondered what bots concluded from their scouting info. Today I try to find out.

I grabbed the AIIDE 2015 sources of 10 bots from Starcraft AI Competition - Data Archive. What inferences could I catch them drawing from scouting data? There’s a ton of code and I didn’t read most of it, so I may have missed a lot. Here’s what I failed to miss.

Bots that I did not catch making any inferences: AIUR, GarmBot, LetaBot, Overkill, Skynet, tscmoo (though it’s hard to read), Xelnaga. I think all these bots are adaptive (except possibly Xelnaga), but they seem to adapt based on directly observed data tied to decision code, not based on separately drawn inferences. You could take adaptation choices as implicit inferences, if you like.

One inference I intentionally skipped over was inference of where the enemy base is, given that all but one starting spot has been scouted. I think it’s a common ability.

Bottom line: Adaptivity is common, but explicit inferences seem scarce, and those that I found are not deep or clever. Certain bots have special-purpose “You are XIMP, I will beat you like this” code or settings, but I want some bot to say, “You built that much static defense? Are you nuts or what? Please excuse me while I take the map... hmm, siege down the front or drop?” Maybe Killerbot does that?

IceBot

IceBot recognizes enemy strategies in MentalState.cpp (I love the name). Here it recognizes terran marine rushes based on the game time, the SCV count, the marine count, and the buildings it sees. The bb == 0 check (no barracks has been seen) presumably recognizes that no useful scouting happened, or that it’s facing proxy barracks.

        bc = enemyInfo->CountEunitNum(UnitTypes::Terran_Command_Center);
        bb = enemyInfo->CountEunitNum(UnitTypes::Terran_Barracks);
        ba = enemyInfo->CountEunitNum(UnitTypes::Terran_Academy);
        vf = enemyInfo->CountEunitNum(UnitTypes::Terran_Factory);
        vs = enemyInfo->CountEunitNum(UnitTypes::Terran_Starport);
        scv = enemyInfo->CountEunitNum(UnitTypes::Terran_SCV);
        marine = enemyInfo->CountEunitNum(UnitTypes::Terran_Marine);
        tank = enemyInfo->CountEunitNum(UnitTypes::Terran_Siege_Tank_Tank_Mode);
        vulture = enemyInfo->CountEunitNum(UnitTypes::Terran_Vulture);

        if (Broodwar->getFrameCount() <= 24*60*2)
        {
            if (bb > 0) STflag = TrushMarine;
        }
        if (Broodwar->getFrameCount() <= 24*60*2 + 24*30)
        {
            if (marine > 0) STflag = TrushMarine;
        }
        if (Broodwar->getFrameCount() >= 24*60*3)
        {
            if (bb == 0
                ||
                (bb >= 2 && (vf == 0 || bc == 1))
                ||
                (scv > 0 && scv <= 11))
            {
                STflag = TrushMarine;
            }
        }

IceBot has similar code to recognize zergling rushes. The protoss code is the most elaborate, recognizing 6 different protoss strategies. In MentalState.h is the enumeration of all strategies it knows of, though it doesn’t seem to recognize or use all of them.

	enum eStrategyType
	{
		NotSure = 1,
		PrushZealot,
		PrushDragoon,
		PtechDK,
		PtechReaver,
		BeCareful,
		P2Base,
		PtechCarrier,
		ZrushZergling,
		Ztech,
		Zexpansion,
		TrushMarine,
		Ttech,
		Texpansion,
	};

Tyr

Tyr makes an attempt to infer something about its opponent’s strategy, though it doesn’t try hard. The possible strategy classes, from ScoutGroup.java:

	public static int unknown = 0;
	public static int zealotPush = 1;
	public static int cannons = 2;
	public static int tech = 3;
	public static int defensive = 4;
	public static int besiege = 5;

With several rules like this to detect protoss strategies:

				if(gatewayCount >= 2)
					opponentStrategy = zealotPush;

Curiously, an expanding opponent is under the “tech” strategy class.

				if(nexusCount >= 2)
					opponentStrategy = tech;

This version of Tyr can classify a terran strategy as “defensive” or “unknown”. It doesn’t have any rules for zerg strategy. The “besiege” strategy class is referred to once in the rest of the code but is never recognized.

UAlbertaBot

UAlbertaBot seems to draw few inferences, but it knows to suspect cloaked units as soon as it sees a Citadel of Adun—it doesn’t wait for the Templar Archives. I didn’t notice any sign that it suspects mines when it sees vultures or lurkers when it sees hydra den + lair. From InformationManager.cpp:

bool InformationManager::enemyHasCloakedUnits()
{
    for (const auto & kv : getUnitData(_enemy).getUnits())
	{
		const UnitInfo & ui(kv.second);

        if (ui.type.isCloakable())
        {
            return true;
        }

        // assume they're going dts
        if (ui.type == BWAPI::UnitTypes::Protoss_Citadel_of_Adun)
        {
            return true;
        }

        if (ui.type == BWAPI::UnitTypes::Protoss_Observatory)
        {
            return true;
        }
    }

	return false;
}

the early scout

Watch Tscmoo’s early game closely (these games will do; follow the minimap) and you’ll see that the bot scouts like it has OCD. It scouts for enemy expansions repeatedly, it scouts in its base for proxies, it scouts around its base for proxies. It often has two workers scouting at once, and when it loses one it sends another. It seems less interested in looking inside the enemy base, but it tries to go there too. It can notice tricks that kill other bots.

Humans are different. Expert humans have confidence in their ability to hold off rushes and are aware of how far it sets them back to lose mining time, especially early, so they commonly scout later than is recommended for beginners. While in the dark they may add a few quick checks for proxies depending on the map and matchup. To make up for it, humans infer far more information from their scouting data. Bisu’s probe does not see buildings and units, it sees strategies and timings and intentions far into the future.

That’s hard for bots, of course. There are academic papers about strategy inference, and I found them unconvincing. One step at a time.

All bots should use scouting information about the location of enemy buildings, and I’m sure most do. Adaptive bots, from what I’ve seen, look at enemy buildings and units and adjust to counter them, or switch to a strategy that counters. I’ve heard of counting workers too. BroodWarBotQ used to try fancy strategy inference; I don’t know how well it worked. ZZZKBot, which has to scout early for its 4-pool, knows how to infer the location of an enemy zerg base when it sees the first overlord, a rare skill. Those are all the uses of scouting information that I know of. I’d love to hear about bots that do other cool stuff.

Do any bots count supply? Humans are always on the lookout for missing pylons, which could power a proxy. For example, if there are more zealots and probes than the pylons can support, then you’ve missed a pylon somewhere and may want to search for it. Or you could cut it down to “At this frame number/with this probe count I expect 2 pylons. Where’s the second one?”
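Here’s a minimal sketch of the pylon arithmetic with made-up counts; a real bot would pull the numbers from its record of scouted enemy units, and the function and variable names are mine, not from any existing bot.

    // Sketch of the missing-pylon check. BWAPI doubles supply numbers:
    // a pylon provides 16, a nexus 18; a probe uses 2 and a zealot 4.
    #include <iostream>

    int minimumPylonsNeeded(int probes, int zealots, int nexuses)
    {
        int supplyUsed = 2 * probes + 4 * zealots;
        int deficit = supplyUsed - 18 * nexuses;
        if (deficit <= 0) return 0;
        return (deficit + 15) / 16;                  // round up to whole pylons
    }

    int main()
    {
        int seenPylons = 1;                          // pylons actually scouted
        int needed = minimumPylonsNeeded(14, 2, 1);  // hypothetical scouted counts
        if (needed > seenPylons)
            std::cout << "at least " << (needed - seenPylons)
                      << " pylon(s) unaccounted for -- maybe a proxy\n";
    }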

Do any bots count minerals? How many minerals are left in each mineral patch is visible as long as the patch is in your sight range. If you add up the enemy’s total minerals mined and the total needed to produce the enemy buildings and units that you’ve seen (plus the number you can see being carried by workers), then the difference is the stuff produced that you haven’t seen yet (plus minerals being saved up, and minerals carried by workers that were lost or traded for gas or are out of sight). You may be able to suspect or rule out a hidden expansion or a proxy. Bots can do this much more easily than humans!
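And a sketch of the mineral bookkeeping, again with invented names and numbers rather than anything from a real bot:

    // Estimate enemy spending on things we have NOT seen. In a real bot,
    // minedEstimate would come from watching mineral patches deplete, and
    // seenSpending from adding up the cost of every scouted unit and building.
    #include <iostream>

    int unseenSpending(int minedEstimate, int seenSpending, int carriedBySeenWorkers)
    {
        // Whatever is left over is banked, lost with dead workers,
        // traded for gas, or spent on something we have not found yet.
        return minedEstimate - seenSpending - carriedBySeenWorkers;
    }

    int main()
    {
        int hidden = unseenSpending(3200, 2500, 48);   // made-up mid-game numbers
        if (hidden > 400)
            std::cout << hidden << " minerals unaccounted for -- "
                         "suspect a hidden expansion or proxy\n";
    }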

I’m sure some bots look at buildings and not only units to guess what the enemy will build. If you see a barracks, you don’t know (without more information) whether it was made to produce units or as a prerequisite for a factory. If you see 2 barracks, you can be pretty sure. A hydra den or a protoss stargate gives stronger clues. Surely some bots understand, but I don’t know which ones. Do any current bots try to put together a holistic picture?

I also wonder whether some bots scout too early for their own good. When you should scout depends on your strategy, of course. If your bot is on the dumb side and only cares where the enemy is so it knows which direction to attack, maybe it can scout very late.

what AIUR learned

After Overkill yesterday, I wrote a not-quite-as-little Perl script to read AIUR’s learning files. AIUR learns more data: Overkill learns a table (opponent, strategy), while AIUR learns a table (opponent, strategy, map size) where map size is the number of starting positions, which is 2, 3 or 4 in AIIDE 2015.

Unlike Overkill, AIUR recorded every game exactly once, missing none and adding none, so its data should be easier to interpret.

Here’s a sample table for one opponent. Compare it against AIUR’s row in Overkill’s table from yesterday. See the full AIUR learning results.

vs Overkill      2-player        3-player        4-player          total
                  n    wins       n    wins       n    wins       n    wins
cheese           18     67%       3     33%       1      0%      22     59%
rush              1      0%       1      0%       1      0%       3      0%
aggressive        1      0%       1      0%       1      0%       3      0%
fast expo         1      0%       1      0%       2      0%       4      0%
macro             1      0%       3     33%      25     12%      29     14%
defensive         5     40%       9     33%      15     40%      29     38%
total            27     52%      18     28%      45     20%      90     31%

For reference, here are AIUR’s “moods,” aka strategies.

We see that against Overkill, the cannon rush was relatively successful on 2-player maps, 3-player maps were a struggle, and on 4-player maps AIUR discovered a little late that the defensive mood was better than the macro mood. We also see that AIUR barely explored further when it found a reasonably successful try. If the best strategy was one that happened to lose its first game and didn’t get tried again, it would never know. With so many table cells to fill in, the tremendously long tournament was not long enough for AIUR to explore every possibility thoroughly.

AIUR selected strategies with an initial phase of try-everything-approximately-once followed by an epsilon-greedy algorithm, with epsilon set at 6%. Epsilon-greedy means that 6% of the time it chose a strategy at random, and otherwise it made the greedy choice, the strategy with the best record so far. With 90 games against each opponent to fill in 18 table cells, most cells never came up in the 6% random sample.
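In outline, try-everything-once followed by epsilon-greedy looks something like this. It’s a sketch of the general idea with names of my own choosing, not AIUR’s actual code:

    // Pick a strategy index: first play anything untried, then be epsilon-greedy.
    #include <cstdlib>
    #include <vector>

    struct Record { int games; int wins; };

    int chooseStrategy(const std::vector<Record>& table, double epsilon = 0.06)
    {
        // Initial phase: try everything approximately once.
        for (size_t i = 0; i < table.size(); ++i)
            if (table[i].games == 0) return (int)i;

        // With probability epsilon, explore a random strategy...
        if ((double)std::rand() / RAND_MAX < epsilon)
            return std::rand() % (int)table.size();

        // ...otherwise exploit the best win rate recorded so far.
        int best = 0;
        double bestRate = -1.0;
        for (size_t i = 0; i < table.size(); ++i) {
            double rate = (double)table[i].wins / table[i].games;
            if (rate > bestRate) { bestRate = rate; best = (int)i; }
        }
        return best;
    }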

It should be clear why AIUR was still improving steadily at the end of the tournament! I offered a theory that AIUR learned so much because of its extreme strategies. If you read through the full set of tables, you’ll see that a strategy which works on one map size sometimes works on the other sizes too, and sometimes doesn’t. Learning over the combination of opponent and map size sometimes paid off in ways that learning over either alone could not.

Overkill and AIUR fought a learning duel during the tournament. Both are running learning algorithms which assume that the opponent does not change (or at least settles down in the long run), and both bots violated the assumption. AIUR violated it more strongly. Was that an advantage? Could there be a connection with AIUR’s late discovery of the defensive strategy on 4-player maps?

I updated the zip archive of the Perl scripts and related files to add AIUR’s script alongside Overkill’s. By the way, I haven’t tested it on Windows; it might need a small tweak or two to run there.

what Overkill learned

I wrote a little Perl script to read Overkill’s learning files from AIIDE 2015 and add up the numbers. The three strategy names are as Overkill spells them. The opponents are listed in tournament order, so the strongest are at the top.

                 NinePoolling     TenHatchMuta    TwelveHatchMuta       total
opponent           n    win         n    win         n    win         n    win
tscmoo            57    26%        19    11%        18    11%        94    20%
ZZZKBot           80    46%         8     0%         8     0%        96    39%
UAlbertaBot       61    30%        20    15%        10     0%        91    23%
Aiur              13    54%        66    80%         3     0%        82    73%
Ximp               2     0%        30    83%        57    93%        89    88%
IceBot             4    25%        72    83%        14    57%        90    77%
Skynet            13    62%        19    68%        58    84%        90    78%
Xelnaga           75    81%        12    50%         3     0%        90    74%
LetaBot           78   100%        10    70%         2     0%        90    94%
Tyr                6    33%        25    64%        53    77%        84    70%
GarmBot           27    96%        27    96%        36   100%        90    98%
NUSBot            66   100%        13    77%        11    73%        90    93%
TerranUAB         30   100%        30   100%        30   100%        90   100%
Cimex             56   100%        33    94%         2     0%        91    96%
CruzBot           30   100%        30   100%        29   100%        89   100%
OpprimoBot        24    96%        33   100%        33   100%        90    99%
Oritaka           56    98%        10    70%        24    88%        90    92%
Stone             56    93%        12    67%        21    81%        89    87%
Bonjwa            30   100%        30   100%        30   100%        90   100%
Yarmouk           30   100%        30   100%        30   100%        90   100%
SusanooTricks     32   100%        23    96%        32   100%        87    99%
total            826    80%       552    80%       504    83%      1882    81%

The number n here is not the number of games played. There were 90 rounds. Some games were perhaps not recorded due to crashes or other errors, which could explain why some opponents have n < 90. Also, when the 10-hatch mutalisk strategy fails, Overkill assumes it must have lost to a rush that would also have killed the 12-hatch muta strategy. In that case Overkill records 2 game records, a 10-hatch muta loss and a 12-hatch muta loss, which explains why some opponents have n > 90. At least that’s what the code says; some of the data in the table doesn’t seem to match up (see the Xelnaga row). What did I miss?

Some of the strategy choices make sense intuitively. Overkill learned to get early zerglings against ZZZKBot and UAlbertaBot which play rushes, and learned that a more economy-oriented strategy worked against XIMP with its later carriers. These are examples of learning as a substitute for scouting and adapting.

Look at the bottom row. Each strategy ended up with virtually the same winning rate; the UCB algorithm evened them out accurately. But it didn’t use the strategies equally often; the 9-pool was more successful on average against this set of opponents. The early zerglings are important against many opponents, for whatever reason.
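For reference, the textbook UCB1 rule looks roughly like this: play the strategy with the best win rate plus an exploration bonus that shrinks as a strategy accumulates trials. This is a generic sketch, not Overkill’s code, which surely differs in its details.

    // Generic UCB1 selection over win/loss records; a reference sketch only.
    #include <cmath>
    #include <vector>

    struct Arm { int games; int wins; };

    int chooseUCB1(const std::vector<Arm>& arms)
    {
        int totalGames = 0;
        for (const Arm& a : arms) totalGames += a.games;

        int best = 0;
        double bestScore = -1.0;
        for (size_t i = 0; i < arms.size(); ++i) {
            if (arms[i].games == 0) return (int)i;      // try every arm once first
            double mean = (double)arms[i].wins / arms[i].games;
            double bonus = std::sqrt(2.0 * std::log((double)totalGames) / arms[i].games);
            if (mean + bonus > bestScore) { bestScore = mean + bonus; best = (int)i; }
        }
        return best;
    }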

Look at the individual lines. Except for weaker opponents that Overkill defeats no matter what, for most opponents one or two strategies were clearly better and were played more often. How much did Overkill learn? If it had played strategies randomly, then the winning rate would be the average of the strategy winning rates. The gain can be estimated as the total winning rate minus the mean of the strategy winning rates—how far did you rise above ignorance? Against ZZZKBot, for example, the strategy winning rates were 46%, 0%, and 0%, a mean of about 15%, while the total winning rate was 39%: an estimated gain of roughly 24 points. The number varies from zero to huge for different opponents. Because of sampling effects, the estimate will statistically tend to be higher than the truth.

This learning method has to play weak strategies to find out that they’re weak, so it can’t be perfect. The regret for each opponent can be estimated as the difference between the total winning rate and the winning rate of the best strategy if you’d known to play it from the start—how far did you fall short of omniscience? For many of the opponents, the regret estimated that way is 6% to 7%. If the learning algorithm converges to an exact solution, then in an infinitely long tournament the regret will fall to 0. Thinking about numbers like this can give you an idea of when learning makes sense.

The Perl script and related files are available as a zip archive.

LetaBot man-vs-machine team tournament

Martin Rooijackers aka LetaBot is organizing another man-vs-machine tournament, this time a team tournament. He will go so far as to accept your bot in compiled form and put it on a team and operate it for you, so that your only commitment is to send in your bot.

He wrote to me: “The thing that will make this interesting is that unlike other man vs machine tournaments, this one will have the all-kill format. So since some bots are better at certain match-ups like TvZ, these specialized bots can thus still win the tournament.” I agree it’s an entertaining format and offers chances you don’t get otherwise.

It’s cool that he keeps running new competitions in different formats. It’s certainly not going to get stale. Kudos for the hard work!

If you have a bot, then I have a suggestion for what point in your development path is a good time to participate in a man-machine competition: Before you are ready. You can’t be ready until you’ve done it once before!

panic button and fish story

Yesterday’s post was about prior knowledge. The posts before were about learning. Today’s is about prior knowledge for learning.

I was inspired by a remark from Dave Churchill, author of UAlbertaBot, in his new A History of Starcraft AI Competitions: In AIIDE 2015 “UAlbertaBot had [only] a 2/3 winning percentage against some of the lower ranking bots due to the fact that one of the 3 races did not win against those bots.” UAlbertaBot, playing random, had its learning turned off, presumably because the selected strategy for each race was dominant. With learning turned on, it would have lost games trying weaker strategies before settling on the dominant strategy, ending up behind overall; or so the thinking goes, if my guess is right.

Well, that’s like Bisu defeating Savior. When somebody comes up with a counter for the game plan you thought was dominant, don’t you think you should try something different?

You can have it both ways. You can restrict yourself to playing your dominant strategy unless and until it turns out to lose repeatedly. You don’t have to lose games exploring your options; you can take losing to mean that you should start exploring your options.

The panic button implementation is simple. Start out recording the game results as usual, as if learning were turned on, but ignore them and always pick your dominant strategy. But when you get to (say) >10 games with <10% win rate, hit the panic button and let your algorithm try alternatives. It’s unlikely to make things worse!
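A sketch of that logic, with made-up thresholds matching the numbers above; the learner passed in can be whatever selection algorithm the bot already uses.

    // Panic button: play the dominant strategy until it has clearly failed,
    // then fall back to the normal learning algorithm. Illustrative only.
    #include <functional>
    #include <vector>

    struct Result { int strategy; bool won; };

    int chooseWithPanicButton(const std::vector<Result>& history,
                              int dominantStrategy,
                              const std::function<int(const std::vector<Result>&)>& learner)
    {
        int games = 0, wins = 0;
        for (const Result& r : history)
            if (r.strategy == dominantStrategy) { ++games; if (r.won) ++wins; }

        bool panic = games > 10 && wins * 10 < games;   // >10 games, <10% wins
        if (!panic) return dominantStrategy;
        // Panic: the learner has been fed the results all along, so just ask it.
        return learner(history);
    }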

The fish story implementation is also simple. Pretend, before the first game with a new opponent, that you actually have a history with this opponent. Tell yourself a fish story: “Oh, strategy A, I tried that a few times and always won. And strategy B sucked, I tried that a time or two and lost.” It’s literally a few lines of code to slide fictitious history into your learning data, and you’re done. Your strategy selection algorithm will look at it and say “Strategy A, duh,” and as long as A keeps winning it will explore others at a low rate.

The simpleminded learning algorithms that bots use today assume that you start out knowing nothing about which choices are better. And that’s just false. You always know that some strategies are stronger than others, that some are safe and work against many opponents while others are risky and only exploit certain weaknesses. With the fish story, your bot can start out knowing that A is reliable (“it won repeatedly”), B is a fallback (“it lost once”), and C can be tried if all else fails (“it lost a few times”) in a last-ditch attempt to trick a few points out of a killer opponent. Or any combination you want.
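Sliding that kind of prior into the learning data can be as short as this sketch, which encodes the A/B/C story just described; the counts are whatever fish story you care to tell, and none of this is from a real bot.

    // Fictitious history to prepend to the learning data for a new opponent.
    #include <vector>

    struct Record { int games; int wins; };

    // Index 0 = reliable main strategy, 1 = fallback, 2 = desperation option.
    std::vector<Record> fishStoryPrior()
    {
        std::vector<Record> table(3);
        table[0] = {3, 3};   // "tried A a few times and always won"
        table[1] = {1, 0};   // "B lost the one time I tried it"
        table[2] = {2, 0};   // "C lost a couple of times"
        return table;
    }
    // Real results get added on top, so the selector starts out preferring A
    // but will move off it if A begins to lose.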

If you have prior knowledge about your opponents but you’re not sure whether they’ll have updates for the tournament, you can go Baron Munchausen and tell yourself a different fish story about each opponent.

Many variations and other ideas work too. Think about your strategy choices and your selected algorithm and how you would like it to behave. You can probably find your own ways.

Update: Dave Churchill told me the real reason behind UAlbertaBot’s decision: He ran out of time! He wrote that he actually implemented a “panic button” method, but did not have time before the tournament to test it and make sure it was solid. I think it’s enough that UAlbertaBot can play random—progress comes one step at a time.

strategy selection in LetaBot

Martin Rooijackers sent me some information about his creation LetaBot.

Up through AIIDE 2015 LetaBot selected builds by learning, but now it has jettisoned learning and selects builds based on scouting information. LetaBot opens with a build that is safe against rushes and transitions to counter whatever it scouts—at least up to a point, it’s a work in progress. LetaBot now has “an extensive flowchart” (that’s how he put it) of terran build orders from Liquipedia. That makes it sound like LetaBot will make more than one transition if it thinks it should.

Rooijackers credits Dennis Soemers (author of protoss bot MaasCraft, which played in AIIDE 2014 and CIG 2014) with pulling the build orders out of Liquipedia, and says he got more build order tips from mapmaker CardinalAllin.

You can see why he might have wanted to change—LetaBot didn’t really benefit from learning in AIIDE 2015. An advantage of prior knowledge over learning is that knowledge is available from the start; you don’t lose games figuring stuff out. A disadvantage is that you can’t take special advantage of surprise weaknesses in the opponent’s play. And I notice how much AIUR wins with strategies that are objectively bad.

Ideally bots should both have prior knowledge and learn during a competition, of course. Prior knowledge says “here are the builds or strategies that work” and learning adds “and these are the particular ones that you should pick to gain advantage over this opponent/this opponent on this map/etc.”

I think that offline learning would be a good way to gain knowledge of builds and strategies, especially if you have vast resources like most of us. You don’t want to go with builds that are good, you want to go with builds that are good for you, based on your skills; that’s true for any player. So every time you make a tweak to micro that may affect your choices, be sure to spend a few cpu-years on offline learning to re-learn your openings from scratch. Should be no problem if you’re as rich as Google, and who isn’t?

how much does learning help?

Here’s the cumulative win rate graph for the bots that looked like they might be learning. I count UAlbertaBot as not learning, since the author said so.

winning rates for the learning bots

The gyrations on the far left are mostly statistical noise. AIUR learns well, as we know. Tscmoo and Overkill also improve noticeably, each gaining about 3% in win rate between round 20 and the end (enough to move up 1 place in the tournament). LetaBot has a slight upward trend. The others look flat or even trend downward; either they are mislearning, or they are losing more games to the smarter learning bots, or they are drifting due to statistical noise. Statistical noise is usually bigger than your intuition says.

Among the learning bots, the three bots which learned best also finished best.

The non-learning bots:

winning rates for non-learning bots

Most look flat; all trends are slight, except that XIMP gains over 2% from round 20 to the end. Are the weak learning bots mislearning against it? It would be interesting to compare the non-learning bots that better withstood the increasing pressure of the successful learners to see if some common factor in their play made them harder to exploit, but that would be a tough analysis.

Bottom line: Tscmoo and Overkill each learned enough to overtake their nearest opponents, which was possible only because their nearest opponents were so near. AIUR increased its win rate by a giant 10% and overtook a few opponents early, but after round 25 no opponents were in reach. No other bot improved enough to make a difference. Learning, as implemented so far, can give a small edge to a few bots that do it well enough.

With more smarts, bots can learn more and faster. I’ll be suggesting ideas later on (I don’t run Machine Learning in Games for nothing). I hope to see bolder learning curves in this year’s competitions!

which bots learn?

The AIIDE 2015 tournament results include an archive of the directories the bots were allowed to read and write. The tournament was divided into round robins, and after each bot had played every other on the current map the accumulated files in the bot’s “write” directory were copied to its “read” directory, where the bot could read them back in the following round. Bots with nothing in their write directories did not learn. Bots with files there at least recorded some information.

Here are the bots that look like they tried to learn, sorted by final standing. A “yes” in the learning? column means only that the bot wrote files, not that it read them back in or used the information (that’s harder to figure out).

Bottom line: Tscmoo’s files have a curious variety of information. It may be doing something interesting. Nobody else tried anything beyond the straightforward. All bots that stored data wrote one text file per opponent, possibly because the contest rules suggested it; more sophisticated schemes risk slowness or loss of data.

     bot            learning?   comments
 1   tscmoo         yes         one file per opponent, human readable-ish
 2   ZZZKBot        no
 3   Overkill       yes         one file per opponent, with lines opponent|strategy|game result
 4   UAlbertaBot    yes         though learning was said to be turned off for this tournament
 5   AIUR           yes         one file per opponent, 91 numbers each
 6   XIMP           no
 7   ICEbot         no
 8   Skynet         yes         one file per opponent, 7 to 13 lines each in the form “build_2_3 2 0”
 9   Xelnaga        yes         one file per opponent, each a single integer in the range [-1,3]
10   LetaBot        yes         one file per opponent, much repetitive information
11   Tyr            yes         one file per opponent, each “win <number>” or “loss <number>”
12   GarmBot        no
13   NUSBot         no
14   TerranUAB      no
15   Cimex          yes         one file per opponent, each empty or with only two numbers
16   CruzBot        yes         one file per opponent, six flags 0 or 1 for each
17   OpprimoBot     no
18   Oritaka        no
19   Stone          no
20   Bonjwa         no
21   Yarmouk        no
22   Susanootricks  no

There’s a folder for Nova, a bot which did not participate. I suppose it intended to.

AIUR learns more

The protoss bot AIUR by Florian Richoux has a set of hand-coded strategies and learns over time which strategies win against which opponents. That’s a popular religion; other bots like Overkill (see my post on it) and Tscmoo worship at the same altar. But a funny thing happened on the way through the tournament. In the AIIDE 2015 competition report, look at the graph of winning rate over time for the different bots. Let me steal the image showing the top half of participants:

win rates by round in AIIDE 2015

AIUR’s line is the one in the middle that keeps rising and rising. Look carefully and you can see it leveling off, but it hasn’t reached its asymptote at the end of the very long tournament. AIUR seems to learn more, and to keep on learning, even though its learning method is about the same as the other bots’. Howzat happen?

Of course AIUR doesn’t do exactly the same thing as other bots. After all, it calls its strategies “moods,” which sounds entirely different. It doesn’t learn an opponent -> strategy mapping, it learns opponent + map size -> strategy, where map size means the number of starting bases, usually 2, 3, or 4. It can figure out that its cannon rush works better on 2-player maps, for example. I imagine that that’s part of the answer, but could it be the whole story?

I have a theory. My theory is that AIUR’s extreme strategies make good probes for weakness. AIUR’s strategies range from absolutely reckless cannon rush, dark templar rush, and 4-zealot drop cheese to defensive and macro-oriented game plans. AIUR’s strategies stake out corners of strategy space. Compare Overkill’s middle-of-the-road zergling, mutalisk, and hydralisk strats, with no fast rushes or slow macro plays, nothing highly aggressive and nothing highly cautious. My theory is that if an enemy makes systematic mistakes, then one of AIUR’s extreme strategies is likely to exploit the mistakes, and AIUR will eventually learn so.

If true, that could explain why AIUR learns more effectively in the long run. Presumably the reason that it takes so long to reach its asymptote is that it has to learn the effect of the map size. The tournament had 27 games per opponent on 2-player maps, 18 on 3-player, and 45 on 4-player, not enough to test each of its 6 strategies repeatedly. It could learn faster by doing a touch of generalization—I’ll post on that some other day.

AIUR also claims to implement its strategies with a further dose of randomness. Intentional unpredictability could confuse the learning algorithms of its enemies. I approve.

bot authors are busy

12 bots have been freshly uploaded at SSCAIT this month, 6 of them in the last week. It’s a burst of activity. The 12 bots are over a quarter of the 47 enabled bots. Progress must be happening! Go for it, bot authors!

The rankings will be even more uncertain than usual until the updated bots get enough games in.

Demis Hassabis mentioned Starcraft

In an interview while AlphaGo’s go match against Lee Sedol was underway, Starcraft came up as a potential future project: DeepMind founder Demis Hassabis on how AI will shape the future. DeepMind is of course the Google subsidiary that created AlphaGo.

The key question and answer:

Is beating StarCraft something that you would personally be interested in?
Maybe. We’re only interested in things to the extent that they are on the main track of our research program.

I read that answer as “probably not,” but can you imagine the effect?

threat-aware pathing

Of all the mistakes bots make, I think the worst is wandering blindly into enemy fire. If you know where the enemy is, then don’t get shot at for nothing—react to the threat one way or another.

Examples: Send an SCV to scout the zerg base, where the sunken kills it. Oops, no scout, send another. The sunken kills it. Oops, no scout, etc.

Or: The enemy is ravaging your expansion. You’ve lost all the workers. That’s not efficient, better balance your bases by sending more workers to the dying base, into the teeth of the attack. Or: Your main mines out and it’s time to transfer workers to a new base, but the enemy is at the gates. So what? Transfer workers through the enemy army.

Or: You’re sieging down static defense, but your attacking tanks/reavers wander too near. Or their supporting units do.

Or: The marine needs to get into the bunker, but a zealot is in the straight line path. Isn’t the straight path the only path?

Or: You’re making a drop. Or: You’re repositioning an overlord or an observer. And so on. I’ve seen bots make these blunders and more.

Bots just gotta be aware of threats. You can’t always know where enemy units are, but when you do, your pathfinding algorithm needs to take them into account. That’s threat-aware pathing. Or anything, as long as it has the same effect.
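One way to get the effect is to charge extra for walking through cells that enemy fire covers, then run an ordinary shortest-path search over the result. Here’s a toy grid version of the idea, generic rather than taken from any bot; if the cheapest cost comes back far above the plain walking distance, that’s the signal to go around, give up, or weigh a runby.

    // Dijkstra over a small grid where each cell carries an extra "threat" cost.
    // The cheapest path bends around danger when it can, and otherwise tells
    // you how expensive forcing the issue would be.
    #include <limits>
    #include <queue>
    #include <vector>

    const int W = 16, H = 16;

    struct Node { int cost, x, y; };
    struct ByCost { bool operator()(const Node& a, const Node& b) const { return a.cost > b.cost; } };

    // threat[y][x]: extra cost for stepping into enemy fire (0 = safe).
    int cheapestPathCost(const std::vector<std::vector<int>>& threat,
                         int sx, int sy, int gx, int gy)
    {
        std::vector<std::vector<int>> dist(H, std::vector<int>(W, std::numeric_limits<int>::max()));
        std::priority_queue<Node, std::vector<Node>, ByCost> open;
        dist[sy][sx] = 0;
        open.push({0, sx, sy});
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

        while (!open.empty()) {
            Node n = open.top(); open.pop();
            if (n.cost > dist[n.y][n.x]) continue;         // stale queue entry
            if (n.x == gx && n.y == gy) return n.cost;     // reached the goal
            for (int d = 0; d < 4; ++d) {
                int nx = n.x + dx[d], ny = n.y + dy[d];
                if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                int step = 1 + threat[ny][nx];             // distance plus danger
                if (dist[n.y][n.x] + step < dist[ny][nx]) {
                    dist[ny][nx] = dist[n.y][n.x] + step;
                    open.push({dist[ny][nx], nx, ny});
                }
            }
        }
        return -1;   // no path at all: the "give up" case
    }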

Leaving aside complications like cloaked units, when a unit or a squad sees enemy fire covering its future path, it has 4 possibilities: 1. It can go in and fight. 2. It can go around the enemy to reach its goal. 3. It can give up on its goal. 4. It can run by, accepting damage to reach its goal.

All four options are important and appear frequently in human games. In more detail:

1. Fight. This makes sense in a lot of cases, but not if your goal is to scout or to transfer workers.

2. Go around. Maybe you can transfer workers over a different bridge and avoid danger. That’s an elementary skill. Maybe you can get drop tech and fly around. That’s an advanced skill. Even if you are intending to fight, it makes sense to stay out of range of as much enemy fire as you can. Burrow the lurkers out of bunker range. Or, when the enemy tanks on high ground can only defend part of the enemy natural, you can slide dragoons into the far side and get shots in until the tanks are able to reposition. (You might want to consider that an issue of micro rather than pathing. Depends on your code.)

3. Give up. If the pather says there’s no safe path, maybe you shouldn’t try. Don’t transfer workers through the enemy army, leave them idle until you can develop a solution.

4. Run by. Bunker, meet rear view mirror. Or let the shuttle take hits, as long as the reaver drops into a good position. If you have a battle simulator that can estimate whether the runby will succeed, it will often be worth it to get in a scout or to aim for the mineral line. (In other words, a battle simulator is more useful if it can simulate runbys too.)

The old zerg bot Berkeley Overmind had threat-aware pathing in 2010. We can do it today too.

going random

I get the impression that the decision to go random is fundamentally similar for a bot and for a human. Going random is cool and offers a potential advantage, but it’s hard. From a practical point of view, most bot authors and most human players will be more successful if they settle on one race and concentrate on playing their race well.

The observation that people repeat is that if you play one race, you have to learn three matchups, each with unique strategies and timings and tactical possibilities and so on. If you play random, you have to learn six matchups. Watching the three random bots now playing at SSCAIT (UAlbertaBot, OpprimoBot, and Travis Shelton’s bot), I find them shallower than the race-specialist bots. That’s my impression.

Also, if you can play 3 races, then you probably don’t play all of them equally well. Tscmoo, with its large selection of strategies for all races, is an example. It can play random, but usually doesn’t. It usually goes with whichever race it plays best at the time.

Just an observation. Do whatever you like, but notice the tradeoff.

Race picking is another issue. That means choosing a race depending on the map and opponent. Of current tournaments, I think only those organized by LetaBot and run by hand (see the YouTube channel) allow race picking. I think the other competitions require you to stick to terran, protoss, zerg, or random.

should I write a bot?

I see this question occasionally, and my first reaction is: “Why are you asking the internet?!? Shouldn’t you know what you want more than we do? Yeesh, kids nowadays!”

I’ll skip over my second reaction, “What’s your real question?” and my third reaction “Of course you should.” Pay attention to my fourth reaction, that’s the one that counts.

It depends on what you have in mind. The world already has enough zerg rushbots that 4-pool or 5-pool or 6-pool, enough protoss bots that go mass zealots, and enough terran bots that go straight bio every game. Those are all monotonobots. The world does not have enough of the other kind, the funbots that have any other strategy.

The reason we have so many monotonobots with the same strategies is that those strats are relatively easy to code and relatively successful. They have good bang for the buck. But they are lazy, they offer nothing to the community, they are boring to watch and boring to talk about, we learn nothing, and the top bots can beat them because you have to beat the common strategies to become a top bot. Also, they can psychologically hold back the authors: Once you’ve done pretty well with a modest amount of work, it’s discouraging to put more work into a more ambitious strategy and have it turn out less successful at first. If you want to climb a mountain, climb it and don’t make a foothill your first goal.

Well, decide for yourself what you want to climb. If you aim for the peak, prepare for a lot of hard work and never get discouraged. If you just want a fun hike, it will be more fun to set out in a direction that strikes your fancy and blaze your own trail. Either way, a monotonobot isn’t worth it.

Not that there’s anything wrong with rushes in themselves. If your bot is strong but zergling rushes from time to time to butcher weak enemies efficiently or to remind strong opponents not to fast expand without defense, I won’t complain.

Plenty of the funbots are more interesting than successful. I like Jakub Trancik’s “MOAR CANNONS PLZ” bot for its uniqueness (and it’s hilarious when it wins despite building cannons in plain sight in the enemy base). I like Vladimir Jurenka’s bot for its occasional blind reaver drop coordinated with a 4-dragoon attack, even though it fails to start up half the time. I like Roman Danielis’s bot for its rich late game army, when it manages to live that long. I like Henri Kumpulainen’s PenaBot for its big army play even though it’s slower than an ensnared reaver (I hope he keeps working on it), and I love Aurelien Lermant’s GarmBot for its absurd and creative extreme macro game where it spends more effort distracting the enemy than fighting. The funbots spin a galaxy of ideas, and each idea adds a different challenge for other bots to meet. That’s how to contribute to the community, and how to have fun, and how to find the way to a stronger bot.

crazy game Iron-Killerbot

Here’s a wacky shot from a game between terran Iron by Igor Dimitrijevic and zerg Killerbot by Marian Devecka. I caught this on the SSCAIT live stream yesterday (yesterday in my time zone, anyway).

screenshot of game between Iron and KillerBot

Killerbot is the strongest bot in existence, if you ask me, but here it is in trouble. Killerbot expanded, placed 3 sunkens and a maze of buildings to deter any runby, and set about teching up. All as usual. Iron also did its usual thing, rushing with speed vultures and a seemingly excessive number of SCVs.

Killerbot is so cautious about runbys that its blocking buildings sometimes block its own lurkers in its base. Caution didn’t help this time. Iron’s vultures ran through the natural and brought SCVs with them into the main. You can see a couple of the SCVs in the screenshot; the vultures are still in the main but are out of view. Iron killed the drones, killed the one defending hydralisk, and set about slowly-slowly tearing down the buildings. Killerbot has sharp strategies but it has the same fragilities as other bots—it said “hmm, the main is underpopulated” and sent drones in from the natural one at a time to die until it had none left.

If Iron expanded now, it could eventually accumulate enough vultures to kill the sunkens and eliminate Killerbot. But I believe this version does not know how to expand and is afraid of static defense (I saw it lose a game against AIUR on points because it did not attack the single well-placed cannon defending all the buildings in the main but instead gradually lost its vultures). On the other side, the queens do not have broodling. Stalemate.

At this point the tournament system had some kind of problem and aborted the game. In the regame, there was no runby and Killerbot won easily. So I never found out who would have won on points.

Lessons!

  1. Runbys rule. I think the feature was recently added to Iron, and I expect it to help a lot. I haven’t seen any other bot do a runby. Runbys are common in human games, so I think they’re vital.
  2. Expanding is a robustness measure. Iron already builds extra command centers, all in the same base. I expect a future version will know how to expand.
  3. If there are things you can’t attack, then there are games you can’t win. If you fear sunkens, or if you can’t find floating command centers in the corner, you didn’t finish the game under your own power.
  4. It’s good to have a fallback strategy to break loose a stuck game. ZZZKBot eventually makes mutalisks. LetaBot used to go wraiths. Even Jakub Trancik’s mass cannon bot finally switches to zealots.

Update later the same day: Igor Dimitrijevic sent me more information. He says my guesses about Iron are correct. About the new runby feature, he writes “Up to 3 times, if some conditions are met (some sunken / bunker / cannon prevents my vultures from reaching the enemy main base), some vultures and/or SCVs (at least one vulture) are ordered to runby, which is something they would otherwise never do.” Later he turned the runby feature off, and I’m not sure when or if it will make a return.

Iron has been updated again and it has another new feature: It makes tanks. Tanks can easily siege down the static defenses that used to hold Iron at bay. Igor is confident that, when tanks and the ability to expand are tuned properly with everything else, Iron will rise near the top of the SSCAIT table. Hmm... maybe it will. I can’t judge. How will Iron fight flying units or cloaked units or tank pushes? Is the rush so strong that it will stop the opponent from getting that far?