
AIIDE 2020 - what bots wrote data

I looked in each bot’s final write directory to see what files it wrote, if any, and in its AI directory to see if it had prepared data for any opponents. Standard disclaimers apply: a bot does not necessarily use the data it writes, and preparation for specific opponents is not necessarily in the form of data in the AI directory; it might be in code.

1. Stardust: Nothing. Stardust relies on its great execution.
2. PurpleWave: The learning files have a sequence of PurpleWave’s strategy choices followed by a sequence of “fingerprinted” enemy strategies. (PurpleWave also has specific preparation for its opponents, but that’s in code rather than data.) There are also debug logs that show some decisions, but are probably only for the author.
3. BananaBrain: The learning files look just like last year’s: one file for each opponent in the form of brief records of results. Each record consists of date+time, map, BananaBrain’s strategy (“PvZ_9/9proxygate”), the opponent’s recognized strategy (“Z_9pool”), a floating-point number which we were told last year is the game duration in minutes, and the game result. Pre-learned data for 6 opponents, with the largest file by far for Stardust. Maybe if you have pegged your opponent as having a narrow range of adaptation, you don’t have to leave room for surprises.
4. Dragon: Very simple game records with strategy and game result, like "siege expand" won.
5. McRave: Two files for each opponent, named like ZvU UAlbertaBot.txt and ZvU UAlbertaBot Info.txt. The first file is short and counts wins and losses overall and for each of McRave’s strategies. The info file (now working correctly, unlike last year) has detailed game records with aspects of the opponent’s strategy (2Gate,Main,ZealotRush), McRave’s strategy at 3 levels of abstraction (PoolHatch,Overpool,2HatchMuta), timings, and unit counts. I want to look more closely at the game records and see how they are used (maybe they are only logs for the author).
6. Microwave: Result and history files for each opponent that look similar to last year’s. The result files count wins and losses for each Microwave strategy, and no longer limit the counts to 10—apparently Microwave no longer deliberately forgets history. The history files have a one-line record of data about each game and look the same as last year. Also pre-learned history files for all 12 opponents.
7. Steamhammer: Steamhammer’s learning file format is documented here.
8. DaQin: Carried over from last year. Learning files straight from its parent Locutus (very similar to the old-format Steamhammer files). There is no visible pre-learned data (in a quick check I also found no opponent-specific code).
9. ZZZKBot: Learning files for each opponent that look the same as last year’s, with detailed but hard-to-interpret information about each game.
10. UAlbertaBot: Carried over from past years. For each opponent, a file listing strategies with win and loss counts for each.
11. WillyT: A single log file with 150 lines, apparently giving data for 150 games against various opponents. Each line looks like 20201009,Ecgberht,T,01,0. The items look like date, opponent, opponent race, a number 01 02 or 03, and win/loss. There were 150 rounds in the tournament, so maybe this is a log of one game per round—the dates seem to back that up, but if so, how is the single game chosen? Is it the last one played? This is either broken, or else it is doing something I can’t fathom. (A tally sketch for this format follows the list.)
12. Ecgberht: Two files for each opponent, named like Dragon_Terran.json and Dragon_Terran-History.json. The plain file counts wins and losses of each of Ecgberht’s strategies separately for each map size (number of starting locations: 2, 3, or 4). (The map size breakdown is similar to AIUR’s.) There is also an overall win/loss count, plus flags named naughty and defendHarass. Of all bots in the tournament, only ZZZKBot is flagged naughty, so maybe it means the opponent likes fast rushes. defendHarass tells whether the opponent defends its workers if Ecgberht’s scouting SCV attacks them (that way it can exploit weak opponents without risking its SCV against prepared ones). The history file is a list of game records, giving opponent name, opponent race, game outcome, Ecgberht’s strategy, the map, and the opponent’s recognized strategy (which is often Unknown).
13. EggBot: Nothing. EggBot is the only entrant other than Stardust to record no data.
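
Since WillyT’s log is plain CSV, it is easy to poke at. Here is a minimal tally sketch in Python, assuming the field meanings guessed above and assuming the last field is 1 for a win and 0 for a loss; the filename is made up:

    import csv
    from collections import Counter

    wins, games = Counter(), Counter()

    with open("willyt_log.txt") as f:             # hypothetical filename
        for row in csv.reader(f):
            if len(row) != 5:
                continue                          # skip anything malformed
            date, opponent, race, code, result = row
            games[opponent] += 1
            wins[opponent] += int(result)         # assuming 1 = win, 0 = loss

    for opponent in sorted(games):
        print(f"{opponent}: {wins[opponent]}/{games[opponent]} wins")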

In recent years, nearly all top bots have relied on opening learning to adapt to their opponents. The strongest bot without learning was Iron, which came in #1 in AIIDE 2016 and slipped down the ranks until it fell to #8 in AIIDE 2019, scoring under 50%. Stardust is the only high finisher since then to get by without it. Stardust plays with a restricted set of units: only zealots and dragoons, with observers as needed. On the one hand, that shows the value of specializing and becoming extremely skilled at the most important aspects of the game (the opposite of Steamhammer’s development strategy). On the other hand, it points out how much headroom all bots have to improve.


Comments

McRave on :

My info file started as an output for debugging purposes, but is fairly useful in determining whether my strategy detection is functional. The 3 timings listed after my strategy are the times, in minutes:seconds, at which I detected it. I use this alongside my detection code to observe whether the timings/characteristics are correct.

In addition to strategy detections, it lets me observe map strengths/weaknesses and investigate each one individually.

In theory, this info could be fed back into my bot for my own strategy selection as well as for strategy expectations of the enemy. However, with the number of "unknown" builds/openers/transitions it detects, it's not very useful right now. Those will need to be filled in over time.

Jay Scott on :

Hmm, writing a log in CSV format suggests that you have tooling to analyze it....

Bruce on :

I've found it refreshing to work without opening learning, as I was definitely using it in Locutus as a crutch to avoid doing necessary underlying work on stuff like worker defense or reacting to scouting information. While it of course worked to a certain extent, it also resulted in a lot of embarrassing losses from exploring builds that only work in very specific situations (like opening three cannons in the main against a fast expand).

On the other hand, the lack of build variation means Stardust is extremely vulnerable to being countered (especially now that it is "out in the wild" for local testing), so obviously I need to move towards some middle ground. I haven't decided what that will be, though I'm fairly certain it won't be a large selection of learned openings. To start with, it might just be randomization of timings, then later on moving towards learned reactions, possibly with some variation in the opening (greedy vs. normal vs. aggressive). We'll see when I get time to work on it!

Jay Scott on :

Just so. You are following a principled approach. We already saw a game in an SSCAIT broadcast where Hao Pan countered the dragoons logically with mass tanks. I notice that that’s not something Stardust needs learning to counter; it only needs standard PvT skills.

Bruce on :

Yes, Stardust's PvT especially is weaker at the moment, as there were few Terran opponents in CoG and AIIDE, and all of them were just barely beatable with brute-force dragoons. Rounding this out is near the top of the to-do list for the SSCAIT tournament, both with a better unit mix and with skills (e.g. handling spider mines).

Similarly I want to add some corsair play in PvZ and shuttle/reaver in all matchups "soon", but as for what "soon" means in calendar months, your guess is as good as mine :)

Jay Scott on :

Here is an idea to consider: Use learning solely to predict the enemy’s play or range of possibilities, not to directly make any decision. Then react to the prediction—react to the expected future.
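
In toy form it might look like this (the strategy names follow BananaBrain’s fingerprint style; the counts and payoff numbers are invented for illustration):

    from collections import Counter

    # Openings this opponent has shown in past games (invented counts).
    history = Counter({"Z_9pool": 4, "Z_12hatch": 1})

    def predict(history):
        # Turn raw counts into a probability distribution over enemy openings.
        total = sum(history.values())
        return {opening: n / total for opening, n in history.items()}

    # Estimated win rate of each prepared reaction vs. each opening (made up).
    payoff = {
        "defend_then_expand": {"Z_9pool": 0.8, "Z_12hatch": 0.5},
        "fast_expand":        {"Z_9pool": 0.3, "Z_12hatch": 0.7},
    }

    prediction = predict(history)
    best = max(payoff, key=lambda reaction: sum(
        p * payoff[reaction].get(opening, 0.5)
        for opening, p in prediction.items()))
    print(best)  # reacts to the expected future, not to a raw win/loss tally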

Bruce on :

Yes, that would also be very useful, and the whole area of strategy recognition is something I want to dig a bit more into.

Right now Stardust is mostly using unit counts at specific frame breakpoints to determine the enemy strategy, which is both time-consuming to implement and risky (enemies can find timings to either avoid a necessary reaction or trigger an overreaction). Perhaps an economic model is more appropriate (tracking how much the enemy has spent to determine what it could be building).
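
As a back-of-envelope version of that economic-model idea (every name and number below is a placeholder, not real game data or anything Stardust actually does):

    # All constants are rough placeholders, not real game data.
    COST = {"Marine": 50, "Factory": 300, "Siege Tank": 250}  # minerals+gas, approximate

    def unseen_budget(frame, scouted, workers=12):
        # Estimated resources gathered so far: ~1 mineral/second per worker,
        # at roughly 24 frames per second (both assumed).
        income = workers * (1 / 24) * frame
        spent_on_seen = sum(COST.get(unit, 0) * n for unit, n in scouted.items())
        # Whatever income isn't accounted for by scouted units bounds what
        # the enemy could have built unseen.
        return max(0.0, income - spent_on_seen)

    # If the leftover budget covers a factory plus several tanks, a tank
    # timing is a live possibility and worth scouting for.
    print(unseen_budget(frame=8000, scouted={"Marine": 8, "Factory": 1}))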

Jay Scott on :

In my view, an economic model is a necessary part of any forecasting. To understand the game is to model the game, whether explicitly or implicitly, and to model the game you need to model its important aspects.

Dan on :

Part of what amazes me about Stardust is its ability to win even when hard countered. Opening DT expand against 4-Gate-plus-forge is a free ticket to a 20-supply advantage, but Stardust positions and fights so effectively that it's not enough to win, at least for my bot.

Execution matters most. Especially now that top bots are less trivially exploitable than in years past, execution trumps strategy.

A few bots over the years have really demonstrated the strength and viability of relying on accurate reactions over build-order roulette to succeed, as humans do. I tend to invest in both, but really, one to three excellent reactive builds is plenty for bots at current levels of play.

Jay Scott on :

It is not the mainstream, but there is a long tradition of bots which get by with one basic build and win with accurate reactions and superior execution compared to their contemporaries: ICEBot (last updated 2013), Iron, SAIDA. The earliest example may be the Berkeley Overmind—though back then, bots were simpler and I think most had only one build.

I’m reminded of stories I’ve read about people arriving in Korea to aim for pro status: A typical element was “First you have to train up your speed.”

Tully Elliston on :

> Execution matters most. Especially now that top bots are less trivially exploitable than in years past, execution trumps strategy.

I think both matter, but strategy is lower-hanging fruit than execution.

My interpretation is that execution only makes a quantum leap in most of the bots when another bot introduces a new "minimum level" of execution that other bots must meet to beat it.

MicroDK on :

You are not right about the history files of Microwave. They are not the same as last year. I have added two extra data points: expected opponent strategy and recognized opponent strategy.

For now it is very much based on Steamhammer, but I intend to expand it to more specific strategies.

Jay Scott on :

Later I parsed the history files and could not help noticing the difference between v2.1 and v2.7 history lines. So that’s what the two strategy names are.... Thanks!
