AIIDE 2020 - what data the bots wrote
I looked in each bot’s final write directory to see what files it wrote, if any, and in its AI directory to see if it had prepared data for any opponents. Standard disclaimers apply: A bot does not necessarily use the data it writes, and preparation for specific opponents is not necessarily in the form of data in the AI directory; it might be in code.
# | bot | info |
---|---|---|
1 | Stardust | Nothing. Stardust relies on its great execution. |
2 | PurpleWave | The learning files have a sequence of PurpleWave’s strategy choices followed by a sequence of “fingerprinted” enemy strategies. (PurpleWave also has specific preparation for its opponents, but that’s in code rather than data.) There are also debug logs that show some decisions, but are probably only for the author. |
3 | BananaBrain | The learning files look just like last year’s: One file for each opponent in the form of brief records of results. Each record consists of date+time, map, BananaBrain’s strategy (“PvZ_9/9proxygate”), the opponent’s recognized strategy (“Z_9pool”), a floating point number which we were told last year is the game duration in minutes, and the game result. Pre-learned data for 6 opponents, with the largest file by far for Stardust. Maybe if you have pegged your opponent as having a narrow range of adaptation, you don’t have to leave room for surprises. |
4 | Dragon | Very simple game records with strategy and game result, like "siege expand" won. |
5 | McRave | Two files for each opponent, named like ZvU UAlbertaBot.txt and ZvU UAlbertaBot Info.txt. The first file is short and counts wins and losses overall and for each of McRave’s strategies. The info file (now working correctly, unlike last year) has detailed game records with aspects of the opponent’s strategy (2Gate,Main,ZealotRush), McRave’s strategy at 3 levels of abstraction (PoolHatch,Overpool,2HatchMuta), timings, and unit counts. I want to look more closely at the game records and see how they are used (maybe they are only logs for the author). |
6 | Microwave | Result and history files for each opponent that look similar to last year’s. The result files count wins and losses for each Microwave strategy, and no longer limit the counts to 10—apparently Microwave no longer deliberately forgets history. The history files have a one-line record of data about each game and look the same as last year. Also pre-learned history files for all 12 opponents. |
7 | Steamhammer | Steamhammer’s learning file format is documented here. |
8 | DaQin | Carried over from last year. Learning files straight from its parent Locutus (very similar to the old format Steamhammer files). There is no visible pre-learned data (in a quick check I also found no opponent-specific code). |
9 | ZZZKBot | Learning files for each opponent that look the same as last year, with detailed but hard-to-interpret information about each game. |
10 | UAlbertaBot | Carried over from past years. For each opponent, a file listing strategies with win and loss counts for each. |
11 | WillyT | A single log file with 150 lines apparently giving data for 150 games against various opponents. Each line looks like 20201009,Ecgberht,T,01,0. The items look like date, opponent, opponent race, a number 01, 02, or 03, and win/loss. There were 150 rounds in the tournament, so maybe this is a log of one game per round—the dates seem to back that up, but if so, how is the single game chosen? Is it the last one played? This is either broken, or else it is doing something I can’t fathom. |
12 | Ecgberht | Two files for each opponent, named like Dragon_Terran.json and Dragon_Terran-History.json. The plain file counts wins and losses of each of Ecgberht’s strategies separately for each map size (number of starting locations, 2, 3, or 4). (The map size breakdown is similar to AIUR’s.) There is also an overall win/loss count, plus flags named naughty and defendHarass. Of all bots in the tournament, only ZZZKBot is flagged naughty, so maybe it means the opponent likes fast rushes. defendHarass tells whether the opponent defends its workers if Ecgberht’s scouting SCV attacks them (that way it can exploit weak opponents without risking its SCV against prepared ones). The history file is a list of game records, giving opponent name, opponent race, game outcome, Ecgberht’s strategy, the map, and the opponent’s recognized strategy (which is often Unknown). |
13 | EggBot | Nothing. EggBot is the only entrant other than Stardust to record no data. |
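WillyT’s line format is simple enough to sketch a parser for. This is a minimal sketch, with the field names and the win/loss encoding guessed from the description above (the meaning of the 01–03 field is unknown):

```python
# Sketch of a parser for WillyT-style log lines. Field order follows
# the description in the table: date, opponent, opponent race, a
# mystery number 01-03, and win/loss. All field names are my guesses.
from dataclasses import dataclass

@dataclass
class GameRecord:
    date: str        # e.g. "20201009" (looks like YYYYMMDD)
    opponent: str    # e.g. "Ecgberht"
    race: str        # "T", "P", or "Z"
    mystery: int     # 1, 2, or 3 - meaning unknown
    won: bool        # assumed: 1 = win, 0 = loss

def parse_line(line: str) -> GameRecord:
    date, opponent, race, mystery, result = line.strip().split(",")
    return GameRecord(date, opponent, race, int(mystery), result == "1")

record = parse_line("20201009,Ecgberht,T,01,0")
# record.won is False, record.mystery is 1
```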
In recent years, nearly all top bots have relied on opening learning to adapt to their opponents. The strongest bot without learning was Iron, which came in #1 in AIIDE 2016 and slipped down the ranks until it fell to #8 in AIIDE 2019, scoring under 50%. Stardust is the only high finisher since then to get by without it. Stardust plays with a restricted set of units, only zealots and dragoons, with observers as needed. On the one hand, that shows the value of specializing and becoming extremely skilled at the most important aspects of the game (the opposite of Steamhammer’s development strategy). On the other hand, it points out how much headroom all bots have to improve.
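The simplest form of opening learning described above—per-opponent win/loss counts for each strategy, as in the UAlbertaBot and Microwave files—can be sketched as a greedy chooser with a little exploration. The epsilon-greedy rule here is my own illustration, not any particular bot’s actual logic:

```python
# Sketch of opening learning from per-opponent win/loss counts.
# counts maps opening name -> (wins, losses) against one opponent.
# Epsilon-greedy exploration is an assumption for illustration.
import random

def choose_opening(counts: dict[str, tuple[int, int]],
                   epsilon: float = 0.1) -> str:
    if random.random() < epsilon:          # occasionally explore
        return random.choice(list(counts))
    def win_rate(name: str) -> float:
        wins, losses = counts[name]
        games = wins + losses
        return wins / games if games else 1.0  # untried openings look promising
    return max(counts, key=win_rate)       # otherwise play the best so far

counts = {"9pool": (3, 7), "12hatch": (6, 2), "4pool": (0, 0)}
# with epsilon=0.0 this picks "4pool", the untried opening
```

One design question every learning bot faces is visible in the table above: Microwave used to cap counts at 10 games to forget stale history, while BananaBrain ships large pre-learned files instead.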
Comments
McRave on :
In addition to strategy detections, it lets me observe map strengths/weaknesses and investigate each one individually.
In theory, this info could be fed back into my bot for my own strategy selection as well as strategy expectations of the enemy. However, with the amount of "unknown" builds/openers/transitions it detects, it's not very useful right now. Those will need to be filled in over time.
Jay Scott on :
Bruce on :
On the other hand, the lack of build variation means Stardust is extremely vulnerable to being countered (especially now that it is "out in the wild" for local testing), so obviously I need to move towards some middle-ground. I haven't decided what that will be, though I'm fairly certain it won't be a large selection of learned openings. To start with it might just be randomization of timings, then later on moving towards learned reactions, possibly with some variation in the opening (greedy vs. normal vs. aggressive). We'll see when I get time to work on it!
Jay Scott on :
Bruce on :
Similarly I want to add some corsair play in PvZ and shuttle/reaver in all matchups "soon", but as for what "soon" means in calendar months, your guess is as good as mine :)
Jay Scott on :
Bruce on :
Right now Stardust is mostly using unit counts at specific frame breakpoints to determine the enemy strategy, which is both time-consuming to implement and risky (enemies can find timings to either avoid a necessary reaction or trigger an overreaction). Perhaps an economic model is more appropriate (tracking how much the enemy has spent to determine what it could be building).
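Bruce’s economic model could look something like this sketch: estimate the enemy’s total income from game time and worker count, subtract the cost of everything scouted, and treat the remainder as a bound on unseen production. The mining rate here is a rough assumption, not a measured value:

```python
# Sketch of the economic model idea: bound what the enemy could have
# built out of sight from what we have seen them spend.
# The per-worker mining rate is a crude assumption, not a game constant.

MINERALS_PER_WORKER_PER_MINUTE = 60  # rough saturated rate (assumption)
STARTING_MINERALS = 50               # standard starting bank

def unaccounted_minerals(minutes: float, avg_workers: float,
                         seen_spending: int) -> int:
    """Upper bound on minerals spent on things we have not scouted,
    given game time, an estimate of the enemy's average worker count,
    and the mineral cost of everything already seen."""
    income = STARTING_MINERALS + avg_workers * MINERALS_PER_WORKER_PER_MINUTE * minutes
    return max(0, round(income) - seen_spending)

# A large unaccounted total suggests unscouted production or tech.
```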
Jay Scott on :
Dan on :
Execution matters most. Especially now that top bots are less trivially exploitable than in years past, execution trumps strategy.
A few bots over the years have really demonstrated the strength and viability of relying on accurate reactions over build order roulette to succeed. As humans do. I tend to invest in both but really one to three excellent reactive builds is plenty for bots at current levels of play.
Jay Scott on :
I’m reminded of stories I’ve read about people arriving in Korea to aim for pro status: A typical element was “First you have to train up your speed.”
Tully Elliston on :
I think both matter, but strategy is a lower hanging fruit than execution.
My interpretation is that execution only makes a quantum leap in most of the bots when another bot introduces a new "minimum level" of execution that other bots must meet to beat it.
MicroDK on :
For now it is very much based on Steamhammer, but I intend to expand it to more specific strategies.
Jay Scott on :