AIIDE 2018 - what data the bots wrote
As usual, here is my examination of what each bot kept in its AI
directory to read at startup, and what it wrote into its write
directory for learning and/or debugging. The AI
directory is not the only place a bot might keep prepared data; some bots have configuration files, and the binary might contain anything. This time I left out the up/down arrows. The performance curves seem more complicated than in CIG, and I want to look at them separately. Having files doesn’t mean that the files are used; they might be sitting there unread.
# | bot | info |
---|---|---|
1 | SAIDA | SAIDA stored three classes of files: 131 DefeatResult files (though officially it lost 106 games and timed out 8 times), 18 Error files, and 229 Timeout files. The DefeatResult files are 33 to 80 lines long and have nicely-formatted readable information, including the enemy’s build order history with timings, plus unit counts and unit loss counts for both sides. I expect that the enemy build timings are key information for the learning mechanism. The error files range from 2 to 2500 lines long and report internal errors that the bot presumably was able to ignore or recover from. The timeout files report when specific managers ran over their time limits. |
2 | CherryPi | CherryPi has a couple of larger files in AI, 77MB and 3MB, which are likely offline machine learning data; CherryPi’s survey answers mention offline learning. In the write directory it wrote a JSON file for each opponent. The JSON file gives a list of the build orders CherryPi played, and for each build order, a list of booleans under the name “wins_” that looks like the win/loss history. It’s interesting that they record the sequence of wins and losses, not simply the counts. It suggests that their learning method watches for when the opponent figures something out and starts to perform better. It’s also interesting that the build given as having been played most often versus SAIDA is “zvt3hatchlurker”, which does not seem appropriate versus SAIDA’s mech play, but does claim more wins than the alternatives tried. In the files I checked, the total number of win/loss booleans is slightly over 100, the official number of games played. It looks like the tournament manager played 103 rounds before time ran out, then the results were pruned back to 100 rounds so that the maps were equally used. |
3 | CSE | Log file and learning data that looks like that of Locutus. |
4 | BlueBlueSky | Log file and learning data that looks like that of Locutus. |
5 | Locutus | Log file and learning data that... is that of Locutus, not very different from Steamhammer data. Locutus also has pre-learned data for 11 opponents, 2 of which have 2 names. |
6 | ISAMind | Log file and learning data that looks like that of Locutus. Also ISAMind’s machine learning data. |
7 | DaQin | Log file and learning data that looks like that of Locutus, except that DaQin stores data about only one game per opponent, although the survey answers say differently. Was something broken for this tournament? If so, it doesn’t show in DaQin’s win rate, which is about as expected. |
8 | McRave | For each opponent, a file listing the 15 protoss strategies that McRave could play, with 2 numbers that look like wins/losses. The numbers sometimes add up to 100 or so, but some are lower; McRave is listed with 83 crashes and 120 frame timeouts, which likely explains the shortfall. |
9 | Iron | Nothing. #9 Iron is the highest-ranked bot which wrote no learning data. |
10 | ZZZKBot | Looks about the same as last year’s format. Even the timestamps say 2017. |
11 | Steamhammer | Steamhammer’s familiar data, game records with obscure timing numbers. |
12 | Microwave | As before, a file listing 7 or 8 strategies and win/loss counts for each, limited to a max count of 10. |
13 | LastOrder | Machine learning data in AI, but no online learning data, only a 2-byte file log_detail_file. |
14 | Tyr | For each opponent, a 1 to 4 line file apparently telling whether the previous game was a win or a loss, a small integer, and the strategy Tyr followed, possibly with a few following items named “flags”. |
15 | MetaBot | In AI/learning, a file for each of Skynet, UAlbertaBot, and XIMP, with 91 numbers in each file. 91 is the count of parameters that AIUR learns, and AIUR itself has the same 3 files, so this is AIUR's old pre-learned data about these 3 opponents. In write, a mess of mostly log files, but also with apparent learning data per opponent. states_* files list which head was played for some games against each opponent; this is probably log data, but could also be used for learning. skynet_* files per opponent look like Skynet learning data, no doubt for games where the Skynet head played. [opponent].txt files are the 91 numbers, likely learning data from when the AIUR head played. So there are 2 levels of learning here: learning which head should play, and learning inside that head. |
16 | LetaBot | A 619-line file battlescore.txt with 103 game records of 6 lines each, which I think is one record for each round played (though only 100 rounds were official). It could be a log file or learning data. |
17 | Arrakhammer | Nothing. |
18 | Ecgberht | Nothing. The author has explained that learning did not work due to an incorrect run_proxy.bat file. |
19 | UAlbertaBot | The familiar UAlbertaBot format. For each opponent, a file listing 11 opening strategies with a win/loss count for each. |
20 | XIMP | Nothing. |
21 | CDBot | Nothing. |
22 | AIUR | A carryover from past years. Pre-learned data against 3 old opponents (as already mentioned under MetaBot), plus for each opponent, the familiar 91 lines of numbers. |
23 | KillAll | KillAll is a Steamhammer fork, but it uses a different learning file format. There is a file for each opponent+map combination. It looks like each file gives a game count (usually 10), a chosen opening or “None”, and a list of 8 openings with 3 numbers for each; the last number is floating point. I guess I have to read the code to find out what the numbers mean. |
24 | WillyT | A log file with 103 lines, presumably 1 per round played. |
25 | AILien | AILien's idiosyncratic learning file format. One file per opponent, with numbers saying what units are preferred and a few odds and ends. It looks as though AILien saved data for only 1 game per opponent. If this is the same version of AILien that I looked at earlier, then I expect learning was turned off and the recorded data was not used. |
26 | CUNYBot | In AI, a file output.txt with a list of build orders and some data on each one. In write, 487 files in these groups: output.txt, an apparent log file with 103 lines; [map]_v_[opponent]_status.txt files, which look like detailed information per game with a variety of hard-to-understand values; and 226 files [map]Veins([x],[y]), mostly over 200K lines per file, where the (x,y) values are too large to be tile positions and too small to be pixel positions (so I guess they are "Veins"). It looks complex. |
27 | Hellbot | Nothing. |
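CherryPi's per-build win/loss sequences invite a recency-based chooser. The sketch below is my guess at how such a file could be consumed, assuming only the structure described above (a JSON object mapping build names to records with a "wins_" list of booleans, oldest first); the function name, window size, and fallback prior are mine, not CherryPi's.

```python
import json

def pick_build(path, window=10):
    """Choose a build order from a CherryPi-style history file.

    Scores each build by its win rate over the most recent `window`
    games, so that a build the opponent has figured out loses its
    appeal quickly. This is an illustrative sketch, not CherryPi's
    actual selection code.
    """
    with open(path) as f:
        history = json.load(f)
    best, best_rate = None, -1.0
    for build, record in history.items():
        recent = record["wins_"][-window:]  # most recent games only
        rate = sum(recent) / len(recent) if recent else 0.5  # optimistic prior
        if rate > best_rate:
            best, best_rate = build, rate
    return best
```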
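Several of the formats above (UAlbertaBot, Steamhammer, Microwave) boil down to per-opening win/loss counts, which is exactly the input a multi-armed bandit rule wants. Here is a minimal UCB1 sketch over such counts; the constant c and the function itself are illustrative, not taken from any bot's code.

```python
import math

def ucb1_choose(records, c=0.7):
    """Pick an opening by UCB1 from {name: (wins, losses)} records.

    Unplayed openings are tried first; otherwise each opening gets
    its empirical win rate plus an exploration bonus that shrinks
    as the opening accumulates games.
    """
    total = sum(w + l for w, l in records.values())
    best, best_score = None, -1.0
    for name, (wins, losses) in records.items():
        n = wins + losses
        if n == 0:
            return name  # always try an unplayed opening first
        score = wins / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = name, score
    return best
```

The exploration term is why a bot can recover after its favorite opening goes stale: an opening that stops being played sees its bonus grow relative to the others.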
Lesson: Learn about your opponent! All the winning kids are doing it!
Some interesting and some complicated stuff here. As with CIG, I’ll be looking at what the different bots learned. This time it should be more informative.
Comments
CUNYBot on :
[map]_v_[opponent]_status.txt is something stored for a potential ML project. Roughly the game state from CUNYBot's perspective every 3 seconds.
Tully Elliston on :
I just won with a macro build order? Great, next game: 4Pool!
MarcoDBAA on :
PurpleWave finally found out and started to win many games vs. him (also uses carriers late) by using DTs. PW's learning seems to be really robust in general; the bot doesn't seem to need updates at the moment to stay near the top.
Tyr on :
The flags it records are a primitive form of opponent plan recognition. If it thinks it recognizes something it will write that to a file and this can affect the strategy selection for the next game.