
AIIDE 2018 - what data the bots wrote

As usual, here is my examination of what each bot kept in its AI directory to read at startup, and what it wrote into its write directory for learning and/or debugging. The AI directory is not the only place a bot might keep prepared data; some bots have configuration files, and the binary might contain anything. This time I left out the up/down arrows. The performance curves seem more complicated than in CIG, and I want to look at them separately. Having files doesn’t mean that the files are used; they might be sitting there unread.

#1 SAIDA: SAIDA stored three classes of files: 131 DefeatResult files (though officially it lost 106 games and timed out 8 times), 18 Error files, and 229 Timeout files. The DefeatResult files are 33 to 80 lines long and have nicely-formatted readable information, including the enemy's build order history with timings, and unit counts and unit loss counts for both sides. I expect that the enemy build timings are key information for the learning mechanism. The Error files range from 2 to 2500 lines and report internal errors that the bot presumably was able to ignore or recover from. The Timeout files report when specific managers ran over their time.
#2 CherryPi: CherryPi has a couple of larger files in AI, 77MB and 3MB, which are likely offline machine learning data; CherryPi's survey answers mention offline learning. In the write directory it wrote a JSON file for each opponent. The JSON file gives a list of the build orders CherryPi played and, for each build order, a list of booleans under the name "wins_" that looks like the win/loss history. It's interesting that they keep the sequence of wins and losses, not simply the counts. It suggests that their learning method watches for when the opponent figures something out and starts to perform better. It's also interesting that the build given as having been played most often versus SAIDA is "zvt3hatchlurker", which does not seem appropriate versus SAIDA's mech play, but it does claim more wins than the alternatives tried. In the files I checked, the total number of win/loss booleans is slightly over 100, though the official number of games played is 100. It looks like the tournament manager played 103 rounds before time ran out, and the results were then pruned back to 100 rounds so that the maps were equally used. (A sketch of reading these files follows the list.)
#3 CSE: Log file and learning data that looks like that of Locutus.
#4 BlueBlueSky: Log file and learning data that looks like that of Locutus.
#5 Locutus: Log file and learning data that... is that of Locutus, not very different from Steamhammer data. Locutus also has pre-learned data for 11 opponents, 2 of which have 2 names.
#6 ISAMind: Log file and learning data that looks like that of Locutus. Also ISAMind's machine learning data.
#7 DaQin: Log file and learning data that looks like that of Locutus, except that DaQin stored data for only one game per opponent, although the survey answers say differently. Was something broken for this tournament? If so, it doesn't show in DaQin's win rate, which is about as expected. (A commenter below points to the likely cause: the read directory was misconfigured.)
#8 McRave: For each opponent, a file listing the 15 protoss strategies that McRave could play, with 2 numbers that look like wins/losses. The numbers sometimes add up to 100 or so, but some are lower; McRave is listed with 83 crashes and 120 frame timeouts, which likely explains the missing games.
#9 Iron: Nothing. Iron is the highest-ranked bot that wrote no learning data.
#10 ZZZKBot: Looks about the same as last year's format. Even the timestamps say 2017.
#11 Steamhammer: Steamhammer's familiar data, game records with obscure timing numbers.
#12 Microwave: As before, a file listing 7 or 8 strategies with win/loss counts for each, each count capped at a maximum of 10 (a guess at the update rule follows the list).
#13 LastOrder: Machine learning data in AI, but no online learning data, only a 2-byte file log_detail_file.
#14 Tyr: For each opponent, a 1 to 4 line file apparently telling whether the previous game was a win or a loss, a small integer, and the strategy Tyr followed, possibly with a few following items named "flags".
#15 MetaBot: In AI/learning, a file for each of Skynet, UAlbertaBot, and XIMP, with 91 numbers in each file. 91 is the count of parameters that AIUR learns, and AIUR itself has the same 3 files, so this is AIUR's old pre-learned data about these 3 opponents. In write, a mess of mostly log files, but also apparent learning data per opponent: states_* files list which head was played for some games against each opponent (probably log data, but it could also be used for learning); skynet_* files per opponent look like Skynet learning data, no doubt for games where Skynet played; [opponent].txt files are the 91 numbers, likely learning data from when AIUR played. So there are 2 levels of learning here: learning which head should play, and learning inside that head (sketched after the list).
#16 LetaBot: A 619-line file battlescore.txt with 103 game records of 6 lines each, which I think is one record for each round played (though only 100 rounds were official). It could be a log file or learning data.
#17 Arrakhammer: Nothing.
#18 Ecgberht: Nothing. The author has explained that learning did not work due to an incorrect run_proxy.bat file.
#19 UAlbertaBot: The familiar UAlbertaBot format. For each opponent, a file listing 11 opening strategies with a win/loss count for each (a selection sketch follows the list).
#20 XIMP: Nothing.
#21 CDBot: Nothing.
#22 AIUR: A carryover from past years. Pre-learned data against 3 old opponents (as already mentioned under MetaBot), plus, for each opponent, the familiar 91 lines of numbers.
#23 KillAll: KillAll is a Steamhammer fork, but it uses a different learning file format: one file for each opponent+map combination. Each file seems to give a game count (usually 10), a chosen opening or "None", and a list of 8 openings with 3 numbers each; the last number is floating point. I'll have to read the code to find out what the numbers mean.
#24 WillyT: A log file with 103 lines, presumably 1 per round played.
#25 AILien: AILien's idiosyncratic learning file format. One file per opponent, with numbers saying what units are preferred and a few odds and ends. It looks as though AILien saved data for only 1 game per opponent. If this is the same version of AILien that I looked at earlier, then I expect learning was turned off and the recorded data was not used.
#26 CUNYBot: In AI, a file output.txt with a list of build orders and some data on each one. In write, 487 files in these groups: output.txt, an apparent log file with 103 lines; [map]_v_[opponent]_status.txt, which looks like detailed information per game with a variety of hard-to-understand values; and 226 files [map]Veins([x],[y]), mostly over 200K lines per file, where the (x,y) values are too large to be tile positions and too small to be pixel positions (so I guess they are "Veins"). It looks complex. (The author explains the Veins files in the comments below; see the sketch after the list.)
#27 Hellbot: Nothing.
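
To make a few of these mechanisms concrete, here are some sketches in Python. First, CherryPi's per-opponent files. The layout below (build order name mapping to a record that holds the "wins_" list) is my reading of the data, not anything documented, and the file name is a placeholder:

```python
import json

# Summarize a CherryPi per-opponent JSON file. Assumed layout: build
# order name -> record with a "wins_" list of booleans in game order.
# Only the "wins_" key is something I actually saw; the rest is guesswork.
with open("SAIDA.json") as f:
    data = json.load(f)

for build, record in data.items():
    wins = record["wins_"]
    if wins:
        print(f"{build}: {sum(wins)}/{len(wins)} wins ({sum(wins) / len(wins):.0%})")
    else:
        print(f"{build}: no games recorded")
```

Keeping the full sequence rather than just counts means a reader of the file can notice a run of recent losses, which fits the idea that CherryPi watches for the opponent adapting.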
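Microwave's cap is simple but meaningful: bounding each count at 10 limits how much weight accumulated history can carry. Here is a guess at the update rule, based on the look of the files rather than Microwave's code:

```python
MAX_COUNT = 10  # each win/loss count in the file tops out at 10

def record_result(wins: int, losses: int, won: bool) -> tuple[int, int]:
    # Clamp at the cap so no strategy's record grows without bound.
    if won:
        wins = min(wins + 1, MAX_COUNT)
    else:
        losses = min(losses + 1, MAX_COUNT)
    return wins, losses
```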
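MetaBot's 2 levels of learning can be sketched like this. The head names are guessed from the file names, and the epsilon-greedy selection rule is purely illustrative; I haven't checked what MetaBot actually uses:

```python
import random

# Two-level learning sketch: an outer learner picks which head plays,
# and each head keeps its own inner learning data (AIUR's 91 numbers,
# Skynet's own files, and so on). Head names guessed from the files;
# there may be others.
heads = {
    "AIUR":   {"wins": 0, "games": 0},
    "Skynet": {"wins": 0, "games": 0},
}

def choose_head(epsilon: float = 0.1) -> str:
    untried = [name for name, h in heads.items() if h["games"] == 0]
    if untried:
        return random.choice(untried)      # try every head at least once
    if random.random() < epsilon:
        return random.choice(list(heads))  # occasional exploration
    return max(heads, key=lambda k: heads[k]["wins"] / heads[k]["games"])

head = choose_head()
won = True  # stand-in for the real game result
heads[head]["games"] += 1
heads[head]["wins"] += int(won)
# The chosen head then updates its own learning files too.
```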
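For win/loss tables like UAlbertaBot's, a standard selection rule is UCB1, which trades off exploiting the best record against retrying undersampled openings. This is the textbook formula, not necessarily UAlbertaBot's actual rule, and the opening names are invented:

```python
import math

def pick_opening(records: dict[str, tuple[int, int]]) -> str:
    # records maps opening name -> (wins, losses).
    total = sum(w + l for w, l in records.values())
    def ucb1(name: str) -> float:
        w, l = records[name]
        n = w + l
        if n == 0:
            return float("inf")  # always try an untried opening first
        return w / n + math.sqrt(2.0 * math.log(total) / n)
    return max(records, key=ucb1)

# Hypothetical records for illustration.
print(pick_opening({"9-9gate": (3, 1), "dt-rush": (0, 2), "carriers": (1, 1)}))
```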
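Finally, CUNYBot's Veins files: the author explains in the comments below that they are cached flood fills of the map's minitiles, storing paths to every major start position or expansion so they need not be recalculated. The general technique is a breadth-first distance field; this is a generic sketch, not CUNYBot's code:

```python
from collections import deque

def flood_fill(walkable: list[list[bool]], target: tuple[int, int]) -> list[list[int]]:
    # BFS distance field: dist[y][x] = steps from (x, y) to the target,
    # or -1 if unreachable. Compute once per target, then cache to disk.
    h, w = len(walkable), len(walkable[0])
    dist = [[-1] * w for _ in range(h)]
    tx, ty = target
    dist[ty][tx] = 0
    queue = deque([(tx, ty)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and walkable[ny][nx] and dist[ny][nx] < 0:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist
```

Caching pays because a full BFS over every minitile is expensive, which matches the author's remark that it is the biggest CPU sink in the bot.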

Lesson: Learn about your opponent! All the winning kids are doing it!

Some interesting and some complicated stuff here. As I did for CIG, I'll be looking at what different bots learned. This time it should be more informative.


Comments

McRave on :

Yay learning analysis time with Jay! Actually probably my favorite section you do each tournament.

CUNYBot on :

The MapVeins files are a full flood-fill of the map's minitiles in order to get to position XY. I keep paths to every single major start position or expo on the map and load them instead of recalculating them. It's the biggest CPU sink in my script by a wide wide margin.

[map]_v_[opponent]_status.txt is something stored for a potential ML project. Roughly the game state from CUNYBot's perspective every 3 seconds.

LetaBot on :

Because I am still working on my SC2 bot, my AIIDE entry was the same as 2017. So there is no learning, just some data it writes to analyze manually later.

Proxy on :

In DaQin's .json file the read directory is set to "bwapi-data/write/". This would explain why it only records one game. I imagine it was set to this to make it easier to test in chaoslauncher but then the author forgot to change it back. Last time I checked the SSCAIT version also has this bug.

Tully Elliston on :

I'm looking forward to the next meta, where bots begin to change their build orders and strategies counter-intuitively to bamboozle learning bots.

I just won with a macro build order? Great, next game: 4Pool!

Dilyan on :

BananaBrain, I think, does that.

Jay Scott on :

Steamhammer also does that. If it recognizes that you play more than one strategy against it, it does not respond with what it thinks is the strongest answer. It chooses randomly, preferring stronger answers.
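
In sketch form, the idea is a weighted random draw over the answers' estimated strength (simplified for illustration, not the actual Steamhammer code):

```python
import random

def choose_mixed(win_rates: dict[str, float]) -> str:
    # Weighted random draw: stronger answers are more likely, but weaker
    # ones stay possible. The floor keeps every answer in the mix.
    names = list(win_rates)
    weights = [max(rate, 0.05) for rate in win_rates.values()]
    return random.choices(names, weights=weights, k=1)[0]

# Hypothetical win rates against an opponent that mixes strategies.
print(choose_mixed({"anti-rush": 0.8, "macro": 0.55, "greedy": 0.3}))
```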

MarcoDBAA on :

I did not watch any AIIDE games, but CherryPi's "zvt3hatchlurker" probably did reasonably well because SAIDA has some problems with invisible units. It spams scans, and once it has no more of them, the army ignores the danger and takes heavy casualties.

PurpleWave finally found this out and has started to win many games against SAIDA by using DTs (it also uses carriers late). PurpleWave's learning seems to be really robust in general; the bot doesn't seem to need updates at the moment to stay near the top.

Tyr on :

Tyr is supposed to record learning from multiple games. The fact that it only wrote results for one previous game is due to a bug in the way the read and write folders are handled. In fact this limits its strategy selection to just two builds per opponent race, even though it knows more.

The flags it records are a primitive form of opponent plan recognition. If it thinks it recognizes something it will write that to a file and this can affect the strategy selection for the next game.
