
new bot BetaStar

Judging by its authors, BetaStar looks to be associated with China’s State Key Laboratory for Novel Software Technology (page in Chinese) at Nanjing University; the name is sometimes translated as “National Key Laboratory” instead. Here are a couple of papers that seem to be from the same research group that produced it. Both are about StarCraft II.

On Reinforcement Learning for Full-length Game of StarCraft (September 2018)

Efficient Reinforcement Learning with a Mind-Game for Full-Length StarCraft II (March 2019; by “mind game” they mean a simplified abstract version of the real game)

The papers have similar titles, but the contents are different. The first is about hierarchical reinforcement learning for macro action choices; the second builds on it with the “mind game” to make learning faster and more effective.

I did not find information about how BetaStar is related to this past research. Presumably a new research paper is in preparation, or published somewhere I did not look. Included in the binary are three files that have the air of learned data, each less than 1 MB in size, named pvp_params.csv, pvt_params.csv, and pvz_params.csv. If BetaStar uses the methods of the research papers above, then these will be data for choosing macro actions—what to build next, what to research, etc.
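If that guess is right, the runtime use of those files could be as simple as the sketch below. This is purely my own illustration in C++, not BetaStar’s code; the CSV layout (action name, weight) and the chooseMacroAction helper are assumptions.

// Hypothetical sketch: load learned macro-action weights from a params file
// and pick the highest-weighted action. The file format is a guess.
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, double> loadParams(const std::string& path)
{
    std::map<std::string, double> weights;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ss(line);
        std::string action, value;
        if (std::getline(ss, action, ',') && std::getline(ss, value, ','))
            weights[action] = std::stod(value);
    }
    return weights;
}

std::string chooseMacroAction(const std::map<std::string, double>& weights)
{
    std::string best = "wait";
    double bestWeight = -1e18;
    for (const auto& kv : weights)
        if (kv.second > bestWeight) { best = kv.first; bestWeight = kv.second; }
    return best;
}

int main()
{
    // One file per matchup, matching the names found in the binary.
    const auto weights = loadParams("pvz_params.csv");
    std::cout << "next macro action: " << chooseMacroAction(weights) << "\n";
}

In the papers’ hierarchical setup the choice would presumably depend on game state rather than a flat table, but the idea of mapping a matchup-specific parameter file to macro choices would be the same.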

BetaStar is derived from CSE. The line of descent is UAlbertaBot > Steamhammer > Locutus > CSE > BetaStar. It has strong inherited skills. CSE finished #3 in AIIDE 2018, ahead of Locutus. I could not figure out where the name BetaStar comes from. In this context, it sounds like an algorithm name, but if so I guess it is a new algorithm.

So I found a bunch of clues about BetaStar, but I don’t actually know a thing about how it works! It finished #9 in the CoG tournament (formerly CIG), with a 59.04% win rate, ahead of MetaBot and behind ZZZKBot. As I write, it has a BASIL elo of 2314, which is above average but well below the top; it is ranked #31 out of 83 active bots, between Arrakhammer and TyrProtoss. It has 55 wins, 23 losses, and—this is the amazing part—49 crashes. It often fails to start games at all.

Its basic play is quite strong, with Locutus dragoon micro and other skills. So far, between SSCAIT and BASIL, Steamhammer has several losses and only 1 win against it. BetaStar scored 2-0 versus #3 PurpleWave on BASIL. But besides crashing, BetaStar has play bugs that other bots do not: It likes to build 2 cybernetics cores. It sometimes plays an unsafe opening and loses to a rush, then repeats the opening against the same opponent and loses the same way (see vs legacy 1 and vs legacy 2—#50 legacy is 3-0 versus BetaStar). It seems that BetaStar has both great strengths and great weaknesses.

Without much to go on, I nevertheless read BetaStar as similar to past Chinese research projects I’ve looked into: It is meant to be a one-off project to demonstrate a specific research goal, and aspects outside the goal were afforded no more effort than absolutely needed; solving crashes was not part of the goal. If so, it may have a successor next year in the same way that it succeeded CSE, but I don’t expect updates in the meantime.

It’s good to have a new bot! They have been few lately.

Steamhammer 2.3.3 test version

I’ve uploaded tournament test version Steamhammer 2.3.3 to SSCAIT, zerg only as usual. This version has critical bug fixes, and changes to make defilers more active. The bug fixes are the important part, but it’s a delight to see Steamhammer swarm cannons with alacrity. At least sometimes!

I expect that the final tournament version will be 2.3.4, and that I won’t much benefit from another test version. I have an idea for one new feature that is low risk but should significantly improve the strength.

Next: New bot BetaStar.

AIIDE 2019 registered participants

First, an aside: The original CIG 2019 tournament web site (https://cilab.sejong.ac.kr/sc_competition2019/) is gone. Presumably there is a new one with the results. Did they change the name to CoG? It’s confusing, and I don’t have time to dig through it myself as usual; it’s not a priority. Who can tell me?

The AIIDE 2019 list of registered participants arrived in my e-mail today. That’s my priority, and I do have time for it. There are 26 bots altogether: 21 registered players and 5 held over from last year.

The holdover bots from last year are #1 SAIDA, #3 CSE, #6 Iron, #9 ZZZKBot, and #17 UAlbertaBot (ranks from last year’s tournament). Since they are unchanged from last year, top competitors will be ready for them. In particular, this version of SAIDA is strong in regular play but vulnerable to a number of known exploits. I expect SAIDA to finish in the middle, losing most games to the top protoss bots and to Steamhammer and Microwave among the zergs. CSE and ZZZKBot may prove more robust. UAlbertaBot is much weaker and risks falling out of the tournament this year, rather than being held over for another year.

Twelve bots are familiar from past tournaments. Over half of them are protoss, with only two terrans and three zergs. The favorites are the protoss death squad of Locutus, McRave, and PurpleWave, and possibly Dragon, plus terran XiaoYi from the newcomer bots in the next table. Don’t miss that Dragon is listed as playing protoss, not terran as on SSCAIT. I expect BananaBrain, DaQin, LetaBot, MetaBot, Microwave, and Steamhammer to stand in the middle. CDBot and Stormbreaker have come out on the lower end in past tournaments.

bot           author
BananaBrain   Johan de Jong
CDBot         Seevan Yang
DaQin         Lion GIS
Dragon        Vegard Mella
LetaBot       Martin Rooijackers
Locutus       Bruce Nielsen
McRave        Christian McCrave
MetaBot       Anderson Tavares
Microwave     Micky Holdorf
PurpleWave    Dan Gant
Steamhammer   Jay Scott
Stormbreaker  Mingqiang Li

Nine bots are new to me. That’s an ample supply of newcomers; almost as many as the updated returning competitors. It’s interesting that only two are protoss, although this is a protoss age—the race distribution of newcomers is fairly even, trending toward terran.

bot          author
AITP         Yang Xia
Apollo       Apollo Hanl
BunkerBoxeR  Haoda Fan
DanDanBot    Taeyoung Kim
Firefrog     Feng Gao
KimBot       Taeja Kim
Murph        Francisco Javier Sacido
Ophelia      Jean Chassoul
XiaoYi       Benchang Zheng

BunkerBoxeR, whose name refers to BoxeR the terran bonjwa, is on github and comes with a handy text explanation of its strategy: A bunker rush—and not a trivial one, but one that adapts to the situation.

Taeyoung Kim is the name of the former professional StarCraft II zerg Freaky. Surely it’s not the same person?

Protoss Murph is by Francisco Javier Sacido, the same author as terran Ecgberht.

Ophelia is on github. It is written in Lua. The available code is brief, despite recent commits. It does not give me the impression of a complete bot—for example, the file micro.lua contains no code.

XiaoYi is the Chinese name of a company that in English is called YI Technology. The bot scored highly in the CIG tournament—and that’s all I know about it. Perhaps they repurpose some cool computer vision algorithms?

Steamhammer 2.3.2 with bug fix

The test paid for itself! Version 2.3.1 has a crashing bug that can strike replacement defilers. I got two crashes on SSCAIT, though strangely none on BASIL. I’ve uploaded Steamhammer 2.3.2, which fixes the bug—I hope it’s the only new defiler bug. Also in the new version are a micro improvement for smoother maneuvering, plus tweaks to plague usage.

The tournament test versions are configured to play zerg only.

I’ve seen a few games with queens and some parasites cast, but Steamhammer hasn’t infested a command center in public yet. I’m sure it will before long. It leaves the infested command center sitting there, blocking the base. I’m curious to find out whether that causes more problems for the opponent, or for itself.

Steamhammer 2.3.1 uploaded

I’ve uploaded Steamhammer 2.3.1 to SSCAIT. It is a test version to smoke out any major new bugs before AIIDE. Opponents that tune against Steamhammer will be able to test against a near-final version, but if I catch an important bug that will more than make up for it.

This version sometimes makes one queen, and knows how to parasite and infest. That’s the only difference that a casual watcher will notice. (In the stream you’ll be able to see the extra vision from parasited enemy units, but the OpenBW player does not show fog of war.) I put in a wide variety of smaller bug fixes and improvements, though unless you watch closely you may not detect them. The improvements tend to make Steamhammer’s behavior more complex, and I foresee a high chance of surprises.
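For the curious, casting those spells through BWAPI looks roughly like the sketch below. It is my own illustration, not Steamhammer’s code; the microQueen helper, the target choices, and the HP check for infestation are assumptions.

// Minimal BWAPI-style sketch of queen spellcasting. Not Steamhammer's code.
#include <BWAPI.h>

// Hypothetical per-frame helper, called for each of our queens.
void microQueen(BWAPI::Unit queen)
{
    if (queen->getType() != BWAPI::UnitTypes::Zerg_Queen) return;

    for (BWAPI::Unit enemy : BWAPI::Broodwar->enemy()->getUnits())
    {
        // Infest a command center once it is damaged into the red.
        // (The HP number is a rough guess; the game enforces the real rule.)
        if (enemy->getType() == BWAPI::UnitTypes::Terran_Command_Center &&
            enemy->getHitPoints() < 750)
        {
            queen->useTech(BWAPI::TechTypes::Infestation, enemy);
            return;
        }

        // Otherwise parasite a not-yet-parasited unit for free vision.
        // Targeting flyers is only an example rule.
        if (queen->getEnergy() >= BWAPI::TechTypes::Parasite.energyCost() &&
            !enemy->isParasited() &&
            enemy->getType().isFlyer())
        {
            queen->useTech(BWAPI::TechTypes::Parasite, enemy);
            return;
        }
    }
}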

No detailed change list until after tournament submission. It’s not all that short.

hiatus starts to tail off

I have regained enough time and energy to make occasional posts. Expect a low level of activity.

I registered Steamhammer for AIIDE 2019. I rolled back the BWAPI 4.4.0 upgrade for now, but I have started to make a few other changes that I have strength for. Last year the AIIDE version of Steamhammer got a multi-page change list broken down by category; this year it will be a short list. I expect to upload versions to SSCAIT once or twice before the deadline, to try to catch any bugs I’m adding in.

Doing little may actually be good for tournament performance. Last year I added so many features that I was fixing bugs for months, and the current mostly-fixed version is performing well on BASIL. At the moment, Steamhammer is the top zerg. When it loses, the most frequent cause is a gross strategic blunder, and apparently Steamhammer’s wide choice of openings allows it to eventually learn to avoid most strategic blunders. A long tournament like AIIDE should show the same effect.
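As a toy illustration of how a wide opening book can learn around a repeated blunder, here is a per-opponent, result-weighted opening chooser. It is not Steamhammer’s actual learning code; the opening names, the smoothing, and the weighting scheme are my own assumptions.

// Toy sketch of per-opponent opening selection weighted by past results.
#include <iostream>
#include <map>
#include <random>
#include <string>
#include <vector>

struct Record { int wins = 0; int losses = 0; };

std::string chooseOpening(const std::vector<std::string>& openings,
                          const std::map<std::string, Record>& history,
                          std::mt19937& rng)
{
    // Weight each opening by its smoothed win rate against this opponent,
    // so openings that keep losing are gradually abandoned.
    std::vector<double> weights;
    for (const std::string& name : openings)
    {
        Record r;
        auto it = history.find(name);
        if (it != history.end()) r = it->second;
        weights.push_back((r.wins + 1.0) / (r.wins + r.losses + 2.0));
    }
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    return openings[pick(rng)];
}

int main()
{
    std::mt19937 rng(42);
    std::vector<std::string> openings = { "4PoolHard", "Overpool", "12Hatch" };
    std::map<std::string, Record> vsThisOpponent = {
        { "4PoolHard", { 0, 5 } },   // keeps losing: its weight shrinks
        { "Overpool",  { 3, 2 } },
        { "12Hatch",   { 6, 1 } },
    };
    std::cout << chooseOpening(openings, vsThisOpponent, rng) << "\n";
}

The point is only that with many openings available, the ones that lose to a particular opponent gradually stop being chosen.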