
Steamhammer is submitted

I submitted Steamhammer to AIIDE 2020, a day ahead of the deadline. I don’t believe in working right up to the limit and risking last-minute bugs; better to turn out a stable version. As it was, I did suffer a late bug introduced by an attempted fix, but fortunately it was easy to detect: I knew something was wrong when I noticed 3 spawning pools and 2 spires in the base. It was caused by a tricky race condition, but when I looked back at my change, my familiarity with the code let me see the problem immediately, and I knew how to correct it. I ran long tests after that and found nothing more. The new behaviors have zero known bugs (so I expect to learn something!), though of course they have limitations.

I’m feeling a little paranoid because in double- and triple-checking I discovered a couple of mistakes in my submission zip and had to correct them. One was carried over from last year—I had a wrong filename in one line of the README instructions. Luckily, last year the instructions were clear enough anyway. Actually, triple-checking is an understatement. Some points I verified four times. I ought to feel secure that I got it right.

The version is 3.2.19. Everyone memorize that version number; there will be a quiz. The next step for me is to prepare the change list so I can post it on 1 October. It may take me that long to write it up.

Steamhammer prep for AIIDE

With a couple of days left until the deadline, I’m making final preparations for Steamhammer in AIIDE 2020. All the new behaviors are coded and tuned, as well as possible. Mostly I’m making final tests and fixing remaining bugs. New code was written to have low risk of bad side effects, has been carefully tested, and seems to have few bugs. I have identified a couple of existing bugs, formerly unseen or unimportant, that are awakened by the changes.

I always wish I could have done more, but I’m happy enough with the improvements. I worked much harder than I have in the past year, when I was distracted by other business. Many long-time weaknesses are finally addressed. Ridicubad midgame play is slashed (and maybe I can do something about the remaining horrors before SSCAIT). Win rates are strongly up against some opponents (though Stardust remains out of reach). Microwave had pulled ahead of Steamhammer lately after months of Steamhammer dominance, but now I see them as neck-and-neck rivals. Either might finish ahead in the tournament.

Expect upload to SSCAIT on 1 October, followed shortly by source release. The change list is long. I ran through about 20 test versions.

new bot EggBot

I was surprised to see AIIDE entrant EggBot by Nathan MacNeil make its debut on SSCAIT rather than waiting a few days for the AIIDE deadline. It makes sense, though: For a new bot that can make big progress in a short time, earning some experience to make that progress is way more important than holding your secrets in hope of surprise wins.

EggBot describes itself as “a (not great) cannon rush bot built from scratch that follows up with a gateway and zealots.” And in fact its play has the air of a typical beginner’s bot. On SSCAIT it seems affected by a bug and struggles to place cannons at all. On BASIL it seems to play more nearly as intended, and has scored 7-19 for a rating of 1901 elo with zero crashes, which is not bad at all for a first cut.

EggBot places its cannons near the enemy’s base entrance in a containing position. It doesn’t creep the cannons forward to win; only outlying enemy buildings are likely to come under cannon fire. Occasionally it places the cannons somewhere outside the natural instead; so far I’ve seen that happen only when the enemy base location is known before the base is seen, and not usually then. It follows up with 1 gateway and makes zealots to win.

The cannons are often in plain sight, so an enemy that reacts in time will have good chances. Even an old bot like Tomas Cere can pull workers to stop the attack (in this game, EggBot placed one cannon defensively in its base). An opponent that has no defensive reactions is likely to lose hopelessly (in this game, EggBot seemed to hit a bug and did not follow up with zealots, but it still won on points).

EggBot is not efficient at spending its money and tends to let its mineral bank grow out of control. Besides bug fixes, that’s what I would work to improve if I were tuning it for AIIDE: Get each cannon down as soon as resources allow, then make the right number of gateways to grow the zealot count as fast as possible.
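To put rough numbers on that suggestion (my own arithmetic, not EggBot’s code): a gateway turns 100 minerals into a zealot every 600 frames, so the gateway count that keeps the bank flat follows directly from mineral income. The income figures in the example are rough placeholders.

```cpp
// Rough gateway arithmetic; my own sketch, not EggBot's code.
// A gateway turns 100 minerals into a zealot every 600 frames.
#include <cmath>

const int ZEALOT_COST = 100;          // minerals per zealot
const int ZEALOT_BUILD_FRAMES = 600;  // one gateway production cycle

// How many constantly-producing gateways soak up a given mineral income?
int gatewaysToMatchIncome(double mineralsPerFrame)
{
    const double spendPerGateway =
        double(ZEALOT_COST) / ZEALOT_BUILD_FRAMES;   // ~0.167 minerals/frame
    return int(std::ceil(mineralsPerFrame / spendPerGateway));
}

// Example: 9 mining workers at a rough 0.045 minerals/frame apiece bring
// in ~0.4 minerals/frame, which calls for 3 gateways.
```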

Steamhammer island plans

Among many other aspirations, I’ve always wanted Steamhammer to be able to take island bases. It’s not quite time yet, but getting closer. Here is my outline plan, with steps in order:

• BWAPI 4.4.0 helps make drop research practical. BWAPI 4.1.2 has a bug that prevents research in a hive.

• Pathfinding. Pathfinding is... on the critical path, according to my plan. The current release version of Steamhammer already has all the parts needed for pathfinding, except the final step of actually following paths (see the sketch after this list).

• Nydus canals—with pathfinding support, because nydus is useless without it. By hive time, when nydus becomes available, nydus is cheap and valuable. If nothing else, Steamhammer would benefit hugely from transferring drones between distant bases without sending them through the center. In fact, if you have a hive and have drones to transfer far away, it’s normally better to hold the drones until you can get a nydus up (a sketch of this decision follows below). They are likely to get there just as soon, and more safely.

• Take island bases after hive, and nydus them up. Transferring units to and from islands by nydus is far simpler than transporting them by overlord. Given pathfinding, the regular base defense squads can defend islands with no extra code. The only drawback is that if the nydus is lost and the hive is destroyed so that the nydus can’t be replaced, then units will be stranded. But then, if the hive is gone we’re probably losing anyway.
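For the pathfinding item above, here is a minimal sketch of the missing path-following step, assuming BWAPI 4.x and a waypoint list already produced by the pathfinder. The class name and the 96-pixel waypoint threshold are my inventions, not Steamhammer’s code.

```cpp
// Minimal path-following sketch, assuming a precomputed waypoint list.
// Names and the 96-pixel threshold are illustrative.
#include <BWAPI.h>
#include <vector>

class PathFollower
{
    std::vector<BWAPI::Position> waypoints;  // output of the pathfinder
    size_t next = 0;

public:
    explicit PathFollower(std::vector<BWAPI::Position> path)
        : waypoints(std::move(path)) {}

    // Call once per frame for the unit walking the path.
    void update(BWAPI::Unit unit)
    {
        if (next >= waypoints.size()) return;       // path complete

        // Close enough to the current waypoint? Advance to the next one.
        if (next + 1 < waypoints.size() &&
            unit->getDistance(waypoints[next]) < 96)
        {
            ++next;
        }
        // Reissue the move order only when the target changed or the unit
        // went idle, to avoid spamming orders every frame.
        if (unit->isIdle() || unit->getOrderTargetPosition() != waypoints[next])
        {
            unit->move(waypoints[next]);
        }
    }
};
```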

It’s also possible to take an island before hive, and nydus it up later. The island will drone itself up slowly and struggle to defend itself before nydus, and drones will have nowhere to run from an attack, but if the enemy has no air units it may be worth it.
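And here is the drone-transfer decision from the nydus item, reduced to a sketch. The function and its inputs are my own illustration; the real logic would have to estimate both frame counts itself.

```cpp
// The drone-transfer decision from the nydus item, reduced to a sketch.
// My own illustration; the caller estimates both frame counts.
bool holdDronesForNydus(bool haveHive, int walkFrames, int nydusReadyFrames)
{
    // With a hive, a nydus exit is cheap (150 minerals, no gas) and quick
    // to morph. If it will be usable about as soon as the drones could
    // walk to the far base, hold them: the trip through the canal is
    // instant and avoids the dangerous center of the map.
    return haveHive && nydusReadyFrames <= walkFrames;
}
```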

Many current bots do not scout islands. I’ve seen that in Ecgberht games; Ecgberht can build island bases, though it doesn’t mine them efficiently (at least not that I’ve seen), and it may lose a game on points with only an island surviving. A bot that’s good at islands might force other bots to adapt.

At some time in the more distant future I’ll add general overlord transport skills, so that Steamhammer can do mass drops, bypass contains, and fight with ground units on island maps.

RPS analyzer redux

In January I wrote about the idea of an RPS analyzer that tries to predict the opponent’s strategy rock-paper-scissors style, picking up patterns in the sequence of opening tries. For example, many bots never vary a winning build until it loses, and some of those cycle through their choices in a fixed order. If successful, an RPS analyzer could improve the odds of predicting the opponent’s strategy from the get-go, so that Steamhammer could start its counter before it got scouting information to verify. At the time, I wasn’t sure whether an RPS analyzer would be a good idea.

I still think that an RPS analyzer is useless in the limit of very strong bots: PerfectBot will follow game theory and have exactly the right amount of randomness in its choices, and a pattern detector will detect nothing helpful. But as I prepare Steamhammer for AIIDE 2020 I have been playing over many games, and I’ve become convinced that most current learning bots follow predictable or partially predictable patterns that can be exploited.

The current release version of Steamhammer pays limited attention to its predictions, which are accurate only in simple cases. It deliberately shifts its attention over time from openings that counter the specific predicted enemy play toward openings that counter the opponent’s range of play. The changes to make it pay close attention to accurate predictions are easy, though. And there are easy-to-implement algorithms to detect patterns and return a prediction with its confidence level. Do you think it can be ready in time for the AIIDE deadline at the end of the month?
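For concreteness, here is one easy option: a longest-suffix predictor over the recorded sequence of enemy openings. It is a sketch of the general idea, not a preview of Steamhammer’s implementation; the suffix cap of 4 is an arbitrary choice.

```cpp
// Simple pattern detector over the opponent's opening history: find
// earlier occurrences of the longest recent suffix and predict the
// opening that most often followed it, with a frequency confidence.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

struct Prediction
{
    std::string opening;   // predicted next enemy opening ("" if none)
    double confidence;     // fraction of matching contexts that agree
};

Prediction predictNext(const std::vector<std::string> & history)
{
    if (history.empty()) return { "", 0.0 };

    // Try the longest suffix first, down to length 1.
    for (size_t len = std::min<size_t>(4, history.size() - 1); len >= 1; --len)
    {
        std::map<std::string, int> follow;
        int total = 0;
        // Count what followed each earlier occurrence of the suffix.
        for (size_t i = 0; i + len < history.size(); ++i)
        {
            bool match = true;
            for (size_t j = 0; j < len; ++j)
                if (history[i + j] != history[history.size() - len + j])
                    { match = false; break; }
            if (match) { ++follow[history[i + len]]; ++total; }
        }
        if (total > 0)
        {
            auto best = std::max_element(follow.begin(), follow.end(),
                [](const auto & a, const auto & b) { return a.second < b.second; });
            return { best->first, double(best->second) / total };
        }
    }
    return { "", 0.0 };    // no pattern found
}
```

A bot that never varies a winning build produces perfect confidence; a fixed cycle is caught by the length-1 or length-2 suffix.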

new bot ExampleClient

A new bot ExampleClient by Dan Gant has appeared on the Starcraft AI Ladder. It hasn’t been uploaded to SSCAIT or BASIL, so I imagine it is on the Ladder primarily as a robustness test (it did crash a few times and is currently inactive). Another possible reason that it is not on SSCAIT is that its strategy might break some older bots; see below. The name “ExampleClient” suggests that perhaps it is intended as a starting point bot for new authors.

ExampleClient is terran, and it plays a strategy similar to the current version of Stone: It makes one SCV with its initial 50 minerals, and sends all 5 SCVs to the enemy base, where they try to destroy buildings. But there is a cute twist: It lifts off its command center and flies it out of sight. It seeks a location on the map that is not reachable by ground and not visible to ground units, and leaves the command center floating there. When there is such a location, only air units can find and destroy the CC. When there is no such location on the map (as on Circuit Breaker), it sends the command center to the extreme lower left—the location is hidden by the minimap, so the CC is virtually invisible when watching the replay. The enemy will scout every starting base and see ExampleClient at none of them. See my post breaking scouting assumptions from 2017 for similar ideas.
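Here is a minimal sketch of the hiding trick, assuming a BWAPI 4.x client. It is my guess at the logic, not ExampleClient’s actual code, and it only checks ground reachability; a faithful version would also reject spots visible from nearby walkable ground.

```cpp
// Sketch of the hiding trick; my own guess, not ExampleClient's code.
#include <BWAPI.h>

BWAPI::Position findHidingSpot(const BWAPI::Position & start)
{
    // Look for a tile with no ground path from our starting position.
    for (int y = 0; y < BWAPI::Broodwar->mapHeight(); ++y)
        for (int x = 0; x < BWAPI::Broodwar->mapWidth(); ++x)
        {
            BWAPI::Position p{ BWAPI::TilePosition(x, y) };
            if (!BWAPI::Broodwar->hasPath(start, p)) return p;
        }
    // Everything is ground-connected (as on Circuit Breaker): tuck the
    // building into the extreme lower left, hidden under the minimap.
    return BWAPI::Position(
        BWAPI::TilePosition(0, BWAPI::Broodwar->mapHeight() - 1));
}

void hideCommandCenter(BWAPI::Unit cc)
{
    if (!cc->isLifted())    cc->lift();
    else if (cc->isIdle())  cc->move(findHidingSpot(cc->getPosition()));
}
```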

Most of the bots on the Ladder seem unconcerned that they cannot find ExampleClient’s starting base. Steamhammer, for example, once it has defeated the worker rush, takes the situation in stride as end-game cleanup where the enemy owns no bases, makes mutalisks, and hunts down the floating CC. That is possible in part because it distinguishes “I inferred the location of the enemy base by seeing all the other bases” from “I know the location of the enemy base because I have seen it.” BananaBrain is higher ranked but does not seem to draw that distinction: I have seen it take all bases on the map except the one where it “knows” the enemy must be, even as observers fly over the empty base location, and hold its forces back as if it were afraid to attack the vacuum. In some games, BananaBrain does not make air units and never finds the CC, so that it only wins the game on points. Some bots are fooled, though they still win!
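The inferred-versus-confirmed distinction fits in a few lines. This is my own illustration of the idea, not Steamhammer’s actual representation:

```cpp
// Illustration of the distinction; my own sketch, not Steamhammer's
// actual data structure.
enum class EnemyBaseStatus
{
    Unknown,    // haven't narrowed it down
    Inferred,   // every other start scouted empty, so it "must" be here
    Confirmed   // we have actually seen enemy buildings here
};

// When an inferred base is finally seen and turns out to be empty, the
// right move is to fall back to Unknown: the enemy may own no ground
// base at all, as with ExampleClient's floating command center.
EnemyBaseStatus onBaseScouted(bool enemyBuildingsSeen)
{
    return enemyBuildingsSeen ? EnemyBaseStatus::Confirmed
                              : EnemyBaseStatus::Unknown;
}
```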

There are a lot of other cute ideas that I have never seen a bot try.

a few items

I am working furiously on Steamhammer, not leaving myself much time for posting. The version I submit for AIIDE should be substantially stronger than the current release version, with visible changes that close observers will notice.

I doubt it will have much chance versus Stardust, though. After watching games where Stardust lost, I wrote 3 new openings to try to exploit its weaknesses, to at least make a dent. One was the hydra opening that Jealous suggested in a cast. But no, a rubber ball does not make a dent in a concrete wall.

A new bot EggBot appeared on the AIIDE 2020 participant list at some point after I wrote up the participants. I guess it must have been omitted by mistake. It is by Nathan MacNeil of the Memorial University of Newfoundland (Dave Churchill’s institution), who is apparently a graduate student. A grad student should have an interesting project in mind, so I have hopes for EggBot—weak or strong, I hope it will be interesting.

I still want to post about encouraging new bot authors, but you can tell it’s not my actual priority because I’m spending my time on coding and testing instead. Still, it’s an important discussion. The addition of S A B C D E F ranks on the BASIL rankings could be a good step if it encourages new authors to make progress: “I want to move up to D!”

AIIDE 2020 participants

The AIIDE 2020 registration deadline was yesterday, and today the list of participants is out (though as I write I don’t see an update to the web site yet). I wrote up the new map pool earlier.

First, the familiar names.

bot           author
BananaBrain   Johan de Jong
Dragon        Vegard Mella
Ecgberht      Francisco Javier Sacido
McRave        Christian McCrave
Microwave     Micky Holdorf
PurpleWave    Dan Gant
Stardust      Bruce Nielsen
Steamhammer   Jay Scott
WillyT        Nico Klausner
ZZZKBot       Chris Coxe

Stardust crushed CoG and is at the head of the BASIL ladder; it is of course the favorite to finish #1. McRave is playing zerg again, as in CoG. I’m hoping that Dragon will show us something new. For the other bots, I feel that I know more or less what to expect.

Then the new entrants that have not competed in AIIDE before.

bot           author
DanDanBot     Kim TaeYoung
Randofoo      Edgar Yajure
Taij          Wang Bin

I think “Kim Tae-Young” is a more standard way to anglicize the Korean name. DanDanBot registered last year too, but ended up not competing. The name “Wang Bin” also looks somehow familiar, though I don’t see a past mention related to Starcraft. It is possible that the second author of the recent paper “Triple-GAIL: A Multi-Modal Imitation Learning Framework with Generative Adversarial Nets” is the same person, but the name is common (at least as anglicized), so I can’t be sure.

Unknown bots are unknown. Let’s hope some of them are fun!

Plus 2 holdovers from last year.

bot           author
DaQin         Lion GIS
UAlbertaBot   Dave Churchill

DaQin first competed in AIIDE in 2018. UAlbertaBot is of course the perennial benchmark, though it risks landing in last place (last year it finished third to last, ahead of newcomers AITP and BunkerBoxeR).

I count 15 participants in total, 4 terran, 6 protoss, 4 zerg, 1 random. That’s a relatively even balance; protoss domination is not showing much. It is “traditional” for some participants to drop out before the tournament gets under way, so we’ll see how that goes. Last year half of AIIDE dropped out. Let’s hope there is no such trouble this year.