
Steamhammer 2.1 change list

Steamhammer 2.1 is uploaded. The headline feature is that terran and protoss play acceptably well again, unlike in version 2.0, so Randomhammer is updated. Zerg should also play significantly better: zergling micro is fixed, scourge are more effective (I hope), and defilers are more active.

I reset Steamhammer’s learned data for version 2.0, because 2.0 plays differently. Since then it has learned enough to play its best against some opponents, but it is still short of equilibrium. It reached a final elo of 2164, probably lower than it would have scored with enough games under its belt. Version 2.1 is not that different from 2.0, so it is continuing with the same data.

I reset Randomhammer’s data for 2.1, since the existing data is for version 1.4.7. It will have to figure its opponents out from scratch again, which will take months.

Stand by for source. I’m way behind in updating Steamhammer’s web page. Here is the change list:

opponent model

• Fixed a bug in InformationManager that could fail to recognize an in-base proxy.

macro

• Configured WorkersPerPatch to 2.0 for terran and protoss, to reflect mineral locking. Mineral locking helps terran and protoss over zerg, because their bases tend to have more workers. Reducing the maximum worker count per base while maintaining full mining efficiency leaves that much more cash and supply to spend on tech and army.
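A minimal sketch of the arithmetic, assuming a hypothetical base of 9 patches and a hypothetical previous value of 2.5 workers per patch (not Steamhammer’s actual numbers):

```cpp
#include <cmath>

// Hypothetical illustration: workers assigned to a base as a function of
// the WorkersPerPatch config value. The patch count of 9 and the old value
// of 2.5 are assumptions for the example, not Steamhammer's real figures.
int workersWanted(double workersPerPatch, int mineralPatches) {
    return static_cast<int>(std::ceil(workersPerPatch * mineralPatches));
}

// With 9 patches, dropping WorkersPerPatch from 2.5 to 2.0 frees
// workersWanted(2.5, 9) - workersWanted(2.0, 9) = 23 - 18 = 5 supply
// per base, plus the minerals those 5 workers would have cost.
```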

tactics

• Medics and tanks act in clusters. They are controlled by their own code, which needed updating.

• Bug fix: A cluster of all medics which happened to reach the front line would then continue to advance heedlessly into the enemy position and die. Now it falls back toward the base—normally it will soon merge with another cluster that is on the way. To implement this, I split the former “no fight” case into 2 cases, “I have no fighting units” and “there are no enemies nearby,” which behave differently. All medics is an example of no fighting units.
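The two-case split can be sketched like this (the names and structure are mine, not Steamhammer’s actual code):

```cpp
#include <vector>

// Sketch of the split described above: a cluster with units but no
// fighters (e.g. all medics) falls back toward the base to merge with
// another cluster; a cluster that merely sees no enemies may advance.
enum class ClusterOrder { Fight, Advance, FallBack };

struct Unit { bool canFight; };

bool hasFightingUnits(const std::vector<Unit>& cluster) {
    for (const Unit& u : cluster) {
        if (u.canFight) return true;
    }
    return false;
}

ClusterOrder decide(const std::vector<Unit>& cluster, bool enemiesNearby) {
    if (!hasFightingUnits(cluster)) {
        return ClusterOrder::FallBack;   // "I have no fighting units"
    }
    if (!enemiesNearby) {
        return ClusterOrder::Advance;    // "there are no enemies nearby"
    }
    return ClusterOrder::Fight;          // normal combat handling
}
```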

• Steamhammer knows a number of places that a cluster of units can retreat to. I moved “retreat to join up with another cluster” (which can be an advance rather than a retreat) ahead of “retreat to the location of a cluster unit which is not near the enemy” in priority order. It helps clusters merge a little more often.

• When there are many clusters, Steamhammer saves execution time by not updating all clusters every frame. It divides the clusters into “phases” and updates one phase per frame. In Steamhammer 2.0, the phases were calculated incorrectly, which contributed to bad micro (though it wasn’t the biggest cause). Fixed.
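The phase scheme can be sketched as a round-robin check (my formulation, not Steamhammer’s code): with N phases, a cluster is refreshed only on frames whose number matches its phase modulo N.

```cpp
// Round-robin cluster updates: each cluster carries a phase number, and
// is updated only on frames where the frame count lands on its phase.
// This spreads the work of N phases across N consecutive frames.
struct Cluster { int phase; };

bool shouldUpdate(const Cluster& c, int frame, int numPhases) {
    return frame % numPhases == c.phase % numPhases;
}
```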

combat sim

• Combat sim scores are based on the cost of units, not the destroyScore, because destroyScore is sometimes strange. For now I set score = mineral cost + gas cost.

• Steamhammer uses the combat sim scores in an unusual way to decide who won the simulation: It’s the side that ended up with more stuff surviving. It seems illogical, but it tested better than alternatives I tried, such as the side that lost less. Still, there are pathological cases where it gives a wrong answer. I fixed this pathological case: If you lost nothing, then you won the simulation, even if you finished with less stuff surviving. (If the other side also lost nothing, then you still won, because a draw counts as a win.)
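A sketch of the decision rule, including the lost-nothing fix (names and structure are illustrative, not Steamhammer’s actual code; scores are mineral cost plus gas cost of the units on each side):

```cpp
// Who won the simulation? The side that ended up with more stuff
// surviving -- except that a side which lost nothing always wins,
// and a draw (neither side lost anything) counts as a win for us.
struct SimResult {
    int myStartScore, mySurvivingScore;
    int enemyStartScore, enemySurvivingScore;
};

bool weWin(const SimResult& r) {
    if (r.mySurvivingScore == r.myStartScore) {
        return true;     // we lost nothing: we won, even with less surviving
    }
    if (r.enemySurvivingScore == r.enemyStartScore) {
        return false;    // the enemy lost nothing (and we lost something)
    }
    // Otherwise the side with more stuff surviving wins;
    // a tie goes to us, mirroring the draw-counts-as-win rule.
    return r.mySurvivingScore >= r.enemySurvivingScore;
}
```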

• I also changed the units included in the combat sim in special cases. 1. If you have nothing but ground units that can’t shoot up, then ignore enemy air units that can’t shoot down. Because the side with more surviving stuff wins, this can affect who wins the simulation. This fixes another pathological case, where for example zerglings might run away from corsairs. 2. If you’re scourge, include only ground enemies that can shoot up. Scourge are never afraid of the air units that they want to destroy, only of useless death by ground enemies.
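The two filters can be written as predicates (my own formulation; each unit is reduced to the few properties the rules need):

```cpp
// Special-case filtering of units included in the combat sim.
struct SimUnit {
    bool flying;
    bool canShootUp;    // can hit air targets
    bool canShootDown;  // can hit ground targets
};

// Case 1: if our side has nothing but ground units that can't shoot up,
// drop enemy air units that can't shoot down -- neither side can touch
// the other, and excluding them changes who has "more surviving stuff".
bool ignoreEnemy(const SimUnit& enemy, bool weAreGroundAndCannotShootUp) {
    return weAreGroundAndCannotShootUp && enemy.flying && !enemy.canShootDown;
}

// Case 2: scourge include only ground enemies that can shoot up -- they
// fear useless death by ground fire, never the air units they hunt.
bool scourgeIncludes(const SimUnit& enemy) {
    return !enemy.flying && enemy.canShootUp;
}
```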

At some point I’ll add the other natural special case, for air units that can’t shoot down. All this combat sim stuff could be way more sophisticated....

micro

• I fixed the biggest cause of poor micro in Steamhammer 2.0. As part of choosing an enemy target, the melee and ranged unit controllers called CanCatchUnit() to see whether the enemy unit would be able to escape if chased. It was meant to reduce goose chases. Anyway, the results of CanCatchUnit() were apparently wrong. I haven’t looked into what the trouble was, because I found that removing the calls had no effect on goose chases—long chases have become rare due to other changes. The error caused units to overlook targets that they could, and often should, attack. Zergling micro became weak, and zealot micro was worse. It’s all back to normal now.

• Vultures and wraiths would become fixated on their targets, unable to switch away even to retreat in an emergency. Fixed.

• Like clusters, defiler actions are divided into phases. There was a bug in coordinating the cluster phases and defiler phases, so that defiler actions might be skipped for a long time depending on the phase of the cluster the defiler was in. Fixing it makes defilers more active, though they still don’t swarm or plague as often as they should.

• Scourge are allowed to spread out more when regrouping. Mutalisks should usually group up tight, while scourge should spread out some. Formerly, all air units behaved the same.

• I made a couple adjustments to defiler plague. 1. Plague on a building was formerly worth 0.6 of plague on units with the same hit point loss; I changed the discount to 0.3, half as much, so that defilers will try harder to plague units. Buildings have a lot of hit points and threaten to dominate the scoring. (Static defense buildings are treated the same as mobile units, though.) 2. Plague gets a bonus for carrier interceptors, to exploit the plague-on-interceptor behavior, but I didn’t see Steamhammer trying hard to plague XIMP’s interceptors (only the carriers themselves). I increased the bonus by a factor of 4.
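A hypothetical scoring sketch that reflects both adjustments (the function is mine; only the constants come from the text, and the 4.0 interceptor factor stands in for the increased bonus):

```cpp
// Illustrative plague target scoring: hit point loss, discounted for
// non-defensive buildings so units are preferred, and boosted for
// carrier interceptors to exploit the plague-on-interceptor behavior.
double plagueScore(int hpLoss, bool isBuilding, bool isStaticDefense,
                   bool isInterceptor) {
    double score = hpLoss;
    if (isBuilding && !isStaticDefense) {
        score *= 0.3;    // was 0.6: buildings no longer dominate the scoring
    }
    if (isInterceptor) {
        score *= 4.0;    // try harder to plague interceptors
    }
    return score;
}
```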

zerg

• Scourge are in their own squad, the Scourge squad. They behave somewhat better in my tests, but it’s primitive and they still have a long way to go. I mentioned a couple other improvements to scourge behavior above. It was surprisingly difficult to get scourge to do anything sensible.

• Get an evolution chamber and a spore colony for air defense when needed even if still in book. Steamhammer formerly waited until the book line came to an end before it dared defend itself. I think this will be a net gain, though it will make mistakes sometimes.

• Tweak: Enemy dragoons and dark templar loom a little larger as reasons to make static defense.

• Fixed a bug in deciding to get defilers. Battlecruisers are an excellent reason to get defilers; valkyries, not so much.

• Strongly avoid spawning mutalisks versus large valkyrie counts. Valkyries in numbers pass through mutas like they’re hardly there.

openings

• Fixed a typo in the opening name Over10Hatch2SunkHard in the AntiZealot strategy combo. When this opening was selected, Steamhammer couldn’t find it and played its default 9 pool instead, a poor choice against mass zealots.

• Added the zerg opening AntiFactoryHydra, which may be better against SAIDA’s unit mix than Steamhammer’s original AntiFactory, and the terran opening 10-10-10FD in a form close to that popularized by Flash. 10-10-10 is an opening stem that gets a super-fast factory, and 10-10-10FD is a followup that continues into an attack with 2 tanks and 8 marines, which is strong against a protoss that techs too hard or expands fast.

• I configured terran and protoss counters to a protoss Naked expand. That means configuring which openings are to be tried as counters to the expected enemy plan. 10-10-10FD should be a good counter to a protoss Naked expand.

debug drawing

• In the game info display (drawn in the upper left of the screen when turned on), Steamhammer 2.0 added an overall score versus this opponent, shown next to the opponent name, “2-3” meaning 2 wins and 3 losses. Steamhammer 2.1 also adds a score for the chosen opening, drawn next to the opening name. The context makes it easier to interpret Steamhammer’s choices. The numbers are specific to the matchup. Randomhammer will show different numbers depending on what race it rolled.

• Squads have 2 settable flags, “fight visible only” (only include visible enemies in the combat sim, not all known enemies) and “meatgrinder” (be more aggressive, willing to accept high losses). Visible-only is used by the Recon squad, and meatgrinder tested poorly and is not used. If the squad info display is turned on, a cyan V and M are drawn to the left of the squad’s information line when the flags are turned on.

Killerbot-SAIDA games

I see that Killerbot by Marian Devecka has figured out its own way to beat SAIDA with its persistent mutalisk pressure: win 1 and win 2. I’m not sure how consistent it is, but as soon as SAIDA starts taking serious economic damage, it’s all downhill for terran. The update is a few days old now, and the most visible change is that Killerbot makes a few unupgraded hydralisks early, presumably only when the enemy has or is expected to get vultures. The hydras counter any early vulture or wraith tricks that terran might try. The idea is well known among human players, and Steamhammer uses it too.

I was looking at Steamhammer’s 2 hatch muta loss to SAIDA today, and thinking “With good muta micro and good decisions, zerg should win this.” I’ve been promising good muta micro for 2 years and haven’t delivered....

Update: Apparently SAIDA reacted by becoming more aggressive. It seems to have found an attack timing that works against Killerbot. Is this the same mechanism that solves rushes by adjusting timings, or a different kind of reaction? If it’s the same, the mechanism is quite general.

Steamhammer 2.1 progress

I have fixed the bugs affecting terran play, a bug affecting defilers, and a few bugs and weaknesses affecting all races. I also improved scourge control, and added a new opening to maintain my sanity. There is still a critical bug affecting protoss, which causes units to wander around without fighting, carrying banners “make levity not war.” Another severe weakness affects base defense, causing defenders to hang back from the action. I’ve spent the last 2 days trying to fix base defense, and it doesn’t work. I might have to think up a different solution.

Everything is good except the last 2 critical problems. Not that there’s any shortage of other weaknesses, but these 2 are so bad that they can’t be ignored. Surely they won’t resist me for long, though. Stand by!

Update the next day: I got everything working well enough, or so I thought. I ran final tests and found... the newly implemented scourge micro had stopped working, though I hadn’t touched anything related. Now what has gone wrong?

various short items

SAIDA

SAIDA has been updated and is again defeating Krasi0 and Locutus. The arms race continues!

CIG 2018

I started poking at the detailed results file to figure out how to reproduce the official results exactly... then I discovered that the build order problem was wider than it first seemed. I canceled my plans. We don’t need per-map crosstables and race balance analysis of a tournament with such badly distorted results.

Steamhammer 2.1

I haven’t been working that hard on it, but I have made progress. I fixed some of the bugs introduced along with squad clustering, and found the causes of others. 2.1 should have smoother play in many cases. To say the same thing differently, Steamhammer 2.1 is suffering from feature creep, or at least bug fix creep. Hang on, it shouldn’t take too much longer.

looking at TitanIron

TitanIron is, as all signs indicated, a fork of Iron. It forks from the latest Iron, the AIIDE 2017 Iron. The Iron that played in CIG 2018 was carried over from the previous CIG 2017 tournament, and is an earlier version.

#15 TitanIron crashed in 30% of its games. Its win rate was 51.46% overall, or 73.59% in non-crash games. #6 Iron itself (an earlier version) finished with 74.31% win rate, so TitanIron does not seem to be an improvement, even discounting poor code quality. Curious point: #9 LetaBot upset Iron, because LetaBot copes well with vulture and wraith harassment. But TitanIron upset LetaBot. Another curious point: TitanIron performed poorly on the map Andromeda and strongly on Destination, and about equally well on the other 3 maps. Andromeda seems a surprising map to have trouble with.
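The non-crash figure follows directly from the published numbers: with a 30% crash rate, all of the wins come from the other 70% of the games.

```cpp
// Back out the non-crash win rate from the overall win rate and the
// crash rate: 0.5146 / (1 - 0.30) = 0.735, matching the quoted 73.59%
// up to rounding in the published percentages.
double nonCrashWinRate(double overallWinRate, double crashRate) {
    return overallWinRate / (1.0 - crashRate);
}
```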

I watched some replays. In Iron-TitanIron games, the two played identical build orders until the first factory finished, when Iron made 1 vulture first while TitanIron immediately added a machine shop to get the vulture upgrades faster. The bigger difference came later, when Iron built a starport and made wraiths while TitanIron did not. I got the impression that TitanIron rarely or never goes air. Iron’s expense in going air puts TitanIron ahead in vultures for a while, so that TitanIron won some games, but it seemed that if the vulture pressure did not push Iron over the edge, then Iron would strike back and take the advantage.

I watched only one Locutus-TitanIron game, because Locutus’s proxy pylon trick misled TitanIron just as it does Iron, and Locutus won easily. I watched a strange game against AIUR where TitanIron built a second command center far from its natural, slowly floated it over, left it in the air, and built a new command center underneath. Not all the bugs are crashing bugs. In the picture, TitanIron is losing to AIUR. Notice the nicely spaced tanks, the spider mines directly next to one tank, the barracks floating in an unhelpful position, and the spare command center in the air.

extra command center

Overall, my impression is that TitanIron’s play is often similar to Iron’s. Unlike Iron, it does not make air units (it seems to have drop skills, but I didn’t run into any games with drop). Against protoss, TitanIron makes more tanks and uses them more cautiously and often clumsily. TitanIron also seems a bit fonder of expanding and growing its economy.

TitanIron adds over 4,000 lines of code to Iron. It was made by a team of 10, so that’s not an excessive amount of new code. The crash rate and the score suggest that the team was not disciplined enough in code quality and testing (of course Steamhammer crashed even more, so I don’t get to brag). Read on and you’ll see what most of the new lines of code do. I question the choices of where to spend effort. I’m not sure what the plan behind TitanIron was supposed to be.

openings

Iron does not play different openings as such. Conceptually, I see Iron as playing one opening which it varies reactively. TitanIron adds an opening directory with code that allows it to define specific build orders. The build order system is loosely modeled on Steamhammer’s, using similar names (which are not the same as UAlbertaBot’s names)—some members of the team have worked on Steamhammer forks.

TitanIron knows 3 specific build orders, named 8BB CC (1 barracks expand), SKT (tanks first), and 5BB (marines). Based on watching replays, TitanIron retains and uses Iron’s reactive opening, with modifications.

opponent-specific strategies

Iron does not recognize opponents by name. TitanIron recognizes 2 specific opponents: Locutus and PurpleSwarm. The zerg PurpleSwarm is a curious choice, since it did not play in CIG. Maybe they found it an interesting test opponent? In any case, Locutus is the main focus. It is recognized in 4 strategy classes, Locutus, SKT, TankAdvance, and Walling. In Iron’s codebase, any number of strategies can be active at the same time, and other parts of the code check by name which strategies are active to suit their actions to the situation.

	Locutus::Locutus()
	{
		std::string enemyName = him().Player()->getName();
		if (enemyName == "Locutus" || enemyName == "locutus")
		{
			me().SetOpening("SKT");
			m_detected = true;
		}
	}

SKT (defined in opening/opening.cpp) builds a barracks and refinery on 11, then adds 2 factories and gets tanks before vultures. It sounds as though it should refer to the “SK terran” unit mix of marines and medics with science vessels and no tanks, but it doesn’t. The Locutus strategy turns itself off (if I understand the code’s intent correctly) after all 4 dark templar of Locutus’s DT drop are dead, or after frame 13,000. Various buildings (barracks, factory, e-bay, turret) recognize when the Locutus strategy is active and carry out scripted actions. The name “Locutus” also activates the TankAdvance strategy which seems to first guard the natural and then perform a tank push, and deactivates the Walling strategy after frame 11,000 or when above 12 marines, causing the barracks to lift and open the wall.

TitanIron scored a total of 1 win out of 125 games against Locutus, so the special attention does not seem to have paid off.

PurpleSwarm gets less attention. (The question is why it got any.)

	Purpleswarm::Purpleswarm()
	{
		std::string enemyName = him().Player()->getName();
		if (him().Race() == BWAPI::Races::Zerg &&
			(enemyName == "Purpleswarm" || enemyName == "purpleswarm" || enemyName == "PurpleSwarm"))
		{
			me().SetOpening("5BB");
			m_detected = true;
		}
	}

5BB (also defined in opening/opening.cpp) builds barracks on 10 and 12, later adding a third barracks and training marines up to 30. I don’t see any other cases where TitanIron uses this opening. The rest of the code has no special instructions for PurpleSwarm or 5BB.

other new files

Besides the opening directory, TitanIron adds 16 files in the strategy and behavior directories, defining 8 strategies and behaviors. The added strategies are:

  • GuardNatural
  • Locutus
  • PurpleSwarm
  • SKT
  • TankAdvance

These are remarkable for being all and only the classes used when Locutus or PurpleSwarm is recognized. Do they have any other purpose? I didn’t dig into it, but I suspect that GuardNatural and TankAdvance may be used more widely against protoss.

The added unit behaviors are:

  • GuardLoc - guard a location
  • HangingBase - carry out drops
  • SKTAttack - related to SKT

GuardLoc has some connection with GuardNatural, but seems to be a general-purpose behavior, as far as I can tell. I’m not sure how HangingBase got its name.

The new opening directory and the newly added strategy and behavior files account for about 2/3rds of the lines of code added to Iron. The rest is scattered through the code and not as easy to inventory, but surely much of it must be uses of the new openings, strategies, and behaviors. I do see a lot of changes related to expanding.

SAIDA’s learning and SAIDA’s weaknesses

SAIDA is holding its position as #1 on SSCAIT, but it is under constant attack from other bots and loses some games. On the one hand, SAIDA has weaknesses against early harassment and timing attacks, especially if the opponent denies scouting. On the other hand, SAIDA appears to have a learning mechanism that recognizes rush timing and figures out a defense. The SAIDA page describes it as “He also catches perfect rush timing by using information he collected.” That’s a vague description, but the behavior does appear to involve learning from experience. MicroDK noted that SAIDA writes data only after it loses; this must be why. For example, BananaBrain tried a dark templar rush and won a series of games, but finally the learning kicked in and SAIDA figured out how to get turrets in time to stop it (SAIDA’s code was not updated). Since then, BananaBrain has mostly lost games, defeating SAIDA only once, in this game where the turret was seconds late.

Other examples include PurpleSpirit winning one game with BBS then being unable to win with it again, and Krasi0 winning with its fast barracks marine cheese with similar results.

In the latest attacks, Locutus won with center gates, making only 2 zealots before switching into dragoons, and Krasi0 added a bunker to its marine cheese to overcome SAIDA’s vulture counter to the marines (SAIDA crashed this game). Will SAIDA learn to defeat these tricks too? I don’t know, let’s find out!

How powerful is this learning mechanism? Surely there must be attacks that it cannot figure out how to forestall—or can’t figure out in reasonable time. If you find 2 winning tricks and switch between them, can it learn to defend against both? If you DT rush once so that it learns to get early turrets, does it get early turrets for the rest of time after you switch back to regular play? The unnecessary turrets give you a small advantage, and at a high level of play, small advantages are big.

Here are some of the weaknesses I see in SAIDA’s play.

  • Poor defense against unscouted early attacks, mitigated by the learning mechanism. SAIDA loses more SCVs than it should.
  • SAIDA recovers poorly from economic setbacks. It does not replenish lost SCVs as well as it should, and stops expanding after a while. If you gain an early lead, you can win by holding on and waiting for SAIDA to mine out.
  • SAIDA is vulnerable to mine drags. It sees no danger in having its spider mines and its forces next to each other. It will even place mines in its mineral line, begging you to blow up its SCVs.
  • SAIDA does not know how to build in safe locations. On some maps, like Moon Glaive, parts of the main base are easily sieged from outside. Krasi0 has won games by blasting down factories that are in range, and SAIDA keeps trying to rebuild in places that are also in range.
  • SAIDA is consistent and predictable. It varies to counter the opponent, but at heart always plays the same strategy and the same tactics. The dropships always fly along the edge.

SAIDA also has great strengths. The greatest may be the big red animated arrow that points out the main attack position. As long as SAIDA has a monopoly on big animated arrows, I think it will remain #1.

CIG 2018 - what Locutus learned

Locutus only recorded 8 games. It is configured to retain 200 game records, and I read the source code and verified that Locutus does not intentionally drop game records before the limit of 200. Recording exactly 8 games is the same problem that McRave suffered, and must be due to CIG problems. I don't know what the underlying problem was. My suspicion is that CIG organizers or tournament software may have accidentally or mistakenly cleared learning data for some bots. If that is what happened, and it happened once 8 games before the end of the tournament, it seems likely that it happened more than once. Who knows, though? The error might be somewhere else. Maybe they mistakenly shipped us data from after round 8 instead of round 125—in that case the tournament may have run normally, and only the data about it is wrong.

Locutus has prepared data for some opponents, stored in the AI directory. When Locutus finds it has no game records for a given opponent, it looks in AI to see if it has prepared data, and if so, it reads in those game records. At the end of the game, it writes out the prepared game records along with the record for the newly played game, and from then on the prepared records are treated like any others and retained unless and until the 200 record limit is passed.
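The bootstrap flow might be sketched like this (structure and names are my guesses at the behavior described, not Locutus’s actual code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Sketch of prepared-data bootstrapping: with no per-opponent records on
// disk, fall back to the prepared records shipped in the AI directory.
// After each game, everything is written back out together, and records
// are aged out only once the cap is passed.
struct GameRecord { std::string opening; bool won; };

const std::size_t recordLimit = 200;

std::vector<GameRecord> loadRecords(const std::vector<GameRecord>& onDisk,
                                    const std::vector<GameRecord>& prepared) {
    return onDisk.empty() ? prepared : onDisk;
}

void appendAndTrim(std::vector<GameRecord>& records, const GameRecord& latest) {
    records.push_back(latest);
    while (records.size() > recordLimit) {
        records.erase(records.begin());   // drop the oldest record
    }
}
```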

How many other bots were affected by the 8 game problem?


Here is Locutus’s prepared data. Against some opponents, like McRave, Locutus picks out openings to avoid at first. If other openings don’t win either, I’m sure Locutus will come back and try these anyway. Against others, it picks out winners to try first. For some, it simply provides data. Most but not all of the prepared data is for opponents which were carried over from last year, for which pre-learning is sure to be helpful... if it is done on the same maps.

#3 mcrave

opening                     games   wins
12Nexus5ZealotFECannons         1     0%
Turtle                          1     0%
total (2 openings)              2     0%

#6 iron

opening                     games   wins
DTDrop                         14   100%
total (1 opening)              14   100%

#7 zzzkbot

opening                     games   wins
ForgeExpand5GateGoon            2   100%
total (1 opening)               2   100%

#11 ualbertabot

opening                     games   wins
4GateGoon                       1     0%
9-9GateDefensive                2    50%
ForgeExpand5GateGoon           15    93%
total (3 openings)             18    83%

#14 aiur

opening                     games   wins
4GateGoon                       3   100%
9-9GateDefensive                1   100%
total (2 openings)              4   100%

#16 ziabot

opening                     games   wins
9-9GateDefensive                1     0%
ForgeExpand5GateGoon            1   100%
total (2 openings)              2    50%

#19 terranuab

opening                     games   wins
DTDrop                         10   100%
total (1 opening)              10   100%

#21 opprimobot

opening                     games   wins
DTDrop                         11   100%
total (1 opening)              11   100%

#22 sling

opening                     games   wins
ForgeExpand5GateGoon            2   100%
total (1 opening)               2   100%

#23 srbotone

opening                     games   wins
DTDrop                          7   100%
PlasmaProxy2Gate                1   100%
total (2 openings)              8   100%

#24 bonjwa

opening                     games   wins
DTDrop                          6   100%
PlasmaProxy2Gate                1   100%
total (2 openings)              7   100%

overall

Each cell is games and win rate.

opening                     total        PvT          PvP         PvZ         PvR
12Nexus5ZealotFECannons      1    0%      -            1    0%     -           -
4GateGoon                    4   75%      -            3  100%     -           1    0%
9-9GateDefensive             4   50%      -            1  100%     1    0%     2   50%
DTDrop                      48  100%     48  100%      -           -           -
ForgeExpand5GateGoon        20   95%      -            -           5  100%    15   93%
PlasmaProxy2Gate             2  100%      2  100%      -           -           -
Turtle                       1    0%      -            1    0%     -           -
total                       80   92%     50  100%      6   67%     6   83%    18   83%
openings played              7            2            4           2           3

Here is Locutus’s learned data. In every case, the number of games recorded is 8 plus the number of games in the prepared data. With only 8 games there is not much to go on, but the prepared data does seem to have helped Locutus choose successful openings.

#2 purplewave

opening                     games   wins
12Nexus5ZealotFECannons         1     0%
4GateGoon                       1     0%
9-9GateDefensive                5    80%
Proxy9-9Gate                    1     0%
total (4 openings)              8    50%

#3 mcrave

opening                     games   wins
12Nexus5ZealotFECannons         1     0%
4GateGoon                       3    67%
Proxy9-9Gate                    5   100%
Turtle                          1     0%
total (4 openings)             10    70%

#4 tscmoo

opening                     games   wins
4GateGoon                       1     0%
9-9GateDefensive                1     0%
ForgeExpand5GateGoon            4    25%
Proxy9-9Gate                    2    50%
total (4 openings)              8    25%

#5 isamind

opening                     games   wins
4GateGoon                       6    83%
9-9GateDefensive                1   100%
Proxy9-9Gate                    1   100%
total (3 openings)              8    88%

#6 iron

opening                     games   wins
DTDrop                         22    95%
total (1 opening)              22    95%

#7 zzzkbot

opening                     games   wins
ForgeExpand5GateGoon            7    86%
ForgeExpandSpeedlots            2    50%
Proxy9-9Gate                    1     0%
total (3 openings)             10    70%

#8 microwave

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#9 letabot

opening                     games   wins
DTDrop                          8    88%
total (1 opening)               8    88%

#10 megabot

opening                     games   wins
4GateGoon                       8   100%
total (1 opening)               8   100%

#11 ualbertabot

opening                     games   wins
4GateGoon                       1     0%
9-9GateDefensive                2    50%
ForgeExpand5GateGoon           23    91%
total (3 openings)             26    85%

#12 tyr

opening                     games   wins
4GateGoon                       8   100%
total (1 opening)               8   100%

#13 ecgberht

opening                     games   wins
DTDrop                          8    88%
total (1 opening)               8    88%

#14 aiur

opening                     games   wins
12Nexus5ZealotFECannons         1     0%
2GateDTExpo                     1   100%
4GateGoon                       5    80%
9-9GateDefensive                1   100%
Proxy9-9Gate                    4    75%
total (5 openings)             12    75%

#15 titaniron

opening                     games   wins
DTDrop                          8   100%
total (1 opening)               8   100%

#16 ziabot

opening                     games   wins
9-9GateDefensive                1     0%
ForgeExpand5GateGoon            6    83%
ForgeExpandSpeedlots            2    50%
Proxy9-9Gate                    1   100%
total (4 openings)             10    70%

#17 steamhammer

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#18 overkill

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#19 terranuab

opening                     games   wins
DTDrop                         18   100%
total (1 opening)              18   100%

#20 cunybot

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#21 opprimobot

opening                     games   wins
DTDrop                         19   100%
total (1 opening)              19   100%

#22 sling

opening                     games   wins
ForgeExpand5GateGoon           10   100%
total (1 opening)              10   100%

#23 srbotone

opening                     games   wins
DTDrop                         15   100%
PlasmaProxy2Gate                1   100%
total (2 openings)             16   100%

#24 bonjwa

opening                     games   wins
DTDrop                         14   100%
PlasmaProxy2Gate                1   100%
total (2 openings)             15   100%

#25 stormbreaker

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#26 korean

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

#27 salsa

opening                     games   wins
ForgeExpand5GateGoon            8   100%
total (1 opening)               8   100%

overall

Each cell is games and win rate.

opening                     total         PvT           PvP          PvZ          PvR
12Nexus5ZealotFECannons       3    0%      -             3    0%      -            -
2GateDTExpo                   1  100%      -             1  100%      -            -
4GateGoon                    33   82%      -            31   87%      -            2    0%
9-9GateDefensive             11   64%      -             7   86%      1    0%      3   33%
DTDrop                      112   97%    112   97%       -            -            -
ForgeExpand5GateGoon        106   93%      -             -           79   97%     27   81%
ForgeExpandSpeedlots          4   50%      -             -            4   50%      -
PlasmaProxy2Gate              2  100%      2  100%       -            -            -
Proxy9-9Gate                 15   73%      -            11   82%      2   50%      2   50%
Turtle                        1    0%      -             1    0%      -            -
total                       288   90%    114   97%      54   80%     86   93%     34   71%
openings played              10            2             6            4            4
CIG 2018 - what Steamhammer learned

I wrote a new script to analyze Steamhammer’s learning data. A couple points: 1. Steamhammer crashed in nearly half of its games in CIG 2018. It can’t save learning data after a crash, so against some opponents Steamhammer had few opportunities to experiment. The number of crashes varied strongly depending on the opponent. 2. Steamhammer was set to remember the previous 100 games, since I figure there’s no play advantage to remembering more. The tournament was 125 rounds long. So in the tables below, “100 games” means that Steamhammer played at least 100 games without crashing, and up to 25 of the early games may have been dropped. Against some weak opponents, Steamhammer learned, within 25 games, how to win 100% of the remaining games, and those tables give a 100% win rate for remembered games. Steamhammer did not score 100% against any opponent overall; it always had some losses in early games.

I should be able to run the same analysis for Steamhammer forks which retain Steamhammer’s opponent model file format.

#1 Locutus

opening                     games   wins
2HatchHydraBust                 1     0%
3HatchHydraExpo                 2     0%
3HatchLingBust                  1     0%
3HatchLingExpo                  1     0%
4HatchBeforeGas                 1     0%
OverpoolSpeed                   9    56%
total (6 openings)             15    33%

A mystery is solved. Why was Steamhammer’s crash rate higher than I expected? Because many opponents learned to make Steamhammer crash. A crash for the opponent is a win, and the bot doesn’t care how it wins, so if it can learn a plan that makes the opponent crash reliably, it will. The stronger opponents tend to be learning bots, so Steamhammer crashed more often on average against strong opponents. This also means that my glib conclusion that “Steamhammer won 66% of non-crash games, so it seems to have kept up with general progress” is not sound. The non-crash games were mostly against weak opponents.

Locutus was lucky that it could figure out how to break Steamhammer. As Bruce mentioned in a comment, this Locutus version had a bug when facing certain zergling timings, and Steamhammer quickly figured out how to exploit the bug. It’s possible that Steamhammer minus the crash would have upset Locutus.

#2 PurpleWave

opening                     games   wins
11Gas10PoolMuta                 1     0%
3HatchHydra                     3     0%
3HatchLurker                    1     0%
4PoolSoft                       1     0%
7Pool12Hatch                    1     0%
7PoolSoft                       1     0%
9Hatch8Pool                     1     0%
9HatchExpo9Pool9Gas             1     0%
9PoolSpeed                      1     0%
AntiFactory                     1     0%
Over10Hatch                     6     0%
Over10Hatch1Sunk                7     0%
Over10Hatch2Sunk               18     0%
Over10HatchBust                 1     0%
Over10HatchSlowLings            4     0%
OverhatchMuta                   1     0%
OverpoolHatch                   1     0%
OverpoolTurtle                  3     0%
ZvP_3HatchPoolHydra             2     0%
ZvP_4HatchPoolHydra             1     0%
ZvT_12PoolMuta                  1     0%
ZvZ_Overpool11Gas               1     0%
total (22 openings)            58     0%

PurpleWave shut out Steamhammer. It didn’t learn to make Steamhammer crash because every game was a win for it anyway. Steamhammer desperately tried alternatives all over the map, including crazy all-ins and openings intended for ZvT and ZvZ, and nothing worked.

#3 McRave

opening                     games   wins
11Gas10PoolLurker               1     0%
4HatchBeforeGas                 1     0%
9HatchExpo9Pool9Gas             1     0%
9PoolSpeed                      5   100%
ZvP_3HatchPoolHydra             2     0%
total (5 openings)             10    50%

#4 tscmoo

opening                     games   wins
9PoolExpo                       1     0%
9PoolHatch                      1     0%
9PoolSunkHatch                  1     0%
AntiFact_2Hatch                 1     0%
Over10Hatch2Sunk                1     0%
OverhatchExpoLing              13    15%
OverpoolSpeed                  22    23%
total (7 openings)             40    18%

#5 ISAMind

opening                     games   wins
3HatchHydraExpo                 1     0%
4HatchBeforeGas                 1     0%
OverpoolSpeed                   4   100%
ZvP_2HatchMuta                  7     0%
ZvP_3HatchPoolHydra             6     0%
total (5 openings)             19    21%

#6 Iron

opening                     games   wins
2HatchHydra                     1     0%
3HatchLingExpo                  2     0%
4PoolHard                       1     0%
6PoolSpeed                      1     0%
9Hatch8Pool                     1     0%
9HatchMain9Pool9Gas             1     0%
9PoolSunkSpeed                  1     0%
AntiFact_13Pool                 4     0%
AntiFact_2Hatch                83    12%
AntiFactory                     1     0%
Over10Hatch                     1     0%
PurpleSwarmBuild                1     0%
ZvP_2HatchMuta                  1     0%
ZvT_12PoolMuta                  1     0%
total (14 openings)           100    10%

Iron is not a learning bot, so it did not learn to crash Steamhammer. Still, these results show a weakness in Steamhammer: Its best opening against Iron is AntiFactory, which it tried only once in these 100 games. Steamhammer did not explore enough. I tried to fix the weakness in Steamhammer 2.0.

#7 ZZZKBot

opening                     games   wins
11Gas10PoolMuta                 1     0%
8Pool                           7    29%
9HatchMain9Pool9Gas             1     0%
9PoolSpeed                      1     0%
OverhatchMuta                   1     0%
Overpool+1                      1     0%
OverpoolSpeed                   1     0%
ZvZ_12HatchMain                 2     0%
ZvZ_12Pool                      1     0%
ZvZ_12PoolLing                 48    58%
ZvZ_Overgas9Pool                2     0%
ZvZ_Overpool9Gas                2     0%
total (12 openings)            68    44%

#8 Microwave

opening | games | wins
9PoolSunkHatch | 5 | 80%
9PoolSunkSpeed | 27 | 67%
OverpoolSunk | 1 | 0%
OverpoolTurtle | 3 | 33%
ZvZ_12PoolLing | 1 | 0%
5 openings | 37 | 62%

This looks like successful learning. Too bad Steamhammer only successfully played 37 of the 125 games.

#9 LetaBot

opening | games | wins
11Gas10PoolLurker | 1 | 0%
2HatchLurkerAllIn | 4 | 0%
3HatchHydraExpo | 1 | 0%
3HatchLurker | 13 | 38%
9HatchExpo9Pool9Gas | 45 | 36%
OverpoolLurker | 13 | 31%
ZvP_2HatchMuta | 1 | 0%
ZvT_12PoolMuta | 1 | 0%
ZvT_13Pool | 1 | 0%
ZvT_3HatchMuta | 1 | 0%
10 openings | 81 | 31%

#10 MegaBot

opening | games | wins
11Gas10PoolLurker | 1 | 0%
3HatchHydra | 1 | 0%
3HatchHydraExpo | 1 | 0%
3HatchLingExpo | 21 | 43%
Over10Hatch | 1 | 0%
OverhatchExpoLing | 1 | 100%
ZvP_3HatchPoolHydra | 2 | 0%
7 openings | 28 | 36%

#11 UAlbertaBot

opening | games | wins
3HatchLingExpo | 1 | 0%
5PoolHard2Player | 1 | 0%
9PoolExpo | 1 | 0%
9PoolSpeed | 1 | 0%
9PoolSunkHatch | 46 | 33%
9PoolSunkSpeed | 29 | 48%
Over10Hatch1Sunk | 2 | 0%
OverpoolSpeed | 1 | 0%
ZvZ_Overpool9Gas | 1 | 0%
9 openings | 83 | 35%

#12 Tyr

opening | games | wins
9PoolHatch | 5 | 100%
ZvP_3HatchPoolHydra | 5 | 0%
2 openings | 10 | 50%

#13 Ecgberht

opening | games | wins
11Gas10PoolLurker | 10 | 50%
2HatchLurker | 23 | 61%
2HatchLurkerAllIn | 44 | 75%
Over10HatchBust | 3 | 33%
OverpoolLurker | 8 | 75%
OverpoolSpeed | 3 | 33%
ZvT_13Pool | 1 | 0%
7 openings | 92 | 65%

#14 Aiur

opening | games | wins
11Gas10PoolLurker | 1 | 100%
5PoolHard2Player | 1 | 100%
9PoolSunkHatch | 1 | 100%
9PoolSunkSpeed | 2 | 100%
Over10Hatch | 1 | 0%
Over10Hatch1Sunk | 2 | 50%
Over10Hatch2Hard | 1 | 100%
Over10HatchSlowLings | 1 | 100%
OverpoolSpeed | 2 | 100%
OverpoolTurtle | 3 | 67%
10 openings | 15 | 80%

#15 TitanIron

opening | games | wins
3HatchLingBust | 1 | 0%
AntiFact_13Pool | 6 | 50%
AntiFact_2Hatch | 1 | 0%
AntiFactory | 74 | 42%
Over10Hatch2Sunk | 1 | 0%
OverhatchExpoMuta | 1 | 0%
OverpoolLurker | 1 | 0%
ZvZ_Overgas9Pool | 14 | 21%
ZvZ_Overpool9Gas | 1 | 0%
9 openings | 100 | 37%

This selection of openings implies that TitanIron plays a factory-first build against zerg, like Iron, and is a non-learning bot, like Iron. Later I’ll look into the source and find out for sure.

#16 Ziabot

opening | games | wins
11Gas10PoolMuta | 4 | 25%
2.5HatchMuta | 1 | 0%
3HatchHydraBust | 1 | 0%
6PoolSpeed | 1 | 0%
8Pool | 7 | 71%
9Hatch8Pool | 1 | 0%
9PoolHatch | 4 | 50%
ZvP_2HatchTurtle | 1 | 0%
ZvZ_12Pool | 1 | 0%
ZvZ_12PoolMain | 16 | 25%
ZvZ_Overpool11Gas | 10 | 50%
ZvZ_Overpool9Gas | 53 | 74%
12 openings | 100 | 56%

Low win rates against Zia and some other opponents suggest to me that Steamhammer had other new weaknesses besides crashing. I think Steamhammer should score over 80% against Zia.

#18 Overkill

opening | games | wins
11Gas10PoolMuta | 10 | 90%
4PoolHard | 23 | 96%
6PoolSpeed | 28 | 100%
9Hatch8Pool | 1 | 0%
OverhatchLing | 2 | 50%
OverpoolSpeed | 13 | 92%
ZvZ_12HatchExpo | 2 | 50%
ZvZ_12PoolMain | 1 | 0%
8 openings | 80 | 91%

#19 TerranUAB

opening | games | wins
2HatchLurker | 52 | 90%
AntiFact_13Pool | 8 | 88%
AntiFact_2Hatch | 9 | 78%
AntiFactory | 31 | 90%
4 openings | 100 | 89%

#20 CUNYbot

opening | games | wins
11Gas10PoolMuta | 9 | 78%
OverhatchLing | 34 | 97%
ZvZ_12PoolLing | 27 | 96%
ZvZ_Overgas9Pool | 1 | 0%
ZvZ_Overpool9Gas | 19 | 89%
5 openings | 90 | 92%

#21 OpprimoBot

opening | games | wins
11Gas10PoolLurker | 3 | 67%
2HatchLurker | 2 | 50%
2HatchLurkerAllIn | 6 | 83%
6PoolSpeed | 19 | 100%
OverpoolLurker | 1 | 0%
OverpoolSpeed | 5 | 80%
ZvT_12PoolMuta | 20 | 95%
ZvT_3HatchMuta | 20 | 100%
ZvT_3HatchMutaExpo | 24 | 100%
9 openings | 100 | 94%

#22 Sling

opening | games | wins
4PoolHard | 4 | 75%
4PoolSoft | 6 | 100%
5PoolHard2Player | 3 | 100%
ZvZ_12HatchMain | 1 | 0%
ZvZ_Overgas9Pool | 1 | 0%
5 openings | 15 | 80%

The selection of fast rush openings suggests that Sling played a macro strategy which was countered by fast rushes. But I don’t want to draw strong conclusions based on 15 non-crash games out of 125.

#23 SRbotOne

opening | games | wins
11Gas10PoolLurker | 14 | 93%
2HatchLurker | 10 | 90%
2HatchLurkerAllIn | 10 | 90%
3HatchLurker | 17 | 100%
4PoolSoft | 17 | 100%
5PoolHard | 7 | 100%
9HatchExpo9Pool9Gas | 4 | 75%
9PoolLurker | 3 | 100%
OverpoolLurker | 5 | 100%
9 openings | 87 | 95%

The wide range of lurker openings means that SRbotOne by Johan Kayser fought with mostly barracks units. Well, we already knew that.

#24 Bonjwa

opening | games | wins
9PoolExpo | 6 | 100%
9PoolSunkHatch | 5 | 100%
9PoolSunkSpeed | 5 | 100%
AntiFact_2Hatch | 3 | 100%
AntiFactory | 5 | 100%
ZvT_2HatchMuta | 1 | 100%
6 openings | 25 | 100%

#25 Stormbreaker

opening | games | wins
11Gas10PoolMuta | 1 | 100%
4PoolHard | 1 | 100%
9PoolSunkHatch | 8 | 100%
9PoolSunkSpeed | 8 | 100%
OverhatchLing | 1 | 100%
OverhatchMuta | 7 | 100%
OverpoolSpeed | 1 | 100%
OverpoolSunk | 7 | 100%
ZvZ_12HatchExpo | 2 | 100%
ZvZ_12HatchMain | 3 | 100%
ZvZ_12PoolLing | 1 | 100%
ZvZ_12PoolMain | 3 | 100%
12 openings | 43 | 100%

#26 Korean

opening | games | wins
4PoolHard | 1 | 100%
4PoolSoft | 3 | 100%
5PoolHard | 5 | 100%
5PoolHard2Player | 3 | 100%
5PoolSoft | 1 | 100%
6PoolSpeed | 6 | 100%
OverhatchLing | 9 | 100%
OverhatchMuta | 12 | 100%
ZvZ_12HatchExpo | 13 | 100%
ZvZ_12HatchMain | 16 | 100%
ZvZ_12PoolLing | 14 | 100%
ZvZ_12PoolMain | 17 | 100%
12 openings | 100 | 100%

#27 Salsa

opening | games | wins
4PoolHard | 2 | 100%
4PoolSoft | 4 | 100%
5PoolHard | 7 | 100%
5PoolHard2Player | 1 | 100%
5PoolSoft | 1 | 100%
6PoolSpeed | 8 | 100%
OverhatchLing | 11 | 100%
OverhatchMuta | 8 | 100%
ZvZ_12HatchExpo | 12 | 100%
ZvZ_12HatchMain | 20 | 100%
ZvZ_12PoolLing | 13 | 100%
ZvZ_12PoolMain | 12 | 100%
ZvZ_Overgas9Pool | 1 | 100%
13 openings | 100 | 100%

overall

opening | total | ZvT | ZvP | ZvZ | ZvR
(each cell: games wins)
11Gas10PoolLurker | 31 68% | 28 71% | 3 33% | - | -
11Gas10PoolMuta | 26 69% | - | 1 0% | 25 72% | -
2.5HatchMuta | 1 0% | - | - | 1 0% | -
2HatchHydra | 1 0% | 1 0% | - | - | -
2HatchHydraBust | 1 0% | - | 1 0% | - | -
2HatchLurker | 87 82% | 87 82% | - | - | -
2HatchLurkerAllIn | 64 73% | 64 73% | - | - | -
3HatchHydra | 4 0% | - | 4 0% | - | -
3HatchHydraBust | 1 0% | - | - | 1 0% | -
3HatchHydraExpo | 5 0% | 1 0% | 4 0% | - | -
3HatchLingBust | 2 0% | 1 0% | 1 0% | - | -
3HatchLingExpo | 25 36% | 2 0% | 22 41% | - | 1 0%
3HatchLurker | 31 71% | 30 73% | 1 0% | - | -
4HatchBeforeGas | 3 0% | - | 3 0% | - | -
4PoolHard | 32 91% | 1 0% | - | 31 94% | -
4PoolSoft | 31 97% | 17 100% | 1 0% | 13 100% | -
5PoolHard | 19 100% | 7 100% | - | 12 100% | -
5PoolHard2Player | 9 89% | - | 1 100% | 7 100% | 1 0%
5PoolSoft | 2 100% | - | - | 2 100% | -
6PoolSpeed | 63 97% | 20 95% | - | 43 98% | -
7Pool12Hatch | 1 0% | - | 1 0% | - | -
7PoolSoft | 1 0% | - | 1 0% | - | -
8Pool | 14 50% | - | - | 14 50% | -
9Hatch8Pool | 4 0% | 1 0% | 1 0% | 2 0% | -
9HatchExpo9Pool9Gas | 51 37% | 49 39% | 2 0% | - | -
9HatchMain9Pool9Gas | 2 0% | 1 0% | - | 1 0% | -
9PoolExpo | 8 75% | 6 100% | - | - | 2 0%
9PoolHatch | 10 70% | - | 5 100% | 4 50% | 1 0%
9PoolLurker | 3 100% | 3 100% | - | - | -
9PoolSpeed | 8 62% | - | 6 83% | 1 0% | 1 0%
9PoolSunkHatch | 66 50% | 5 100% | 1 100% | 13 92% | 47 32%
9PoolSunkSpeed | 72 65% | 6 83% | 2 100% | 35 74% | 29 48%
AntiFact_13Pool | 18 56% | 18 56% | - | - | -
AntiFact_2Hatch | 97 21% | 96 21% | - | - | 1 0%
AntiFactory | 112 57% | 111 58% | 1 0% | - | -
Over10Hatch | 9 0% | 1 0% | 8 0% | - | -
Over10Hatch1Sunk | 11 9% | - | 9 11% | - | 2 0%
Over10Hatch2Hard | 1 100% | - | 1 100% | - | -
Over10Hatch2Sunk | 20 0% | 1 0% | 18 0% | - | 1 0%
Over10HatchBust | 4 25% | 3 33% | 1 0% | - | -
Over10HatchSlowLings | 5 20% | - | 5 20% | - | -
OverhatchExpoLing | 14 21% | - | 1 100% | - | 13 15%
OverhatchExpoMuta | 1 0% | 1 0% | - | - | -
OverhatchLing | 57 96% | - | - | 57 96% | -
OverhatchMuta | 29 93% | - | 1 0% | 28 96% | -
Overpool+1 | 1 0% | - | - | 1 0% | -
OverpoolHatch | 1 0% | - | 1 0% | - | -
OverpoolLurker | 28 54% | 28 54% | - | - | -
OverpoolSpeed | 61 56% | 8 62% | 15 73% | 15 87% | 23 22%
OverpoolSunk | 8 88% | - | - | 8 88% | -
OverpoolTurtle | 9 33% | - | 6 33% | 3 33% | -
PurpleSwarmBuild | 1 0% | 1 0% | - | - | -
ZvP_2HatchMuta | 9 0% | 2 0% | 7 0% | - | -
ZvP_2HatchTurtle | 1 0% | - | - | 1 0% | -
ZvP_3HatchPoolHydra | 17 0% | - | 17 0% | - | -
ZvP_4HatchPoolHydra | 1 0% | - | 1 0% | - | -
ZvT_12PoolMuta | 23 83% | 22 86% | 1 0% | - | -
ZvT_13Pool | 2 0% | 2 0% | - | - | -
ZvT_2HatchMuta | 1 100% | 1 100% | - | - | -
ZvT_3HatchMuta | 21 95% | 21 95% | - | - | -
ZvT_3HatchMutaExpo | 24 100% | 24 100% | - | - | -
ZvZ_12HatchExpo | 29 97% | - | - | 29 97% | -
ZvZ_12HatchMain | 42 93% | - | - | 42 93% | -
ZvZ_12Pool | 2 0% | - | - | 2 0% | -
ZvZ_12PoolLing | 104 79% | - | - | 104 79% | -
ZvZ_12PoolMain | 49 73% | - | - | 49 73% | -
ZvZ_Overgas9Pool | 19 21% | 14 21% | - | 5 20% | -
ZvZ_Overpool11Gas | 11 45% | - | 1 0% | 10 50% | -
ZvZ_Overpool9Gas | 76 74% | 1 0% | - | 74 76% | 1 0%
total | 1596 64% | 685 62% | 155 26% | 633 82% | 123 29%
openings played | 69 | 37 | 36 | 31 | 13

This summary table took me hours to get right, so I hope it's useful.

Steamhammer played 69 openings in 1596 non-crash games, which is around 2/3rds of the openings it knows. No single matchup had more than 37 different openings. There were far more games against terran and zerg than against protoss and random, partly due to the crashing pattern. Against the random opponents (Tscmoo and UAlbertaBot), it settled on mostly general-purpose openings, as you might expect. Its best matchup was ZvZ, with a Jaedong-like 82% win rate (and lately, Jaedong crashes half the time too, so they’re just alike).

Openings that were both popular and successful include 2HatchLurker and 2HatchLurkerAllIn versus terran, 6PoolSpeed with a 97% win rate against mostly weak opponents, 9PoolSunkSpeed used across all matchups, and ZvZ specialties OverhatchLing, ZvZ_12PoolLing, and ZvZ_Overpool9Gas. None of the opening choices surprises me, though some of the win rates do.

CIG 2018 - Overkill was broken

Did Overkill actually perform much worse in CIG 2018 than in past years? Here are the bots carried over from 2017 to 2018, with win rates for both years taken from the official results. We see that Overkill's win rate collapsed from 2017 to 2018, a far bigger change than for any other bot. Iron performed poorly in 2017 because it failed on the map Hitchhiker. Other bots mostly had modestly lower win rates in this year's stronger field. My 2017 crosstable was calculated from the detailed results, which included some corrupted data and differ a little from the official results—except for Sling, which differed a lot: 26.07% in 2017 versus its official 18.08%, which reduces its year-over-year difference.

bot | 2017 | 2018
UAlbertaBot | 65.59% | 60.58%
Overkill | 62.75% | 34.68%
Ziabot | 61.75% | 51.08%
Iron | 61.62% | 74.31%
Aiur | 59.83% | 51.54%
TerranUAB | 36.78% | 34.40%
SRbotOne | 34.14% | 24.37%
OpprimoBot | 30.69% | 27.11%
Bonjwa | 30.67% | 23.57%
Sling | 18.08% | 26.52%
Salsa | 4.64% | 1.54%

Was the difference due to the maps? No. In 2017, Overkill scored 57% or more on every map (CIG 2017 bots x maps). In 2018, Overkill scored 38% or below on every map (official results). And 3 of the 5 maps were the same: Tau Cross, Andromeda, and Python.

Did they run different versions of Overkill? The source that they distributed for Overkill is identical in both years. Theoretically they might have run something different by mistake—but it produced the expected files in the write directory, so it would be a surprise.

Finally I downloaded the Overkill replays and watched some. The poor bot’s build orders were severely distorted, skipping over drones and buildings. It would do things like take gas on 7 and then stop all construction, or follow a normal-ish build but skip many drones so that its economy was anemic. Sometimes drones moved erratically instead of mining. It looked similar to play I’ve seen from Steamhammer when latency stuff is way out of whack. Of the games I looked at, some were hopelessly muddled, some were close to normal with only occasional dropped drones, and none were 100% good. I don’t know what the problem was, something corrupted or a server setting that Overkill could not cope with, but whatever it was, Overkill was badly broken and far short of its normal strength.

43864-OVER_ZIAB.REP (Overkill’s last game of the tournament) is an example replay that shows the problems.

It’s possible that some other bots may have been affected. If the difference was in a server setting that Overkill was not ready for, then it would be surprising if every other bot was ready.

CIG 2018 - what Overkill learned

After analyzing AIUR yesterday, I ran a similar (but much simpler) analysis for the classic zerg #18 Overkill. The version in CIG 2018 has not been updated since 2015 and is the same version that still plays on SSCAIT. In 2015 it was a sensation, placing 3rd in both CIG and AIIDE—its 18th place in this tournament, with about a 35% win rate, suggests huge progress in the field over the past 3 years. But keep reading; Overkill appears to have been broken in this tournament. I did this analysis once before: See what Overkill learned in AIIDE 2015.

Classic Overkill knows 3 openings, a 9 pool opening which stays on one base for a good time, and 10- and 12-hatch openings to get mutalisks first. When it chooses 9 pool, that means that the opponent is either rushing (so the 9 pool is necessary to defend) or is being too greedy (which the 9 pool can exploit). Overkill counts some games twice in an attempt to learn faster, so sometimes its total game count is larger than the number of rounds in the tournament (125).

opponent | NinePoolling | TenHatchMuta | TwelveHatchMuta | total
(each cell: n win)
#1 Locutus | 42 0% | 42 0% | 41 0% | 125 0%
#2 PurpleWave | 43 0% | 43 0% | 42 0% | 128 0%
#3 McRave | 44 0% | 44 0% | 43 0% | 131 0%
#4 tscmoo | 40 0% | 40 0% | 47 2% | 127 1%
#5 ISAMind | 42 0% | 42 0% | 41 0% | 125 0%
#6 Iron | 54 7% | 32 0% | 39 3% | 125 4%
#7 ZZZKBot | 47 2% | 39 0% | 47 2% | 133 2%
#8 Microwave | 54 6% | 35 0% | 42 2% | 131 3%
#9 LetaBot | 52 6% | 33 0% | 40 2% | 125 3%
#10 MegaBot | 60 12% | 24 0% | 41 7% | 125 8%
#11 UAlbertaBot | 41 0% | 41 0% | 48 2% | 130 1%
#12 Tyr | 40 0% | 39 0% | 47 2% | 126 1%
#13 Ecgberht | 57 16% | 24 4% | 42 12% | 123 12%
#14 Aiur | 94 34% | 14 7% | 17 12% | 125 28%
#15 TitanIron | 36 11% | 20 0% | 69 16% | 125 12%
#16 Ziabot | 16 0% | 16 0% | 93 23% | 125 17%
#17 Steamhammer | 107 48% | 7 0% | 10 10% | 124 42%
#19 TerranUAB | 24 67% | 3 0% | 98 83% | 125 78%
#20 CUNYbot | 18 44% | 6 17% | 101 66% | 125 61%
#21 OpprimoBot | 36 67% | 3 0% | 86 76% | 125 71%
#22 Sling | 67 46% | 6 0% | 52 42% | 125 42%
#23 SRbotOne | 23 74% | 4 25% | 95 89% | 122 84%
#24 Bonjwa | 75 92% | 4 25% | 46 87% | 125 88%
#25 Stormbreaker | 70 91% | 2 0% | 53 87% | 125 88%
#26 Korean | 77 99% | 2 0% | 46 93% | 125 95%
#27 Salsa | 46 100% | 32 94% | 46 100% | 124 98%
total | 1305 36% | 597 6% | 1372 40% | 3274 32%

The 10 hatch opening was useless in this tournament—against every opponent, 10 hatch was the worst choice, at best tying for 0. In 2015, 10 hatch was about as successful as the other openings.

Signs are that something was wrong with Overkill in this tournament. In AIIDE 2015, then #3 Overkill scored 23% against then #4 UAlbertaBot, 68% against #5 AIUR, and 99% against #17 OpprimoBot. In CIG 2018, it was 1.6% against UAlbertaBot, 28% against AIUR, 71% against OpprimoBot. All versions appear to be the same in both tournaments—I didn’t look closely, but I did unpack the sources and check dates (in particular, Overkill has file change dates up to 8 October 2015 in both tournaments). Overkill had 14 crash games in CIG 2018, not enough to account for the difference. It’s hard to believe that the maps could have shifted results that much.

Tomorrow: What went wrong with Overkill?

CIG 2018 - what AIUR learned

Here is what the classic protoss bot AIUR learned about each opponent over the course of CIG 2018. AIUR has not been updated in many years and has fallen behind the state of the art, but its varied strategies and learning still make it a tricky opponent in a long tournament. Seeing AIUR's counters for each opponent tells us something about how the opponent played. For past editions, see AIIDE 2017 what AIUR learned and what AIUR learned (AIIDE 2015).

This is generated from data in AIUR's final write directory. There were 125 rounds and 5 maps: one 2-player map and two each of 3-player and 4-player maps. For some opponents, all games were recorded, giving 25 games on the 2-player map and 50 games each on 3- and 4-player maps. For most opponents, fewer games were recorded. AIUR recorded 2932 games, and the results table lists 318 crashes for AIUR. 2932 + 318 = 3250, the correct total game count. Unrecorded games were lost due to crashes, and for no other reason.

First the overview, summing across all opponents.

overall | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 72 49% | 127 65% | 132 35% | 331 49%
rush | 29 41% | 269 33% | 261 55% | 559 44%
aggressive | 13 23% | 225 68% | 184 78% | 422 71%
fast expo | 33 24% | 185 48% | 207 48% | 425 46%
macro | 46 33% | 180 52% | 135 60% | 361 53%
defensive | 141 75% | 314 73% | 379 55% | 834 65%
total | 334 54% | 1300 56% | 1298 56% | 2932 56%
  • 2, 3, 4 - map size, the number of starting positions
  • n - games recorded
  • wins - winning percentage over those games
  • cheese - cannon rush
  • rush - dark templar rush
  • aggressive - fast 4 zealot drop
  • fast expo - nexus first
  • macro - aim for a strong middle game army
  • defensive - try to be safe against rushes

Looking across the bottom row, you can see that AIUR had a plus score on every size of map, and that it had to choose different strategies to do so well. It's a strong result for a bot which has essentially no micro skills and has not been updated since 2014. It does still have the best cannon rush of any bot, if you ask me.

#1 locutus | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 8 0% | 25 12% | 34 9%
rush | 1 0% | 10 0% | 6 0% | 17 0%
aggressive | 1 0% | 4 0% | 5 0% | 10 0%
fast expo | 1 0% | 14 0% | 5 0% | 20 0%
macro | 1 0% | 7 0% | 4 0% | 12 0%
defensive | 1 0% | 7 14% | 5 0% | 13 8%
total | 6 0% | 50 2% | 50 6% | 106 4%

Even against the toughest opponents, AIUR can scrape a small edge with learning. Against Locutus, it pulled barely above zero, but got a few extra wins because it discovered that its cannon rush occasionally scores on 4-player maps. Results against PurpleWave below are similar. I suspect that if AIUR had played the cannon rush every game, Locutus would have adapted and nullified the edge. Maybe it did, and that’s why the edge is so small.

#2 purplewave | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 8 0% | 39 18% | 48 15%
rush | 1 0% | 8 0% | 2 0% | 11 0%
aggressive | 1 0% | 10 0% | 3 0% | 14 0%
fast expo | 4 0% | 8 0% | 2 0% | 14 0%
macro | 1 0% | 10 0% | 2 0% | 13 0%
defensive | 3 0% | 6 0% | 2 0% | 11 0%
total | 11 0% | 50 0% | 50 14% | 111 6%


#3 mcrave | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 100% | 1 0% | 1 0% | 3 33%
rush | 1 0% | 41 2% | 1 0% | 43 2%
aggressive | 0 0% | 2 0% | 3 0% | 5 0%
fast expo | 1 0% | 1 0% | 42 17% | 44 16%
macro | 1 0% | 3 0% | 1 0% | 5 0%
defensive | 1 0% | 2 0% | 2 0% | 5 0%
total | 5 20% | 50 2% | 50 14% | 105 9%

Against McRave, the choice is nexus first. McRave must have settled on a macro opening itself.

#4 tscmoo | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 11 27% | 1 0% | 1 0% | 13 23%
rush | 1 0% | 1 0% | 3 0% | 5 0%
aggressive | 1 0% | 11 9% | 1 0% | 13 8%
fast expo | 5 20% | 33 15% | 1 0% | 39 15%
macro | 1 0% | 2 0% | 22 14% | 25 12%
defensive | 1 0% | 2 0% | 22 18% | 25 16%
total | 20 20% | 50 12% | 50 14% | 120 14%

Against the unpredictable Tscmoo, AIUR wavered before settling on an unpredictable set of answers. Notice that not all the strategies are well explored: If you win less than 1 game in 5, then playing an opening 3 times is not enough. If the tournament were much longer, AIUR would likely have scored higher because of its slow but effective learning.

#5 isamind | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 2 0% | 4 0% | 7 0%
rush | 1 100% | 37 19% | 38 8% | 76 14%
aggressive | 0 0% | 1 0% | 3 0% | 4 0%
fast expo | 1 0% | 5 0% | 2 0% | 8 0%
macro | 1 0% | 1 0% | 2 0% | 4 0%
defensive | 1 0% | 4 0% | 1 0% | 6 0%
total | 5 20% | 50 14% | 50 6% | 105 10%

ISAMind may be based on Locutus, but unlike Locutus it is vulnerable to AIUR’s dark templar rushes. It’s a sign that it is not as mature and well tested.

#6 iron | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 5 0% | 7 0%
rush | 1 0% | 26 19% | 2 0% | 29 17%
aggressive | 0 0% | 2 0% | 2 0% | 4 0%
fast expo | 1 0% | 1 0% | 31 10% | 33 9%
macro | 1 0% | 19 5% | 4 0% | 24 4%
defensive | 1 0% | 1 0% | 6 0% | 8 0%
total | 5 0% | 50 12% | 50 6% | 105 9%


#7 zzzkbot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 4 0% | 2 0% | 2 0% | 8 0%
rush | 4 0% | 4 0% | 1 0% | 9 0%
aggressive | 3 0% | 2 0% | 1 0% | 6 0%
fast expo | 3 0% | 3 0% | 1 0% | 7 0%
macro | 7 0% | 5 0% | 4 0% | 16 0%
defensive | 4 0% | 34 29% | 41 12% | 79 19%
total | 25 0% | 50 20% | 50 10% | 125 12%

4 pooler ZZZKBot is of course best countered by a defensive anti-rush strategy. Well, it helped, but the rush is too strong for AIUR to survive reliably. On the 2-player map, AIUR found no answer.

#8 microwave | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 2 0% | 2 0% | 1 0% | 5 0%
rush | 1 0% | 27 7% | 1 0% | 29 7%
aggressive | 1 0% | 1 0% | 1 0% | 3 0%
fast expo | 1 0% | 2 0% | 1 0% | 4 0%
macro | 1 0% | 1 0% | 9 22% | 11 18%
defensive | 18 22% | 17 24% | 36 25% | 71 24%
total | 24 17% | 50 12% | 49 22% | 123 17%

Microwave apparently also played a rushy style versus AIUR. That’s interesting. I think that AIUR’s defensive strategy is good against pressure openings generally, so Microwave was likely playing low-econ but not necessarily fast rushes.

#9 letabot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 1 0% | 3 0%
rush | 1 0% | 1 0% | 3 33% | 5 20%
aggressive | 0 0% | 3 33% | 1 0% | 4 25%
fast expo | 1 0% | 41 49% | 43 49% | 85 48%
macro | 1 100% | 3 33% | 1 0% | 5 40%
defensive | 1 0% | 1 0% | 1 0% | 3 0%
total | 5 20% | 50 44% | 50 44% | 105 43%

Fast expo makes sense against LetaBot’s “wait for it... wait for it... here it comes!” one big smash.

#10 megabot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 2 0% | 3 0% | 6 0%
rush | 2 50% | 4 0% | 38 11% | 44 11%
aggressive | 1 0% | 3 0% | 3 0% | 7 0%
fast expo | 1 0% | 3 0% | 2 0% | 6 0%
macro | 2 0% | 36 28% | 2 0% | 40 25%
defensive | 18 94% | 2 0% | 2 0% | 22 77%
total | 25 72% | 50 20% | 50 8% | 125 26%

Why did MegaBot have so much more trouble on the 2-player map? According to the official per-map result table, MegaBot did fine overall on Destination (the one 2-player map), so its trouble came only against AIUR. Maybe I should watch replays and diagnose it.

#11 ualbertabot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 1 0% | 3 0%
rush | 2 0% | 43 37% | 2 0% | 47 34%
aggressive | 1 0% | 2 0% | 1 0% | 4 0%
fast expo | 1 0% | 2 0% | 1 0% | 4 0%
macro | 18 33% | 1 0% | 1 0% | 20 30%
defensive | 1 0% | 1 0% | 44 16% | 46 15%
total | 24 25% | 50 32% | 50 14% | 124 23%


#12 tyr | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 1 0% | 3 0%
rush | 1 100% | 1 0% | 32 81% | 34 79%
aggressive | 0 0% | 37 46% | 8 75% | 45 51%
fast expo | 1 100% | 3 33% | 3 67% | 7 57%
macro | 1 0% | 6 33% | 3 33% | 10 30%
defensive | 1 0% | 2 0% | 3 33% | 6 17%
total | 5 40% | 50 40% | 50 72% | 105 55%

I suspect that Tyr suffered here because it is a JVM bot and could not write its learning file.

#13 ecgberht | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 100% | 38 89% | 2 50% | 41 88%
rush | 1 100% | 1 0% | 43 67% | 45 67%
aggressive | 0 0% | 4 75% | 1 0% | 5 60%
fast expo | 1 100% | 1 0% | 2 0% | 4 25%
macro | 1 0% | 3 67% | 1 0% | 5 40%
defensive | 1 0% | 3 67% | 1 0% | 5 40%
total | 5 60% | 50 82% | 50 60% | 105 70%


#15 titaniron | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 2 50% | 4 25%
rush | 1 0% | 1 0% | 3 33% | 5 20%
aggressive | 0 0% | 42 79% | 42 88% | 84 83%
fast expo | 1 0% | 1 0% | 1 0% | 3 0%
macro | 1 100% | 2 50% | 1 0% | 4 50%
defensive | 1 100% | 3 0% | 1 0% | 5 20%
total | 5 40% | 50 68% | 50 78% | 105 71%

TitanIron appears to have been too predictable. Notice that the winning strategy on most maps was never tried (without crashing) on the 2-player map. It might have won there too.

#16 ziabot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 16 50% | 2 50% | 1 0% | 19 47%
rush | 1 0% | 2 0% | 1 0% | 4 0%
aggressive | 1 0% | 1 0% | 3 33% | 5 20%
fast expo | 1 0% | 2 50% | 0 0% | 3 33%
macro | 1 0% | 1 0% | 1 0% | 3 0%
defensive | 3 33% | 42 69% | 44 57% | 89 62%
total | 23 39% | 50 62% | 50 52% | 123 54%


#17 steamhammer | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 1 0% | 3 0%
rush | 3 67% | 4 75% | 9 100% | 16 88%
aggressive | 3 100% | 17 100% | 15 100% | 35 100%
fast expo | 2 0% | 2 0% | 2 50% | 6 17%
macro | 1 100% | 10 100% | 1 0% | 12 92%
defensive | 14 100% | 16 100% | 22 100% | 52 100%
total | 24 83% | 50 92% | 50 94% | 124 91%


#18 overkill | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 3 0% | 2 50% | 6 17%
rush | 0 0% | 2 50% | 1 0% | 3 33%
aggressive | 0 0% | 1 0% | 10 60% | 11 55%
fast expo | 1 0% | 3 67% | 0 0% | 4 50%
macro | 0 0% | 0 0% | 0 0% | 0 0%
defensive | 16 88% | 41 90% | 37 78% | 94 85%
total | 18 78% | 50 80% | 50 72% | 118 76%


#19 terranuab | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 100% | 8 88% | 1 0% | 10 80%
rush | 1 100% | 11 100% | 30 100% | 42 100%
aggressive | 0 0% | 4 75% | 2 50% | 6 67%
fast expo | 1 100% | 16 100% | 6 83% | 23 96%
macro | 1 100% | 9 89% | 10 90% | 20 90%
defensive | 1 100% | 2 50% | 1 0% | 4 50%
total | 5 100% | 50 92% | 50 90% | 105 91%


#20 cunybot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 2 50% | 4 75% | 7 57%
rush | 1 100% | 1 0% | 2 0% | 4 25%
aggressive | 0 0% | 4 75% | 13 92% | 17 88%
fast expo | 1 0% | 2 50% | 2 50% | 5 40%
macro | 1 100% | 9 89% | 13 100% | 23 96%
defensive | 1 100% | 32 100% | 15 100% | 48 100%
total | 5 60% | 50 90% | 49 90% | 104 88%


#21 opprimobot | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 100% | 12 100% | 6 83% | 19 95%
rush | 1 100% | 5 100% | 7 100% | 13 100%
aggressive | 0 0% | 7 100% | 4 100% | 11 100%
fast expo | 1 100% | 11 100% | 17 100% | 29 100%
macro | 1 100% | 8 100% | 7 100% | 16 100%
defensive | 1 100% | 7 100% | 9 100% | 17 100%
total | 5 100% | 50 100% | 50 98% | 105 99%


#22 sling | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 1 0% | 1 0% | 3 0%
rush | 1 100% | 5 100% | 2 50% | 8 88%
aggressive | 0 0% | 13 100% | 13 100% | 26 100%
fast expo | 1 100% | 7 100% | 10 100% | 18 100%
macro | 1 100% | 8 100% | 11 100% | 20 100%
defensive | 1 100% | 16 100% | 13 100% | 30 100%
total | 5 80% | 50 98% | 50 96% | 105 96%


#23 srbotone | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 0% | 2 50% | 1 0% | 4 25%
rush | 1 100% | 9 100% | 3 67% | 13 92%
aggressive | 0 0% | 13 100% | 16 100% | 29 100%
fast expo | 1 100% | 10 100% | 8 100% | 19 100%
macro | 1 100% | 7 86% | 6 100% | 14 93%
defensive | 1 100% | 9 100% | 16 100% | 26 100%
total | 5 80% | 50 96% | 50 96% | 105 95%


#24 bonjwa | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 1 100% | 9 100% | 4 75% | 14 93%
rush | 1 100% | 13 100% | 10 100% | 24 100%
aggressive | 0 0% | 7 100% | 10 100% | 17 100%
fast expo | 1 100% | 6 100% | 7 100% | 14 100%
macro | 1 100% | 7 100% | 8 100% | 16 100%
defensive | 1 100% | 8 100% | 11 100% | 20 100%
total | 5 100% | 50 100% | 50 98% | 105 99%


#25 stormbreaker | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 4 75% | 1 0% | 4 75% | 9 67%
rush | 0 0% | 5 80% | 10 100% | 15 93%
aggressive | 0 0% | 18 100% | 7 100% | 25 100%
fast expo | 0 0% | 0 0% | 6 100% | 6 100%
macro | 0 0% | 9 100% | 8 100% | 17 100%
defensive | 20 100% | 17 100% | 15 100% | 52 100%
total | 24 96% | 50 96% | 50 98% | 124 97%


#26 korean | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 7 100% | 2 100% | 10 100% | 19 100%
rush | 0 0% | 7 100% | 8 100% | 15 100%
aggressive | 0 0% | 5 100% | 8 100% | 13 100%
fast expo | 0 0% | 8 100% | 8 100% | 16 100%
macro | 0 0% | 5 100% | 6 100% | 11 100%
defensive | 14 100% | 23 100% | 10 100% | 47 100%
total | 21 100% | 50 100% | 50 100% | 121 100%

Well, if you win every game, learning cannot help.

#27 salsa | 2 | 3 | 4 | total
(each cell: n wins)
cheese | 9 100% | 15 100% | 9 100% | 33 100%
rush | 0 0% | 0 0% | 3 100% | 3 100%
aggressive | 0 0% | 11 100% | 8 100% | 19 100%
fast expo | 0 0% | 0 0% | 4 100% | 4 100%
macro | 0 0% | 8 100% | 7 100% | 15 100%
defensive | 15 100% | 16 100% | 19 100% | 50 100%
total | 24 100% | 50 100% | 50 100% | 124 100%

many ways to defend against SAIDA’s drops in TvT

SAIDA introduces new drop skills in TvT. Bots have not faced drops like this before, and are not adept at defending. SAIDA likes to drop tanks and goliaths with several ships, at the edge of the map, using your mineral line or your buildings for cover. The drops are able to destroy a base if the defender is weak or disorganized.

There are a lot of defensive possibilities; I’ll list some. Make SAIDA pay for those drops!

active defense

• Drop prediction. If you spot moving dropships, you may be able to guess where they are going. You can try to divert wraiths or goliaths to intercept the path, or send defenders to the predicted drop zone.

• Wraiths. Seek out those dropships and make them hurt. If they’re loaded, you can force them to unload prematurely. If the drop already happened, shoot down as many as you can. Even if the dropships escape to friendly territory, you have caused delays and gained time.

• Counter drops. Drop your own units directly on top of the enemy units. SAIDA’s tanks will have to unsiege, and (if you have coordination skills) you can take the opportunity to move in with other units. A good counter drop can turn the enemy drop from a benefit into a cost.

• If all else fails, maneuver tanks to pin down and destroy the dropped units. Don’t let them stay alive and kill more of your stuff. So far, Tscmoo has done the best job of this.
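The drop prediction idea above can be reduced to simple geometry: project the sighted dropship's velocity forward and send defenders to whichever of your bases lies closest to that flight path. A toy sketch in C++, with all names and structures hypothetical (not taken from any actual bot):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec { double x, y; };

// Distance from point p to the ray origin + t*dir, t >= 0.
double distPointToRay(Vec origin, Vec dir, Vec p) {
    double len2 = dir.x * dir.x + dir.y * dir.y;
    if (len2 == 0.0) {
        return std::hypot(p.x - origin.x, p.y - origin.y);
    }
    double t = ((p.x - origin.x) * dir.x + (p.y - origin.y) * dir.y) / len2;
    t = std::max(0.0, t);    // only look ahead of the ship, not behind
    return std::hypot(p.x - (origin.x + t * dir.x),
                      p.y - (origin.y + t * dir.y));
}

// Guess which of our bases lies closest to the dropship's projected
// flight path; that base is where defenders should be sent.
size_t predictDropTarget(Vec shipPos, Vec shipVel,
                         const std::vector<Vec> &bases) {
    size_t best = 0;
    double bestDist = distPointToRay(shipPos, shipVel, bases[0]);
    for (size_t i = 1; i < bases.size(); ++i) {
        double d = distPointToRay(shipPos, shipVel, bases[i]);
        if (d < bestDist) {
            bestDist = d;
            best = i;
        }
    }
    return best;
}
```

Against SAIDA's edge-hugging drops, the same machinery could bias the projection along the map border rather than a straight line.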

exploit predictability

Bots tend to have stereotyped play. SAIDA likes to fly along the edge of the map and drop on the edge where its units cannot be surrounded. An opponent could record the events, notice the obvious pattern, and prepare special defenses. Or at least: Once you’ve seen dropships, set up some turrets to see them coming and restrict their movement. By the time big drops can happen, terran commonly has excess minerals and can afford to throw up a bunch of turrets even if they aren’t efficiently placed.

• Place turrets along the edge where SAIDA may approach. (This is more common in TvP as defense against arbiter recall.) If you have high confidence, you could even detail goliaths to lie in ambush. If SAIDA doesn’t know about the turrets, it will have to fly into range and take damage before it can evade. At worst, you will have seen the drop and can try to predict where it will go next.

• Lay spider mines in potential drop zones, such as behind your mineral line. I don’t know whether SAIDA will drop on the mines and blow up, or scan the mines and drop elsewhere, but it’s to your advantage either way. Laying mines near your mineral line is not as clever against protoss or zerg drops, because protoss and zerg can more easily drag the mines into your workers. Terran doesn’t have a good unit to drag mines with.

stay alive

• Lift the command center and run SCVs. There’s a good chance you can keep the CC alive and quickly restore the base to operation once the drop is cleared. Lifting the command center is a basic terran skill; I find it surprising that Krasi0 doesn’t have it yet.

Steamhammer 2.1 status

My energy is recovering slowly from “blrgh, is it day again?” toward “I wonder what’s for lunch?”

I got a modest amount of work done for Steamhammer 2.1. I fixed 4 different bugs in terran play, and now terran is up to snuff—there was a good one where medics liked to break away and advance on their own. Steamhammer is better than before with barracks units, still klutzy with factory units though vultures may get stuck on each other less often. At least one protoss bug is not as easy and needs actual work to solve. I also feel like fixing scourge, so we’ll see how long it takes. Should be more on the order of days than weeks.

For Steamhammer 2.2, I think the headline feature will be dropping BWTA. That will be a relief. When it looks solid, I can move to BWAPI 4.2.0 and be free of the bugs in 4.1.2. The 4.1.2 bugs effectively make drop more expensive for zerg, which has discouraged me from working on drop skills.

Last year, the end-of-year Steamhammer version 1.4a3 (gotta love that version number) was not only the absolutely strongest Steamhammer of the year, it was also relatively strongest: It showed the best results against other bots. Steamhammer finished higher in SSCAIT than in AIIDE. I’m seeing early signs that it might work out the same this year. This year, the AIIDE version includes a lot of necessary work, but not all of it is polished enough. By the end of the year, the new bugs should be smoothed out and other important problems fixed. I’m expecting a strong chance that Steamhammer will again finish higher in SSCAIT than in AIIDE. I think I am being taught a lesson in good tournament preparation.

Still coming soon-ish: CIG 2018 analysis.

new bot SAIDA

I think we have a new champion.

New terran SAIDA has been playing extremely impressive games on SSCAIT, scoring 10-0 as I write. In games so far, it breaks down both Krasi0 and Locutus with a strategy like this: Stay home on 2 bases and build up a strong tank force, move out and establish a contain as close to the enemy natural as possible, use the space this gives to reduce other enemy bases around the map with vulture raids, small tank attacks, and drops with multiple dropships (a unique skill for terran bots). Based on its debug drawing, it seems to have a sophisticated understanding of what its enemy is doing, and from the way it varies its play against different opponents, it makes use of that understanding. It scouts carefully. It can place tanks on high ground appropriately. Its drop positioning is strong. When rushed, it places a bunker in a strong rear position and pops marines in and out at the right times. When PurpleWave tried forward 2 gates, SAIDA scouted it and correctly focussed down the pylon first, then took its vultures away from the gates to hit the protoss main, perfectly done. The bot has a lot of powerful and rare skills.

SAIDA appears to play with primarily factory units against all races. If so, it may be vulnerable against the strongest zergs. Or maybe not, we haven’t seen the games yet! Looking into the binary, I see that SAIDA knows names for a wide variety of strategies by all races. If it also knows counters for those strategies, which I think we can expect, then it is prepared for anything it is likely to see. For example, here are the names it knows for zerg:

opening | “main” (current) strategy
Zerg_4_Drone | Zerg_main_zergling
Zerg_5_Drone | Zerg_main_maybe_mutal
Zerg_9_Drone | Zerg_main_hydra
Zerg_9_Hat | Zerg_main_lurker
Zerg_9_OverPool | Zerg_main_mutal
Zerg_9_Balup | Zerg_main_fast_mutal
Zerg_12_Pool | Zerg_main_hydra_mutal
Zerg_12_Hat | Zerg_main_queen_hydra
Zerg_12_Ap |
Zerg_sunken_rush |
Zerg_4_Drone_Real |

I’m not sure what all of the names mean, but most are obvious. Steamhammer, with its huge opening repertoire, knows openings which are technically not on this list. But for practical purposes, I expect this should cover everything before hive tech.

I can see flaws in SAIDA’s play: It makes too many turrets, its tank positioning versus protoss is too compact and vulnerable, it makes micro errors. But the flaws are not easy for other bots to exploit.

This bot must be the product of long development. The rest of us have work to do!

Steamhammer 2.0 download

Here is the Steamhammer 2.0 download link. It’s the same zip file I submitted for AIIDE 2018, so unlike an SSCAIT upload it doesn’t include BWAPI.

Steamhammer 2.0 download with source and binary.

It is meant to play zerg only. The configuration file contains nothing for terran or protoss.

You can fill in openings yourself and it will play, but my testing shows unacceptable bugs for both terran and protoss. For example, vultures often get stuck on targets as if married to them, unable to switch away until one or the other dies (this bug allows no divorce for any cause). In zealots versus zerglings, the zealots like to move back and forth and give the zerglings free hits. “It’s only fair, we’re so much taller and stronger.”

Steamhammer 2.1 will come out when the bot again plays all races acceptably. I’m not sure how long it will take. After the big effort for AIIDE, my energy is low and I need a break. I haven’t even updated Steamhammer’s web page yet.

Steamhammer 2.0 change list

Some parts of the change list are already posted; think of those posts as part of this one (ahem, “included here by reference are....”). Here is the rest. You may notice that it is slightly long.

code changes

• More stuff in UnitUtil: Cooldown and FramesToReachAttackRange() functions added, and used in micro. GetWeapon() functions reworked for simplicity. Damage-per-frame functions added to compare weapon strengths. IsCompletedResourceDepot() added, factoring out code that was repeated in several places.
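As an illustration of the damage-per-frame idea, a weapon's average output is its damage per hit, times attacks per volley, divided by its cooldown. This is a hypothetical sketch, not Steamhammer's actual UnitUtil code; real calculations also have to handle damage types, armor, and upgrades:

```cpp
#include <cassert>
#include <cmath>

// Compare weapon strengths by average damage per frame.
// (Illustrative signature only.)
double damagePerFrame(double damagePerHit, int attacksPerVolley,
                      int cooldownFrames) {
    if (cooldownFrames <= 0) {
        return 0.0;    // no usable weapon
    }
    return damagePerHit * attacksPerVolley / cooldownFrames;
}
```

With Brood War's published stats, a zergling (5 damage, 8 frame cooldown) out-damages a marine (6 damage, 15 frame cooldown) per frame, which is the kind of comparison such a function makes cheap.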

• UnitInfo::estimateHealth() estimates the health of a unit which may not have been seen for a while, accounting for protoss shield regeneration and zerg hp regeneration. (Terran medic healing and SCV repair are not easy to predict.)
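A minimal sketch of the estimation idea, with hypothetical names and the regeneration rate left as a parameter (the commonly quoted values are roughly 4/256 hp per frame for zerg regeneration and 7/256 per frame for protoss shields; treat those as approximations, not Steamhammer's exact constants):

```cpp
#include <algorithm>
#include <cassert>

// Estimate the current hit points of a unit last seen some frames ago,
// assuming linear regeneration capped at maximum hp. (Illustrative sketch.)
int estimateHealth(int lastSeenHP, int maxHP, int framesSinceSeen,
                   double regenPerFrame) {
    int estimated = lastSeenHP + int(framesSinceSeen * regenPerFrame);
    return std::min(estimated, maxHP);
}
```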

• InformationManager::enemyHasSiegeMode() added, and used in tactical calculations.

• In GameCommander, I sorted the manager calls so that managers which gather information are called first, and managers which use the information are called later. They had gotten jumbled over time. The main effect is that Steamhammer reacts 1 frame faster after discovering the race of a random opponent, a crucial difference that I estimate will, over Steamhammer’s lifetime, save approximately zero games.

• I renamed SquadData::addSquad() to createSquad(), since that’s what it does, and reworked it for simplicity. I removed the declaration of SquadData::clearSquad(), which was not implemented.

• Some calls in WorkerManager iterate through bases instead of through units to find base-related information. It’s faster and simpler (but I did have to spend time to fix a bug that I introduced in the process).

• More unnecessary includes removed, bringing a negligible improvement in compile times.

• I removed the configuration option Config::Micro::UnitNearEnemyRadius. The value is now chosen dynamically in code.

• In the game info display (turned on with Config::Debug::DrawGameInfo and drawn in the upper left), most labels were not needed because the meaning is obvious. I removed the labels of items other than “Opp Plan”. Less clutter is better.

• The TimerManager display, turned on with Config::Debug::DrawModuleTimers, had grown disorganized and probably incorrect. I straightened it out. I also improved comments in the code to prevent future disorganization.

opponent model

The opponent model has always distinguished between opponents that appear to follow the same plan every game, and opponents that vary their play. Until this version, it used the information only in a minor way. Now it selects openings using an entirely different method for multi-strategy enemies. If you play the same every game, Steamhammer will try to find the best single response, exploiting your predictability. If you mix up your play to confuse Steamhammer, then Steamhammer will mix up its play to confuse you; you get minimal predictability to exploit. We’ll see how well it works, but I’m expecting it to make a big difference in a long tournament like AIIDE.

• Against a single-strategy opponent, Steamhammer sticks with the variant of epsilon-greedy that it has always used: With probability epsilon, explore randomly; otherwise, choose the best known opening according to a weighted win rate that tries to take the maps into account. It’s not strictly classic epsilon-greedy, though, because epsilon varies according to the loss rate: If we are losing a lot, explore more often and play the best known opening less often. (It’s an adaptation to having more openings available than can be tried.) I have modified it so that the exploration rate increases more rapidly with the loss rate, because I found it was too often repeating an opening that won 1 game out of many, instead of looking for a better choice.
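
The loss-rate-dependent epsilon can be sketched in a few lines. This is a hypothetical illustration of the shape of the idea, not Steamhammer's actual formula or constants:

```cpp
#include <algorithm>

// Hypothetical sketch: epsilon grows with the loss rate, so a losing
// record pushes the bot to explore instead of repeating a rare win.
// The base rate and the quadratic ramp are illustrative constants.
double explorationRate(int wins, int losses)
{
    const int games = wins + losses;
    if (games == 0) return 1.0;             // nothing known: always explore
    const double lossRate = double(losses) / games;
    const double baseEpsilon = 0.1;
    // Ramp epsilon up steeply as losses accumulate.
    return std::min(1.0, baseEpsilon + lossRate * lossRate);
}
```

With probability epsilon the chooser then picks a random opening to explore; otherwise it plays the best known opening by weighted win rate.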

• Against a multi-strategy opponent, Steamhammer starts with the same weighted-win-rate calculation that it uses in the single-strategy case. Each opening it has already played is an option, and has a measured win rate. Exploring a new opening is an option, and its win rate is the mean win rate of openings that have been tried, its best estimate of how likely a newly explored opening is to win—but with a hard floor, so that the exploration rate never goes too close to zero. It randomly chooses among the possibilities, giving each a probability proportional to the square of its win rate. Squaring the rates gives openings with win rates near zero little chance of being chosen—unless most openings have win rates near zero. If all win rates are zero, because there are no wins yet, the hard floor on exploration means that Steamhammer explores every time. If the win rates are high for some openings and low for many others, exploration will be rare and Steamhammer will most often randomly choose one of the better openings. It’s ad hoc but makes a certain amount of sense.
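
The weight calculation in the multi-strategy case can be sketched as follows. The floor constant and names are illustrative assumptions, not Steamhammer's actual values:

```cpp
#include <vector>
#include <numeric>

// Hypothetical sketch of the multi-strategy chooser's weights: each tried
// opening gets weight winRate^2, and one extra option means "explore a new
// opening", whose rate is the mean of the tried rates with a hard floor so
// exploration never dies out. The floor value is illustrative.
std::vector<double> openingWeights(const std::vector<double> & winRates)
{
    const double exploreFloor = 0.1;
    double mean = 0.0;
    if (!winRates.empty()) {
        mean = std::accumulate(winRates.begin(), winRates.end(), 0.0) / winRates.size();
    }
    const double exploreRate = std::max(exploreFloor, mean);

    std::vector<double> weights;
    for (double r : winRates) {
        weights.push_back(r * r);                  // squared win rate
    }
    weights.push_back(exploreRate * exploreRate);  // last entry = explore
    return weights;
}
```

A random draw proportional to these weights then picks the opening. When every tried opening has win rate zero, only the explore entry carries weight, which reproduces the "explore every time" behavior described above.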

macro

• Mineral locking. Steamhammer’s implementation follows Locutus in outline, but is different in detail. Mineral locking helps mainly in macro games, which Steamhammer is now able to play better because of the squad changes, so this was an opportune time to add it.
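
The core of mineral locking is to bind each worker to one specific patch, with a cap of two workers per patch, instead of letting the game engine shuffle workers between patches. A hypothetical sketch of that bookkeeping, with integer ids standing in for BWAPI units (names and structure are illustrative, not Steamhammer's or Locutus's actual code):

```cpp
#include <map>

// Hypothetical sketch: bind workers to patches, at most two per patch.
// The locked worker keeps returning to its own patch every trip.
class MineralLock {
    std::map<int, int> workerToPatch;   // worker id -> patch id
    std::map<int, int> patchCount;      // patch id -> assigned workers

public:
    // Try to lock a worker to a patch; fail if the patch is saturated.
    bool assign(int worker, int patch)
    {
        if (patchCount[patch] >= 2) return false;   // 2 workers per patch max
        workerToPatch[worker] = patch;
        ++patchCount[patch];
        return true;
    }

    // The patch this worker is locked to, or -1 if it has none.
    int patchOf(int worker) const
    {
        auto it = workerToPatch.find(worker);
        return it == workerToPatch.end() ? -1 : it->second;
    }
};
```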

construction

• Prefer to expand to bases near the edge of the map. Bases near the edge are usually (not always) more protected than bases near the center of the map. In practice, Steamhammer now mostly avoids taking the risky center bases on Heartbreak Ridge and other maps, and the exposed mineral-only bases on Python, at least until later in the game. I had to tune it carefully so that the natural of the 1 o’clock base on Tau Cross, far from the edge, is still preferred over the exposed 3rd base at the edge. Someday I’ll implement map analysis and figure out a real measure of exposure to attack.
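
The edge preference reduces to scoring each candidate base by its distance to the nearest map edge. A hypothetical sketch in tile coordinates (the function and its inputs are illustrative; Steamhammer's real scoring combines this with other factors like distance from the main):

```cpp
#include <algorithm>

// Hypothetical sketch: distance from a tile to the nearest map edge.
// Smaller is "safer" under the edge-preference heuristic.
int distanceToEdge(int x, int y, int mapWidth, int mapHeight)
{
    return std::min(std::min(x, mapWidth - 1 - x),
                    std::min(y, mapHeight - 1 - y));
}
```

On its own this would always pick the most tucked-away base, which is why it needs careful weighting against travel distance, as in the Tau Cross tuning described above.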

• Bug fix: Don’t try to build at a location which a worker cannot reach. It never happened, as far as I know, but there was a mistake in the code.

tactics

• In the overview I said that workers are not transferred to a base that is in danger, but there is more to it than that. Each base keeps track of whether it is being attacked severely enough that workers appear to be in danger; the estimate is made by CombatCommander::updateBaseDefenseSquads(), which I wrote about under base defense, and can be accessed via each base’s Base object with base->inWorkerDanger(). If a worker is idle, meaning it is due to be assigned a new task if one is available, then it is not assigned a task at a base where workers are in danger. A worker is normally made idle after completing any task: the worker was just created, gas collection is being turned off so the worker no longer needs to mine gas, the worker is finished defending itself and should be put back to work, and other cases. Workers are not only not transferred to an endangered base, they are often (not always) transferred away from the endangered base because they had to defend themselves, or were pulled for defense, or otherwise changed tasks. It’s hardly perfect, but it saves workers and greatly improves Steamhammer’s resilience to attack. It was a critical improvement. Related weaknesses remain: Steamhammer may still try to build at the endangered base, or transfer drones to a distant base through the enemy army.
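
The idle-worker rule itself is a small filter over the candidate bases. A hypothetical sketch, with an illustrative struct in place of Steamhammer's Base objects (the names here are not the actual code, only the shape of the check):

```cpp
#include <vector>

// Hypothetical stand-in for the per-base data the worker assignment reads.
struct BaseInfo {
    int id;
    bool inWorkerDanger;   // set elsewhere by the base defense logic
    int workersWanted;     // how many more workers this base can use
};

// Return the id of a base an idle worker may be sent to, or -1 if none
// is safe. Bases flagged as dangerous for workers are simply skipped.
int chooseMiningBase(const std::vector<BaseInfo> & bases)
{
    for (const BaseInfo & base : bases) {
        if (!base.inWorkerDanger && base.workersWanted > 0) {
            return base.id;
        }
    }
    return -1;   // every base is endangered or full; leave the worker idle
}
```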

• Fixed CombatCommander bugs in deciding which enemy base to attack.

• Rules for recognizing enemy cloaking and assigning detectors to our squads are slightly improved. Steamhammer pays attention to whether it has cloaked units itself and needs to catch enemy observers. This allows overlords to stay safer in some game situations: We don’t have lurkers, therefore we don’t have a strong need to hunt observers, therefore overlords can stay home.

• In feeding units to the combat sim, an enemy building which was last seen uncompleted is entered as completed if it is currently out of sight. It is sometimes an improvement, sometimes a mistake. It would be better to track the estimated completion time and use that (a number of bots do).

micro

• Previous versions changed the former stateless module Micro into an object with state, but did nothing new. This version updates the state with each of our units’ orders and a little more information, but still doesn’t use the state for anything. It’s all prepared for serious work, though.

• Enemy unit movement prediction is smarter. It takes distance into account. Also, prediction is used differently in different cases to get better results in practical situations. Mutalisks, wraiths, and vultures now use prediction; as instant-acceleration units they are kited by a different routine than other units, which didn’t use prediction until now.

• Kite only units which the enemy has targeted. If nobody wants to shoot you, you don’t have to step back from the shooting. This especially helps hydralisks, which fire more slowly when kited and tend to get in each other’s way.

• Some bits of micro explicitly take latency into account in their calculations, especially kiting. It’s more accurate.

• Don’t chase an enemy if we’re predicted to be unable to catch it. CanCatchUnit() (defined in Common.cpp) figures it out. It makes less difference than I expected.
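
The catch-up test reduces to comparing speeds and range. A hypothetical sketch of the kind of check CanCatchUnit() performs (the actual function in Common.cpp may weigh more factors; this illustrates only the core logic, with illustrative names):

```cpp
// Hypothetical sketch of a catch-up test: a chase is hopeless when the
// target is at least as fast and running away, unless we are already in
// attack range. Speeds are in pixels per frame, distances in pixels.
bool canCatch(double chaserSpeed, double targetSpeed,
              double distance, double attackRange, bool targetFleeing)
{
    if (distance <= attackRange) return true;   // already in range
    if (!targetFleeing) return true;            // it will come to us
    return chaserSpeed > targetSpeed;           // must out-run it to close
}
```

A zergling (fast) chasing a fleeing vulture (faster) fails this test, so the zergling can be retargeted instead of trailing uselessly.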

• Targeting priority: A ghost which is nuking is the highest priority target. An enemy defiler is also a high priority. These two were oversights. There are other targeting tweaks, for example to reduce cases of attacking the pylon when the cannon behind it is firing; it’s not fully successful.

zerg

• If we are short of mutas or lurkers (by a simple hardcoded count), then do not substitute a drone for a muta or lurker, no matter how much we may want drones. This was the biggest cause of delayed tech switches, and fixing it makes strategic play crisper at key times. This was one of the most important fixes in version 2.0.

• Late in the game, add extractors willy-nilly everywhere that they are possible. Steamhammer was too often gas-starved in the late game, too slow to get more gas when it was needed. This is also an important fix.

• Limit scourge more stringently. Sometimes Steamhammer makes so much scourge that it has no gas for anything else, causing macro problems as it delays production to get more gas.

• The building manager is much more cautious about turning a failed expansion into a macro hatchery. It only orders the change if it appears that we actually need a macro hatchery.

• Build a queen’s nest sooner in some situations. This is to speed hive tech when it is needed. It doesn’t make a big difference.

• If the enemy has too many corsairs or valkyries, get air carapace. This gets air armor for overlords even if we are not making any other air units.

• Favor guardians more, especially versus mass cannons. Past Steamhammer versions reduced guardian use by too much, and this starts to correct it.

• Emergency reaction: If we’re dangerously short on drones, don’t spend one on a building. Oops.

• Emergency reaction: If we have no bases but do have hatcheries, then make sure that the drone limit is at least 3. Having “no bases” means that no hatchery is at a predefined base location; we might still have hatcheries that can mine. Steamhammer formerly thought that, without bases, it had no need for drones. That was OK if it had drones already, not if they were all dead. Now Steamhammer has a chance to recover, provided the enemy is also prostrate after a base trade.

• Don’t automatically make a sunken in reaction to a proxy. It was often an overreaction.

• Don’t automatically get an early sunken versus protoss 2 gate (the sunken was usually too early) or against zerg 2 hatch (it was often unnecessary).

• When planning a morphed unit type (such as a lurker), don’t count it as using up a larva. This minor bookkeeping fix should occasionally make for better production decisions.

• Cancel grossly excess overlords even in the opening book. This may help if an emergency situation comes up in the opening, but it is mainly meant to mitigate bugs which cause production loops. No simple production loops are possible, but there seem to still be some complex loops where unrelated emergency reactions fire, and each prevents the other from recognizing that it has already taken action. It’s rare, though.

• Don’t let rebuilding the spawning pool cause a production jam, and don’t allow multiple copies of the spawning pool. It was a rare but deadly bug in production unjamming.

• Fixed a crash if a hydralisk den, lair, or spire was dropped in the opening (in reaction to an emergency). It was that specific.

• Fixed a rare bug that could prevent gas from being retaken after it was lost.

• Fixed a bug in defensive reactions that could request zerglings when there was no spawning pool.

• Fixed an unimportant bug in deciding to make a hive. It had no real consequences.

zerg openings

• Fixed a number of openings that had suffered bit decay: They were broken or mistuned due to code changes, such as queue reordering and mineral locking.

• Added new turtle openings designed to exploit particular enemy strategies: 11HatchTurtleHydra, 11HatchTurtleLurker, 12HatchTurtle, ZvZ_OverpoolTurtle. These are meant to stop specific rushes and leave Steamhammer in a sound position.

• Finished up and optimized the anti-forge-expand macro openings ZvP_3BaseSpire+Den (a good success) and 4HatchBeforeGas (not as effective). Steamhammer is finally able to play macro games well, so this was important. To play these openings truly well, though, Steamhammer needs greater ability to understand and react to the enemy’s timings.

• I fiddled with opening probabilities, mainly to continue to make openings more equally likely so that the opponent model can explore more efficiently, but also to include the new openings and to adjust to Steamhammer’s new strengths and weaknesses.

• I split the opening 9PoolSpeed into 2 variants. One variant keeps the same name and makes fewer zerglings (hit them by surprise, then transition to a normal game), the other is called 9PoolSpeedAllIn and makes more zerglings (hit them hard and maintain pressure). Both are more effective in their ways than the former compromise opening.

• A few ZvZ openings make 1 drone fewer, to keep zergling numbers as high as possible.

• Some other changes.