Overkill AIIDE 2017 updates
PurpleWaveJadien pointed out in a comment that Sijia Xu had updated Overkill’s repository from last year’s version to the AIIDE 2017 version of Overkill. There are extensive changes, and the bot looks substantially more capable.
Here’s what caught my eye. This is based on a quick read through a long list of diffs, so I may have missed or misunderstood a lot.
• New machine learning library with a couple different learning methods.
• Support for lurkers, scourge, guardians, devourers, and ultralisks, all the regular zerg combat units; only the spell units, queens and defilers, are omitted. Last year it supported only zerglings, hydralisks, and mutalisks.
• Major changes in tactical calculations. Well, there would have to be to support new units.
• Changes to building construction. I’m not sure what the idea is here.
• It detects production deadlocks, to avoid freezes. The method is written to be simple and general-purpose rather than extensive and tweakable like Steamhammer’s, so it may be worth a look. See ProductionManager::checkProductionDeadLock().
• Scouting seems to be completely rewritten. Overlords scout around using an influence map, and it looks like a single zergling can be designated as a scout.
• Overkill 2016 learned a model of when to make zerglings, hydralisks, or mutalisks. The actions in this Overkill version include making every combat unit except guardians and devourers, building sunken colonies and tech buildings, researching upgrades and tech, expanding, attacking versus harassing with mutalisks, and doing nothing. So it not only has a wider strategic scope with more unit types, it looks as though it learns more of its strategy. I see declarations for 4 neural networks, which apparently cooperate to do different parts of the job.
• It chooses from among the same 3 openings as before: 9 pool, 10 hatch, 12 hatch. The details of the build orders have been revised.
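The deadlock check mentioned above can be illustrated with a minimal sketch. This is not Overkill’s actual code; the struct and field names are my own assumptions about what a simple, general-purpose check might look like: declare a deadlock when the head of the production queue has stalled for a long time and the stall cannot resolve itself.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch in the spirit of ProductionManager::checkProductionDeadLock().
// All names here are illustrative assumptions, not Overkill's real interface.
struct QueuedItem {
    std::string name;
    int mineralCost;
    int gasCost;
    bool prerequisitesMet;   // e.g. the required tech building exists
    bool producerAvailable;  // e.g. a larva or idle production building exists
};

// A deadlock is declared when the queue head has stalled past a threshold and
// no self-resolution is possible: prerequisites are missing, no producer can
// act, or the gas cost can never be paid because no refinery exists.
bool isProductionDeadlocked(const std::vector<QueuedItem>& queue,
                            int framesSinceLastProduction,
                            bool haveRefinery,
                            int stallThreshold = 24 * 30)  // roughly 30 seconds
{
    if (queue.empty() || framesSinceLastProduction < stallThreshold)
        return false;

    const QueuedItem& head = queue.front();
    if (!head.prerequisitesMet) return true;
    if (!head.producerAvailable) return true;
    if (head.gasCost > 0 && !haveRefinery) return true;  // gas can never accumulate
    return false;
}
```

The recovery action (clearing or rewriting the queue) is a separate decision; the appeal of a check like this is that it works for any unit type without per-case tuning.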
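For the influence-map scouting, here is a minimal sketch of the general technique. The scoring formula is my assumption, not Overkill’s: prefer tiles that have not been seen for a long time, penalized by enemy threat so overlords avoid anti-air.

```cpp
#include <climits>
#include <utility>
#include <vector>

// Illustrative influence map for overlord scouting. Field names and the
// staleness-minus-threat score are assumptions for this sketch.
struct ScoutMap {
    int width, height;
    std::vector<int> lastSeenFrame;  // per tile: frame when last visible
    std::vector<int> threat;         // per tile: enemy anti-air influence

    ScoutMap(int w, int h)
        : width(w), height(h), lastSeenFrame(w * h, 0), threat(w * h, 0) {}

    int index(int x, int y) const { return y * width + x; }

    // Pick the tile with the highest score: staleness minus weighted threat.
    std::pair<int, int> bestScoutTile(int currentFrame, int threatWeight = 10) const {
        std::pair<int, int> best{0, 0};
        long bestScore = LONG_MIN;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                long staleness = currentFrame - lastSeenFrame[index(x, y)];
                long score = staleness - threatWeight * (long)threat[index(x, y)];
                if (score > bestScore) { bestScore = score; best = {x, y}; }
            }
        }
        return best;
    }
};
```

Each frame the bot would update `lastSeenFrame` for visible tiles and `threat` from known enemy units, then send each scout toward its best tile; the map naturally rotates scouts among stale areas.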
How does it play? Well, I only looked at the source, and not too closely. Overkill first got a learning model in the CIG 2016 tournament. Here are its historical tournament results:
| tournament | place | win rate |
|---|---|---|
| CIG 2015 | 3 | 81% |
| AIIDE 2015 | 3 | 81% |
| CIG 2016 qualifier | 4 | 71% |
| CIG 2016 final | 5 | 51% |
| AIIDE 2016 | 7 | 62% |
| CIG 2017 | 7 | 60% |
The CIG 2016 qualifier is a better comparison to the other tournaments than the final, since it includes all entrants. The CIG 2017 entry is probably not much different from the AIIDE 2017 one. [Update: Not so. CIG says that the CIG 2017 was the same as the CIG 2016 version. See comments.] It seems that Overkill’s win rate has been falling as it learns more on its own. Is that a sign of how hard it is to do strategy learning? I would like to see it play before I draw conclusions!
Next: I’ll look at Overkill’s CIG 2017 replays and see what I can see.
Comments
PurpleWaveJadien, Jay Scott, krasi0