
Overkill AIIDE 2017 updates

PurpleWaveJadien pointed out in a comment that Sijia Xu had updated Overkill’s repository from last year’s version to the AIIDE 2017 version. There are extensive changes. The bot looks substantially more capable.

Here’s what caught my eye. This is based on a quick read through a long list of diffs, so I may have missed or misunderstood a lot.

• New machine learning library with a couple different learning methods.

• Support for lurkers, scourge, guardians, devourers, and ultralisks, all the regular zerg combat units. Only the spellcasters, queens and defilers, are omitted. Last year it supported only zerglings, hydralisks, and mutalisks.

• Major changes in tactical calculations. Well, there would have to be, to support the new units.

• Changes to building construction. I’m not sure what the idea is here.

• It detects production deadlocks, to avoid freezes. The method is written to be simple and general-purpose rather than extensive and tweakable like Steamhammer’s, so it may be worth a look; see ProductionManager::checkProductionDeadLock() and the first sketch after this list.

• Scouting seems to be completely rewritten. Overlords scout around using an influence map (see the second sketch after this list). It looks like one zergling can be singled out as a scout.

• Overkill 2016 learned a model of when to make zerglings, hydralisks, or mutalisks. The actions in this version include all the combat units except guardians and devourers, plus sunken colonies, tech buildings to construct, upgrades and tech to research, expanding, attacking versus harassing with mutalisks, and doing nothing. So it not only has a wider scope for strategy with more unit types, it looks as though it learns more of its strategy. I see declarations for 4 neural networks, which apparently cooperate to do different parts of the job.

• It chooses from among the same 3 openings as before: 9 pool, 10 hatch, 12 hatch. The details of the build orders have been revised.
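
To make the deadlock item concrete, here is a minimal sketch of the kind of check I mean, based on my own assumptions rather than Overkill’s actual code. The ProductionState struct, its field names, and the stall threshold are hypothetical stand-ins for whatever the bot tracks internally.

// Hypothetical snapshot of the head of the production queue.
// All names here are stand-ins, not Overkill’s real data structures.
struct ProductionState {
    int  mineralsNeeded;     // cost of the queued item
    int  gasNeeded;
    int  mineralsAvailable;  // resources on hand right now
    int  gasAvailable;
    bool prerequisiteAlive;  // e.g. a hydralisk den exists for hydralisks
    bool larvaAvailable;     // zerg units need a larva to morph from
    int  framesStalled;      // how long the head item has been waiting
};

// Return true if the head of the queue looks unproducible, so the queue
// should be cleared or replanned instead of freezing the bot forever.
bool isProductionDeadlocked(const ProductionState& s,
                            int stallLimit = 24 * 30)   // ~30 seconds of game time
{
    // A destroyed prerequisite is a hard deadlock for the queued item.
    if (!s.prerequisiteAlive)
        return true;

    // Waiting a long time without being able to pay suggests that income
    // has collapsed (for example, the drones all died); give up on the item.
    bool affordable = s.mineralsAvailable >= s.mineralsNeeded &&
                      s.gasAvailable      >= s.gasNeeded;
    if (!affordable && s.framesStalled > stallLimit)
        return true;

    // Zerg-specific: a long wait with no larva means no hatchery can supply one.
    if (!s.larvaAvailable && s.framesStalled > stallLimit)
        return true;

    return false;
}

In the same spirit, here is a toy version of influence-map scouting, assuming (my guess, not anything from the source) that the map scores tiles by how long they have gone unseen and sends the overlord toward the best tile that is not known to be dangerous. The InfluenceMap struct and its fields are made up for illustration.

#include <limits>
#include <vector>

// Hypothetical scouting grid: lastSeenFrame[y][x] holds the frame at which
// the tile was last visible to us, or -1 if it has never been seen.
struct InfluenceMap {
    int width;
    int height;
    std::vector<std::vector<int>>  lastSeenFrame;
    std::vector<std::vector<bool>> dangerous;    // e.g. near known anti-air

    InfluenceMap(int w, int h)
        : width(w)
        , height(h)
        , lastSeenFrame(h, std::vector<int>(w, -1))
        , dangerous(h, std::vector<bool>(w, false))
    {}
};

// Choose a scouting target: the safe tile that has gone unseen the longest.
// Returns false if every tile is marked dangerous.
bool pickScoutTile(const InfluenceMap& map, int currentFrame, int& outX, int& outY)
{
    long bestScore = -1;
    bool found = false;
    for (int y = 0; y < map.height; ++y) {
        for (int x = 0; x < map.width; ++x) {
            if (map.dangerous[y][x])
                continue;
            int seen = map.lastSeenFrame[y][x];
            long score = (seen < 0)
                ? std::numeric_limits<long>::max()  // never seen: top priority
                : long(currentFrame - seen);        // staleness in frames
            if (score > bestScore) {
                bestScore = score;
                outX = x;
                outY = y;
                found = true;
            }
        }
    }
    return found;
}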

How does it play? Well, I only looked at the source, and not too closely. Overkill first got a learning model in the CIG 2016 tournament. Here are its historical tournament results:

tournament            place   win rate
CIG 2015                3       81%
AIIDE 2015              3       81%
CIG 2016 qualifier      4       71%
CIG 2016 final          5       51%
AIIDE 2016              7       62%
CIG 2017                7       60%

The CIG 2016 qualifier is a better comparison to the other tournaments than the final, since it includes all entrants. The CIG 2017 entry is probably not much different from the AIIDE 2017 one. [Update: Not so. CIG says that the CIG 2017 entry was the same as the CIG 2016 version. See comments.] It seems that Overkill’s win rate has been falling as it learns more on its own. Is that a sign of how hard it is to do strategy learning? I would like to see it play before I draw conclusions!

Next: I’ll look at Overkill’s CIG 2017 replays and see what I can see.

Comments

PurpleWaveJadien:

Overkill participated in CIG 2017 but it wasn't updated. It's listed under "Last year bot" -- you may have to build it and run some games to see the difference.

Jay Scott:

You’re right, that is how it is listed. So it should be the same Overkill as in CIG 2016, with the first-cut learning model. In that case we have no clues about whether the 2017 learning model is better. The repository does not include offline learning data which the bot needs to play up to its potential, so building it and playing games probably doesn’t tell us how strong this version truly is.

krasi0:

Well, it's no secret that proper (machine) learning at this complexity level is hard. I am also speaking from experience here :)
