
AIIDE 2018 results announced

AIIDE 2018 results are out, and they’re exciting!

  1. SAIDA
  2. CherryPi
  3. CSE
  4. BlueBlueSky
  5. Locutus

SAIDA is first and Locutus is only fifth! #1 SAIDA scored 96% overall, 83% against #2 CherryPi, and higher against every other opponent, so the winner was clear. That sounds to me like a fair claim to be world champion. All the top bots became much stronger this year. Look how far down #9 Iron has been pushed.

Zergs #10 ZZZKBot (last year’s winner), #11 Steamhammer, #12 Microwave, #13 LastOrder all scored about 50%. That looks miserable, but it put them right behind #9 Iron and in the upper half of the table—there were 27 participants total. The top bots wiped the floor with all others. Former contenders #19 UAlbertaBot, #20 XIMP, and #22 AIUR scored in the vicinity of 33%.

I’m pleased that #11 Steamhammer finished about as high as it did last year. I have more or less kept up with progress—except progress at the summit.

Stand by for tables and analysis. I’ll be looking at the top bots to see what makes them tick. I don’t know anything about #3 CSE and #4 BlueBlueSky so that should be interesting.


Comments

Bytekeeper on :

BlueSky is a Locutus fork, I bet CSE is too. In a few years you can call it "Steamhammer Fork Tournament" ;)

Chanhyeon Bae on :

I am SAIDA team leader.
Thank you for your good review of the SAIDA team. We hope that our participation will accelerate StarCraft AI research. Facebook and Samsung, as well as other IT companies, are challenging themselves and hoping to compete in good faith.

Jay Scott on :

Yes, I also hope for faster progress! The survey answers of the CherryPi team imply that they were also aiming for #1.

Bruce on :

CSE and BlueBlueSky are also both Locutus forks, which I had suspected based on them being Protoss bots on 4.1.2.

I've pushed the sources of all of the AIIDE Locutus forks to branches on my git repository to make it easier to compare them:

https://github.com/bmnielsen/Locutus/compare/cse_aiide2018
https://github.com/bmnielsen/Locutus/compare/bluebluesky_aiide2018
https://github.com/bmnielsen/Locutus/compare/isamind_aiide2018
https://github.com/bmnielsen/Locutus/compare/daqin_aiide2018

I feel it is unfortunate that there ended up being 5 very similar bots in the tournament, as it has pushed the rank and scores down for a lot of bots that didn't deserve it. With very few exceptions (mainly McRave and to a lesser extent CherryPi), they performed identically against all opponents outside the Locutus "family". So I would think of the "true" rankings as being 1. SAIDA 2. CherryPi 3. The Locutuses 4. McRave 5. Iron etc., with the Locutuses having their own internal family tournament.

Related to this, I have decided to modify my license for future source releases to require my permission before entering a fork into any tournament. I'm not sure exactly what criteria I will use to decide whether or not to grant permission, but if it is a Protoss fork I want it to be different enough to have its own "personality" (which I think is the case for most if not all Steamhammer / UAB forks). I want to encourage collaboration in the spirit of SH / UAB, but five clones in a tournament is not what I had in mind. This obviously can't be applied retroactively, so the AIIDE sources can be freely used.

Congrats on a strong performance!

Jay Scott on :

The surveys for the top 3 are included in the results. CSE’s survey says it is 6 weeks of work. :-/

Last year the performance of direct Steamhammer forks was more varied, even though some also had little total work.

McRave on :

It's very crushing to see someone work 6 weeks on a bot and smash people who worked for months. We're quickly reaching Episode 3: Revenge of the Sith; who will that be?

MarcoDBAA on :

Totally agree with everything.

In general, top bot authors also shouldn't release new source code shortly before a tournament.

DaveChurchill on :

I said during the presentation that we may have to investigate BlueBlueSky, so if someone presents me with an argument showing that it changed very little, then I will consider removing it.

However, CSE did significantly better than Locutus, so it's harder for me to want to remove that one.

We cannot simply remove forks from the competition, because code sharing is a huge part of the community. UAlbertaBot code now exists in about two dozen bots since the rise of Steamhammer and its forks.

But please forward me any arguments you have for bots being too similar, and I will review them and possibly cull them from the official results.

McRave on :

I think most people will point to BlueBlueSky and DaQin; a lot of their changes are just rebranding or renaming functions. Just my 2 cents.

Tully Elliston on :

We really, really need some rules that prevent very similar forks of top bots from competing in these tournaments.

MicroDK on :

I would also like the tournament organizers to be stricter about when forks are viewed as different enough... We are in a grey area, but AIIDE 2018 showed that something has to be done. In my mind, the Locutus forks can be ignored in the results.

MicroDK on :

* CSE is not much different: they added proxy and sneak openings, a proxy location feature, and proxy detection, but the rest is the same.
* BlueBlueSky only added a proxy opening, a proxy feature, and enemy proxy detection. It also added holding the choke location when the enemy is doing a proxy.
* DaQin added the enemy natural location, micro changes (carriers and kiting), and minor strategy changes.
* Isamind has the fewest changes, but they have more impact on how the bot plays: it uses a NN to recognise the opponent's plans and choose its strategy.

I would keep Isamind, since using a NN is interesting, and maybe CSE, since they added new types of openings.

Wei Guo on :

Hi, MicroDK:
I'm Wei Guo, one of CSE's authors. Sorry for my poor English.
It's true that CSE was made in 6 weeks, but we really worked very hard. Actually, we worked 7 days per week, and sometimes overnight. I had a fever for a week and kept working full days on CSE. Not just me: almost everyone on the team was overworked.
I'm not good at playing StarCraft; I learned to play the game from scratch in July 2018. To build CSE, I invited many people who are good at StarCraft to give me advice. Sometimes they refused me, sometimes I had to pay them for just a little help, and in the end some of them inspired me a lot. What I learned from them is that some human players always use the same opening build order and change strategy according to the enemy's moves. No matter which opening build order the opponent bot selects, those human players can always beat it.
CSE has many differences from Locutus. The main difference, I think, is that it runs the same initial build order every time in PvP matches (except on the map Heartbreak Ridge), which is totally different from Locutus. Locutus basically uses a bandit model to switch between several openings; if it used any single opening in every game, it would lose badly, since the enemy would learn to counter it. To achieve that, CSE's single initial build order has to be very robust against different enemy plans.
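As an aside, the bandit-style opening selection described here can be sketched with the standard UCB1 rule. This is only an illustration of the idea, not Locutus's actual code; the opening names and the (wins, games) record format are hypothetical.

```python
import math

def ucb1_choose(stats, total_games):
    """Pick an opening. stats maps opening name -> (wins, games played)."""
    # Try every opening at least once before trusting the statistics.
    for opening, (_wins, games) in stats.items():
        if games == 0:
            return opening
    # Otherwise balance exploitation (win rate) against exploration
    # (a bonus that shrinks as an opening accumulates games).
    def ucb(item):
        _opening, (wins, games) = item
        return wins / games + math.sqrt(2 * math.log(total_games) / games)
    return max(stats.items(), key=ucb)[0]

record = {"4-gate": (3, 5), "dark templar rush": (0, 0), "proxy 2-gate": (1, 4)}
print(ucb1_choose(record, total_games=9))  # the unplayed "dark templar rush" is tried first
```

After each game against a given opponent, the bot would update the chosen opening's (wins, games) pair and rerun the selection before the next game.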
If you review the source code, you'll find that CSE constantly changes its build order and micro (currently the Dark Templar micro) based on the enemy's moves. It makes decisions according to the combat situation (whether it's on the winning side or not), proxy detection, zealot rush detection, dark templar detection, and previous game records. And it detects or infers those enemy plans in different ways.
The word "sneak" you mentioned is a strategy of CSE. It is activated based on several conditions, and may be interrupted by other conditions, after which it switches to other behaviors.
CSE also has several other improvements and features. Locutus is an excellent bot, and we respect all of the bot authors who put a lot of effort into their bots. We really appreciate Bruce, who made such a great bot, and we worked hard on CSE because we really wanted to make a nice bot and implement our ideas.

MicroDK on :

Thanks for your insight! I only spent a short time reading the code changes and probably did not catch everything. The important thing is that you actually changed how the bot plays. It's important for bot authors to get what help we can from expert players. ;)

Wei Guo on :

Totally agree. The expert players are really important, they helped us a lot. :)

Dilyan on :

Upload the bot on SSCAIT, guys, so we can have some fun together.

Wei Guo on :

Ok, I'll ask my boss for that, but currently I can't upload without his permission.
