
SSCAIT halfway point

The SSCAIT 2019 round robin stage is half finished, so it is time to take stock.

There have been few surprises. Among the top 16, Microwave has done well to hold #8 so far, which puts it in the top half of the finals bracket, an advantage. TyrProtoss, Killerbot by Marian Devecka, Xiao Yi, and Arrakhammer sit at places 14-17 with nearly equal winning percentages; Xiao Yi and Arrakhammer in particular are virtually tied. One of the four will likely draw the short straw and miss the finals.

For most improved, I vote MadMixP (because StyxZ made its improvement earlier). MadMix introduced a new cannon contain opening which has been tripping up opponents, including Steamhammer. I am pleased with the progress of Simplicity, which is growing stronger and better rounded. Ecgberht is tricky, not strong at fighting but still dangerous.

Former champion and benchmark player IceBot is below the 50% mark. Until February 2015, it was the strongest bot on SSCAIT. Other old school champions like XIMP by Tomas Vajda and UAlbertaBot by Dave Churchill are less robust and are ranked lower yet. Onward!


Comments

Dan on :

Great field this year. The pace of development is blistering. We're looking at the high likelihood that no champion from before 2017 makes it into the playoffs.

Just going 50% against this crowd is already an achievement. That'd put you on par with hall-of-famer Skynet.

I think these are the four big factors leading to rapid improvement over the past couple of years:
1. FAP. Having access to combat simulation gives a huge edge over older bots, which tended to lack it. Trying to beat a bot with better fight/flight decisions is an incredibly uphill battle.
2. Forking. Half of the top 17 are forks of other bots in the top 16 (or which would have made the top 16 if they were participating). That includes forks with transformative improvements on their source material.
3. SC-Docker. It's way easier to test against strong opponents now. Testing against the built-in AI holds you back because you get away with terrible errors and fail to develop an eye for what causes winning or losing. Folks like Johan and me (and surely others) have automated testing setups for rapidly assessing the quality of changes and preventing regressions.
4. More events and ladders. BASIL especially. We have access to so many more replays now that we see patterns and catch issues more readily. Running games locally is one thing, but being able to see "who beat my bot on BASIL today, and how?" aids robustness in a way that local testing doesn't quite fulfill.
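To make point 1 concrete, here is a toy fight/flight check built on a crude combat simulation. This is an illustrative sketch only, not FAP's actual interface: the `Unit` fields and the Lanchester-style damage exchange are invented for the example.

```python
# Toy combat simulation for a fight/flee decision.
# NOT FAP's real API -- an illustrative sketch with invented unit stats.
from dataclasses import dataclass

@dataclass
class Unit:
    hp: float    # remaining hit points
    dps: float   # damage per second

def simulate(mine, theirs, dt=1.0, max_time=120.0):
    """Crude simulation: each side deals its total dps into the other
    side's pooled hit points until one pool is exhausted or time runs out.
    Returns True if our side comes out ahead."""
    my_hp = sum(u.hp for u in mine)
    their_hp = sum(u.hp for u in theirs)
    my_dps = sum(u.dps for u in mine)
    their_dps = sum(u.dps for u in theirs)
    t = 0.0
    while my_hp > 0 and their_hp > 0 and t < max_time:
        my_hp -= their_dps * dt
        their_hp -= my_dps * dt
        t += dt
    return my_hp > their_hp

def should_fight(mine, theirs):
    """Fight if the simulation says we win the engagement, else flee."""
    return simulate(mine, theirs)
```

Even something this simple beats having no simulation at all: a bot that retreats from losing fights and takes winning ones avoids the throwaway engagements that sink older bots. Real simulators model range, splash, armor, and unit removal, but the decision structure is the same.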

Jay Scott on :

Agreed, if you can score 50% you should be proud. And still ambitious. :-)

MarcoDBAA on :

Having a 50% win rate is extremely difficult now, because only bots with a Liquipedia page (mostly the better ones) were allowed to compete. I am OK with it, because the tournament's group stage really was too long last year. That does not mean I want low-tier bots disabled on the normal ladder, but I have talked about this here already. Maybe make high-tier bots (and newly updated ones) more likely to be randomly chosen?

The older learning bots do better than those that only have one strategy or only react within the match. Skynet is still relatively strong (for being geriatric :P), and TyrProtoss is, surprisingly, doing better than last year (for now) without being updated. Bereaver and Ximp (already out) are in deep trouble. Bereaver could still win more games late, but it has been figured out (at least vs protoss and zerg). The last and current update of Killerbot seems a bit experimental (and has no learning), so the bot is relatively disappointing. I think you definitely need the learning feature now to win an AI tournament. And nearly all top 16 bots were updated in 2019.

MadMix seems to get in by being able to cheese and by being highly varied. It feels like a newer AIUR. Styx is most improved, I think. Hao Pan is also clearly stronger than last year, but does not seem able to win against the top bots. Yes, Simplicity might become a top bot later...

But I think the same two (PW and Locutus) will be in the finals. Dragon may surprise (it is inconsistent) if it gets a good draw, maybe? Unsure about Betastar (ZNZZ is doing worse than expected). One difference from Locutus I observed is that it likes to "control" the center of the map and go for enemy expansions from there (often ignoring the main; see its games vs insanitybot, for example). Not sure if this really helps. I like Bananabrain more than last year, but I wouldn't really believe in a tournament win. Everyone else should have very high betting odds (and therefore no real chance to win it).

And the winner would really need to play vs krasi0, but it is not their fault that the bot is missing from the tournament again.
We did not even get krasi0Z, and that is the real scandal :( xD.

Jay Scott on :

Dragon is stronger against top opponents than against all opponents. It has some chance.

MarcoDBAA on :

For sure, this tournament format (more exciting in my opinion than a league) helps. You can throw games vs Skynet for example. ;)

Dan on :

I don't think it's a given that you need learning to succeed, or even variety in your game plan. SAIDA demonstrated that a single build that's deep with reactions can still succeed. Leaning on learning certainly makes you less exploitable, but it also introduces more losses due to exploration.

Short elimination series mean almost anything can happen. There's already a bunch of 1-1 matchups in the top 16.

MarcoDBAA on :

Didn't SAIDA learn too? I remember it changing timings? Earlier turrets vs DT rushes, for example (that changed its matches vs Bananabrain before the last tournament, I think). Or is that wrong?

Also, it did not win, but lost 0:3 vs your bot and 0:3 vs Locutus.
And that was last year. :P

But right, too much (random) exploration isn't helpful in this format either. Not enough games.

Antiga / Iruian on :

SAIDA is more like a list of 300 reactionary IF statements from a static opener (by FAR the most complex decision tree created so far, and very cool to read). The adaptation it does is adjusting the timing of things based on previous experience, and only for very specific things (mostly turrets and vessels). So if it saw a DT at a certain time and didn't have turrets ready, it will move the detection timing up next game. Its learning capacity is pretty limited.
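That kind of timing adjustment can be sketched in a few lines. This is a hypothetical illustration of the mechanism described above, not SAIDA's actual code; the function name and safety margin are invented.

```python
# Hypothetical sketch of timing-adjustment learning: if Dark Templar
# arrived before detection was ready last game, schedule detection
# earlier next game. Names and the margin value are invented.

SAFETY_MARGIN = 720  # frames (~30 seconds at "fastest" speed); assumed value

def updated_detection_frame(planned_frame, observed_dt_frame):
    """Return the frame to have detection ready next game.
    If last game's DTs appeared too close to (or before) the planned
    detection timing, pull the timing earlier; otherwise keep the plan."""
    needed = observed_dt_frame - SAFETY_MARGIN
    return min(planned_frame, needed)
```

The key property is that the adjustment is monotone and narrow: it only ever moves one specific timing earlier in response to one specific observation, which matches the "limited but effective" character of SAIDA's adaptation.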

MarcoDBAA on :

OK, so what I observed is (mostly) already the full extent of its learning capacity.

Anyway, it looks like it is getting more and more difficult to depend on cheese/rush strategies (i.e., no real normal play) or on one strong non-adapting build.

MicroDK on :

Microwave still needs to play the top 3 bots, so I expect its winrate to go down soon. ;)
