Cadenzie versus Locutus 5-game showmatch
Thanks to SCHNAIL we’re starting to see bot matches against strong human players. Today I write about last Tuesday’s 5-game match between Cadenzie (Z) and Locutus. It comes with an interview on Making Computer Do Things. Watch the games first if you’re interested.
Cadenzie is not a top pro; to me her play looks a little slow and awkward compared to the best. But she is very strong, and when the match was announced, I expected her to win every game. In the event, she scored 3-2. I thought all the games were one-sided: Either zerg won fairly easily, or else Locutus collected enough dragoons and was able to overpower her hydralisks with superhuman dragoon micro. In the game where Locutus chose to go zealots instead, the zealots looked wimpy; Steamhammer has had the same experience.
I also felt that the first game was the only one that Cadenzie played with 100% seriousness (and she said as much in the interview). She played a well-rehearsed build with mass hydras and drop. Locutus (lacking PurpleWave’s strategy skills) did not understand how to read her build in order to cut corners in the opening, and it fell behind (slightly behind in bot terms, “massively” behind according to Cadenzie; the distance varies by skill level!). When hydras collected outside its natural, Locutus trickled units out through the narrow opening in its wall and let them be picked off, falling further behind. The front gateway fell, and then overlord speed finished researching. She’d gotten overlord drop first, and she picked up hydras and put them in the protoss main, where they cleaned up with little effort. I judged that she could as easily have skipped drop and powered through the front door.
I thought the most interesting answer in the interview was “I played in a tournament before where there was a team melee relay style with a mix of progamers and beginner level players and they would take turns every 2 minutes, in a way it was most similar to that.” In other words, Locutus was extremely good at some aspects of the game, and extremely weak at others. That is similar to other games where computer programs were good enough to play humans but not good enough to win every time; for example, chess programs in the old days were superhuman at tactics and weak at strategy in a very similar way.
She repeatedly emphasized that bots need to adapt more to what they scout. I think that’s the main takeaway.
Compare Artosis versus top bots on Twitch: Notice how often Artosis says “In this build, when I see such-and-such, I do so-and-so.” Human players have extensive knowledge of how to play in specific situations, and no bot comes close. SAIDA may come closest, with its one all-purpose build and numerous reactions, but its understanding is shallow by comparison. PurpleWave, I think, has the greatest strategy knowledge of any bot, but it has a weak understanding of tactics. Locutus relies on its strong micro and aggressive tactics, which cover for the weaknesses in other aspects. Bots not only know less, they don’t integrate their knowledge into a theory of how the game works; they don’t understand what they know, so they are weak at drawing inferences and their ability to adapt is shallow. See for example the Artosis game versus Killerbot by Marian Devecka, where Artosis was able to guess that a third zerg base at an early timing was likely, and scouted for it specifically. In other games, he did not spend effort scouting for expansions he did not see as likely.
By the way, the last game is the best: Artosis versus McRave, starting about 2 hours in.
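To make “when I see such-and-such, I do so-and-so” concrete, here is a minimal sketch of what hand-written reaction rules can look like. It is only an illustration: the rule names, conditions, and thresholds are invented for the example, not taken from any actual bot.

    // A tiny hand-written "when I see X, do Y" rule system.
    // Everything here is hypothetical, for illustration only.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // What the scout reported, in simplified form.
    struct ScoutInfo {
        int  enemyBases    = 1;
        int  enemyGateways = 0;
        bool sawEarlyGas   = false;
        int  gameFrame     = 0;
    };

    // One reaction rule: a condition on scouted information plus a response.
    struct ReactionRule {
        std::string name;
        std::function<bool(const ScoutInfo&)> condition;
        std::string response;   // in a real bot, this would switch the plan
    };

    int main() {
        // Each rule encodes one "when I see such-and-such, I do so-and-so".
        std::vector<ReactionRule> rules = {
            { "fast third base",
              [](const ScoutInfo& s) { return s.enemyBases >= 3 && s.gameFrame < 8000; },
              "take a fast third yourself and drone up" },
            { "all-in (no expansion, many gateways)",
              [](const ScoutInfo& s) { return s.enemyBases == 1 && s.enemyGateways >= 2; },
              "make defensive units early, delay drones" },
            { "tech opening (early gas, few gateways)",
              [](const ScoutInfo& s) { return s.sawEarlyGas && s.enemyGateways <= 1; },
              "expect tech; add detection and apply pressure" },
        };

        ScoutInfo scouted{3, 1, false, 7200};   // example scouting report

        // First matching rule wins; a real bot would want priorities,
        // confidence values, and re-evaluation as new information arrives.
        for (const auto& rule : rules) {
            if (rule.condition(scouted)) {
                std::cout << "Matched rule: " << rule.name
                          << " -> " << rule.response << "\n";
                break;
            }
        }
    }

A bot can carry a long list of rules like this and still be shallow: each rule is an isolated fact, and nothing ties the rules together into a theory of the game that supports inference.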
Filling the knowledge gap I believe will require machine learning. Writing rules and reactions by hand will take a day or two less than forever, and search will not solve all problems if it has only handmade evaluations to rely on. For Steamhammer, I’ve figured out a way to put together familiar algorithms that will execute fast, and I expect it will also learn fast (from little data) and be reasonably accurate (not amazing like deep learning, but adequate). It’s part of my strategy adaptation goal. If it works as well as I hope, I WILL CRUSH YOU PUNY MORTALS BENEATH MY STEEL THUMB BWAHAHAHAHA, or something like that. Actually the first application will be nothing more than an evaluation function to choose openings and strategies, valuable but extracting only a little of the potential, and if it’s successful then after I explain how it works everyone else will get ahead of me again. That will be good too.
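For flavor, here is a minimal sketch of one familiar algorithm that learns fast from little data: a UCB1 bandit that picks an opening per opponent from past win/loss records. To be clear, this is an assumption-laden illustration of the general idea, not the method I hinted at above; the data and names are made up.

    // Opening selection as a multi-armed bandit (UCB1).
    // Illustrative only; names and records are hypothetical.
    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    struct OpeningRecord {
        std::string name;
        int wins  = 0;
        int games = 0;
    };

    // UCB1 score: empirical win rate plus an exploration bonus that
    // shrinks as an opening accumulates games.
    double ucb1(const OpeningRecord& r, int totalGames) {
        if (r.games == 0) return 1e9;   // always try untested openings first
        double winRate = double(r.wins) / r.games;
        double bonus   = std::sqrt(2.0 * std::log(double(totalGames)) / r.games);
        return winRate + bonus;
    }

    // Assumes a non-empty record list for this opponent.
    const OpeningRecord& chooseOpening(const std::vector<OpeningRecord>& records) {
        int total = 0;
        for (const auto& r : records) total += r.games;
        const OpeningRecord* best = &records.front();
        for (const auto& r : records)
            if (ucb1(r, total) > ucb1(*best, total)) best = &r;
        return *best;
    }

    int main() {
        // Hypothetical per-opponent records, read from disk in a real bot.
        std::vector<OpeningRecord> vsLocutus = {
            { "9-pool speed",    2, 5 },
            { "12-hatch hydra",  4, 6 },
            { "overpool lurker", 0, 1 },
        };
        std::cout << "Next opening vs Locutus: "
                  << chooseOpening(vsLocutus).name << "\n";
    }

With records this small, the exploration bonus dominates and the chooser tries the nearly untested opening; as games accumulate, it settles on whatever actually wins.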
Comments
Edmund Nelson:
Locutus has superhuman dragoon micro which crushes any human. Combine that with superhuman macro and superhuman mineral gathering, and that’s a winning formula. If your goal is to beat humans, then beating humans at their own game seems like a losing strategy; try to make the game about things that bots are good at and humans are bad at. Starcraft is less of a strategy game than you think and more of a game about out-executing your opponent. Bots still have major issues with macro, such as building too many gateways or letting them idle, among other mistakes.
Thinking about how humans get edges versus other humans is partially a losing battle. Look at AlphaStar versus MaNa: AlphaStar did not out-strategize MaNa, it abused 2k APM blink micro, which broke the game. Humans make a bunch of macro mistakes (probe building, etc.), which gives bots a very strong way to be superhuman.
Admittedly, AlphaStar and OpenAI Five are projects made by large teams with hundreds of millions of dollars behind them, so it might be more that they have more time and resources to work on the problem than SSCAIT participants do. Bruce Nielsen, Dave Churchill, and Jay Scott are great programmers, but they don’t have the resources, time, and monolithic focus that a 30-person research team at DeepMind has.