AIIDE 2021 - the learning curves
Before I dig into what each bot learned, I thought I’d look at the win percentage over time graph. Every bot wrote data, and it is likely that every bot attempted to learn and improve over time. Only some succeeded in improving their results, though.
Every bot shows a startup transient on the graph. The early swings up and down are controlled by some combination of luck and learning: luck, because with so few games statistical variation is high; learning, if and when the learning algorithms make fast adjustments (I think they usually do). To disentangle luck from learning, I think I want both statistical tests and a look into the algorithms to see what the learning rates could be. That would be too much for one post. In this post, I’m looking at the curves after 20 or 30 rounds, when the swings have mostly leveled off. I’m answering the question: Is the bot able to keep learning throughout a long tournament, outlearning its competition in the long run?
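For the statistical-test half of that, the natural check is binomial: given a bot’s settled win rate, how surprising is a short early streak? If the swing is well within binomial variation, luck alone can explain it. A minimal sketch, with made-up numbers rather than tournament data:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """Probability of k or more wins in n games if the true win rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose a bot's settled win rate is 50%, but it won 15 of its first 20
# games. Evidence of fast early learning, or just variance?
p_value = binom_tail(20, 15, 0.5)
print(f"P(>= 15 wins in 20 games at 50%): {p_value:.4f}")  # about 0.0207
```

At about 2%, a streak like that is unlikely to be pure luck, but with a dozen bots each producing an early curve, one or two such streaks by chance alone would not be shocking; that’s why I’d want the algorithm-side look too.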
Four bots more or less held even. There are wobbles or slight trends, but not large ones. It’s what you expect if most bots are about equally good at lifetime learning. The learning systems are more or less saturated, and when one discovers an exploit, its counterpart figures out soon enough how to neuter the exploit, or so I imagine. The learning competition is near an equilibrium.
Stardust doesn’t learn much, and apparently doesn’t have to. Steamhammer and McRave have messy early curves, perhaps reflecting complicated learning systems. FreshMeat has a beautiful clean early curve, unlike any other bot’s, suggesting that it knows what it is doing and straightforwardly does it. All three of the lower bots show low humps followed by slight regressions. I provisionally interpret that as each bot’s learning system saturating, then its opponents adjusting to that over time.
Four bots were able to improve. BananaBrain was in a class by itself, improving far more than any other bot. WillyT, Microwave, and UAlbertaBot had slight upward trends. None of them looks as impressive as AIUR did in 2015.
What gives BananaBrain a steeper curve? Is it good at learning in the long term, or bad at learning in the short term? (See that down-hook at the beginning.) I’ll look into it later on.
Dragon and DaQin fell behind. If somebody’s going up, somebody else must be going down. It may not be a coincidence that both are carryover bots from last year. Dragon’s learning files have a simple structure: the strategy name and a win/loss record. DaQin plays few strategies and has few ways to escape from exploits that other bots may find.
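A learning file of that shape, per-strategy win/loss counts, naturally supports bandit-style strategy selection. Here’s a minimal sketch using UCB1, a common choice for this kind of problem; whether Dragon actually selects strategies this way is my assumption, and the strategy names are made up:

```python
import math

def choose_strategy(records: dict[str, tuple[int, int]]) -> str:
    """Pick a strategy from {name: (wins, losses)} records using UCB1."""
    total = sum(w + l for w, l in records.values())
    best_name, best_score = "", -1.0
    for name, (wins, losses) in records.items():
        games = wins + losses
        if games == 0:
            return name  # always try an untested strategy first
        # Exploit the observed win rate, plus an exploration bonus that
        # shrinks as a strategy accumulates games.
        score = wins / games + math.sqrt(2 * math.log(total) / games)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

records = {"9-pool": (3, 7), "2-hatch-muta": (6, 4), "lurker-rush": (0, 0)}
print(choose_strategy(records))  # the untested strategy is tried first
```

The point for the curves: once every strategy’s counts are filled in against every opponent, a learner like this has little left to discover. That’s one plausible mechanism for the saturation-then-regression shape, since opponents with richer learning state can keep adjusting after this one has settled.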
Next: Looking at Stardust’s learning.