I doubt anyone will take my advice, and I certainly won’t take it myself, so I can offer it with complete freedom. Here’s what I would take into account in choosing maps if I were running a tournament.
Balance. You want the maps to be fair across races. We can’t use statistics from pro games to judge balance, because bot balance and pro balance are unrelated. Also, bots are improving rapidly, the participant pools are small, and we may choose new maps for each tournament, so past tournament statistics are not much help for balance either. But the same graph I linked above, which shows that bot balance and pro balance are different, also shows that bot imbalances are narrower. To say it differently: The maps may have imbalances, but bots are bad at exploiting them. Choose enough maps, and the balance differences will average out statistically; it’s the same principle as diversifying a portfolio of stocks. The 5 maps of CIG 2017 are not enough to convince me, but the 10 maps of AIIDE are probably enough.
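To see why the averaging works, here is a minimal sketch with made-up imbalance numbers (not measured from any tournament): if each map’s winrate deviates independently from 50% with some fixed spread, the pool’s net imbalance shrinks roughly as the square root of the number of maps.

```python
import random
import statistics

def pool_imbalance(n_maps, per_map_sd=0.05, trials=2000, seed=42):
    """Estimate how far a random map pool's average winrate
    strays from 50%.

    Each map's winrate deviation from 50% is drawn independently
    (sd = per_map_sd, an illustrative number); the pool's net
    imbalance is the mean deviation across its maps. Returns the
    standard deviation of that mean over many simulated pools.
    """
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0, per_map_sd) for _ in range(n_maps))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

# A 10-map pool shows roughly 1/sqrt(2) the net imbalance
# of a 5-map pool, and far less than a single map.
```

This is the portfolio principle in miniature: independent per-map errors cancel, so a bigger pool is a fairer pool even when no single map is fair.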
Number of starting positions. For this year, maps were chosen like this:
| tournament | 2 player | 3 player | 4 player |
|---|---|---|---|
| CIG | 1 (20%) | 2 (40%) | 2 (40%) |
| AIIDE | 3 (30%) | 2 (20%) | 5 (50%) |
| SSCAIT | 3 (21%) | 2 (14%) | 9 (64%) |
Those all seem reasonable to me. I like SSCAIT’s ratios best. 2 player maps favor rush strategies, because the enemy’s starting location is known from the first frame, while 4 player maps favor macro strategies, and you don’t want to emphasize either too much. One issue is that there aren’t many good 3 player maps (though the best ones are quite good).
Novelty versus consistency. If you carry some maps over from year to year, we can use them to (at least try to) measure balance changes and progress. If you introduce new maps, you pose a stronger test of adaptability. If I were choosing, I would pick some old standbys and a few unusual maps that the bots might not have played on before (or else run specialized tournaments and do both separately). CIG has done a good job of this, though I think it’s only a side effect of their process and not a deliberate decision.
Prodding bots to improve. Since bots are poor at exploiting map features, I want to include some maps with exploitable features to encourage authors to step up. Think of Iron failing on the map Hitchhiker at CIG 2017 because (as the author explained) BWEM did not grasp all the map features; do you think Iron will fail the same way next year? I proposed the map Namja Iyagi, which has 6 islands, as a map with exploitable features that is still playable by bots that do not understand islands. PurpleFistJadian suggested Outsider, which has the exploitable feature of pushing units through mineral lines, and remains playable by bots that don’t exploit it. There are a lot of choices; the universe of pro maps is large.
No appearance of cheating. Sometimes the tournament organizers participate in the tournament. Even when they don’t, the organizers may have a real or apparent interest in some participants: “Bot X uses method Y, which I’ve been pushing. So if X does well....” To avoid controversy, we may want the map selection process to make favoritism visibly difficult. So divide your universe of maps into classes according to the other goals, and choose randomly from each class. We’ve seen the procedure of accepting a number from each participant, XORing the numbers together, and using the result as the seed for a known random number generator, so that the process is transparent and tamper-resistant. It has never been clear to me whether the organizers actually follow this elaborate process; we often see map pools reused from year to year, probably to save time. Well, if there are no suspicions, then there is no reason to allay them. I have no reason to suspect that the tournaments are unfair, even unintentionally.
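The XOR-seed draw described above can be sketched in a few lines. The map classes and participant numbers below are hypothetical, and the function names are mine, not taken from any tournament’s actual scripts:

```python
import random

def tournament_seed(participant_numbers):
    """XOR one submitted number per participant into a single seed.

    No participant can steer the result without knowing every other
    number in advance, which makes the draw tamper-resistant."""
    seed = 0
    for n in participant_numbers:
        seed ^= n
    return seed

def pick_map_pool(map_classes, participant_numbers):
    """Pick one map at random from each class using a known generator,
    so anyone can re-run the draw and verify the outcome."""
    rng = random.Random(tournament_seed(participant_numbers))
    return [rng.choice(cls) for cls in map_classes]

# Hypothetical classes reflecting the goals above: old standbys,
# maps with exploitable features, unusual maps.
classes = [
    ["Fighting Spirit", "Python"],
    ["Namja Iyagi", "Outsider"],
    ["Hitchhiker", "Sparkle"],
]
pool = pick_map_pool(classes, [12, 7, 33])
```

Publishing the generator, the class lists, and each submitted number lets anyone reproduce the draw; changing any single participant’s number changes the seed, so no one party controls the outcome.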