
astonishing discovery: Starcraft is hard

The secret motivation behind the last two days of posts is to remind us all how hard Starcraft is. Not that anybody forgot but—even when you know it’s hard, you don’t know. The amount of knowledge and skill it takes to play well is huge, and we don’t know how huge because most of it isn’t written down. Humans can’t write down everything they know, even when they try.

Adding knowledge to a program is also hard. People who worked on expert systems starting in the 1970s named the problem the knowledge acquisition bottleneck: The bottleneck in acquiring knowledge from humans and making it available to computers.

Right now, most knowledge acquisition in Starcraft bots happens by bot authors coding the knowledge in by hand, the most direct and the slowest method. There are exceptions, of course. But if we want to make faster progress, we have to find faster ways. There are three broad classes of them.

1. Code it better. One of the things expert system researchers did was invent knowledge representations to make it easier to encode knowledge. Computer code is procedural, but knowledge is declarative. You want a knowledge representation language to be as declarative as possible; some fixed set of coded procedures interprets the declarative knowledge to put it to use. LetaBot’s database of build orders is an example: LetaBot declares build orders and code compares the build orders against scouting information to decide what to do. I approve. A number of bots are coded in domain-specific language style, where the program implements what amounts to a higher-level language to describe Starcraft behaviors. Skynet is a good example. I approve of that too. Coding knowledge is still hard, but these kinds of steps make it easier.
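
To make the idea concrete, here is a minimal sketch of a declarative build-order representation with one fixed procedure that interprets it, loosely in the spirit of LetaBot’s database. The struct names, the opening, and the supply numbers are all invented for illustration, not taken from any bot’s actual code.

    // Declarative knowledge: a build order declared as data, interpreted by generic code.
    #include <iostream>
    #include <string>
    #include <vector>

    // One step of a build order: "at this supply, start this thing".
    struct BuildStep {
        int supply;        // supply count at which to issue the order
        std::string item;  // unit, building, or upgrade to start
    };

    // A whole opening, declared as data rather than written as procedures.
    struct BuildOrder {
        std::string name;
        std::vector<BuildStep> steps;
    };

    int main() {
        // The knowledge lives here, in declarations...
        BuildOrder opening{"illustrative 2 factory", {
            {9,  "Supply Depot"},
            {11, "Barracks"},
            {12, "Refinery"},
            {16, "Factory"},
            {20, "Factory"},
        }};

        // ...and one fixed procedure puts it to use. A real bot would compare
        // the declared steps against the game state and scouting information;
        // this one just walks the list.
        int currentSupply = 14;  // pretend this came from the game
        for (const BuildStep& step : opening.steps) {
            bool started = step.supply <= currentSupply;
            std::cout << step.supply << " " << step.item
                      << (started ? "  [started]" : "  [pending]") << "\n";
        }
    }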

2. Search bypasses knowledge acquisition. Chess as played by humans also requires a huge amount of knowledge to play well, but chess programs don’t need to encode most of that knowledge because search can discover it on its own. In a chess program, most of the skill comes from knowledge encoded procedurally in the search and the evaluation function. Most of the knowledge by bulk, which adds a little more skill, comes from databases of opening and endgame positions. And how were those databases created? By offline search, like the strategy catalog that I proposed.

In Starcraft we may want separate searches, and/or separate knowledge bases, at the strategic, tactical, and unit control levels. I don’t know any bot that does strategy search. MaasCraft does tactical search. The SparCraft library does unit control search.
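
For a flavor of what search buys you, here is a toy depth-limited search over an abstracted skirmish. The combat model, the numbers, and the evaluation are all made up for the example, and a real bot would use something like SparCraft’s simulator instead; the point is only that the skill falls out of search plus a simple evaluation function rather than hand-coded rules.

    // Toy search: each side alternately focus-fires one enemy unit.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Unit { int hp; int damage; };

    struct State {
        std::vector<Unit> mine;
        std::vector<Unit> enemy;
    };

    // Evaluation function: knowledge encoded procedurally, as in a chess program.
    int evaluate(const State& s) {
        int score = 0;
        for (const Unit& u : s.mine)  score += u.hp;
        for (const Unit& u : s.enemy) score -= u.hp;
        return score;
    }

    // One "move" = everyone on the side to move focus-fires one target.
    State applyMove(State s, bool myTurn, size_t target) {
        std::vector<Unit>& attackers = myTurn ? s.mine : s.enemy;
        std::vector<Unit>& defenders = myTurn ? s.enemy : s.mine;
        int total = 0;
        for (const Unit& u : attackers) total += u.damage;
        defenders[target].hp -= total;
        if (defenders[target].hp <= 0) defenders.erase(defenders.begin() + target);
        return s;
    }

    // Depth-limited minimax over target choices.
    int search(const State& s, bool myTurn, int depth) {
        const std::vector<Unit>& targets = myTurn ? s.enemy : s.mine;
        if (depth == 0 || targets.empty()) return evaluate(s);
        int best = myTurn ? -1000000 : 1000000;
        for (size_t i = 0; i < targets.size(); ++i) {
            int value = search(applyMove(s, myTurn, i), !myTurn, depth - 1);
            best = myTurn ? std::max(best, value) : std::min(best, value);
        }
        return best;
    }

    int main() {
        State s{{{80, 20}, {80, 20}}, {{100, 12}, {60, 12}, {60, 12}}};
        std::cout << "best reachable score: " << search(s, true, 4) << "\n";
    }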

3. Learning automates knowledge acquisition. The strategy learning that some bots do now is different; it is opponent modeling, learning the opponent rather than learning about Starcraft. Learning for knowledge acquisition means learning to play Starcraft better from data, whatever the data may be. A few old bots like BroodWarBotQ used to learn tactics and unit control, though they’re gone now so apparently it was not too successful. We’ve heard that Tscmoo is working on neural networks for strategic decisions, but have no details. Other than that, I don’t know any current bot that uses learning for knowledge acquisition, whether for strategy, tactics, or unit control. (Though I wouldn’t be surprised to hear of one.) If a bot learns from replays, it can learn from the replays of humans or of other bots as well as its own.
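
As a sketch of the simplest possible version, suppose some offline tool has already boiled each replay down to (enemy opening seen, our response, result); then “learning” can be nothing fancier than counting win rates and picking the best response. The record fields, the openings, and the numbers below are invented for illustration, not real data.

    // Toy replay learning: estimate win rates per (opening, response) pair.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct ReplayRecord {
        std::string enemyOpening;
        std::string ourResponse;
        bool won;
    };

    int main() {
        // In reality these would be extracted from thousands of replays.
        std::vector<ReplayRecord> replays = {
            {"4 pool", "bunker", true},     {"4 pool", "bunker", true},
            {"4 pool", "2 rax", false},     {"12 hatch", "2 rax", true},
            {"12 hatch", "1 fac expand", true}, {"12 hatch", "1 fac expand", true},
        };

        // Count games and wins for each opening/response pair.
        std::map<std::string, std::map<std::string, int>> wins, games;
        for (const ReplayRecord& r : replays) {
            games[r.enemyOpening][r.ourResponse] += 1;
            if (r.won) wins[r.enemyOpening][r.ourResponse] += 1;
        }

        // Pick the response with the best estimated win rate against each opening.
        for (const auto& [opening, responses] : games) {
            std::string best;
            double bestRate = -1.0;
            for (const auto& [response, n] : responses) {
                double rate = double(wins[opening][response]) / n;
                if (rate > bestRate) { bestRate = rate; best = response; }
            }
            std::cout << "vs " << opening << ": " << best
                      << " (" << bestRate * 100 << "% in sample)\n";
        }
    }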

As an aside, I think learning for opponent modeling from replays would be good, but current tournaments don’t allow for it. Without access to replays during the tournament, replay learning can only happen offline ahead of time, which is fine for knowledge acquisition but not for modeling a specific opponent.

administrivia

For now, comments are allowed throughout, even on the oldest posts. After the first comment was spam (to no one’s surprise, I’m sure), I turned on moderation: Subscribers should see no junk, but comments may be delayed while I approve them.

If you don’t want to post a comment, you can e-mail me directly.

levels of abstraction

In Starcraft it seems that decisions have to be made at about 3 levels of abstraction, which go something like this:

  • strategy - what tech to pursue, when to expand, etc.
  • tactics - maneuvering the army, deciding when to fight
  • unit control - moving, targeting, casting

Most people seem to like 3 levels. Maybe some prefer 2 or 4. For example, if you have opponent modeling or strategy learning then you might want to add a level above these, a meta-strategy level at which decisions have effects across multiple games. And of course there’s no agreement on the names of the levels or on exactly which decisions each one includes (the decisions above are examples and don’t cover everything). But my list is similar to most.

One way to look at it is that plans made at a higher level last longer and have to be made less often: They are in effect for a longer period. At the strategy level, which tech path you take or how many workers you cut for a rush may be part of a plan that the bot follows for minutes. At the unit control level, which enemy you target in a fight may have long-term consequences (do you win or lose the battle?), but the decision itself may be over in a fraction of a second and then it’s time for the next one. The paper A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft points this out, as well as other differences between levels of abstraction.
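
A crude way to reflect that in code is to tick each level on its own period. The sketch below is purely illustrative: the method names and the frame counts are invented, with 24 frames standing in for roughly one second of game time.

    // Three decision layers driven from one game loop, each on its own period.
    #include <iostream>

    struct Bot {
        void decideStrategy() { std::cout << "  strategy: choose tech path, expansions\n"; }
        void decideTactics()  { std::cout << "  tactics: move the army, pick fights\n"; }
        void controlUnits()   { std::cout << "  unit control: move, target, cast\n"; }

        void onFrame(int frame) {
            if (frame % 240 == 0) decideStrategy();  // rarely: plans last minutes
            if (frame % 24 == 0)  decideTactics();   // more often: plans last seconds
            controlUnits();                          // every frame: decisions last moments
        }
    };

    int main() {
        Bot bot;
        for (int frame = 0; frame <= 480; frame += 24) {  // sample frames coarsely
            std::cout << "frame " << frame << "\n";
            bot.onFrame(frame);
        }
    }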

Bottom up. The way I think of it is this: It’s theoretically possible to make all decisions at the unit control level. Your initial nexus is a unit, each probe it makes is a unit, all you have to do is give the right commands at the right time. But it’s crazy hard to coordinate a strategy if you only think at the detail level. Abstraction is necessary if you want to have effective game plans; it is just plain infeasible to find a good strategy by trying different patterns of low-level actions until you hit on something that works, simulated annealing-style. If it’s possible at all, that kind of strategy search has to happen offline.

Top down. On the other hand, PerfectBot certainly doesn’t make strategy decisions without considering the tactical and combat situations. The levels of abstraction are not isolated. Remember mine laying from yesterday: Mines have long-term strategic, short-term tactical, and immediate combat uses. To decide how best to place spider mines, PerfectBot has to consider every level of abstraction.

Someday actual bots will want to consider every level of abstraction too. Cue the topic of hierarchical search, which I’ll write about someday. Another idea for integrating levels of abstraction is mission command, a term from the real-life military. In outline, the commander sets overall goals and the subordinates figure out how to achieve them. One implementation for Starcraft (quite different from how it’s done in real life) might go like this: The strategy boss, paying no attention to the tactical situation, figures out possible strategic goals and scores them. “We need resources most of all, 100 points for a new expansion. We need to counter lurkers, 50 points for vessels and enough tanks. The opponent is taking bases, 50 points for killing an enemy base. It would be nice to harass, 20 points for wraiths to hunt overlords or 20 points for small drops to get drones.” The tactics boss considers tactical plans and scores them in part by how well they achieve strategic goals. “We can’t expand (0 points) or kill bases (0 points), we’re lurker contained (curse KillerBot!). Tech to vessels and make tanks until we can break out (50 points); harassment is worth less.” You get the idea.
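
Here is a rough sketch of that scoring scheme: the strategy boss emits valued goals without looking at the tactical picture, and the tactics boss scores plans by the goals they serve times its own feasibility estimate. All the names, values, and the combination rule are invented for the example; treat it as one possible shape, not a recipe.

    // Mission-command sketch: strategy scores goals, tactics scores plans.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Goal { std::string name; int value; };

    struct Plan {
        std::string name;
        double feasibility;                  // tactics layer's judgment, 0..1
        std::vector<std::string> achieves;   // which strategic goals it serves
    };

    int main() {
        // Strategy boss: goals and their worth, ignoring the tactical situation.
        std::vector<Goal> goals = {
            {"expand", 100}, {"counter lurkers", 50}, {"kill enemy base", 50},
            {"harass", 20},
        };
        std::map<std::string, int> goalValue;
        for (const Goal& g : goals) goalValue[g.name] = g.value;

        // Tactics boss: plans scored by feasibility times the goals they achieve.
        std::vector<Plan> plans = {
            {"take natural now",        0.0, {"expand"}},          // lurker contained
            {"tech to vessels + tanks", 1.0, {"counter lurkers"}},
            {"wraith overlord hunt",    0.5, {"harass"}},
        };

        const Plan* best = nullptr;
        double bestScore = -1.0;
        for (const Plan& p : plans) {
            double score = 0.0;
            for (const std::string& g : p.achieves) score += goalValue[g];
            score *= p.feasibility;
            std::cout << p.name << ": " << score << "\n";
            if (score > bestScore) { bestScore = score; best = &p; }
        }
        std::cout << "chosen plan: " << best->name << "\n";
    }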

spider mine placement

I’ve been thinking about mine placement. Where do you lay spider mines? It’s a tiny part of Starcraft, but even by itself it’s super complicated.

Attack units or see the map? IceBot lays mines in a wide grid pattern, each mine in sight range of its neighbors, a pattern that gives plenty of warning of incoming attacks or drops. But the mine grid doesn’t kill many units. WOPR lays mines in dense bunches along the shortest path between main bases. Bunched mines don’t scout as widely but are dangerous to push through because they trigger in groups. Tyr lays mines in a tighter grid pattern, which could be a compromise between the two goals.
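
For the grid style, a first cut is just candidate spots spaced so neighboring mines can keep each other in sight. The sketch below is a guess at the idea, not any bot’s actual code; the sight-range figure, the map coordinates, and the grid size are assumptions, and a real bot would also filter for walkable, safe ground.

    // Candidate mine spots on a square grid around a center point.
    #include <iostream>
    #include <vector>

    struct Position { int x, y; };

    std::vector<Position> mineGrid(Position center, int halfWidth, int spacing) {
        std::vector<Position> spots;
        for (int x = center.x - halfWidth; x <= center.x + halfWidth; x += spacing)
            for (int y = center.y - halfWidth; y <= center.y + halfWidth; y += spacing)
                spots.push_back({x, y});
        return spots;
    }

    int main() {
        // Assume a mine sight range of roughly 3 tiles (96 pixels); with that
        // spacing each mine just keeps its neighbors in sight. Treat the figure
        // as approximate.
        const int sightRange = 96;
        std::vector<Position> spots = mineGrid({2048, 2048}, 480, sightRange);
        std::cout << spots.size() << " candidate mine positions\n";
    }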

Offensive or defensive placement? IceBot and Tyr lay mines outside their own base to defend. Iron lays mines aggressively outside the opponent’s base as part of a containment game plan. Another good offensive mine placement is up ramps, taking advantage of high ground and vision. I haven’t seen any bot do that intentionally.

Strategic versus tactical area denial. Iron’s containment mines are laid for a strategic purpose. “If you ever want to leave, you have to pass my minefield.” It’s also possible to lay mines for short-term tactical purposes: to block a retreat, to hinder reinforcements, to keep vultures unmolested for a short time while they massacre workers.

Strategic/tactical versus combat mine use. IceBot, WOPR, and Tyr lay mines to be triggered later. Tscmoo and Iron not only do that, but also attack sieged tanks by laying mines next to them that trigger immediately. In human play it’s popular to attack dragoons with mines, but that attack is trickier to pull off because the vultures have to coordinate to block the dragoons from running away.

Blocking an expansion. Several bots, including IceBot and Krasi0, lay mines at expansions. The mine scouts the expansion and delays the opponent from taking it. (By the way, dark templar or burrowed zerglings could do the same.) In principle, mines could block other buildings, but what other opponent building appears in a predictable place? Well, you could use mines to delay terran add-ons.

Luring is still too fancy for bots, but they’ll get there someday. For example, lay a mine in the enemy probe line and then try to entice a zealot in. Devastating when it works.

Mines can also be used to bypass map obstacles, to attack drones when triggered by a larva, and more. Besides the positive uses, a complication is that mines can harm friendly units, which the opponent may try to exploit by mine dragging. Skynet knows how to drag mines (at least its code says it does).

How will PerfectBot lay mines? Being perfect and all, PerfectBot knows the uses of mines and can judge which uses are more important when. I have to imagine that PerfectBot does all the things I mentioned above, and probably in an unpredictable adaptive way. I can’t wait until the end of time to see what it looks like!

blog upgrade complete

Upgrade complete! That sure took a long time (I had insufficient vespene gas).

As promised, there is an RSS feed. Online services can turn the feed into e-mail notifications, if that’s what you prefer. Plus other new features. I’ll be working behind the scenes to clean up details and add categories.

Regular Brood War posting mania returns tomorrow.