astonishing discovery: Starcraft is hard
The secret motivation behind the last two days of posts is to remind us all how hard Starcraft is. Not that anybody forgot, but even when you know it’s hard, you don’t know. The amount of knowledge and skill it takes to play well is huge, and we don’t know how huge because most of it isn’t written down. Humans can’t write down everything they know, even when they try.
Adding knowledge to a program is also hard. People who worked on expert systems starting in the 1970s named the problem the knowledge acquisition bottleneck: the bottleneck of acquiring knowledge from humans and making it available to computers.
Right now, most knowledge acquisition in Starcraft bots happens by bot authors manually coding it in, the most direct and the slowest method. There are exceptions, of course. But if we want to make faster progress, we have to find faster ways. There are three broad classes of them.
1. Code it better. One of the things expert system researchers did was invent knowledge representations to make knowledge easier to encode. Computer code is procedural, but knowledge is declarative. You want a knowledge representation language to be as declarative as possible; a fixed set of coded procedures interprets the declarative knowledge to put it to use. LetaBot’s database of build orders is an example: LetaBot declares build orders, and fixed code compares them against scouting information to decide what to do. I approve. A number of bots are coded in a domain-specific-language style, where the program implements what amounts to a higher-level language for describing Starcraft behaviors. Skynet is a good example. I approve of that too. Coding knowledge is still hard, but these kinds of steps make it easier.
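To make the declarative style concrete, here is a minimal sketch in Python. The build order format, the names, and the scouting rule are all my invention for illustration, not LetaBot’s actual representation; the point is the division of labor between data and a fixed interpreter.

```python
# Build orders as declarative data. Hypothetical format, for illustration only;
# a real bot's representation would be richer.
BUILD_ORDERS = {
    "9-pool":   [(9, "Spawning Pool"), (9, "Drone"), (11, "Overlord")],
    "12-hatch": [(12, "Hatchery"), (11, "Spawning Pool"), (13, "Drone")],
}

def choose_build(scouting):
    """Fixed procedure that compares declarative knowledge against scouting.
    The rule here is a made-up example."""
    return "9-pool" if scouting.get("enemy_rushing") else "12-hatch"

def next_item(steps, completed):
    """Fixed interpreter: the next step of the build order, or None when done."""
    return steps[completed][1] if completed < len(steps) else None
```

The payoff is that adding a new build order means adding data, not writing new procedures.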
2. Search bypasses knowledge acquisition. Chess as played by humans also requires a huge amount of knowledge to play well, but chess programs don’t need to encode most of that knowledge because search can discover stuff on its own. In a chess program, most of the skill comes from knowledge encoded procedurally in the search and the evaluation function. Most of the knowledge by bulk, adding a little more skill, comes from databases of opening and endgame positions. And how were those databases created? By offline search, like the strategy catalog that I proposed.
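That division of labor fits in a few lines. This is a bare negamax over a toy game tree, nothing like a real engine; the procedural knowledge lives in the search loop itself and in whatever `evaluate` computes.

```python
# Minimal negamax sketch: search plus an evaluation function.
# The game is an abstract tree, not chess or Starcraft.
def negamax(state, depth, children, evaluate):
    """children(state) -> successor states; evaluate(state) -> score
    from the point of view of the side to move at that state."""
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    # Each child's score is from the opponent's view, so negate it.
    return max(-negamax(child, depth - 1, children, evaluate) for child in moves)
```

Everything the evaluation function doesn’t know, the search can compensate for by looking deeper, which is exactly how it bypasses knowledge acquisition.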
In Starcraft we may want separate searches, and/or separate knowledge bases, at the strategic, tactical, and unit control levels. I don’t know any bot that does strategy search. MaasCraft does tactical search. The SparCraft library does unit control search.
3. Learning automates knowledge acquisition. The strategy learning that some bots do now is different; it is opponent modeling, learning the opponent rather than learning about Starcraft. Learning for knowledge acquisition means learning to play Starcraft better from data, whatever the data may be. A few old bots like BroodWarBotQ used to learn tactics and unit control, though they’re gone now, so apparently it was not too successful. We’ve heard that Tscmoo is working on neural networks for strategic decisions, but we have no details. Other than that, I don’t know any current bot that uses learning for knowledge acquisition, whether for strategy, tactics, or unit control. (Though I wouldn’t be surprised to hear of one.) If a bot learns from replays, it can learn from the replays of humans or of other bots as well as its own.
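For a taste of what learning for knowledge acquisition from replays could look like, here is the simplest possible version: tally outcomes and read off which strategy does best against each enemy opening. The replay tuple format and the strategy names are invented for the sketch; real replay learning would extract far richer features.

```python
from collections import defaultdict

def learn_counters(replays):
    """Tally win rates per (our strategy, enemy opening) pair.
    Each replay is a hypothetical (strategy, enemy_opening, won) tuple."""
    wins = defaultdict(int)
    games = defaultdict(int)
    for strategy, enemy_opening, won in replays:
        games[(strategy, enemy_opening)] += 1
        wins[(strategy, enemy_opening)] += won
    return {key: wins[key] / games[key] for key in games}

def best_counter(win_rates, enemy_opening):
    """The strategy with the highest learned win rate against an opening."""
    candidates = [(rate, strategy) for (strategy, opening), rate in win_rates.items()
                  if opening == enemy_opening]
    return max(candidates)[1] if candidates else None
```

This is knowledge about Starcraft, not about one opponent: the same table works against anybody who plays those openings, and it can be built from anyone’s replays.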
As an aside, I think learning for opponent modeling using replays would be good, but current tournaments don’t allow for it. Without replays available during the tournament, replay learning can only be done offline ahead of time, which is fine for knowledge acquisition but not for opponent modeling.