Tom Peeters’ thesis on ForceBot
I’ve just finished reading Tom Peeters’ master’s thesis about the zerg bot ForceBot, called “Evaluating a Cognitive Agent-Orientated Approach for the creation of Artificial Intelligence in StarCraft”. ForceBot was written using the GOAL agent-oriented logic programming system (itself written in Prolog), and the StarCraft-GOAL Environment Connector (or just connector) that ties GOAL to Starcraft.
For me the interesting part of the thesis is the story of the development of ForceBot, which is divided into episodes or “milestones” for each tournament that the bot participated in. At the beginning of the story, Tom Peeters was fixing basic bugs like “the drone died, so the building was not constructed.” By the end, he was adjusting complex strategic reactions. Those of us who followed ForceBot’s career on SSCAIT may remember that it started out quite weak, and steadily progressed until it became a solid mid-tier bot. It was interesting to get a glimpse into ForceBot’s design and Starcraft knowledge.
ForceBot started with the most straightforward agent architecture: Each unit and building was a GOAL agent, and made its own decisions. To me, with the advantage of already having a well-developed view of how Starcraft bots can and should work, that seems obviously not the right approach. It’s too low-level; you want some form of abstraction, so that the bot can follow a coherent strategy and units can cooperate with each other without each agent having to understand the situation around them from scratch. I think a more natural approach is a hierarchy of agents, or at least a network with higher-level agents so that the unit agents can share a view of the world. And in fact ForceBot evolved in that direction, with higher-level “managers” taking over some of the work of the unit and building agents, and doing a better job because they had a more comprehensive view of the game situation. There are a lot of ways to do it, though, and any one bot can only explore a small part of the design space.
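The contrast between per-unit agents and a hierarchy with higher-level managers can be sketched in a few lines. This is a hypothetical illustration in Python, not ForceBot’s actual code (ForceBot was written in GOAL); the class and method names are invented. The point is that the manager holds one shared view of the world, so individual units don’t each have to reconstruct the situation from scratch:

```python
class UnitAgent:
    """A per-unit agent that acts on orders from its manager
    rather than assessing the whole game state itself."""
    def __init__(self, name):
        self.name = name
        self.orders = []

    def receive(self, order):
        self.orders.append(order)


class ArmyManager:
    """A higher-level agent with a more comprehensive view of the game.

    It maintains shared beliefs (e.g. where the enemy base is) and
    hands out coordinated orders, instead of each unit deciding alone.
    """
    def __init__(self):
        self.units = []
        self.shared_beliefs = {"enemy_base": None}

    def register(self, unit):
        self.units.append(unit)

    def update_belief(self, key, value):
        self.shared_beliefs[key] = value

    def coordinate_attack(self):
        target = self.shared_beliefs["enemy_base"]
        if target is None:
            return  # nothing known yet; units would scout instead
        for unit in self.units:
            unit.receive(("attack", target))


# One scouting report updates the shared view; every unit benefits.
manager = ArmyManager()
zerglings = [UnitAgent(f"zergling_{i}") for i in range(3)]
for z in zerglings:
    manager.register(z)

manager.update_belief("enemy_base", (64, 32))
manager.coordinate_attack()
print(zerglings[0].orders)  # → [('attack', (64, 32))]
```

In the fully flat design, each `UnitAgent` would carry its own copy of the belief-update and target-selection logic, which is the duplication of effort the thesis moved away from.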
I was struck by how much of the development consisted of fighting with the GOAL system and the connector. Many updates were made to the connector, and the author struggled with architectural limitations and performance issues in GOAL. The main purpose of the project was to evaluate GOAL, so in a sense that’s what you want! But in my eyes, the result of the evaluation was that GOAL was not yet mature enough for this ambitious project. There were too many stumbling blocks, and the bot was forced into design compromises that harmed its performance.
I approve of GOAL, though. It’s important to experiment with different software design philosophies, and agent architectures are intuitively appealing. With time and effort, good philosophies can be turned into good software tools, and it takes experience to make progress.
Comments
Yegers on :
(https://bitbucket.org/goalhub/runtime/src/7a5e3bbe670951f99ead73840f2852504f79c981/src/main/java/goal/?at=master)
But it uses Prolog as a sort of querying language for the knowledge the agents store (e.g. your beliefs, goals, etc.)
Jay Scott on :