pathing 5 - potential fields

Pathfinding algorithms come in two kinds: planning algorithms and reactive algorithms. The planning algorithms are the A* gang, which rely on predictions of future events to find good ways forward. When something unpredictable happens (and the enemy will try to make sure that it does), planners have to spend time replanning. Because A* doesn’t try to predict enemy actions, the original plan may have been poor in the first place.

Potential fields are a family of reactive algorithms, or more precisely a data structure and framework for making up reactive navigation algorithms. Reactive algorithms, popularized in robotics by Rod Brooks, look at the immediate situation and react to it. They don’t know or care what happens next; they just try to do something sensible in the moment.

Potential fields seem popular; a lot of bots say they use them. My favorite sources are by Johan Hagelbäck. Ratiotile pointed out the thesis in this comment (thanks!).

A Multi-Agent Potential Field based approach for Real-Time Strategy Game Bots - 2009 thesis with a simpler game than Starcraft
Potential-Field Based navigation in StarCraft - 2012 paper covering OpprimoBot

The idea is to define a potential for each map location (x, y). For its next move, the unit seeks the highest potential in its immediate neighborhood (or, if you think like a physicist, rolls downhill toward the lowest potential). The goal generates an attractive potential that reaches across the map. Obstacles generate a short-range repulsive potential. And so on—the Starcraft paper has a table of OpprimoBot’s sources of potential. To get the overall potential, the separate potentials from each source are combined, by adding them or by taking the max, depending on the effect.
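
To make the idea concrete, here is a minimal toy sketch in C++. It is not from any of the papers; the field shapes and the constants are made-up, and a real bot would have one source type per thing in the table.

#include <cmath>
#include <vector>

struct Point { double x, y; };

double dist(const Point &a, const Point &b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Attractive potential from the goal: highest at the goal itself,
// falling off slowly so it reaches across the whole map.
double goalPotential(const Point &p, const Point &goal) {
    return -dist(p, goal);
}

// Repulsive potential from an obstacle: a sharp negative spike inside
// its influence radius, zero outside it.
double obstaclePotential(const Point &p, const Point &obstacle, double radius) {
    double d = dist(p, obstacle);
    return d < radius ? -10.0 * (radius - d) : 0.0;
}

// Overall potential at p: combine the sources, here by adding.
double potential(const Point &p, const Point &goal,
                 const std::vector<Point> &obstacles) {
    double total = goalPotential(p, goal);
    for (const Point &o : obstacles)
        total += obstaclePotential(p, o, 64.0);   // 64 pixels = invented radius
    return total;
}

Nothing here is specific to goals or obstacles; any number of sources can be summed (or maxed) the same way.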

For efficiency, you don’t calculate the full potential field across the map. For each unit, calculate the potential for a small selection of points where you may want to move next. Also, use a bounding box (or something) to prune away sources of potential which are too far away to matter.
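
Continuing the toy sketch above, move selection might look like this. The candidate spacing, the influence radius, and the pruning distance are all invented numbers; the distance check stands in for a proper bounding box.

// Candidate moves: the unit's current spot plus eight points one step away.
// The unit moves toward whichever candidate has the highest potential.
Point bestMove(const Point &unit, const Point &goal,
               const std::vector<Point> &allObstacles, double step = 8.0) {
    // Prune sources too far away to influence any candidate point.
    std::vector<Point> obstacles;
    for (const Point &o : allObstacles)
        if (dist(unit, o) < 64.0 + step) obstacles.push_back(o);

    Point best = unit;
    double bestScore = potential(unit, goal, obstacles);
    for (int dx = -1; dx <= 1; ++dx) {
        for (int dy = -1; dy <= 1; ++dy) {
            if (dx == 0 && dy == 0) continue;
            Point candidate{unit.x + dx * step, unit.y + dy * step};
            double score = potential(candidate, goal, obstacles);
            if (score > bestScore) { bestScore = score; best = candidate; }
        }
    }
    return best;
}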

Advantages. The strengths of potential fields cover the weaknesses of A*. They naturally handle collisions and destructible obstacles and dynamic hazards and even combat behavior. The thesis suggests a number of tricks for different situations.

It’s easy to think about and easy to implement. The problem shifts from “what behavior do I want in this complex situation?” to “what potential should each object generate?” The problem decomposition is given for you, if you like. You still have to solve the problem, giving attractive fields to goals (“attack the enemy here”) and repulsive fields to hazards (“combat simulator says you’re going to lose, run away”).

Simple micro is easy. To get kiting, give enemies an attractive potential field (up to just inside your firing range) when you’re ready to fire and a repulsive one during cooldown. Some bots overdo it and kite against harmless targets, dragoons against overlords for example; there’s no need to avoid units which can’t hurt you (and it can reduce your firing rate).
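
A sketch of that kiting rule in the same toy terms. The falloff and the 8-pixel margin are stand-ins, and canShootNow and range would come from the game state, not from anything shown here.

// Enemy potential for a ranged unit that kites.
// Ready to fire: pull toward a ring just inside firing range.
// On cooldown: push away, more strongly the closer the enemy is.
double enemyPotential(const Point &me, const Point &enemy,
                      double range, bool canShootNow) {
    double d = dist(me, enemy);
    if (canShootNow)
        return -std::fabs(d - (range - 8.0));           // sweet spot just inside range
    return d < 2.0 * range ? -(2.0 * range - d) : 0.0;  // invented falloff
}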

Simple cooperative behavior falls out. If your tanks avoid each other as obstacles and are attracted to within siege range of enemies, then they’ll form up into a line at the right range.
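
In the toy terms above, that line-forming behavior is just a sum of two sources already defined: a pull toward a ring at siege range, plus a small repulsive bump from each friendly tank. The spacing constant is invented.

// Tank potential: attraction to a ring at siege range of the enemy,
// plus repulsion from nearby friendly tanks so they spread into a line.
double tankPotential(const Point &me, const Point &enemy, double siegeRange,
                     const std::vector<Point> &friendlyTanks) {
    double total = -std::fabs(dist(me, enemy) - siegeRange);
    for (const Point &t : friendlyTanks)
        total += obstaclePotential(me, t, 32.0);        // 32 = invented spacing
    return total;
}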

Complicated behavior is possible. Nothing says that a reactive algorithm has to be 100% reactive—a planner can calculate potential fields too, any kind of potential field for any purpose. You can, say, scan the enemy base and use a planner to decide where each dropship should go, and generate a potential field to pull them to their targets. The dropships will dodge unexpected enemies on their way, if the enemies generate their own fields. Potential fields are general-purpose.

Problems. The famous problem is that the fields may combine in a way that creates a local optimum where a unit gets stuck permanently. OpprimoBot reduces but does not solve the local optimum problem with a “pheromone trail,” meaning that each unit’s own backtrail generates a repulsive field that keeps the unit moving. IceBot uses a different solution, called potential flow, which mathematically eliminates the possibility of local optimum points. One paper is Potential Flow for Unit Positioning During Combat in StarCraft (pdf) by IceBot team members, 2013.
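
A sketch of the pheromone-trail trick as I read it, reusing the toy functions above; the trail length and bump radius are made-up numbers, not OpprimoBot’s.

#include <deque>

// "Pheromone trail": remember a unit's recent positions and make them
// repulsive, so it keeps moving instead of settling into a local optimum.
struct Trail {
    std::deque<Point> recent;                         // last few positions, oldest first

    void record(const Point &p) {
        recent.push_back(p);
        if (recent.size() > 20) recent.pop_front();   // 20 = arbitrary trail length
    }

    // Small repulsive bumps anywhere we have recently been; add this to
    // the unit's overall potential at p.
    double potentialAt(const Point &p) const {
        double total = 0.0;
        for (const Point &q : recent)
            total += obstaclePotential(p, q, 16.0);   // 16 = invented bump radius
        return total;
    }
};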

It may not be easy to tune the potential fields to get good overall behavior. OpprimoBot apparently has hand-tuned fields. The Berkeley Overmind used machine learning to tune its potential fields.

In general, the weakness of potential fields is that they don’t foresee what happens next. The space between these enemy groups is safe—until you get sandwiched. The Berkeley Overmind used to compute the convex hull of the threats to avoid getting caught inside (and then fly around the outside looking for weak points). But that’s only one example of a way to get into trouble by ignoring what’s next. The Starcraft paper gives a simpler example: If the base exit is to the north and the enemy is to the south, units won’t get out.

The bottom line is that potential fields look great for local decision-making, and not so great for larger decisions like “which choke do I go through?” Which immediately suggests that you could use A* to plan a path at the map region level (“go through this sequence of chokes”) and potential fields along the way. OpprimoBot’s documentation says that it uses Starcraft’s built-in navigation for units which have no enemy in sight range, and otherwise switches to potential fields or flocking.
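
One way the combination could look, in the toy terms above. The region-level planner that produces regionPath is assumed, and none of this is OpprimoBot’s actual code.

// Hierarchical navigation sketch: a region-level planner hands us a list of
// waypoints (say, the chokes on an A* path over map regions), and the
// potential field handles local movement toward the next one.
Point nextStep(const Point &unit, const std::vector<Point> &regionPath,
               const std::vector<Point> &obstacles) {
    for (const Point &waypoint : regionPath) {
        if (dist(unit, waypoint) > 96.0)              // 96 = invented "reached" radius
            return bestMove(unit, waypoint, obstacles);
    }
    // Every waypoint reached (or no path): stay put.
    return unit;
}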

I see obvious ways to combine potential fields with forward search to gain advantages from both, and that may go into the final Pathfinder Almighty. I think we knew from the start that the Pathfinder Almighty, which takes all available information into account, was neither a pure planning algorithm (since it expects surprises) nor a pure reactive algorithm (since it has to foresee the range of enemy reactions).

Tomorrow: Flocking.


Comments

PurpleWaveJadien:

Re-stumbled on this post while doing some unrelated research. I recall reading it some months ago and finding some of the existing potential field workarounds silly. There's a pretty quick fix to the "enemy is left, but exit is right" dilemma: use an appropriate travel metric (air distance for air units, but ground distance for ground units). PurpleWave does that, using a BWEM-style fast choke-to-choke algorithm. Meanwhile, tscmoo uses something like the combined pathfinding+potential field approach you mention. He treats undesirable areas as unwalkable and pathfinds around them, but you could get a similar effect by simply jacking up the cost function of undesirable tiles; i.e. if I have to run through this gauntlet of Photon Cannons, which path will cause me the least damage? That approach could be extended to runbys -- set a maximum acceptable threshold of cost to achieve a runby, then try to find a path which satisfies it.
