
pathing 4 - up-to-date A* stuff

For a view of the A* family today, I liked this site the best. Maybe I should have skipped the last two posts and just pointed: “go there!” But I don’t regret stopping at a couple of points along the road to get an idea of the historical development.

Amit’s A* Pages

There’s more stuff here than ought to fit into one post, but I’ll keep to the highlights.

Practical techniques. Amit talks about different choices of heuristic, ways of adjusting the “actual” sunk cost g(x) and/or the heuristic future cost h(x) to get different behavior, and other little tricks. Worth reading, not worth repeating.

Implementation details. Amit has a long section on how to make A* efficient. For the priority queue at the heart of A*, he examines the tradeoffs of too many algorithms and then says that he’s never needed anything other than a binary heap. OK, done!

Various fancier data structures. This seems like a key section. If you represent a Starcraft map as a grid of tiles, then A* will have to do a ton of work to step over each tile that might be in a path. Amit offers these choices that seem relevant to Starcraft.

  • visibility graph - draw polygons around obstacles and navigate from corner to corner
  • navigation mesh - break walkable areas into convex polygons and navigate from edge to edge
  • hierarchical representations - where each hierarchy level may be of a different type
  • skip links - add extra graph edges to take long paths in one A* search step (cheap imitation hierarchical representation)
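
As a toy illustration of the skip-link idea from that list, here’s a uniform-cost search over a six-tile corridor, before and after adding one long precomputed edge. The graph and costs are invented for illustration; the point is that the skip link lets the search cross the corridor as a single edge in the returned graph-level path.

```python
# Toy demonstration of skip links: one extra edge with a precomputed cost
# lets the search treat a long corridor as a single step. The graph here
# is made up; nothing is from a real bot or library.
import heapq

def dijkstra_path(graph, start, goal):
    """Return (cost, path) for the cheapest start-to-goal route."""
    heap = [(0.0, start, None)]   # (cost so far, node, parent)
    prev = {}                     # settled nodes -> parent
    while heap:
        cost, node, parent = heapq.heappop(heap)
        if node in prev:
            continue              # already settled via a cheaper route
        prev[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return cost, path[::-1]
        for nb, w in graph.get(node, []):
            if nb not in prev:
                heapq.heappush(heap, (cost + w, nb, node))
    return None, []

# A 6-tile corridor: 0 - 1 - 2 - 3 - 4 - 5, each step cost 1.
corridor = {i: [(i + 1, 1.0)] for i in range(5)}
print(dijkstra_path(corridor, 0, 5))   # (5.0, [0, 1, 2, 3, 4, 5])

# Same corridor plus a skip link 0 -> 5 carrying the precomputed cost.
corridor[0].append((5, 5.0))
print(dijkstra_path(corridor, 0, 5))   # (5.0, [0, 5]): one step
```

The cheap-imitation nature shows here: the skip link only helps if its cost was computed in advance, which is exactly the preprocessing a real hierarchy would do properly.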

And there are choices about how to use each representation. It’s an important topic, but I don’t know enough to judge which ideas are worth going into in detail. I’ll come back to it when I’ve gotten further along.

Path recalculation when the map changes on you, ditto: I’ll revisit if necessary when I know more.

Islands. If the map has areas you can’t reach by walking but you may want to go to, you’d better mark them as only reachable by air. Do a preprocessing step, I guess, or at least cache the results. You don’t want to have to repeatedly search the whole map during the game to find out that you can’t walk there. That includes not only islands to expand to but cliffs you might want to drop on. Obvious, but I hadn’t thought of it.
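The preprocessing step could be a simple flood fill: label each connected walkable region once, and then “can I walk from A to B?” becomes a cheap label comparison instead of a whole-map search. A minimal sketch, with a made-up toy map; the function names are mine:

```python
# Flood-fill sketch: label connected walkable regions once, then answer
# reachability questions by comparing labels. The tiny map is invented.
from collections import deque

def label_regions(walkable):
    """Map each walkable (x, y) tile to a region id via BFS flood fill."""
    labels = {}
    next_id = 0
    for start in walkable:
        if start in labels:
            continue                      # already part of some region
        labels[start] = next_id
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in walkable and nb not in labels:
                    labels[nb] = next_id
                    queue.append(nb)
        next_id += 1
    return labels

# A mainland at x <= 1 and an island at x >= 3, with a gap in between.
walkable = {(0, 0), (1, 0), (0, 1), (3, 0), (4, 0)}
labels = label_regions(walkable)
print(labels[(0, 0)] == labels[(1, 0)])  # True: same landmass
print(labels[(0, 0)] == labels[(3, 0)])  # False: only reachable by air
```

The same labels answer the cliff case too: a drop target on a different label from your army means air transport is required.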

Group movement more or less says “see flocking.” I’ll do flocking after potential fields, since they’re related. Is it Opprimobot that claims to use a flocking algorithm? Some bot does; I’ll check it out.

Coordinated movement, like moving in formation or moving through a narrow passage in order, earns a paragraph. It’s potentially interesting for Starcraft. If you want to do that coordination in top-down planning style, a better source is the article pair Coordinated Unit Movement and Implementing Coordinated Movement by Dave Pottinger on Gamasutra.

Tomorrow: Potential fields.

pathing 3 - hierarchical algorithms based on A*

As the next step in tracing the development of A* methods, I picked this older paper from 2004 by Adi Botea, Martin Müller, and Jonathan Schaeffer of the games group at the University of Alberta:

Near Optimal Hierarchical Path-Finding

Links to code in the paper are expired, unsurprisingly. The presented algorithm seems OK to me (it’s about saving time by accepting a slightly worse path), but I was most interested in the literature review from pages 5-8.

They talk about a bunch of algorithms based on A* but in some way more efficient. These fancier algorithms are hierarchical, with at least two levels of abstraction. If there are two levels, then the higher level is something like “go through these map regions” (maybe giving border points to pass through), and the lower level is more like “go through these points within a region.” Finding a path means doing a hierarchical search to find a high-level path (“go through these regions”) and a low-level path (“go through these map grid points”). Each level may be an A* search itself.

Note: This is not the same kind of hierarchical search that I promised to talk about, though it’s related. Pathfinding hierarchies have only a simple form of abstraction.

My conclusions:

The hierarchical search, if it has two levels, goes something like this: The top-level search plans a path through map regions. To find the cost of traversing a region (“go from this choke to that one”), it calls on the low-level search. The low-level search should cache answers, because it’s going to be asked the same questions frequently. Details vary by algorithm but seem easy enough to figure out.
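The caching part can be as simple as memoizing the region-traversal question. A sketch, where the low-level search is stubbed out with a lookup table and a call counter so the effect of the cache is visible; all the names and costs are invented:

```python
# Sketch of caching low-level results for a hierarchical search. The
# "low-level search" is a stub; in a real bot it would be an A* run
# inside one region. Names and costs are made up for illustration.
_cost_cache = {}
calls = {"low_level": 0}

def low_level_search(region, entry, exit):
    """Stand-in for a low-level A* between two chokes of one region."""
    calls["low_level"] += 1
    costs = {("bridge", "north", "south"): 12.0,
             ("main", "ramp", "minerals"): 7.0}
    return costs[(region, entry, exit)]

def region_cost(region, entry, exit):
    """Answer 'cost to cross this region' with a cache in front."""
    key = (region, entry, exit)
    if key not in _cost_cache:        # only search once per question
        _cost_cache[key] = low_level_search(region, entry, exit)
    return _cost_cache[key]

# The high-level search asks the same question repeatedly.
total = (region_cost("bridge", "north", "south")
         + region_cost("bridge", "north", "south")
         + region_cost("main", "ramp", "minerals"))
print(total)                  # 31.0
print(calls["low_level"])     # 2: the repeat came from the cache
```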

Maps are dynamic. Not everything stays put. In Starcraft, blocking minerals can be mined out and map buildings can be destroyed. Your own and the opponent’s buildings and units act as physical obstacles too. For full accuracy, the low-level search has to take everything into account. Ideas I’ve seen so far for coping with this within the A* family seem to amount to “replan if the map changes,” which is OK for occasional changes like the destruction of map obstacles.

Be lazy. Not only maps are dynamic, goals are dynamic too; before you reach the end of your path, you may change your mind and want to go somewhere else after all. So don’t spend cpu time to plan the whole path in full accuracy. Make sure the high-level path is good and plan only your immediate moves in full accuracy. One idea is to have a quick low-level search that only takes into account map features and a full-accuracy low-level search that takes everything into account.

Group movement is important in Starcraft, and this paper doesn’t talk about it. You usually want your units together (not straggling separately past obstacles, or taking different bridges when they might find trouble before joining up again), and if the enemy is around then you care about good formation. That deserves another post.

Next: One or two posts about the latest in the A* family. I still have reading to do, so probably not tomorrow.

pathing 2: the classic A* algorithm

The A* algorithm (pronounced “A star”) is famous in AI. It’s a general-purpose best-first graph search algorithm, and finding paths is only one of its uses (though the biggest).

The Wikipedia article makes it sound more complicated than it is. If you need an introduction from zero, I thought a better source was A* Pathfinding for Beginners by Patrick Lester, from 2005 (though some further reading links are broken now).

The situation: You start at a node in a graph. In the graph, every arc between nodes gives the distance between them. Somewhere out there in the graph are goal nodes. You want to find the closest goal node, or at least one of them. Luckily you don’t have to search blindly, because you have a heuristic function h(x) which gives you a guess, for each node x, telling how close it may be to a goal. Of course h(x) = 0 when x is a goal node.

The algorithm: You keep an “open list” of nodes that are in line to be searched. For each node x in the open list you remember its distance from the start, which is traditionally called g(x). The open list starts out containing the start node, with distance 0 from itself. Each search step is: From the open list, pick the node x with the lowest value g(x) + h(x), the one which is estimated to be closest to a goal (that’s what makes it a best-first algorithm). g is how far you have come, h is how far you estimate you have to go, their sum is the estimated total distance. If x is a goal, done. Otherwise remove x from the open list and add any of x’s neighbors that have not already been visited (a node that is or ever was on the open list has been visited). That’s all there is to it; code may be filled out with special-case improvements or implementation details, but the idea is that simple.
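The loop described above fits in a few lines. This is a minimal sketch, assuming an 8-connected tile grid with walls in a `blocked` set; the names are mine, not from any particular bot, and like the description it marks a node visited the first time it lands on the open list:

```python
# Minimal A* sketch following the description above: open list as a
# priority queue ordered by g + h, visited = ever placed on the open
# list. Assumes an 8-connected grid; names are illustrative only.
import heapq
import math

def a_star(start, goal, blocked, width, height):
    """Return a list of (x, y) tiles from start to goal, or None."""
    def h(p):   # straight-line distance: admissible, never overestimates
        return math.hypot(p[0] - goal[0], p[1] - goal[1])

    open_heap = [(h(start), 0.0, start)]   # entries are (g + h, g, node)
    came_from = {start: None}              # doubles as the visited set
    while open_heap:
        _, g, x = heapq.heappop(open_heap)
        if x == goal:                      # done: walk parents backward
            path = []
            while x is not None:
                path.append(x)
                x = came_from[x]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nb = (x[0] + dx, x[1] + dy)
                if (0 <= nb[0] < width and 0 <= nb[1] < height
                        and nb not in blocked and nb not in came_from):
                    came_from[nb] = x
                    ng = g + math.hypot(dx, dy)   # diagonal steps cost more
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None   # open list exhausted: goal unreachable
```

One of those special-case improvements worth knowing: as written, a node keeps the first g it was reached with; fuller implementations update a node’s g and parent when a cheaper route to it turns up.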

Because you always take the best node from the open list, the open list can be implemented as a priority queue. Different ways of breaking ties in the priority queue give different search behavior. All the variants are called A*.

What is the mysterious heuristic function h(x)? If h is the exact distance to the goal, then A* wastes no time on side paths and proceeds straight to the goal. But if you had that, you wouldn’t need A*. If h never overestimates the distance to the goal, then A* is guaranteed to find the shortest path: It took what it thought was shortest at each step, and may have made optimistic mistakes but never overshot, so it could not have overlooked a shorter path.

So for pathfinding on a map, it generally works to set h(x) = the straight-line distance from x to the goal. The actual distance to the goal may be longer than the straight-line distance, but never shorter, so you’ll find the best path. There may be smarter heuristics that know about map regions or obstacles, but they’re not obvious (and not for this post).
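To make the “never overestimates” point concrete, here’s a tiny numeric check, assuming 8-way grid movement where a diagonal step costs √2 (the helper names are mine). Note that Manhattan distance, a popular heuristic on 4-way grids, can overestimate once diagonals are allowed, which breaks the shortest-path guarantee:

```python
# Quick admissibility check, assuming 8-way movement with diagonal
# steps costing sqrt(2). Helper names are invented for illustration.
import math

def true_grid_distance(dx, dy):
    """Shortest 8-way path length on an open grid (octile distance)."""
    dx, dy = abs(dx), abs(dy)
    return (math.sqrt(2) - 1) * min(dx, dy) + max(dx, dy)

def euclidean(dx, dy):
    return math.hypot(dx, dy)

def manhattan(dx, dy):
    return abs(dx) + abs(dy)

# From (0,0) to (3,1): the real cost is sqrt(2) + 2, about 3.41.
d = true_grid_distance(3, 1)
print(euclidean(3, 1) <= d)   # True: straight line never overestimates
print(manhattan(3, 1) <= d)   # False: 4 > 3.41, so it overestimates
```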

A* is mathematically optimal in a certain sense; you can call it “the fastest” algorithm that solves its exact problem. But don’t be fooled. You may be able to do better than A* by solving a different problem. If you imagine pathfinding on a featureless grid with no obstacles, you can calculate a straight-line path from the start to the goal without examining any nodes in between, because you already know all about them—you’re solving an easier problem and you can do it in one step. There may be ways to cast Starcraft pathfinding as an easier problem than graph search (I’m pretty sure there are), so A* is not necessarily optimal for us.

A* is not able to solve pathfinding in full generality. Make each map tile a graph node. A tile blocked by a building or destructible map obstacle can be given a larger distance from its neighbors to represent the time it takes to clear the obstacle. You can even represent that tiles which are under enemy fire are unsafe to travel over by making the distance to those tiles longer, so that other paths are preferred when available. But A* assumes that the graph is static. It can’t cope with other units moving around or with buildings being built or lifting off. Starcraft is too dynamic for A* to solve the whole range of pathfinding problems. A* in its basic form can only do part of the job.
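The cost-inflation trick for unsafe tiles can be sketched directly. Here the threat set and the penalty weight are invented for illustration, with unit step costs to keep it short; the comparison shows how a penalty makes the search prefer a safe detour over a runby of equal length:

```python
# Sketch of threat-weighted step costs: tiles under enemy fire cost
# extra to enter, so equal-length safe detours win. The threat map and
# penalty weight are made up; step cost is simplified to 1 per tile.
THREAT_PENALTY = 10.0   # extra cost per step through a threatened tile

def step_cost(tile, threatened):
    """Cost of entering a tile: 1, plus a penalty if it is under fire."""
    return 1.0 + (THREAT_PENALTY if tile in threatened else 0.0)

def path_cost(path, threatened):
    """Total cost of a path, not counting the starting tile."""
    return sum(step_cost(t, threatened) for t in path[1:])

threatened = {(2, 0)}                               # say a sunken covers this
direct = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]   # 4 steps, one hot tile
detour = [(0, 0), (1, 1), (2, 1), (3, 1), (4, 0)]   # 4 steps, all safe

print(path_cost(direct, threatened))   # 14.0: the runby pays the penalty
print(path_cost(detour, threatened))   # 4.0: going around is cheaper here
```

Tuning the penalty is the interesting part: a huge penalty forbids runbys outright, a moderate one lets the search trade damage taken against distance saved.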

It looks like there’s a ton of stuff in the A* space, so it needs at least two more posts before I move on to the next pathfinding topic. Tomorrow: Hierarchical algorithms based on A*.

pathing 1: intro

I don’t know much about pathfinding, so I decided to write about it.

I wrote earlier about threat-aware pathing, but I didn’t know how to implement it efficiently. Bots are real time and have a lot to think about, so faster is better.

I’ll write about classic A* pathing and its descendants, about potential fields, and about any other interesting ideas my shovel turns up (I’ve seen the corners of a couple other things that might be cool). I’ll look into the pathfinding features of libraries like BWEM. I hope to learn something about how it all works at the low level with BWAPI and how it might interact with Starcraft’s built-in pathfinding. In the end I’ll try to put pieces together to outline how to find paths in full generality, taking into account strategic and tactical goals, obstructions like terrain, destructible map objects, buildings and units, and fields of vision and fields of fire for both sides. I’ll try, but I don’t promise to succeed!

This should be old hat for old hands, but a lot of it is new to me. Maybe I’ll get some help in the comments.

Tomorrow: The classic A* algorithm.

threat-aware pathing

Of all the mistakes bots make, I think the worst is wandering blindly into enemy fire. If you know where the enemy is, then don’t get shot at for nothing—react to the threat one way or another.

Examples: Send an SCV to scout the zerg base, where the sunken kills it. Oops, no scout, send another. The sunken kills it. Oops, no scout, etc.

Or: The enemy is ravaging your expansion. You’ve lost all the workers. That’s not efficient, better balance your bases by sending more workers to the dying base, into the teeth of the attack. Or: Your main mines out and it’s time to transfer workers to a new base, but the enemy is at the gates. So what? Transfer workers through the enemy army.

Or: You’re sieging down static defense, but your attacking tanks/reavers wander too near. Or their supporting units do.

Or: The marine needs to get into the bunker, but a zealot is in the straight line path. Isn’t the straight path the only path?

Or: You’re making a drop. Or: You’re repositioning an overlord or an observer. And so on. I’ve seen bots make these blunders and more.

Bots just gotta be aware of threats. You can’t always know where enemy units are, but when you do, your pathfinding algorithm needs to take them into account. That’s threat-aware pathing. Or anything, as long as it has the same effect.

Leaving aside complications like cloaked units, when a unit or a squad sees enemy fire covering its future path, it has 4 possibilities: 1. It can go in and fight. 2. It can go around the enemy to reach its goal. 3. It can give up on its goal. 4. It can run by, accepting damage to reach its goal.

All four options are important and appear frequently in human games. In more detail:

1. Fight. This makes sense in a lot of cases, but not if your goal is to scout or to transfer workers.

2. Go around. Maybe you can transfer workers over a different bridge and avoid danger. That’s an elementary skill. Maybe you can get drop tech and fly around. That’s an advanced skill. Even if you are intending to fight, it makes sense to stay out of range of as much enemy fire as you can. Burrow the lurkers out of bunker range. Or, when the enemy tanks on high ground can only defend part of the enemy natural, you can slide dragoons into the far side and get shots in until the tanks are able to reposition. (You might want to consider that an issue of micro rather than pathing. Depends on your code.)

3. Give up. If the pather says there’s no safe path, maybe you shouldn’t try. Don’t transfer workers through the enemy army, leave them idle until you can develop a solution.

4. Run by. Bunker, meet rear view mirror. Or let the shuttle take hits, as long as the reaver drops into a good position. If you have a battle simulator that can estimate whether the runby will succeed, it will often be worth it to get in a scout or to aim for the mineral line. (In other words, a battle simulator is more useful if it can simulate runbys too.)

The old zerg bot Berkeley Overmind had threat-aware pathing in 2010. We can do it today too.