pathing 9 - the Pathfinder Almighty
I’ll outline, more or less, how a bot could handle movement in all circumstances, taking everything into account. You knew from the start this wasn’t going to be anything practical, right? PerfectBot might have this kind of system; it’s not something to sit down and implement, it’s a destination to approach over time (if we can find the path). “Pathfinder Almighty” is of course a funny name, since if you manage to create it, it won’t be either of those things. It will be a tactical analysis system that can sometimes achieve its goals!
design wishes
- Understand blocking by buildings.
- Don’t lock yourself in by mistake.
- Know what to do about your own and enemy walls.
- Know about narrowing chokes to restrict runbys.
- Know how buildings increase path lengths.
- Know how many units still fit in the space (for moving groups).
- Understand blocking by units.
- Pass through narrow chokes smoothly (at least don’t get clogged up).
- Don’t (say) siege on a ramp if other units want to move up.
- Use units to physically block (say) zealots from reaching tanks.
- Handle units in stasis, or under maelstrom, or locked down.
- Account for possible enemy vision, enemy fire, enemy movement.
- Account for uncertainty.
- Minimum effort. Spend time planning only what benefits from being planned.
As I mentioned last time, uncertainty is small in the immediate future and grows with time.
I haven’t noticed any bot that seems to understand blocking with units. It would allow a lot of valuable skills, such as the ones below (there’s a small sketch of the body-block idea after the list):
- probe body-blocks the scouting SCV to slow it so the zealot can hit
- block the ramp to delay attacks
  - with zealots until dark templar defense is up
  - with dark templar
  - with eggs
  - with stasis or maelstrom or lockdown
- medics or SCVs block zerglings from reaching marines
- vultures lay mines among dragoons and physically block their escape
- vultures physically block zealots from reaching tanks
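To make the simplest of these concrete, here’s a minimal sketch (names and numbers invented, not code from any existing bot) of where a blocker should stand: on the line between the approaching enemy and the thing you want to protect.

```cpp
// Sketch of choosing a body-block position: stand on the line between the
// approaching enemy and the thing you want to protect, a short distance out.
// Names and numbers are invented for the example.
#include <cmath>
#include <cstdio>

struct Pos { double x, y; };

// Place the blocker `standoff` in front of the target, toward the enemy.
Pos blockPosition(Pos enemy, Pos target, double standoff)
{
    double dx = enemy.x - target.x;
    double dy = enemy.y - target.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6) return target;    // the enemy is already on top of the target
    return { target.x + dx / len * standoff,
             target.y + dy / len * standoff };
}

int main()
{
    Pos zealot{ 100.0, 100.0 };   // enemy zealot closing on our tank
    Pos tank  { 160.0, 120.0 };   // the unit we want to protect
    Pos spot = blockPosition(zealot, tank, 24.0);
    std::printf("move the blocking vulture to (%.1f, %.1f)\n", spot.x, spot.y);
}
```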
modeling the world
By the minimum effort and uncertainty management principles, we want to model the immediate situation (where you start from now, the stuff you can see) in detail and the distant situation (where you want to end up) only roughly, and probably with measures of uncertainty. “Enemy army is not in sight, might be close, but is most likely guarding their choke.”
So model the enemy units you can see at their exact positions, and those in distant or future places maybe as an influence map or a risk map.
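As a rough illustration of the risk-map half, here’s a sketch that assumes a tile grid and a list of last-known enemy sightings; the spread rule, the weights, and the names are all made up for the example.

```cpp
// Minimal sketch of a coarse risk map, assuming a tile grid and a list of
// last-known enemy sightings with ages. The idea: the older the sighting,
// the wider (and flatter) the risk it spreads. All names are invented.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Sighting { int tileX, tileY; int framesAgo; double threat; };

class RiskMap {
public:
    RiskMap(int w, int h) : width(w), height(h), risk(w * h, 0.0) {}

    void update(const std::vector<Sighting>& sightings)
    {
        std::fill(risk.begin(), risk.end(), 0.0);
        for (const Sighting& s : sightings) {
            // Uncertainty grows with time since the sighting.
            double spread = 3.0 + s.framesAgo / 240.0;   // in tiles
            for (int y = 0; y < height; ++y) {
                for (int x = 0; x < width; ++x) {
                    double dx = x - s.tileX, dy = y - s.tileY;
                    risk[y * width + x] +=
                        s.threat * std::exp(-(dx * dx + dy * dy) / (2.0 * spread * spread));
                }
            }
        }
    }

    double at(int x, int y) const { return risk[y * width + x]; }

private:
    int width, height;
    std::vector<double> risk;
};

int main()
{
    RiskMap map(64, 64);
    // One fresh sighting and one old one; the old one spreads more widely.
    map.update({ { 30, 30, 0, 10.0 }, { 10, 50, 600, 6.0 } });
    std::printf("risk near the fresh sighting: %.2f\n", map.at(30, 30));
}
```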
decision making
Here’s what seems to me a good way to divide up the work. Let there be a tactics boss whose job it is to give orders to squads, and a unit boss whose job it is to carry out those orders by issuing detailed orders to units. Orders to a squad are from some list similar to this: Scout a location, move safely to a location (e.g., transfer workers), attack an area/units (go for the enemy), defend an area (e.g. your own mineral line from harassment), hold a line against attack (stop the enemy at a choke), drop a location, assume a formation. Or something along those lines. The tactics boss should be responsible for ordering physical blocking (zealots on the ramp, medics in front of marines, etc.), whether with a formation order or with a “hold position here” order.
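To make that order vocabulary concrete, here’s one way it might look as data. The types and fields are mine, invented for illustration; a real tactics boss would need more.

```cpp
// Sketch of a squad-order vocabulary for the tactics boss, roughly matching
// the list above. Types, fields, and names are illustrative only.
#include <vector>

struct Pos { int x, y; };
using UnitID = int;

enum class OrderType {
    Scout,        // look at a location
    MoveSafely,   // e.g. transfer workers while avoiding danger
    Attack,       // go for an area or for specific enemy units
    Defend,       // e.g. protect a mineral line from harassment
    HoldLine,     // stop the enemy at a choke
    Drop,         // load up and drop a location
    Formation     // take up assigned positions (covers physical blocking)
};

struct SquadOrder {
    OrderType type;
    Pos target;                    // where the order applies
    std::vector<UnitID> enemies;   // specific enemy units, if any
    std::vector<Pos> stations;     // per-unit spots for Formation or HoldLine
};

int main()
{
    // "Zealots hold the ramp": a hold order with explicit stations to block.
    SquadOrder holdRamp{ OrderType::HoldLine, { 48, 120 }, {},
                         { { 47, 119 }, { 48, 120 }, { 49, 121 } } };
    (void)holdRamp;
}
```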
In general, the tactics boss should be responsible for coordination of forces: These units in front, these units behind. Or: This squad holds, that squad comes in from behind for the sandwich. Or: This squad pokes at the front lines to draw defenders forward, then the drop goes in. The tactics boss should only issue orders with a reasonable chance of success. The unit boss is responsible for executing the orders so they have the best local chances of success, but doesn’t take responsibility for any coordination above the squad level; if the holding squad can’t hold or the sandwich squad gets intercepted, that’s the fault of the tactics boss.
The unit boss has to account for a lot of details in a short time, so it should work by a reactive algorithm. It can’t be as simple as the potential fields and flocking examples I wrote up, but something in the same vein. It can rely on data prepared by the tactics boss. In the spirit of reacting to the future, every friendly unit could have an intended course for the immediate future and every visible enemy unit a projected course, so that units coordinate more smoothly.
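Here’s a tiny sketch of the intended-course idea under the simplest possible assumption, constant velocity over a short window. A real unit boss would read velocities from the game and feed the projections into its reactive steering; the names here are invented.

```cpp
// Sketch of the intended-course idea: project every unit a short way forward
// under constant velocity, so that reactive steering can avoid where units
// are *going to be* rather than where they are. Illustrative names only.
#include <vector>

struct Vec2 { double x, y; };

struct UnitTrack {
    Vec2 pos;        // current position
    Vec2 velocity;   // per-frame velocity (estimated for enemy units)
};

// Predict where each unit will be `frames` from now.
std::vector<Vec2> projectCourses(const std::vector<UnitTrack>& units, int frames)
{
    std::vector<Vec2> projected;
    projected.reserve(units.size());
    for (const UnitTrack& u : units) {
        projected.push_back({ u.pos.x + u.velocity.x * frames,
                              u.pos.y + u.velocity.y * frames });
    }
    return projected;
}

int main()
{
    std::vector<UnitTrack> squad = { { { 100, 100 }, { 2, 0 } },
                                     { { 110, 100 }, { 2, 0 } } };
    // About half a second ahead at 24 frames per second.
    std::vector<Vec2> ahead = projectCourses(squad, 12);
    (void)ahead;   // a reactive steering step would repel from these, not from pos
}
```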
The tactics boss has to make complex decisions and, it seems to me, should work by search. (I think a knowledge-based tactics boss is possible, but I don’t see how to create one without first creating a search-based tactics boss.) I think the key decisions in starting to design the search are: 1. Can it be a flat search like MaasCraft’s, or does it want another level of abstraction, making it a hierarchical search? 2. How should it account for uncertainty?
A flat search says: I have these actions, the enemy has those actions, let’s look at sequences of actions for both sides and try to find good ones. A hierarchical search might say: I have these goals. For each goal, what actions (allowing for enemy reactions) might help me achieve it? Is the goal feasible? What combinations of goals can be satisfied? It’s hierarchical because it introduces a new level of abstraction (goals) and searches across both goals and actions. The hierarchical search is more complicated and may seem to do more work, but knowing your goals may also speed up the action search because you can prune away actions that don’t make sense. I don’t know which way is better in this case. It likely depends on implementation choices and stuff.
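To show where the pruning comes from, here’s a toy sketch of the hierarchical version: commit to a goal first, and generate only the actions that serve it. The goals and actions are placeholders, not a real action set.

```cpp
// Toy sketch of the hierarchical idea: commit to a goal first, then generate
// only the actions that serve it. Goals and actions here are placeholders.
#include <string>
#include <vector>

enum class Goal { TakeExpansion, BreakContain, DefendMain };

struct Action { std::string description; };

// A flat search would generate every action for every squad at every node;
// filtering by the current goal is where the pruning comes from.
std::vector<Action> actionsFor(Goal goal)
{
    switch (goal) {
    case Goal::TakeExpansion:
        return { { "move the army to cover the new base" }, { "send the builder" } };
    case Goal::BreakContain:
        return { { "poke the contain with ranged units" }, { "drop behind it" } };
    case Goal::DefendMain:
        return { { "pull the army home" }, { "hold the ramp with blockers" } };
    }
    return {};
}

int main()
{
    // The goal layer searches over goals; the action layer expands only the
    // much smaller set of actions relevant to each goal it considers.
    std::vector<Action> actions = actionsFor(Goal::BreakContain);
    (void)actions;
}
```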
MaasCraft, as I understand it (I hope somebody will correct me if I make a mistake), doesn’t truly account for uncertainty. Its search allows for the enemy to maneuver squads that MaasCraft knows about; MaasCraft does nothing to protect itself against unscouted squads. In estimating how hard it will be to destroy a base, MaasCraft accounts for enemy reinforcements that may be produced during the attack, and that’s the extent of it.
It may be good enough for all I know, at least if scouting is thorough. But I suspect it would be better to consider the uncertainty about the enemy’s army: Number, locations, unit mix, what upgrades will finish this side of the search horizon. You don’t want to miss the point at which the first few high templar have storm energy. A few more ultralisks can turn a battle, and the enemy may have a few more than you expect, or be about to produce them.
The natural kind of search to use is something in the MCTS family, as MaasCraft does. Algorithms in this family work by drawing a statistical sample of action sequences for both sides and letting the statistics sort out which sequences are better. Given that, a natural way to account for uncertainty is to draw a sample of enemy army details too: We don’t know whether zerg concentrated on drones or hydras; if hydras, they may have gone here or over there.
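One way to realize that, sketched below with invented numbers, is to keep a small weighted set of enemy-army hypotheses and sample one at the start of each playout, so the search statistics average over what we don’t know. This is my illustration of the idea, not how MaasCraft works.

```cpp
// Sketch of sampling an enemy-army hypothesis at the start of each playout,
// so the search statistics average over what we don't know. Numbers and
// names are invented for illustration.
#include <random>
#include <vector>

struct EnemyHypothesis {
    int hydralisks;
    int drones;
    bool lurkersSoon;
    double weight;       // how plausible we currently think this hypothesis is
};

// Pick one hypothesis in proportion to its weight; the playout then simulates
// against that concrete enemy army.
const EnemyHypothesis& sampleHypothesis(const std::vector<EnemyHypothesis>& hs,
                                        std::mt19937& rng)
{
    std::vector<double> weights;
    weights.reserve(hs.size());
    for (const EnemyHypothesis& h : hs) weights.push_back(h.weight);
    std::discrete_distribution<> pick(weights.begin(), weights.end());
    return hs[pick(rng)];
}

int main()
{
    std::mt19937 rng(42);
    std::vector<EnemyHypothesis> hypotheses = {
        { 12, 30, false, 0.5 },   // hydra-heavy
        {  4, 45, false, 0.3 },   // drone-heavy
        {  8, 35, true,  0.2 },   // lurkers on the way
    };
    for (int playout = 0; playout < 3; ++playout) {
        const EnemyHypothesis& h = sampleHypothesis(hypotheses, rng);
        (void)h;   // ...run one playout against this concrete army...
    }
}
```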
I think a smart evaluation function will be a key ingredient. I find MaasCraft’s evaluation much too primitive and low-level. A smart evaluator, even if it’s slower than you think you can get away with, will make search much more effective. I know from experience. A smart evaluator can reduce the number of nodes a search needs to examine by orders of magnitude.
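For what it’s worth, here’s the flavor of evaluator I have in mind, combining material, map control, and tech timing. Every term and weight is made up for illustration.

```cpp
// Sketch of an evaluation function that looks beyond raw army value.
// Every term and weight is invented for illustration, not tuned.
#include <cstdio>

struct Eval {
    double myArmyValue;
    double enemyArmyValue;
    double myMapControl;       // how much ground our risk map says we hold
    double enemyTechThreat;    // e.g. storm energy or ultras finishing soon
};

double evaluate(const Eval& e)
{
    double material = e.myArmyValue - e.enemyArmyValue;
    double position = 0.3 * e.myMapControl;
    double tech     = -0.5 * e.enemyTechThreat;
    return material + position + tech;
}

int main()
{
    Eval node{ 2400.0, 2200.0, 150.0, 300.0 };
    std::printf("evaluation: %.1f\n", evaluate(node));
}
```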
And that’s about as far as I can work it out without starting to make arbitrary implementation decisions. Have fun with it if you can!
Anyway, this is what makes sense to me. You should do what makes sense to you. But whatever else this is, it’s rather a lot for something I started out calling pathfinding!