
Steamhammer games and status

Steamhammer played an excellent game versus Monster today. The game is kind of long and boring to watch, with repetitive action, but I’m pleased by the good play against stubborn defense. Steamhammer wasted some resources and missed some opportunities, but made no severe mistake at any point. It even expanded at a good time, which is depressingly rare in its ZvZs. Near the end, Steamhammer tried to put the cherry on top by ensnaring Monster’s mutalisks, but the mutas zoomed by too fast, the ensnare missed, and the queen was shot down. Oh well, dropping the cherry didn’t change the rest!

For a game that is not in the least excellent but is interesting for its mistakes, I like yesterday’s Steamhammer-Slater game. I watched the game live, and when Steamhammer bumbled the defense of its natural I steeled myself for a quick upset. But it was not so quick after all. The game is a showcase of ways to go wrong on both sides. Some of Steamhammer’s mistakes remain unresolved because my planned fixes are complicated and need to be implemented as projects.

The latency compensation bug is still making me scratch my head. The easiest way to work around it is to use the Micro module’s order tracking; Steamhammer already keeps track of what orders it has given to units, including larvas, so it doesn’t need to rely on BWAPI to keep it straight. I traced the backbone of the production code and added the minimal workaround, a two-line addition to the code that decides whether a unit should be added to the set of candidate producers. And... it didn’t work. In order to control where zerg units are made, to do things like make drones at bases that don’t have enough drones, there is a special-case low-level routine, and it ignores the set of candidate producers and does its own calculations from scratch—slightly complicated calculations that the candidates don’t make easier. I’m still thinking about the right fix. Maybe I can find a way to make it simple and powerful at the same time.
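The workaround idea can be sketched roughly like this. This is not Steamhammer's actual code; the class and function names (`OrderTracker`, `candidateProducers`, the latency window) are illustrative, assuming the bot records every order it issues and uses that record, rather than BWAPI's latency-compensated view, to decide which larvas are genuinely free:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical sketch: remember the last order we issued to each unit
// ourselves, instead of trusting BWAPI to reflect it on later frames.
enum class OrderKind { None, Morph, Train };

struct IssuedOrder {
    OrderKind kind = OrderKind::None;
    int frame = -1;          // frame on which the order was issued
};

class OrderTracker {
    std::map<int, IssuedOrder> orders_;   // unit ID -> last issued order
public:
    void record(int unitID, OrderKind kind, int frame) {
        orders_[unitID] = { kind, frame };
    }
    // A unit counts as busy if we gave it an order within the latency
    // window, even if BWAPI does not yet show the order as taken.
    bool isBusy(int unitID, int currentFrame, int latencyFrames) const {
        auto it = orders_.find(unitID);
        if (it == orders_.end()) return false;
        return it->second.kind != OrderKind::None &&
               currentFrame - it->second.frame <= latencyFrames;
    }
};

// The minimal two-line-style workaround: when collecting candidate
// producers, skip larvas we already ordered this latency window.
std::vector<int> candidateProducers(const std::vector<int>& idleLarvas,
                                    const OrderTracker& tracker,
                                    int currentFrame, int latencyFrames) {
    std::vector<int> candidates;
    for (int id : idleLarvas) {
        if (!tracker.isBusy(id, currentFrame, latencyFrames)) {
            candidates.push_back(id);
        }
    }
    return candidates;
}
```

As the post notes, a filter at this level is not enough by itself if a lower-level routine bypasses the candidate set and recomputes producers from scratch; the same busy-check would have to be applied there too.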

It is, by the way, a serious bug. In Steamhammer, the effect is to sometimes—at predictable times—drop a unit that was queued for production. Among other things, it turns 12 hatch openings into 11 hatch. I had noticed that Steamhammer was playing 11 hatch surprisingly often, but it does have a full suite of intentional 11 hatch openings, so I didn’t realize that it was due to a bug.


Comments

Bytekeeper on :

I think reserving/locking a unit until a task is confirmed to have started is a good choice.

I started out without latcom, which meant my bot often built things twice. It usually worked out for units, because if a larva was still a larva, the second morph/train did not matter.

For buildings it was terrible, even with locking/remembering the worker. The worker got the build order, but the next frames did not reflect that, so another worker was chosen and I had 2-3 pools (do 3 pools count as a 3 pool, hmmm?).

Also, BWAPI events are more of a guideline than a rule, due to the strange unit transitions. So extractors were not working, etc.

So my current build logic checks whether the expected building exists at the expected location.

Of course earlier iterations took the enemy extractor in my base as a sign that building the extractor worked...
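A confirmation check like the one described might be sketched as below. This is an assumption-laden illustration, not the commenter's actual code: the `Building` struct and `buildConfirmed` function are invented names, and the ownership flag is exactly the detail that the enemy-extractor anecdote shows is necessary:

```cpp
#include <cassert>
#include <vector>

// Illustrative "did my building actually start?" check. Instead of
// trusting BWAPI events, scan the visible buildings for one of the
// expected type at the expected tile -- and require that it is ours,
// so an enemy extractor on our gas does not count as success.
struct Building {
    int type;          // unit type ID
    int tileX, tileY;  // tile position
    bool mine;         // owned by us, not the enemy
};

bool buildConfirmed(const std::vector<Building>& visible,
                    int expectedType, int expectedX, int expectedY) {
    for (const auto& b : visible) {
        if (b.type == expectedType &&
            b.tileX == expectedX && b.tileY == expectedY &&
            b.mine) {
            return true;
        }
    }
    return false;
}
```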

I should get back to board game AIs again...

Jay Scott on :

It’s a special case of goal monitoring:

http://satirist.org/ai/starcraft/blog/archives/604-goal-monitoring-as-a-structural-principle.html

Bytekeeper on :

Yeah, a small part of it. I find goal monitoring to be very hard.
The thing you described in there can be achieved with behaviour trees, for example (as I do). I have some nodes where I try it.

E.g., I had a node for moving the worker to a build location, and it remembered the expected time of arrival, and failed if that was overshot by some margin.
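A behavior tree leaf like that might look as follows. This is a hedged sketch with invented names (`MoveToBuildLocation`, `NodeStatus`), assuming frames as the time unit; the point is only the monitoring pattern of a remembered deadline plus margin:

```cpp
#include <cassert>

// Sketch of a behavior tree leaf that monitors its own goal: move a
// worker to a build location, remember the expected arrival frame,
// and fail if the deadline is overshot by some margin.
enum class NodeStatus { Running, Success, Failure };

class MoveToBuildLocation {
    int expectedArrival_;    // frame we expect the worker to arrive
    int margin_;             // extra frames allowed before giving up
public:
    MoveToBuildLocation(int expectedArrival, int margin)
        : expectedArrival_(expectedArrival), margin_(margin) {}

    NodeStatus tick(int currentFrame, bool arrived) {
        if (arrived) return NodeStatus::Success;
        if (currentFrame > expectedArrival_ + margin_) {
            // Overshot: fail and let a parent node decide what to do.
            return NodeStatus::Failure;
        }
        return NodeStatus::Running;
    }
};
```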

But now what?
The move failed! I use a utility value on a node to describe how important it is. Ideally I should change that first, because building that creep colony might not be important enough to transport my drone to an island...
Still, should I just look for another worker? That's what my code did, and lo and behold, it might either select the same worker or fail to move another worker, and retry again and again.

OK, so maybe stop retrying in the goal monitoring of the build node. Yeah, but what if it was an order on my build order list? I could make the whole build order node fail. Retrying would be stupid, as it would just mean retrying the failed build again.
So, yes it might be correct to fail and try a dynamic approach afterwards.

Parts of this are what my bot actually does.
But a lot of monitoring would be situation specific. E.g., I expect the hydra den to be started in 10 seconds. Now an enemy rush requires worker defense. Now, 20 seconds later, even with some leeway ... I failed. But in this case it might be OK to retry; a delay was to be expected, after all...
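One way to express "a delay was to be expected" is to stretch the deadline while a known interruption is in progress. This sketch is a guess at one possible mechanism, not anything either bot implements; `MonitoredGoal` and its methods are made-up names:

```cpp
#include <cassert>

// Illustrative sketch: a monitored goal whose deadline stretches while
// a known interruption (e.g. pulling workers for rush defense) is in
// progress, so an expected delay does not count as failure.
class MonitoredGoal {
    int deadline_;   // frame by which the goal should be achieved
public:
    explicit MonitoredGoal(int deadline) : deadline_(deadline) {}

    // Each frame spent interrupted pushes the deadline back one frame.
    void noteInterruptedFrame() { ++deadline_; }

    bool overdue(int currentFrame) const { return currentFrame > deadline_; }
};
```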

You mentioned machine learning in that post, and I find it might be required to make goal monitoring work well enough.

Jay Scott on :

I agree, goal monitoring is hard for me too. That’s why so little of it is implemented. But it still seems important....
