When I started building TinyGenerals, I knew it had to be a multiplayer game - but I also knew it needed AI, especially for the early days when the player pool would be too small for people to find matches.
The core game is PvP, so the AI was meant to be a stopgap. My inspiration came from Civilization 1’s AI, including its legendary cheats. Fresh off reading Sid Meier’s memoir, I found the whole thing fit together nicely. I even sketched out plans for an Easter egg: a Gandhi commander that could overflow its aggression counter and transform from a peacekeeper into a berserker warrior - a bug so iconic I’m still considering implementing a toned-down version.
The Problem with Simple Rules
But here’s the thing: what worked in Civ 1 over 30 years ago doesn’t necessarily work in a game played primarily on tiny 12×12 hex maps.
The initial AI was just… bad. It was either naive or overly cautious - it wouldn’t use its advantages and generally got outplayed easily. The decision tree approach of “do A, then B, then C” doesn’t translate well to fast-paced tactical combat.
Enter Utility AI
I pivoted to Utility AI. Instead of a rigid priority list, every possible action gets scored based on the current situation, and the action with the highest utility wins.
Our codebase is full of these “magic numbers”. Here’s actual logic from our bot/service.go simplified into pseudocode:
```
function evaluateTarget(target, gameState):
    score = 100  // Base score

    // Magic numbers galore!
    if canKillInOneHit(target):
        score += 400  // Priority - eliminate this now!
    if isWeakPlayer(target):
        score += 800  // Critical - finish off the weakest opponent!
    if isUndefendedAsset(target):
        score += 500  // Jackpot - easy capture!

    // Penalize bad ideas
    if isLocallyOutnumbered(target):
        score -= 220  // Avoid losing fights

    return score
```
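The scoring function is only half of Utility AI; the other half is evaluating every candidate action and taking the highest scorer. Here is a minimal, runnable Go sketch of that selection loop - the `Target` struct and helper flags are illustrative stand-ins, not the actual types in bot/service.go:

```go
package main

import "fmt"

// Target is a hypothetical stand-in for whatever the real bot evaluates.
type Target struct {
	Name               string
	KillableInOne      bool
	OwnerIsWeak        bool
	Undefended         bool
	LocallyOutnumbered bool
}

// evaluateTarget mirrors the pseudocode above: a base score plus
// hand-tuned bonuses and penalties - the "magic numbers".
func evaluateTarget(t Target) float64 {
	score := 100.0
	if t.KillableInOne {
		score += 400
	}
	if t.OwnerIsWeak {
		score += 800
	}
	if t.Undefended {
		score += 500
	}
	if t.LocallyOutnumbered {
		score -= 220
	}
	return score
}

// pickTarget is the core of Utility AI: score everything, take the max.
// Assumes at least one candidate exists.
func pickTarget(targets []Target) (Target, float64) {
	best, bestScore := targets[0], evaluateTarget(targets[0])
	for _, t := range targets[1:] {
		if s := evaluateTarget(t); s > bestScore {
			best, bestScore = t, s
		}
	}
	return best, bestScore
}

func main() {
	targets := []Target{
		{Name: "guarded city", LocallyOutnumbered: true},
		{Name: "stray scout", KillableInOne: true, OwnerIsWeak: true},
	}
	best, score := pickTarget(targets)
	fmt.Printf("%s (%.0f)\n", best.Name, score) // stray scout (1300)
}
```

The nice property of this pattern is that scores compose: adding a new consideration is one more `if`, not a restructured decision tree.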
This worked better, but the AI was still predictable. With larger maps, it would get stuck in loops or build units in one corner and never use them in combat. The problem? It had no sense of the map itself. It was just chasing numbers.
The Breakthrough: Influence Maps
This is where Influence Maps changed everything. Think of it as a spatial heat map showing:
- Where enemies have pressure
- Where we have strength advantages
- Where the front lines are
- Which routes are safe to move through
We implemented an InfluenceMap struct that pre-calculates the battlefield state every turn:
```go
type InfluenceMap struct {
    threatLevel  [][]float64 // Where are we likely to get hit?
    ourControl   [][]float64 // What territory do we own?
    theirControl [][]float64 // What do they own?
}
```
```
function calculateInfluence(gameState):
    // Calculate radiated influence from every unit
    for each unit in gameState.allUnits:
        power = unit.strength
        // Radiate power outwards (like heat dissipating)
        propagateInfluence(unit.position, power, falloff_rate)
```
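To make the propagation step concrete, here is a runnable sketch of one way `propagateInfluence` could work: multiplicative falloff per ring of distance. It uses a square grid with Chebyshev distance for brevity - the actual game runs on hex maps, and the falloff metric is an assumption, not the real TinyGenerals implementation:

```go
package main

import "fmt"

// propagateInfluence spreads a unit's power outward with a per-step
// falloff factor, like heat dissipating from a point source.
// (Illustrative grid version; the real game uses hex distances.)
func propagateInfluence(field [][]float64, x, y int, power, falloff float64) {
	for cy := range field {
		for cx := range field[cy] {
			// Chebyshev distance as a cheap ring metric on a grid.
			d := abs(cx - x)
			if dy := abs(cy - y); dy > d {
				d = dy
			}
			contribution := power
			for i := 0; i < d; i++ {
				contribution *= falloff
			}
			field[cy][cx] += contribution
		}
	}
}

func abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

func main() {
	field := make([][]float64, 5)
	for i := range field {
		field[i] = make([]float64, 5)
	}
	// A strength-8 unit at the center, halving per ring:
	// center gets 8, the first ring 4, the outer ring 2.
	propagateInfluence(field, 2, 2, 8, 0.5)
	for _, row := range field {
		fmt.Println(row)
	}
}
```

Summing these contributions over all units per side gives exactly the `ourControl` / `theirControl` layers above, and `threatLevel` falls out of the enemy layer.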
Suddenly, many of the magic numbers I’d been tweaking could be replaced with dynamic, spatial information. The AI understood pressure points, safe zones, and tactical advantages.
Strategic vs. Tactical Phases
I drew inspiration from Auftragstaktik, the German military concept in which generals set objectives and field commanders decide how to execute them. I split AI decision-making into two phases:
Strategic Phase:
- Assess overall map control using the Influence Map.
- Set a primary objective (e.g., “Push Left Flank”).
- Allocate units to strategic goals.
Tactical Phase:
- Individual units execute the plan.
- React to immediate threats.
- Maintain formation.
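The strategic phase boils down to comparing aggregate influence per region and committing to one objective. A minimal Go sketch of that idea - the regions, thresholds, and objective names here are illustrative, not the actual TinyGenerals logic:

```go
package main

import "fmt"

// Objective is what the "general" hands down; units decide the tactics.
type Objective int

const (
	Defend Objective = iota
	PushLeftFlank
	PushRightFlank
)

// chooseObjective is the strategic phase: compare summed influence
// per region and pick where to commit.
func chooseObjective(ourLeft, ourRight, theirLeft, theirRight float64) Objective {
	switch {
	case theirLeft+theirRight > ourLeft+ourRight:
		return Defend // outmatched overall: hold ground
	case ourLeft-theirLeft >= ourRight-theirRight:
		return PushLeftFlank // biggest local advantage is on the left
	default:
		return PushRightFlank
	}
}

func main() {
	names := map[Objective]string{
		Defend: "Defend", PushLeftFlank: "PushLeftFlank", PushRightFlank: "PushRightFlank",
	}
	// Strong on the left, contested on the right.
	fmt.Println(names[chooseObjective(6, 3, 2, 4)]) // PushLeftFlank
}
```

The tactical phase then scores concrete moves for each unit (as in the utility function earlier), but only among moves that serve the chosen objective.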
The Moment It Clicked
Here’s where it got fun: I was testing on a map with a narrow choke point. The AI started using flanking maneuvers, routing units through alternative paths to hit me from the side.
I never programmed explicit flanking behavior. It emerged naturally because the Influence Map showed the choke point was “high threat,” while the side route was “safe.” The Utility AI simply chose the path of least resistance to the target.
And it worked. It caught me off-guard because I wasn’t expecting it. That’s when I knew we were onto something.
What’s Next: Behavior Trees & GOAP
We are currently migrating from the massive service.go file (over 3,000 lines!) to structured Behavior Trees:

```go
planTree := bt.New(
    bt.Selector,
    // Priority 1: Handle critical needs
    bt.New(CheckCriticalGap, BuildDefenseUnit),
    // Priority 2: React to strength disadvantage
    bt.New(CheckWeakPosition, BuildOffenseUnit),
    // Priority 3: Spend surplus resources
    bt.New(CheckExcessResources, BuildSupportUnit),
)
```
I’ve also been reading up on Goal-Oriented Action Planning (GOAP) - the system that powered the AI in F.E.A.R. GOAP separates what the AI wants to achieve from how to achieve it.
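To show what that separation buys you, here is a toy GOAP-style planner: actions declare preconditions and effects against a shared world state, and a search finds a sequence that reaches the goal. This is a deliberately naive depth-limited forward search over boolean facts - illustrative of the idea, nothing like F.E.A.R.'s actual implementation:

```go
package main

import "fmt"

// State is a set of facts that are currently true about the world.
type State map[string]bool

// Action declares a precondition and an effect. GOAP's key property:
// actions never reference each other, only the world state.
type Action struct {
	Name   string
	Pre    string // single precondition, kept minimal for the sketch
	Effect string
}

func (s State) clone() State {
	c := State{}
	for k, v := range s {
		c[k] = v
	}
	return c
}

// plan does a naive depth-limited forward search for a sequence of
// actions that makes the goal fact true. Returns nil if none found.
func plan(start State, goal string, actions []Action, depth int) []string {
	if start[goal] {
		return []string{}
	}
	if depth == 0 {
		return nil
	}
	for _, a := range actions {
		if start[a.Effect] {
			continue // effect already achieved
		}
		if a.Pre != "" && !start[a.Pre] {
			continue // precondition not met
		}
		next := start.clone()
		next[a.Effect] = true
		if rest := plan(next, goal, actions, depth-1); rest != nil {
			return append([]string{a.Name}, rest...)
		}
	}
	return nil
}

func main() {
	actions := []Action{
		{Name: "build barracks", Pre: "", Effect: "hasBarracks"},
		{Name: "train army", Pre: "hasBarracks", Effect: "hasArmy"},
		{Name: "attack base", Pre: "hasArmy", Effect: "enemyDefeated"},
	}
	fmt.Println(plan(State{}, "enemyDefeated", actions, 4))
	// → [build barracks train army attack base]
}
```

The planner was never told the order; it derived "barracks, then army, then attack" purely from preconditions. Adding a new action (say, "bribe garrison" with a different precondition) gives the AI a new route to the same goal with zero changes to existing actions.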
I’ve even started experimenting with something wild: consulting LLMs (Mistral, Gemini, Claude) for strategic planning. Yeah, I might be overcomplicating things. But the results are interesting enough to keep exploring.
The Real Question
Do these sophisticated systems matter? For TinyGenerals in its current state - probably not critical. But for future games and more complex scenarios? Absolutely.
Help Shape the Game
Want to test this AI yourself? Head over to tinygenerals.com and play for free - no ads, no paywalls. Let me know what works, what doesn’t, and what surprised you. Did the AI outmaneuver you? Did it make stupid decisions? Tell me.
Supporting the project? Buy me a coffee on Ko-fi.
Tool Recommendation: Machinations.io
One tool has massively accelerated my economy and strategy testing: Machinations.io. It lets you build interactive flowcharts that simulate game systems. I can validate unit balance, test resource distribution, and run thousands of game turns in seconds - all before writing a single line of game code.
TinyGenerals is in active development. We’re continuously refining the AI, expanding map variety, and exploring new strategic mechanics. Follow along as we build the ultimate turn-based strategy experience.