Use expectiminimax instead of minimax when random events affect the outcome and you can model those events with probabilities. Minimax assumes the "other side" chooses the worst possible outcome for you on its turns. That makes sense for an adversary, but not for dice rolls, card draws, probabilistic hits, or randomized spawns. Expectiminimax adds a third node type, CHANCE, and evaluates it by taking an expected value over its outcomes. This aligns the search with how the environment actually behaves: not adversarially, but stochastically.
In implementation terms, you build a tree with MAX nodes (your choices), MIN nodes (opponent choices), and CHANCE nodes (random outcomes). MAX backs up the maximum child value, MIN backs up the minimum, and CHANCE backs up a probability-weighted average: E = Σ p(i) * value(child_i). The probabilities come from the game rules (a fair die) or a model (a hit chance). This makes the tree larger: chance nodes expand the branching factor and effectively add levels. It also complicates pruning: classic alpha-beta doesn't carry over directly, because you can only cut off a partial expectation if the evaluation values themselves are bounded (the idea behind *-minimax pruning). You can still use depth limits, caching, and sampling to manage cost; many practical implementations sample chance outcomes when the outcome space is huge, trading an exact expectation for an estimate.
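The three backup rules above can be sketched in a few lines. This is a minimal illustration, not a production search: the `Node` type, its fields, and the idea of storing each child with its probability are assumptions made for the example (at MAX/MIN nodes the probability slot is simply ignored), and it omits depth limits, caching, and sampling.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                 # "max", "min", "chance", or "leaf"
    value: float = 0.0        # used only when kind == "leaf"
    # Each child is stored as (probability, Node); the probability
    # matters only under a chance node and is ignored elsewhere.
    children: list = field(default_factory=list)

def expectiminimax(node: Node) -> float:
    if node.kind == "leaf":
        return node.value
    child_values = [expectiminimax(child) for _, child in node.children]
    if node.kind == "max":
        return max(child_values)          # MAX backs up the maximum
    if node.kind == "min":
        return min(child_values)          # MIN backs up the minimum
    # CHANCE backs up the expectation: E = sum of p_i * value_i
    return sum(p * v for (p, _), v in zip(node.children, child_values))
```

Note that a chance node's value can fall anywhere between its children's values, which is exactly why the min/max bounds that drive alpha-beta cutoffs no longer apply directly.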
A concrete example: imagine an attack that hits 70% of the time for 10 damage and misses 30% of the time for 0 damage. If you use minimax and treat "chance" like an adversary, you'll plan as if the attack always misses and avoid good probabilistic moves. Expectiminimax computes 0.7 * value(hit_state) + 0.3 * value(miss_state), which yields more realistic play. The same idea transfers to uncertainty in data-driven systems: when uncertainty is best modeled probabilistically rather than adversarially, compute expected utility rather than worst-case utility. In retrieval workflows using Milvus or Zilliz Cloud, you might treat candidate relevance as uncertain and weight outcomes by confidence signals (similarity score, metadata checks, validation passes). When the cost of being wrong is moderate and you care about average performance, expectiminimax-style expectation is often a better fit than minimax's worst-case stance.
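The numbers in the attack example can be worked through directly. The 70%/10-damage/0-damage figures come from the text; the alternative "safe" move worth a guaranteed 4 damage is a hypothetical comparison point added to show how the two evaluations lead to different choices.

```python
p_hit, dmg_hit, dmg_miss = 0.7, 10.0, 0.0

# Minimax treats the random outcome like an adversary: assume the worst.
minimax_value = min(dmg_hit, dmg_miss)                     # 0.0

# Expectiminimax weights each outcome by its probability.
expected_value = p_hit * dmg_hit + (1 - p_hit) * dmg_miss  # 7.0

safe_value = 4.0  # hypothetical guaranteed alternative
worst_case_choice = "attack" if minimax_value > safe_value else "safe move"
expected_choice = "attack" if expected_value > safe_value else "safe move"
# Worst-case reasoning picks the safe move; expected-value
# reasoning correctly prefers the attack (7.0 > 4.0).
```

Under worst-case evaluation the attack scores 0 and loses to any guaranteed payoff; under expectation it scores 7 and wins, which is the "more realistic play" described above.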
