Transposition tables speed up Minimax by caching evaluations of positions so you don’t recompute the same subtree multiple times. In many games (and other state-space searches), the same logical position can be reached through different move orders. Without caching, Minimax re-evaluates that position every time it is reached, and the redundant work can grow exponentially with depth. A transposition table stores the computed value for a state (often keyed by a fast hash), so when the search reaches that state again and the stored entry was searched to at least the depth now required, it can reuse the result immediately. This eliminates redundant work and can turn “barely fits in time” into “comfortably fits,” especially in games with lots of move transpositions.
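To make the transposition effect concrete, here is a minimal sketch using a hypothetical toy game: players alternately take a number from a shared pool, and the maximizer’s payoff is the difference between the sums each side collected. Because the state is just the set of remaining numbers, the same position is reached by many move orders, and a plain dict works as the transposition table. The game, function names, and call counter are all invented for illustration.

```python
# Toy game: players alternate taking one number from the pool.
# Score (from the maximizer's view) = (sum max took) - (sum min took).
# The same remaining pool is reachable via many move orders, so a
# cache keyed on the state avoids re-searching identical subtrees.

calls = {"n": 0}  # count recursive calls to show the savings

def minimax(pool, maximizing, cache=None):
    calls["n"] += 1
    key = (pool, maximizing)
    if cache is not None and key in cache:
        return cache[key]          # transposition hit: reuse stored value
    if not pool:
        return 0                   # game over, nothing left to take
    scores = []
    for x in pool:
        child = minimax(pool - {x}, not maximizing, cache)
        # the mover banks x: it adds for the maximizer, subtracts for the minimizer
        scores.append(x + child if maximizing else -x + child)
    best = max(scores) if maximizing else min(scores)
    if cache is not None:
        cache[key] = best
    return best
```

Running it on a 7-number pool with and without a cache returns the same value, but the cached run makes far fewer recursive calls, because each unique pool is expanded only once.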
Practically, a transposition table is a hash map from stateHash -> entry. An entry typically includes: the best score found, the depth at which it was computed, a flag describing how the score should be interpreted (exact value, lower bound, or upper bound), and often the best move from that state. The depth matters because a value computed at depth 6 is more informative than one computed at depth 2. When probing the table during search, you check whether the stored depth is sufficient for your current search depth. If it is, you can use the cached value; if not, you may still use the entry as a hint (e.g., for move ordering). With alpha-beta, the bound flags are critical: they let you use cached bounds to prune without needing an exact value. Even with plain Minimax, exact caching still helps a lot, but you’ll see the biggest wins when pruning is involved.
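A sketch of the entry layout and probe logic described above, under the usual alpha-beta conventions (a LOWER bound comes from a beta cutoff, an UPPER bound from a fail-low). The `TTEntry`/`probe` names are illustrative, not from any particular engine:

```python
from dataclasses import dataclass
from enum import Enum

class Bound(Enum):
    EXACT = 0   # score is the true minimax value
    LOWER = 1   # score is a lower bound (search failed high / beta cutoff)
    UPPER = 2   # score is an upper bound (search failed low)

@dataclass
class TTEntry:
    score: float
    depth: int          # search depth the score was computed at
    bound: Bound
    best_move: object = None   # useful for move ordering even on a shallow hit

def probe(table, key, depth, alpha, beta):
    """Return a usable score for this (depth, alpha, beta), or None."""
    entry = table.get(key)
    if entry is None or entry.depth < depth:
        return None     # missing or too shallow; best_move may still order moves
    if entry.bound is Bound.EXACT:
        return entry.score
    if entry.bound is Bound.LOWER and entry.score >= beta:
        return entry.score     # proven >= beta: immediate cutoff
    if entry.bound is Bound.UPPER and entry.score <= alpha:
        return entry.score     # proven <= alpha: immediate cutoff
    return None         # bound exists but doesn't resolve this window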
Example: in a board game where a position can be reached by “A then B” or “B then A,” you avoid fully evaluating both subtrees. In code, you might compute a Zobrist hash (or any stable hashing approach) over the board plus side-to-move and store entries in a fixed-size table with a replacement policy (a common choice is depth-preferred: keep the deeper entry). You’ll also want to include all relevant state in the hash, such as castling rights, en-passant squares, or repetition counters; otherwise you’ll incorrectly treat distinct states as identical. The same idea applies to other domains where repeated subproblems occur: if you’re exploring decision trees that depend on retrieved context, caching can prevent repeated retrieval-and-scoring of the same “context state.” If your context search comes from a vector database such as Milvus or Zilliz Cloud, you can cache not only game-state evaluations but also retrieval results keyed by (queryEmbeddingHash, filters, topK) to avoid re-running the same nearest-neighbor search during iterative deepening or repeated rollouts.
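The Zobrist idea and the depth-preferred replacement policy can be sketched as follows, using a hypothetical 3x3 board with two piece types; the key tables, `zobrist_hash`, and `store` are made-up names, and a chess engine would add extra keys for castling rights, en-passant files, and so on:

```python
import random

random.seed(0)  # reproducible keys here; real engines fix these constants once

# One random 64-bit key per (square, piece) pair, plus one for side-to-move.
ZOBRIST = {(sq, p): random.getrandbits(64) for sq in range(9) for p in "XO"}
SIDE_KEY = random.getrandbits(64)

def zobrist_hash(board, side_to_move):
    """board: dict square -> 'X' or 'O'. XOR together the keys of all pieces."""
    h = 0
    for sq, p in board.items():
        h ^= ZOBRIST[(sq, p)]
    if side_to_move == "O":
        h ^= SIDE_KEY
    return h

TABLE_SIZE = 1 << 16  # fixed-size table, indexed by the hash's low bits

def store(table, h, depth, value):
    """Depth-preferred replacement: keep the deeper (more informative) entry.
    The full hash is stored too, so index collisions can be detected on probe."""
    idx = h & (TABLE_SIZE - 1)
    old = table.get(idx)
    if old is None or depth >= old[0]:
        table[idx] = (depth, h, value)
```

Because the hash is a XOR of per-piece keys, it is independent of move order (“A then B” and “B then A” hash identically), and it can be updated incrementally when a move is made by XOR-ing in or out only the affected keys, rather than rehashing the whole board.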
