
🧃Intermediate Microeconomic Theory Unit 11 Review


11.1 Static and dynamic games


Written by the Fiveable Content Team • Last updated August 2025

Static and Dynamic Games

Game theory provides a framework for analyzing situations where your outcome depends not just on your own choices, but on what others choose too. Static and dynamic games represent the two fundamental structures these interactions can take. The distinction between them changes how you model the situation, what strategies are available, and which equilibrium concepts apply.

Static vs Dynamic Games

Timing and Information Structures

The core difference between static and dynamic games comes down to when players move and what they know when they move.

In a static game, all players choose their actions simultaneously (or at least without observing each other's choices). You know the rules, the payoffs, and who the other players are. What you don't know is what the other player is actually doing right now. This is called imperfect information about actions, even though you have complete information about the game's structure.

In a dynamic game, players move in sequence, and at least some later movers can observe what happened before them. Dynamic games split into two categories:

  • Perfect information games: Every player can observe all previous moves before choosing. Chess is the classic example, since you can see the entire board before deciding your next move.
  • Imperfect information games: Some previous moves are hidden. Think of a card game where you can see some of an opponent's actions but not their hand.

A common source of confusion: "complete" and "perfect" information are different things. Complete information means you know the payoff structure of the game (who gets what under each outcome). Perfect information means you can observe all previous actions. A game can be complete but imperfect (like the standard Prisoner's Dilemma), or incomplete but perfect (rare in practice).

Representation and Analysis

Each game type has a natural way to represent it:

  • Static games use the normal form (strategic form), which is a payoff matrix. Rows represent one player's strategies, columns represent the other's, and each cell shows the resulting payoffs.
  • Dynamic games use the extensive form, a game tree where nodes represent decision points, branches represent available actions, and terminal nodes show payoffs.
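A normal-form game is easy to make concrete in code: store each strategy profile as a key mapping to a payoff pair. The sketch below uses Battle of the Sexes payoffs; the specific numbers (2, 1), (1, 2), and (0, 0) are illustrative assumptions.

```python
# Normal-form (payoff matrix) representation of a Battle of the Sexes game.
# Each cell maps a strategy profile to (row player payoff, column player payoff).
# Payoff numbers are illustrative assumptions.
payoffs = {
    ("Opera",    "Opera"):    (2, 1),
    ("Opera",    "Football"): (0, 0),
    ("Football", "Opera"):    (0, 0),
    ("Football", "Football"): (1, 2),
}

for (row, col), (u1, u2) in payoffs.items():
    print(f"({row}, {col}) -> player 1: {u1}, player 2: {u2}")
```

Each row of output corresponds to one cell of the matrix; the two coordination cells give both players positive payoffs, while mismatches give zero.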

In extensive form trees, information sets group together decision nodes that a player can't distinguish between. If two of your decision nodes are in the same information set, you don't know which one you're actually at, so you must choose the same action at both. This is how imperfect information gets represented visually. A static game can actually be drawn in extensive form too: you'd just put all of the second player's decision nodes into a single information set, reflecting that they don't know what the first player chose.

Subgame perfect equilibrium (SPE) is the key refinement for dynamic games. A regular Nash equilibrium might rely on threats that a player would never actually carry out if the moment came. SPE eliminates these non-credible threats by requiring that the strategy constitute a Nash equilibrium in every subgame of the original game. A subgame starts at a decision node that is a singleton information set (the player knows exactly where they are), includes all subsequent nodes, and doesn't break any information sets apart.

Simultaneous vs Sequential Moves


Solution Concepts

Nash equilibrium applies to both game types: it's a strategy profile where no player can improve their payoff by unilaterally changing their own strategy. But the tools for finding equilibria differ.

For simultaneous-move games, you typically:

  1. Write out the normal form matrix.
  2. Check for dominant strategies (strategies that are best regardless of what others do).
  3. Eliminate dominated strategies (strategies that are always worse than some alternative) through iterated elimination of strictly dominated strategies (IESDS). Remove a dominated strategy, then check whether new strategies become dominated in the reduced game, and repeat.
  4. Identify any pure strategy Nash equilibria by checking each cell for mutual best responses. At a Nash equilibrium cell, neither player's payoff improves by switching to a different row (or column).
  5. If no pure strategy equilibrium exists, solve for a mixed strategy equilibrium, where players randomize over actions with specific probabilities that make their opponent indifferent between their own options.
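Steps 2 through 4 can be sketched directly in code. The example below runs IESDS and a pure-strategy Nash check on a 2x2 game with Prisoner's Dilemma payoffs; the payoff numbers are assumptions chosen for illustration.

```python
from itertools import product

# Illustrative 2x2 game with Prisoner's Dilemma payoffs (numbers are assumptions).
u = {  # (row strategy, column strategy) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def strictly_dominated(player, s, own, other, u):
    """True if some alternative beats s against every opponent strategy."""
    for alt in own:
        if alt == s:
            continue
        if player == 0 and all(u[(alt, o)][0] > u[(s, o)][0] for o in other):
            return True
        if player == 1 and all(u[(o, alt)][1] > u[(o, s)][1] for o in other):
            return True
    return False

def iesds(rows, cols, u):
    """Step 3: iterated elimination of strictly dominated strategies."""
    changed = True
    while changed:
        changed = False
        for s in list(rows):
            if len(rows) > 1 and strictly_dominated(0, s, rows, cols, u):
                rows.remove(s)
                changed = True
        for s in list(cols):
            if len(cols) > 1 and strictly_dominated(1, s, cols, rows, u):
                cols.remove(s)
                changed = True
    return rows, cols

def pure_nash(rows, cols, u):
    """Step 4: cells where both strategies are mutual best responses."""
    return [(r, c) for r, c in product(rows, cols)
            if u[(r, c)][0] == max(u[(r2, c)][0] for r2 in rows)
            and u[(r, c)][1] == max(u[(r, c2)][1] for c2 in cols)]

print(iesds(["C", "D"], ["C", "D"], u))      # (['D'], ['D'])
print(pure_nash(["C", "D"], ["C", "D"], u))  # [('D', 'D')]
```

With these payoffs, IESDS removes Cooperate for both players in a single pass, and the best-response check confirms (D, D) as the unique pure-strategy Nash equilibrium.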

For sequential-move games, you use backward induction:

  1. Start at the final decision nodes of the game tree.
  2. Determine the optimal action at each of those nodes.
  3. Move one step earlier in the tree, replacing the future nodes with the payoffs that will result from optimal play going forward.
  4. Repeat until you reach the first move of the game.
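The four steps above can be sketched as a recursion over a game tree. The example uses a hypothetical two-stage entry game (an entrant chooses Enter or Stay Out, then an incumbent chooses Fight or Accommodate); the payoffs are assumptions picked so the incumbent's threat to Fight is non-credible.

```python
# Backward induction on a small extensive-form game: an entrant chooses
# Enter / Stay Out, then the incumbent chooses Fight / Accommodate.
# All payoffs are illustrative assumptions; leaves are (entrant, incumbent).
tree = {
    "player": 0,  # index of the mover: 0 = entrant, 1 = incumbent
    "actions": {
        "Stay Out": (0, 2),
        "Enter": {
            "player": 1,
            "actions": {"Fight": (-1, -1), "Accommodate": (1, 1)},
        },
    },
}

def backward_induction(node):
    """Solve the tree from the leaves up; return (payoffs, chosen path)."""
    if isinstance(node, tuple):       # terminal node: payoffs are given
        return node, []
    mover = node["player"]
    best = None
    for action, child in node["actions"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[mover] > best[0][mover]:
            best = (payoffs, [action] + path)
    return best

outcome, path = backward_induction(tree)
print(outcome, path)   # (1, 1) ['Enter', 'Accommodate']
```

Note how the recursion enforces subgame perfection: the incumbent's Fight threat is evaluated at its own node, found to be suboptimal, and therefore never influences the entrant's choice.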

This process yields the subgame perfect equilibrium. It also reveals whether there's a first-mover advantage (the player who moves first can lock in a favorable outcome) or a second-mover advantage (the later player benefits from observing and responding). Which one arises depends on the specific payoff structure, not on some general rule.

Focal points (Schelling points) matter when a game has multiple equilibria. These are outcomes that players gravitate toward based on shared expectations, cultural norms, or the salience of a particular option. For example, if two people must independently choose a meeting spot in New York City, Grand Central Station might serve as a focal point. Focal points aren't derived from the payoff matrix itself; they come from context outside the formal model.

Game Theory in Action

Classic Models

The Prisoner's Dilemma is the most famous static game. Two players each choose to cooperate or defect. Defecting is a strictly dominant strategy for both, yet mutual cooperation would leave both better off. The unique Nash equilibrium (Defect, Defect) is Pareto-dominated by (Cooperate, Cooperate). This tension between individual rationality and collective welfare shows up in arms races, environmental agreements, and price competition among firms.
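The dilemma's structure reduces to two inequalities over the stage payoffs. With the usual assumed ordering t > c > p > s (temptation, reward, punishment, sucker), both claims in the paragraph above can be checked mechanically:

```python
# Illustrative Prisoner's Dilemma stage payoffs (assumptions):
# t = temptation, c = mutual cooperation, p = mutual defection, s = sucker.
t, c, p, s = 5, 3, 1, 0

# Defect strictly dominates Cooperate: better whether the other cooperates (t > c)
# or defects (p > s).
assert t > c and p > s
# Yet the unique equilibrium (Defect, Defect) is Pareto-dominated by (Cooperate, Cooperate).
assert c > p
print("Defect dominates, but mutual cooperation Pareto-dominates (Defect, Defect)")
```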

Coordination games model situations where players benefit from choosing the same action. Think of two firms deciding between competing technology standards: both prefer agreement on a standard, but they may disagree on which one. The Battle of the Sexes is a classic example with two pure strategy Nash equilibria and one mixed strategy equilibrium. Because multiple equilibria exist, focal points become especially relevant for predicting which outcome actually occurs.
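The mixed-strategy equilibrium of Battle of the Sexes follows directly from the indifference conditions. The payoffs below are the common illustrative ones, (2, 1) and (1, 2) for coordination and (0, 0) for mismatches:

```python
from fractions import Fraction

# Battle of the Sexes with assumed payoffs:
# (Opera, Opera) -> (2, 1); (Football, Football) -> (1, 2); mismatches -> (0, 0).

# Player 1 plays Opera with probability p, chosen so player 2 is indifferent:
#   p * 1 = (1 - p) * 2   =>   p = 2/3
p = Fraction(2, 3)
assert p * 1 == (1 - p) * 2        # player 2 indifferent between Opera and Football

# Player 2 plays Opera with probability q, chosen so player 1 is indifferent:
#   q * 2 = (1 - q) * 1   =>   q = 1/3
q = Fraction(1, 3)
assert q * 2 == (1 - q) * 1        # player 1 indifferent between Opera and Football

print(f"Mixed equilibrium: p = {p}, q = {q}")
```

Each player mixes to make the *opponent* indifferent, which is why the probabilities come from the other player's payoffs, not their own.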

Signaling games are dynamic games with incomplete information. One player (the sender) takes a costly action to reveal private information to another player (the receiver). Spence's job market signaling model is the textbook example: a worker gets a degree not necessarily for the skills it provides, but to signal high ability to employers. The key equilibrium concepts here are separating equilibria (where different types of senders choose different signals) and pooling equilibria (where all types choose the same signal).
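A separating equilibrium condition can be checked with a pair of inequalities. The wages and education costs below are hypothetical numbers chosen so that only the high-ability type finds the degree worthwhile:

```python
from fractions import Fraction

# Spence-style signaling sketch; all numbers are illustrative assumptions.
w_high, w_low = 2, 1           # wages paid to workers perceived as high / low ability
cost_high = Fraction(1, 2)     # education cost for a high-ability worker
cost_low = Fraction(3, 2)      # education is costlier for a low-ability worker

# Separating equilibrium: the wage gain from signaling covers the education cost
# for the high type but not for the low type, so the types choose different signals.
gain = w_high - w_low
separating = cost_high <= gain < cost_low
print(separating)   # True
```

If instead the gain covered the cost for both types (or neither), both would choose the same signal and only a pooling equilibrium could survive.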

Repeated games take a one-shot game and play it multiple times. Repetition can sustain cooperation even in Prisoner's Dilemma settings, because players can punish defection in future rounds. The Folk Theorem formalizes this: in an infinitely repeated game with sufficiently patient players (high enough discount factor δ), virtually any feasible and individually rational payoff can be sustained as a Nash equilibrium. This logic underlies tacit collusion in oligopolistic markets, where firms maintain high prices without explicit agreements.
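The patience condition can be made concrete with a grim-trigger strategy (cooperate until the opponent defects, then defect forever). The stage payoffs below are illustrative assumptions:

```python
from fractions import Fraction

# Grim trigger in an infinitely repeated Prisoner's Dilemma.
# Stage payoffs are illustrative assumptions: c = mutual cooperation,
# t = temptation from defecting on a cooperator, p = mutual-defection punishment.
c, t, p = 3, 5, 1

def cooperation_sustainable(delta):
    """Cooperating forever must beat defecting once and being punished forever:
    c / (1 - delta) >= t + delta * p / (1 - delta)."""
    return c / (1 - delta) >= t + delta * p / (1 - delta)

# Rearranging the condition gives the patience threshold delta >= (t - c) / (t - p).
threshold = Fraction(t - c, t - p)
print(threshold)                                 # 1/2 with these payoffs
print(cooperation_sustainable(Fraction(3, 5)))   # True: patient enough
print(cooperation_sustainable(Fraction(2, 5)))   # False: too impatient
```

The higher the temptation payoff relative to the cooperation payoff, the more patient players must be for cooperation to survive.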

Advanced Applications

  • Bargaining games (such as the Rubinstein alternating-offers model) formalize negotiations as dynamic games. Players take turns making offers, and delay is costly. The key insight is that patience (a higher discount factor) and outside options determine bargaining power. In the limit, the equilibrium split reflects the ratio of the players' discount factors.
  • Evolutionary game theory replaces the assumption of perfect rationality with population dynamics. Strategies that yield higher payoffs spread through a population over time. An evolutionarily stable strategy (ESS) is one that, if adopted by the whole population, can't be invaded by a small group playing an alternative strategy. This framework connects to biology but also models how conventions and norms emerge in economic settings.
  • Mechanism design works in reverse: instead of analyzing a given game, you design the rules of the game to achieve a desired outcome. Auction formats are a central application. In a second-price (Vickrey) auction, bidding your true valuation is a weakly dominant strategy, which simplifies the strategic problem considerably compared to a first-price auction where optimal bids depend on beliefs about other bidders. Voting systems and matching markets are other areas where the structure of the game is chosen to align individual incentives with efficient or fair outcomes.
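Two of these applications reduce to short computations. The sketch below shows the Rubinstein first-mover share, (1 - δ₂) / (1 - δ₁δ₂), and a payoff check for the second-price auction; the discount factors, valuations, and bids are all hypothetical numbers.

```python
from fractions import Fraction

# Rubinstein alternating offers: with discount factors d1 (proposer) and d2
# (responder), the first mover's equilibrium share of the pie is
# (1 - d2) / (1 - d1 * d2).
def first_mover_share(d1, d2):
    return (1 - d2) / (1 - d1 * d2)

# Equal patience leaves a slight first-mover advantage; an impatient responder
# concedes much more. All discount factors here are illustrative.
print(first_mover_share(Fraction(9, 10), Fraction(9, 10)))  # 10/19, just over half
print(first_mover_share(Fraction(9, 10), Fraction(1, 2)))   # 10/11

# Second-price (Vickrey) auction: the winner pays the highest competing bid,
# so shading your bid below your value never improves the price you pay.
def vickrey_payoff(my_bid, my_value, other_bids):
    best_other = max(other_bids)
    return my_value - best_other if my_bid > best_other else 0

others = [40, 55]                                 # hypothetical competing bids
print(vickrey_payoff(60, 60, others))             # truthful bid wins, pays 55
print(vickrey_payoff(50, 60, others))             # shaded bid loses a profitable win
```

In the auction check, bidding your true value of 60 yields a positive surplus, while shading to 50 forfeits the win without lowering the price; this is the weak dominance of truthful bidding in miniature.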