Sequential Games and Game Trees
Sequential games model situations where players move one at a time, and later players can observe (at least some of) what earlier players did. Unlike simultaneous games, the order of moves matters here, and that order creates strategic leverage. The core analytical tool is the game tree, and the key solution concept is subgame perfect equilibrium (SPE), which sharpens Nash equilibrium by ruling out strategies that rely on threats no rational player would actually carry out.
Game Tree Structure and Elements
A game tree (also called the extensive form) is the graphical representation of a sequential game. Its components:
- Decision nodes: Points where a specific player chooses an action. The initial node is where the game starts.
- Branches: The possible actions available at each node.
- Terminal nodes: Endpoints of the game where payoffs are assigned to each player.
- Information sets: A collection of decision nodes where the player whose turn it is cannot distinguish which node they're at. In a game of perfect information, every information set contains exactly one node (each player sees everything that happened before). When an information set groups multiple nodes together, the player faces uncertainty about prior moves.
Payoffs are listed at terminal nodes, typically as ordered pairs corresponding to Player 1 and Player 2's payoffs.
For example, in an entry deterrence game, Firm 1 (the entrant) moves first by choosing Enter or Stay Out. If Firm 1 enters, Firm 2 (the incumbent) then chooses Fight or Accommodate. Each terminal node shows the resulting profit pair.
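The tree just described can be sketched as a small data structure. This is an illustration, not a standard library: the node encoding and all profit numbers are assumptions chosen for the example.

```python
# A minimal game-tree representation of the entry deterrence example.
# A decision node is a dict naming the mover and mapping each action
# (branch) to a subtree; a terminal node is a (Firm 1, Firm 2) payoff
# pair. All profit numbers here are illustrative assumptions.
entry_game = {
    "name": "Firm 1", "player": 1,        # the entrant moves first
    "moves": {
        "Stay Out": (0, 5),               # incumbent keeps monopoly profit
        "Enter": {
            "name": "Firm 2 (after entry)", "player": 2,
            "moves": {
                "Accommodate": (2, 2),    # moderate duopoly profits
                "Fight": (-1, -1),        # price war hurts both firms
            },
        },
    },
}
```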
Backward Induction Method
Backward induction solves sequential games by starting at the final decision nodes and working toward the root. The logic: if you know what rational players will do at every future point, you can figure out what they should do now.
Steps for backward induction:
- Identify all terminal nodes and their payoffs.
- Move to the decision nodes immediately before those terminal nodes. At each one, determine which action maximizes that player's payoff.
- Replace that decision node with the payoff vector that results from the optimal choice.
- Repeat, moving one step closer to the root each time, until you reach the initial node.
- The collection of optimal actions at every decision node constitutes the backward induction solution.
Consider a simple ultimatum-style game: Player 1 offers a split of $10 (say, $7/$3 or $5/$5), and Player 2 accepts or rejects. You start by asking what Player 2 would do at each possible offer. If Player 2 is rational and prefers any positive payoff to zero, they accept both offers. Knowing this, Player 1 offers $7/$3 to maximize their own payoff.
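The steps above can be sketched as a recursive function. This is a sketch under assumptions: the tree encoding (dicts for decision nodes, payoff tuples for terminal nodes) and the node names are invented for illustration.

```python
def backward_induction(node, plan=None):
    """Solve a game tree from the leaves up.

    Returns (payoffs, plan): the payoff pair reached under optimal play,
    and a dict mapping each decision node's name to its optimal action --
    including nodes that optimal play never actually reaches.
    """
    if plan is None:
        plan = {}
    if isinstance(node, tuple):              # terminal node: payoffs are fixed
        return node, plan
    mover = node["player"] - 1               # index into the payoff tuple
    best_action, best_payoffs = None, None
    for action, child in node["moves"].items():
        payoffs, _ = backward_induction(child, plan)   # plan is shared/mutated
        if best_payoffs is None or payoffs[mover] > best_payoffs[mover]:
            best_action, best_payoffs = action, payoffs
    plan[node["name"]] = best_action         # record the optimal choice here
    return best_payoffs, plan

# The ultimatum-style game from the text: Player 1 proposes a split of $10,
# Player 2 accepts (the split happens) or rejects (both get nothing).
ultimatum = {
    "name": "P1 offer", "player": 1,
    "moves": {
        "offer 7/3": {
            "name": "P2 responds to 7/3", "player": 2,
            "moves": {"accept": (7, 3), "reject": (0, 0)},
        },
        "offer 5/5": {
            "name": "P2 responds to 5/5", "player": 2,
            "moves": {"accept": (5, 5), "reject": (0, 0)},
        },
    },
}

payoffs, plan = backward_induction(ultimatum)
# payoffs == (7, 3); plan has Player 2 accepting at both offers
```

Note that the plan specifies Player 2's response to the 5/5 offer even though that offer is never made on the equilibrium path; a strategy must cover every decision node.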
In finite games of perfect information with no payoff ties, backward induction yields a unique solution. This result is guaranteed by Zermelo's theorem.

Subgame Perfect Equilibrium
Concept and Refinement of Nash Equilibrium
A subgame is any portion of the game tree that:
- Starts at a single decision node,
- Contains all successor nodes and branches from that point forward, and
- Does not break any information set apart (if a node is in the subgame, every other node in the same information set must be too).
Every game is a subgame of itself, and most sequential games contain several proper subgames as well.
Subgame perfect equilibrium requires that the players' strategies form a Nash equilibrium in every subgame, not just the game as a whole. This is what makes SPE a refinement of Nash equilibrium: every SPE is a Nash equilibrium, but not every Nash equilibrium is subgame perfect.
Why does this matter? In sequential games, some Nash equilibria are sustained by non-credible threats. A strategy might say "if you enter my market, I'll start a price war and lose money just to punish you." That threat could support a Nash equilibrium where the entrant stays out. But look at the subgame that starts after entry actually occurs: fighting the price war isn't optimal for the incumbent. The threat is empty. SPE eliminates exactly these kinds of equilibria.
The classic illustration is the centipede game, where two players alternate choosing to "continue" or "stop." Several Nash equilibria exist, but backward induction shows that the unique SPE involves stopping at the very first node, because at every subsequent node, the player whose turn it is has an incentive to stop rather than continue.
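The unraveling can be checked numerically. The payoffs below are assumptions with the standard centipede structure: the pot grows over time, but the current mover always gets more by stopping now than by letting the opponent stop at the next node.

```python
# Backward induction in a 4-move centipede game. stop_payoffs[k] is the
# (Player 1, Player 2) payoff if the mover stops at node k+1; movers
# alternate 1, 2, 1, 2. end_payoffs is the outcome if nobody ever stops.
# All numbers are illustrative.
stop_payoffs = [(2, 0), (1, 3), (4, 2), (3, 5)]
end_payoffs = (6, 4)

outcome = end_payoffs
decisions = []
for node in range(3, -1, -1):            # work from the last node backward
    mover = node % 2                     # 0 -> Player 1, 1 -> Player 2
    if stop_payoffs[node][mover] > outcome[mover]:
        outcome = stop_payoffs[node]     # stopping beats continuing here
        decisions.append((node + 1, "stop"))
    else:
        decisions.append((node + 1, "continue"))
decisions.reverse()
# outcome == (2, 0): every node says "stop", so the game ends immediately
```

The SPE ends the game at node 1 with payoffs (2, 0), even though continuing to the end would give both players more.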

Finding and Analyzing SPE
To find all SPEs in a sequential game:
- Identify every subgame in the game tree. Remember the three conditions above: single starting node, all successors included, no information sets broken.
- Find the Nash equilibrium (or equilibria) in the smallest, final subgames first.
- Replace each solved subgame with its equilibrium payoff, effectively shrinking the tree.
- Work backward through progressively larger subgames until you've solved the entire game.
A few properties to keep in mind:
- The set of SPEs is always a subset of the set of Nash equilibria. SPE can only eliminate equilibria, never create new ones.
- In finite perfect information games with no payoff ties, backward induction gives a unique SPE.
- When payoff ties exist or information is imperfect, multiple SPEs may survive.
One important note on strategies in extensive form games: a strategy must specify an action at every decision node where a player might move, including nodes that the strategy itself prevents from being reached. This is why non-credible threats show up as part of Nash equilibria in the first place. The player never actually has to carry out the threat on the equilibrium path, so the threat "works" as a Nash equilibrium strategy. SPE catches this by checking optimality even at off-path nodes.
Entry deterrence example: Firm 1 chooses Enter or Stay Out. If entry occurs, Firm 2 chooses Accommodate (both earn moderate profits, say (2, 2)) or Fight (both take losses, say (-1, -1)). If Firm 1 stays out, payoffs are (0, 5). The subgame after entry has Firm 2 choosing Accommodate since 2 > -1. Knowing this, Firm 1 enters since 2 > 0. The SPE is (Enter, Accommodate). A Nash equilibrium where Firm 2 threatens to Fight (causing Firm 1 to Stay Out, yielding (0, 5)) exists but is not subgame perfect, because Fight is not optimal in the post-entry subgame.
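The refinement can be verified by writing the payoff comparisons out directly. This sketch assumes illustrative payoffs of (2, 2) if entry is accommodated, (-1, -1) under a price war, and (0, 5) if the entrant stays out; `payoffs` and `is_nash` are ad-hoc helpers, not a library API.

```python
# Firm 1's strategies: "Enter" / "Stay Out".
# Firm 2's strategies: "Accommodate" / "Fight" -- a full contingent plan
# for what it would do *if* entry occurred. Payoffs are illustrative.
def payoffs(f1, f2):
    if f1 == "Stay Out":
        return (0, 5)                    # incumbent keeps its monopoly
    return (2, 2) if f2 == "Accommodate" else (-1, -1)

def is_nash(f1, f2):
    """Check that neither firm gains by deviating unilaterally."""
    p = payoffs(f1, f2)
    best1 = all(p[0] >= payoffs(a, f2)[0] for a in ("Enter", "Stay Out"))
    best2 = all(p[1] >= payoffs(f1, a)[1] for a in ("Accommodate", "Fight"))
    return best1 and best2

# Both (Enter, Accommodate) and (Stay Out, Fight) are Nash equilibria,
# but in the post-entry subgame Accommodate (2) beats Fight (-1), so
# only (Enter, Accommodate) survives as subgame perfect.
```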
Credibility in Sequential Games
Assessing Threat and Promise Credibility
A threat or promise is credible if the player making it would actually follow through when the time comes. SPE provides the formal test: a strategy is credible if and only if it prescribes optimal actions in every subgame.
Time consistency is the underlying issue. A strategy is time-consistent if a player who planned it at the start of the game would still want to follow it at every later decision point. Non-credible threats are time-inconsistent: they look useful when announced but become irrational to execute.
The chain-store paradox (Selten, 1978) highlights this tension. Suppose an incumbent faces potential entrants in 20 successive markets. Intuitively, fighting early entrants to build a tough reputation seems smart. But in the final market, fighting is clearly irrational (it's a one-shot game at that point). By backward induction, fighting is irrational in the second-to-last market too, and the logic unravels all the way back to market 1. The SPE prediction is that the incumbent accommodates in every market. This result is counterintuitive, which is why it's called a paradox, and it motivates the study of reputation models with incomplete information (where entrants aren't sure whether the incumbent is "tough" by nature or just bluffing).
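The unraveling argument can be made concrete. In this sketch the one-shot payoffs are assumptions, and markets are modeled as independent, so history never changes future payoffs: each market's subgame solves exactly like the one-shot game, no matter how many markets remain.

```python
# Backward induction in the chain-store game with N independent markets.
# One-shot payoffs (illustrative): incumbent earns 2 if it accommodates
# entry, -1 if it fights, 5 if the entrant stays out; the entrant earns
# 2 if accommodated, -1 if fought, 0 if it stays out.
def solve_market():
    # Post-entry subgame: the incumbent picks the better of its options.
    incumbent = max(("Accommodate", 2), ("Fight", -1), key=lambda x: x[1])[0]
    entrant_payoff = 2 if incumbent == "Accommodate" else -1
    entrant = "Enter" if entrant_payoff > 0 else "Stay Out"
    return entrant, incumbent

def chain_store_spe(n_markets):
    # The last market is a one-shot game; given its solution, the
    # second-to-last is effectively one-shot too, and so on back to
    # market 1 -- so every market has the same solution.
    return [solve_market() for _ in range(n_markets)]
```

`chain_store_spe(20)` yields entry and accommodation in all 20 markets, which is exactly Selten's counterintuitive prediction.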
Enhancing Credibility in Games
Since non-credible threats fail in SPE analysis, players sometimes take costly actions to make their threats credible. These are commitment devices: actions that change the game's structure so that following through on the threat becomes genuinely optimal.
Common commitment devices include:
- Capacity investment: A firm builds excess production capacity before a rival's entry decision. The sunk cost of that capacity changes the firm's cost structure, making aggressive pricing the profit-maximizing response if entry occurs. What was previously an empty threat becomes a credible one because the payoffs in the relevant subgame have actually changed.
- Contractual obligations: Signing a most-favored-customer clause or a penalty contract can bind a firm to certain pricing behavior, removing the discretion to back down.
- Burning bridges: Eliminating your own option to retreat forces commitment to the aggressive strategy. The key is that removing an option from your own strategy set can make you better off by changing how other players respond to you.
Reputation effects offer a different path to credibility, especially in repeated interactions. If a player expects to face similar situations many times, the short-term cost of following through on a threat can be outweighed by the long-term benefit of being perceived as tough. Formally, this involves comparing the one-period gain from deviating against the present value of future payoffs lost if credibility is destroyed.
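That comparison can be written down directly. The sketch below treats the reputation benefit as exogenous (sidestepping the full incomplete-information model) and uses assumed numbers: fighting costs the incumbent 3 today, and a tough reputation earns an extra 3 in each of the next `horizon` markets, discounted by factor `delta`.

```python
# Does following through on the threat pay? Compare the one-period cost
# of fighting against the discounted stream of future deterrence gains.
# All numbers (cost = 3, gain = 3 per deterred market) are illustrative.
def fighting_pays(delta, horizon, cost=3.0, per_period_gain=3.0):
    future_gain = sum(per_period_gain * delta**t
                      for t in range(1, horizon + 1))
    return future_gain > cost

# A patient incumbent with many markets ahead fights; an impatient one
# with a short horizon does not.
```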
The key analytical question is always: does the commitment device or reputation mechanism actually change the payoffs in the relevant subgame enough to make the threat sequentially rational? If yes, the threat is credible and SPE supports it. If not, rational opponents will call the bluff.