🎱Game Theory Unit 7 Review

7.1 Finitely and infinitely repeated games

Written by the Fiveable Content Team • Last updated August 2025

Repeated Games and Cooperation

Repeated games study what happens when the same players face the same strategic situation over and over. The core question is whether repetition can sustain cooperation that wouldn't arise in a single interaction. The answer depends heavily on whether the game repeats a fixed number of times or goes on indefinitely.

Finitely vs Infinitely Repeated Games

Key Differences

A finitely repeated game has a fixed, known number of rounds (say, 10). An infinitely repeated game either literally never ends or, more realistically, continues each round with some probability, so players never know for sure which round is the last.

This distinction matters because of how players reason about the future:

  • In finitely repeated games, you can use backward induction to find the subgame perfect Nash equilibrium (SPNE). You solve the last round first, then work backward through each earlier round to pin down optimal play at every stage.
  • In infinitely repeated games, there's no last round to anchor backward induction. Instead, the Folk Theorem tells us that any feasible and individually rational payoff can be sustained as a Nash equilibrium, provided players are patient enough (high discount factor). The Folk Theorem characterizes the set of possible equilibrium payoffs but doesn't pick out a unique one.
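
To make the Folk Theorem's "individually rational" condition concrete, here is a minimal sketch that computes each player's minmax payoff in a Prisoner's Dilemma. The payoff numbers are illustrative, not taken from this guide:

```python
# Stage-game payoffs for an illustrative Prisoner's Dilemma:
# keys are (player 1's action, player 2's action), values are (u1, u2).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def minmax_payoff(player):
    """Worst payoff the opponent can force on `player`,
    assuming `player` best-responds to each punishing action."""
    worst = float("inf")
    for punish in ACTIONS:
        best_response = max(
            (PAYOFFS[(a, punish)] if player == 0 else PAYOFFS[(punish, a)])[player]
            for a in ACTIONS
        )
        worst = min(worst, best_response)
    return worst

# Both players' minmax payoff is 1 (mutual defection), so the Folk Theorem
# says any feasible payoff pair above (1, 1) is sustainable for patient players.
```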

Implications of Game Length

The known endpoint in finite games creates an unraveling problem. Here's the logic:

  1. In the final round, there's no future to worry about, so both players defect (just like in a one-shot game).
  2. Knowing round T will involve mutual defection, there's no reason to cooperate in round T−1 either.
  3. This reasoning cascades all the way back to round 1, and cooperation collapses entirely.
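
The unraveling argument can be sketched in code. With an illustrative Prisoner's Dilemma payoff table (hypothetical numbers), backward induction prescribes the unique stage-game Nash equilibrium, mutual defection, in every round:

```python
# Illustrative Prisoner's Dilemma payoffs: (player 1's payoff, player 2's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def stage_nash():
    """Pure-strategy Nash equilibria of the one-shot stage game."""
    return [
        (a1, a2)
        for a1 in ACTIONS for a2 in ACTIONS
        if all(PAYOFFS[(a1, a2)][0] >= PAYOFFS[(b, a2)][0] for b in ACTIONS)
        and all(PAYOFFS[(a1, a2)][1] >= PAYOFFS[(a1, b)][1] for b in ACTIONS)
    ]

def spne_path(rounds):
    """With a unique stage-game NE, backward induction prescribes it every
    round: the last round is a one-shot game, and the same logic then
    applies at T-1, T-2, ..., 1."""
    equilibria = stage_nash()
    assert len(equilibria) == 1, "unraveling needs a unique stage-game NE"
    return equilibria * rounds

print(spne_path(10))  # ('D', 'D') in all 10 rounds: cooperation unravels
```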

In infinitely repeated games, there's no final round to trigger this unraveling. Players always face the prospect of future interactions, which gives them a reason to maintain cooperation today. The ongoing relationship makes punishment threats credible: if you cheat now, your opponent can punish you for many rounds to come.

Repetition's Impact on Strategies

Emergence of Cooperative Strategies

Repetition enables trigger strategies that reward cooperation and punish defection:

  • Tit-for-tat: Cooperate in round 1, then copy whatever your opponent did last round. This promotes reciprocity since cooperation is met with cooperation and defection is met with defection.
  • Grim trigger: Cooperate until your opponent defects even once, then defect forever. This is a harsher punishment that makes deviation very costly.
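
A minimal simulation of these two trigger strategies (function names and the `play` helper are our own) shows how each responds to an unconditional defector:

```python
def tit_for_tat(my_hist, opp_hist):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opp_hist else opp_hist[-1]

def grim_trigger(my_hist, opp_hist):
    """Cooperate until the opponent defects even once, then defect forever."""
    return "D" if "D" in opp_hist else "C"

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat1, strat2, rounds):
    """Run the repeated game, feeding each strategy both action histories."""
    h1, h2 = [], []
    for _ in range(rounds):
        a1, a2 = strat1(h1, h2), strat2(h2, h1)
        h1.append(a1)
        h2.append(a2)
    return h1, h2

# Against an unconditional defector, tit-for-tat defects from round 2 on:
h1, h2 = play(tit_for_tat, always_defect, 4)   # h1 = ['C', 'D', 'D', 'D']

# Two tit-for-tat players sustain cooperation indefinitely:
c1, c2 = play(tit_for_tat, tit_for_tat, 4)     # both cooperate every round
```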

Repetition also creates reputation effects. A player's history of past actions shapes what opponents expect in the future. If you've cooperated consistently, opponents are more likely to cooperate with you, giving you an incentive to build and maintain a cooperative reputation.

Communication and Learning

Repeated play opens channels for coordination beyond just observing actions:

  • Cheap talk refers to non-binding, costless communication (like announcements or promises). While cheap talk can't be enforced, it can help players coordinate on mutually beneficial outcomes when multiple equilibria exist.
  • Players also learn and adapt over time, updating their beliefs about opponents based on observed behavior. This can lead strategies to evolve as players experiment and adjust.

Equilibria in Repeated Games

Backward Induction in Finitely Repeated Games

For finitely repeated games, the standard approach is backward induction:

  1. Solve the final round as a one-shot game to find the Nash equilibrium.
  2. Given that outcome, solve the second-to-last round.
  3. Continue working backward to round 1.

The result is the subgame perfect Nash equilibrium, which requires that players' strategies are optimal at every decision point in the game, not just at the start.

To check whether a candidate strategy profile is an SPNE, you can apply the One-Shot Deviation Principle: a strategy profile is subgame perfect if and only if no player can improve their payoff by deviating in a single period while following the prescribed strategy in all other periods.

If the stage game has a unique Nash equilibrium, the finitely repeated game's only SPNE is to play that Nash equilibrium in every round. Cooperation can potentially be sustained in finite repetition only when the stage game has multiple Nash equilibria.

Equilibrium Analysis in Infinitely Repeated Games

The Folk Theorem is the central result here. It states that any feasible payoff that gives each player at least their minmax payoff (the worst payoff opponents can force on them) can be sustained as a Nash equilibrium, provided the discount factor δ is high enough.

Several tools help identify and analyze these equilibria:

  • Trigger strategies (tit-for-tat, grim trigger) enforce cooperation by making defection costly over the long run.
  • Reputation building involves taking cooperative actions early to influence opponents' future behavior.
  • Renegotiation considers whether players might agree mid-game to abandon a punishment phase and return to cooperation. Equilibria that survive this concern are called renegotiation-proof.

Discount Factors in Infinite Repetition

Concept and Interpretation

The discount factor δ (where 0 ≤ δ ≤ 1) captures how much players value future payoffs relative to today's payoff. Think of it as a measure of patience.

  • δ = 1: Future payoffs are worth just as much as today's. The player is perfectly patient.
  • δ = 0: Only today's payoff matters. The player is completely impatient.
  • δ close to 1: The player cares a lot about the future and is willing to sacrifice short-term gains for long-term benefits.

If a player receives payoff π every round, the total discounted payoff in an infinitely repeated game is:

π + δπ + δ²π + δ³π + ⋯ = π / (1 − δ)
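
A quick numerical check of this geometric-series formula, using illustrative values π = 3 and δ = 0.9:

```python
def discounted_sum(pi, delta, rounds):
    """Truncated discounted sum: pi + delta*pi + delta^2*pi + ... for `rounds` terms."""
    return sum(pi * delta**t for t in range(rounds))

pi, delta = 3.0, 0.9
closed_form = pi / (1 - delta)            # the formula's right-hand side
approx = discounted_sum(pi, delta, 500)   # converges to the closed form
```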

Role in Sustaining Cooperation

The Folk Theorem's condition of "sufficient patience" translates into a specific threshold: the critical discount factor δ*. Cooperation can be sustained as an equilibrium if and only if δ ≥ δ*.

To find δ*, you compare the temptation to defect against the long-run cost of punishment. For example, with a grim trigger strategy in a Prisoner's Dilemma:

  1. Calculate the one-time gain from defecting while the opponent cooperates.
  2. Calculate the per-round loss from being stuck in mutual defection forever (the punishment phase) versus mutual cooperation.
  3. Set the present value of cooperation equal to the present value of defecting-then-being-punished, and solve for δ.
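
These three steps can be worked through numerically for a hypothetical Prisoner's Dilemma with reward R = 3 (mutual cooperation), temptation T = 5, and punishment P = 1 (mutual defection); the payoff letters and numbers are our own illustration:

```python
R, T, P = 3, 5, 1  # hypothetical stage-game payoffs

def cooperate_value(delta):
    """Present value of mutual cooperation forever: R / (1 - delta)."""
    return R / (1 - delta)

def defect_value(delta):
    """Deviate once for T, then suffer mutual defection P forever."""
    return T + delta * P / (1 - delta)

# Setting the two present values equal and solving gives
# delta* = (T - R) / (T - P), which is 0.5 with these payoffs.
delta_star = (T - R) / (T - P)

# Sanity check on either side of the threshold:
assert cooperate_value(0.6) > defect_value(0.6)   # patient: cooperation holds
assert cooperate_value(0.4) < defect_value(0.4)   # impatient: defection pays
```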

The critical discount factor depends on the stage game's payoff structure. Games where the reward for mutual cooperation is large relative to the temptation payoff have a lower critical discount factor, meaning cooperation is easier to sustain. Games with a big temptation to cheat require more patient players (higher δ*) for cooperation to hold.
