
AP Microeconomics

Game Theory Strategies


Why This Matters

Game theory is the analytical backbone of oligopoly analysis on the AP Microeconomics exam. When firms are mutually interdependent—meaning each company's best move depends on what competitors do—you can't just apply the simple profit-maximization rules from perfect competition or monopoly. Instead, you need to think strategically about payoff matrices, dominant strategies, Nash equilibrium, and the incentives to cheat on collusive agreements. These concepts explain why cartels fall apart, why price wars happen, and why oligopolists often end up stuck in outcomes that hurt everyone.

You're being tested on your ability to read a payoff matrix, identify dominant strategies, find Nash equilibrium, and calculate the payoff change needed to alter a player's strategy. The College Board loves FRQs that present a two-firm game and ask you to determine equilibrium outcomes or explain why firms can't sustain collusion. Don't just memorize definitions—know how each concept connects to real oligopoly behavior and be ready to apply them to unfamiliar scenarios.


Equilibrium Concepts: Finding Stable Outcomes

These concepts help you identify where strategic interactions settle—the outcomes that persist because no player wants to deviate unilaterally. An equilibrium represents mutual best responses, where each player's strategy is optimal given what others are doing.

Nash Equilibrium

  • Mutual best response—a situation where no player can benefit by changing their strategy while others keep theirs unchanged
  • Stable outcome in oligopoly games; once reached, neither firm has an incentive to deviate unilaterally
  • Key for payoff matrices—find the cell where both players are simultaneously playing their best response to each other

Dominant Strategy

  • Best choice regardless of opponent's action—if one strategy yields a higher payoff no matter what the other player does, it's dominant
  • Simplifies equilibrium analysis because you don't need to guess what opponents will do; just play your dominant strategy
  • Not always present—many games lack a dominant strategy for one or both players, requiring deeper analysis

Dominant Strategy Equilibrium

  • Both players have dominant strategies—the equilibrium occurs where these strategies intersect in the payoff matrix
  • Easiest equilibrium to identify on exams; check each player's best response to every possible opponent action
  • Prisoner's Dilemma outcome—even when both players have dominant strategies, the result can be collectively suboptimal

Compare: Nash Equilibrium vs. Dominant Strategy Equilibrium—both represent stable outcomes, but dominant strategy equilibrium is a subset of Nash equilibrium where each player's strategy is best regardless of the opponent's choice. If an FRQ asks you to find equilibrium, always check for dominant strategies first—it's the fastest path.
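The circling-best-responses routine can be sketched in code. A minimal Python check using a hypothetical two-firm pricing game (the payoff numbers are invented, not from any exam): it finds each player's best responses, tests for a dominant strategy, and locates the Nash equilibrium exactly as you would on a payoff matrix.

```python
# Hypothetical two-firm pricing game.
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("High", "High"): (10, 10),
    ("High", "Low"):  (2, 12),
    ("Low",  "High"): (12, 2),
    ("Low",  "Low"):  (5, 5),
}
actions = ["High", "Low"]

def best_responses(player):
    """Map each opponent action to this player's best reply (or replies)."""
    br = {}
    for opp in actions:
        def payoff(own):
            cell = (own, opp) if player == 0 else (opp, own)
            return payoffs[cell][player]
        best = max(payoff(a) for a in actions)
        br[opp] = [a for a in actions if payoff(a) == best]
    return br

def dominant_strategy(player):
    """An action that is a best response to every opponent action, else None."""
    br = best_responses(player)
    for a in actions:
        if all(a in replies for replies in br.values()):
            return a
    return None

def nash_equilibria():
    """Cells where both players are simultaneously best-responding."""
    br0, br1 = best_responses(0), best_responses(1)
    return [(r, c) for r in actions for c in actions
            if r in br0[c] and c in br1[r]]

print(dominant_strategy(0), dominant_strategy(1))  # Low Low
print(nash_equilibria())                           # [('Low', 'Low')]
```

With these numbers, "Low" is dominant for both firms, so the dominant strategy equilibrium (Low, Low) is also the unique Nash equilibrium—the fastest-path check described above.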


The Prisoner's Dilemma: Why Cooperation Fails

This is the most important game structure for AP Microeconomics because it directly models oligopoly collusion problems. The dilemma arises when individual rationality leads to collective irrationality—both players would be better off cooperating, but each has an incentive to cheat.

Prisoner's Dilemma

  • Each player's dominant strategy is to defect (cheat)—betraying the other yields a better payoff regardless of what the opponent does
  • Mutual defection is the Nash equilibrium even though mutual cooperation would make both players better off
  • Models cartel instability—firms agree to restrict output but each has an incentive to secretly increase production

Incentive to Cheat on Collusion

  • Calculate the gain from defecting—compare the payoff from cheating (while the other cooperates) to the payoff from mutual cooperation
  • This calculation appears on FRQs—you may be asked to determine what payoff change would eliminate the incentive to cheat
  • Explains why cartels collapse—the short-term gain from cheating often outweighs the benefits of sustained cooperation
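The calculation in the first bullet can be written out directly. The profit numbers below are invented for illustration only:

```python
# Hypothetical collusion game: each value is one firm's weekly profit
# (in $ thousands). These numbers are made up, not from any exam.
collude_both = 8    # each firm's profit if both restrict output
cheat_alone = 12    # cheater's profit if it defects while the rival colludes
both_cheat = 5      # each firm's profit if both defect

# Incentive to cheat = payoff from defecting (while the other cooperates)
# minus the payoff from mutual cooperation.
incentive_to_cheat = cheat_alone - collude_both
print(incentive_to_cheat)  # 4

# An FRQ variant: what penalty would eliminate the incentive to cheat?
# Any fine at least as large as the gain from defecting.
min_fine = incentive_to_cheat
```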

Compare: Prisoner's Dilemma vs. Coordination Games—in the Prisoner's Dilemma, players have conflicting incentives that lead to a bad outcome; in coordination games, players want to match strategies but may fail due to communication problems. Know which structure applies when analyzing oligopoly behavior.


Strategies for Repeated Interactions

One-shot games often produce different outcomes than repeated games, where the shadow of the future changes players' incentives. When players expect to interact again, cooperation can become sustainable because cheating triggers future punishment.

Repeated Games

  • Multiple interactions change incentives—players consider future payoffs, not just immediate gains from cheating
  • Cooperation becomes sustainable when the present value of future cooperation exceeds the one-time gain from defection
  • Explains tacit collusion in oligopolies—firms maintain high prices because they fear triggering price wars
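The present-value comparison in the bullets above can be made concrete. A rough Python sketch with invented payoffs (R = reward for mutual cooperation, T = temptation payoff from defecting, P = punishment payoff), assuming a grim-trigger punishment and a per-period discount factor delta:

```python
# Hypothetical per-period payoffs (not from any exam):
# cooperate forever: R each period; defect once: T now, then P forever.
R, T, P = 8, 12, 5

def cooperation_sustainable(delta):
    """Cooperation holds when PV(cooperating) >= PV(defecting, then punished).
    PV(cooperate) = R / (1 - delta)
    PV(defect)    = T + delta * P / (1 - delta)
    """
    return R / (1 - delta) >= T + delta * P / (1 - delta)

# Threshold discount factor: delta* = (T - R) / (T - P)
critical = (T - R) / (T - P)
print(round(critical, 3))             # 0.571
print(cooperation_sustainable(0.9))   # True: patient firms sustain collusion
print(cooperation_sustainable(0.3))   # False: impatient firms cheat
```

The design choice to compare present values, rather than single-period payoffs, is exactly the "shadow of the future" idea: patience (a high delta) makes the stream of cooperation profits outweigh the one-time gain from cheating.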

Tit-for-Tat

  • Start cooperating, then mirror opponent's last move—reward cooperation with cooperation, punish defection with defection
  • Simple and effective at sustaining cooperation; forgives past defection if opponent returns to cooperation
  • Requires repeated interaction—useless in one-shot games where there's no future to consider

Grim Trigger Strategy

  • Cooperate until opponent defects, then defect forever—the harshest possible punishment for cheating
  • Strong deterrent because it imposes the largest possible cost on defection—the permanent loss of all future cooperation benefits
  • Less forgiving than Tit-for-Tat—a single defection destroys cooperation permanently, even if accidental

Compare: Tit-for-Tat vs. Grim Trigger—both sustain cooperation in repeated games, but Tit-for-Tat allows recovery from defection while Grim Trigger doesn't. On FRQs about collusion stability, explain how the threat of future punishment (either strategy) can overcome the one-shot incentive to cheat.
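To see the forgiveness difference in action, here is a toy Python simulation (the opponent's move sequence is invented) of both strategies facing a single defection in round 3:

```python
# Opponent defects once in round 3, then returns to cooperating.
# "C" = cooperate, "D" = defect. Hypothetical sequence for illustration.
opponent = ["C", "C", "D", "C", "C", "C"]

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def grim_trigger(history):
    """Cooperate until the opponent ever defects, then defect forever."""
    return "D" if "D" in history else "C"

def play(strategy):
    """Run the strategy against the opponent's fixed move sequence."""
    return [strategy(opponent[:i]) for i in range(len(opponent))]

print(play(tit_for_tat))   # ['C', 'C', 'C', 'D', 'C', 'C'] punishes once, forgives
print(play(grim_trigger))  # ['C', 'C', 'C', 'D', 'D', 'D'] never forgives
```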


Solving Dynamic Games: Thinking Ahead

Some strategic situations unfold over time, with players moving sequentially rather than simultaneously. Backward induction solves these games by starting at the end and working backward, ensuring strategies are credible at every decision point.

Backward Induction

  • Solve from the end to the beginning—determine what the last mover will do, then work backward to earlier decisions
  • Reveals credible threats and promises—only strategies that players would actually follow through on matter
  • Essential for sequential games—like a firm deciding whether to enter a market where an incumbent might retaliate

Subgame Perfect Equilibrium

  • Nash equilibrium in every subgame—strategies must be optimal not just overall, but at every possible decision point
  • Eliminates non-credible threats—a threat to take an action that hurts yourself isn't believable
  • Refinement of Nash equilibrium for extensive-form (sequential) games; found using backward induction
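Backward induction and the non-credible-threat idea can be illustrated with a small entry game. The payoffs below are hypothetical:

```python
# Hypothetical entry game. The entrant moves first (Enter / Stay Out);
# if it enters, the incumbent chooses Fight (price war) or Accommodate.
# Payoffs: (entrant, incumbent).
payoffs = {
    ("Stay Out", None):       (0, 10),
    ("Enter", "Fight"):       (-2, 3),
    ("Enter", "Accommodate"): (4, 5),
}

# Step 1: solve the last mover. The incumbent picks its best response to entry.
incumbent_choice = max(["Fight", "Accommodate"],
                       key=lambda a: payoffs[("Enter", a)][1])

# Step 2: work backward. The entrant anticipates that choice.
enter_payoff = payoffs[("Enter", incumbent_choice)][0]
entrant_choice = ("Enter" if enter_payoff > payoffs[("Stay Out", None)][0]
                  else "Stay Out")

print(incumbent_choice)  # Accommodate: fighting hurts the incumbent too
print(entrant_choice)    # Enter: the threat to fight is not credible
```

Because fighting pays the incumbent 3 but accommodating pays 5, the threat of a price war would not actually be carried out, so the subgame perfect equilibrium is (Enter, Accommodate).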

Compare: Backward Induction vs. Simultaneous Game Analysis—use backward induction when players move in sequence and can observe previous moves; use payoff matrix analysis when players move simultaneously without knowing the other's choice. Identify the game structure before choosing your solution method.


Mixed Strategies and Zero-Sum Games

When no pure strategy Nash equilibrium exists, or when predictability is costly, players may randomize their choices. Mixed strategies assign probabilities to different actions, making opponents unable to exploit predictable behavior.

Mixed Strategy

  • Randomize over pure strategies—assign probabilities to different actions to keep opponents guessing
  • Equilibrium in games without pure strategy equilibrium—some games only have mixed strategy Nash equilibria
  • Creates unpredictability—useful when being predictable allows opponents to exploit your strategy

Minimax Strategy

  • Minimize your maximum possible loss—focus on the worst-case scenario and choose the strategy that makes it least bad
  • Optimal in zero-sum games—where one player's gain exactly equals the other's loss
  • Conservative approach—guarantees a certain payoff regardless of opponent's strategy

Compare: Mixed Strategy vs. Dominant Strategy—a dominant strategy is always best (no randomization needed), while mixed strategies emerge when no single action is always best. If you find a dominant strategy, use it; if not, consider whether randomization makes sense.
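Mixed strategies come from an indifference condition: each player randomizes so that the opponent's actions all earn the same expected payoff, leaving nothing to exploit. A quick Python check using matching pennies, the classic zero-sum example (not an AP-specific game):

```python
# Matching pennies: the row player wins 1 if the coins match, loses 1
# otherwise (zero-sum), so there is no pure-strategy equilibrium.
# Suppose the row player plays Heads with probability p.
# Column's expected payoff from Heads: -1*p + 1*(1 - p)
# Column's expected payoff from Tails:  1*p - 1*(1 - p)
# Indifference requires the two to be equal, which gives p = 1/2.
p = 1 / 2
col_heads = -1 * p + 1 * (1 - p)
col_tails = 1 * p - 1 * (1 - p)
print(col_heads == col_tails)  # True: column cannot exploit row's play
```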


Alternative Game Structures

Not all strategic interactions follow the Prisoner's Dilemma pattern. These alternative structures help you recognize when coordination rather than conflict drives outcomes.

Coordination Games

  • Players benefit from matching choices—unlike the Prisoner's Dilemma, both players want to coordinate on the same action
  • Multiple equilibria possible—the challenge is selecting which equilibrium to coordinate on
  • Communication helps—signaling or focal points can help players achieve the better equilibrium

Stag Hunt

  • Cooperation yields the best outcome, but requires trust—hunting the stag together beats hunting hares alone, but only if both cooperate
  • Risk of coordination failure—if you're unsure your partner will cooperate, the safe choice (hare) is tempting
  • Illustrates trust vs. safety tradeoff—relevant for understanding why firms might not collude even when it benefits both
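A quick sketch (hypothetical payoffs) confirms what makes the Stag Hunt different from the Prisoner's Dilemma: neither action is dominant, and there are two pure-strategy equilibria.

```python
# Hypothetical symmetric Stag Hunt payoffs (row player's payoff shown):
# both hunt stag: 4; hunt hare regardless of partner: 3; hunt stag alone: 0.
payoff = {("Stag", "Stag"): 4, ("Stag", "Hare"): 0,
          ("Hare", "Stag"): 3, ("Hare", "Hare"): 3}

def best_response(opp):
    """The row player's best reply to the partner's action."""
    return max(["Stag", "Hare"], key=lambda own: payoff[(own, opp)])

print(best_response("Stag"))  # Stag: match a trusting partner
print(best_response("Hare"))  # Hare: play safe against a cautious one
```

Because the best response changes with the partner's action, (Stag, Stag) and (Hare, Hare) are both equilibria; the tension is trust, not the temptation to cheat.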

Concept | Best Examples
Equilibrium identification | Nash Equilibrium, Dominant Strategy, Dominant Strategy Equilibrium
Collusion and cheating | Prisoner's Dilemma, Incentive to Cheat calculation
Sustaining cooperation | Repeated Games, Tit-for-Tat, Grim Trigger
Sequential games | Backward Induction, Subgame Perfect Equilibrium
Randomization | Mixed Strategy, Minimax Strategy
Coordination problems | Coordination Games, Stag Hunt
Cartel behavior | Prisoner's Dilemma, Repeated Games, Tacit Collusion
FRQ calculations | Dominant Strategy, Nash Equilibrium, Incentive to Cheat

Self-Check Questions

  1. In a payoff matrix, how do you identify whether a player has a dominant strategy, and what's the difference between finding a dominant strategy equilibrium versus a Nash equilibrium?

  2. Why does the Prisoner's Dilemma model explain cartel instability? What specific calculation would you perform to determine the incentive to cheat on a collusive agreement?

  3. Compare Tit-for-Tat and Grim Trigger strategies: how do both sustain cooperation in repeated games, and what key difference determines which is more forgiving?

  4. If an FRQ presents a sequential game where one firm moves first, which solution method should you use, and why does this differ from analyzing a simultaneous-move payoff matrix?

  5. Contrast the Prisoner's Dilemma with the Stag Hunt: in which game do players have dominant strategies, and how does the nature of the strategic tension differ between them?