🤑 AP Microeconomics

Game Theory Strategies

Why This Matters

Game theory is the analytical backbone of oligopoly analysis on the AP Microeconomics exam. When firms are mutually interdependent, each company's best move depends on what competitors do. You can't just apply the simple profit-maximization rules from perfect competition or monopoly. Instead, you need to think strategically about payoff matrices, dominant strategies, Nash equilibrium, and the incentives to cheat on collusive agreements. These concepts explain why cartels fall apart, why price wars happen, and why oligopolists often end up stuck in outcomes that hurt everyone.

You're being tested on your ability to read a payoff matrix, identify dominant strategies, find Nash equilibrium, and calculate the payoff change sufficient to alter a player's strategy. The College Board loves FRQs that present a two-firm game and ask you to determine equilibrium outcomes or explain why firms can't sustain collusion. Don't just memorize definitions. Know how these concepts connect to real oligopoly behavior and be ready to apply them to unfamiliar scenarios.


Equilibrium Concepts: Finding Stable Outcomes

An equilibrium in a game is an outcome that persists because no player wants to deviate on their own. Think of it as a resting point: once players land there, nobody has a reason to switch. Understanding the different types of equilibrium is the first step to solving any payoff matrix.

Nash Equilibrium

A Nash equilibrium is a set of strategies where each player is making their best response to the other player's choice. Neither player can improve their payoff by unilaterally changing what they do.

To find it in a payoff matrix:

  1. Pick one player (say, Firm A). For each possible action by Firm B, circle Firm A's highest payoff.
  2. Repeat for Firm B: for each possible action by Firm A, circle Firm B's highest payoff.
  3. Any cell where both payoffs are circled is a Nash equilibrium.

A game can have one Nash equilibrium, multiple, or (in pure strategies) none at all.
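The circling procedure is mechanical enough to automate. Here's a minimal Python sketch using a hypothetical two-firm pricing game; the payoff numbers are invented for illustration:

```python
# Best-response "circling" for a 2x2 game, a minimal sketch.
# payoffs[(a, b)] = (Firm A's payoff, Firm B's payoff) when A plays
# action a and B plays action b. Numbers are hypothetical.
ACTIONS = ["Low Price", "High Price"]
payoffs = {
    ("Low Price", "Low Price"):   (30, 30),
    ("Low Price", "High Price"):  (70, 20),
    ("High Price", "Low Price"):  (20, 70),
    ("High Price", "High Price"): (50, 50),
}

def nash_equilibria(payoffs):
    equilibria = []
    for a, b in payoffs:
        pa, pb = payoffs[(a, b)]
        # Step 1: is a Firm A's best response to b? (circle A's payoff)
        a_best = all(payoffs[(a2, b)][0] <= pa for a2 in ACTIONS)
        # Step 2: is b Firm B's best response to a? (circle B's payoff)
        b_best = all(payoffs[(a, b2)][1] <= pb for b2 in ACTIONS)
        # Step 3: both payoffs circled -> Nash equilibrium.
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria(payoffs))  # [('Low Price', 'Low Price')]
```

Notice that the lone equilibrium is (Low Price, Low Price), even though both firms would earn more at (High Price, High Price). That tension previews the Prisoner's Dilemma below.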

Dominant Strategy

A dominant strategy is a single action that gives a player the highest payoff no matter what the opponent does. If Firm A earns more by choosing "Low Price" whether Firm B picks "Low Price" or "High Price," then "Low Price" is Firm A's dominant strategy.

Not every game has one. Many games require you to think more carefully about what the opponent will do. But when a dominant strategy exists, the analysis gets much simpler: just play it.
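The check itself is just a comparison across rows. A short sketch from Firm A's perspective, with the same hypothetical payoffs as above:

```python
# Dominant strategy check, a sketch with hypothetical payoffs.
# row_payoffs[a][b] = Firm A's payoff when A plays a and B plays b.
row_payoffs = {
    "Low Price":  {"Low Price": 30, "High Price": 70},
    "High Price": {"Low Price": 20, "High Price": 50},
}

def dominant_strategy(row_payoffs):
    actions = list(row_payoffs)
    opponent_actions = list(next(iter(row_payoffs.values())))
    for a in actions:
        # a is dominant if it beats every alternative action
        # against every possible opponent action.
        if all(row_payoffs[a][b] > row_payoffs[a2][b]
               for a2 in actions if a2 != a
               for b in opponent_actions):
            return a
    return None  # no dominant strategy exists

print(dominant_strategy(row_payoffs))  # Low Price
```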

Dominant Strategy Equilibrium

When both players have a dominant strategy, the cell where those strategies intersect is the dominant strategy equilibrium. This is the easiest type of equilibrium to spot on an exam.

The classic Prisoner's Dilemma produces a dominant strategy equilibrium, but notice the twist: both players end up worse off than if they had cooperated. Having a dominant strategy doesn't guarantee a good outcome.

Compare: Nash Equilibrium vs. Dominant Strategy Equilibrium. Every dominant strategy equilibrium is a Nash equilibrium, but not every Nash equilibrium is a dominant strategy equilibrium. Dominant strategy equilibrium is the stronger condition: each player's strategy is best regardless of the opponent's choice. On an FRQ, always check for dominant strategies first. It's the fastest path to finding equilibrium.


The Prisoner's Dilemma: Why Cooperation Fails

This is the most important game structure for AP Microeconomics because it directly models oligopoly collusion problems. The core tension: individual rationality leads to collective irrationality. Both players would be better off cooperating, but each has a personal incentive to cheat.

Prisoner's Dilemma

Consider two firms that agree to keep prices high. Each firm faces a choice: honor the agreement or secretly undercut. If you cheat while your rival cooperates, you grab a huge share of the market. If you both cheat, you're both worse off than if you'd cooperated. But cheating is the dominant strategy for each firm.

  • Each player's dominant strategy is to defect (cheat), because betraying yields a better payoff no matter what the opponent does.
  • Mutual defection is the Nash equilibrium, even though mutual cooperation would make both players better off.
  • This directly models cartel instability: firms agree to restrict output, but each one has an incentive to secretly increase production.

Incentive to Cheat on Collusion

To calculate the incentive to cheat, compare two payoffs from the matrix:

  1. Find the payoff a player gets from cheating while the other cooperates.
  2. Find the payoff from mutual cooperation.
  3. Subtract: the difference is the incentive to cheat.

For example, if mutual cooperation pays each firm $50 million but cheating while the rival cooperates pays $70 million, the incentive to cheat is $70 million - $50 million = $20 million.

FRQs may also ask: what change in payoffs would eliminate the incentive to cheat? You'd need to reduce the cheating payoff (or raise the cooperation payoff) until the difference equals zero. For instance, a penalty of at least $20 million on the cheating firm would make defection no more attractive than cooperating.
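The same calculation, written out as a short Python sketch with the numbers from the example above:

```python
# Incentive-to-cheat calculation (payoffs in millions of dollars,
# taken from the worked example above).
mutual_cooperation = 50          # each firm's payoff if both honor the deal
cheat_while_rival_cooperates = 70

incentive_to_cheat = cheat_while_rival_cooperates - mutual_cooperation
print(incentive_to_cheat)  # 20

# A penalty at least this large removes the incentive to defect:
penalty = incentive_to_cheat
print(cheat_while_rival_cooperates - penalty <= mutual_cooperation)  # True
```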

Compare: Prisoner's Dilemma vs. Coordination Games. In the Prisoner's Dilemma, players have conflicting incentives that lead to a bad outcome. In coordination games, players want to match strategies but may fail due to communication problems. Recognizing which structure you're dealing with determines your entire analysis.


Strategies for Repeated Interactions

One-shot games often produce different outcomes than repeated games. When players expect to interact again, the threat of future punishment changes their incentives. Cheating might win today, but it can trigger retaliation tomorrow. If future cooperation is valuable enough, players may choose to cooperate now.

Repeated Games

In a repeated game, players face the same strategic situation multiple times. This changes the math: a player now weighs the one-time gain from cheating against the present value of lost future cooperation.

  • Cooperation becomes sustainable when the present value of staying in the cooperative agreement exceeds the one-time gain from defection.
  • This explains tacit collusion in oligopolies. Firms maintain high prices not because of a formal agreement, but because they fear triggering a price war.

Repetition doesn't guarantee cooperation, but it makes cooperation possible in situations where a one-shot game would always produce defection. The number of rounds matters too: if both players know the game ends after a fixed number of rounds, backward induction can unravel cooperation from the last round back to the first.
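One standard way to make "present value of cooperation versus one-time gain" concrete is the critical discount factor under a Grim Trigger-style threat (defined below): cooperating forever pays coop/(1 - delta), while cheating pays the defection payoff once and the punishment payoff thereafter. A sketch with hypothetical per-period payoffs (the punishment payoff of 30 is an added assumption, not from the example above):

```python
# Grim Trigger sustainability check, a sketch with hypothetical payoffs
# (per period, in millions): cooperate = 50, cheat on a cooperator = 70,
# mutual defection during the punishment phase = 30.
coop, cheat, punish = 50, 70, 30

# Cooperate forever:        coop / (1 - delta)
# Cheat once, punished:     cheat + delta * punish / (1 - delta)
# Setting the two equal and solving for delta gives the threshold:
critical_delta = (cheat - coop) / (cheat - punish)
print(critical_delta)  # 0.5 -> cooperation is sustainable when delta >= 0.5
```

The more patient the firms (the higher delta), the easier cooperation is to sustain; this is the math behind the "future punishment" logic above.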

Tit-for-Tat

Tit-for-Tat is a simple repeated-game strategy:

  1. Start by cooperating in the first round.
  2. In every subsequent round, do whatever your opponent did in the previous round.
  3. If they cooperated, you cooperate. If they defected, you defect.

This strategy rewards cooperation and punishes cheating, but it also forgives. If an opponent defects once and then returns to cooperation, Tit-for-Tat goes back to cooperating too. It only works in repeated interactions; in a one-shot game, there's no future round to use as leverage.
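The rule is simple enough to state as a one-line function of the opponent's history. A minimal sketch, with an invented history showing the forgiveness property:

```python
# Tit-for-Tat as a function of the opponent's move history.
def tit_for_tat(opponent_history):
    # Cooperate in round 1; afterwards, mirror the opponent's last move.
    if not opponent_history:
        return "C"
    return opponent_history[-1]

# Hypothetical history: opponent defects once in round 2, then returns
# to cooperating.
opponent_moves = ["C", "D", "C", "C"]
my_moves = [tit_for_tat(opponent_moves[:i])
            for i in range(len(opponent_moves) + 1)]
print(my_moves)  # ['C', 'C', 'D', 'C', 'C'] -- punishes once, then forgives
```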

Grim Trigger Strategy

Grim Trigger is the harshest punishment strategy:

  1. Start by cooperating.
  2. Continue cooperating as long as your opponent cooperates.
  3. If your opponent defects even once, defect in every round forever after.

This creates a powerful deterrent because the cost of cheating is the permanent loss of all future cooperation benefits. However, it's completely unforgiving. A single defection destroys the cooperative relationship with no path back.
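Writing Grim Trigger in the same sketch style, against the identical history, makes the contrast with Tit-for-Tat visible:

```python
# Grim Trigger as a function of the opponent's move history.
def grim_trigger(opponent_history):
    # Cooperate until the opponent's first defection, then defect forever.
    return "D" if "D" in opponent_history else "C"

# Same hypothetical history as before: one defection in round 2.
opponent_moves = ["C", "D", "C", "C"]
my_moves = [grim_trigger(opponent_moves[:i])
            for i in range(len(opponent_moves) + 1)]
print(my_moves)  # ['C', 'C', 'D', 'D', 'D'] -- never forgives
```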

Compare: Tit-for-Tat vs. Grim Trigger. Both sustain cooperation in repeated games through the threat of punishment. Tit-for-Tat allows recovery from defection; Grim Trigger doesn't. On FRQs about collusion stability, explain how the threat of future punishment (either strategy) can overcome the one-shot incentive to cheat.


Solving Dynamic Games: Thinking Ahead

Some strategic situations unfold over time, with players moving sequentially rather than simultaneously. When you can observe what the first mover did before making your choice, the analysis changes. Backward induction solves these games by starting at the end and working backward, ensuring strategies are credible at every decision point.

Backward Induction

To solve a sequential game using backward induction:

  1. Start at the final decision nodes (the last player to move). At each node, determine what that player would choose by picking their highest payoff.
  2. Move back one step. The previous player now knows what the final player will do at each node, so they choose the action that leads to their own best outcome.
  3. Continue working backward until you reach the first move.

This method reveals which threats and promises are credible. If an incumbent firm threatens to start a price war when a rival enters, but the price war would hurt the incumbent too, backward induction shows the threat isn't believable. The entrant knows the incumbent would actually accommodate entry rather than follow through.
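Here's a minimal sketch of backward induction on exactly that entry game, with hypothetical payoffs chosen so the price-war threat is not credible:

```python
# Backward induction on a small entry game, a sketch with hypothetical
# payoffs. A node is either a leaf tuple (entrant payoff, incumbent
# payoff) or a pair (player_index, {action: subtree}).
tree = (0, {  # player 0 = entrant moves first
    "Stay Out": (0, 100),
    "Enter": (1, {  # player 1 = incumbent responds to entry
        "Price War":   (-10, 20),
        "Accommodate": (40, 60),
    }),
})

def solve(node, path=()):
    if not isinstance(node[1], dict):  # leaf: just payoffs
        return node, path
    player, branches = node
    # The mover picks the action whose solved subtree pays them the most.
    best = max(branches, key=lambda a: solve(branches[a])[0][player])
    return solve(branches[best], path + (best,))

payoffs, actions = solve(tree)
print(actions, payoffs)  # ('Enter', 'Accommodate') (40, 60)
```

Working backward: the incumbent prefers Accommodate (60 > 20), so the entrant, anticipating this, enters (40 > 0). The price-war threat never gets carried out.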

Subgame Perfect Equilibrium

A subgame perfect equilibrium requires that the strategies form a Nash equilibrium in every subgame, not just the game as a whole. A subgame is any point in the game tree where a player makes a decision, along with everything that follows from that point.

This concept eliminates non-credible threats. A strategy that says "I'll do something that hurts me if you deviate" isn't credible because you wouldn't actually follow through when the moment arrives. Subgame perfect equilibrium, found through backward induction, only keeps strategies players would genuinely carry out.

Compare: Backward Induction vs. Simultaneous Game Analysis. Use backward induction when players move in sequence and can observe previous moves. Use payoff matrix analysis when players move simultaneously without knowing the other's choice. Identify the game structure before choosing your solution method.


Mixed Strategies and Zero-Sum Games

When no pure strategy Nash equilibrium exists, or when predictability is costly, players may randomize their choices. Mixed strategies assign probabilities to different actions, making opponents unable to exploit predictable behavior.

Mixed Strategy

A mixed strategy means a player doesn't commit to one action but instead plays each action with a specific probability. For example, a firm might set a high price 60% of the time and a low price 40% of the time.

  • Mixed strategy equilibria exist in games where no pure strategy equilibrium can be found (think of matching pennies or rock-paper-scissors).
  • The equilibrium probabilities are chosen so that the opponent is indifferent between their own options. That's the mathematical condition you'd use to solve for the mix: set the opponent's expected payoff equal across their choices, then solve for your probabilities.
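Here's that indifference condition solved for matching pennies, a minimal sketch using the standard plus-or-minus-1 payoff convention:

```python
# Solving for a mixed strategy via the indifference condition, a sketch.
# In matching pennies, the row player wins (+1) on a match and loses (-1)
# on a mismatch; the column player's payoffs are the opposite (zero-sum).
# col[(row_move, col_move)] = column player's payoff.
col = {("H", "H"): -1, ("H", "T"): 1,
       ("T", "H"): 1,  ("T", "T"): -1}

# Pick p = P(row plays H) so the column player's expected payoff is the
# same whether the column plays H or T:
#   p*col[H,H] + (1-p)*col[T,H] = p*col[H,T] + (1-p)*col[T,T]
p = ((col[("T", "T")] - col[("T", "H")])
     / (col[("H", "H")] - col[("T", "H")] - col[("H", "T")] + col[("T", "T")]))
print(p)  # 0.5 -- the row player should randomize 50/50
```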

Minimax Strategy

The minimax strategy focuses on minimizing your maximum possible loss. For each of your available actions, you identify the worst-case payoff, then pick the action where that worst case is least bad.

  • This is the optimal approach in zero-sum games, where one player's gain exactly equals the other's loss.
  • It's a conservative, defensive approach that guarantees a certain floor on your payoff regardless of what the opponent does.
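The worst-case comparison is a two-step lookup, sketched here with invented payoffs:

```python
# Maximin choice (minimax on losses), a sketch with hypothetical payoffs.
# payoffs[action] lists your payoff for each possible opponent action.
payoffs = {
    "Aggressive":   [8, -6, 2],
    "Conservative": [3,  1, 2],
}

# For each action, find the worst case; then pick the least-bad worst case.
worst_cases = {action: min(row) for action, row in payoffs.items()}
best_action = max(worst_cases, key=worst_cases.get)
print(worst_cases)  # {'Aggressive': -6, 'Conservative': 1}
print(best_action)  # Conservative -- guarantees a payoff floor of 1
```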

Compare: Mixed Strategy vs. Dominant Strategy. A dominant strategy is always best, so no randomization is needed. Mixed strategies emerge when no single action is always best. If you find a dominant strategy in a payoff matrix, use it. If not, consider whether the game requires randomization.


Alternative Game Structures

Not all strategic interactions follow the Prisoner's Dilemma pattern. These alternative structures help you recognize when coordination rather than conflict drives outcomes.

Coordination Games

In a coordination game, players benefit from choosing the same action. Unlike the Prisoner's Dilemma, there's no temptation to defect. The challenge is that multiple equilibria may exist, and players might fail to land on the same one.

Communication helps enormously here. If firms can signal their intentions (or if a natural focal point exists that both parties recognize), coordination becomes easier. Think of two firms choosing between compatible technology standards: both prefer to match, but they might prefer different standards.

Stag Hunt

The Stag Hunt captures a tension between cooperation and safety. Two hunters can cooperate to catch a stag (the best outcome for both) or individually hunt hares (a smaller but guaranteed payoff).

  • Unlike the Prisoner's Dilemma, there's no dominant strategy. Cooperating is best if the other player cooperates too, but hunting hares is safer if you're unsure.
  • The game has two pure strategy Nash equilibria: both hunt stag, or both hunt hare. The stag equilibrium is payoff-dominant (better for everyone), but the hare equilibrium is risk-dominant (safer).
  • This illustrates why firms might not collude even when it benefits both: the risk of being the only one cooperating can push players toward the safe, inferior outcome.
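Running the same best-response check from earlier on a hypothetical Stag Hunt matrix turns up both equilibria. A sketch, with invented payoff numbers:

```python
# Stag Hunt payoffs, hypothetical numbers: (hunter 1, hunter 2).
ACTIONS = ["Stag", "Hare"]
payoffs = {
    ("Stag", "Stag"): (4, 4),  # payoff-dominant outcome
    ("Stag", "Hare"): (0, 3),  # the lone stag hunter gets nothing
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),  # risk-dominant outcome
}

def nash_equilibria(payoffs):
    return [(a, b) for (a, b) in payoffs
            if all(payoffs[(a2, b)][0] <= payoffs[(a, b)][0] for a2 in ACTIONS)
            and all(payoffs[(a, b2)][1] <= payoffs[(a, b)][1] for b2 in ACTIONS)]

print(nash_equilibria(payoffs))  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```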

Concept                    | Best Examples
Equilibrium identification | Nash Equilibrium, Dominant Strategy, Dominant Strategy Equilibrium
Collusion and cheating     | Prisoner's Dilemma, Incentive to Cheat calculation
Sustaining cooperation     | Repeated Games, Tit-for-Tat, Grim Trigger
Sequential games           | Backward Induction, Subgame Perfect Equilibrium
Randomization              | Mixed Strategy, Minimax Strategy
Coordination problems      | Coordination Games, Stag Hunt
Cartel behavior            | Prisoner's Dilemma, Repeated Games, Tacit Collusion
FRQ calculations           | Dominant Strategy, Nash Equilibrium, Incentive to Cheat

Self-Check Questions

  1. In a payoff matrix, how do you identify whether a player has a dominant strategy, and what's the difference between finding a dominant strategy equilibrium versus a Nash equilibrium?

  2. Why does the Prisoner's Dilemma model explain cartel instability? What specific calculation would you perform to determine the incentive to cheat on a collusive agreement?

  3. Compare Tit-for-Tat and Grim Trigger strategies: how do both sustain cooperation in repeated games, and what key difference determines which is more forgiving?

  4. If an FRQ presents a sequential game where one firm moves first, which solution method should you use, and why does this differ from analyzing a simultaneous-move payoff matrix?

  5. Contrast the Prisoner's Dilemma with the Stag Hunt: in which game do players have dominant strategies, and how does the nature of the strategic tension differ between them?