Game theory is the analytical backbone of oligopoly analysis on the AP Microeconomics exam. When firms are mutually interdependent—meaning each company's best move depends on what competitors do—you can't just apply the simple profit-maximization rules from perfect competition or monopoly. Instead, you need to think strategically about payoff matrices, dominant strategies, Nash equilibrium, and the incentives to cheat on collusive agreements. These concepts explain why cartels fall apart, why price wars happen, and why oligopolists often end up stuck in outcomes that hurt everyone.
You're being tested on your ability to read a payoff matrix, identify dominant strategies, find the Nash equilibrium, and calculate how large a payoff change would be needed to alter a player's strategy. The College Board loves FRQs that present a two-firm game and ask you to determine equilibrium outcomes or explain why firms can't sustain collusion. Don't just memorize definitions—know how each concept connects to real oligopoly behavior and be ready to apply them to unfamiliar scenarios.
Equilibrium concepts such as Nash equilibrium and dominant strategy help you identify where strategic interactions settle: the outcomes that persist because no player wants to deviate unilaterally. An equilibrium represents mutual best responses, where each player's strategy is optimal given what others are doing.
Compare: Nash Equilibrium vs. Dominant Strategy Equilibrium—both represent stable outcomes, but dominant strategy equilibrium is a subset of Nash equilibrium where each player's strategy is best regardless of the opponent's choice. If an FRQ asks you to find equilibrium, always check for dominant strategies first—it's the fastest path.
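If you want to see that checklist mechanically, here is a minimal Python sketch of a hypothetical two-firm pricing game (the firm names, strategies, and profit numbers are all made up for illustration). It tests each firm for a dominant strategy, then finds the Nash equilibria by checking for mutual best responses.

```python
# A hypothetical 2x2 pricing game. payoffs[(a_choice, b_choice)] gives
# (Firm A's profit, Firm B's profit) in millions of dollars -- numbers are made up.
payoffs = {
    ("High", "High"): (10, 10),
    ("High", "Low"):  (2, 15),
    ("Low", "High"):  (15, 2),
    ("Low", "Low"):   (5, 5),
}
strategies = ("High", "Low")

def payoff(player, own, rival):
    """Profit to `player` (0 = Firm A, 1 = Firm B) when it plays `own` and the rival plays `rival`."""
    cell = (own, rival) if player == 0 else (rival, own)
    return payoffs[cell][player]

def dominant_strategy(player):
    """Return a strategy that does at least as well as every alternative against every rival choice."""
    for s in strategies:
        others = [a for a in strategies if a != s]
        if all(payoff(player, s, r) >= payoff(player, a, r) for r in strategies for a in others):
            return s
    return None

def nash_equilibria():
    """Outcomes where each firm is already playing its best response to the other's choice."""
    equilibria = []
    for a in strategies:
        for b in strategies:
            a_best = all(payoff(0, a, b) >= payoff(0, alt, b) for alt in strategies)
            b_best = all(payoff(1, b, a) >= payoff(1, alt, a) for alt in strategies)
            if a_best and b_best:
                equilibria.append((a, b))
    return equilibria

print(dominant_strategy(0), dominant_strategy(1))  # Low Low -- both firms have a dominant strategy
print(nash_equilibria())                           # [('Low', 'Low')] -- the dominant strategy equilibrium
```

With these particular numbers, both firms have a dominant strategy, so the dominant strategy equilibrium and the Nash equilibrium coincide, which is the typical setup on AP payoff-matrix questions.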
The Prisoner's Dilemma is the most important game structure for AP Microeconomics because it directly models oligopoly collusion problems. The dilemma arises when individual rationality leads to collective irrationality—both players would be better off cooperating, but each has an incentive to cheat.
Compare: Prisoner's Dilemma vs. Coordination Games—in the Prisoner's Dilemma, players have conflicting incentives that lead to a bad outcome; in coordination games, players want to match strategies but may fail due to communication problems. Know which structure applies when analyzing oligopoly behavior.
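A worked example makes the cheating incentive concrete. The profits below are hypothetical (the same numbers as the code sketch above), with two firms deciding whether to stick to a collusive agreement or cheat by cutting price:

| | Firm B: Collude | Firm B: Cheat |
|---|---|---|
| Firm A: Collude | A earns $10m, B earns $10m | A earns $2m, B earns $15m |
| Firm A: Cheat | A earns $15m, B earns $2m | A earns $5m, B earns $5m |

Each firm's incentive to cheat is the extra profit from defecting while the rival stays loyal: $15m - $10m = $5m. Because cheating also pays when the rival cheats ($5m beats $2m), cheating is a dominant strategy for both firms, and the Nash equilibrium (Cheat, Cheat) leaves each with $5m instead of the $10m available under collusion. That gap is exactly why cartels are unstable without some enforcement mechanism.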
One-shot games often produce different outcomes than repeated games, where the shadow of the future changes players' incentives. When players expect to interact again, cooperation can become sustainable because cheating triggers future punishment.
Compare: Tit-for-Tat vs. Grim Trigger—both sustain cooperation in repeated games, but Tit-for-Tat allows recovery from defection while Grim Trigger doesn't. On FRQs about collusion stability, explain how the threat of future punishment (either strategy) can overcome the one-shot incentive to cheat.
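The difference between the two punishment strategies is easiest to see if you write each one as a decision rule. This is a minimal Python sketch with a made-up history of the rival's past moves; both rules start by cooperating, but only Tit-for-Tat forgives once the rival returns to cooperation.

```python
def tit_for_tat(rival_history):
    """Cooperate in the first period, then copy whatever the rival did last period."""
    if not rival_history:
        return "Cooperate"
    return rival_history[-1]

def grim_trigger(rival_history):
    """Cooperate until the rival defects once, then defect forever."""
    return "Defect" if "Defect" in rival_history else "Cooperate"

# Hypothetical history: the rival cheats in period 2, then returns to cooperating.
history = ["Cooperate", "Defect", "Cooperate", "Cooperate"]

print(tit_for_tat(history))   # Cooperate -- forgives once the rival cooperates again
print(grim_trigger(history))  # Defect    -- one defection ends cooperation permanently
```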
Some strategic situations unfold over time, with players moving sequentially rather than simultaneously. Backward induction solves these games by starting at the end and working backward, ensuring strategies are credible at every decision point.
Compare: Backward Induction vs. Simultaneous Game Analysis—use backward induction when players move in sequence and can observe previous moves; use payoff matrix analysis when players move simultaneously without knowing the other's choice. Identify the game structure before choosing your solution method.
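As a sketch of the method (the game and payoffs are hypothetical), consider an entry game: a potential entrant moves first and chooses Enter or Stay Out, then the incumbent responds with Fight or Accommodate. Backward induction first asks what the incumbent would actually do at its decision node, then uses that answer to evaluate the entrant's choice.

```python
# Hypothetical payoffs: (entrant profit, incumbent profit), in millions of dollars.
incumbent_choices = {
    "Fight":       (-2, 3),   # a price war hurts both firms
    "Accommodate": (4, 5),    # the firms share the market
}
stay_out_payoff = (0, 10)     # if the entrant stays out, the incumbent keeps its monopoly

# Step 1 (last mover): the incumbent picks the response that maximizes ITS own payoff.
incumbent_best = max(incumbent_choices, key=lambda c: incumbent_choices[c][1])

# Step 2 (first mover): the entrant compares entering (given that response) with staying out.
enter_payoff = incumbent_choices[incumbent_best][0]
entrant_best = "Enter" if enter_payoff > stay_out_payoff[0] else "Stay Out"

print(incumbent_best)  # Accommodate -- fighting is not a credible threat once entry has occurred
print(entrant_best)    # Enter
```

The point of the exercise is credibility: the incumbent might like to threaten a price war, but backward induction shows the threat would not be carried out, so the entrant ignores it.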
When no pure strategy Nash equilibrium exists, or when predictability is costly, players may randomize their choices. Mixed strategies assign probabilities to different actions, making opponents unable to exploit predictable behavior.
Compare: Mixed Strategy vs. Dominant Strategy—a dominant strategy is always best (no randomization needed), while mixed strategies emerge when no single action is always best. If you find a dominant strategy, use it; if not, consider whether randomization makes sense.
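Mixed strategies go beyond what most AP FRQs require, but a small sketch (using hypothetical matching-pennies payoffs) shows why randomizing 50/50 works: it leaves the opponent indifferent between their options, so there is nothing predictable to exploit.

```python
# Matching pennies (hypothetical): Column wins 1 when the coins differ, loses 1 when they match.
col_payoff = {("H", "H"): -1, ("H", "T"): 1, ("T", "H"): 1, ("T", "T"): -1}

def col_expected(col_choice, p_row_heads):
    """Column's expected payoff when Row plays Heads with probability p_row_heads."""
    return (p_row_heads * col_payoff[("H", col_choice)]
            + (1 - p_row_heads) * col_payoff[("T", col_choice)])

# At p = 0.5, Column is indifferent between Heads and Tails, so Row cannot be exploited.
print(col_expected("H", 0.5), col_expected("T", 0.5))    # 0.0 0.0
# At any other p, one of Column's choices does strictly better, so predictability costs Row.
print(col_expected("H", 0.75), col_expected("T", 0.75))  # -0.5 0.5
```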
Not all strategic interactions follow the Prisoner's Dilemma pattern. Alternative structures such as coordination games and the Stag Hunt help you recognize when coordination rather than conflict drives outcomes.
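For a hedged illustration, here is a hypothetical Stag Hunt payoff matrix (the payoff numbers are arbitrary): hunting the stag together pays the most, but hunting hare is the safe choice if you doubt your partner.

| | Player 2: Stag | Player 2: Hare |
|---|---|---|
| Player 1: Stag | 4, 4 | 0, 3 |
| Player 1: Hare | 3, 0 | 3, 3 |

Neither player has a dominant strategy (Stag is best if the partner hunts stag, Hare is best otherwise), and there are two pure-strategy Nash equilibria, (Stag, Stag) and (Hare, Hare). The tension is about trusting your partner enough to coordinate on the better equilibrium, not about resisting a temptation to cheat, which is what separates this structure from the Prisoner's Dilemma.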
The table below maps common exam topics to the concepts you should reach for when they appear.

| Topic | Key Concepts |
|---|---|
| Equilibrium identification | Nash Equilibrium, Dominant Strategy, Dominant Strategy Equilibrium |
| Collusion and cheating | Prisoner's Dilemma, Incentive to Cheat calculation |
| Sustaining cooperation | Repeated Games, Tit-for-Tat, Grim Trigger |
| Sequential games | Backward Induction, Subgame Perfect Equilibrium |
| Randomization | Mixed Strategy, Minimax Strategy |
| Coordination problems | Coordination Games, Stag Hunt |
| Cartel behavior | Prisoner's Dilemma, Repeated Games, Tacit Collusion |
| FRQ calculations | Dominant Strategy, Nash Equilibrium, Incentive to Cheat |
Test yourself with these review questions:

1. In a payoff matrix, how do you identify whether a player has a dominant strategy, and what's the difference between finding a dominant strategy equilibrium and finding a Nash equilibrium?
2. Why does the Prisoner's Dilemma model explain cartel instability? What specific calculation would you perform to determine the incentive to cheat on a collusive agreement?
3. Compare Tit-for-Tat and Grim Trigger strategies: how do both sustain cooperation in repeated games, and what key difference determines which is more forgiving?
4. If an FRQ presents a sequential game where one firm moves first, which solution method should you use, and why does this differ from analyzing a simultaneous-move payoff matrix?
5. Contrast the Prisoner's Dilemma with the Stag Hunt: in which game do players have dominant strategies, and how does the nature of the strategic tension differ between them?