Game theory is the analytical backbone of oligopoly analysis on the AP Microeconomics exam. When firms are mutually interdependent, each company's best move depends on what competitors do. You can't just apply the simple profit-maximization rules from perfect competition or monopoly. Instead, you need to think strategically about payoff matrices, dominant strategies, Nash equilibrium, and the incentives to cheat on collusive agreements. These concepts explain why cartels fall apart, why price wars happen, and why oligopolists often end up stuck in outcomes that hurt everyone.
You're being tested on your ability to read a payoff matrix, identify dominant strategies, find Nash equilibrium, and calculate the incentive sufficient to alter a player's strategy. The College Board loves FRQs that present a two-firm game and ask you to determine equilibrium outcomes or explain why firms can't sustain collusion. Don't just memorize definitions. Know how each concept connects to real oligopoly behavior and be ready to apply them to unfamiliar scenarios.
An equilibrium in a game is an outcome that persists because no player wants to deviate on their own. Think of it as a resting point: once players land there, nobody has a reason to switch. Understanding the different types of equilibrium is the first step to solving any payoff matrix.
A Nash equilibrium is a set of strategies where each player is making their best response to the other player's choice. Neither player can improve their payoff by unilaterally changing what they do.
To find it in a payoff matrix:
- For each possible choice by the opponent, mark the player's best response (the highest payoff available in that row or column).
- Repeat the process for the other player.
- Any cell where both payoffs are marked as best responses is a Nash equilibrium.
A game can have one Nash equilibrium, multiple, or (in pure strategies) none at all.
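The cell-by-cell best-response check can be sketched in code. This is a minimal illustration (not something the exam requires), and the payoff numbers are hypothetical, following a Prisoner's Dilemma pattern:

```python
# Find pure-strategy Nash equilibria in a 2x2 game by checking best responses.
# Hypothetical payoffs: payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("Low", "Low"):   (3, 3),
    ("Low", "High"):  (6, 1),
    ("High", "Low"):  (1, 6),
    ("High", "High"): (5, 5),
}
actions = ["Low", "High"]

def nash_equilibria(payoffs, actions):
    equilibria = []
    for r in actions:
        for c in actions:
            # Row player: could it gain by switching rows, holding the column fixed?
            row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
            # Column player: could it gain by switching columns, holding the row fixed?
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(nash_equilibria(payoffs, actions))  # [('Low', 'Low')]
```

With these numbers, ("Low", "Low") is the only cell where neither player can gain by deviating alone, which matches how you would mark best responses by hand.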
A dominant strategy is a single action that gives a player the highest payoff no matter what the opponent does. If Firm A earns more by choosing "Low Price" whether Firm B picks "Low Price" or "High Price," then "Low Price" is Firm A's dominant strategy.
Not every game has one. Many games require you to think more carefully about what the opponent will do. But when a dominant strategy exists, the analysis gets much simpler: just play it.
When both players have a dominant strategy, the cell where those strategies intersect is the dominant strategy equilibrium. This is the easiest type of equilibrium to spot on an exam.
The classic Prisoner's Dilemma produces a dominant strategy equilibrium, but notice the twist: both players end up worse off than if they had cooperated. Having a dominant strategy doesn't guarantee a good outcome.
Compare: Nash Equilibrium vs. Dominant Strategy Equilibrium. Every dominant strategy equilibrium is a Nash equilibrium, but not every Nash equilibrium is a dominant strategy equilibrium. Dominant strategy equilibrium is the stronger condition: each player's strategy is best regardless of the opponent's choice. On an FRQ, always check for dominant strategies first. It's the fastest path to finding equilibrium.
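The "check for dominant strategies first" advice can also be sketched as code. Again the payoffs are hypothetical; the function simply tests whether one action beats every alternative against every possible opponent move:

```python
# Check whether each player has a strictly dominant strategy (hypothetical payoffs).
# payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("Low", "Low"):   (3, 3),
    ("Low", "High"):  (6, 1),
    ("High", "Low"):  (1, 6),
    ("High", "High"): (5, 5),
}
actions = ["Low", "High"]

def dominant_strategy(player):
    """Return the player's strictly dominant action, or None if there isn't one."""
    for s in actions:
        dominant = True
        for alt in actions:
            if alt == s:
                continue
            for other in actions:
                # Compare s against alt for every possible move by the opponent.
                if player == "row":
                    if payoffs[(s, other)][0] <= payoffs[(alt, other)][0]:
                        dominant = False
                else:
                    if payoffs[(other, s)][1] <= payoffs[(other, alt)][1]:
                        dominant = False
        if dominant:
            return s
    return None

print(dominant_strategy("row"), dominant_strategy("col"))  # Low Low
```

Since both players have "Low" as a dominant strategy here, ("Low", "Low") is a dominant strategy equilibrium, which is automatically a Nash equilibrium as well.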
This is the most important game structure for AP Microeconomics because it directly models oligopoly collusion problems. The core tension: individual rationality leads to collective irrationality. Both players would be better off cooperating, but each has a personal incentive to cheat.
Consider two firms that agree to keep prices high. Each firm faces a choice: honor the agreement or secretly undercut. If you cheat while your rival cooperates, you grab a huge share of the market. If you both cheat, you're both worse off than if you'd cooperated. But cheating is the dominant strategy for each firm.
To calculate the incentive to cheat, compare two payoffs from the matrix:
- the payoff a firm earns by cheating while its rival cooperates, and
- the payoff that same firm earns under mutual cooperation.
The incentive to cheat is the difference between the two. For example, using hypothetical figures: if mutual cooperation pays each firm $10 million but cheating while the rival cooperates pays $15 million, the incentive to cheat is $5 million.
FRQs may also ask: what change in payoffs would eliminate the incentive to cheat? You'd need to reduce the cheating payoff (or raise the cooperation payoff) until the difference equals zero. In the example above, a penalty of at least $5 million on the cheating firm would make defection no more attractive than cooperating.
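The arithmetic is simple enough to write out directly. All figures here are hypothetical:

```python
# Incentive to cheat on a collusive agreement (hypothetical figures, in $ millions).
cooperate_payoff = 10   # each firm's profit if both honor the high-price agreement
cheat_payoff = 15       # a firm's profit if it cheats while the rival cooperates

# The incentive to cheat is the gain from defecting against a cooperating rival.
incentive_to_cheat = cheat_payoff - cooperate_payoff
print(incentive_to_cheat)  # 5

# A penalty at least this large makes defection no better than cooperating.
min_penalty = incentive_to_cheat
print(cheat_payoff - min_penalty)  # 10: same as cooperating, so the incentive is gone
```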
Compare: Prisoner's Dilemma vs. Coordination Games. In the Prisoner's Dilemma, players have conflicting incentives that lead to a bad outcome. In coordination games, players want to match strategies but may fail due to communication problems. Recognizing which structure you're dealing with determines your entire analysis.
One-shot games often produce different outcomes than repeated games. When players expect to interact again, the threat of future punishment changes their incentives. Cheating might win today, but it can trigger retaliation tomorrow. If future cooperation is valuable enough, players may choose to cooperate now.
In a repeated game, players face the same strategic situation multiple times. This changes the math: a player now weighs the one-time gain from cheating against the present value of lost future cooperation.
Repetition doesn't guarantee cooperation, but it makes cooperation possible in situations where a one-shot game would always produce defection. The number of rounds matters too: if both players know the game ends after a fixed number of rounds, backward induction can unravel cooperation from the last round back to the first.
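The trade-off between the one-time gain from cheating and the value of future cooperation can be made concrete with a discount factor. This sketch uses hypothetical payoffs and assumes a permanent-punishment (Grim Trigger) response to cheating, with the game repeated indefinitely:

```python
# Does the threat of permanent punishment deter cheating? (hypothetical numbers)
C = 10       # per-round payoff under mutual cooperation
D = 15       # one-time payoff from cheating while the rival cooperates
P = 5        # per-round payoff once cooperation breaks down (mutual defection)
delta = 0.8  # discount factor: how much a firm values next round's profit

# Present value of cooperating forever vs. cheating once, then punishment forever.
pv_cooperate = C / (1 - delta)            # C + delta*C + delta^2*C + ...
pv_cheat = D + delta * P / (1 - delta)    # D now, then P in every later round
print(pv_cooperate, pv_cheat)  # roughly 50 vs 35: cooperation is sustained

# Critical discount factor: cooperation holds whenever delta >= (D - C) / (D - P).
critical_delta = (D - C) / (D - P)
print(critical_delta)  # 0.5
```

With these numbers, any firm that values future profits at least half as much as current profits (delta of 0.5 or more) prefers to keep cooperating.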
Tit-for-Tat is a simple repeated-game strategy:
- Cooperate in the first round.
- In every later round, copy whatever the opponent did in the previous round.
This strategy rewards cooperation and punishes cheating, but it also forgives. If an opponent defects once and then returns to cooperation, Tit-for-Tat goes back to cooperating too. It only works in repeated interactions; in a one-shot game, there's no future round to use as leverage.
Grim Trigger is the harshest punishment strategy:
- Cooperate until the opponent defects, even once.
- After that, defect in every remaining round, forever.
This creates a powerful deterrent because the cost of cheating is the permanent loss of all future cooperation benefits. However, it's completely unforgiving. A single defection destroys the cooperative relationship with no path back.
Compare: Tit-for-Tat vs. Grim Trigger. Both sustain cooperation in repeated games through the threat of punishment. Tit-for-Tat allows recovery from defection; Grim Trigger doesn't. On FRQs about collusion stability, explain how the threat of future punishment (either strategy) can overcome the one-shot incentive to cheat.
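The difference in forgiveness is easy to see in a short simulation. Here a hypothetical rival defects once (round 3) and then returns to cooperating:

```python
# Simulate Tit-for-Tat and Grim Trigger against a rival who defects once
# (round 3) and then returns to cooperating. "C" = cooperate, "D" = defect.
rival_moves = ["C", "C", "D", "C", "C", "C"]

def tit_for_tat(rival_history):
    # Cooperate first; afterwards, copy the rival's previous move.
    return "C" if not rival_history else rival_history[-1]

def grim_trigger(rival_history):
    # Cooperate until the rival defects once, then defect forever.
    return "D" if "D" in rival_history else "C"

tft = [tit_for_tat(rival_moves[:i]) for i in range(len(rival_moves))]
grim = [grim_trigger(rival_moves[:i]) for i in range(len(rival_moves))]
print(tft)   # ['C', 'C', 'C', 'D', 'C', 'C'] — retaliates once, then forgives
print(grim)  # ['C', 'C', 'C', 'D', 'D', 'D'] — defects for the rest of the game
```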
Some strategic situations unfold over time, with players moving sequentially rather than simultaneously. When you can observe what the first mover did before making your choice, the analysis changes. Backward induction solves these games by starting at the end and working backward, ensuring strategies are credible at every decision point.
To solve a sequential game using backward induction:
1. Start at the final decision nodes and determine the best choice for the player moving there.
2. Replace each of those nodes with the payoff from that best choice.
3. Move back one stage and repeat, until you reach the first move of the game.
This method reveals which threats and promises are credible. If an incumbent firm threatens to start a price war when a rival enters, but the price war would hurt the incumbent too, backward induction shows the threat isn't believable. The entrant knows the incumbent would actually accommodate entry rather than follow through.
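The entry-deterrence example can be worked through in two backward-induction steps. The payoffs below are hypothetical, listed as (entrant, incumbent):

```python
# Backward induction on a simple entry game (hypothetical payoffs: entrant, incumbent).
# The entrant moves first; if it enters, the incumbent chooses Fight or Accommodate.
payoffs = {
    ("Stay Out", None):       (0, 10),
    ("Enter", "Fight"):       (-2, 3),   # the price war hurts the incumbent too
    ("Enter", "Accommodate"): (4, 6),
}

# Step 1: at the incumbent's node, pick the reply that maximizes its own payoff.
incumbent_best = max(["Fight", "Accommodate"],
                     key=lambda reply: payoffs[("Enter", reply)][1])

# Step 2: the entrant anticipates that reply and picks its best first move.
entrant_options = {
    "Stay Out": payoffs[("Stay Out", None)][0],
    "Enter": payoffs[("Enter", incumbent_best)][0],
}
entrant_best = max(entrant_options, key=entrant_options.get)

print(entrant_best, incumbent_best)  # Enter Accommodate — the threat isn't credible
```

Because fighting pays the incumbent 3 while accommodating pays 6, the entrant knows the price-war threat would never be carried out, so it enters.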
A subgame perfect equilibrium requires that the strategies form a Nash equilibrium in every subgame, not just the game as a whole. A subgame is any point in the game tree where a player makes a decision, along with everything that follows from that point.
This concept eliminates non-credible threats. A strategy that says "I'll do something that hurts me if you deviate" isn't credible because you wouldn't actually follow through when the moment arrives. Subgame perfect equilibrium, found through backward induction, only keeps strategies players would genuinely carry out.
Compare: Backward Induction vs. Simultaneous Game Analysis. Use backward induction when players move in sequence and can observe previous moves. Use payoff matrix analysis when players move simultaneously without knowing the other's choice. Identify the game structure before choosing your solution method.
When no pure strategy Nash equilibrium exists, or when predictability is costly, players may randomize their choices. Mixed strategies assign probabilities to different actions, making opponents unable to exploit predictable behavior.
A mixed strategy means a player doesn't commit to one action but instead plays each action with a specific probability. For example, a firm might set a high price 60% of the time and a low price 40% of the time.
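Evaluating a mixed strategy just means taking a probability-weighted average of payoffs. A minimal sketch, with hypothetical payoffs and the rival's action held fixed:

```python
# Expected payoff from a mixed strategy (hypothetical payoffs):
# the firm prices High with probability 0.6 and Low with probability 0.4.
p_high = 0.6
payoff_if_high = 5   # firm's payoff when it prices High
payoff_if_low = 8    # firm's payoff when it prices Low

expected_payoff = p_high * payoff_if_high + (1 - p_high) * payoff_if_low
print(expected_payoff)  # 0.6*5 + 0.4*8 = 6.2
```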
The minimax strategy focuses on minimizing your maximum possible loss. For each of your available actions, you identify the worst-case payoff, then pick the action where that worst case is least bad.
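The minimax logic (find each action's worst case, then pick the least-bad worst case) is a two-line computation. Payoffs here are hypothetical, from one firm's point of view:

```python
# Minimax: for each action, find the worst-case payoff, then choose the action
# whose worst case is least bad (hypothetical payoffs for one player).
payoffs = {
    "High Price": {"Rival High": 8, "Rival Low": -4},
    "Low Price":  {"Rival High": 6, "Rival Low": 2},
}

# Worst-case payoff for each of this player's actions.
worst_case = {action: min(outcomes.values()) for action, outcomes in payoffs.items()}
minimax_action = max(worst_case, key=worst_case.get)

print(worst_case)      # {'High Price': -4, 'Low Price': 2}
print(minimax_action)  # Low Price — its worst case (2) beats High Price's (-4)
```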
Compare: Mixed Strategy vs. Dominant Strategy. A dominant strategy is always best, so no randomization is needed. Mixed strategies emerge when no single action is always best. If you find a dominant strategy in a payoff matrix, use it. If not, consider whether the game requires randomization.
Not all strategic interactions follow the Prisoner's Dilemma pattern. These alternative structures help you recognize when coordination rather than conflict drives outcomes.
In a coordination game, players benefit from choosing the same action. Unlike the Prisoner's Dilemma, there's no temptation to defect. The challenge is that multiple equilibria may exist, and players might fail to land on the same one.
Communication helps enormously here. If firms can signal their intentions (or if a natural focal point exists that both parties recognize), coordination becomes easier. Think of two firms choosing between compatible technology standards: both prefer to match, but they might prefer different standards.
The Stag Hunt captures a tension between cooperation and safety. Two hunters can cooperate to catch a stag (the best outcome for both) or individually hunt hares (a smaller but guaranteed payoff).
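Running the same best-response check on a Stag Hunt matrix shows its defining features: two pure-strategy Nash equilibria and no dominant strategy. The payoff numbers are hypothetical but preserve the standard ordering (mutual stag best, hare safe):

```python
# The Stag Hunt: two pure-strategy Nash equilibria, no dominant strategy.
# Hypothetical payoffs: payoffs[(row, col)] = (row player, column player).
payoffs = {
    ("Stag", "Stag"): (4, 4),   # best outcome for both, but risky
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),   # safe but smaller payoff
}
actions = ["Stag", "Hare"]

equilibria = []
for r in actions:
    for c in actions:
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
        if row_best and col_best:
            equilibria.append((r, c))

print(equilibria)  # both mutual stag and mutual hare are self-enforcing
```

Unlike the Prisoner's Dilemma, neither player is tempted to defect from (Stag, Stag); the risk is only that the players fail to coordinate on it.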
| Exam Task | Key Concepts |
|---|---|
| Equilibrium identification | Nash Equilibrium, Dominant Strategy, Dominant Strategy Equilibrium |
| Collusion and cheating | Prisoner's Dilemma, Incentive to Cheat calculation |
| Sustaining cooperation | Repeated Games, Tit-for-Tat, Grim Trigger |
| Sequential games | Backward Induction, Subgame Perfect Equilibrium |
| Randomization | Mixed Strategy, Minimax Strategy |
| Coordination problems | Coordination Games, Stag Hunt |
| Cartel behavior | Prisoner's Dilemma, Repeated Games, Tacit Collusion |
| FRQ calculations | Dominant Strategy, Nash Equilibrium, Incentive to Cheat |
In a payoff matrix, how do you identify whether a player has a dominant strategy, and what's the difference between finding a dominant strategy equilibrium versus a Nash equilibrium?
Why does the Prisoner's Dilemma model explain cartel instability? What specific calculation would you perform to determine the incentive to cheat on a collusive agreement?
Compare Tit-for-Tat and Grim Trigger strategies: how do both sustain cooperation in repeated games, and what key difference determines which is more forgiving?
If an FRQ presents a sequential game where one firm moves first, which solution method should you use, and why does this differ from analyzing a simultaneous-move payoff matrix?
Contrast the Prisoner's Dilemma with the Stag Hunt: in which game do players have dominant strategies, and how does the nature of the strategic tension differ between them?