Why This Matters
Game theory is the analytical engine behind nearly every strategic business decision you'll encounter. Whether you're analyzing why competitors undercut prices, how to structure a negotiation, or why firms in an oligopoly behave the way they do, game theory provides the framework. You're being tested on your ability to identify equilibrium concepts, strategic interdependence, and the conditions that lead to cooperation versus competition.
These concepts connect directly to market structure analysis, pricing strategy, and competitive dynamics. When you see a case about firms choosing output levels or bidders strategizing in an auction, you need to recognize which game-theoretic model applies and what outcome it predicts. Don't just memorize definitions. Know what strategic situation each concept explains and when to apply it.
Equilibrium Concepts: Predicting Stable Outcomes
These concepts help you identify where strategic interactions "settle." An equilibrium is the outcome that persists because no player wants to deviate on their own. Understanding equilibrium is essential for predicting market outcomes and competitor behavior.
Nash Equilibrium
- No player can improve by changing strategy alone. Each player's choice is the best response to what everyone else is doing. If you'd stick with your current strategy after learning what your opponents chose, you're at a Nash Equilibrium.
- Can exist in pure or mixed strategies. Pure means each player picks a specific action. Mixed means players randomize across actions with calculated probabilities.
- Not always efficient. Nash outcomes can be suboptimal for everyone (the Prisoner's Dilemma is the classic case). This gap between equilibrium and efficiency is exactly why regulation or coordination mechanisms sometimes matter in markets.
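The deviation test in the first bullet is mechanical enough to sketch in code. A minimal illustration (payoff numbers are hypothetical, using the Prisoner's Dilemma structure mentioned above): scan every cell of a payoff matrix and keep the cells where neither player can gain by switching alone.

```python
# Find pure-strategy Nash equilibria by brute force: a cell survives if
# neither player can do better by unilaterally changing their own action.
def pure_nash(payoffs, rows, cols):
    equilibria = []
    for r in rows:
        for c in cols:
            u_row, u_col = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_row for r2 in rows)
            col_ok = all(payoffs[(r, c2)][1] <= u_col for c2 in cols)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Illustrative Prisoner's Dilemma payoffs (higher is better)
pd = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
print(pure_nash(pd, ["cooperate", "defect"], ["cooperate", "defect"]))
# [('defect', 'defect')]
```

The only equilibrium is mutual defection with payoffs (1, 1), even though (3, 3) was available to both players — the equilibrium-versus-efficiency gap the third bullet describes.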
Dominant Strategy
- Yields the highest payoff regardless of opponents' choices. This simplifies analysis because you don't need to predict what others will do. If one action beats every alternative no matter what, that's your dominant strategy.
- Not all games have one. But when dominant strategies exist for all players, they converge to a Nash Equilibrium automatically. A dominant strategy equilibrium is a special, stronger case of Nash.
- Key shortcut in problem-solving. Always check for dominant strategies first. If you find them, you've found the equilibrium without needing to do more complex analysis.
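The "check for dominant strategies first" shortcut is easy to automate. A sketch with hypothetical payoffs for a two-firm pricing game, where each firm picks a "high" or "low" price:

```python
# Return the row player's dominant strategy, if one exists: an action that
# is (weakly) best against every possible column action.
def row_dominant(payoffs, rows, cols):
    for a in rows:
        if all(payoffs[(a, c)][0] >= payoffs[(r, c)][0]
               for c in cols for r in rows):
            return a
    return None  # no dominant strategy in this game

# Hypothetical pricing game: undercutting ("low") pays off no matter
# what the rival does.
pricing = {
    ("high", "high"): (6, 6), ("high", "low"): (1, 8),
    ("low",  "high"): (8, 1), ("low",  "low"): (3, 3),
}
print(row_dominant(pricing, ["high", "low"], ["high", "low"]))  # low
```

By symmetry the column player's dominant strategy is also "low", so (low, low) is a dominant strategy equilibrium — and therefore automatically a Nash Equilibrium.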
Compare: Nash Equilibrium vs. Dominant Strategy: A dominant strategy guarantees a Nash Equilibrium, but Nash can exist without dominant strategies. On exams, identify dominant strategies first; if none exist, then solve for Nash directly.
Strategic Dilemmas: When Rationality Backfires
These situations reveal the tension between individual optimization and collective welfare. The core mechanism is that pursuing self-interest leads to outcomes worse than cooperation, a pattern that explains cartel breakdowns, public goods problems, and the rationale for regulatory interventions.
Prisoner's Dilemma
- Individual rationality leads to collective irrationality. Both players defect even though mutual cooperation would yield higher payoffs for both. Each player reasons: "No matter what the other does, I'm better off defecting." When both think this way, they land on the worst joint outcome.
- Dominant strategy is to defect. That's what makes it a dilemma. The rational choice for each individual produces an irrational result for the group.
- Foundation for understanding cartels. Two firms in an oligopoly might both profit from keeping prices high, but each has an incentive to secretly cut prices and steal market share. This is why price-fixing agreements tend to collapse without enforcement mechanisms.
Coordination Games
- Players benefit from matching choices. Unlike the Prisoner's Dilemma, interests are aligned. The challenge isn't conflicting incentives; it's figuring out which option to coordinate on.
- Multiple equilibria create uncertainty. If two firms are each choosing between Technology A and Technology B, both preferring to match, there are two Nash Equilibria. Without communication or a focal point, players may end up mismatched at an inferior outcome.
- Critical for technology adoption and standards. This explains network effects, platform competition, and why markets sometimes lock into inferior technologies (think VHS over Betamax, or QWERTY keyboards persisting despite arguably better layouts).
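The multiplicity of equilibria is easy to see concretely. A sketch with hypothetical payoffs for the Technology A/B example (both firms prefer matching; here they also mildly prefer A):

```python
# Enumerate pure-strategy Nash equilibria of a symmetric 2x2 game.
def pure_nash(payoffs, actions):
    equilibria = []
    for a1 in actions:
        for a2 in actions:
            u1, u2 = payoffs[(a1, a2)]
            if all(payoffs[(x, a2)][0] <= u1 for x in actions) and \
               all(payoffs[(a1, y)][1] <= u2 for y in actions):
                equilibria.append((a1, a2))
    return equilibria

# Matching pays; mismatching pays nothing; A is the better standard.
tech = {("A", "A"): (3, 3), ("A", "B"): (0, 0),
        ("B", "A"): (0, 0), ("B", "B"): (2, 2)}
print(pure_nash(tech, ["A", "B"]))  # [('A', 'A'), ('B', 'B')]
```

Both (A, A) and (B, B) are stable: once there, neither firm deviates alone — including at the inferior (B, B) standard, which is the lock-in story behind the VHS and QWERTY examples.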
Compare: Prisoner's Dilemma vs. Coordination Games: Both can produce suboptimal outcomes, but for opposite reasons. In the Prisoner's Dilemma, players want different things (each wants to defect while the other cooperates); in coordination games, they want the same thing but can't guarantee they'll align. FRQs often ask you to identify which structure applies to a business scenario.
Dynamic Strategy: Timing and Repetition
When games unfold over time, strategy changes fundamentally. The ability to observe, react, and build reputation transforms the strategic landscape.
Sequential Games
- Order of moves matters. First-movers may gain advantage by committing to a strategy (e.g., building capacity to deter entry), or late-movers may benefit from observing and reacting.
- Solved using backward induction. Start from the final decision node and work backward to determine optimal play at each stage. This method identifies the subgame perfect equilibrium, which eliminates non-credible threats.
- Represented by game trees (extensive form). Unlike the payoff matrices you use for simultaneous games, game trees map out every possible sequence of decisions and outcomes. They're essential for analyzing negotiations, market entry timing, and contract design.
Backward induction in steps:
- Identify the last player to move and determine their optimal choice at each final decision node.
- Knowing what the last player will do, move one step earlier and determine the previous player's optimal choice.
- Continue working backward until you reach the first move.
- The resulting path through the tree is the subgame perfect equilibrium.
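These steps translate directly into a recursive function. A minimal sketch on a hypothetical market-entry tree (payoff numbers invented for illustration): an entrant moves first, and if it enters, the incumbent chooses to fight or accommodate.

```python
# Backward induction on a game tree. Leaves hold payoff tuples; internal
# nodes name the mover (an index into the payoff tuple) and list children.
def solve(node):
    """Return (payoffs, path) of optimal play from this node."""
    if "payoffs" in node:
        return node["payoffs"], []
    player = node["player"]
    best = None
    for action, child in node["children"].items():
        payoffs, path = solve(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

tree = {
    "player": 0,  # entrant moves first
    "children": {
        "stay out": {"payoffs": (0, 10)},
        "enter": {
            "player": 1,  # incumbent responds
            "children": {
                "fight":       {"payoffs": (-1, 2)},
                "accommodate": {"payoffs": (3, 5)},
            },
        },
    },
}
print(solve(tree))  # ((3, 5), ['enter', 'accommodate'])
```

The incumbent's threat to fight is non-credible (2 < 5 once entry has already happened), so backward induction predicts entry followed by accommodation — the subgame perfect equilibrium.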
Repeated Games
- Future interactions change present incentives. The "shadow of the future" enables cooperation that wouldn't occur in a one-shot game. If you know you'll face the same competitor next quarter, cheating today risks retaliation tomorrow.
- Reputation becomes a strategic asset. Players can punish defection and reward cooperation over time, making cooperative behavior self-enforcing even without contracts.
- Tit-for-tat and trigger strategies emerge. Tit-for-tat (cooperate first, then copy your opponent's last move) is simple and effective. Trigger strategies (cooperate until the other defects, then punish forever) can also sustain cooperation in infinitely repeated games, provided players value future payoffs enough (i.e., the discount factor is sufficiently high).
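A short simulation makes the "shadow of the future" concrete. This sketch (payoffs are illustrative Prisoner's Dilemma values, not from the text) pits tit-for-tat against itself and against an unconditional defector:

```python
# Repeated Prisoner's Dilemma: PAYOFF[(my_move, their_move)] seen from my side.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]  # start nice, then mirror

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat1, strat2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(a1, a2)]
        h1.append(a1); h2.append(a2)
        s1 += p1; s2 += p2
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation sustained
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection
```

Two tit-for-tat players sustain full cooperation; against a defector, tit-for-tat loses only the first round before retaliating, so cheating stops paying once interactions repeat.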
Compare: Sequential vs. Repeated Games: Sequential games involve different players moving in order within one interaction; repeated games involve the same interaction occurring multiple times. Both add temporal dynamics but through different mechanisms. Sequential games are about commitment and observation; repeated games are about reputation and reciprocity.
Strategy Under Uncertainty: Randomization
When predictability is a weakness, randomization becomes optimal. Mixed strategies prevent exploitation by keeping opponents unable to anticipate your moves.
Mixed Strategy
- Players randomize across actions with specific probabilities. This isn't random guessing. The probabilities are precisely calculated to form an equilibrium.
- Necessary when no pure strategy Nash Equilibrium exists. Games like matching pennies have no stable outcome if both players pick definite actions, because each player always wants to do the opposite of the other. Mixing solves this. Nash's theorem guarantees that every finite game has at least one equilibrium (pure or mixed).
- Probabilities are chosen to make opponents indifferent. You set your mix so that your opponent gets the same expected payoff from each of their options. That way, they can't exploit any predictable pattern in your play.
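The indifference calculation takes only a few lines for a 2x2 game. A sketch using matching pennies (standard zero-sum payoffs; the formula assumes no pure-strategy equilibrium exists, so the denominator is nonzero):

```python
# Row player's equilibrium probability p of playing action 0, chosen so the
# column player's expected payoff is identical across both columns:
#   p*u_c(0,0) + (1-p)*u_c(1,0) = p*u_c(0,1) + (1-p)*u_c(1,1)
def row_mix(payoffs):
    a = payoffs[(0, 0)][1]; b = payoffs[(1, 0)][1]
    c = payoffs[(0, 1)][1]; d = payoffs[(1, 1)][1]
    return (d - b) / (a - b - c + d)

# Matching pennies: row wins on a match, column wins on a mismatch.
mp = {(0, 0): (1, -1), (0, 1): (-1, 1),
      (1, 0): (-1, 1), (1, 1): (1, -1)}
print(row_mix(mp))  # 0.5: randomize 50/50 so nothing is exploitable
```

The symmetric game gives the intuitive 50/50 mix; with asymmetric payoffs the same formula yields unequal probabilities — still determined by the *opponent's* payoffs, not your own, exactly because the mix exists to keep the opponent indifferent.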
Compare: Pure Strategy vs. Mixed Strategy: Pure strategies specify exact actions; mixed strategies specify probability distributions over actions. Use mixed strategies when any predictable choice would be exploited by opponents.
Market Applications: Competition and Negotiation
These concepts apply game theory directly to business contexts. Mastering these models is essential for market structure analysis on exams.
Oligopoly Models (Cournot and Bertrand)
- Cournot: firms compete on quantity. Each firm independently chooses how much to produce. Market price then emerges from total supply via the demand curve. The Nash Equilibrium yields output levels between the monopoly outcome (low output, high price) and perfect competition (high output, low price), giving firms moderate profits.
- Bertrand: firms compete on price. With identical products and no capacity constraints, price competition drives the equilibrium price down to marginal cost, leaving firms with zero economic profit. This is the Bertrand paradox: just two competitors can replicate the perfectly competitive outcome.
- Model choice depends on industry characteristics. Industries where capacity is expensive to adjust (oil refining, airlines) tend to fit Cournot. Industries where firms can easily scale output to meet demand at their posted price tend to fit Bertrand. Product differentiation softens the Bertrand paradox, allowing firms to maintain some pricing power even under price competition.
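The Cournot "in between" claim can be verified with a hypothetical linear market. Assuming inverse demand P = a − bQ and constant marginal cost c, each firm's best response is q_i = (a − c − b·q_j) / (2b), and the symmetric equilibrium output is q* = (a − c) / (3b):

```python
# Symmetric two-firm Cournot with linear demand P = a - b*Q and marginal
# cost c (illustrative numbers, not from the text).
a, b, c = 100, 1, 10

q_each = (a - c) / (3 * b)           # Cournot output per firm
Q_cournot = 2 * q_each               # total Cournot output
Q_monopoly = (a - c) / (2 * b)       # joint-profit (cartel) total output
Q_competitive = (a - c) / b          # P = MC benchmark (Bertrand outcome)

print(Q_monopoly, Q_cournot, Q_competitive)  # 45.0 60.0 90.0
print(a - b * Q_cournot)  # Cournot price 40.0, between monopoly (55) and MC (10)
```

The numbers land exactly where the text predicts: Cournot total output (60) sits between the monopoly quantity (45) and the competitive quantity (90), with price between the monopoly price and marginal cost.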
Bargaining Theory
- Analyzes how surplus gets divided. When a buyer values a good at $100 and a seller's cost is $60, there's $40 of surplus. Bargaining theory predicts how that $40 gets split.
- Outcomes depend on bargaining power, outside options, and information. A party's reservation price (the worst deal they'd accept) is shaped by their next-best alternative. Better outside options mean more bargaining power.
- Cooperative vs. non-cooperative approaches. The Nash bargaining solution assumes parties reach an efficient agreement and splits surplus based on relative bargaining power. Strategic (non-cooperative) models like Rubinstein's alternating-offers game allow for inefficiency and costly delay, with outcomes depending on patience and discount rates.
- Information is power. The party with better information about valuations or alternatives typically captures more surplus. This is why sellers try to learn your budget and buyers try to hide it.
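The patience point can be made quantitative with Rubinstein's alternating-offers model. Under its standard assumptions, the first proposer's equilibrium share of the surplus is (1 − δ₂) / (1 − δ₁δ₂), where δᵢ are the per-round discount factors. A sketch applying it to the $40 surplus from the example above:

```python
# Rubinstein alternating-offers split: the proposer's share shrinks toward
# 50/50 as both sides become more patient (discount factors near 1).
def rubinstein_share(delta1, delta2):
    """First proposer's equilibrium share of the surplus."""
    return (1 - delta2) / (1 - delta1 * delta2)

surplus = 40.0
for d in (0.5, 0.9, 0.99):
    share = rubinstein_share(d, d)
    print(f"delta={d}: proposer ${surplus * share:.2f}, "
          f"responder ${surplus * (1 - share):.2f}")
```

Impatient parties (low δ) concede a large first-mover advantage; as patience rises, the split approaches the even division predicted by the symmetric Nash bargaining solution.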
Auction Theory
- Format affects bidding behavior and revenue. English (ascending price), Dutch (descending price), first-price sealed-bid, and second-price sealed-bid auctions each create different strategic incentives. In a second-price sealed-bid auction (Vickrey auction), bidding your true value is actually a dominant strategy.
- Winner's curse: winning may mean overpaying. In common-value auctions (where the item has the same underlying value for all bidders, like an oil lease), the winner tends to be the bidder with the most optimistic estimate. If you won, ask yourself: "Did I win because I knew something others didn't, or because I overestimated?"
- Strategic underbidding is rational. Sophisticated bidders shade their bids below their value estimates to protect against the winner's curse. The more bidders in a common-value auction, the more you should shade your bid.
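For the private-value case, the dominance of truthful bidding in a second-price auction can be checked by simulation. A sketch with invented numbers — note this models *private* values, where shading is a mistake; the winner's-curse rationale for shading applies in common-value settings, which this toy model does not capture:

```python
import random

# Second-price sealed-bid (Vickrey) auction: highest bid wins, and the
# winner pays the second-highest bid.
def vickrey(bids):
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]  # (winner index, price paid)

random.seed(1)
my_value = 100.0
truthful_profit = shaded_profit = 0.0
for _ in range(10_000):
    rivals = [random.uniform(0, 120) for _ in range(3)]
    w, price = vickrey([my_value] + rivals)          # bid true value
    truthful_profit += my_value - price if w == 0 else 0.0
    w, price = vickrey([0.8 * my_value] + rivals)    # shade bid by 20%
    shaded_profit += my_value - price if w == 0 else 0.0

print(truthful_profit >= shaded_profit)  # True: shading only forfeits profitable wins
```

Whenever the shaded bid wins, the truthful bid would have won at the same price; the truthful bid also wins the extra cases where the best rival bid falls between 80 and 100 — and those wins are profitable, since the price paid is below the bidder's value. Shading can never help here, which is why truth-telling is weakly dominant.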
Compare: Cournot vs. Bertrand: Same market structure (oligopoly), opposite competitive variables (quantity vs. price), dramatically different outcomes (positive profits vs. zero profits). Exams frequently ask you to explain why the same industry might behave differently depending on which model applies.
Quick Reference Table
| Strategic question | Relevant concepts |
| --- | --- |
| Equilibrium prediction | Nash Equilibrium, Dominant Strategy |
| Cooperation failures | Prisoner's Dilemma, Coordination Games |
| Temporal dynamics | Sequential Games, Repeated Games |
| Uncertainty and randomization | Mixed Strategy |
| Quantity competition | Cournot Model |
| Price competition | Bertrand Model |
| Negotiation outcomes | Bargaining Theory |
| Market mechanism design | Auction Theory |
Self-Check Questions
- A firm knows its best response regardless of competitor actions. Is this a Nash Equilibrium, a dominant strategy, or both? Explain the relationship between these concepts.
- Two streaming platforms would both benefit from adopting the same video codec, but neither wants to switch first. Is this a Prisoner's Dilemma or a Coordination Game? What distinguishes them?
- Compare Cournot and Bertrand competition: Why does the same market structure (oligopoly) produce such different profit outcomes depending on the competitive variable?
- In a repeated game, how does the "shadow of the future" change strategic incentives compared to a one-shot game? Give a business example where this matters.
- An FRQ describes a sealed-bid auction where bidders have similar estimates of an item's true value. What strategic concern should bidders have, and how should it affect their bids?