Greedy algorithms represent one of the most elegant problem-solving paradigms you'll encounter in data structures and algorithm design. The core principle is deceptively simple: make the locally optimal choice at each step and hope it leads to a globally optimal solution. But here's what you're really being tested on—understanding when greedy works and why it fails. Exam questions love to probe whether you can identify the conditions that guarantee optimality: optimal substructure and the greedy choice property.
These algorithms appear everywhere in technical interviews and form the foundation for understanding more complex approaches like dynamic programming. When you study these examples, don't just memorize the steps—ask yourself: "What's the greedy choice? Why does it work here? Would it work with different constraints?" That comparative thinking is exactly what FRQ prompts and coding interviews demand.
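To make the "when does greedy fail?" question concrete, here is a minimal Python sketch (the function name `greedy_coin_count` and the specific coin systems are illustrative, not from this guide) showing the same greedy rule succeeding on a canonical coin system and failing on a non-canonical one:

```python
def greedy_coin_count(coins, amount):
    """Greedily take the largest coin that still fits; may be suboptimal."""
    count = 0
    for c in sorted(coins, reverse=True):   # greedy choice: biggest coin first
        take = amount // c
        count += take
        amount -= take * c
    return count if amount == 0 else None   # None if greedy can't reach the amount

# Canonical US-style system: greedy is optimal (6 = 5 + 1 -> 2 coins).
print(greedy_coin_count([1, 5, 10, 25], 6))   # 2

# Non-canonical system {1, 3, 4}: greedy gives 4 + 1 + 1 = 3 coins,
# but the optimal answer is 3 + 3 = 2 coins.
print(greedy_coin_count([1, 3, 4], 6))        # 3 (suboptimal)
```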
These algorithms solve fundamental graph problems by greedily selecting the "cheapest" option at each step. The key insight is that local minimum-cost decisions accumulate into globally optimal solutions; for Dijkstra's shortest paths, this guarantee additionally requires non-negative edge weights, while the minimum spanning tree algorithms work for any edge weights.
Compare: Prim's vs. Kruskal's—both guarantee a minimum spanning tree, but Prim's grows from a single vertex (vertex-centric) while Kruskal's considers all edges globally (edge-centric). If an FRQ asks you to choose between them, mention graph density: Prim's for dense graphs, Kruskal's for sparse.
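As a minimal sketch of the edge-centric approach (the function name `kruskal_mst` and the tiny example graph are illustrative assumptions), here is Kruskal's algorithm in Python using a simple union-find to reject cycle-forming edges:

```python
def kruskal_mst(num_vertices, edges):
    """edges: list of (weight, u, v). Returns (total_weight, mst_edges)."""
    parent = list(range(num_vertices))

    def find(x):                             # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):            # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                         # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            mst.append((u, v, w))
    return total, mst

# Tiny example graph: 4 vertices, 5 weighted edges.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))  # MST weight 1 + 2 + 3 = 6
```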
These problems involve choosing a maximum-size subset from competing options. The greedy insight is that finishing early leaves the most room for future selections.
Compare: Activity Selection vs. Job Sequencing—both involve scheduling, but activity selection maximizes count (sort by finish time) while job sequencing maximizes profit (sort by profit). Know which sorting criterion matches which objective.
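A minimal activity-selection sketch in Python (function name and the sample intervals are illustrative assumptions), showing the "sort by finish time" greedy choice:

```python
def select_activities(intervals):
    """Greedy activity selection: sort by finish time, then take each
    activity that starts no earlier than the previous one ended.
    Maximizes the *count* of non-overlapping activities."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):  # by finish time
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Three of these five activities fit without overlap.
activities = [(1, 3), (2, 5), (4, 7), (1, 8), (8, 10)]
print(select_activities(activities))  # [(1, 3), (4, 7), (8, 10)]
```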
These problems optimize value subject to capacity constraints. The critical distinction is whether items are divisible—this determines if greedy works.
Compare: Fractional Knapsack vs. Coin Change—both seem like "take the best ratio" problems, but fractional knapsack always works (items are divisible) while coin change sometimes fails (coins are discrete). This distinction is a favorite exam topic.
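Here is a small fractional-knapsack sketch in Python (the function name and the item values are illustrative assumptions); because items are divisible, taking the best value-to-weight ratio first is provably optimal:

```python
def fractional_knapsack(capacity, items):
    """items: list of (value, weight). Greedy by value-to-weight ratio;
    optimal because any fraction of an item may be taken."""
    total_value = 0.0
    # Best ratio first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)            # take as much of this item as fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Capacity 50: take the ratio-6 and ratio-5 items fully, then 2/3 of the last.
items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(50, items))  # 240.0
```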
Huffman coding demonstrates greedy algorithm design for information theory problems. The insight is that frequent symbols should have shorter codes, and building bottom-up ensures optimality.
Compare: Huffman Coding vs. Decoding—encoding requires building the tree (O(n log n) with a heap, for n distinct symbols), while decoding just traverses it (linear in the number of bits). Both rely on the same tree, but they're distinct algorithmic processes.
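A compact sketch of the encoding side in Python (the function name `huffman_codes` and the tuple-based tree representation are illustrative assumptions), using a heap to repeatedly merge the two lowest-frequency subtrees:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table greedily: repeatedly merge the two
    lowest-frequency subtrees, bottom-up."""
    freq = Counter(text)
    # Heap entries: (frequency, tie-breaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)      # two smallest frequencies...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))  # ...merged
        counter += 1

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):            # internal node: recurse left/right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                  # leaf: record the code
            codes[node] = prefix or "0"        # single-symbol edge case
    walk(heap[0][2], "")
    return codes

print(huffman_codes("abracadabra"))  # frequent 'a' gets a shorter code than rare 'd'
```

Decoding would reuse the same tree: start at the root, follow 0/1 bits to a leaf, emit that symbol, and repeat.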
| Concept | Best Examples |
|---|---|
| Graph shortest paths | Dijkstra's Algorithm |
| Minimum spanning trees | Prim's, Kruskal's |
| Interval/activity scheduling | Activity Selection, Interval Scheduling |
| Profit maximization with deadlines | Job Sequencing |
| Value-to-weight optimization | Fractional Knapsack |
| Greedy failure cases | Coin Change (non-canonical systems), 0/1 Knapsack |
| Data compression | Huffman Coding, Huffman Decoding |
| Union-Find applications | Kruskal's Algorithm |
Both Prim's and Kruskal's algorithms produce minimum spanning trees. What data structure does each rely on, and how does graph density affect which you should choose?
Why does the greedy approach guarantee an optimal solution for the Fractional Knapsack Problem but fail for the 0/1 Knapsack Problem?
Activity Selection and Job Sequencing both involve scheduling, but they sort by different criteria. What does each sort by, and why does this difference matter for optimality?
Compare and contrast Dijkstra's Algorithm and the greedy Coin Change approach: both make "minimum cost" choices, but one always works and one sometimes fails. Explain the key difference.
If you're asked to design a compression scheme and given character frequencies, walk through the greedy choice Huffman Coding makes at each step. Why does merging the two smallest frequencies first lead to an optimal prefix-free code?