Greedy algorithms solve optimization problems by making locally optimal choices at each step, aiming for a globally optimal solution. You're being tested not just on knowing these algorithms exist, but on understanding when and why the greedy approach works. That means recognizing two key properties: the greedy choice property (a locally optimal choice can always be extended to some globally optimal solution) and optimal substructure (an optimal solution contains optimal solutions to its subproblems).
These applications span graph algorithms, data compression, scheduling, and resource allocation. Exam questions frequently ask you to prove greedy correctness, analyze time complexity, or identify when greedy fails. Don't just memorize the algorithms. Know what structural property each problem exploits, and be ready to contrast greedy solutions with dynamic programming alternatives.
Graph-based greedy algorithms build solutions edge-by-edge or vertex-by-vertex. The reason this works is that committing to locally best edges or paths doesn't prevent you from finding the global optimum, as long as certain conditions hold (like non-negative weights).
An MST connects all vertices in a weighted, undirected graph with the minimum possible total edge weight. Both Kruskal's and Prim's guarantee optimality, but they use different greedy strategies.
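To make the edge-by-edge strategy concrete, here is a minimal sketch of Kruskal's algorithm (function and variable names are illustrative): sort edges by weight, then greedily accept each edge unless union-find shows it would close a cycle.

```python
def kruskal_mst(num_vertices, edges):
    """Minimum spanning tree via Kruskal's greedy edge selection.

    edges: list of (weight, u, v) tuples; vertices are 0..num_vertices-1.
    Returns (total_weight, chosen_edges).
    """
    parent = list(range(num_vertices))

    def find(x):  # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):          # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                       # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen
```

Prim's differs only in its greedy criterion: it grows a single tree from a start vertex, always adding the cheapest edge crossing the cut between tree and non-tree vertices.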
Dijkstra's finds the shortest path from a single source to all other vertices, but only in graphs with non-negative edge weights.
Time complexity is O((V + E) log V) with a binary heap. The algorithm fails on negative weights because once a vertex is finalized, its distance can't be revised, and a negative edge could later provide a shorter path.
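A compact sketch of the heap-based version (names are illustrative, using an adjacency-list dict):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph: {u: [(v, w), ...]} with w >= 0."""
    dist = {source: 0}
    heap = [(0, source)]                  # (tentative distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry; u was already finalized
        for v, w in graph.get(u, []):
            nd = d + w                    # greedy: relax via the closest vertex
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Note the `continue` on stale heap entries: Python's `heapq` has no decrease-key, so outdated distances are simply skipped when popped.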
Compare: Kruskal's and Dijkstra's both use greedy edge selection with priority structures, but MST algorithms consider global edge weights while Dijkstra's tracks cumulative path distances. If a question asks about greedy graph algorithms, clarify whether you're optimizing total weight (MST) or path length (shortest path).
These problems share a common structure: given a set of tasks with constraints, select or order them optimally. The greedy insight is that sorting by the right criterion (usually finish time or deadline) enables optimal sequential selection.
The goal is to select the maximum number of non-overlapping activities. This is the classic greedy scheduling problem.
Correctness is proven via an exchange argument: you can take any optimal solution and swap its first activity for the one with the earliest finish time without reducing the count. This shows the greedy choice is always safe.
This is equivalent to activity selection. You're selecting the maximum number of non-overlapping intervals from a set defined by start and end times. Greedy by earliest finish time guarantees leaving maximum room for subsequent intervals. The overall complexity is O(n log n) due to sorting; the selection phase itself is linear.
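The earliest-finish-time strategy fits in a few lines; here is a minimal sketch (interval representation and names are illustrative):

```python
def select_activities(intervals):
    """Maximum set of non-overlapping (start, finish) intervals.

    Greedy: sort by finish time, then keep every interval that starts
    at or after the last accepted finish time.
    """
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```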
Here the goal shifts: maximize total profit by scheduling jobs before their deadlines, where each job takes one time unit.
Prioritizing high-profit jobs ensures maximum total profit when slots are limited.
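One common way to implement this, sketched below with illustrative names: sort jobs by decreasing profit, and place each job in the latest free slot at or before its deadline (leaving earlier slots open for tighter deadlines).

```python
def job_sequencing(jobs):
    """jobs: list of (profit, deadline); each job takes one unit of time.

    Greedy: try jobs in decreasing profit, scheduling each in the latest
    free slot <= its deadline. Returns the total profit achieved.
    """
    max_deadline = max(d for _, d in jobs)
    slot = [None] * (max_deadline + 1)    # slot[t] holds the job run at time t
    total = 0
    for profit, deadline in sorted(jobs, reverse=True):
        for t in range(min(deadline, max_deadline), 0, -1):
            if slot[t] is None:           # latest free slot before the deadline
                slot[t] = profit
                total += profit
                break                     # job scheduled; move to next job
    return total
```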
Compare: Activity Selection maximizes count (sort by finish time) while Job Sequencing maximizes profit (sort by profit). Know which sorting criterion matches which objective.
Greedy compression algorithms build optimal encodings by prioritizing frequent elements. The key mechanism is constructing prefix-free codes, where no codeword is a prefix of another. This guarantees unambiguous decoding.
Huffman coding assigns variable-length codes based on character frequency. More frequent characters get shorter codes, which minimizes the expected message length.
Time complexity: O(n log n), where n is the number of distinct characters. Huffman coding is proven optimal among all prefix-free encoding schemes for known frequency distributions.
Decoding reconstructs the original data by traversing the Huffman tree. Read each bit: go left for 0, right for 1. When you reach a leaf, output that character and return to the root. The prefix-free property guarantees there's never ambiguity about where one codeword ends and the next begins. Decoding runs in O(m) time for m encoded bits, since each bit requires constant work.
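The build-and-decode cycle can be sketched as follows (tree representation and names are illustrative): a min-heap repeatedly merges the two rarest subtrees, and decoding exploits the prefix-free property to detect codeword boundaries.

```python
import heapq

def huffman_codes(freq):
    """Build prefix-free codes from {char: frequency} using a min-heap.

    Returns {char: bitstring}. Assumes at least two distinct characters.
    """
    # Heap entries: (frequency, tiebreak, tree); a tree is a char or a pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:                  # greedy: merge the two rarest subtrees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left edge = 0
            walk(tree[1], prefix + "1")   # right edge = 1
        else:
            codes[tree] = prefix          # leaf: record the full codeword
    walk(heap[0][2], "")
    return codes

def huffman_decode(bits, codes):
    """Decode a bitstring; the first codeword match is always correct
    because no codeword is a prefix of another."""
    inverse = {code: ch for ch, code in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```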
Compare: Encoding requires building the tree (O(n log n)) while decoding only traverses it (O(m) for m bits). Exam questions may ask you to construct a Huffman tree from frequencies or trace a decoding sequence.
These problems involve distributing limited resources optimally. Greedy works when items can be ranked by a single efficiency metric and selected accordingly.
You want to maximize value in a weight-limited knapsack, and items can be divided into fractions (unlike the 0/1 version).
Greedy is optimal here because you can take fractions. With 0/1 knapsack, you must take an item entirely or not at all, which means a high-ratio item might waste too much capacity. That's why 0/1 knapsack requires dynamic programming.
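A minimal sketch of the ratio-based greedy (item representation and names are illustrative):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Greedy by value/weight ratio.

    Returns the maximum achievable total value as a float.
    """
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take the whole item, or the
        total += value * (take / weight)  # fraction that still fits
        capacity -= take
    return total
```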
The goal is to find the minimum number of coins to make a target amount. The greedy approach is simple: always pick the largest denomination that fits, and repeat.
For canonical coin systems like US coins (1, 5, 10, 25), this works perfectly. But for non-canonical systems, greedy can fail. With coins {1, 3, 4} and target 6, greedy picks {4, 1, 1} (3 coins), but the optimal answer is {3, 3} (2 coins). For non-canonical systems, you need dynamic programming.
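Both approaches and the counterexample above can be verified with a short sketch (function names are illustrative):

```python
def greedy_coins(coins, amount):
    """Always take the largest coin that fits; correct only for
    canonical coin systems."""
    used = []
    for c in sorted(coins, reverse=True):
        k, amount = divmod(amount, c)     # as many of this coin as fit
        used.extend([c] * k)
    return used if amount == 0 else None

def min_coins_dp(coins, amount):
    """Dynamic programming: minimum coin count for any coin system."""
    INF = float('inf')
    best = [0] + [INF] * amount           # best[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None
```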
Compare: Fractional vs. 0/1 Knapsack is a classic exam question on greedy limitations. Fractional allows greedy (sort by ratio), but 0/1 requires DP because you can't take partial items to "fine-tune" the solution.
Some greedy algorithms solve number-theoretic problems by iteratively selecting the largest valid component.
An Egyptian fraction represents any fraction as a sum of distinct unit fractions (fractions of the form 1/n).
The greedy algorithm works like this: given x/y, subtract the largest unit fraction that doesn't exceed it, which is 1/⌈y/x⌉. Then repeat on the remainder.
This greedy approach always terminates and produces a valid representation, but it's not always optimal in terms of fewest fractions. The greedy decomposition may produce more terms than the minimum possible.
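The greedy decomposition is a short loop; here is a minimal sketch (names are illustrative), using integer arithmetic so large denominators stay exact:

```python
def egyptian_fractions(num, den):
    """Greedy decomposition of num/den (0 < num < den) into unit fractions.

    At each step, subtract the largest unit fraction 1/ceil(den/num).
    Returns the list of unit-fraction denominators.
    """
    denominators = []
    while num > 0:
        unit = -(-den // num)             # integer ceiling of den/num
        denominators.append(unit)
        # num/den - 1/unit = (num*unit - den) / (den*unit)
        num, den = num * unit - den, den * unit
    return denominators
```

For example, 6/14 decomposes to 1/3 + 1/11 + 1/231, even though the two-term 3/7 = 1/7 + 1/7 is disallowed (terms must be distinct); greedy still guarantees termination because the numerator strictly decreases each step.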
| Concept | Best Examples |
|---|---|
| Graph optimization (edges/paths) | MST (Kruskal's, Prim's), Dijkstra's |
| Scheduling by finish time | Activity Selection, Interval Scheduling |
| Scheduling by profit | Job Sequencing with Deadlines |
| Data compression | Huffman Coding, Huffman Decoding |
| Resource allocation with divisibility | Fractional Knapsack |
| Greedy works conditionally | Coin Change (canonical systems only) |
| Greedy not optimal | Egyptian Fraction, 0/1 Knapsack, Coin Change (non-canonical) |
| Requires non-negative weights | Dijkstra's Algorithm |
Both Activity Selection and Interval Scheduling use the same greedy strategy. What sorting criterion do they share, and why does it guarantee optimality?
Compare Kruskal's and Prim's algorithms: what data structures does each use, and how do their greedy selection strategies differ?
Why does the greedy approach work for Fractional Knapsack but fail for 0/1 Knapsack? What property does the fractional version have that the 0/1 version lacks?
Given a coin system {1, 3, 4}, explain why greedy fails to find the minimum coins for amount 6. What algorithm would you use instead?
(FRQ-style) Prove that Dijkstra's algorithm fails on graphs with negative edge weights by constructing a small counterexample. Then explain what property of greedy algorithms is violated.