Algorithm design techniques give you a toolkit for tackling complex problems efficiently. Greedy, divide-and-conquer, and dynamic programming each solve different kinds of problems well, and choosing the wrong one can mean the difference between an optimal solution and a slow or incorrect one. This section focuses on how to compare these three approaches and pick the right one for a given problem.
Algorithm Design Techniques
Greedy vs divide-and-conquer vs dynamic programming
These three techniques differ in how they break down and solve problems. Understanding each one's core strategy helps you recognize which fits a given situation.
Greedy algorithms make the locally optimal choice at each step, hoping those choices add up to a globally optimal solution. They're often the fastest and simplest to implement, but they don't always produce the best answer. They work correctly only when the problem has the greedy choice property, meaning local optima genuinely lead to a global optimum. Classic examples: Huffman coding, Dijkstra's shortest path, activity selection.
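As a concrete sketch of the greedy strategy, here is activity selection (the function name and interval format are illustrative, not fixed by the text): always take the activity that finishes earliest among those compatible with what has been picked so far.

```python
def select_activities(intervals):
    """Greedy activity selection: choose the maximum number of
    non-overlapping activities by always taking the one that
    finishes earliest."""
    # Sorting by finish time makes the earliest-finishing activity
    # the locally optimal (and here provably globally optimal) choice.
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with what we've picked
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
# → [(1, 4), (5, 7), (8, 9)]
```

Note that the loop never revisits a decision: once an activity is taken, it stays taken. That irreversibility is the hallmark of a greedy algorithm.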
Divide-and-conquer algorithms recursively split a problem into smaller subproblems, solve each one independently, then combine the results. The key word here is independently: the subproblems don't share work or overlap. Examples include merge sort, quicksort, and binary search. This technique is a natural fit when a problem cleanly divides into parts that don't depend on each other.
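Merge sort, mentioned above, makes the split/solve/combine shape easy to see; this is a minimal sketch:

```python
def merge_sort(xs):
    """Divide-and-conquer sort: split the list, solve each half
    independently, then merge the sorted halves."""
    if len(xs) <= 1:              # base case: trivially sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # the two subproblems are independent:
    right = merge_sort(xs[mid:])  # neither half needs the other's work
    # Combine step: merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```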
Dynamic programming (DP) also breaks problems into subproblems, but it handles the case where those subproblems overlap, meaning a naive recursive approach would re-solve the same subproblem many times. DP stores (memoizes or tabulates) the results of subproblems so each is solved only once. It requires two properties:
- Optimal substructure: the optimal solution to the whole problem can be built from optimal solutions to its subproblems.
- Overlapping subproblems: the same subproblems recur during recursion.
Examples: Fibonacci sequence, longest common subsequence, 0/1 knapsack problem.
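The Fibonacci sequence is the smallest example that shows both DP styles; a sketch of each (memoization via the standard-library `functools.lru_cache`, tabulation by hand):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down DP (memoization): the cache ensures each overlapping
    subproblem fib(k) is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_tab(n):
    """Bottom-up DP (tabulation), here collapsed to two variables
    since each step only needs the previous two results."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30), fib_tab(30))  # → 832040 832040
```

Without the cache, the recursive version re-solves `fib(k)` exponentially many times; with it, both versions run in linear time.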
A useful way to remember the distinction: divide-and-conquer has independent subproblems, while dynamic programming has overlapping subproblems. If subproblems overlap but you use divide-and-conquer anyway, you'll end up doing redundant work.

Selection of algorithm design techniques
Choosing the right technique is a structured process. Here's how to think through it:
- Analyze the problem's structural properties.
  - Does it have optimal substructure? Can you build the overall optimal solution from optimal solutions to smaller pieces?
  - Does it have overlapping subproblems? Would a recursive solution re-solve the same subproblems repeatedly?
  - Does it have the greedy choice property? Can you prove that always picking the locally best option leads to the globally best result?
- Clarify your requirements.
  - Time complexity: How fast does the algorithm need to be? Can you afford O(n²), or do you need O(n log n)?
  - Space complexity: How much memory is available? Can you afford a 2D table, or do you need O(1) extra space?
  - Optimality: Do you need the provably best solution, or is a good approximation acceptable?
- Match technique to properties.
  - If the problem has the greedy choice property (and you can prove it), use greedy. It'll be fast and simple.
  - If the problem splits into independent subproblems that combine cleanly, use divide-and-conquer.
  - If the problem has both optimal substructure and overlapping subproblems, use dynamic programming.
A common mistake is jumping to DP when greedy would work, or using greedy when the greedy choice property doesn't actually hold. Always verify the structural properties before committing to a technique.
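Coin change is the classic illustration of that mistake; this sketch contrasts a greedy pick-the-largest-coin approach with the DP answer (function names are illustrative):

```python
def greedy_coins(coins, amount):
    """Greedy: always take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    """DP: minimum number of coins, built from optimal subproblems
    best[a] = fewest coins summing to a."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

# With coins {1, 3, 4} and amount 6, greedy takes 4+1+1 (3 coins),
# but the optimum is 3+3 (2 coins): the greedy choice property fails.
print(greedy_coins([1, 3, 4], 6), dp_coins([1, 3, 4], 6))  # → 3 2
```

With a canonical coin system like {1, 5, 10, 25} the two agree, which is exactly why the greedy choice property has to be verified per problem, not assumed.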

Trade-offs in algorithm design
| | Greedy | Divide-and-Conquer | Dynamic Programming |
|---|---|---|---|
| Typical time | O(n) or O(n log n) | O(n log n) (varies) | Depends on table size, e.g. O(n²) or O(nW) |
| Typical space | O(1) or O(n) | O(log n) to O(n) (recursion stack) | O(n) to O(n²) (memoization table) |
| Implementation | Usually straightforward | Requires recursion; combining step can be tricky | Recurrence relations and table management add complexity |
| Optimality guarantee | Only with greedy choice property | Yes (for correctly formulated problems) | Yes (when optimal substructure holds) |
A few things to note from this table:
- Greedy wins on speed and simplicity, but you sacrifice guaranteed optimality unless the problem specifically supports it.
- Divide-and-conquer's space cost comes mainly from the recursion stack. For something like merge sort, you also need O(n) auxiliary space for merging.
- DP's space cost can be significant. For a 2D table like in the 0/1 knapsack problem, you're looking at O(nW) space, where n is the number of items and W is the capacity. Sometimes you can optimize this with space-reduction tricks (e.g., keeping only the previous row of the table, which cuts the space to O(W)).
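The row-reuse trick for knapsack can be sketched like this; note the single array of length W+1 in place of a full 2D table, and the reversed inner loop that keeps each item from being counted twice:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack with the rolling-row optimization: one row of
    length capacity+1 instead of an n-by-W table. Iterating capacities
    in reverse ensures each item is used at most once."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for cap in range(capacity, w - 1, -1):  # reverse: no item reuse
            best[cap] = max(best[cap], best[cap - w] + v)
    return best[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7 (items of weight 2 and 3)
```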
Combining techniques for complex problems
Some real-world problems are too complex for a single technique. In those cases, you can apply different strategies to different parts of the problem.
Example: Traveling Salesman Problem (TSP)
The TSP asks for the shortest route visiting all cities exactly once. A pure brute-force approach is O(n!), which is unusable for large inputs.
- DP (Held-Karp algorithm) solves TSP exactly in O(n² · 2ⁿ) time, which is much better than brute force but still exponential. This works well for small to moderate numbers of cities.
- Greedy heuristics (like nearest-neighbor) give fast approximate solutions in O(n²) time but don't guarantee optimality.
- A combined approach might use DP to solve small subsets of cities optimally, then use a greedy strategy to connect those subsets together.
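The nearest-neighbor heuristic mentioned above can be sketched in a few lines; the distance matrix here is a made-up toy example, not data from the text:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: from the current city, always travel to
    the nearest unvisited city. O(n²) time, no optimality guarantee."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        here = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[here][c])
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Symmetric toy distance matrix for 4 cities (illustrative only).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbor_tour(dist))  # → [0, 1, 3, 2]
```

On adversarial inputs this tour can be far from optimal, which is exactly the trade-off the bullet above describes: speed in exchange for the optimality guarantee.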
General process for combining techniques:
- Decompose the complex problem into subproblems.
- Identify which technique best fits each subproblem based on its properties.
- Solve each subproblem with its matched technique.
- Combine the subproblem solutions, verifying that the combined result is still correct.
- Analyze the overall time and space complexity of the combined approach. Sometimes the combination introduces overhead that needs to be accounted for.
The goal is to exploit each technique's strengths where they apply, rather than forcing one approach onto the entire problem.