Introduction to Dynamic Programming
Dynamic programming solves complex optimization problems by breaking them into smaller subproblems and storing the results so you never solve the same subproblem twice. This technique is essential whenever a brute-force recursive approach would waste time recomputing the same values over and over. Two properties signal that DP applies: overlapping subproblems and optimal substructure.
Characteristics of Dynamic Programming
Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems. For example, the shortest path from A to C through B is the shortest path from A to B plus the shortest path from B to C. If a problem doesn't have this property, DP won't work.
Overlapping subproblems means a naive recursive solution would solve the same subproblems many times. The classic example is the Fibonacci sequence: computing F(n) recursively calls F(n-2) twice, F(n-3) three times, and so on. Without storing results, the call tree explodes exponentially.
DP addresses this redundancy through two strategies:
- Memoization (top-down): Write the solution recursively, but cache each result in a table (dictionary or array). Before making a recursive call, check the cache first. If the answer is already there, return it immediately.
- Tabulation (bottom-up): Build a table iteratively, starting from the base cases and working up to the final answer. You fill entries in an order that guarantees every subproblem you need has already been computed.
Both approaches yield the same time complexity. Memoization is often easier to write because it mirrors the recursive structure. Tabulation avoids recursion overhead and makes space optimization easier.
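The two strategies can be sketched on Fibonacci itself; the function names `fib_memo` and `fib_tab` below are illustrative, not from the source:

```python
from functools import lru_cache

# Top-down (memoization): cache each result so every fib_memo(k)
# is computed at most once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): start from the base cases and build upward.
# Keeping only the last two values is the space optimization mentioned later.
def fib_tab(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Note how the memoized version mirrors the naive recursion exactly, while the tabulated version needs an explicit evaluation order.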
Applications of Dynamic Programming
- Optimization problems: resource allocation, scheduling, knapsack variants
- Graph algorithms: shortest paths (Floyd-Warshall, Bellman-Ford), network flow
- Sequence analysis: DNA sequence alignment, longest common subsequence (LCS), edit distance
- Combinatorics: counting paths in a grid, binomial coefficients, Fibonacci numbers
The common thread is that all of these involve overlapping subproblems and optimal substructure.

Applying Dynamic Programming
Solutions for Classic Problems
Knapsack Problem (0/1)
You have n items, each with a weight and value, and a knapsack with capacity W. The goal is to maximize total value without exceeding the capacity.
- Define the subproblem: dp[i][w] = maximum value achievable using the first i items with remaining capacity w.
- Write the recurrence: For each item i, you either skip it or take it (if it fits): dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]). The first term is "skip item i"; the second is "take item i."
- Set base cases: dp[0][w] = 0 for all w (no items means no value), and dp[i][0] = 0 for all i (no capacity means no value).
- Fill the table row by row from i = 1 to n, column by column from w = 0 to W. The answer is dp[n][W].
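The steps above can be sketched as a bottom-up table fill (a minimal sketch, assuming 0-indexed Python lists for the item data):

```python
def knapsack_01(weights, values, capacity):
    """0/1 Knapsack, bottom-up. dp[i][w] = best value using the
    first i items with capacity w."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]  # base cases: all zeros
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # skip item i
            if weights[i - 1] <= w:  # take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```

For example, with weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7, the best choice is items of weight 3 and 4 for a total value of 9.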
Longest Common Subsequence (LCS)
Given two strings X and Y of lengths m and n, find the length of their longest common subsequence.
- Define the subproblem: dp[i][j] = length of the LCS of the first i characters of X and the first j characters of Y.
- Write the recurrence:
  - If X[i] == Y[j]: dp[i][j] = dp[i-1][j-1] + 1
  - Otherwise: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
- Set base cases: dp[0][j] = 0 and dp[i][0] = 0 (an empty string has no common subsequence with anything).
- Fill the table row by row. The answer is dp[m][n].
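The same recipe in code, as a minimal sketch (the name `lcs_length` is illustrative):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y, bottom-up."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # row/column 0 are the base cases
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:          # characters match: extend the LCS
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                             # drop one character from either string
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

For instance, "ABCBDAB" and "BDCABA" share the subsequence "BCBA", so the result is 4.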

Optimization of Dynamic Programming
Memoization (Top-Down)
- Write the recursive solution as you normally would.
- Add a lookup table (hash map or array) initialized to a sentinel value (e.g., -1 or None).
- At the start of each recursive call, check if the result is already in the table. If so, return it.
- Otherwise, compute the result, store it in the table, then return it.
This turns an exponential recursive solution into a polynomial one without changing the recursive logic.
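As a sketch of this pattern on another problem from the applications list, counting paths in a grid; the helper name `grid_paths` and the dictionary cache are illustrative choices, not from the source:

```python
def grid_paths(rows, cols, memo=None):
    """Count paths from the top-left to the bottom-right of a rows x cols
    grid, moving only right or down. Memoized top-down recursion."""
    if memo is None:
        memo = {}                     # lookup table added to the plain recursion
    if rows == 1 or cols == 1:        # base case: a single row or column
        return 1
    key = (rows, cols)
    if key in memo:                   # check the cache before recursing
        return memo[key]
    result = grid_paths(rows - 1, cols, memo) + grid_paths(rows, cols - 1, memo)
    memo[key] = result                # store, then return
    return result
```

Without the memo dictionary this recursion is exponential; with it, each (rows, cols) pair is computed once.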
Tabulation (Bottom-Up)
- Create a DP table sized to hold all subproblems.
- Fill in the base cases.
- Iterate through the table in an order that ensures every subproblem you depend on is already solved.
- Read the final answer from the appropriate cell.
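These four steps can be illustrated with edit distance, another problem mentioned earlier (a minimal sketch; the name `edit_distance` is illustrative):

```python
def edit_distance(a, b):
    """Levenshtein distance, bottom-up. dp[i][j] = edits needed to turn
    the first i characters of a into the first j characters of b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # step 1: size the table
    for i in range(m + 1):
        dp[i][0] = i                             # step 2: delete all of a's prefix
    for j in range(n + 1):
        dp[0][j] = j                             # step 2: insert all of b's prefix
    for i in range(1, m + 1):                    # step 3: every dependency (up,
        for j in range(1, n + 1):                # left, diagonal) is already filled
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # substitute
    return dp[m][n]                              # step 4: read the answer
```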
Space Optimization
You don't always need the full table. If the recurrence for row i only depends on row i-1, you can keep just two rows (or even one row filled in reverse). For example, the 0/1 Knapsack recurrence only looks at the previous row, so you can reduce space from O(nW) to O(W). Similarly, LCS can be reduced from O(mn) to O(min(m, n)) by keeping only two rows.
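A sketch of the one-row Knapsack variant (assuming the same inputs as a standard 2D version would take):

```python
def knapsack_01_1d(weights, values, capacity):
    """Space-optimized 0/1 Knapsack: a single row of size W+1, filled
    right-to-left so each dp[w - wt] still holds the previous item's
    row when it is read (preventing an item from being taken twice)."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):  # reverse order is essential
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

Iterating w in increasing order instead would turn this into the unbounded knapsack, where each item may be taken any number of times.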
Complexity Analysis in Dynamic Programming
A useful rule of thumb: DP time complexity = (number of subproblems) × (time per subproblem).
Time Complexity:
| Problem | Subproblems | Time per Subproblem | Total |
|---|---|---|---|
| 0/1 Knapsack | O(nW) | O(1) | O(nW) |
| LCS | O(mn) | O(1) | O(mn) |
| Fibonacci (memoized) | O(n) | O(1) | O(n) |
Note that O(nW) for Knapsack is pseudo-polynomial, not truly polynomial, because W is a numeric value, not the size of the input. If W is encoded in binary, the input length is only O(n + log W), making the runtime exponential in the input size. This distinction matters for understanding why Knapsack is NP-hard despite having a DP solution.
Space Complexity:
- Knapsack: O(nW) for the full table, O(W) with space optimization
- LCS: O(mn) for the full table, O(min(m, n)) with space optimization
- Fibonacci: O(n) with memoization, O(1) if you only track the last two values