Dynamic programming (DP) is one of the most heavily tested algorithmic techniques in technical interviews and computer science courses. You're not just being asked to solve problems; you're being evaluated on your ability to recognize optimal substructure and overlapping subproblems, then choose the right approach to exploit them. Understanding these patterns means the difference between an exponential brute-force solution that times out and an efficient polynomial-time solution that passes all test cases.
The patterns here fall into distinct categories: foundational approaches (how you structure your DP solution), sequence problems (finding patterns within ordered data), optimization problems (maximizing or minimizing under constraints), and string/comparison problems (measuring similarity or transformation costs). Don't just memorize the recurrence relations. Know which pattern each problem represents and why that pattern applies. When you see a new problem, your first question should be: "Which DP pattern does this resemble?"
These two techniques form the backbone of every dynamic programming solution. The choice between them affects your code structure, space usage, and how you think about the problem.
Top-down memoization follows the problem's natural recursive definition. You write a recursive function, then add a cache (hash table or array) so that each subproblem is solved at most once. Before computing any subproblem, you check the cache first.
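As a minimal sketch (using the Fibonacci recurrence, which appears again later in this guide), top-down memoization might look like:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Top-down: recurse on the natural definition; the cache
    ensures each subproblem is computed at most once."""
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)
```

Here `lru_cache` plays the role of the hand-rolled hash table: it checks the cache before recursing, so the naive exponential recursion collapses to linear time.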
Bottom-up tabulation flips the direction. You start from the base cases and iteratively fill in a table, solving every subproblem in dependency order until you reach the answer.
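The same recurrence, tabulated bottom-up:

```python
def fib_tab(n: int) -> int:
    """Bottom-up: fill a table from the base cases upward,
    in dependency order. No recursion, so no stack-depth limit."""
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # each state depends on two earlier states
    return dp[n]
```

Since each state only reads the two previous entries, the table can be replaced by two rolling variables, which is the kind of space optimization tabulation enables.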
Compare: Top-down vs. Bottom-up: both achieve the same time complexity, but top-down only solves necessary subproblems (better for sparse dependency graphs), while bottom-up avoids recursion limits and enables space optimization. If an interviewer asks you to optimize space, bottom-up is usually your answer.
These patterns identify meaningful subsequences within ordered data. The key insight is that the optimal solution for a sequence ending at position i depends on optimal solutions for earlier positions.
The Fibonacci recurrence is the simplest DP pattern. Each state depends on a fixed number of previous states, which makes it a great starting point for recognizing DP structure.
The goal is to find the longest subsequence where each element is strictly greater than the previous one. The elements don't need to be adjacent.
dp[i] stores the LIS length ending at index i. For each i, you check all previous indices j < i and update: if nums[j] < nums[i], then dp[i] = max(dp[i], dp[j] + 1). This gives an O(n²) solution. The O(n log n) improvement maintains an auxiliary array where tails[k] holds the smallest possible tail element of any increasing subsequence of length k + 1 (using zero-based k). You use binary search to find where each new element fits, which is why this is sometimes called the patience sorting approach.

LCS finds the longest subsequence shared by two sequences. Unlike substrings, the characters don't need to be contiguous, but they must appear in the same relative order in both.
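A sketch of the O(n log n) patience-sorting approach, using Python's `bisect` for the binary search:

```python
from bisect import bisect_left

def lis_length(nums: list[int]) -> int:
    """tails[k] = smallest tail element of any strictly increasing
    subsequence of length k + 1 seen so far."""
    tails: list[int] = []
    for x in nums:
        i = bisect_left(tails, x)  # first position with tails[i] >= x
        if i == len(tails):
            tails.append(x)        # x extends the longest subsequence found
        else:
            tails[i] = x           # x is a smaller tail for length i + 1
    return len(tails)
```

Note that `tails` is not itself a valid subsequence at the end; only its length is meaningful (recovering the actual subsequence takes extra bookkeeping).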
dp[i][j] represents the LCS length for the first i characters of string A and the first j characters of string B.

Compare: LIS vs. LCS: LIS operates on a single sequence, finding internal order, while LCS compares two sequences for shared structure. Both have O(n²) basic solutions, but LIS has an O(n log n) optimization that LCS doesn't naturally support (because LCS has two independent dimensions to consider).
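The 2D table described above can be filled like this:

```python
def lcs_length(a: str, b: str) -> int:
    """dp[i][j] = LCS length of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a shared character
            else:
                # drop the last character of one string or the other
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```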
These problems require maximizing or minimizing a value while respecting resource limits. The critical distinction is whether items can be reused.
You have a set of items, each with a weight and a value, and a knapsack with a fixed weight capacity W. Each item is either included once or excluded (hence "0/1"). The goal is to maximize total value without exceeding the capacity.
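A sketch using the space-optimized 1D table; the reverse capacity iteration is what keeps each item to at most one use:

```python
def knapsack_01(weights: list[int], values: list[int], capacity: int) -> int:
    """dp[c] = best value achievable with capacity c.
    Iterating capacity in REVERSE means dp[c - w] still reflects
    the state *before* the current item, so it's used at most once."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)  # skip item vs. take it once
    return dp[capacity]
```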
Same setup as 0/1 Knapsack, but you can take unlimited copies of each item. This single change has a big effect on how you fill the table.
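The one-line difference in a sketch: the capacity loop now runs forward, so `dp[c - w]` may already include the current item:

```python
def knapsack_unbounded(weights: list[int], values: list[int], capacity: int) -> int:
    """Forward capacity iteration lets dp[c - w] already include
    the current item, which permits unlimited copies of it."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(w, capacity + 1):  # forward, unlike 0/1
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```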
Given a set of coin denominations and a target amount, find the minimum number of coins needed to make that amount.
dp[a] stores the fewest coins needed to make amount a. Initialize dp[0] = 0 and everything else to infinity (or some large sentinel value).

Compare: 0/1 Knapsack vs. Unbounded Knapsack: the only difference is item reusability, but this changes iteration order entirely. 0/1 requires reverse capacity iteration (or a 2D table) to prevent using an item multiple times; unbounded iterates forward. Interviewers love asking about this distinction.
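A sketch of the coin-change tabulation, using the sentinel initialization described above:

```python
def coin_change(coins: list[int], amount: int) -> int:
    """dp[a] = fewest coins to make amount a; inf marks unreachable.
    Returns -1 if the amount cannot be formed."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # use coin c as the last coin
    return dp[amount] if dp[amount] != INF else -1
```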
These patterns measure the cost or possibility of transforming one structure into another. They typically build 2D tables tracking transformation states.
Edit distance (also called Levenshtein distance) measures the minimum number of single-character operations needed to transform one string into another. The three allowed operations are insertion, deletion, and substitution, each with cost 1 (though variants can assign different weights).
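A standard tabulation sketch, assuming unit costs for all three operations:

```python
def edit_distance(src: str, tgt: str) -> int:
    """dp[i][j] = min edits to turn src[:i] into tgt[:j]."""
    m, n = len(src), len(tgt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of src[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of tgt[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if src[i - 1] == tgt[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # characters match, no edit needed
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # deletion
                                   dp[i][j - 1],      # insertion
                                   dp[i - 1][j - 1])  # substitution
    return dp[m][n]
```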
dp[i][j] represents the minimum edits to transform the first i characters of the source into the first j characters of the target.

Given a chain of matrices to multiply, the goal is to find the parenthesization that minimizes the total number of scalar multiplications. The product is the same regardless of order, but the computational cost can vary dramatically.
dp[i][j] represents the minimum cost to multiply matrices from index i to j.

Compare: Edit Distance vs. LCS: both use 2D tables on two strings, but edit distance tracks transformation cost while LCS tracks shared structure. You can actually derive one from the other: edit_distance(A, B) ≤ |A| + |B| − 2·LCS(A, B). (This becomes an equality when substitution costs 2 or when only insertions and deletions are allowed.)
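The interval-DP recurrence for matrix chain multiplication can be sketched as follows, with the usual convention that matrix i has dimensions dims[i] × dims[i+1]:

```python
def matrix_chain_cost(dims: list[int]) -> int:
    """dp[i][j] = min scalar multiplications to compute the
    product of matrices i..j (inclusive)."""
    n = len(dims) - 1  # number of matrices in the chain
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):       # solve shorter chains first
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = min(
                # split into (i..k) and (k+1..j), then multiply the results:
                dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return dp[0][n - 1]
```

For example, with a 10×30, a 30×5, and a 5×60 matrix, multiplying the first two matrices first costs 10·30·5 + 10·5·60 = 4500 scalar multiplications, versus 27000 the other way.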
| Concept | Best Examples |
|---|---|
| Foundational Approaches | Top-down Memoization, Bottom-up Tabulation |
| Linear Recurrence | Fibonacci Sequence Pattern |
| Subsequence Finding | Longest Increasing Subsequence, Longest Common Subsequence |
| Bounded Resource Optimization | 0/1 Knapsack Problem |
| Unbounded Resource Optimization | Unbounded Knapsack, Coin Change Problem |
| String Transformation | Edit Distance |
| Interval DP | Matrix Chain Multiplication |
| 2D Table Patterns | LCS, Edit Distance, 0/1 Knapsack |
Pattern Recognition: Given a problem asking for the minimum cost to transform sequence A into sequence B using insertions, deletions, and swaps, which DP pattern applies and what would your table dimensions be?
Compare and Contrast: What is the key difference between 0/1 Knapsack and Unbounded Knapsack in terms of table iteration order, and why does this matter?
Optimization Insight: The Fibonacci sequence can be solved in O(1) space. Which other problems in this guide can have their space complexity reduced from O(n) or O(n²) to O(1) or O(n), and how?
Complexity Analysis: Both LIS and LCS have O(n²) basic solutions. Why can LIS be optimized to O(n log n) while LCS cannot easily achieve the same improvement?
Application Mapping: If you're building a spell-checker that suggests corrections for misspelled words, which DP pattern would you use, and how would you modify it to also return the actual corrections (not just the distance)?