๐Ÿ”Data Structures

Dynamic Programming Patterns


Why This Matters

Dynamic programming (DP) is one of the most heavily tested algorithmic techniques in technical interviews and computer science courses. You're not just being asked to solve problems; you're being evaluated on your ability to recognize optimal substructure and overlapping subproblems, then choose the right approach to exploit them. Understanding these patterns means the difference between an O(2^n) brute-force solution that times out and an O(n^2) or O(n) solution that passes all test cases.

The patterns here fall into distinct categories: foundational approaches (how you structure your DP solution), sequence problems (finding patterns within ordered data), optimization problems (maximizing or minimizing under constraints), and string/comparison problems (measuring similarity or transformation costs). Don't just memorize the recurrence relations. Know which pattern each problem represents and why that pattern applies. When you see a new problem, your first question should be: "Which DP pattern does this resemble?"


Foundational Approaches

These two techniques form the backbone of every dynamic programming solution. The choice between them affects your code structure, space usage, and how you think about the problem.

Top-down Memoization

Top-down memoization follows the problem's natural recursive definition. You write a recursive function, then add a cache (hash table or array) so that each subproblem is solved at most once. Before computing any subproblem, you check the cache first.

  • Reduces exponential to polynomial time for problems with overlapping subproblems
  • Easier to implement when the recurrence relation is complex, since you're just translating the mathematical definition directly into code
  • Only solves subproblems that are actually needed, which helps when the dependency graph is sparse
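To make this concrete, here is a minimal Python sketch of top-down memoization, using the Fibonacci recurrence covered later in this guide. The function name is illustrative; `functools.lru_cache` serves as the cache, so each subproblem is computed at most once:

```python
from functools import lru_cache

def fib(n: int) -> int:
    """Top-down Fibonacci: write the natural recursion, then cache it."""
    @lru_cache(maxsize=None)  # the cache turns O(2^n) recursion into O(n)
    def solve(k: int) -> int:
        if k < 2:             # base cases: F(0) = 0, F(1) = 1
            return k
        return solve(k - 1) + solve(k - 2)
    return solve(n)
```

Note that the cache check happens implicitly before every recursive call, which is exactly the "check the cache first" step described above.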

Bottom-up Tabulation

Bottom-up tabulation flips the direction. You start from the base cases and iteratively fill in a table, solving every subproblem in dependency order until you reach the answer.

  • No recursion overhead, which eliminates stack space and function call costs. For large inputs, this avoids stack overflow issues entirely.
  • Space optimization is often possible since you can reduce from O(n^2) to O(n) by only keeping the rows or columns currently needed
  • Requires you to figure out the correct fill order yourself, which can be trickier for complex dependency structures
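For comparison, here is the same Fibonacci computation done bottom-up. This is a sketch with an illustrative function name; the table is filled from the base cases upward, in dependency order:

```python
def fib_table(n: int) -> int:
    """Bottom-up Fibonacci: fill a table from the base cases forward."""
    if n < 2:
        return n
    dp = [0] * (n + 1)            # dp[k] holds F(k)
    dp[1] = 1                     # base cases: dp[0] = 0, dp[1] = 1
    for k in range(2, n + 1):     # each entry depends only on earlier entries
        dp[k] = dp[k - 1] + dp[k - 2]
    return dp[n]
```

No recursion is involved, so there is no call-stack depth to worry about for large n.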

Compare: Top-down vs. Bottom-up: both achieve the same time complexity, but top-down only solves necessary subproblems (better for sparse dependency graphs) while bottom-up avoids recursion limits and enables space optimization. If an interview asks you to optimize space, bottom-up is usually your answer.


Sequence Problems

These patterns identify meaningful subsequences within ordered data. The key insight is that the optimal solution for a sequence ending at position i depends on optimal solutions for earlier positions.

Fibonacci Sequence Pattern

The Fibonacci recurrence F(n) = F(n-1) + F(n-2) is the simplest DP pattern. Each state depends on a fixed number of previous states, which makes it a great starting point for recognizing DP structure.

  • Constant space achievable by tracking only the last two values instead of the entire table
  • This pattern generalizes: any time you can express the answer at step n purely in terms of a small, fixed number of earlier steps, you're looking at a linear recurrence. Climbing stairs (1 or 2 steps at a time) is the classic variant.
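The climbing-stairs variant also shows the constant-space trick in action. A minimal sketch (function name is illustrative), tracking only the last two values instead of a full table:

```python
def climb_stairs(n: int) -> int:
    """Ways to climb n stairs taking 1 or 2 steps at a time.

    Same recurrence shape as Fibonacci: ways(n) = ways(n-1) + ways(n-2).
    """
    prev, curr = 1, 1                         # ways(0) = 1, ways(1) = 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr        # keep only the last two states
    return curr
```

Because each state depends on exactly two earlier states, O(1) space suffices.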

Longest Increasing Subsequence (LIS)

The goal is to find the longest subsequence where each element is strictly greater than the previous one. The elements don't need to be adjacent.

  • O(n^2) basic DP: dp[i] stores the LIS length ending at index i. For each i, you check all previous indices j < i and update: if arr[j] < arr[i], then dp[i] = max(dp[i], dp[j] + 1).
  • O(n log n) optimization uses a "tails" array where tails[k] holds the smallest possible tail element of any increasing subsequence of length k+1. You use binary search to find where each new element fits, which is why this is sometimes called the patience sorting approach.
  • Any problem asking for "longest subsequence with property X" likely follows this structure.
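The O(n log n) approach is short enough to sketch in full. This assumes strictly increasing subsequences (hence `bisect_left`); the function name is illustrative:

```python
from bisect import bisect_left

def lis_length(arr: list[int]) -> int:
    """Length of the longest strictly increasing subsequence.

    tails[k] = smallest possible tail of any increasing subsequence
    of length k+1 seen so far.
    """
    tails: list[int] = []
    for x in arr:
        i = bisect_left(tails, x)   # first index with tails[i] >= x
        if i == len(tails):
            tails.append(x)         # x extends the longest subsequence
        else:
            tails[i] = x            # x is a smaller tail for length i+1
    return len(tails)
```

Each element costs one binary search, giving the O(n log n) total.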

Longest Common Subsequence (LCS)

LCS finds the longest subsequence shared by two sequences. Unlike substrings, the characters don't need to be contiguous, but they must appear in the same relative order in both.

  • 2D table where dp[i][j] represents the LCS length for the first i characters of string A and the first j characters of string B
  • Recurrence based on character match: if A[i] = B[j], then dp[i][j] = dp[i-1][j-1] + 1. Otherwise, dp[i][j] = max(dp[i-1][j], dp[i][j-1]). The "otherwise" case means you try skipping a character from either string and take whichever gives a longer result.
  • Shows up in diff tools, version control systems, and DNA sequence alignment in bioinformatics.
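The 2D table translates directly into code. A minimal sketch (function name is illustrative), with an extra row and column of zeros for the empty-prefix base cases:

```python
def lcs_length(a: str, b: str) -> int:
    """LCS length of strings a and b via the standard 2D table."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]; row 0 / column 0 are base cases
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:                    # characters match
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                                        # skip from either string
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Note the off-by-one convention: dp indices count prefix lengths, so dp[i][j] compares a[i-1] with b[j-1].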

Compare: LIS vs. LCS: LIS operates on a single sequence finding internal order, while LCS compares two sequences for shared structure. Both use O(n^2) basic solutions, but LIS has an O(n log n) optimization that LCS doesn't naturally support (because LCS has two independent dimensions to consider).


Optimization Under Constraints

These problems require maximizing or minimizing a value while respecting resource limits. The critical distinction is whether items can be reused.

0/1 Knapsack Problem

You have a set of items, each with a weight and a value, and a knapsack with a fixed weight capacity W. Each item is either included once or excluded (hence "0/1"). The goal is to maximize total value without exceeding the capacity.

  • 2D state space of (item index, remaining capacity): dp[i][w] = best value using the first i items with capacity w
  • Recurrence: dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]). The first term skips item i; the second includes it.
  • Space reducible to O(W) by using a 1D array and iterating capacity in reverse. Reverse iteration is critical here because it prevents you from accidentally using the same item twice in one pass.
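Here is a minimal sketch of the space-optimized O(W) version (function name is illustrative). The reverse capacity loop is the part interviewers probe:

```python
def knapsack_01(weights: list[int], values: list[int], capacity: int) -> int:
    """0/1 knapsack with a 1D table: dp[w] = best value using capacity w."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Iterate capacity in REVERSE so dp[w - wt] still reflects the
        # previous item row, i.e. each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

If you flipped the inner loop to run forward, dp[w - wt] could already include the current item, silently turning this into the unbounded variant.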

Unbounded Knapsack Problem

Same setup as 0/1 Knapsack, but you can take unlimited copies of each item. This single change has a big effect on how you fill the table.

  • 1D table sufficient: dp[w] = max(dp[w], dp[w - weight[i]] + value[i])
  • Because items are reusable, you iterate capacity forward (opposite of 0/1). Forward iteration means earlier updates within the same pass can feed into later ones, which is exactly the "reuse" behavior you want.
  • The coin change combinations problem is a variant where all "values" equal 1 and you're counting the number of ways to reach a target.
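The unbounded version differs from the 0/1 sketch by a single loop direction. A minimal sketch (function name is illustrative):

```python
def knapsack_unbounded(weights: list[int], values: list[int], capacity: int) -> int:
    """Unbounded knapsack: unlimited copies of each item allowed."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Iterate capacity FORWARD so dp[w - wt] may already include this
        # item, which is exactly the reuse behavior we want.
        for w in range(wt, capacity + 1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

Putting this next to the 0/1 version makes the iteration-order distinction easy to remember: reverse forbids reuse, forward permits it.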

Coin Change Problem

Given a set of coin denominations and a target amount, find the minimum number of coins needed to make that amount.

  • dp[a] stores the fewest coins needed to make amount a. Initialize dp[0] = 0 and everything else to infinity (or some large sentinel value).
  • Recurrence: dp[a] = min(dp[a], dp[a - coin] + 1) for each valid coin denomination where a - coin >= 0
  • This is a clean demonstration of optimal substructure: if you know the best way to make amount a - coin, adding one coin gives you a candidate solution for amount a.
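A minimal sketch of the minimum-coins table described above (function name is illustrative), returning -1 when the amount is unreachable:

```python
def min_coins(coins: list[int], amount: int) -> int:
    """Fewest coins summing to amount; -1 if no combination works."""
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[a] = fewest coins to make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # extend the best solution for a - c
    return dp[amount] if dp[amount] != INF else -1
```

The infinity sentinel lets unreachable amounts propagate naturally without special-casing them inside the loop.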

Compare: 0/1 Knapsack vs. Unbounded Knapsack: the only difference is item reusability, but this changes iteration order entirely. 0/1 requires reverse capacity iteration (or a 2D table) to prevent using an item multiple times; unbounded iterates forward. Interviewers love asking about this distinction.


String Transformation Problems

These patterns measure the cost or possibility of transforming one structure into another. They typically build 2D tables tracking transformation states.

Edit Distance

Edit distance (also called Levenshtein distance) measures the minimum number of single-character operations needed to transform one string into another. The three allowed operations are insertion, deletion, and substitution, each with cost 1 (though variants can assign different weights).

  • 2D table dp[i][j] represents the minimum edits to transform the first i characters of the source into the first j characters of the target
  • Recurrence: if the characters match (source[i] = target[j]), then dp[i][j] = dp[i-1][j-1] (no edit needed). Otherwise, take the minimum of three options: dp[i-1][j] + 1 (delete), dp[i][j-1] + 1 (insert), dp[i-1][j-1] + 1 (substitute).
  • Applications include spell checkers, DNA alignment, and plagiarism detection.
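The recurrence above maps directly onto a 2D table. A minimal sketch (function name is illustrative), where the first row and column encode pure insertions and deletions:

```python
def edit_distance(src: str, tgt: str) -> int:
    """Levenshtein distance between src and tgt (unit costs)."""
    m, n = len(src), len(tgt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i               # delete all i source characters
    for j in range(n + 1):
        dp[0][j] = j               # insert all j target characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if src[i - 1] == tgt[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]          # match: no edit
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # substitute
    return dp[m][n]
```

Variant costs (e.g. substitution = 2) only change the constants added in the `min`, not the table structure.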

Matrix Chain Multiplication

Given a chain of matrices to multiply, the goal is to find the parenthesization that minimizes the total number of scalar multiplications. The product is the same regardless of order, but the computational cost can vary dramatically.

  • This is an interval DP pattern: dp[i][j] represents the minimum cost to multiply matrices from index i to j
  • You try every possible split point k between i and j, computing the cost of multiplying the left half, the right half, and combining them: dp[i][j] = min over i <= k < j of (dp[i][k] + dp[k+1][j] + dims[i] * dims[k+1] * dims[j+1])
  • O(n^3) complexity from trying all split points across all interval lengths. This pattern shows that DP isn't always about sequences; sometimes it's about intervals.
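The interval structure is clearest in code: the outer loop runs over interval lengths, so every subinterval is solved before any interval that contains it. A minimal sketch (function name is illustrative); `dims` uses the standard convention that matrix i has dimensions dims[i] x dims[i+1]:

```python
def matrix_chain_cost(dims: list[int]) -> int:
    """Min scalar multiplications to multiply the chain described by dims.

    n matrices require n+1 entries in dims: matrix i is dims[i] x dims[i+1].
    """
    n = len(dims) - 1
    # dp[i][j] = min cost to compute the product of matrices i..j
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):            # interval length, smallest first
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)          # try every split point
            )
    return dp[0][n - 1]
```

Three nested loops (length, start, split point) are where the O(n^3) bound comes from.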

Compare: Edit Distance vs. LCS: both use 2D tables on two strings, but edit distance tracks transformation cost while LCS tracks shared structure. You can actually derive one from the other: edit distance <= |A| + |B| - 2 * LCS(A, B). (This becomes an equality when substitution costs 2 or when only insertions and deletions are allowed.)


Quick Reference Table

  • Foundational Approaches: Top-down Memoization, Bottom-up Tabulation
  • Linear Recurrence: Fibonacci Sequence Pattern
  • Subsequence Finding: Longest Increasing Subsequence, Longest Common Subsequence
  • Bounded Resource Optimization: 0/1 Knapsack Problem
  • Unbounded Resource Optimization: Unbounded Knapsack, Coin Change Problem
  • String Transformation: Edit Distance
  • Interval DP: Matrix Chain Multiplication
  • 2D Table Patterns: LCS, Edit Distance, 0/1 Knapsack

Self-Check Questions

  1. Pattern Recognition: Given a problem asking for the minimum cost to transform sequence A into sequence B using insertions, deletions, and substitutions, which DP pattern applies and what would your table dimensions be?

  2. Compare and Contrast: What is the key difference between 0/1 Knapsack and Unbounded Knapsack in terms of table iteration order, and why does this matter?

  3. Optimization Insight: The Fibonacci sequence can be solved in O(1) space. Which other problems in this guide can have their space complexity reduced from O(n^2) or O(n) down to O(n) or O(1), and how?

  4. Complexity Analysis: Both LIS and LCS have O(n^2) basic solutions. Why can LIS be optimized to O(n log n) while LCS cannot easily achieve the same improvement?

  5. Application Mapping: If you're building a spell-checker that suggests corrections for misspelled words, which DP pattern would you use, and how would you modify it to also return the actual corrections (not just the distance)?