🧩 Intro to Algorithms

Dynamic Programming Problems


Why This Matters

Dynamic programming (DP) is one of the most heavily tested algorithm design paradigms in intro algorithms courses. You're being tested on your ability to recognize when a problem exhibits optimal substructure and overlapping subproblems, the two hallmarks that signal DP is the right approach.

Understanding these classic DP problems isn't about memorizing solutions. It's about recognizing problem patterns. When you see a new problem, you should ask: "Does this look like a knapsack variant? A sequence alignment problem? An optimization over intervals?" Each problem below represents a template that generalizes to dozens of related challenges. Don't just memorize the recurrence relations. Know why each problem structure demands dynamic programming and how to spot similar problems on sight.


Sequence-Based Problems

These problems operate on one or two sequences (strings, arrays) and build solutions by examining prefixes or suffixes. The core insight: the optimal solution for a sequence depends on optimal solutions for shorter subsequences.

Fibonacci Sequence

The Fibonacci sequence is the canonical DP example because it makes overlapping subproblems obvious.

  • Recurrence: F(n) = F(n-1) + F(n-2) with base cases F(0) = 0, F(1) = 1
  • Why DP matters here: Naive recursion recomputes the same values over and over, ballooning to O(2^n) time. Storing results (either via memoization or a bottom-up table) cuts this to O(n) time and O(n) space. You can even reduce space to O(1) since you only need the two most recent values.
  • Foundational pattern: Whenever you see a recursive solution doing redundant work, that's your signal to apply DP.
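To make the O(1)-space point concrete, here's a minimal sketch in Python (function name and style are ours, not from the guide), keeping only the two most recent values:

```python
def fib(n):
    """Bottom-up Fibonacci in O(n) time and O(1) space."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # slide the window forward
    return curr
```

The same recurrence written naively as `fib(n-1) + fib(n-2)` recomputes subproblems exponentially often; the loop visits each subproblem exactly once.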

Longest Increasing Subsequence

Given an array, find the longest subsequence where each element is strictly greater than the previous. The elements do not need to be contiguous.

  • DP approach: Build an array where dp[i] stores the length of the longest increasing subsequence that ends at index i. For each i, check all earlier indices j < i: if a[j] < a[i], then dp[i] = max(dp[i], dp[j] + 1). This gives an O(n^2) solution.
  • Faster approach: There's an O(n log n) version using binary search with a "tails" array, but the O(n^2) DP is what you'll typically need to know for this course.
  • Applications: Trend detection in time series data, patience sorting, and as a subroutine in more complex sequence problems.
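The O(n^2) recurrence above can be sketched directly in Python (a self-contained illustration, not code from the guide):

```python
def lis_length(a):
    """dp[i] = length of the longest strictly increasing
    subsequence ending at index i. O(n^2) time."""
    if not a:
        return 0
    dp = [1] * len(a)  # every element alone is a subsequence of length 1
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:  # a[i] can extend the subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)  # the LIS can end at any index
```

Note the final `max(dp)`: unlike some DP problems, the answer isn't necessarily in the last cell, because the LIS can end anywhere.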

Longest Common Subsequence

Given two sequences, find the longest subsequence present in both. Unlike substrings, the characters need not be adjacent.

  • DP table: dp[i][j] stores the LCS length for the first i characters of sequence A and the first j characters of sequence B. If the characters match (A[i] = B[j]), take dp[i-1][j-1] + 1. Otherwise, take max(dp[i-1][j], dp[i][j-1]). Time and space are both O(mn) where m and n are the sequence lengths.
  • Real-world uses: Diff utilities in version control, DNA sequence comparison, plagiarism detection.
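A minimal Python sketch of the table fill described above (names are ours; the extra row and column of zeros handle the empty-prefix base cases):

```python
def lcs_length(a, b):
    """dp[i][j] = LCS length of a[:i] and b[:j]. O(mn) time and space."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:          # characters match
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                              # drop one char from either side
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```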

Compare: LIS vs. LCS: Both find non-contiguous subsequences, but LIS operates on one sequence with an ordering constraint, while LCS operates on two sequences finding shared elements. If asked to find similarities between two strings, think LCS. If asked about monotonic patterns in one sequence, think LIS.


String Transformation Problems

These problems measure the "distance" or "cost" to transform one string into another. The DP table typically has dimensions matching the two string lengths, with each cell representing the cost for corresponding prefixes.

Edit Distance

Edit distance (also called Levenshtein distance) is the minimum number of single-character insertions, deletions, or substitutions needed to transform string A into string B.

  • Recurrence: dp[i][j] = min(dp[i-1][j] + 1, dp[i][j-1] + 1, dp[i-1][j-1] + cost) where cost is 0 if A[i] = B[j], and 1 otherwise. The three terms correspond to a deletion, an insertion, and a substitution (or match), respectively.
  • Base cases: dp[i][0] = i (delete all of A) and dp[0][j] = j (insert all of B).
  • Applications: Spell checkers, DNA mutation analysis, fuzzy string matching in search engines.
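The recurrence and base cases translate into Python along these lines (a sketch, with the three options commented to match the bullets above):

```python
def edit_distance(a, b):
    """Minimum insertions, deletions, and substitutions
    to turn string a into string b. O(mn) time and space."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[m][n]
```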

Compare: Edit Distance vs. LCS: Both use similar 2D DP tables on string pairs, but LCS maximizes similarity while Edit Distance minimizes transformation cost. Edit Distance is the go-to when you need a numeric "difference score" between strings.


Optimization with Constraints

These problems require maximizing or minimizing an objective while respecting resource constraints. The key pattern: build optimal solutions by considering whether to include each item.

Knapsack Problem

You have n items, each with a weight w_i and value v_i, and a knapsack with capacity W. Pick items to maximize total value without exceeding the weight limit. In the 0/1 variant, each item is either taken or left; you can't split items.

  • Recurrence: dp[i][w] = max(dp[i-1][w], dp[i-1][w - w_i] + v_i) if w_i <= w, otherwise dp[i][w] = dp[i-1][w]. The first option skips item i; the second takes it.
  • Complexity: O(nW), which is pseudo-polynomial. It looks polynomial, but W is a numeric value, not the input size. If W is exponentially large relative to the number of bits used to represent it, this blows up.
  • Fractional variant: When items can be divided, a greedy approach (sort by value-to-weight ratio, take greedily) works in O(n log n). No DP needed.
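The 0/1 recurrence maps directly onto a 2D table in Python (a sketch mirroring the bullets above; the row/column names are ours):

```python
def knapsack(weights, values, W):
    """0/1 knapsack: dp[i][w] = best value using the first i items
    within capacity w. O(nW) time, pseudo-polynomial."""
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                       # skip item i
            if wi <= w:                                   # take item i if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - wi] + vi)
    return dp[n][W]
```

A common space optimization collapses the table to a single 1D row by iterating capacities downward, so each item is still used at most once.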

Coin Change Problem

Given a set of coin denominations and a target amount A, find the fewest coins needed to make exactly A. This is essentially an unbounded knapsack since you can use each denomination multiple times.

  • Recurrence: dp[a] = min(dp[a - c_i] + 1) over all coin values c_i <= a, with base case dp[0] = 0. Initialize all other entries to infinity (meaning "not yet reachable").
  • Time complexity: O(A · k) where k is the number of denominations.
  • Practical relevance: Currency systems, making change algorithms, resource allocation with discrete units.
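Sketched in Python (our naming; returning -1 when the amount is unreachable is one common convention, not part of the recurrence itself):

```python
import math

def min_coins(coins, amount):
    """Fewest coins summing exactly to amount; -1 if impossible.
    O(amount * len(coins)) time."""
    INF = math.inf
    dp = [0] + [INF] * amount  # dp[0] = 0; everything else starts unreachable
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```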

Subset Sum Problem

Given a set of integers and a target T, determine whether any subset sums exactly to T. This is a decision problem: it returns true or false, not an optimal value.

  • DP table: dp[i][s] is true if sum s is achievable using the first i elements. The recurrence is dp[i][s] = dp[i-1][s] OR dp[i-1][s - a_i] (either skip or include element i). Runs in O(nT) time.
  • NP-completeness: Subset Sum is weakly NP-complete. The O(nT) solution is pseudo-polynomial, just like 0/1 Knapsack.
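As an illustration (assuming non-negative integers), here's a Python sketch that collapses the table's i dimension into a single boolean row, iterating sums downward so each element is used at most once:

```python
def subset_sum(nums, target):
    """True if some subset of nums sums exactly to target. O(nT) time."""
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in nums:
        # iterate downward so x isn't reused within the same pass
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]
```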

Compare: Knapsack vs. Subset Sum vs. Coin Change: All involve selecting items to hit a target. Knapsack maximizes value under a weight constraint (two attributes per item: weight and value). Subset Sum asks if an exact target is achievable (one attribute per item). Coin Change minimizes count to reach an exact target (with unlimited copies of each item).


Interval and Structure Optimization

These problems optimize over ways to partition or combine elements. The insight: optimal solutions for larger intervals depend on optimal solutions for smaller sub-intervals.

Matrix Chain Multiplication

Given matrices A_1, A_2, ..., A_n to multiply, find the parenthesization that minimizes the total number of scalar multiplications. The matrix dimensions determine the cost of each pairwise multiplication.

  • Interval DP pattern: dp[i][j] = minimum cost to multiply matrices i through j. You try every possible split point k where i <= k < j, computing dp[i][j] = min over k of (dp[i][k] + dp[k+1][j] + p_{i-1} · p_k · p_j), where p is the dimension array.
  • Why it's O(n^3): There are O(n^2) subproblems (all pairs i, j), and each considers up to O(n) split points.
  • Brute force comparison: The number of possible parenthesizations is the Catalan number, which grows exponentially. DP collapses this to O(n^3).
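The interval-DP pattern can be sketched in Python as follows (a 1-indexed illustration matching the recurrence above; matrix A_i has shape p[i-1] × p[i]):

```python
def matrix_chain(p):
    """Minimum scalar multiplications to compute A_1 * ... * A_n,
    where A_i is p[i-1] x p[i]. O(n^3) time, O(n^2) space."""
    n = len(p) - 1  # number of matrices
    dp = [[0] * (n + 1) for _ in range(n + 1)]  # dp[i][i] = 0: one matrix is free
    for length in range(2, n + 1):              # solve shorter chains first
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)            # try every split point
            )
    return dp[1][n]
```

The outer loop over chain length is what guarantees that every sub-interval is already solved before it's needed.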

Rod Cutting Problem

Given a rod of length n and a price table where length i sells for p_i, find the cuts that maximize total revenue.

  • Recurrence: dp[j] = max over 1 <= i <= j of (p_i + dp[j - i]). You try every possible first cut of length i, then solve the remaining subproblem dp[j - i].
  • Complexity: O(n^2) since for each length j you consider up to j cut options.
  • Classic intro example: Rod Cutting is frequently the first DP problem taught because the recurrence is intuitive and easy to visualize.
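A compact Python sketch of the recurrence (our naming; prices[i-1] plays the role of p_i):

```python
def rod_cutting(prices):
    """Max revenue for a rod of length len(prices), where
    prices[i-1] is the price of a piece of length i. O(n^2) time."""
    n = len(prices)
    dp = [0] * (n + 1)  # dp[0] = 0: a zero-length rod earns nothing
    for j in range(1, n + 1):
        # try every length i for the first piece, then recurse on the rest
        dp[j] = max(prices[i - 1] + dp[j - i] for i in range(1, j + 1))
    return dp[n]
```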

Compare: Matrix Chain Multiplication vs. Rod Cutting: Both optimize over partitions, but they differ structurally. Matrix Chain considers all possible split points within an interval (two endpoints to track, giving O(n^2) subproblems and O(n^3) total). Rod Cutting makes one cut from the end and solves the remainder (one endpoint, giving O(n) subproblems and O(n^2) total). Matrix Chain is the template for "interval DP" problems.


Graph-Based Dynamic Programming

These problems apply DP to graph structures, where optimal paths are built from optimal sub-paths.

Shortest Path Problems (Floyd-Warshall)

Floyd-Warshall computes the shortest path between every pair of vertices in a weighted graph.

  • DP formulation: dp[i][j][k] = shortest path from vertex i to vertex j using only vertices {1, ..., k} as intermediates. The recurrence is dp[i][j][k] = min(dp[i][j][k-1], dp[i][k][k-1] + dp[k][j][k-1]). In practice, you overwrite in place and drop the k dimension, using a single 2D table.
  • Complexity: O(V^3) time, O(V^2) space (with the in-place optimization).
  • Handles negative weights: Unlike Dijkstra's algorithm, Floyd-Warshall works with negative edge weights. It only fails if the graph contains a negative-weight cycle (which makes shortest paths undefined).
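The in-place 2D version mentioned above is only a few lines in Python (a sketch assuming an adjacency matrix with math.inf for missing edges and 0 on the diagonal):

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths on an n x n distance matrix,
    updated in place. O(V^3) time, O(V^2) space."""
    n = len(dist)
    for k in range(n):          # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # relax i -> j through k if that's shorter
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Note that k must be the outermost loop: each pass fixes one new allowed intermediate vertex, matching the dp[...][k] layering of the 3D formulation.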

Compare: Floyd-Warshall vs. Dijkstra: Floyd-Warshall computes all-pairs shortest paths in O(V^3) and handles negative weights. Dijkstra computes single-source shortest paths in O(E log V) but requires non-negative weights. If you need shortest paths from just one source and weights are non-negative, Dijkstra is faster. If you need all pairs or have negative weights, use Floyd-Warshall.


Quick Reference Table

  • Overlapping subproblems: Fibonacci, Rod Cutting, Coin Change
  • Sequence alignment: LCS, Edit Distance
  • Single-sequence optimization: LIS, Subset Sum
  • Constrained optimization: 0/1 Knapsack, Coin Change, Subset Sum
  • Interval DP: Matrix Chain Multiplication, Rod Cutting
  • Graph DP: Floyd-Warshall
  • Decision vs. optimization: Subset Sum (decision), Knapsack (optimization)
  • Pseudo-polynomial time: Knapsack, Subset Sum, Coin Change

Self-Check Questions

  1. Both the Knapsack Problem and Coin Change Problem involve selecting items to reach a target. What is the key structural difference that changes how you formulate the DP recurrence?

  2. You're given two strings and asked to find how "similar" they are. Which two problems from this guide could apply, and when would you choose one over the other?

  3. Matrix Chain Multiplication and Rod Cutting both optimize over partitions. Why does Matrix Chain require O(n^3) time while Rod Cutting only needs O(n^2)?

  4. How do the DP tables for Longest Common Subsequence and Edit Distance differ in what each cell represents, even though both operate on two strings?

  5. A problem asks whether you can partition an array into two subsets with equal sums. Which classic DP problem is this a direct application of, and what would be your target value?