
🧩 Intro to Algorithms

Dynamic Programming Problems


Why This Matters

Dynamic programming (DP) is one of the most heavily tested algorithm design paradigms in computer science courses—and for good reason. You're being tested on your ability to recognize when a problem exhibits optimal substructure and overlapping subproblems, the two hallmarks that signal DP is the right approach. These problems appear constantly in technical interviews, competitive programming, and real-world optimization scenarios from resource allocation to bioinformatics to natural language processing.

Understanding these classic DP problems isn't about memorizing solutions—it's about recognizing problem patterns. When you see a new problem, you should immediately ask: "Does this look like a knapsack variant? A sequence alignment problem? An optimization over intervals?" Each problem below represents a template that generalizes to dozens of related challenges. Don't just memorize the recurrence relations—know why each problem structure demands dynamic programming and how to identify similar problems on sight.


Sequence-Based Problems

These problems operate on one or two sequences (strings, arrays) and build solutions by examining prefixes or suffixes. The key insight is that the optimal solution for a sequence depends on optimal solutions for shorter subsequences.

Fibonacci Sequence

  • Recurrence relation: $F(n) = F(n-1) + F(n-2)$ with base cases $F(0) = 0$, $F(1) = 1$—the canonical example of overlapping subproblems
  • Memoization vs. tabulation—naive recursion runs in $O(2^n)$ time, but storing results reduces this to $O(n)$ time and space (see the sketch after this list)
  • Foundational pattern—demonstrates how exponential blowup from redundant computation can be eliminated through caching
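
To make the memoization vs. tabulation distinction concrete, here is a minimal Python sketch of both (the function names are illustrative):

```python
from functools import lru_cache

# Top-down memoization: each F(k) is computed once, then cached.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up tabulation: O(n) time, and O(1) space here because only
# the last two values are kept instead of the whole table.
def fib_tab(n: int) -> int:
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

assert fib_memo(10) == fib_tab(10) == 55
```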

Longest Increasing Subsequence

  • Problem structure—find the longest subsequence where each element is strictly greater than the previous; elements need not be contiguous
  • DP approach: maintain an array where $dp[i]$ = length of the LIS ending at index $i$, giving an $O(n^2)$ solution ($O(n \log n)$ with a binary search optimization)—see the sketch after this list
  • Applications—trend detection in time series data, patience sorting, and as a subroutine in more complex sequence problems
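
A minimal Python sketch of the quadratic approach (the function name is illustrative):

```python
def lis_length(nums: list[int]) -> int:
    """Length of the longest strictly increasing subsequence in O(n^2)."""
    if not nums:
        return 0
    # dp[i] = length of the LIS that ends exactly at index i.
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

assert lis_length([10, 9, 2, 5, 3, 7, 101, 18]) == 4  # e.g. 2, 3, 7, 18
```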

Longest Common Subsequence

  • Two-sequence alignment—find the longest subsequence present in both sequences; unlike substrings, characters need not be adjacent
  • DP table construction: $dp[i][j]$ stores the LCS length for the first $i$ characters of sequence A and the first $j$ characters of sequence B (sketched after this list)
  • Real-world uses—diff utilities in version control, DNA sequence comparison in bioinformatics, plagiarism detection
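
A minimal Python sketch of the table construction, with row and column 0 standing for the empty prefix:

```python
def lcs_length(a: str, b: str) -> int:
    """LCS length via the standard (len(a)+1) x (len(b)+1) table."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

assert lcs_length("ABCBDAB", "BDCABA") == 4  # e.g. "BCAB"
```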

Compare: Longest Increasing Subsequence vs. Longest Common Subsequence—both find non-contiguous subsequences, but LIS operates on one sequence with an ordering constraint while LCS operates on two sequences finding shared elements. If asked to find similarities between two strings, think LCS; if asked about monotonic patterns in one sequence, think LIS.


String Transformation Problems

These problems measure the "distance" or "cost" to transform one string into another. The DP table typically has dimensions matching the two string lengths, with each cell representing the cost for corresponding prefixes.

Edit Distance

  • Levenshtein distance—minimum number of insertions, deletions, or substitutions to transform string A into string B
  • Recurrence logic: $dp[i][j] = \min(dp[i-1][j] + 1,\; dp[i][j-1] + 1,\; dp[i-1][j-1] + \text{cost})$ where cost is 0 if the characters match, 1 otherwise (see the sketch after this list)
  • Ubiquitous applications—spell checkers, DNA mutation analysis, fuzzy string matching in search engines
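
A minimal Python sketch of the recurrence, with the standard base cases (transforming to or from an empty prefix costs one operation per character):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = cost to transform a[:i] into b[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j  # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

assert edit_distance("kitten", "sitting") == 3
```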

Compare: Edit Distance vs. Longest Common Subsequence—both use similar 2D DP tables on string pairs, but LCS maximizes similarity while Edit Distance minimizes transformation cost. Edit Distance is the go-to when you need a numeric "difference score" between strings.


Optimization with Constraints

These problems require maximizing or minimizing an objective while respecting resource constraints. The key pattern is building optimal solutions by considering whether to include each item.

Knapsack Problem

  • 0/1 Knapsack—select items with weights $w_i$ and values $v_i$ to maximize total value without exceeding capacity $W$; items cannot be split
  • DP formulation: $dp[i][w]$ = maximum value using the first $i$ items with capacity $w$, running in $O(nW)$ pseudo-polynomial time (sketched after this list)
  • Fractional variant—when items can be divided, a greedy approach (sort by value/weight ratio) works; no DP needed
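
A minimal Python sketch of 0/1 knapsack; this uses the common one-dimensional space optimization of the $dp[i][w]$ table, scanning capacities downward so each item is used at most once:

```python
def knapsack_01(weights: list[int], values: list[int], capacity: int) -> int:
    """Maximum value for 0/1 knapsack in O(nW) time and O(W) space."""
    # dp[w] = best value achievable with capacity w using items seen so far.
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Downward scan keeps this 0/1 (each item used at most once).
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

assert knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7) == 9  # take weights 3 and 4
```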

Coin Change Problem

  • Minimum coins—find the fewest coins from given denominations to make amount $A$; this is essentially unbounded knapsack
  • Recurrence: $dp[a] = \min(dp[a - c_i] + 1)$ over all coin values $c_i \leq a$, with $dp[0] = 0$ (see the sketch after this list)
  • Practical relevance—currency systems, making change algorithms, resource allocation with discrete units
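
A minimal Python sketch of the recurrence (the function name is illustrative; building amounts upward lets a coin be reused, which is what makes this the unbounded variant):

```python
def min_coins(coins: list[int], amount: int) -> int:
    """Fewest coins summing to amount; -1 if unreachable."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins that make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

assert min_coins([1, 2, 5], 11) == 3  # 5 + 5 + 1
```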

Subset Sum Problem

  • Decision problem—determine if any subset of a given set sums exactly to target $T$; returns a boolean, not an optimal value
  • DP table: $dp[i][s]$ = true if sum $s$ is achievable using the first $i$ elements; runs in $O(nT)$ time (sketched after this list)
  • NP-complete foundation—this problem is weakly NP-complete; many harder problems reduce to it
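
A minimal Python sketch, again using a one-dimensional version of the $dp[i][s]$ table:

```python
def subset_sum(nums: list[int], target: int) -> bool:
    """True if some subset of nums sums exactly to target, in O(nT) time."""
    # reachable[s] = True if sum s is achievable with the items seen so far.
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in nums:
        # Downward scan so each element is used at most once.
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

assert subset_sum([3, 34, 4, 12, 5, 2], 9) is True   # 4 + 5
assert subset_sum([3, 34, 4, 12, 5, 2], 30) is False
```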

Compare: Knapsack vs. Subset Sum vs. Coin Change—all involve selecting items to hit a target. Knapsack maximizes value under a weight constraint; Subset Sum asks if exact target is achievable; Coin Change minimizes count to reach exact target. Knapsack has two attributes per item (weight and value), while the others have one.


Interval and Structure Optimization

These problems optimize over ways to partition or combine elements. The insight is that optimal solutions for larger intervals depend on optimal solutions for smaller sub-intervals.

Matrix Chain Multiplication

  • Problem goal—find the optimal parenthesization to multiply matrices $A_1 \times A_2 \times \cdots \times A_n$ minimizing scalar multiplications
  • Interval DP pattern: $dp[i][j]$ = minimum cost to multiply matrices $i$ through $j$; try all split points $k$ between them (see the sketch after this list)
  • Complexity insight—brute force considers a Catalan number of parenthesizations (exponential); DP achieves $O(n^3)$
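
A minimal Python sketch of the interval DP, using the usual dimension-array convention where matrix $i$ (0-indexed) has shape `dims[i] x dims[i+1]`:

```python
def matrix_chain(dims: list[int]) -> int:
    """Minimum scalar multiplications for matrices with shapes
    dims[0] x dims[1], dims[1] x dims[2], ..., dims[n-1] x dims[n]."""
    n = len(dims) - 1  # number of matrices
    # dp[i][j] = min cost to multiply matrices i..j (inclusive).
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):           # interval length
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = float("inf")
            for k in range(i, j):            # split: (i..k)(k+1..j)
                cost = (dp[i][k] + dp[k + 1][j]
                        + dims[i] * dims[k + 1] * dims[j + 1])
                dp[i][j] = min(dp[i][j], cost)
    return dp[0][n - 1]

assert matrix_chain([10, 20, 30, 40]) == 18000  # ((A1 A2) A3)
```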

Rod Cutting Problem

  • Maximize profit—cut a rod of length $n$ into pieces where each length $i$ has price $p_i$; find cuts maximizing total revenue
  • Optimal substructure: $dp[n] = \max(p_i + dp[n-i])$ for all valid cut lengths $i$; the first cut determines the remaining subproblem (sketched after this list)
  • Classic DP example—frequently used to introduce the concept because the recurrence is intuitive and visual
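
A minimal Python sketch; the price table in the test is the classic textbook example for lengths 1 through 8:

```python
def rod_cutting(prices: list[int], n: int) -> int:
    """Maximum revenue for a rod of length n, where prices[i] is the
    price of a piece of length i + 1."""
    dp = [0] * (n + 1)  # dp[length] = best revenue for that rod length
    for length in range(1, n + 1):
        for cut in range(1, length + 1):  # length of the first piece
            dp[length] = max(dp[length], prices[cut - 1] + dp[length - cut])
    return dp[n]

assert rod_cutting([1, 5, 8, 9, 10, 17, 17, 20], 8) == 22  # cut into 2 + 6
```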

Compare: Matrix Chain Multiplication vs. Rod Cutting—both optimize over ways to partition a structure, but Matrix Chain considers all possible split points in an interval ($O(n^3)$) while Rod Cutting makes one cut at a time from the end ($O(n^2)$). Matrix Chain is the template for "interval DP" problems.


Graph-Based Dynamic Programming

These problems apply DP principles to graph structures, where optimal paths are built from optimal sub-paths.

Shortest Path Problems (Floyd-Warshall)

  • All-pairs shortest paths—compute shortest distances between every pair of vertices in $O(V^3)$ time using three nested loops
  • DP formulation: $dp[i][j][k]$ = shortest path from $i$ to $j$ using only vertices $\{1, \ldots, k\}$ as intermediates (see the sketch after this list)
  • Handles negative weights—unlike Dijkstra, Floyd-Warshall works with negative edge weights; fails only on negative cycles
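
A minimal Python sketch; as is standard, the $k$ dimension is updated in place, collapsing the 3-D table to a 2-D matrix:

```python
def floyd_warshall(dist: list[list[float]]) -> list[list[float]]:
    """All-pairs shortest paths. dist[i][j] = direct edge weight (inf if
    absent), dist[i][i] = 0. Mutates and returns the matrix."""
    n = len(dist)
    for k in range(n):  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
assert floyd_warshall(graph)[0][2] == 5  # 0 -> 1 -> 2 costs 3 + 2
```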

Compare: Floyd-Warshall vs. Dijkstra—Floyd-Warshall computes all-pairs shortest paths in $O(V^3)$ and handles negative edge weights; Dijkstra computes single-source shortest paths in $O(E \log V)$ but requires non-negative weights. Choose based on whether you need all pairs and whether negative weights exist.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Overlapping subproblems | Fibonacci, Rod Cutting, Coin Change |
| Sequence alignment | LCS, Edit Distance |
| Single-sequence optimization | LIS, Subset Sum |
| Constrained optimization | 0/1 Knapsack, Coin Change, Subset Sum |
| Interval DP | Matrix Chain Multiplication, Rod Cutting |
| Graph DP | Floyd-Warshall |
| Decision vs. optimization | Subset Sum (decision), Knapsack (optimization) |
| Pseudo-polynomial time | Knapsack, Subset Sum, Coin Change |

Self-Check Questions

  1. Both the Knapsack Problem and Coin Change Problem involve selecting items to reach a target—what is the key structural difference that changes how you formulate the DP recurrence?

  2. You're given two strings and asked to find how "similar" they are. Which two problems from this guide could apply, and when would you choose one over the other?

  3. Matrix Chain Multiplication and Rod Cutting both optimize over partitions. Why does Matrix Chain require $O(n^3)$ time while Rod Cutting only needs $O(n^2)$?

  4. Compare and contrast: How do the DP tables for Longest Common Subsequence and Edit Distance differ in what each cell represents, even though both operate on two strings?

  5. A problem asks whether you can partition an array into two subsets with equal sums. Which classic DP problem is this a direct application of, and what would be your target value?