
15.3 Dynamic Programming Fundamentals

Written by the Fiveable Content Team • Last updated August 2025

Dynamic programming solves complex optimization problems by breaking them into smaller subproblems and storing the results so you never solve the same subproblem twice. This technique is essential whenever a brute-force recursive approach would waste time recomputing the same values over and over. Two properties signal that DP applies: overlapping subproblems and optimal substructure.

Introduction to Dynamic Programming

Characteristics of Dynamic Programming

Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems. For example, the shortest path from A to C through B is the shortest path from A to B plus the shortest path from B to C. If a problem doesn't have this property, DP won't work.

Overlapping subproblems means a naive recursive solution would solve the same subproblems many times. The classic example is the Fibonacci sequence: computing fib(5) recursively calls fib(3) twice, fib(2) three times, and so on. Without storing results, the call tree explodes exponentially.

DP addresses this redundancy through two strategies:

  • Memoization (top-down): Write the solution recursively, but cache each result in a table (dictionary or array). Before making a recursive call, check the cache first. If the answer is already there, return it immediately.
  • Tabulation (bottom-up): Build a table iteratively, starting from the base cases and working up to the final answer. You fill entries in an order that guarantees every subproblem you need has already been computed.

Both approaches yield the same time complexity. Memoization is often easier to write because it mirrors the recursive structure. Tabulation avoids recursion overhead and makes space optimization easier.
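Both strategies can be illustrated on Fibonacci. The sketch below (function names are ours) shows a memoized version and a tabulated version side by side:

```python
from functools import lru_cache

# Top-down (memoization): cache each result so fib_memo(k)
# is computed at most once for any k.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the base cases upward,
# so every entry we need has already been computed.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both run in O(n) time; the memoized version keeps the familiar recursive shape, while the tabulated version makes the evaluation order explicit.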

Applications of Dynamic Programming

  • Optimization problems: resource allocation, scheduling, knapsack variants
  • Graph algorithms: shortest paths (Floyd-Warshall, Bellman-Ford), network flow
  • Sequence analysis: DNA sequence alignment, longest common subsequence (LCS), edit distance
  • Combinatorics: counting paths in a grid, binomial coefficients, Fibonacci numbers

The common thread is that all of these involve overlapping subproblems and optimal substructure.


Applying Dynamic Programming

Solutions for Classic Problems

Knapsack Problem (0/1)

You have n items, each with a weight and value, and a knapsack with capacity W. The goal is to maximize total value without exceeding the capacity.

  1. Define the subproblem: dp[i][w] = maximum value achievable using the first i items with remaining capacity w.
  2. Write the recurrence: For each item i, you either skip it or take it (if it fits): dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]). The first term is "skip item i"; the second is "take item i."
  3. Set base cases: dp[0][w] = 0 for all w (no items means no value), and dp[i][0] = 0 for all i (no capacity means no value).
  4. Fill the table row by row from i = 1 to n, column by column from w = 1 to W. The answer is dp[n][W].
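The four steps above translate directly into code. Here is a minimal Python sketch (parameter names are ours; weights and values are parallel lists, W is the capacity):

```python
def knapsack(weights, values, W):
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w.
    # Row 0 and column 0 stay 0: the base cases.
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            dp[i][w] = dp[i - 1][w]          # skip item i
            if weights[i - 1] <= w:          # take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]
```

Note the index shift: item i in the recurrence corresponds to weights[i - 1] in the 0-indexed lists.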

Longest Common Subsequence (LCS)

Given two strings of lengths m and n, find the length of their longest common subsequence.

  1. Define the subproblem: dp[i][j] = length of the LCS of the first i characters of str1 and the first j characters of str2.

  2. Write the recurrence:

    • If str1[i] == str2[j]: dp[i][j] = dp[i-1][j-1] + 1
    • Otherwise: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
  3. Set base cases: dp[i][0] = 0 and dp[0][j] = 0 (an empty string has no common subsequence with anything).

  4. Fill the table row by row. The answer is dp[m][n].
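A direct translation of this recurrence into Python might look like the following sketch (again with a one-position index shift, since dp is 1-indexed over string prefixes):

```python
def lcs_length(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j];
    # row 0 and column 0 stay 0 (empty-string base cases).
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```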


Optimization of Dynamic Programming

Memoization (Top-Down)

  1. Write the recursive solution as you normally would.
  2. Add a lookup table (hash map or array) initialized to a sentinel value (e.g., -1).
  3. At the start of each recursive call, check if the result is already in the table. If so, return it.
  4. Otherwise, compute the result, store it in the table, then return it.

This turns an exponential recursive solution into a polynomial one without changing the recursive logic.

Tabulation (Bottom-Up)

  1. Create a DP table sized to hold all subproblems.
  2. Fill in the base cases.
  3. Iterate through the table in an order that ensures every subproblem you depend on is already solved.
  4. Read the final answer from the appropriate cell.

Space Optimization

You don't always need the full table. If the recurrence for row i only depends on row i-1, you can keep just two rows (or even one row filled in reverse). For example, the 0/1 Knapsack recurrence only looks at the previous row, so you can reduce space from O(nW) to O(W). Similarly, LCS can be reduced from O(mn) to O(min(m, n)) by keeping only two rows.
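The one-row Knapsack version can be sketched like this: iterating capacities from high to low ensures that each dp[w - wt] we read still holds the previous item's value rather than one already updated in this pass.

```python
def knapsack_1d(weights, values, W):
    # Single row of the DP table; dp[w] = best value with capacity w.
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Reverse order: dp[w - wt] must still reflect the state
        # before this item was considered (0/1, not unbounded).
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```

Iterating forward instead would let an item be taken multiple times, which solves the unbounded knapsack rather than 0/1.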

Complexity Analysis in Dynamic Programming

A useful rule of thumb: DP time complexity = (number of subproblems) × (time per subproblem).

Time Complexity:

Problem                 Subproblems    Time per Subproblem    Total
0/1 Knapsack            n × W          O(1)                   O(nW)
LCS                     m × n          O(1)                   O(mn)
Fibonacci (memoized)    n              O(1)                   O(n)

Note that O(nW) for Knapsack is pseudo-polynomial, not truly polynomial, because W is a numeric value, not the size of the input. If W is encoded in binary, the input length is log W, making the runtime exponential in input size. This distinction matters for understanding why Knapsack is NP-hard despite having a DP solution.

Space Complexity:

  • Knapsack: O(nW) for the full table, O(W) with space optimization
  • LCS: O(mn) for the full table, O(min(m, n)) with space optimization
  • Fibonacci: O(n) with memoization, O(1) if you only track the last two values