Fiveable
🔁Data Structures Unit 1 Review

1.2 Algorithm Analysis and Big O Notation

Written by the Fiveable Content Team • Last updated August 2025

Algorithm analysis gives you a framework for evaluating how algorithms perform as input sizes grow. Instead of guessing which approach is faster, you can mathematically describe and compare their efficiency. This matters most when you're working with large datasets or time-sensitive applications where a poor algorithm choice can mean the difference between milliseconds and hours.

Algorithm Analysis Fundamentals

Importance of algorithm analysis

Not all algorithms that solve the same problem are equally good. Algorithm analysis lets you measure how an algorithm's running time and resource usage grow relative to input size, rather than just timing it on one machine with one dataset.

  • It provides a standardized way to compare algorithms, so you can make informed choices independent of hardware or language
  • It helps you identify bottlenecks and predict whether your code will still work well when the input doubles, or grows by a factor of 1000
  • For large datasets or real-time systems, picking an algorithm with better scaling behavior can be the difference between a usable program and one that never finishes

Concept of time complexity

Time complexity describes how many operations an algorithm performs as a function of input size n. You're not measuring wall-clock time (that depends on your machine). Instead, you're counting the fundamental steps the algorithm takes.

This count is expressed using asymptotic notation, which captures the growth rate while ignoring constant factors and lower-order terms. The most common time complexities you'll see, ranked from fastest to slowest growth:

  • Constant O(1): runtime doesn't change regardless of input size
  • Logarithmic O(log n): runtime grows slowly, even as input gets very large
  • Linear O(n): runtime grows directly in proportion to input size
  • Quadratic O(n^2): runtime grows with the square of input size, which gets expensive fast

To see why this matters: if you have an O(n) algorithm and an O(n^2) algorithm, they might feel similar at n = 10. But at n = 10,000, the linear algorithm does roughly 10,000 operations while the quadratic one does 100,000,000.
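The gap above is easy to verify by counting steps directly. This is a minimal sketch (the function names are my own) that tallies the basic operations a single loop versus a nested loop performs:

```python
# Sketch: count the basic operations of a linear (single-loop) algorithm
# versus a quadratic (nested-loop) algorithm for the same input size n.
def linear_ops(n):
    count = 0
    for _ in range(n):
        count += 1          # one step per element
    return count

def quadratic_ops(n):
    count = 0
    for _ in range(n):
        for _ in range(n):  # n steps for each of the n outer iterations
            count += 1
    return count

print(linear_ops(10_000))   # 10000
print(quadratic_ops(100))   # 10000 steps already at n = 100
```

Notice that the quadratic version needs only n = 100 to match the work the linear version does at n = 10,000.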


Big O Notation and Algorithm Comparison

Big O notation fundamentals

Big O notation describes the upper bound of an algorithm's growth rate. It tells you the worst-case scenario for how running time scales with input size.

The key rules for determining Big O:

  1. Count the dominant term. If your algorithm does 3n^2 + 5n + 100 operations, the n^2 term dominates as n grows large.
  2. Drop constants. O(3n^2) simplifies to O(n^2), because constants don't affect the growth rate.
  3. Drop lower-order terms. O(n^2 + n) simplifies to O(n^2), since n becomes negligible compared to n^2 for large n.
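You can check these rules numerically. A quick sketch (using the 3n^2 + 5n + 100 example from the rules above): as n grows, the full operation count becomes indistinguishable from its dominant 3n^2 term.

```python
# Operation count from the example above: 3n^2 + 5n + 100.
def ops(n):
    return 3 * n**2 + 5 * n + 100

# The ratio of the full count to the dominant term approaches 1 as n grows,
# which is why the constants and lower-order terms are dropped.
for n in [10, 100, 10_000]:
    print(n, ops(n) / (3 * n**2))
```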

The common Big O classes, from most to least efficient:

  • O(1): Constant (e.g., accessing an array element by index)
  • O(log n): Logarithmic (e.g., binary search)
  • O(n): Linear (e.g., scanning through a list)
  • O(n log n): Linearithmic (e.g., merge sort)
  • O(n^2): Quadratic (e.g., nested loops over the same data)
  • O(2^n): Exponential (e.g., naive recursive Fibonacci)
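Three of these classes are small enough to sketch in a few lines each (function names here are illustrative, not standard):

```python
def constant_access(arr, i):
    # O(1): a single index operation, no matter how long arr is
    return arr[i]

def linear_scan(arr, target):
    # O(n): may have to look at every element
    for x in arr:
        if x == target:
            return True
    return False

def log_steps(n):
    # O(log n) behavior: repeatedly halving n reaches 1 in ~log2(n) steps
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(log_steps(1_000_000))  # 19 halvings for a million elements
```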

Lower Big O classes scale better. An O(n log n) sorting algorithm will consistently outperform an O(n^2) one on large inputs, even if the quadratic one is faster for very small arrays.

Time complexity comparisons

Here's how Big O plays out across common algorithm categories:

1. Searching algorithms:

  • Linear search O(n): checks each element one by one. Works on any list, but slow for large inputs.
  • Binary search O(log n): repeatedly divides a sorted list in half. At each step, half the remaining elements are eliminated. For a list of 1,000,000 elements, binary search needs at most about 20 comparisons.
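A standard iterative binary search looks like this (a sketch; the halving in the loop is what gives the O(log n) bound):

```python
# Iterative binary search on a sorted list. Returns the index of target,
# or -1 if it is not present.
def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # examine the middle element
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1

data = list(range(1_000_000))          # sorted input is required
print(binary_search(data, 765_432))    # 765432, found in ~20 comparisons
```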

2. Sorting algorithms:

  • Bubble sort, selection sort, insertion sort O(n^2): these use nested loops to compare and rearrange elements. Simple to implement, but impractical for large datasets.
  • Merge sort, quicksort O(n log n): these use a divide-and-conquer strategy, splitting the list into smaller pieces, sorting them, and combining the results. Note that quicksort's worst case is actually O(n^2), but its average case is O(n log n).
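Merge sort is the cleaner of the two to sketch: the list is split in half (log n levels of splitting), and each level does O(n) merging work, giving O(n log n) overall.

```python
# Merge sort sketch: split the list, sort each half recursively,
# then merge the two sorted halves.
def merge_sort(items):
    if len(items) <= 1:
        return items                      # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge step: repeatedly take the smaller front element. O(n) per level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # append whichever half remains
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]
```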

3. Graph traversal algorithms:

  • Depth-first search (DFS) and breadth-first search (BFS) O(V + E): both visit every vertex and traverse every edge, where V is the number of vertices and E the number of edges. The complexity depends on the graph's structure, not just the number of nodes.
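Here is a BFS sketch on an adjacency-list graph (the example graph is made up for illustration). Each vertex enters the queue once and each edge is examined once, which is where O(V + E) comes from:

```python
from collections import deque

# Breadth-first search: visit vertices in order of distance from start.
def bfs(graph, start):
    visited = [start]          # visit order
    seen = {start}             # vertices already discovered
    queue = deque([start])
    while queue:
        node = queue.popleft()             # each vertex dequeued once: O(V)
        for neighbor in graph[node]:       # each edge examined once: O(E)
            if neighbor not in seen:
                seen.add(neighbor)
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
```

Swapping the deque for a stack (and `popleft` for `pop`) turns the same skeleton into DFS.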

4. Dynamic programming (optimization example):

  • Fibonacci (naive recursive) O(2^n): each call branches into two more calls, recalculating the same values over and over. At n = 40, this already requires hundreds of millions of recursive calls.
  • Fibonacci (memoized or bottom-up) O(n): by storing previously computed results, you eliminate all the redundant work. This is a classic example of how algorithmic thinking can turn an unusable solution into an efficient one.
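The two versions differ by a single caching decorator. A sketch using Python's built-in `functools.lru_cache` for the memoization:

```python
from functools import lru_cache

# Naive recursion: O(2^n). Each call spawns two more, and the same
# subproblems are recomputed exponentially many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized: O(n). Each fib(k) is computed once and then served from cache.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155, returned instantly;
                     # fib_naive(40) would take minutes
```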