
Big O Notation Examples


Why This Matters

When you're analyzing algorithms on the AP CSP exam, you're not just being asked to identify what code does—you're being tested on algorithmic efficiency and your ability to reason about how programs scale. Big O notation is the language computer scientists use to describe this scaling behavior, and it connects directly to core concepts like iteration, list traversals, nested loops, and recursive problem-solving. Understanding these efficiency classes helps you predict whether an algorithm will work on 10 items, 10,000 items, or crash your computer trying.

The exam frequently presents scenarios where you must compare approaches or identify why one solution is reasonable while another is unreasonable for large inputs. This isn't about memorizing formulas—it's about recognizing patterns. When you see a single loop through a list, you should immediately think linear time. When you see nested loops, think quadratic. Don't just memorize the notation; know what code structures produce each complexity class and when that efficiency becomes a problem.


Efficient Algorithms: When Scaling Isn't a Problem

These complexity classes represent algorithms that handle growth gracefully. The key insight is that efficient algorithms either ignore input size entirely or reduce the problem space dramatically with each step.

O(1) – Constant Time

  • Execution time stays the same regardless of whether your list has 10 elements or 10 million—the operation takes one step
  • Direct access operations like aList[i] or LENGTH(aList) are constant time because the computer jumps straight to the answer (see the sketch after this list)
  • Ideal for exam questions asking about the "most efficient" approach—if you can solve it in constant time, that's always the best answer
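
To make this concrete, here is a minimal sketch in Python standing in for the AP CSP pseudocode; the variable names are illustrative only.

```python
# Constant-time operations: the work does not depend on the length of the list.
small_list = list(range(10))
big_list = list(range(10_000_000))

third_item = small_list[2]   # direct index access, like aList[i] -> one step
last_item = big_list[-1]     # still one step; the position is computed, not searched for
size = len(big_list)         # like LENGTH(aList); the length is stored, not counted -> one step
```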

O(log n) – Logarithmic Time

  • Halving the problem with each step means even massive inputs require surprisingly few operations—a billion items needs only about 30 steps
  • Binary search is the classic example: by checking the middle element and eliminating half the remaining items, you avoid checking every element (sketched in code after this list)
  • Requires sorted data to work, which is why exam questions often pair this with sorting algorithm discussions
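
Below is a minimal binary search sketch in Python (the exam's pseudocode looks different, but the halving logic is the same); it assumes sorted_list is already in ascending order.

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if it is absent.
    Each pass discards half of the remaining elements, so the number of
    passes grows like log2(n) rather than n."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2        # look at the middle element
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1              # target can only be in the upper half
        else:
            high = mid - 1             # target can only be in the lower half
    return -1

print(binary_search([2, 5, 8, 13, 21, 34], 13))   # -> 3
```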

Compare: O(1) vs. O(log n)—both are highly efficient, but constant time doesn't depend on input at all, while logarithmic time grows slowly with larger inputs. If an FRQ asks about searching, binary search (log n) beats linear search (n), but direct index access (1) beats both.


Linear Growth: The Baseline for Comparison

When an algorithm must examine each element once, you get linear complexity. This is often the best you can do when every item matters.

O(n) – Linear Time

  • One pass through the data means execution time grows proportionally—double the input, double the time
  • List traversals using FOR EACH item IN aList are the textbook example; finding minimum/maximum values, computing sums, and linear search all follow this pattern (see the sketch after this list)
  • Reasonable for most inputs but becomes slow with very large datasets—this is your baseline for "acceptable" efficiency
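
As a sketch, here is the find-the-maximum example in Python; the FOR EACH pattern becomes a plain for loop, and the function assumes the list is non-empty.

```python
def find_maximum(values):
    """One pass through the list: every element is examined exactly once,
    so the number of steps grows in direct proportion to len(values)."""
    largest = values[0]          # assumes at least one element
    for item in values:          # Python's version of FOR EACH item IN aList
        if item > largest:
            largest = item
    return largest

print(find_maximum([7, 3, 19, 4, 11]))   # -> 19
```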

O(n log n) – Linearithmic Time

  • Combines linear traversal with logarithmic division, typically seen when you must process every element but can organize work efficiently
  • Efficient sorting algorithms like merge sort achieve this by repeatedly dividing the list in half (log n divisions) while processing all elements (n work per level); a sketch follows this list
  • The practical limit for sorting—you can't sort faster than this using comparisons, making it a key benchmark on efficiency questions
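
A compact merge sort sketch in Python, using a simple slice-and-merge approach (real library sorts are more sophisticated, but the n-work-per-level, log-n-levels structure is the point):

```python
def merge_sort(values):
    """O(n log n): the list is halved about log2(n) times, and each level
    of halving does O(n) work merging the sorted halves back together."""
    if len(values) <= 1:                      # a list of 0 or 1 items is already sorted
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])           # sort each half recursively
    right = merge_sort(values[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: linear in the list size
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])                   # one side may have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 2, 3]))   # -> [1, 2, 3, 4, 5]
```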

Compare: O(n) vs. O(n log n)—both scale reasonably, but linearithmic grows slightly faster. A linear search through an unsorted list is O(n), but if you need to search many times, sorting first at O(n log n) then using O(log n) searches becomes worthwhile.
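
A rough back-of-envelope comparison of the two strategies (operation counts, not real timings; the values of n and k below are made up for illustration):

```python
import math

n = 1_000_000   # elements in the list (illustrative)
k = 1_000       # number of searches we plan to run (illustrative)

repeated_linear = k * n                                  # k separate O(n) scans
sort_then_search = n * math.log2(n) + k * math.log2(n)   # one O(n log n) sort, then k O(log n) searches

print(f"{repeated_linear:,.0f} vs {sort_then_search:,.0f}")
# roughly 1,000,000,000 vs 20,000,000 -> sorting once pays for itself quickly
```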


Polynomial Growth: When Nesting Creates Problems

Nested loops multiply their iterations, causing execution time to grow as a power of the input size. Each additional nesting level adds another factor of n.

O(n²) – Quadratic Time

  • Nested loops where both iterate through the input create n × n operations—common in algorithms comparing every pair of elements
  • Simple sorting algorithms like bubble sort and selection sort fall here because they repeatedly scan the remaining unsorted portion (selection sort is sketched after this list)
  • Becomes unreasonable quickly—1,000 items means 1,000,000 operations; 10,000 items means 100,000,000 operations
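
Here is a selection sort sketch in Python showing the nested-loop shape that produces quadratic growth: roughly n passes, each rescanning the unsorted remainder.

```python
def selection_sort(values):
    """O(n^2): the outer loop runs n times, and the inner loop rescans the
    remaining unsorted portion, so about n * n comparisons in total."""
    values = list(values)                 # sort a copy, leave the input alone
    n = len(values)
    for i in range(n):                    # outer loop: n passes
        smallest = i
        for j in range(i + 1, n):         # inner loop: scan everything not yet placed
            if values[j] < values[smallest]:
                smallest = j
        values[i], values[smallest] = values[smallest], values[i]   # put the minimum in place
    return values

print(selection_sort([5, 1, 4, 2, 3]))   # -> [1, 2, 3, 4, 5]
```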

Compare: O(n log n) vs. O(n²)—both sort data, but efficient sorts (merge sort) dramatically outperform simple sorts (bubble sort) on large inputs. Exam questions often ask why one sorting approach is "more reasonable" than another—this is why.


Exponential and Beyond: Unreasonable Algorithms

These complexity classes grow so rapidly that they become impractical for all but the smallest inputs. The defining characteristic is that adding one element multiplies (rather than adds to) the work required.

O(2ⁿ) – Exponential Time

  • Doubling with each element means 20 items require over a million operations, and 30 items require over a billion
  • Recursive exploration of all possibilities often produces this—like computing Fibonacci numbers naively by recalculating the same values repeatedly (sketched after this list)
  • Classic "unreasonable" algorithm on the exam—when asked to identify algorithms that won't work for large inputs, exponential time is the red flag
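
A minimal sketch of the naive recursive Fibonacci in Python; each call spawns two more calls, so the call tree roughly doubles with every extra unit of n.

```python
def fib(n):
    """Naive recursion: exponential time, because the same subproblems
    (fib(n-2), fib(n-3), ...) are recomputed over and over."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)   # two recursive calls per call -> the work roughly doubles

print(fib(10))   # -> 55, effectively instant
# fib(40) already takes a noticeably long time; a loop or memoization fixes this.
```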

O(n!) – Factorial Time

  • Permutation problems where you must examine every possible ordering of n items—10 items means 3.6 million orderings; 15 items means over a trillion
  • The traveling salesman problem exemplifies this: finding the shortest route visiting all cities requires checking every possible route order (a brute-force version is sketched after this list)
  • Effectively unsolvable for inputs beyond about 10-15 items, making it the textbook example of computational intractability
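
Below is a brute-force sketch in Python for a simplified traveling salesman variant (shortest route visiting every city once, without returning to the start); the distance table and city count are made up for illustration. With n cities there are n! orderings to try.

```python
from itertools import permutations

def shortest_route_length(distance):
    """Brute force: try every ordering of the cities and keep the cheapest.
    distance is an n x n table where distance[a][b] is the cost from a to b."""
    cities = range(len(distance))
    best = None
    for route in permutations(cities):                   # n! candidate routes
        total = sum(distance[route[i]][route[i + 1]]     # add up each leg of the route
                    for i in range(len(route) - 1))
        if best is None or total < best:
            best = total
    return best

# A tiny 4-city example: only 4! = 24 routes. At 15 cities it would be over a trillion.
distances = [[0, 2, 9, 10],
             [2, 0, 6, 4],
             [9, 6, 0, 3],
             [10, 4, 3, 0]]
print(shortest_route_length(distances))   # -> 9
```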

Compare: O(2ⁿ) vs. O(n!)—both are "unreasonable," but factorial grows even faster. For n = 10, exponential gives ~1,000 operations while factorial gives ~3,600,000. Both signal that you need a different approach for large inputs.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Constant time operations | Array index access aList[i], LENGTH(aList), arithmetic operations |
| Logarithmic efficiency | Binary search on sorted lists |
| Linear traversal | FOR EACH loops, linear search, sum/average calculations |
| Efficient sorting | Merge sort, heap sort |
| Nested loop patterns | Bubble sort, selection sort, comparing all pairs |
| Exponential growth | Naive recursive Fibonacci, subset generation |
| Factorial growth | Traveling salesman, generating all permutations |
| Reasonable vs. unreasonable | O(n²) and below generally reasonable; O(2ⁿ) and above unreasonable |

Self-Check Questions

  1. A program uses a FOR EACH loop to find the maximum value in a list of n elements. What is its time complexity, and why can't it be done faster?

  2. Compare linear search on an unsorted list versus binary search on a sorted list. Under what conditions would you choose each approach?

  3. Two sorting algorithms are available: one runs in O(n log n) time, the other in O(n²) time. For a list of 10,000 elements, approximately how many times more operations does the slower algorithm require?

  4. A recursive algorithm solves a problem by making two recursive calls, each on a problem half the original size. Is this algorithm's complexity closer to O(n), O(n log n), or O(2ⁿ)? Explain your reasoning.

  5. An exam question asks you to identify which algorithm is "unreasonable" for large inputs. What complexity classes should immediately signal "unreasonable," and what code patterns typically produce them?