Fiveable

🧠Thinking Like a Mathematician Unit 10 Review

10.2 Time complexity

Written by the Fiveable Content Team • Last updated August 2025

Definition of time complexity

Time complexity measures how the number of operations an algorithm performs grows as the input size increases. Rather than timing code on a specific computer (which varies by hardware), time complexity gives you a hardware-independent way to compare algorithms and predict how they'll behave at scale.

Big O notation

Big O notation expresses the upper bound of an algorithm's growth rate. You write it as O(f(n)), where f(n) is a function describing how operations scale with input size n.

Two key rules make Big O practical:

  • Drop constant factors. O(3n) simplifies to O(n) because the constant doesn't change the growth rate.
  • Drop lower-order terms. O(n^2 + n) simplifies to O(n^2) because the quadratic term dominates as n gets large.

This means Big O captures the shape of growth, not the exact operation count. That's the whole point: you want to know whether doubling the input will double the work, square it, or barely change it.

Asymptotic analysis

Asymptotic analysis studies how an algorithm behaves as input size approaches infinity. Small inputs can be misleading. An algorithm with higher overhead might actually beat a "faster" algorithm on tiny inputs, but asymptotic analysis reveals which one wins in the long run. This is how you spot scalability problems before they hit you in production.

Best, worst, and average case

A single algorithm can have different time complexities depending on the input it receives:

  • Best case is the minimum operations needed (e.g., the item you're searching for is the first element).
  • Worst case is the maximum operations needed (e.g., the item is last, or not present at all).
  • Average case is the expected complexity across typical inputs.

Big O usually refers to the worst case unless stated otherwise. That's because worst-case analysis gives you a guarantee: the algorithm will never be slower than this bound.

Common time complexities

These are listed from fastest (constant time) to slowest (exponential time) — that is, in order of increasing growth rate. Knowing them cold will help you quickly evaluate any algorithm.

Constant time O(1)

The number of operations stays the same no matter how large the input is. Accessing an array element by index is O(1) because the computer jumps directly to that memory address. Pushing or popping from a stack is also O(1). This is the best you can get.
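A minimal sketch of these constant-time operations, using a Python list as both the array and the stack (the function name `get_by_index` is illustrative, not from the source):

```python
def get_by_index(arr, i):
    # Direct index access: the element's location is computed from i,
    # so the cost does not depend on len(arr).
    return arr[i]

# A list used as a stack: push and pop at the end are O(1)
# (append is O(1) amortized due to occasional resizing).
stack = []
stack.append(42)   # push
top = stack.pop()  # pop
```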

Logarithmic time O(log n)

Operations grow logarithmically, meaning the algorithm cuts the problem roughly in half with each step. Binary search is the classic example: searching a sorted array of 1,000,000 elements takes at most about 20 comparisons (since log_2(1,000,000) ≈ 20). Balanced binary tree operations like insertion and lookup also fall here. This scales extremely well.
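Binary search can be sketched like this — each iteration discards half of the remaining range, which is where the logarithm comes from:

```python
def binary_search(sorted_arr, target):
    """Return the index of target in sorted_arr, or -1 if absent.

    Each comparison halves the search range, so at most about
    log2(n) + 1 iterations run: O(log n).
    """
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1
```

Note the precondition: the array must already be sorted, or the halving logic breaks down.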

Linear time O(n)

Operations grow in direct proportion to input size. If you double the input, you double the work. Linear search (checking each element one by one) and traversing an array or linked list are both O(n). For problems where you need to look at every element at least once, linear time is the best possible.
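Linear search, for contrast with binary search above, is a single pass over the data:

```python
def linear_search(arr, target):
    # Checks each element once: n comparisons in the worst case
    # (target is last or absent), so O(n). Works on unsorted data.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1
```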

Linearithmic time O(n log n)

This combines linear and logarithmic growth. It's the time complexity of the most efficient comparison-based sorting algorithms: merge sort always runs in O(n log n), and quicksort achieves it on average. The intuition is that these algorithms do O(n) work across each of O(log n) levels of recursion.
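A compact merge sort sketch makes the two factors visible: the recursion depth contributes the log n levels, and the merge step does the O(n) work at each level:

```python
def merge_sort(arr):
    """O(n log n): about log2(n) levels of halving, O(n) merge work per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side may have leftovers
    merged.extend(right[j:])
    return merged
```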

Quadratic time O(n^2)

Operations grow with the square of the input size. This typically comes from nested loops that each iterate over the input. Bubble sort and insertion sort are both O(n^2) in the worst case. For 1,000 elements that's 1,000,000 operations; for 10,000 elements it's 100,000,000. You can see why these become impractical for large datasets.
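The nested-loop pattern that produces quadratic growth is easy to see in bubble sort:

```python
def bubble_sort(arr):
    """Nested loops over the input give O(n^2) comparisons in the worst case."""
    arr = list(arr)  # work on a copy
    n = len(arr)
    for i in range(n):               # outer loop: n passes
        for j in range(n - 1 - i):   # inner loop: up to n - 1 comparisons
            if arr[j] > arr[j + 1]:
                # Swap adjacent out-of-order elements.
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```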

Exponential time O(2^n)

Each additional input element roughly doubles the total work. A naive recursive Fibonacci implementation is O(2^n) because it recomputes the same subproblems over and over. Brute-force solutions to the traveling salesman problem also land here. At just n = 30, you're looking at over a billion operations. These algorithms are essentially unusable for anything beyond small inputs.
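The naive recursive Fibonacci mentioned above looks innocent but branches into two calls at every level:

```python
def fib_naive(n):
    # Two recursive calls per level with no caching: the same
    # subproblems (fib_naive(n-2), fib_naive(n-3), ...) are
    # recomputed repeatedly, giving roughly O(2^n) growth.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Even fib_naive(40) takes noticeable time; the memoized version in the optimization section below this same tree collapses to linear work.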

Analyzing algorithms

Counting operations

To determine time complexity, identify the basic operations your algorithm performs (comparisons, assignments, arithmetic) and count how many times they execute relative to input size n.

Consider this example of finding the maximum value in an array:

  1. Set a variable max to the first element. (1 assignment)

  2. Loop through the remaining n - 1 elements. (n - 1 iterations)

  3. In each iteration, compare the current element to max. (1 comparison per iteration)

  4. If it's larger, update max. (at most 1 assignment per iteration)

The dominant cost is the loop: roughly n comparisons. So this is O(n).
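The four steps above translate directly into code, with the operation counts as comments:

```python
def find_max(arr):
    max_value = arr[0]           # step 1: 1 assignment
    for value in arr[1:]:        # step 2: n - 1 iterations
        if value > max_value:    # step 3: 1 comparison per iteration
            max_value = value    # step 4: at most 1 assignment per iteration
    return max_value             # total: roughly n operations, so O(n)
```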

Identifying dominant terms

When you have an expression like 5n^2 + 3n + 10, the n^2 term grows fastest as n increases. By the time n = 1,000, the 5n^2 term is 5,000,000 while 3n is only 3,000. The lower-order terms become negligible, so the complexity is O(n^2).
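You can check this dominance numerically — at n = 1,000 the quadratic term supplies essentially the whole count:

```python
def cost(n):
    # Operation count 5n^2 + 3n + 10 from the text.
    return 5 * n**2 + 3 * n + 10

n = 1000
quadratic_part = 5 * n**2        # 5,000,000
linear_part = 3 * n              # 3,000
share = quadratic_part / cost(n) # fraction of total contributed by 5n^2
# share is over 99.9%, which is why the expression simplifies to O(n^2).
```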

Simplifying expressions

The simplification process follows directly from the two Big O rules:

  1. Remove constant coefficients: O(5n^2) becomes O(n^2)
  2. Drop lower-order terms: O(n^2 + n + 100) becomes O(n^2)
  3. Combine where possible: O(n log n + n) becomes O(n log n)

The result is a clean expression that makes comparison between algorithms straightforward.

Factors affecting time complexity

Input size

Input size n is the variable that drives everything. An O(n^2) algorithm might work fine for n = 100 (10,000 operations) but choke at n = 100,000 (10 billion operations). Always consider the realistic range of n your algorithm will face.

Algorithm design

The approach you choose matters enormously. Searching a sorted array with linear search is O(n), but switching to binary search drops it to O(log n). That's not a small improvement; for a million elements, it's the difference between 1,000,000 and 20 operations. Choosing the right strategy is often more impactful than micro-optimizing code.

Data structures

Your choice of data structure directly determines the cost of common operations:

  • Arrays give O(1) access by index but O(n) search for unsorted data.
  • Hash tables provide O(1) average-case lookups and insertions.
  • Balanced binary search trees offer O(log n) for search, insert, and delete.

Picking the right data structure for your problem can shift an algorithm's complexity class entirely.
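A small sketch of this in Python, where `list` behaves like an array and `set` is hash-based: the same membership question has different costs depending on the structure holding the data.

```python
data = list(range(100_000))
lookup = set(data)  # hash-based: O(1) average-case membership

# Same question, different underlying work:
in_list = 99_999 in data    # scans elements one by one: O(n)
in_set = 99_999 in lookup   # hashes the key and jumps to its bucket: O(1) average
```

Both expressions return True, but on a large collection the list version does tens of thousands of comparisons while the set version does roughly one hash lookup.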

Time complexity in practice

Optimizing algorithms

When an algorithm is too slow, you have several strategies:

  • Memoization stores results of expensive function calls so you don't recompute them. This is what turns naive recursive Fibonacci from O(2^n) to O(n).
  • Dynamic programming builds solutions to larger problems from stored solutions to smaller subproblems.
  • Eliminating redundant work, like breaking out of a loop early when the answer is found, can improve real-world performance even if the Big O class stays the same.
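The memoization strategy above is a one-line change in Python using the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct value of n is computed once and cached, so the
    # exponential recursion tree collapses to O(n) total work.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

fib_memo(50) returns instantly, whereas the naive version would take on the order of 2^50 operations.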

Trade-offs with space complexity

Faster algorithms often use more memory. This is a fundamental trade-off you'll encounter repeatedly:

  • Hash tables trade O(n) extra space for O(1) average lookup time.
  • Dynamic programming trades space (storing a table of subproblem results) for time (avoiding recomputation).

When memory is limited, you may need to accept a slower algorithm. The right choice depends on your specific constraints.

Scalability considerations

An algorithm that works for 1,000 users might collapse at 1,000,000. Scalability analysis asks: what happens when n gets very large? An O(n log n) sort will handle millions of records comfortably, while an O(n^2) sort will grind to a halt. For systems expected to grow, choosing algorithms with better asymptotic complexity pays off over time.

Advanced concepts

Amortized analysis

Some data structures have operations that are usually cheap but occasionally expensive. A dynamic array, for instance, copies all its elements when it runs out of space, which is O(n). But this resizing happens so rarely that the average cost per insertion, spread over many operations, is still O(1). Amortized analysis captures this by averaging the cost over a sequence of operations rather than looking at each one in isolation.
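A simulation makes the amortized argument concrete — assuming the common doubling strategy, the total number of elements ever copied during n appends stays below 2n:

```python
def append_with_doubling(n):
    """Simulate a dynamic array that doubles capacity when full.

    Returns the total number of elements copied during resizes
    over n appends.
    """
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # copy everything into a bigger buffer: O(size) once
            capacity *= 2
        size += 1
    return copies

# For n = 1024 the copies sum to 1 + 2 + 4 + ... + 512 = 1023 < 2n,
# so the amortized cost per append is O(1).
```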

Recurrence relations

Recursive algorithms have time complexities that are naturally expressed as recurrence relations. For merge sort, the recurrence is:

T(n) = 2T(n/2) + O(n)

This says: to sort n elements, you sort two halves (each of size n/2) and then merge them in O(n) time. Solving this recurrence gives T(n) = O(n log n).

Master theorem

The Master Theorem gives you a formula for solving recurrences of the form:

T(n) = aT(n/b) + O(n^d)

where a is the number of subproblems, n/b is the size of each subproblem, and O(n^d) is the cost of dividing and combining. By comparing log_b a to d, you can determine the overall complexity without solving the recurrence from scratch. For merge sort, a = 2, b = 2, d = 1, and since log_2 2 = 1 = d, the result is O(n^d log n) = O(n log n).

Practical tools and techniques

Language-specific considerations

Time complexity is language-independent in theory, but practical performance varies. Compiled languages (C, C++) generally execute faster than interpreted ones (Python). Garbage collection in languages like Java can introduce unpredictable pauses. Built-in data structures may have different underlying implementations across languages. These factors don't change Big O class, but they affect real-world running time.

Profiling and benchmarking

Profiling tools measure where your code actually spends its time. Tools like gprof (C/C++) and cProfile (Python) pinpoint bottlenecks that theoretical analysis might miss, such as cache behavior or I/O delays.

Benchmarking compares algorithms empirically by running them on various input sizes and measuring execution time. This helps validate your theoretical analysis and accounts for real-world factors like input distribution, hardware, and compiler optimizations. A good benchmark tests multiple input sizes to confirm the expected growth pattern.
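A minimal benchmark of this kind can be written with the standard library's `timeit` module — the function and sizes here are illustrative, not from the source. If the measured times grow roughly tenfold as n grows tenfold, that is consistent with the O(n) analysis:

```python
import timeit

def linear_sum(n):
    # An O(n) function to benchmark: one addition per iteration.
    total = 0
    for i in range(n):
        total += i
    return total

# Run at several input sizes to observe the growth pattern.
for n in (1_000, 10_000, 100_000):
    t = timeit.timeit(lambda: linear_sum(n), number=10)
    print(f"n={n:>7}: {t:.4f}s")
```

Real benchmarks should also repeat runs and warm caches; a single timing at a single size tells you almost nothing about the growth curve.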