Definition of time complexity
Time complexity measures how the number of operations an algorithm performs grows as the input size increases. Rather than timing code on a specific computer (which varies by hardware), time complexity gives you a hardware-independent way to compare algorithms and predict how they'll behave at scale.
Big O notation
Big O notation expresses the upper bound of an algorithm's growth rate. You write it as O(f(n)), where f(n) is a function describing how operations scale with input size n.
Two key rules make Big O practical:
- Drop constant factors. O(2n) simplifies to O(n) because the constant doesn't change the growth rate.
- Drop lower-order terms. O(n² + n) simplifies to O(n²) because the quadratic term dominates as n gets large.
This means Big O captures the shape of growth, not the exact operation count. That's the whole point: you want to know whether doubling the input will double the work, square it, or barely change it.
Asymptotic analysis
Asymptotic analysis studies how an algorithm behaves as input size approaches infinity. Small inputs can be misleading. An algorithm with higher overhead might actually beat a "faster" algorithm on tiny inputs, but asymptotic analysis reveals which one wins in the long run. This is how you spot scalability problems before they hit you in production.
Best, worst, and average case
A single algorithm can have different time complexities depending on the input it receives:
- Best case is the minimum operations needed (e.g., the item you're searching for is the first element).
- Worst case is the maximum operations needed (e.g., the item is last, or not present at all).
- Average case is the expected complexity across typical inputs.
Big O usually refers to the worst case unless stated otherwise. That's because worst-case analysis gives you a guarantee: the algorithm will never be slower than this bound.
Common time complexities
These are listed from slowest-growing to fastest. Knowing them cold will help you quickly evaluate any algorithm.
Constant time
The number of operations stays the same no matter how large the input is. Accessing an array element by index is O(1) because the computer jumps directly to that memory address. Pushing or popping from a stack is also O(1). This is the best you can get.
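As a minimal Python sketch (the helper name constant_time_examples is illustrative, not from the text), each operation below costs the same whether the list holds three elements or three million:

```python
def constant_time_examples(items):
    first = items[0]      # index access: O(1), a direct jump to the slot
    items.append(99)      # stack push onto the end: O(1) (amortized)
    popped = items.pop()  # stack pop from the end: O(1)
    return first, popped

print(constant_time_examples([10, 20, 30]))  # → (10, 99)
```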
Logarithmic time
Operations grow as O(log n), meaning the algorithm cuts the problem roughly in half with each step. Binary search is the classic example: searching a sorted array of 1,000,000 elements takes at most about 20 comparisons (since log₂ 1,000,000 ≈ 20). Balanced binary tree operations like insertion and lookup also fall here. This scales extremely well.
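Binary search can be sketched in Python like this (a standard textbook version, not code from the text):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Each step halves the search range, giving O(log n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1          # target is in the upper half
        else:
            hi = mid - 1          # target is in the lower half
    return -1

data = list(range(0, 2_000_000, 2))      # 1,000,000 sorted even numbers
print(binary_search(data, 1_234_568))    # found in at most ~20 steps
```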
Linear time
Operations grow in direct proportion to input size. If you double the input, you double the work. Linear search (checking each element one by one) and traversing an array or linked list are both O(n). For problems where you need to look at every element at least once, linear time is the best possible.
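A minimal linear search sketch in Python:

```python
def linear_search(items, target):
    """Check each element in turn: O(n) comparisons in the worst case."""
    for i, value in enumerate(items):
        if value == target:
            return i          # best case: target is first, O(1)
    return -1                 # worst case: scanned all n elements

print(linear_search([7, 3, 9, 3], 9))   # → 2
print(linear_search([7, 3, 9, 3], 5))   # → -1
```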
Linearithmic time
This combines linear and logarithmic growth: O(n log n). It's the time complexity of the most efficient comparison-based sorting algorithms: merge sort always runs in O(n log n), and quicksort achieves it on average. The intuition is that these algorithms do O(n) work across each of O(log n) levels of recursion.
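A compact merge sort sketch in Python shows the structure: O(log n) levels of halving, with an O(n) merge at each level (a standard top-down version, not code from the text):

```python
def merge_sort(items):
    """Sort a list in O(n log n) by recursively halving and merging."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # sort the first half
    right = merge_sort(items[mid:])    # sort the second half
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```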
Quadratic time
Operations grow with the square of the input size: O(n²). This typically comes from nested loops that each iterate over the input. Bubble sort and insertion sort are both O(n²) in the worst case. For 1,000 elements that's 1,000,000 operations; for 10,000 elements it's 100,000,000. You can see why these become impractical for large datasets.
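Bubble sort shows the nested-loop pattern directly (a standard version, for illustration):

```python
def bubble_sort(items):
    """O(n²): the outer loop runs n passes, the inner loop up to n - 1
    comparisons per pass, swapping adjacent out-of-order pairs."""
    items = list(items)                 # work on a copy
    n = len(items)
    for i in range(n):                  # outer loop: n passes
        for j in range(n - 1 - i):      # inner loop over unsorted prefix
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([4, 1, 3, 2]))  # → [1, 2, 3, 4]
```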

Exponential time
Each additional input element roughly doubles the total work: O(2ⁿ). A naive recursive Fibonacci implementation is O(2ⁿ) because it recomputes the same subproblems over and over. Brute-force solutions to the traveling salesman problem also land here. At just n = 30, you're looking at over a billion operations. These algorithms are essentially unusable for anything beyond small inputs.
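The naive recursive Fibonacci mentioned above, sketched in Python:

```python
def fib_naive(n):
    """O(2ⁿ): each call spawns two more, recomputing the same
    subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(10))  # → 55; fine for small n, but fib_naive(50)
                      # would take on the order of 2⁵⁰ recursive calls
```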
Analyzing algorithms
Counting operations
To determine time complexity, identify the basic operations your algorithm performs (comparisons, assignments, arithmetic) and count how many times they execute relative to input size n.
Consider this example of finding the maximum value in an array:
- Set a variable max to the first element. (1 assignment)
- Loop through the remaining n - 1 elements. (n - 1 iterations)
- In each iteration, compare the current element to max. (1 comparison per iteration)
- If it's larger, update max. (at most 1 assignment per iteration)
The dominant cost is the loop: roughly n - 1 comparisons. So this is O(n).
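Those steps translate directly into code (the variable is named maximum here to avoid shadowing Python's built-in max):

```python
def find_max(items):
    """Find the largest element: 1 assignment, then n - 1
    comparisons, so O(n) overall."""
    maximum = items[0]              # 1 assignment
    for value in items[1:]:         # n - 1 iterations
        if value > maximum:         # 1 comparison per iteration
            maximum = value         # at most 1 assignment per iteration
    return maximum

print(find_max([3, 7, 2, 9, 4]))  # → 9
```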
Identifying dominant terms
When you have an expression like 5n² + 3n, the 5n² term grows fastest as n increases. By the time n = 1,000, the 5n² term is 5,000,000 while 3n is only 3,000. The lower-order terms become negligible, so the complexity is O(n²).
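A quick numeric check of this, using 5n² + 3n as the sample cost expression:

```python
def total_ops(n):
    """Sample operation count 5n² + 3n, for illustration only."""
    return 5 * n * n + 3 * n

for n in (10, 100, 1_000):
    print(n, 5 * n * n, 3 * n)
# At n = 1,000 the terms are 5,000,000 and 3,000: the quadratic term
# is effectively the whole cost, which is why the expression is O(n²).
```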
Simplifying expressions
The simplification process follows directly from the two Big O rules:
- Remove constant coefficients: O(3n) becomes O(n)
- Drop lower-order terms: O(n² + n + 1) becomes O(n²)
- Combine where possible: O(n) + O(n) becomes O(n)
The result is a clean expression that makes comparison between algorithms straightforward.
Factors affecting time complexity
Input size
Input size is the variable that drives everything. A quadratic algorithm might work fine for n = 100 (10,000 operations) but choke at n = 100,000 (10 billion operations). Always consider the realistic range of n your algorithm will face.
Algorithm design
The approach you choose matters enormously. Searching a sorted array with linear search is O(n), but switching to binary search drops it to O(log n). That's not a small improvement; for a million elements, it's the difference between 1,000,000 and 20 operations. Choosing the right strategy is often more impactful than micro-optimizing code.
Data structures
Your choice of data structure directly determines the cost of common operations:
- Arrays give O(1) access by index but O(n) search for unsorted data.
- Hash tables provide O(1) average-case lookups and insertions.
- Balanced binary search trees offer O(log n) for search, insert, and delete.
Picking the right data structure for your problem can shift an algorithm's complexity class entirely.
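A small Python illustration of how the choice plays out: membership testing in a list scans linearly, while a set (hash-based, like a dict) jumps straight to the element.

```python
# Same million elements, two structures, two very different lookup costs.
data_list = list(range(1_000_000))
data_set = set(data_list)

# Both lines answer the same question; the list compares up to a
# million elements, the set computes one hash and checks one bucket.
print(999_999 in data_list)  # O(n) scan
print(999_999 in data_set)   # O(1) average-case hash lookup
```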

Time complexity in practice
Optimizing algorithms
When an algorithm is too slow, you have several strategies:
- Memoization stores results of expensive function calls so you don't recompute them. This is what turns naive recursive Fibonacci from O(2ⁿ) to O(n).
- Dynamic programming builds solutions to larger problems from stored solutions to smaller subproblems.
- Eliminating redundant work, like breaking out of a loop early when the answer is found, can improve real-world performance even if the Big O class stays the same.
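As an example of the first strategy, memoization via Python's functools.lru_cache turns the naive recursive Fibonacci into a linear-time function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized Fibonacci: each subproblem is computed once and cached,
    so the total work is O(n) instead of O(2ⁿ)."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # → 12586269025, computed almost instantly
```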
Trade-offs with space complexity
Faster algorithms often use more memory. This is a fundamental trade-off you'll encounter repeatedly:
- Hash tables trade extra space for O(1) average lookup time.
- Dynamic programming trades space (storing a table of subproblem results) for time (avoiding recomputation).
When memory is limited, you may need to accept a slower algorithm. The right choice depends on your specific constraints.
Scalability considerations
An algorithm that works for 1,000 users might collapse at 1,000,000. Scalability analysis asks: what happens when n gets very large? An O(n log n) sort will handle millions of records comfortably, while an O(n²) sort will grind to a halt. For systems expected to grow, choosing algorithms with better asymptotic complexity pays off over time.
Advanced concepts
Amortized analysis
Some data structures have operations that are usually cheap but occasionally expensive. A dynamic array, for instance, copies all its elements when it runs out of space, which is O(n). But this resizing happens so rarely that the average cost per insertion, spread over many operations, is still O(1). Amortized analysis captures this by averaging the cost over a sequence of operations rather than looking at each one in isolation.
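A toy dynamic array makes the accounting visible (the class and its copies counter are illustrative, not from the text):

```python
class DynamicArray:
    """Toy dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.slots = [None] * self.capacity
        self.copies = 0            # counts elements moved during resizes

    def append(self, value):
        if self.size == self.capacity:          # full: resize, O(n)
            self.capacity *= 2
            new_slots = [None] * self.capacity
            for i in range(self.size):
                new_slots[i] = self.slots[i]
                self.copies += 1
            self.slots = new_slots
        self.slots[self.size] = value           # the cheap O(1) write
        self.size += 1

arr = DynamicArray()
for i in range(1_000):
    arr.append(i)
# Resizes copy 1 + 2 + 4 + ... + 512 = 1,023 elements in total: under
# 2n copies for n appends, so the amortized cost per append is O(1).
print(arr.size, arr.copies)  # → 1000 1023
```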
Recurrence relations
Recursive algorithms have time complexities that are naturally expressed as recurrence relations. For merge sort, the recurrence is:

T(n) = 2T(n/2) + O(n)

This says: to sort n elements, you sort two halves (each of size n/2) and then merge them in O(n) time. Solving this recurrence gives T(n) = O(n log n).
Master theorem
The Master Theorem gives you a formula for solving recurrences of the form:

T(n) = aT(n/b) + f(n)

where a is the number of subproblems, n/b is the size of each subproblem, and f(n) is the cost of dividing and combining. By comparing f(n) to n^(log_b a), you can determine the overall complexity without solving the recurrence from scratch. For merge sort, a = 2, b = 2, f(n) = O(n), and since n^(log₂ 2) = n matches f(n), the result is O(n log n).
Practical tools and techniques
Language-specific considerations
Time complexity is language-independent in theory, but practical performance varies. Compiled languages (C, C++) generally execute faster than interpreted ones (Python). Garbage collection in languages like Java can introduce unpredictable pauses. Built-in data structures may have different underlying implementations across languages. These factors don't change Big O class, but they affect real-world running time.
Profiling and benchmarking
Profiling tools measure where your code actually spends its time. Tools like gprof (C/C++) and cProfile (Python) pinpoint bottlenecks that theoretical analysis might miss, such as cache behavior or I/O delays.
Benchmarking compares algorithms empirically by running them on various input sizes and measuring execution time. This helps validate your theoretical analysis and accounts for real-world factors like input distribution, hardware, and compiler optimizations. A good benchmark tests multiple input sizes to confirm the expected growth pattern.
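A minimal benchmark sketch using Python's timeit module (absolute times depend on your hardware; the roughly doubling growth pattern is what confirms linear behavior):

```python
import timeit

def linear_scan(items):
    """An O(n) operation to benchmark: touches every element once."""
    return max(items)

for n in (10_000, 20_000, 40_000):
    data = list(range(n))
    seconds = timeit.timeit(lambda: linear_scan(data), number=100)
    print(f"n={n:>6}: {seconds:.4f}s")  # expect times to roughly double
```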