Comparison-based sorting algorithms organize data by repeatedly comparing pairs of elements to determine their order. They range from simple methods that work well on small inputs to efficient algorithms designed for large datasets. Knowing how each one works, and when to reach for it, is a core skill in data structures.
Comparison-Based Sorting Algorithms
Implementation of basic sorting algorithms
These three algorithms are straightforward to code and reason about, but they all degrade to O(n²) time on large or poorly ordered inputs.
Bubble Sort
Bubble Sort walks through the array comparing adjacent elements and swapping them if they're out of order. After each full pass, the largest unsorted element "bubbles" to its final position at the end.
- Repeat passes over the array until no swaps occur in a complete pass.
- Time complexity: O(n²) worst/average case (reverse-ordered or random input), O(n) best case (already sorted, with an early-termination check).
- Space complexity: O(1), since only a temporary swap variable is needed.
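The passes-until-no-swaps loop above can be sketched in Python (the function name is illustrative, not from a particular library):

```python
def bubble_sort(a):
    """Sort the list a in place: O(n^2) worst case, O(n) if already sorted."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i elements are already in place at the end.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            # Early termination: a full pass with no swaps means sorted.
            break
    return a
```

The `swapped` flag is what gives the O(n) best case: on sorted input the first pass does n − 1 comparisons, swaps nothing, and exits.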
Selection Sort
Selection Sort splits the array into a sorted portion (initially empty) and an unsorted portion. On each iteration, it scans the unsorted portion for the minimum element and swaps it into the next position of the sorted portion.
- The sorted region grows by one element per pass; after n − 1 passes the array is fully sorted.
- Time complexity: O(n²) in all cases. Even if the array is already sorted, it still scans for the minimum every time.
- Space complexity: O(1).
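A minimal Python sketch of the scan-and-swap loop (function name is illustrative):

```python
def selection_sort(a):
    """Sort the list a in place: O(n^2) comparisons, at most n - 1 swaps."""
    n = len(a)
    for i in range(n - 1):
        # Scan the unsorted portion a[i:] for the index of its minimum.
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # One swap per pass moves the minimum to the sorted boundary.
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]
    return a
```

Note that the inner scan always runs to the end of the array, which is why there is no O(n) best case.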
Insertion Sort
Insertion Sort treats the first element as a one-element sorted subarray. It then takes the next unsorted element and shifts larger sorted elements to the right until it finds the correct insertion point.
- Time complexity: O(n²) worst/average case, O(n) best case (already sorted, since each element only needs one comparison).
- Space complexity: O(1).
- Particularly fast on nearly sorted data because few shifts are needed. The number of shifts equals the number of inversions (pairs of elements that are out of order).
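The shift-and-insert logic can be sketched as follows in Python (function name is illustrative):

```python
def insertion_sort(a):
    """Sort the list a in place: O(n^2) worst case, O(n) on sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift sorted elements larger than key one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        # Insert key at the hole left behind.
        a[j + 1] = key
    return a
```

Each iteration of the `while` loop removes exactly one inversion, which is why the running time is proportional to the inversion count on nearly sorted data.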
Application of efficient sorting techniques
Both Merge Sort and Quick Sort use a divide-and-conquer strategy: break the problem into smaller subproblems, solve them recursively, and combine the results.
Merge Sort
- Divide: Split the array into two halves.
- Recurse: Recursively sort each half until you reach subarrays of size 1 (base case).
- Merge: Combine two sorted halves into one sorted array by comparing elements from the front of each half and placing the smaller one next.
- Time complexity: O(n log n) in all cases. The array is halved log₂ n times, and each level of recursion does O(n) work during the merge step.
- Space complexity: O(n), because the merge step requires a temporary array to hold the merged result.
- Stable: equal elements keep their original relative order.
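The divide/recurse/merge steps above can be sketched in Python (function name is illustrative; this version returns a new list rather than sorting in place):

```python
def merge_sort(a):
    """Return a sorted copy of a: O(n log n) time, O(n) extra space, stable."""
    if len(a) <= 1:
        return a  # base case: size 0 or 1 is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= (not <) is what keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is empty;
    merged.extend(right[j:])  # the other holds the leftovers
    return merged
```

The `<=` in the merge is the whole stability story: when elements compare equal, the one from the left half (which came first in the input) is taken first.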
Quick Sort
- Choose a pivot: Select an element (common choices: last element, first element, median-of-three, or random).
- Partition: Rearrange the array so that all elements less than the pivot come before it and all elements greater come after it. The pivot is now in its final sorted position.
- Recurse: Recursively sort the subarray before the pivot and the subarray after the pivot. Base case: subarray of size 0 or 1.
- Time complexity: O(n log n) best/average case (balanced partitions), O(n²) worst case (extremely unbalanced partitions, e.g., pivot is always the min or max).
- Space complexity: O(log n) for the recursive call stack in the average case. The sort is in-place since it doesn't allocate a separate output array.
- Unstable: equal elements may be reordered during partitioning.
- Worst-case behavior is rare in practice when you use randomized pivot selection or the median-of-three heuristic.
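A sketch of the pivot/partition/recurse steps in Python, using Lomuto partitioning with a randomized pivot (function name and structure are illustrative):

```python
import random

def quick_sort(a, lo=0, hi=None):
    """Sort the list a in place: O(n log n) average, O(log n) stack space."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a  # base case: subarray of size 0 or 1
    # Randomized pivot guards against the O(n^2) worst case on sorted input.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]  # move pivot to the end for Lomuto partitioning
    pivot = a[hi]
    i = lo  # boundary: a[lo:i] holds elements less than the pivot
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # pivot lands in its final sorted position
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a
```

After partitioning, index `i` holds the pivot in its final place, so the recursion never touches it again.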

Complexity analysis of comparison sorts
| Algorithm | Best | Average | Worst | Space | Stable? |
|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
One important theoretical result: no comparison-based sort can do better than O(n log n) in the worst case. This lower bound comes from the fact that there are n! possible orderings of n elements, and each comparison eliminates at most half of them, requiring at least log₂(n!) comparisons, which is Θ(n log n).
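The bound can be checked numerically: for n = 5 there are 5! = 120 orderings, so any comparison sort needs at least ⌈log₂ 120⌉ = 7 comparisons in the worst case. A small Python check (the helper name is illustrative):

```python
import math

def comparison_lower_bound(n):
    """Minimum worst-case comparisons for any comparison sort of n elements:
    ceil(log2(n!)), since each comparison at best halves the n! orderings."""
    return math.ceil(math.log2(math.factorial(n)))
```

For comparison, n log₂ n at n = 5 is about 11.6, so the bound is consistent with (and slightly below) the O(n log n) algorithms' comparison counts.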
Suitability of sorting algorithms
Choosing the right sort depends on input size, how sorted the data already is, memory constraints, and whether stability matters.
- Bubble Sort, Selection Sort, Insertion Sort
- Best for small arrays (roughly n ≤ 100) where the overhead of recursive algorithms isn't worth it.
- Insertion Sort is the standout of the three: it runs in O(n) on nearly sorted data and has low constant factors. Many real-world libraries use it as a subroutine inside Quick Sort for small partitions (e.g., Java's Arrays.sort switches to Insertion Sort below a threshold).
- Selection Sort's one advantage: it performs at most n − 1 swaps, which matters if writes are expensive.
- Merge Sort
- Guarantees O(n log n) regardless of input order, so it's a safe default for large datasets.
- Stable, making it the go-to when you need to preserve the relative order of equal elements (e.g., sorting records by one field after already sorting by another).
- The downside is the O(n) extra memory. In memory-constrained environments, this can be a dealbreaker.
- Quick Sort
- Generally the fastest in practice for large arrays due to good cache locality and small constant factors, despite sharing the same average-case complexity as Merge Sort.
- In-place (only O(log n) stack space), so it's preferred when memory is tight.
- Unstable, so avoid it when relative order of equal elements matters.
- Vulnerable to its O(n²) worst case on already-sorted or nearly-sorted input if you use a naive pivot (like always picking the first or last element). Randomized pivot selection or median-of-three effectively eliminates this risk.