
Common Search Algorithms


Why This Matters

Search algorithms form the backbone of nearly every program you'll encounter—from finding a contact in your phone to GPS navigation plotting your route. In this course, you're being tested on your ability to analyze time complexity, space complexity, and algorithmic trade-offs. Understanding search algorithms means understanding how computer scientists solve the fundamental problem of locating data efficiently, and why choosing the wrong algorithm can mean the difference between milliseconds and hours of computation.

Don't just memorize that binary search is $O(\log n)$—know why it achieves that complexity through interval halving, when it applies (sorted data only), and how it compares to alternatives. Exam questions will ask you to select the right algorithm for a scenario, analyze worst-case performance, or trace through an algorithm's execution. Master the underlying mechanisms, and you'll handle any variation they throw at you.


Sequential and Array-Based Searches

These algorithms work directly on arrays or lists, moving through elements in predictable patterns. The key distinction is whether they require sorted data and how they reduce the search space.

Linear Search

  • Checks every element sequentially—starts at index 0 and continues until the target is found or the list ends
  • Time complexity is $O(n)$ in the worst and average case, making it the baseline against which other algorithms are measured
  • Works on unsorted data, which makes it the only option when you can't guarantee ordering—flexibility at the cost of efficiency
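
To make the mechanism concrete, here is a minimal linear search sketch in Python (the function name and the -1-for-not-found convention are illustrative choices, not prescribed by the course):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):   # scan from index 0 to the end: O(n)
        if value == target:
            return i
    return -1
```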

Binary Search

  • Divides the search interval in half with each comparison—if the target is less than the middle element, search left; otherwise, search right
  • Time complexity is $O(\log n)$, achieved because each step eliminates half the remaining elements
  • Requires sorted data, so factor in the $O(n \log n)$ sorting cost if data arrives unsorted—still worth it for repeated searches
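
A sketch of the interval-halving loop in Python, assuming arr is already sorted (names are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if it is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2       # middle of the current interval
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1             # discard the left half
        else:
            high = mid - 1            # discard the right half
    return -1
```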

Jump Search

  • Jumps ahead by $\sqrt{n}$ steps, then performs linear search within the identified block
  • Time complexity is $O(\sqrt{n})$—a middle ground when binary search's random access is costly (like in linked structures)
  • Requires sorted data and works best when backward traversal is expensive compared to forward jumps
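
A Python sketch of the jump-then-scan pattern, assuming a sorted list (the block size of roughly $\sqrt{n}$ and the names are illustrative):

```python
import math

def jump_search(arr, target):
    """Return the index of target in sorted arr, or -1 if it is absent."""
    n = len(arr)
    if n == 0:
        return -1
    step = int(math.sqrt(n))          # block size: about sqrt(n)
    prev = 0
    # Jump forward block by block until the block could contain the target
    while prev < n and arr[min(prev + step, n) - 1] < target:
        prev += step
    # Linear scan inside the identified block
    for i in range(prev, min(prev + step, n)):
        if arr[i] == target:
            return i
    return -1
```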

Compare: Linear Search vs. Binary Search—both find a target in an array, but linear works on unsorted data at $O(n)$ while binary requires sorting but achieves $O(\log n)$. If an FRQ gives you a sorted array and asks for the most efficient search, binary search is always your answer.


Optimized Array Searches

These algorithms improve on binary search for specific data distributions or unknown dataset sizes. They exploit mathematical properties of the data to make smarter guesses about target location.

Interpolation Search

  • Estimates target position using value distribution—calculates probable location using the formula $pos = low + \frac{(target - arr[low]) \times (high - low)}{arr[high] - arr[low]}$
  • Best-case complexity is $O(\log \log n)$ for uniformly distributed data, but degrades to $O(n)$ when distribution is skewed
  • Requires sorted, uniformly distributed data—think searching for a name in a phone book by opening near where you'd expect it alphabetically
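
A Python sketch of the position-estimation loop, assuming sorted numeric data (integer division stands in for the formula's fraction; the names are illustrative):

```python
def interpolation_search(arr, target):
    """Return the index of target in sorted numeric arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= target <= arr[high]:
        if arr[low] == arr[high]:     # one distinct value; avoid dividing by 0
            return low if arr[low] == target else -1
        # Estimate the position from the value distribution
        pos = low + (target - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1
```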

Exponential Search

  • Finds the range first, then applies binary search—starts at index 1, doubles the index until finding a value at least as large as the target
  • Time complexity is $O(\log n)$ overall, combining $O(\log n)$ for range finding and $O(\log n)$ for binary search within that range
  • Ideal for unbounded or infinite lists where you don't know the size—commonly used in search engines and streaming data
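
A Python sketch of the doubling-then-binary-search pattern on a bounded sorted list (names are illustrative; a truly unbounded source would probe until an access fails instead of checking a known length):

```python
def exponential_search(arr, target):
    """Return the index of target in sorted arr, or -1 if it is absent."""
    n = len(arr)
    if n == 0:
        return -1
    if arr[0] == target:
        return 0
    # Double the index until we pass the target or run out of array
    bound = 1
    while bound < n and arr[bound] < target:
        bound *= 2
    # Binary search within the bracketed range
    low, high = bound // 2, min(bound, n - 1)
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```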

Compare: Binary Search vs. Interpolation Search—both require sorted data and achieve sub-linear time, but interpolation makes position estimates based on value (like humans searching a dictionary) while binary always splits in half. Use interpolation only when you can guarantee uniform distribution.


Graph Traversal Algorithms

These algorithms navigate nodes and edges rather than array indices. The fundamental choice is between exploring deep (DFS) or wide (BFS), which determines what kind of solution you find first.

Depth-First Search (DFS)

  • Explores as deep as possible before backtracking—follows one path to its end, then returns to explore alternatives
  • Time complexity is $O(V + E)$ where $V$ is vertices and $E$ is edges; implemented with recursion or an explicit stack
  • Uses $O(V)$ space for the call stack—excellent for detecting cycles, topological sorting, and solving mazes
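
An iterative DFS sketch in Python using an explicit stack, assuming the graph is an adjacency-list dict (format and names are illustrative):

```python
def dfs(graph, start):
    """Traverse graph from start depth-first; return nodes in visit order."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()                     # LIFO: follow one path deep
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(graph.get(node, []))  # neighbors wait on the stack
    return order
```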

Breadth-First Search (BFS)

  • Explores all neighbors at current depth before going deeper—guarantees you find the closest nodes first
  • Time complexity is $O(V + E)$; implemented with a queue data structure (FIFO ordering is essential)
  • Finds shortest path in unweighted graphs—if all edges have equal cost, BFS gives you the minimum number of steps
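
A BFS sketch in Python; the deque provides the FIFO ordering the algorithm depends on (adjacency-list dict assumed, names illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Traverse graph from start breadth-first; return nodes in visit order."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO: closest nodes come out first
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order
```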

Compare: DFS vs. BFS—same time complexity $O(V + E)$, but DFS uses a stack and finds a path while BFS uses a queue and finds the shortest path in unweighted graphs. When an exam asks about shortest path without edge weights, BFS is correct; when it asks about exploring all possibilities or detecting cycles, think DFS.


Heuristic Search Algorithms

These algorithms use domain knowledge to make intelligent choices about which paths to explore. Heuristics transform brute-force search into informed decision-making.

A* Search Algorithm

  • Combines path cost with heuristic estimate—evaluates nodes using $f(n) = g(n) + h(n)$ where $g(n)$ is the cost so far and $h(n)$ is the estimated cost to the goal
  • Guarantees the optimal path when the heuristic is admissible—admissible means $h(n)$ never overestimates the true remaining cost
  • Time complexity varies with heuristic quality—a perfect heuristic expands only the nodes on the optimal path, while a poor one degrades toward uninformed search
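
A sketch of A* in Python with a priority queue ordered by $f(n) = g(n) + h(n)$; the graph format (each node maps to a list of (neighbor, cost) pairs), the callable heuristic h, and comparable node labels are all assumptions made for illustration:

```python
import heapq

def a_star(graph, start, goal, h):
    """Return the cheapest path cost from start to goal, or None.

    h(node) estimates the remaining cost to goal; it must be admissible
    (never overestimate) for the returned cost to be optimal.
    """
    frontier = [(h(start), 0, start)]   # entries ordered by f = g + h
    best_g = {start: 0}                 # cheapest known cost to each node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h(neighbor), new_g, neighbor))
    return None                         # goal unreachable
```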

Compare: BFS vs. A* Search—both can find shortest paths (BFS only when edges are unweighted), but BFS explores blindly in all directions while A* uses a heuristic to prioritize promising paths. For weighted graphs or large search spaces, A* dramatically outperforms BFS when you have a good heuristic.


Quick Reference Table

| Concept | Best Examples |
|---------|---------------|
| Unsorted data search | Linear Search |
| Sorted array search | Binary Search, Jump Search |
| Uniformly distributed data | Interpolation Search |
| Unknown dataset size | Exponential Search |
| Unweighted shortest path | BFS |
| Graph traversal/cycle detection | DFS |
| Weighted shortest path with heuristic | A* Search |
| $O(\log n)$ complexity | Binary Search, Exponential Search |

Self-Check Questions

  1. Which two search algorithms both require sorted data but use different strategies to estimate target position? How do their best-case complexities compare?

  2. You need to find the shortest path between two nodes in an unweighted graph. Which algorithm guarantees the optimal solution, and what data structure does it use?

  3. Compare DFS and BFS: they share the same time complexity, so what factors should determine which one you choose for a given problem?

  4. An FRQ presents a scenario where you must search an infinitely long sorted stream of data. Which algorithm is designed for this situation, and why does it outperform standard binary search here?

  5. Explain why A* search can outperform BFS on weighted graphs. What property must the heuristic function have to guarantee A* finds the optimal path?