Search algorithms form the backbone of nearly every program you'll encounter, from finding a contact in your phone to GPS navigation plotting your route. In this course, you're being tested on your ability to analyze time complexity, space complexity, and algorithmic trade-offs. Understanding search algorithms means understanding how computer scientists solve the fundamental problem of locating data efficiently, and why choosing the wrong algorithm can mean the difference between milliseconds and hours of computation.
Don't just memorize that binary search is O(log n). Know why it achieves that complexity through interval halving, when it applies (sorted data only), and how it compares to alternatives. Exam questions will ask you to select the right algorithm for a scenario, analyze worst-case performance, or trace through an algorithm's execution. Master the underlying mechanisms, and you'll handle any variation they throw at you.
These algorithms work directly on arrays or lists, moving through elements in predictable patterns. The key distinction is whether they require sorted data and how they reduce the search space.
Linear search is the simplest approach: start at index 0 and check every element one by one until you find the target or reach the end of the list.
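A minimal Python sketch of this one-by-one scan (function name is my own):

```python
def linear_search(items, target):
    """Return the index of target, or -1 if it's absent."""
    for i, value in enumerate(items):
        if value == target:
            return i  # found: stop at the first match
    return -1  # reached the end without finding the target
```

Because it never assumes anything about element order, linear search works on unsorted data, at the cost of checking up to n elements.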
Binary search works by comparing the target to the middle element of a sorted array. If the target is smaller, you discard the right half; if larger, you discard the left half. Each comparison cuts the remaining search space in half.
Tracing through an example: say you're searching for 7 in the sorted array [1, 3, 5, 7, 9, 11]. The middle element (index 2) is 5; since 7 > 5, you discard the left half and search [7, 9, 11]. The new middle element is 9; since 7 < 9, you discard the right half, leaving [7]. The third comparison finds the target, so the search takes just three comparisons instead of four sequential checks.
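The halving loop behind that trace, sketched in Python (variable names are my own):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target is larger: discard the left half
        else:
            high = mid - 1  # target is smaller: discard the right half
    return -1

binary_search([1, 3, 5, 7, 9, 11], 7)  # → 3
```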
Jump search splits the difference between linear and binary search. You jump ahead by a fixed block size (optimally √n, where n is the list length), and once you've jumped past the target, you do a linear search backward within that block.
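A sketch of the two phases in Python (this version scans forward within the identified block, which covers the same elements as the backward scan described above):

```python
import math

def jump_search(sorted_items, target):
    """Jump ahead in blocks of ~sqrt(n), then scan the block linearly."""
    n = len(sorted_items)
    if n == 0:
        return -1
    step = int(math.sqrt(n)) or 1
    # Phase 1: jump forward until the current block's last element >= target.
    prev, curr = 0, step
    while curr < n and sorted_items[min(curr, n) - 1] < target:
        prev, curr = curr, curr + step
    # Phase 2: linear scan within the block that may contain the target.
    for i in range(prev, min(curr, n)):
        if sorted_items[i] == target:
            return i
    return -1
```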
Compare: Linear Search vs. Binary Search: both find a target in an array, but linear works on unsorted data at O(n) while binary requires sorting and achieves O(log n). If a question gives you a sorted array and asks for the most efficient search, binary search is your answer.
These algorithms improve on binary search for specific data distributions or unknown dataset sizes. They exploit mathematical properties of the data to make smarter guesses about target location.
Instead of always checking the middle element, interpolation search estimates where the target is likely to be based on its value. Think about how you'd search a dictionary: if you're looking for "walrus," you'd open near the back, not the middle.
The position estimate uses this formula:

pos = low + ((target − arr[low]) × (high − low)) / (arr[high] − arr[low])
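A Python sketch of interpolation search; the `pos` line implements the position estimate (integer division keeps the index valid):

```python
def interpolation_search(sorted_items, target):
    """Estimate the target's position from its value; best on uniform data."""
    low, high = 0, len(sorted_items) - 1
    while low <= high and sorted_items[low] <= target <= sorted_items[high]:
        if sorted_items[low] == sorted_items[high]:
            return low if sorted_items[low] == target else -1
        # Position estimate: interpolate between the endpoints' values.
        pos = low + ((target - sorted_items[low]) * (high - low)) // (
            sorted_items[high] - sorted_items[low])
        if sorted_items[pos] == target:
            return pos
        elif sorted_items[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1
```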
Exponential search is designed for situations where you don't know the size of the dataset. It works in two phases: first, it doubles an index bound (1, 2, 4, 8, ...) until it reaches an element greater than or equal to the target; then it runs a standard binary search within the last doubled range.
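The two phases sketched in Python (here on a finite list, standing in for a dataset whose size you don't know up front):

```python
def exponential_search(sorted_items, target):
    """Phase 1: double the bound; Phase 2: binary search inside it."""
    n = len(sorted_items)
    if n == 0:
        return -1
    # Phase 1: grow the bound until it passes the target (or the end).
    bound = 1
    while bound < n and sorted_items[bound] < target:
        bound *= 2
    # Phase 2: binary search between the last two bounds.
    low, high = bound // 2, min(bound, n - 1)
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```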
Compare: Binary Search vs. Interpolation Search: both require sorted data and achieve sub-linear time, but interpolation makes position estimates based on value while binary always splits in half. Use interpolation only when you can guarantee uniform distribution; otherwise its worst case is no better than linear search.
These algorithms navigate nodes and edges rather than array indices. The fundamental choice is between exploring deep (DFS) or wide (BFS), and that choice determines what kind of solution you find first.
DFS picks one path and follows it as far as possible before backtracking. When it reaches a dead end (a node with no unvisited neighbors), it returns to the most recent node that still has unexplored options.
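An iterative DFS sketch in Python using an explicit stack (graphs represented as adjacency lists; names are my own):

```python
def dfs(graph, start):
    """Depth-first traversal; returns nodes in the order visited."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()  # take the most recently discovered node
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so the first-listed one is explored first.
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(neighbor)
    return order
```

The stack is what makes this depth-first: the newest discovery is always explored next, so one path is followed to a dead end before the search backtracks.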
BFS explores all neighbors of the current node before moving to the next level of depth. This level-by-level approach guarantees that you discover closer nodes before farther ones.
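A BFS sketch in Python that returns a shortest path in an unweighted graph; the FIFO queue enforces the level-by-level order:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])  # queue of paths, not just nodes
    visited = {start}
    while queue:
        path = queue.popleft()  # oldest discovery first: level by level
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None
```

Because every node at distance d is dequeued before any node at distance d+1, the first path that reaches the goal is guaranteed to be among the shortest.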
Compare: DFS vs. BFS have the same time complexity of O(V + E), where V is the number of vertices and E the number of edges, but they solve different problems. DFS uses a stack and finds a path; BFS uses a queue and finds the shortest path in unweighted graphs. When an exam asks about shortest path without edge weights, BFS is correct. When it asks about exploring all possibilities or detecting cycles, think DFS.
These algorithms use domain knowledge to make intelligent choices about which paths to explore. Heuristics transform brute-force search into informed decision-making.
A* evaluates each node using the function:

f(n) = g(n) + h(n)

where g(n) is the actual cost from the start to node n, and h(n) is a heuristic estimate of the cost from n to the goal. At each step, A* expands the node with the lowest f(n) value.
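A sketch of A* in Python using a priority queue ordered by f(n) = g(n) + h(n); the graph maps each node to (neighbor, edge_cost) pairs, and `h` is the caller's heuristic (names are my own):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* on a weighted graph. h(node) estimates the cost from node to goal.
    Returns (path, cost), or (None, inf) if the goal is unreachable."""
    # Entries are (f, g, node, path); heapq pops the lowest f first.
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(open_set,
                               (new_g + h(neighbor), new_g,
                                neighbor, path + [neighbor]))
    return None, float('inf')
```

With an admissible heuristic (one that never overestimates the remaining cost), the first time the goal is popped its g value is optimal.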
Compare: BFS vs. A* Search: both can find shortest paths, but BFS explores blindly in all directions while A* uses a heuristic to prioritize promising paths. For weighted graphs or large search spaces, A* dramatically outperforms BFS when you have a good heuristic.
| Concept | Best Examples |
|---|---|
| Unsorted data search | Linear Search |
| Sorted array search | Binary Search, Jump Search |
| Uniformly distributed data | Interpolation Search |
| Unknown dataset size | Exponential Search |
| Unweighted shortest path | BFS |
| Graph traversal/cycle detection | DFS |
| Weighted shortest path with heuristic | A* Search |
| O(log n) complexity | Binary Search, Exponential Search |
Which two search algorithms both require sorted data but use different strategies to estimate target position? How do their best-case complexities compare?
You need to find the shortest path between two nodes in an unweighted graph. Which algorithm guarantees the optimal solution, and what data structure does it use?
Compare DFS and BFS: they share the same time complexity, so what factors should determine which one you choose for a given problem?
A question presents a scenario where you must search an infinitely long sorted stream of data. Which algorithm is designed for this situation, and why does it outperform standard binary search here?
Explain why A* search can outperform BFS on weighted graphs. What property must the heuristic function have to guarantee A* finds the optimal path?