
Graph Theory

Key Shortest Path Algorithms


Why This Matters

Shortest path algorithms form the backbone of countless real-world applications you'll encounter throughout computer science—from GPS navigation and network routing to social network analysis and game AI. When you're tested on these algorithms, you're not just being asked to recall steps; you're being evaluated on your understanding of algorithmic trade-offs, graph properties, and problem-solving strategies. Each algorithm represents a different approach to the same fundamental question: how do we efficiently find optimal paths through a network?

The key to mastering this topic is recognizing when to use which algorithm. You need to understand how graph properties like edge weights, density, and the presence of negative cycles determine which algorithm is appropriate. Don't just memorize time complexities—know what constraints each algorithm handles, what data structures it leverages, and why those choices matter. When an exam question describes a graph scenario, you should immediately know which algorithm fits.


Single-Source Algorithms for Non-Negative Weights

These algorithms find shortest paths from one starting node to all other nodes, but they require that edge weights be non-negative. The greedy approach works here because once a node is finalized, no shorter path can exist.

Dijkstra's Algorithm

  • Greedy selection with a priority queue—always processes the unvisited node with the smallest known distance, guaranteeing optimal paths when weights are non-negative
  • Cannot handle negative edge weights because the greedy assumption breaks down; a "finalized" node might later find a cheaper path through a negative edge
  • Time complexity: O((V + E) log V) with a binary heap, or O(V^2) with a simple array—choose your implementation based on graph density
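
The greedy loop above can be sketched in a few lines. This is a minimal illustration rather than a tuned implementation: it assumes the graph is given as a dict mapping each node to a list of (neighbor, weight) pairs, and it uses Python's heapq as the priority queue with lazy deletion (stale heap entries are skipped instead of decreased in place).

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    pq = [(0, source)]  # (known distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: u was already finalized with a shorter distance
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # relaxation: found a shorter route to v through u
                heapq.heappush(pq, (nd, v))
    return dist
```

For example, on `{'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}` starting at `'a'`, the route a→b→c (cost 3) beats the direct edge a→c (cost 4).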

Breadth-First Search (BFS)

  • Optimal for unweighted graphs only—treats every edge as having weight 1, finding paths with the fewest edges
  • Uses a queue (FIFO order) to explore nodes level by level; the first time you reach a node is guaranteed to be via the shortest path
  • Time complexity: O(V + E)—the most efficient shortest path algorithm when edge weights don't vary
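
The level-by-level exploration above can be sketched as follows, again assuming an adjacency-dict representation (node -> list of neighbors); the key invariant is that a node's distance is fixed the first time it is discovered.

```python
from collections import deque

def bfs_shortest(adj, source):
    """Shortest path lengths (edge counts) in an unweighted graph."""
    dist = {source: 0}
    q = deque([source])  # FIFO queue drives level-by-level exploration
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:  # first visit is guaranteed to be via a shortest path
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```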

Compare: Dijkstra's vs. BFS—both find single-source shortest paths, but BFS only works on unweighted graphs while Dijkstra's handles varying non-negative weights. If an exam gives you an unweighted graph, BFS is simpler and faster; don't overcomplicate it with Dijkstra's.


Algorithms That Handle Negative Edge Weights

When graphs contain negative edge weights, greedy approaches fail. These algorithms use iterative relaxation—repeatedly updating distance estimates until no improvement is possible.

Bellman-Ford Algorithm

  • Handles negative edge weights by relaxing all edges V − 1 times, allowing negative weights to propagate through the graph
  • Detects negative weight cycles—if any edge can still be relaxed after V − 1 iterations, a negative cycle exists (and shortest paths are undefined)
  • Time complexity: O(VE)—slower than Dijkstra's but necessary when negative weights are present
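
A compact sketch of the V − 1 relaxation rounds plus the extra detection pass, assuming the graph arrives as an edge list over nodes numbered 0..n−1:

```python
def bellman_ford(edges, n, source):
    """edges: list of (u, v, w); nodes are 0..n-1.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):  # V-1 rounds of relaxing every edge
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break  # early exit: distances already stable
    for u, v, w in edges:  # one extra pass: any improvement implies a negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist
```

The early exit when a full round changes nothing is a common practical optimization; the worst case remains O(VE).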

Shortest Path Faster Algorithm (SPFA)

  • Queue-based optimization of Bellman-Ford—only re-processes nodes whose distances have actually changed, avoiding redundant relaxations
  • Performs well on sparse graphs in practice but has the same O(VE) worst-case complexity; adversarial inputs can force worst-case behavior
  • Not suitable for competitive programming on untrusted inputs due to worst-case vulnerability, but excellent for many real-world sparse graphs
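
One way the queue-based optimization can be sketched (assuming the same adjacency-dict representation as above, and no negative cycles): only nodes whose distance just improved are re-enqueued, and a membership set prevents duplicate queue entries.

```python
from collections import deque

def spfa(adj, source):
    """Bellman-Ford with a work queue; adj maps node -> list of (neighbor, weight).
    Assumes no negative cycle is reachable from source."""
    dist = {source: 0}
    in_queue = {source}
    q = deque([source])
    while q:
        u = q.popleft()
        in_queue.discard(u)
        for v, w in adj.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                if v not in in_queue:  # only re-process nodes whose distance changed
                    in_queue.add(v)
                    q.append(v)
    return dist
```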

Compare: Bellman-Ford vs. SPFA—both handle negative weights with the same worst-case complexity, but SPFA uses a queue to skip unnecessary work. Choose Bellman-Ford when you need guaranteed performance; use SPFA when average-case speed matters more.


All-Pairs Shortest Path Algorithms

Sometimes you need shortest paths between every pair of nodes, not just from one source. These algorithms compute an entire distance matrix, trading higher complexity for complete information.

Floyd-Warshall Algorithm

  • Dynamic programming over intermediate vertices—iteratively considers whether routing through vertex k improves the path from i to j
  • Simple implementation: triple nested loop—the recurrence dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) is easy to code and debug
  • Time complexity: O(V^3)—practical for dense graphs with up to a few thousand vertices; space complexity is O(V^2)
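
The triple nested loop is short enough to show in full; here the graph is assumed to arrive as an n × n matrix with infinity for missing edges and zeros on the diagonal:

```python
def floyd_warshall(dist):
    """dist: n x n matrix; dist[i][j] = edge weight or float('inf'), dist[i][i] = 0.
    Updates in place and returns all-pairs shortest distances."""
    n = len(dist)
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Note that k must be the outermost loop: each pass k extends all paths to allow intermediates from {0, ..., k}.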

Johnson's Algorithm

  • Designed for sparse graphs with negative weights—reweights edges using Bellman-Ford to eliminate negatives, then runs Dijkstra's from each vertex
  • Reweighting preserves shortest paths by adding a potential function h(v) to each vertex; the transformation is reversible
  • Time complexity: O(V^2 log V + VE)—beats Floyd-Warshall on sparse graphs where E ≪ V^2
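
A condensed sketch of the whole pipeline, assuming an adjacency-dict graph plus an explicit node list. The virtual source used to compute potentials is implemented implicitly by starting Bellman-Ford with all potentials at zero (equivalent to a zero-weight edge from the virtual source to every node); each reweighted edge w + h[u] − h[v] is non-negative, so Dijkstra applies.

```python
import heapq

def johnson(adj, nodes):
    """adj: u -> list of (v, w); negative weights allowed, negative cycles not.
    Returns {source: {node: distance}}, or None if a negative cycle exists."""
    INF = float("inf")
    # 1. Bellman-Ford potentials h (all-zero start simulates the virtual source)
    h = {u: 0 for u in nodes}
    edges = [(u, v, w) for u in nodes for v, w in adj.get(u, [])]
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in edges:
        if h[u] + w < h[v]:
            return None  # negative cycle: shortest paths undefined
    # 2. Dijkstra from each source on the reweighted graph
    all_pairs = {}
    for s in nodes:
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, INF):
                continue
            for v, w in adj.get(u, []):
                nd = d + w + h[u] - h[v]  # reweighted edge, guaranteed >= 0
                if nd < dist.get(v, INF):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        # 3. Undo the reweighting to recover true distances
        all_pairs[s] = {v: d - h[s] + h[v] for v, d in dist.items()}
    return all_pairs
```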

Compare: Floyd-Warshall vs. Johnson's—both compute all-pairs shortest paths, but Floyd-Warshall is simpler with O(V^3) always, while Johnson's is faster on sparse graphs. If an FRQ asks about all-pairs paths on a sparse graph with negative weights, Johnson's is your answer.


Heuristic and Search-Based Approaches

When you have additional information about the graph structure—like geometric positions of nodes—you can guide the search more intelligently. Heuristics trade guaranteed optimality (in some cases) for dramatic speed improvements.

A* Search Algorithm

  • Best-first search with admissible heuristics—evaluates nodes by f(n) = g(n) + h(n), where g(n) is actual cost and h(n) estimates remaining cost
  • Guarantees optimality when heuristic is admissible (never overestimates) and consistent; common heuristics include Euclidean and Manhattan distance
  • Dominates Dijkstra's in practice for point-to-point pathfinding—widely used in games, robotics, and GPS systems
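
A minimal grid-pathfinding sketch of the f = g + h idea, assuming a 2D list where 0 is free and 1 is a wall, with 4-directional moves (which makes Manhattan distance an admissible, consistent heuristic):

```python
import heapq

def astar_grid(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = wall; start/goal: (row, col) tuples.
    Returns the shortest path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: never overestimates remaining steps
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    pq = [(h(start), start)]  # ordered by f = g + h
    while pq:
        f, u = heapq.heappop(pq)
        if u == goal:
            return g[u]  # consistent heuristic: first pop of goal is optimal
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dr, u[1] + dc)
            if 0 <= v[0] < rows and 0 <= v[1] < cols and grid[v[0]][v[1]] == 0:
                ng = g[u] + 1
                if ng < g.get(v, float("inf")):
                    g[v] = ng
                    heapq.heappush(pq, (ng + h(v), v))
    return None
```

Setting h to zero everywhere recovers Dijkstra's algorithm; the heuristic's only job is to pull the search toward the goal.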
Bidirectional Search

  • Searches from both endpoints simultaneously—meets in the middle, dramatically reducing the explored search space
  • Reduces complexity from O(b^d) to approximately O(b^(d/2)), where b is the branching factor and d is the path depth; this is exponentially faster
  • Requires knowing the target node and works best when the graph structure is similar in both directions; often combined with A* or Dijkstra's
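
The meet-in-the-middle idea can be sketched for the unweighted case, assuming an undirected graph as an adjacency dict of neighbor lists. Each side grows one full BFS level at a time (expanding the smaller frontier keeps the search balanced), and the first level at which the frontiers touch yields the shortest distance.

```python
def bidirectional_bfs(adj, source, target):
    """Shortest path length in an undirected, unweighted graph, found by
    growing BFS levels from both ends until the frontiers meet."""
    if source == target:
        return 0
    dist_f, dist_b = {source: 0}, {target: 0}
    front_f, front_b = [source], [target]
    while front_f and front_b:
        # expand whichever frontier is smaller by one full level
        if len(front_f) <= len(front_b):
            front_f = _expand(adj, front_f, dist_f)
            meet = [dist_f[v] + dist_b[v] for v in front_f if v in dist_b]
        else:
            front_b = _expand(adj, front_b, dist_b)
            meet = [dist_f[v] + dist_b[v] for v in front_b if v in dist_f]
        if meet:
            return min(meet)  # frontiers touched: best meeting point wins
    return None  # one side exhausted: source and target are disconnected

def _expand(adj, frontier, dist):
    """Advance a BFS frontier by one level, recording new distances."""
    nxt = []
    for u in frontier:
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                nxt.append(v)
    return nxt
```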

Compare: A* vs. Bidirectional Search—both optimize point-to-point pathfinding, but A* uses domain knowledge (heuristics) while bidirectional search exploits problem structure (known target). In practice, bidirectional A* combines both for maximum efficiency.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Non-negative weights, single source | Dijkstra's, BFS (unweighted only) |
| Negative weights allowed | Bellman-Ford, SPFA, Johnson's |
| Negative cycle detection | Bellman-Ford |
| All-pairs shortest paths | Floyd-Warshall, Johnson's |
| Sparse graph optimization | Johnson's, SPFA, Dijkstra's with adjacency list |
| Dense graph efficiency | Floyd-Warshall, Dijkstra's with array |
| Heuristic-guided search | A*, Bidirectional Search |
| Unweighted graphs | BFS |

Self-Check Questions

  1. You have a sparse graph with some negative edge weights and need all-pairs shortest paths. Which algorithm should you choose, and why is it better than the alternative?

  2. Compare Dijkstra's Algorithm and Bellman-Ford: what graph property determines which one you can use, and what's the trade-off in time complexity?

  3. A game developer needs to find the shortest path between two points on a grid map. Which two algorithms would you recommend they consider, and what information would help them choose between them?

  4. Why does BFS guarantee shortest paths in unweighted graphs but fail on weighted graphs? What data structure change does Dijkstra's make to handle weights?

  5. An FRQ describes a dense graph with 500 vertices and asks for shortest paths between all pairs. Which algorithm would you use, what is its time complexity, and what technique does it employ?