📊 Graph Theory

Key Shortest Path Algorithms

Why This Matters

Shortest path algorithms power countless real-world applications: GPS navigation, network routing, social network analysis, and game AI. When you're tested on these algorithms, you're being evaluated on your understanding of algorithmic trade-offs, graph properties, and problem-solving strategies. Each algorithm represents a different approach to the same fundamental question: how do we efficiently find optimal paths through a network?

The key to mastering this topic is recognizing when to use which algorithm. Graph properties like edge weights, density, and the presence of negative cycles determine which algorithm is appropriate. Don't just memorize time complexities. Know what constraints each algorithm handles, what data structures it leverages, and why those choices matter. When an exam question describes a graph scenario, you should immediately know which algorithm fits.


Single-Source Algorithms for Non-Negative Weights

These algorithms find shortest paths from one starting node to all other nodes, but they require that edge weights be non-negative. The greedy approach works here because once a node is finalized, no shorter path to it can exist.

Dijkstra's Algorithm

  • Greedy selection with a priority queue. It always processes the unvisited node with the smallest known distance, guaranteeing optimal paths when weights are non-negative.
  • Cannot handle negative edge weights because the greedy assumption breaks down. Once a node is "finalized," Dijkstra's never revisits it. But a negative edge could later reveal a cheaper path to that node, making the earlier finalization wrong.
  • Time complexity: O((V + E) log V) with a binary heap, or O(V^2) with a simple array. The binary heap version is better for sparse graphs (few edges), while the array version is better for dense graphs (many edges), since the overhead of heap operations outweighs its benefit when nearly every pair of nodes shares an edge.
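
The greedy loop above can be sketched with Python's heapq module; the sample graph and node names are purely illustrative:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths. graph maps node -> list of (neighbor, weight).
    All weights must be non-negative."""
    dist = {source: 0}
    pq = [(0, source)]  # priority queue of (known distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: u was already finalized with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative weighted directed graph
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
```

Note how the stale-entry check stands in for a decrease-key operation, which Python's heapq does not provide.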

Breadth-First Search (BFS)

  • Optimal for unweighted graphs only. It treats every edge as having weight 1, finding paths with the fewest edges.
  • Uses a queue (FIFO order) to explore nodes level by level. Because all edges cost the same, the first time you reach a node is guaranteed to be via the shortest path.
  • Time complexity: O(V + E). This is the most efficient shortest path algorithm when edge weights don't vary.
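
The level-by-level exploration can be sketched with collections.deque; the sample graph is made up for illustration:

```python
from collections import deque

def bfs_shortest_paths(graph, source):
    """Fewest-edge distances in an unweighted graph. graph maps node -> neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:          # first visit is guaranteed to be shortest
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Illustrative unweighted directed graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```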

Compare: Dijkstra's vs. BFS. Both find single-source shortest paths, but BFS only works on unweighted graphs while Dijkstra's handles varying non-negative weights. If an exam gives you an unweighted graph, BFS is simpler and faster; don't overcomplicate it with Dijkstra's.


Algorithms That Handle Negative Edge Weights

When graphs contain negative edge weights, greedy approaches fail. These algorithms use iterative relaxation: repeatedly updating distance estimates until no improvement is possible.

Bellman-Ford Algorithm

The core idea is straightforward: relax every edge in the graph, and repeat that process V - 1 times. Why V - 1? Because the longest possible shortest path (without cycles) uses at most V - 1 edges. Each iteration guarantees that paths using one more edge are correctly computed.

  • Handles negative edge weights by giving those negative values enough iterations to propagate through the entire graph.
  • Detects negative weight cycles. After V - 1 iterations, all shortest paths should be finalized. If you run one more iteration and any distance still decreases, that means a negative cycle exists, and shortest paths are undefined for nodes reachable from that cycle.
  • Time complexity: O(VE). Slower than Dijkstra's, but necessary when negative weights are present.
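
The V - 1 relaxation rounds plus the extra detection pass might look like this; the edge list is illustrative, with vertices numbered 0 to V - 1:

```python
def bellman_ford(edges, num_vertices, source):
    """edges: list of (u, v, w) tuples. Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax every edge V - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement signals a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return dist, True
    return dist, False

# Illustrative edge list with one negative edge (no negative cycle)
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
```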

Shortest Path Faster Algorithm (SPFA)

  • Queue-based optimization of Bellman-Ford. Instead of blindly relaxing all edges every iteration, SPFA maintains a queue of nodes whose distances recently changed. Only those nodes have their outgoing edges relaxed, skipping redundant work.
  • Performs well on sparse graphs in practice but has the same O(VE) worst-case complexity. Adversarial inputs can force it into worst-case behavior.
  • Best used when average-case performance matters. For guaranteed worst-case bounds, stick with standard Bellman-Ford.
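
A sketch of the queue-based relaxation, assuming no negative cycles (the adjacency list is illustrative):

```python
from collections import deque

def spfa(graph, num_vertices, source):
    """graph[u] = list of (v, w) pairs. Assumes no negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * num_vertices
    in_queue = [False] * num_vertices
    dist[source] = 0
    queue = deque([source])
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if not in_queue[v]:    # only re-queue nodes whose distance changed
                    queue.append(v)
                    in_queue[v] = True
    return dist

# Illustrative adjacency list for vertices 0..3
graph = [[(1, 4), (2, 5)], [(2, -3)], [(3, 2)], []]
```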

Compare: Bellman-Ford vs. SPFA. Both handle negative weights with the same worst-case complexity, but SPFA uses a queue to skip unnecessary work. Choose Bellman-Ford when you need guaranteed performance; use SPFA when average-case speed matters more.


All-Pairs Shortest Path Algorithms

Sometimes you need shortest paths between every pair of nodes, not just from one source. These algorithms compute an entire distance matrix, trading higher complexity for complete information.

Floyd-Warshall Algorithm

This algorithm uses dynamic programming over intermediate vertices. The idea: for each vertex k, check whether routing the path from i to j through k produces a shorter distance. The recurrence is:

dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

You iterate k from 1 to V as the outermost loop, with i and j as the inner loops. The order matters: k must be outermost so that when you consider vertex k as an intermediate, all paths using vertices 1 through k - 1 are already computed.

  • Simple implementation: triple nested loop. Easy to code and debug.
  • Time complexity: O(V^3) regardless of edge count. Space complexity is O(V^2) for the distance matrix.
  • Practical for dense graphs with up to a few thousand vertices.
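
The triple loop and the recurrence above can be sketched directly; the 3x3 matrix is an illustrative example:

```python
def floyd_warshall(dist):
    """dist: V x V matrix, dist[i][j] = direct edge weight (inf if no edge),
    0 on the diagonal. Returns the all-pairs shortest distance matrix."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # k outermost: intermediates 0..k allowed so far
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
# Illustrative directed graph as an adjacency matrix
matrix = [
    [0, 3, INF],
    [INF, 0, 1],
    [4, INF, 0],
]
```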

Johnson's Algorithm

Johnson's is designed for sparse graphs with negative weights. It works in two phases:

  1. Reweight edges using Bellman-Ford. Add a new temporary vertex connected to all other vertices with zero-weight edges. Run Bellman-Ford from this vertex to compute a potential function h(v) for each vertex. Then reweight each edge (u, v) as w'(u, v) = w(u, v) + h(u) - h(v). This makes all edge weights non-negative while preserving which paths are shortest.
  2. Run Dijkstra's from every vertex using the reweighted edges. After collecting results, reverse the reweighting to recover true shortest-path distances.
  • Time complexity: O(V^2 log V + VE). This beats Floyd-Warshall on sparse graphs where E is much smaller than V^2.
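
Both phases can be sketched together; this is a compact illustration under the assumption of no negative cycles, not a production implementation, and the small graph is made up:

```python
import heapq

INF = float("inf")

def johnson(graph, n):
    """All-pairs shortest paths for a sparse graph that may contain negative
    edges (but no negative cycles). graph[u] = list of (v, w), vertices 0..n-1."""
    # Phase 1: Bellman-Ford from a temporary vertex n, linked to every vertex
    # by a zero-weight edge, yields the potential h(v).
    edges = [(u, v, w) for u in range(n) for v, w in graph[u]]
    edges += [(n, v, 0) for v in range(n)]
    h = [INF] * (n + 1)
    h[n] = 0
    for _ in range(n):                  # (n + 1) - 1 relaxation rounds
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    # Phase 2: Dijkstra from every source using w'(u, v) = w + h(u) - h(v) >= 0.
    dist = [[INF] * n for _ in range(n)]
    for s in range(n):
        d = [INF] * n
        d[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue                # stale queue entry
            for v, w in graph[u]:
                nd = du + w + h[u] - h[v]
                if nd < d[v]:
                    d[v] = nd
                    heapq.heappush(pq, (nd, v))
        # Undo the reweighting to recover true distances.
        for v in range(n):
            if d[v] < INF:
                dist[s][v] = d[v] - h[s] + h[v]
    return dist

# Illustrative sparse graph with a negative edge
graph = [[(1, -2)], [(2, 3)], []]
```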

Compare: Floyd-Warshall vs. Johnson's. Both compute all-pairs shortest paths, but Floyd-Warshall is simpler with O(V^3) always, while Johnson's is faster on sparse graphs. If a question asks about all-pairs paths on a sparse graph with negative weights, Johnson's is your answer.


Heuristic and Search-Based Approaches

When you have additional information about the graph structure (like geometric positions of nodes), you can guide the search more intelligently. Heuristics can dramatically reduce the number of nodes explored.

A* Search Algorithm

A* is a best-first search that evaluates nodes using:

f(n) = g(n) + h(n)

where g(n) is the actual cost from the start to n, and h(n) is a heuristic estimate of the remaining cost from n to the goal.

  • Guarantees optimality when the heuristic is admissible (never overestimates the true remaining cost) and consistent (satisfies the triangle inequality). Common heuristics include Euclidean distance and Manhattan distance.
  • Outperforms Dijkstra's for point-to-point pathfinding because the heuristic steers the search toward the goal instead of expanding outward in all directions. Widely used in games, robotics, and GPS systems.
  • If h(n) = 0 for all nodes, A* reduces to Dijkstra's. So you can think of Dijkstra's as A* with no heuristic guidance.

Bidirectional Search

  • Searches from both the start and the target simultaneously, meeting in the middle. This dramatically reduces the explored search space.
  • Reduces complexity from O(b^d) to approximately O(b^(d/2)), where b is the branching factor and d is the path depth. That's an exponential improvement.
  • Requires knowing the target node and works best when the graph structure is similar in both directions. Often combined with A* or Dijkstra's for even better performance.
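
As a concrete sketch, here is A* with a Manhattan-distance heuristic on a small 4-connected grid; the grid layout and helper names are illustrative:

```python
import heapq

def astar_grid(grid, start, goal):
    """Shortest 4-directional path cost on a grid of 0 (free) / 1 (wall) cells,
    using Manhattan distance as the heuristic. Returns the cost, or None if
    the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance never overestimates on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    g = {start: 0}
    pq = [(h(start), start)]          # ordered by f(n) = g(n) + h(n)
    while pq:
        f, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return g[(r, c)]
        if f > g[(r, c)] + h((r, c)):
            continue                  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[(r, c)] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + h((nr, nc)), (nr, nc)))
    return None

# Illustrative map: a wall forces a detour around row 1
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
```

Because Manhattan distance is admissible and consistent here, the first time the goal is popped its cost is optimal.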

Compare: A* vs. Bidirectional Search. Both optimize point-to-point pathfinding, but A* uses domain knowledge (heuristics) while bidirectional search exploits problem structure (known target). In practice, bidirectional A* combines both for maximum efficiency.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Non-negative weights, single source | Dijkstra's, BFS (unweighted only) |
| Negative weights allowed | Bellman-Ford, SPFA, Johnson's |
| Negative cycle detection | Bellman-Ford |
| All-pairs shortest paths | Floyd-Warshall, Johnson's |
| Sparse graph optimization | Johnson's, SPFA, Dijkstra's with adjacency list |
| Dense graph efficiency | Floyd-Warshall, Dijkstra's with array |
| Heuristic-guided search | A*, Bidirectional Search |
| Unweighted graphs | BFS |

Self-Check Questions

  1. You have a sparse graph with some negative edge weights and need all-pairs shortest paths. Which algorithm should you choose, and why is it better than the alternative?

  2. Compare Dijkstra's Algorithm and Bellman-Ford: what graph property determines which one you can use, and what's the trade-off in time complexity?

  3. A game developer needs to find the shortest path between two points on a grid map. Which two algorithms would you recommend they consider, and what information would help them choose between them?

  4. Why does BFS guarantee shortest paths in unweighted graphs but fail on weighted graphs? What data structure change does Dijkstra's make to handle weights?

  5. A question describes a dense graph with 500 vertices and asks for shortest paths between all pairs. Which algorithm would you use, what is its time complexity, and what technique does it employ?