

Key Graph Theory Algorithms


Why This Matters

Graph theory algorithms are the backbone of computational problem-solving, and you're being tested on more than just memorizing steps—you need to understand when and why each algorithm works. These algorithms demonstrate fundamental principles like greedy optimization, dynamic programming, traversal strategies, and complexity trade-offs. Whether you're finding the shortest route, building efficient networks, or analyzing system dependencies, the algorithm you choose depends entirely on the problem's constraints.

Don't just memorize pseudocode. Know what problem type each algorithm solves, what data structures it relies on, and what limitations determine when it fails. Exam questions will test your ability to select the right tool for a given scenario and explain the underlying mechanism that makes it work.


Graph Traversal Algorithms

These foundational algorithms systematically visit every node in a graph. The key difference lies in the order of exploration—breadth-first expands outward level by level, while depth-first dives deep before backtracking.

Breadth-First Search (BFS)

  • Queue-based traversal—explores all neighbors at the current depth before moving to the next level, guaranteeing level-order visitation
  • Shortest path in unweighted graphs—since BFS visits nodes in order of their distance from the source, the first path found is always the shortest
  • Connected component detection—running BFS from each unvisited node identifies all separate components in O(V + E) time
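
The queue-based, level-by-level behavior can be sketched as follows; this is a minimal illustration (function and variable names are my own, not from any particular library):

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Return the distance from source to every reachable vertex (unweighted)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit happens along a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

Because the queue processes vertices in order of discovery, each vertex is first reached at its minimum distance, which is why the first path BFS finds is always shortest in an unweighted graph.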

Depth-First Search (DFS)

  • Stack-based traversal—explores as far as possible along each branch before backtracking, implemented via recursion or an explicit stack
  • Cycle detection and topological sorting—tracking discovery and finish times enables detecting back edges (cycles) and ordering vertices in DAGs
  • Maze and puzzle solving—the backtracking nature makes DFS ideal for exploring all possible paths in constraint satisfaction problems

Compare: BFS vs. DFS—both achieve O(V + E) traversal, but BFS guarantees shortest paths in unweighted graphs while DFS excels at detecting cycles and performing topological sorts. If an exam asks about finding the shortest route with equal edge weights, BFS is your answer.


Single-Source Shortest Path Algorithms

These algorithms compute optimal paths from one starting node to all others. The choice between them depends entirely on whether your graph contains negative edge weights.

Dijkstra's Shortest Path Algorithm

  • Priority queue optimization—greedily selects the unvisited node with smallest tentative distance, achieving O((V + E) log V) with a binary heap
  • Non-negative weights only—the greedy approach fails when negative edges exist because a "finalized" node's distance might later decrease
  • Real-world routing applications—powers GPS navigation and network routing protocols where distances/costs are always positive
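
The greedy selection with a binary heap can be sketched like this (a minimal version using Python's `heapq`; the "lazy deletion" of stale heap entries is one common implementation choice):

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source; adj[u] is a list of (v, weight) with weight >= 0."""
    dist = {source: 0}
    pq = [(0, source)]                           # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):        # stale entry for an already-finalized node
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):   # relaxation step
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The non-negative-weight requirement is visible in the pop step: once a vertex comes off the heap with the smallest tentative distance, no later path can undercut it—unless a negative edge exists.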

Bellman-Ford Algorithm

  • Edge relaxation over V - 1 iterations—systematically updates distances by checking every edge, allowing negative weights to propagate correctly
  • Negative cycle detection—if any distance decreases on the V-th iteration, a negative cycle exists, making shortest paths undefined
  • Financial and economic modeling—handles scenarios like currency arbitrage where "negative costs" (profits) are meaningful
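
Both bullets above—the V - 1 relaxation rounds and the extra V-th pass for cycle detection—fit in a short sketch (illustrative names, not a library API):

```python
def bellman_ford(edges, vertices, source):
    """edges: list of (u, v, w) triples. Returns dist dict; raises on a negative cycle."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):        # V - 1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                     # V-th pass: any improvement means a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist
```

V - 1 rounds suffice because any shortest path uses at most V - 1 edges, so each round correctly fixes at least one more edge of every shortest path.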

Compare: Dijkstra's vs. Bellman-Ford—Dijkstra's runs faster at O((V + E) log V) but breaks with negative edges; Bellman-Ford handles negative weights at O(VE) cost. For FRQs, always check the edge weight constraints before selecting an algorithm.


All-Pairs Shortest Path

When you need shortest paths between every pair of nodes, specialized algorithms outperform running single-source algorithms repeatedly.

Floyd-Warshall Algorithm

  • Dynamic programming on intermediate vertices—builds solutions by considering whether paths through vertex k improve existing distances: d[i][j] = min(d[i][j], d[i][k] + d[k][j])
  • O(V^3) time complexity—practical only for smaller graphs (typically V < 500), but elegantly simple to implement
  • Handles negative weights, not negative cycles—works with negative edges but produces meaningless results if negative cycles exist
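
The triple loop directly mirrors the recurrence d[i][j] = min(d[i][j], d[i][k] + d[k][j]); here is a minimal sketch (vertices numbered 0..n-1 for simplicity):

```python
def floyd_warshall(n, edges):
    """n vertices (0..n-1); edges: list of (u, v, w). Returns n x n distance matrix."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)             # keep the cheapest parallel edge
    for k in range(n):                        # allow paths through vertices 0..k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The k loop must be outermost: after iteration k, d[i][j] is the shortest path using only intermediate vertices from {0, ..., k}, which is exactly the DP invariant.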

Compare: Floyd-Warshall vs. running Dijkstra's V times—Floyd-Warshall's O(V^3) beats repeated Dijkstra's on dense graphs, but for sparse graphs with non-negative weights, V runs of Dijkstra's at O(V(V + E) log V) may win.


Minimum Spanning Tree Algorithms

These algorithms find the subset of edges connecting all vertices with minimum total weight. Both Kruskal's and Prim's produce optimal MSTs but use fundamentally different strategies—edge-centric vs. vertex-centric growth.

Kruskal's Algorithm

  • Greedy edge selection—sorts all edges by weight and adds them in order, skipping any edge that would create a cycle
  • Union-Find data structure—uses disjoint sets with path compression and union by rank to achieve near-constant cycle detection per edge
  • Optimal for sparse graphs—time complexity of O(E log E) depends primarily on edge sorting, making it efficient when E is small relative to V^2
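
The sort-then-union structure can be sketched as follows; this minimal version uses path compression in the Union-Find (union by rank is omitted for brevity):

```python
def kruskal(n, edges):
    """edges: list of (w, u, v) on vertices 0..n-1. Returns total MST weight."""
    parent = list(range(n))                  # Union-Find forest

    def find(x):                             # path compression (halving)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):            # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                         # different components: no cycle, take it
            parent[ru] = rv
            total += w
    return total
```

An edge is skipped exactly when its endpoints already share a root, which is the near-constant-time cycle check the Union-Find provides.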

Prim's Algorithm

  • Greedy vertex expansion—grows the MST from a starting node by repeatedly adding the minimum-weight edge connecting the tree to a new vertex
  • Priority queue implementation—achieves O((V + E) log V) with a binary heap, or O(E + V log V) with a Fibonacci heap
  • Optimal for dense graphs—outperforms Kruskal's when the graph has many edges because it avoids sorting all of them
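
Prim's vertex-centric growth with a binary heap can be sketched like this (an undirected adjacency-list representation is assumed; names are illustrative):

```python
import heapq

def prim(adj, start):
    """adj[u]: list of (v, w) pairs for an undirected graph. Returns total MST weight."""
    visited = {start}
    pq = [(w, v) for v, w in adj[start]]     # edges leaving the one-vertex tree
    heapq.heapify(pq)
    total = 0
    while pq and len(visited) < len(adj):
        w, v = heapq.heappop(pq)             # lightest edge crossing the tree boundary
        if v in visited:
            continue                         # both endpoints already in the tree
        visited.add(v)
        total += w
        for x, wx in adj[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, x))
    return total
```

Note the contrast with Kruskal's: no global edge sort happens—the heap only ever holds edges incident to the growing tree, which is why Prim's tends to win on dense graphs.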

Compare: Kruskal's vs. Prim's—both guarantee the same minimum total weight, but Kruskal's processes edges globally (better for sparse graphs) while Prim's grows locally from a vertex (better for dense graphs). Know which to recommend based on graph density.


Directed Graph Analysis

These algorithms reveal the structure and dependencies within directed graphs. Understanding vertex ordering and component decomposition is essential for scheduling, compilation, and system analysis.

Topological Sorting

  • Linear ordering of DAG vertices—arranges vertices so that for every directed edge u → v, vertex u appears before v in the ordering
  • Two implementation approaches—DFS-based (reverse post-order) or Kahn's algorithm (BFS-based, repeatedly removing zero-indegree vertices)
  • Dependency resolution applications—essential for build systems, course prerequisites, and task scheduling where order constraints must be satisfied
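
Kahn's BFS-based approach—repeatedly removing zero-indegree vertices—can be sketched as follows (illustrative names; a `ValueError` on cycles is one common design choice):

```python
from collections import deque

def kahn_topo_sort(adj):
    """Kahn's algorithm: repeatedly remove zero-indegree vertices from a DAG."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in adj if indeg[u] == 0)   # vertices with no prerequisites
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:                        # all prerequisites satisfied
                queue.append(v)
    if len(order) != len(adj):                       # leftover vertices lie on a cycle
        raise ValueError("graph is not a DAG")
    return order
```

The length check doubles as cycle detection: vertices on a cycle never reach indegree zero, so they never enter the ordering.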

Strongly Connected Components (Kosaraju's Algorithm)

  • Two-pass DFS strategy—first DFS records finish times, second DFS on the transposed graph (reversed edges) processes vertices in decreasing finish order
  • Maximal strongly connected subgraphs—within each SCC, every vertex can reach every other vertex; between SCCs, connections are one-way only
  • O(V + E) efficiency—linear time makes it practical for analyzing large directed graphs like web link structures or call graphs
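
The two-pass structure can be sketched as follows; pass 1 records finish order, pass 2 explores the transposed graph in decreasing finish time (the second pass is iterative here purely to keep the sketch simple):

```python
def kosaraju_scc(adj):
    """Return the strongly connected components via Kosaraju's two-pass DFS."""
    order, seen = [], set()

    def dfs1(u):                              # pass 1: record finish order
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)

    for u in adj:
        if u not in seen:
            dfs1(u)

    rev = {u: [] for u in adj}                # transpose: reverse every edge
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)

    sccs, assigned = [], set()
    for u in reversed(order):                 # pass 2: decreasing finish time
        if u not in assigned:
            stack, comp = [u], []
            assigned.add(u)
            while stack:
                x = stack.pop()
                comp.append(x)
                for y in rev[x]:
                    if y not in assigned:
                        assigned.add(y)
                        stack.append(y)
            sccs.append(comp)                 # everything reached in rev is one SCC
    return sccs
```

Reversing the edges is the key trick: from the latest-finishing vertex, the transposed graph can only reach vertices in the same SCC, so each second-pass DFS carves out exactly one component.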

Compare: Topological Sort vs. SCC detection—topological sorting requires a DAG (no cycles), while SCC algorithms work on any directed graph and effectively collapse cycles into single nodes. If a graph has cycles, find SCCs first, then topologically sort the resulting DAG of components.


Network Flow Algorithms

Flow algorithms model capacity-constrained networks where resources move from source to sink. The key insight is that maximum flow equals minimum cut—the bottleneck limiting throughput.

Maximum Flow (Ford-Fulkerson Algorithm)

  • Augmenting path method—repeatedly finds paths from source to sink with available capacity, pushing flow until no more paths exist
  • Residual graph representation—tracks remaining capacity and allows "undoing" flow decisions by maintaining backward edges
  • BFS variant (Edmonds-Karp) guarantees O(VE^2)—using BFS to find shortest augmenting paths ensures polynomial time; with plain DFS the number of augmenting steps can grow with the maximum flow value, and termination is not even guaranteed when capacities are irrational
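
The augmenting-path loop with a residual graph can be sketched as the Edmonds-Karp variant below (capacities as a dict-of-dicts; names are illustrative, not a library API):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow from s to t. cap[u][v] is the capacity of edge u -> v."""
    # Residual graph: copy forward capacities and add zero-capacity backward edges.
    res = {u: dict(cs) for u, cs in cap.items()}
    for u, cs in cap.items():
        for v in cs:
            res.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        parent = {s: None}                    # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                   # no augmenting path remains: done
            return flow
        v, bottleneck = t, float("inf")       # find the path's bottleneck capacity
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, res[u][v])
            v = u
        v = t
        while parent[v] is not None:          # push flow and update residuals
            u = parent[v]
            res[u][v] -= bottleneck           # forward residual shrinks
            res[v][u] += bottleneck           # backward edge lets flow be undone later
            v = u
        flow += bottleneck
```

The `res[v][u] += bottleneck` line is the "undo" mechanism from the bullet above: a later augmenting path can route flow backward along a saturated edge, effectively rerouting an earlier decision.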

Compare: Ford-Fulkerson vs. specialized flow algorithms—Ford-Fulkerson is foundational but can be slow; Dinic's algorithm achieves O(V^2 E) and push-relabel methods offer better practical performance. Know Ford-Fulkerson's mechanism, but recognize its limitations on exams.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Unweighted shortest path | BFS |
| Weighted shortest path (non-negative) | Dijkstra's |
| Shortest path with negative weights | Bellman-Ford |
| All-pairs shortest path | Floyd-Warshall |
| Minimum spanning tree (sparse graphs) | Kruskal's |
| Minimum spanning tree (dense graphs) | Prim's |
| Dependency ordering | Topological Sort |
| Cycle detection in directed graphs | DFS, Bellman-Ford, SCC |
| Network capacity optimization | Ford-Fulkerson |

Self-Check Questions

  1. You need to find the shortest path in a graph where some edges have negative weights but no negative cycles exist. Which algorithm should you use, and why would Dijkstra's fail here?

  2. Compare Kruskal's and Prim's algorithms: what data structure does each rely on, and how does graph density affect which one you'd choose?

  3. A directed graph contains cycles. Can you perform a topological sort on it? If not, what algorithm would you run first to make topological sorting possible?

  4. Both BFS and Dijkstra's algorithm can find shortest paths. Under what specific graph condition do they produce identical results, and what distinguishes their general use cases?

  5. FRQ-style: Given a flow network, explain how the Ford-Fulkerson algorithm uses residual graphs to find maximum flow. Why is the ability to "push flow backward" essential to reaching the optimal solution?