Why This Matters
Time complexity classes are the foundation of computational complexity theory—they define the boundaries of what computers can and cannot do efficiently. When you're tested on this material, you're not just being asked to recall definitions. You're being asked to understand the hierarchy of computational difficulty, how classes relate to each other, and what the major open questions (like P vs. NP, L vs. NL, and P vs. PSPACE) tell us about the nature of computation itself.
Think of complexity classes as a map of computational terrain. Some problems live in the flat, easily traversable lowlands (P, L), while others sit on increasingly steep mountains (NP, PSPACE, EXPTIME). The key insight is that these classes are defined by resources—time, space, determinism, and parallelism. Don't just memorize which problems belong where; understand what resource constraints define each class and how classes nest inside one another.
Efficiently Solvable: The Tractable Classes
These classes represent problems we consider "tractable"—solvable with reasonable resources. The key distinction here is between deterministic and nondeterministic computation, and between time and space constraints.
P (Polynomial Time)
- Deterministic polynomial-time solvability—problems in P can be both solved and verified efficiently, making this the gold standard for tractability
- Closed under complement—if a problem is in P, so is its complement, a property not known to hold for NP
- Canonical examples include sorting (O(n log n)), shortest-path algorithms, and linear programming
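To make membership in P concrete, here is a minimal sketch of breadth-first search computing shortest paths in an unweighted directed graph in O(V + E) time (the function name and adjacency-list encoding are illustrative choices, not from the source):

```python
from collections import deque

def shortest_path_length(graph, source, target):
    """Breadth-first search: O(V + E) time, comfortably inside P."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None  # no path from source to target

# Example: a small directed graph as an adjacency list
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```

The point for exams is not the algorithm itself but the resource bound: the running time is a polynomial in the input size, which is exactly the defining constraint of P.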
L (Logarithmic Space)
- O(log n) space on a deterministic machine—only enough memory to store a constant number of pointers into the input
- Subset of P—every problem solvable in log space is also solvable in polynomial time, because a log-space machine has only polynomially many distinct configurations
- Examples include undirected connectivity (proven in L by Reingold) and certain string matching problems
NC (Nick's Class)
- Polylogarithmic time with polynomial processors—captures problems efficiently parallelizable, formally O(log^k n) time with O(n^c) processors
- Subset of P—NC ⊆ P, but whether NC = P is open; equivalently, whether every polynomial-time problem admits an efficient parallel algorithm
- Key for parallel algorithms—matrix multiplication, sorting networks, and graph connectivity all have NC algorithms
Compare: P vs. NC—both are tractable, but NC adds the constraint of efficient parallelization. If an exam asks about parallel complexity, NC is your go-to class. The open question NC =? P asks whether every efficient sequential algorithm can be efficiently parallelized.
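One way to visualize an NC computation is a balanced reduction tree: summing n numbers takes O(log n) parallel rounds when each round's pairwise additions run simultaneously. A sequential simulation of that tree, counting rounds (function name is illustrative):

```python
def parallel_sum(values):
    """Simulate an NC-style reduction: sum n numbers in O(log n)
    rounds of pairwise additions. Within a round, every addition is
    independent, so n/2 processors could do the round in one step."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # All pairs in this round could be added simultaneously.
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)] \
               + ([vals[-1]] if len(vals) % 2 else [])
        rounds += 1
    return vals[0], rounds
```

The round count grows as ⌈log₂ n⌉ while total work stays polynomial, which is exactly the NC resource profile of polylog depth with polynomially many processors.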
The defining feature of NP is efficient verification, not efficient solution. A nondeterministic Turing machine can "guess" a solution and verify it in polynomial time—but whether deterministic machines can do the same is the P vs. NP question.
NP (Nondeterministic Polynomial Time)
- Polynomial-time verifiable—given a certificate (witness), a deterministic machine can check correctness in polynomial time
- Contains P—every problem in P is also in NP, since solving implies verifying; the question is whether P⊊NP
- NP-complete problems like SAT, Traveling Salesman (decision version), and Knapsack are the "hardest" in NP—solve one efficiently, solve them all
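The certificate view of NP is easy to make concrete for SAT: checking a proposed assignment against a CNF formula is a single polynomial-time scan. A sketch, assuming the common encoding of literals as signed integers (negative means negated):

```python
def verify_sat(clauses, assignment):
    """Polynomial-time certificate check for CNF-SAT.
    clauses: list of clauses, each a list of signed ints (literals).
    assignment: dict mapping variable number -> bool (the certificate).
    A formula is satisfied iff every clause has some true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
phi = [[1, -2], [2, 3]]
```

Finding a satisfying assignment may require searching exponentially many candidates, but verifying one given certificate is linear in the formula size, which is precisely what puts SAT in NP.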
co-NP (Complement of NP)
- Complements verifiable in polynomial time—for co-NP problems, you can efficiently verify that an instance is a "no" instance
- Tautology and unsatisfiability are canonical examples—proving a formula is always true or never satisfiable
- NP ∩ co-NP contains problems verifiable in both directions; if NP ≠ co-NP, then P ≠ NP (since P is closed under complement, P = NP would force NP = co-NP)
NL (Nondeterministic Logarithmic Space)
- Nondeterministic log-space computation—equivalently, a log-space verifier that reads a polynomial-length certificate once, left to right, using only O(log n) workspace
- Directed st-connectivity is NL-complete—determining if a path exists between two nodes in a directed graph
- NL ⊆ P is known, but whether L = NL remains open; Immerman-Szelepcsényi showed NL = co-NL
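The certificate view works for NL too, with one twist: the verifier reads the certificate once, left to right, remembering only O(log n) bits (here, the current vertex and a step counter). A sketch for directed st-connectivity, with an assumed edge-set encoding and illustrative names:

```python
def verify_path(edges, s, t, path):
    """Read-once verifier for directed st-connectivity.
    edges: set of (u, v) pairs; path: the certificate, a sequence of
    vertices after s. Workspace is just the current vertex and a step
    counter -- O(log n) bits -- mirroring an NL verifier."""
    vertices = {u for u, v in edges} | {v for u, v in edges}
    n = len(vertices)
    current, steps = s, 0
    for nxt in path:
        if (current, nxt) not in edges or steps >= n:
            return False        # bad edge, or certificate too long
        current = nxt
        steps += 1
    return current == t
```

The read-once restriction matters: if the verifier could re-read an arbitrary certificate with log-space memory, the class characterized would be NP rather than NL.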
Compare: NP vs. co-NP—NP verifies "yes" answers efficiently; co-NP verifies "no" answers efficiently. If you can do both for a problem, it's in NP ∩ co-NP. For FRQs on open problems, the NP vs. co-NP question is closely tied to P vs. NP.
Space-Bounded Complexity
Space complexity classes focus on memory usage rather than time. The key insight is that space can be reused, so space-bounded classes often contain their time-bounded counterparts.
PSPACE (Polynomial Space)
- Polynomial memory, unlimited time—can run for exponential time as long as memory stays polynomial; formally, PSPACE = ⋃_k SPACE(n^k)
- Contains P, NP, and co-NP—the hierarchy is P⊆NP⊆PSPACE, but whether any containment is strict is unknown
- PSPACE-complete problems include QBF (Quantified Boolean Formula) and two-player games of polynomially bounded length, such as generalized geography and Hex
Compare: NP vs. PSPACE—NP problems have polynomial-length certificates; PSPACE problems may require exploring exponentially many configurations but can reuse space. QBF is PSPACE-complete because quantifier alternation captures the alternating moves of a two-player game, and the whole formula can be evaluated by a recursion whose depth—and hence space—is only polynomial.
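A recursive evaluator makes the "reuse space" point concrete: evaluating a QBF may branch exponentially in time, but the recursion depth, and therefore the space (each stack frame is reused on return), is linear in the number of variables. A sketch with an assumed encoding: a list of (quantifier, variable) pairs plus CNF clauses as signed integers:

```python
def eval_qbf(quantifiers, clauses, assignment=None):
    """Evaluate a quantified Boolean formula by recursion.
    Time can be exponential (two branches per variable), but the
    recursion depth -- hence space -- is linear in the number of
    variables, which is why QBF evaluation sits inside PSPACE."""
    assignment = dict(assignment or {})
    if not quantifiers:
        # Base case: all variables bound; check the CNF matrix.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    q, var = quantifiers[0]
    branches = (eval_qbf(quantifiers[1:], clauses, {**assignment, var: b})
                for b in (False, True))
    return all(branches) if q == "forall" else any(branches)

# forall x1 exists x2 : (x1 OR x2) AND (NOT x1 OR NOT x2)
# True: for either x1, pick x2 = NOT x1.
psi = ([("forall", 1), ("exists", 2)], [[1, 2], [-1, -2]])
```

The ∀/∃ alternation is exactly the two-player structure the Compare note mentions: the "forall" player tries to falsify, the "exists" player tries to satisfy.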
Probabilistic Computation
Randomized algorithms introduce a new resource: random bits. These classes capture what's efficiently solvable when we allow bounded probability of error.
BPP (Bounded-error Probabilistic Polynomial Time)
- Polynomial time with error probability at most 1/3—the constant 1/3 is arbitrary, since majority voting over repeated independent runs drives the error below 2^(-n)
- Believed equal to P—no natural problem is known in BPP but not P; derandomization conjectures suggest BPP = P
- Practical importance—primality testing was known to be in BPP (Miller–Rabin, which in fact places it in co-RP) long before AKS proved it in P; Monte Carlo algorithms live here
Compare: P vs. BPP—P is deterministic; BPP allows randomness with bounded error. The conjecture BPP = P suggests randomness doesn't add computational power, but proving this requires circuit lower bounds we don't yet have.
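Miller–Rabin makes the amplification story concrete: a single round errs on a composite with probability at most 1/4, so k independent rounds push the error below 4⁻ᵏ. A standard sketch (the round count default is my choice):

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin primality test. Each round errs on a composite
    with probability at most 1/4, so `rounds` independent trials
    bound the error by 4**-rounds -- the amplification behind BPP."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a witnesses that n is composite
    return True                 # probably prime
```

Note the one-sided flavor: a "composite" answer is always correct, while "prime" carries the (amplified-away) error, which is why Miller–Rabin lands in co-RP ⊆ BPP.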
Intractable Classes: Exponential Time
These classes represent problems believed to require exponential resources. The jump from polynomial to exponential time represents a fundamental barrier in tractability.
EXPTIME (Exponential Time)
- Deterministic 2^(n^O(1)) time—problems requiring brute-force search over exponentially many possibilities
- Strictly contains P—by the time hierarchy theorem, P⊊EXPTIME is proven, not conjectured
- EXPTIME-complete problems include generalized chess, checkers, and Go on n×n boards
NEXPTIME (Nondeterministic Exponential Time)
- Nondeterministic exponential time—can guess and verify certificates of exponential length in exponential time
- Contains EXPTIME—and believed to be strictly larger, but whether EXPTIME = NEXPTIME remains open
- Succinctly represented problems—many NEXPTIME-complete problems involve compressed or implicit inputs
Compare: EXPTIME vs. NEXPTIME—mirrors the P vs. NP question at exponential scale. The time hierarchy theorem proves P ⊊ EXPTIME, but we cannot yet prove EXPTIME ⊊ NEXPTIME; in fact, a padding argument shows that P = NP would imply EXPTIME = NEXPTIME, so separating the exponential classes would separate P from NP.
Quick Reference Table
| Theme | Classes |
| --- | --- |
| Deterministic tractability | P, L, NC |
| Nondeterministic verification | NP, co-NP, NL |
| Space-bounded computation | L, NL, PSPACE |
| Probabilistic computation | BPP |
| Exponential intractability | EXPTIME, NEXPTIME |
| Parallel computation | NC |
| Proven results | L ⊊ PSPACE, P ⊊ EXPTIME, NL = co-NL |
| Major open questions | P vs. NP, L vs. NL, NP vs. co-NP, BPP vs. P |
Self-Check Questions
- Which two complexity classes are defined by space constraints rather than time, and how do they relate to P?
- Compare and contrast NP and co-NP: what does each class verify efficiently, and why does NP ≠ co-NP imply P ≠ NP?
- The time hierarchy theorem proves P ⊊ EXPTIME. Why can't we use similar diagonalization techniques to prove P ≠ NP?
- If a problem is in both NP and co-NP, what can you conclude about its verification properties? Give an example of such a problem.
- An FRQ asks you to explain why BPP is considered "practically equivalent" to P despite being defined with randomness. What evidence and conjectures would you cite?