🖥️ Computational Complexity Theory

Key Time Complexity Classes


Why This Matters

Time complexity classes are the foundation of computational complexity theory—they define the boundaries of what computers can and cannot do efficiently. When you're tested on this material, you're not just being asked to recall definitions. You're being asked to understand the hierarchy of computational difficulty, how classes relate to each other, and what the major open questions (like P vs. NP, L vs. NL, and P vs. PSPACE) tell us about the nature of computation itself.

Think of complexity classes as a map of computational terrain. Some problems live in the flat, easily traversable lowlands (P, L), while others sit on increasingly steep mountains (NP, PSPACE, EXPTIME). The key insight is that these classes are defined by resources—time, space, determinism, and parallelism. Don't just memorize which problems belong where; understand what resource constraints define each class and how classes nest inside one another.


Efficiently Solvable: The Tractable Classes

These classes represent problems we consider "tractable"—solvable with reasonable resources. The key distinction here is between deterministic and nondeterministic computation, and between time and space constraints.

P (Polynomial Time)

  • Deterministic polynomial-time solvability—problems in P can be both solved and verified efficiently, making this the gold standard for tractability
  • Closed under complement—if a problem is in P, so is its complement, which is not guaranteed for NP
  • Canonical examples include sorting ($O(n \log n)$), shortest path algorithms, and linear programming
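Shortest paths are a good concrete anchor for P. The sketch below is a minimal Dijkstra implementation; the `graph` adjacency-list format (node → list of `(neighbor, weight)` pairs) is an assumption of this example, not part of the text above.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths in O((V + E) log V) time -- squarely in P.

    graph: dict mapping node -> list of (neighbor, weight) pairs
    (an assumed adjacency-list format for this sketch).
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The loop runs a polynomial number of steps in the input size, which is exactly what membership in P requires.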

L (Logarithmic Space)

  • $O(\log n)$ space on a deterministic machine—only enough memory to store a constant number of pointers into the input
  • Subset of P—a log-space machine has only polynomially many distinct configurations, so it halts within polynomial time, giving $L \subseteq P$
  • Examples include undirected connectivity (proven in L by Reingold) and certain string matching problems
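A toy illustration of the "constant number of pointers" idea: deciding whether a string is a palindrome needs only two indices into the read-only input, and an index into an $n$-character string takes $O(\log n)$ bits. This is a sketch of the log-space style of computation, not one of the specific problems named above.

```python
def is_palindrome_logspace(s):
    """Decide PALINDROME using only two indices into the (read-only) input.

    The work memory is just the pair (i, j): O(log n) bits total,
    mirroring how a log-space Turing machine treats its input tape
    as read-only and keeps only pointers on its work tape.
    """
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True
```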

NC (Nick's Class)

  • Polylogarithmic time with polynomially many processors—captures the efficiently parallelizable problems, formally $O(\log^k n)$ time with $O(n^c)$ processors
  • Subset of P—$NC \subseteq P$, but whether NC = P is open, i.e., whether every polynomial-time problem admits an efficient parallel algorithm
  • Key for parallel algorithms—matrix multiplication, sorting networks, and graph connectivity all have NC algorithms
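The flavor of NC is logarithmic *depth*: a reduction tree combines disjoint pairs in each round, so with enough processors the parallel time is $O(\log n)$. The sketch below only simulates the rounds sequentially—the round counter stands in for parallel time.

```python
def parallel_sum_rounds(values):
    """Simulate an NC-style reduction: sum n numbers in O(log n) parallel rounds.

    Within each round, all pairwise additions are independent, so on a
    real parallel machine they would run simultaneously; the number of
    rounds (the depth) is what corresponds to parallel time.
    """
    rounds = 0
    vals = list(values)
    while len(vals) > 1:
        # combine disjoint pairs; a leftover odd element passes through
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds
```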

Compare: P vs. NC—both are tractable, but NC adds the constraint of efficient parallelization. If an exam asks about parallel complexity, NC is your go-to class. The open question $NC \stackrel{?}{=} P$ asks whether every efficient sequential algorithm can be efficiently parallelized.


Efficiently Verifiable: The Nondeterministic Classes

The defining feature of NP is efficient verification, not efficient solution. A nondeterministic Turing machine can "guess" a solution and verify it in polynomial time—but whether deterministic machines can do the same is the P vs. NP question.

NP (Nondeterministic Polynomial Time)

  • Polynomial-time verifiable—given a certificate (witness), a deterministic machine can check correctness in polynomial time
  • Contains P—every problem in P is also in NP, since solving implies verifying; the question is whether $P \subsetneq NP$
  • NP-complete problems like SAT, Traveling Salesman (decision version), and Knapsack are the "hardest" in NP—solve one efficiently, solve them all
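SAT makes the certificate idea concrete: the certificate is a truth assignment, and checking it takes one linear pass over the formula. A minimal verifier, using the DIMACS-style clause encoding as an assumed input format:

```python
def verify_sat(clauses, assignment):
    """Polynomial-time verifier for CNF-SAT.

    clauses: list of clauses, each a list of nonzero ints
             (DIMACS-style: literal k means variable k, -k its negation).
    assignment: dict variable -> bool; this is the NP certificate.

    One pass over the formula suffices, so verification is linear time --
    exactly the 'efficient verification' that defines NP.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
```

Finding a satisfying assignment may take exponential search; checking a proposed one is trivial—that asymmetry is the heart of P vs. NP.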

co-NP (Complement of NP)

  • Complements verifiable in polynomial time—for co-NP problems, you can efficiently verify that an instance is a "no" instance
  • Tautology and unsatisfiability are canonical examples—proving a formula is always true or never satisfiable
  • $NP \cap co\text{-}NP$ contains problems verifiable in both directions; if $NP \neq co\text{-}NP$, then $P \neq NP$
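For TAUTOLOGY, the co-NP certificate for a "no" answer is a single falsifying assignment, checkable with one evaluation. The brute-force checker below is exponential and purely illustrative; the `formula`-as-function encoding is an assumption of this sketch.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Brute-force TAUTOLOGY check (exponential time, for illustration only).

    formula: a function taking a tuple of booleans.
    Returns (True, None) if the formula holds under every assignment,
    else (False, witness) -- and that single falsifying assignment is
    the efficiently checkable co-NP certificate for the 'no' answer.
    """
    for assignment in product([False, True], repeat=n_vars):
        if not formula(assignment):
            return False, assignment  # the witness for non-tautology
    return True, None
```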

NL (Nondeterministic Logarithmic Space)

  • Nondeterministic log-space computation—can guess a polynomial-length certificate but only use $O(\log n)$ space to verify
  • Directed st-connectivity is NL-complete—determining if a path exists between two nodes in a directed graph
  • $NL \subseteq P$ is known, but whether L = NL remains open; Immerman–Szelepcsényi showed $NL = co\text{-}NL$
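Directed st-connectivity itself is easy to solve in polynomial time with BFS—a direct illustration of $NL \subseteq P$ (the NL-hard part is doing it in $O(\log n)$ space, which BFS does not). A minimal sketch, with the adjacency-list format assumed:

```python
from collections import deque

def st_connected(graph, s, t):
    """Directed st-connectivity (the canonical NL-complete problem),
    solved here by BFS in polynomial time and linear space.

    graph: dict node -> list of successor nodes (assumed format).
    """
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```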

Compare: NP vs. co-NP—NP verifies "yes" answers efficiently; co-NP verifies "no" answers efficiently. If you can do both for a problem, it's in NP \cap co-NP. For FRQs on open problems, the NP vs. co-NP question is closely tied to P vs. NP.


Space-Bounded Complexity

Space complexity classes focus on memory usage rather than time. The key insight is that space can be reused, so space-bounded classes often contain their time-bounded counterparts.

PSPACE (Polynomial Space)

  • Polynomial memory, unlimited time—can run for exponential time as long as memory stays polynomial; formally, $\text{PSPACE} = \bigcup_k \text{SPACE}(n^k)$
  • Contains P, NP, and co-NP—the hierarchy is $P \subseteq NP \subseteq PSPACE$, but whether any containment is strict is unknown
  • PSPACE-complete problems include QBF (Quantified Boolean Formula) and many two-player games like generalized chess

Compare: NP vs. PSPACE—NP problems have polynomial-length certificates; PSPACE problems may require exploring exponentially many configurations but can reuse space. QBF is PSPACE-complete because its quantifier alternation captures the alternating moves of a two-player game, and the game tree can be explored depth-first with space reused between branches.
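The space-reuse point shows up directly in a recursive QBF evaluator: the recursion explores exponentially many assignments, but each stack frame's two branches reuse the same space, so the total memory is proportional to the number of variables. A sketch, with the quantifier-string and formula-as-function encodings assumed:

```python
def eval_qbf(quantifiers, formula, assignment=()):
    """Evaluate a fully quantified Boolean formula.

    quantifiers: string over 'A' (forall) / 'E' (exists), one per variable.
    formula: function taking a tuple of booleans (variables in quantifier order).

    Runtime is exponential (2^n leaves), but the recursion depth -- and
    hence the space -- is only linear in the number of variables, because
    sibling branches reuse each other's stack space.
    """
    if not quantifiers:
        return formula(assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, formula, assignment + (b,))
                for b in (False, True))
    return all(branches) if q == 'A' else any(branches)
```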


Probabilistic Computation

Randomized algorithms introduce a new resource: random bits. These classes capture what's efficiently solvable when we allow bounded probability of error.

BPP (Bounded-error Probabilistic Polynomial Time)

  • Polynomial time with error probability at most 1/3—the constant 1/3 is arbitrary: any bound below 1/2 works, and repetition amplifies the error down to $2^{-n}$
  • Believed equal to P—no natural problem is known in BPP but not P; derandomization conjectures suggest BPP = P
  • Practical importance—primality testing was in BPP (Miller-Rabin) before AKS proved it in P; Monte Carlo algorithms live here
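Miller–Rabin is the standard concrete example of a BPP-style algorithm, and it also shows error amplification in action: each independent round errs with probability at most 1/4, so `rounds` repetitions push the error below $4^{-\text{rounds}}$. A compact sketch:

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin primality test: a BPP-style randomized algorithm.

    A composite n survives one round (fools one random base a) with
    probability at most 1/4, so `rounds` independent repetitions reduce
    the error below 4**-rounds -- the amplification mentioned above.
    """
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a witnesses that n is composite
    return True                # probably prime
```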

Compare: P vs. BPP—P is deterministic; BPP allows randomness with bounded error. The conjecture BPP = P suggests randomness doesn't add computational power, but proving this requires circuit lower bounds we don't yet have.


Intractable Classes: Exponential Time

These classes represent problems believed to require exponential resources. The jump from polynomial to exponential time represents a fundamental barrier in tractability.

EXPTIME (Exponential Time)

  • Deterministic $2^{n^{O(1)}}$ time—problems requiring brute-force search over exponentially many possibilities
  • Strictly contains P—by the time hierarchy theorem, $P \subsetneq EXPTIME$ is proven, not conjectured
  • EXPTIME-complete problems include generalized chess, checkers, and Go on $n \times n$ boards
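What "brute-force search over exponentially many possibilities" looks like in code: enumerate all $2^n$ subsets. (Subset-sum itself is NP-complete, not EXPTIME-complete—this sketch only illustrates the shape of a $2^n$-time search.)

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Brute force over all 2^n subsets -- exponential time.

    Illustrative only: subset-sum is NP-complete rather than
    EXPTIME-complete, but the search pattern (trying every one of
    exponentially many candidates) is the same.
    """
    n = len(nums)
    for k in range(n + 1):
        for combo in combinations(nums, k):
            if sum(combo) == target:
                return combo
    return None
```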

NEXPTIME (Nondeterministic Exponential Time)

  • Nondeterministic exponential time—can guess and verify exponential-length certificates in exponential time
  • Contains EXPTIME—and believed to be strictly larger, but $\text{EXPTIME} \stackrel{?}{=} \text{NEXPTIME}$ is open
  • Succinctly represented problems—many NEXPTIME-complete problems involve compressed or implicit inputs

Compare: EXPTIME vs. NEXPTIME—mirrors the P vs. NP question at exponential scale. The time hierarchy theorem proves $P \neq EXPTIME$, but we cannot yet prove $NP \neq NEXPTIME$, showing how hard separation proofs are.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Deterministic tractability | P, L, NC |
| Nondeterministic verification | NP, co-NP, NL |
| Space-bounded computation | L, NL, PSPACE |
| Probabilistic computation | BPP |
| Exponential intractability | EXPTIME, NEXPTIME |
| Parallel computation | NC |
| Proven separations | $L \subsetneq PSPACE$, $P \subsetneq EXPTIME$, $NL = co\text{-}NL$ |
| Major open questions | P vs. NP, L vs. NL, NP vs. co-NP, BPP vs. P |

Self-Check Questions

  1. Which two complexity classes are defined by space constraints rather than time, and how do they relate to P?

  2. Compare and contrast NP and co-NP: what does each class verify efficiently, and why would NP \neq co-NP imply P \neq NP?

  3. The time hierarchy theorem proves P \neq EXPTIME. Why can't we use similar techniques to prove P \neq NP?

  4. If a problem is in both NP and co-NP, what can you conclude about its verification properties? Give an example of such a problem.

  5. An FRQ asks you to explain why BPP is considered "practically equivalent" to P despite being defined with randomness. What evidence and conjectures would you cite?