🎲Statistical Mechanics Unit 1 Review

1.5 Phase space and microstates

Written by the Fiveable Content Team • Last updated August 2025

Definition of phase space

Phase space is a mathematical framework that captures the complete state of a mechanical system by combining all position and momentum information into a single space. It's the starting point for statistical mechanics because it lets you describe every possible configuration a system could be in.

Position and momentum coordinates

Each particle's state is specified by its position vector and momentum vector. For a single particle moving in three dimensions, you need six numbers: three position coordinates (q_x, q_y, q_z) and three momentum coordinates (p_x, p_y, p_z).

For a system of N particles in 3D, the full phase space has 6N dimensions: 3N position coordinates and 3N momentum coordinates. A single point in this 6N-dimensional space tells you the exact microscopic state of the entire system at a given instant.

Degrees of freedom

The number of degrees of freedom is the number of independent variables needed to fully specify the system's configuration. For N point particles in 3D with no constraints, that's 3N positional degrees of freedom (and 3N conjugate momenta).

If particles have internal structure, you also count rotational and vibrational degrees of freedom. A diatomic molecule, for example, adds rotational and vibrational modes beyond its center-of-mass translation.

Dimensionality of phase space

The dimensionality equals twice the number of degrees of freedom. Even modest systems become enormous: a mole of gas (N ≈ 10^23) has a phase space of roughly 6 × 10^23 dimensions. This astronomical dimensionality is exactly why statistical methods are necessary; you can't track individual trajectories for that many coordinates.

Microstates in phase space

A microstate is one specific, fully-detailed configuration of a system, meaning every particle's position and momentum is pinned down. Each microstate corresponds to a single point in phase space. The central project of statistical mechanics is connecting the huge number of microstates to the macroscopic quantities you can actually measure (temperature, pressure, entropy).

Microscopic configurations

A microscopic configuration specifies the exact positions and momenta of all particles. For a gas of 100 particles, one microstate might have particle 1 at position (x_1, y_1, z_1) with momentum (p_x1, p_y1, p_z1), and so on for all 100 particles.

The number of microstates grows exponentially with system size. This explosive growth is what makes entropy a useful and well-defined quantity for large systems.

Distinguishability of particles

In classical mechanics, particles are treated as distinguishable. Swapping two particles creates a different microstate, and you count using standard Boltzmann counting.

In quantum mechanics, identical particles are fundamentally indistinguishable. Swapping two identical particles doesn't produce a new state. This changes how you count microstates:

  • Bosons (integer spin) follow Bose-Einstein statistics
  • Fermions (half-integer spin) follow Fermi-Dirac statistics and obey the Pauli exclusion principle

Getting the counting wrong leads to incorrect entropy values. This is the origin of the famous Gibbs paradox in classical statistical mechanics, resolved by dividing the classical phase space volume by N! to correct for overcounting.
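The effect of the N! correction can be checked numerically: with the correction, entropy is extensive (doubling N and V doubles S), while the uncorrected count is not. A minimal sketch of the configurational part only (natural units; the momentum-space factor, omitted here, is already extensive):

```python
import math

def log_states(N, V, corrected=True):
    """ln of the configurational state count: V^N, optionally divided by N!."""
    s = N * math.log(V)
    if corrected:
        s -= math.lgamma(N + 1)  # ln N! (exact, via the log-gamma function)
    return s

N, V = 10**6, 1e9

# Corrected counting: doubling both N and V doubles the entropy.
single = log_states(N, V)
double = log_states(2 * N, 2 * V)
print(double / single)  # ≈ 2

# Uncorrected counting: the ratio overshoots 2 -- the Gibbs paradox.
uncorr_single = log_states(N, V, corrected=False)
uncorr_double = log_states(2 * N, 2 * V, corrected=False)
print(uncorr_double / uncorr_single)  # > 2
```

The overshoot in the uncorrected case is exactly the spurious "entropy of mixing" of identical gases that the paradox describes.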

Quantum vs. classical microstates

Classical microstates form a continuum: phase space is smooth, and a point can take any value. Quantum microstates are discrete, corresponding to quantized energy levels.

The Heisenberg uncertainty principle sets a fundamental limit: you cannot simultaneously know a particle's position and momentum more precisely than Δq Δp ≥ ħ/2. This means each quantum state occupies a minimum phase space volume of approximately h^{3N} for N particles in 3D (where h is Planck's constant). This quantum "graininess" provides a natural unit for counting states and makes the microstate count dimensionless.

Phase space trajectories

As a system evolves in time, its representative point traces out a path through phase space. These trajectories encode the full dynamics of the system and connect to deep ideas about equilibrium and ergodicity.

Time evolution of systems

In classical mechanics, the trajectory is governed by Hamilton's equations of motion:

\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}

where H is the Hamiltonian (total energy as a function of coordinates and momenta). Given an initial point in phase space, the trajectory is completely deterministic.

In quantum mechanics, time evolution is governed by the Schrödinger equation (for pure states) or the von Neumann equation (for density matrices).

Liouville's theorem

Liouville's theorem states that the phase space density is constant along trajectories generated by Hamilton's equations. Equivalently, the volume of any region in phase space is preserved under Hamiltonian time evolution.

Think of it like an incompressible fluid flowing through phase space: the "shape" of a cloud of representative points can distort, but its total volume never changes. This has a profound consequence: Hamiltonian dynamics conserves information. You can't lose track of where the system came from.
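For the harmonic oscillator, one-step maps of simple integrators make this concrete: the one-step map of a symplectic integrator has Jacobian determinant exactly 1 (area-preserving, like the true Hamiltonian flow), while forward Euler's determinant exceeds 1 and inflates phase space volume. A sketch in natural units (m = ω = 1), with the linear one-step maps written out by hand:

```python
# One time step dt for H = p^2/2 + q^2/2 (m = omega = 1).
# Each map is linear in (q, p), so its Jacobian determinant measures
# how a phase-space area element scales per step.
dt = 0.1

# Forward Euler: q' = q + dt*p, p' = p - dt*q
# Jacobian [[1, dt], [-dt, 1]] -> det = 1 + dt^2 > 1 (volume grows)
det_euler = 1 * 1 - dt * (-dt)

# Symplectic Euler: p' = p - dt*q, then q' = q + dt*p'
# so q' = (1 - dt^2)*q + dt*p -> Jacobian [[1 - dt^2, dt], [-dt, 1]]
det_sympl = (1 - dt**2) * 1 - dt * (-dt)

print(det_euler)  # 1 + dt^2: Liouville's theorem is violated
print(det_sympl)  # exactly 1: phase space volume preserved
```

This is why symplectic integrators are the standard choice for long molecular dynamics runs: they respect Liouville's theorem step by step.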

Ergodic hypothesis

The ergodic hypothesis asserts that, given enough time, a system's trajectory will pass arbitrarily close to every accessible point in phase space (consistent with its conserved quantities like total energy).

Why does this matter? It justifies replacing time averages (what you'd measure by watching one system for a long time) with ensemble averages (averaging over many copies of the system at one instant). This replacement is what makes statistical mechanics practical.

Not all systems are ergodic. Systems with additional conserved quantities or integrable systems (like a collection of uncoupled harmonic oscillators) can be confined to lower-dimensional subsets of the accessible phase space.

Density of states

The density of states Ω(E) counts how many microstates are available at a given energy E. It's the bridge between the microscopic world (individual microstates) and macroscopic thermodynamics (entropy, temperature).

Counting microstates

For discrete systems, you enumerate all configurations consistent with given constraints (total energy, particle number, etc.). Combinatorial methods are the standard tool here. For example, distributing E quanta of energy among N oscillators uses the "stars and bars" formula.
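Stars and bars gives Ω(E, N) = C(E + N − 1, N − 1) for E indistinguishable quanta among N distinguishable oscillators (an Einstein solid). A quick sketch, checked against brute-force enumeration for small E and N:

```python
import math
from itertools import product

def omega(E, N):
    """Microstate count for E quanta among N oscillators (stars and bars)."""
    return math.comb(E + N - 1, N - 1)

def omega_brute(E, N):
    """Brute force: count tuples (n_1, ..., n_N) of quanta summing to E."""
    return sum(1 for ns in product(range(E + 1), repeat=N) if sum(ns) == E)

print(omega(3, 2))        # 4: (0,3), (1,2), (2,1), (3,0)
print(omega(6, 4))        # 84
print(omega_brute(6, 4))  # 84 -- matches the formula
```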

For continuous systems, "counting" becomes measuring the volume of the energy surface in phase space. Analytical results exist for simple systems; complex systems require numerical approaches.

Continuous vs. discrete states

  • Classical systems have continuous phase spaces. The number of states in a region is proportional to the phase space volume, divided by h^{3N} to make it dimensionless.
  • Quantum systems have discrete energy levels, and you sum over them directly.

In the thermodynamic limit (N → ∞), the spacing between energy levels becomes vanishingly small relative to the total energy, and the discrete sum approaches a continuous integral.

Degeneracy of energy levels

Degeneracy means multiple distinct microstates share the same energy. For instance, a free particle in a cubic box can have different combinations of quantum numbers (n_x, n_y, n_z) that yield the same total energy E = (ħ^2 π^2 / 2mL^2)(n_x^2 + n_y^2 + n_z^2).

Degeneracy directly inflates the density of states at that energy and therefore increases entropy. Symmetries in the system (rotational, translational) are the usual source of degeneracy.
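Degeneracies like this can be enumerated directly. For the cubic box, (1,1,2), (1,2,1), and (2,1,1) all share n_x^2 + n_y^2 + n_z^2 = 6, so that level is threefold degenerate. A sketch:

```python
from itertools import product
from collections import Counter

# Group quantum-number triples by n_x^2 + n_y^2 + n_z^2, which is
# proportional to the energy of a particle in a cubic box.
degeneracy = Counter(
    nx**2 + ny**2 + nz**2
    for nx, ny, nz in product(range(1, 6), repeat=3)
)

print(degeneracy[3])   # 1: only (1,1,1) -- the non-degenerate ground state
print(degeneracy[6])   # 3: permutations of (1,1,2)
print(degeneracy[14])  # 6: permutations of (1,2,3)
```

Note how the degeneracy of the (1,2,3) level comes entirely from the permutation symmetry of the box's three equivalent axes.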

Phase space volumes

Phase space volumes quantify how many microstates are accessible to a system under given constraints. They're the raw material for calculating probabilities and entropies.

Accessible regions

The accessible region of phase space is the set of all points consistent with the system's constraints. For an isolated system with energy E, the accessible region is the energy surface (or a thin shell around it) defined by H(q, p) = E.

The shape and size of this region depend on the Hamiltonian and on external parameters like volume and particle number.

Constraints and boundaries

Constraints carve out the accessible region from the full phase space:

  • Energy conservation restricts the system to a surface or shell
  • Volume restrictions confine position coordinates
  • Particle number conservation fixes the dimensionality

Additional constraints (like fixed angular momentum) further reduce the accessible region, sometimes creating complex geometries in high-dimensional space.

Volume as probability measure

The fundamental postulate of statistical mechanics assigns equal probability to every accessible microstate. This means the probability of finding the system in some region of phase space is proportional to the volume of that region.

This directly leads to Boltzmann's entropy formula:

S = k_B \ln \Omega

where Ω is the number of accessible microstates (proportional to the phase space volume of the accessible region) and k_B is Boltzmann's constant.


Ensemble averages

An ensemble is a (conceptual) collection of many copies of the same system, each in a different microstate. Ensemble averages let you calculate macroscopic observables without solving the equations of motion for every particle.

Probability distributions in phase space

The probability distribution ρ(q, p) describes how likely you are to find the system at each point in phase space. The form of ρ depends on which ensemble you're using:

  • Microcanonical: ρ is uniform over the energy surface, zero elsewhere
  • Canonical: ρ ∝ e^{-H/k_B T} (Boltzmann distribution)
  • Grand canonical: also allows particle number to fluctuate, weighting states by e^{-(H - μN)/k_B T}

The ensemble average of any observable A(q, p) is:

\langle A \rangle = \int A(q, p) \, \rho(q, p) \, dq \, dp
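As a sanity check on this formula, the canonical average of the 1D kinetic energy p^2/2m should come out to k_B T/2 (equipartition). A numerical sketch in units where m = k_B T = 1, evaluating both integrals with a simple midpoint rule:

```python
import math

def canonical_kinetic_average(pmax=10.0, n=100_000):
    """<p^2/2m> in the canonical ensemble with m = k_B T = 1,
    so the weight is rho(p) ~ exp(-p^2/2)."""
    dp = 2 * pmax / n
    num = den = 0.0
    for i in range(n):
        p = -pmax + (i + 0.5) * dp
        w = math.exp(-p * p / 2)   # Boltzmann weight
        num += (p * p / 2) * w     # numerator:   integral of A * rho
        den += w                   # denominator: normalization of rho
    return num / den

print(canonical_kinetic_average())  # ≈ 0.5 = k_B T / 2
```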

Time averages vs. ensemble averages

A time average follows one system over a long period and averages the observable along its trajectory. An ensemble average averages over many copies of the system at a single instant.

In practice, ensemble averages are far easier to compute. The ergodic hypothesis guarantees that these two types of averages agree for ergodic systems at equilibrium.

Ergodicity and equilibrium

An ergodic system eventually explores all accessible microstates, so its long-time behavior is fully captured by the ensemble average. Equilibrium is the condition where ensemble averages become time-independent.

Non-ergodic systems (glasses, systems with broken symmetry, integrable systems) may get trapped in subsets of phase space and never reach true equilibrium, or they may exhibit multiple metastable states.

Microcanonical ensemble

The microcanonical ensemble describes an isolated system with fixed energy E, volume V, and particle number N. It's the most fundamental ensemble because it's built directly from energy conservation.

Constant energy surfaces

All microstates with total energy E lie on a (6N − 1)-dimensional surface in phase space defined by H(q, p) = E. In practice, you often work with a thin energy shell between E and E + δE, since a surface of zero thickness has zero volume.

The geometry of this surface is determined entirely by the system's Hamiltonian.

Equal a priori probability

The fundamental postulate: all accessible microstates (those on the constant energy surface) are equally probable. There's no deeper derivation of this within classical statistical mechanics; it's an axiom justified by its success.

This postulate leads directly to the principle of maximum entropy at equilibrium: the equilibrium macrostate is the one realized by the largest number of microstates.

Entropy and phase space volume

Boltzmann's entropy formula connects the macroscopic quantity entropy to the microscopic count of states:

S = k_B \ln \Omega(E, V, N)

From this, you can derive all other thermodynamic quantities. Temperature, for example, emerges as:

\frac{1}{T} = \frac{\partial S}{\partial E}\bigg|_{V, N}

This gives entropy a concrete microscopic meaning: it measures how many microstates are compatible with the macroscopic constraints. The second law of thermodynamics then follows from the overwhelming probability that a system will be found in the macrostate with the most microstates.
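This derivative can be evaluated numerically for a small Einstein solid: count Ω(E) = C(E + N − 1, N − 1), set S = ln Ω (in units of k_B), and approximate ∂S/∂E with a finite difference. Adding energy should lower β = 1/(k_B T), i.e., the system gets hotter. A sketch:

```python
import math

def entropy(E, N):
    """S / k_B = ln Omega for an Einstein solid: E quanta, N oscillators."""
    return math.log(math.comb(E + N - 1, N - 1))

N = 50
# beta = dS/dE (in units where the quantum of energy and k_B are 1),
# approximated by a centered finite difference at several energies.
betas = [(entropy(E + 1, N) - entropy(E - 1, N)) / 2
         for E in range(10, 200, 40)]
print(betas)  # positive and decreasing: more energy -> higher temperature
```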

Phase space in statistical mechanics

Phase space is the unifying mathematical language of statistical mechanics. Every thermodynamic quantity can, in principle, be extracted from the structure of phase space and the probability distributions defined on it.

Connection to thermodynamic properties

Macroscopic observables like temperature, pressure, and internal energy are all ensemble averages of microscopic quantities. Pressure, for example, can be computed from the virial:

P = \frac{1}{3V}\left\langle \sum_i \mathbf{r}_i \cdot \mathbf{F}_i \right\rangle + \frac{N k_B T}{V}

This is how statistical mechanics provides microscopic interpretations of thermodynamic concepts.

Partition functions

The partition function is a sum (or integral) over all microstates, weighted by the Boltzmann factor. For the canonical ensemble:

Z = \int e^{-H(q,p)/k_B T} \, \frac{dq \, dp}{h^{3N} N!}

The partition function is enormously powerful: once you have Z, you can extract all thermodynamic information. The free energy, entropy, internal energy, and pressure all follow from derivatives of ln Z.
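For the quantum harmonic oscillator the sum over states closes in analytic form, Z = e^{-βħω/2}/(1 − e^{-βħω}), so the claim is easy to test: the internal energy U = −∂ ln Z/∂β from a numerical derivative should match the textbook result U = ħω(1/2 + 1/(e^{βħω} − 1)). A sketch in units ħω = 1:

```python
import math

def log_Z(beta):
    """ln Z for a quantum harmonic oscillator, units hbar*omega = 1."""
    return -beta / 2 - math.log(1 - math.exp(-beta))

def U_exact(beta):
    """U = 1/2 + 1/(e^beta - 1) in the same units."""
    return 0.5 + 1 / (math.exp(beta) - 1)

beta, h = 1.3, 1e-6
# U = -d(ln Z)/d(beta), via a centered finite difference
U_numeric = -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)
print(U_numeric, U_exact(beta))  # the two values agree closely
```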

Free energy calculations

The Helmholtz free energy connects directly to the canonical partition function:

F = -k_B T \ln Z

Other thermodynamic potentials (Gibbs free energy, grand potential) relate to partition functions of other ensembles through Legendre transformations. Free energies determine equilibrium conditions and govern phase transitions.

Numerical methods

Most realistic systems can't be solved analytically. Numerical methods let you explore phase space computationally, making statistical mechanics applicable to real materials and complex phenomena.

Monte Carlo sampling

Monte Carlo methods generate random samples of microstates according to a target probability distribution (typically the Boltzmann distribution). The Metropolis algorithm is the classic approach:

  1. Start from some microstate
  2. Propose a random move (e.g., displace a particle)
  3. Accept the move with probability min(1, e^{-ΔE/k_B T})
  4. Repeat to build up a representative sample of phase space

This is efficient for computing equilibrium averages in high-dimensional systems.
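The recipe above can be sketched for a single particle in a 1D harmonic well with k_B T = m = ω = 1, where the exact equilibrium value of ⟨q^2⟩ is k_B T/(mω^2) = 1:

```python
import math
import random

random.seed(0)

def metropolis_harmonic(n_steps=200_000, step=1.0):
    """Sample rho(q) ~ exp(-q^2/2) with the Metropolis algorithm
    and accumulate <q^2> along the way."""
    q = 0.0
    total = 0.0
    for _ in range(n_steps):
        q_new = q + random.uniform(-step, step)   # 2. propose a random move
        dE = (q_new**2 - q**2) / 2                #    energy change of the move
        if dE <= 0 or random.random() < math.exp(-dE):
            q = q_new                             # 3. accept with min(1, e^-dE)
        total += q * q                            # 4. accumulate the observable
    return total / n_steps

print(metropolis_harmonic())  # ≈ 1.0 = k_B T / (m omega^2)
```

Note that a rejected move still contributes the current state to the average; skipping rejected states would bias the sampling.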

Molecular dynamics simulations

Molecular dynamics (MD) numerically integrates Hamilton's equations step by step, generating a trajectory through phase space. This gives you access to both equilibrium properties (via time averages) and dynamical properties like diffusion coefficients and viscosities.

Common integrators include the Verlet and velocity-Verlet algorithms, chosen for their good energy conservation over long simulations.
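A minimal velocity-Verlet sketch for the 1D harmonic oscillator (m = ω = 1) shows that good energy conservation in action: the energy oscillates slightly over the run but shows no long-term drift.

```python
def velocity_verlet(q, p, dt, n_steps):
    """Integrate H = p^2/2 + q^2/2 (m = omega = 1); the force is F(q) = -q."""
    f = -q
    for _ in range(n_steps):
        p += 0.5 * dt * f          # half kick
        q += dt * p                # full drift
        f = -q                     # force at the new position
        p += 0.5 * dt * f          # half kick
    return q, p

q0, p0 = 1.0, 0.0
E0 = 0.5 * (p0**2 + q0**2)
q, p = velocity_verlet(q0, p0, dt=0.01, n_steps=100_000)
E = 0.5 * (p**2 + q**2)
print(abs(E - E0))  # tiny: no secular energy drift after 100,000 steps
```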

Phase space discretization

Numerical work requires approximating the continuous phase space with a finite grid or set of sample points. The resolution of this discretization affects accuracy: too coarse and you miss important features, too fine and the computation becomes prohibitively expensive. Adaptive methods and importance sampling help focus computational effort on the regions of phase space that matter most.

Applications and examples

Ideal gas in phase space

The ideal gas is the simplest many-particle system. Because particles don't interact, the Hamiltonian is just a sum of single-particle kinetic energies:

H = \sum_{i=1}^{N} \frac{|\mathbf{p}_i|^2}{2m}

The phase space factorizes into independent single-particle contributions, making the partition function analytically solvable. The result reproduces the ideal gas law PV = Nk_B T and gives the Sackur-Tetrode equation for entropy.
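The ideal gas law can be recovered numerically from the partition function: ln Z = N ln V − ln N! − 3N ln λ (with λ the thermal de Broglie wavelength), and P = k_B T ∂ ln Z/∂V. Since only the N ln V term depends on V, a finite difference should give P ≈ Nk_B T/V. A sketch with k_B T = 1 and a placeholder constant standing in for the λ term:

```python
import math

def log_Z_ideal(V, N, log_lambda3=0.7):
    """ln Z = N ln V - ln N! - N ln(lambda^3).
    log_lambda3 is an arbitrary placeholder: it is V-independent,
    so it cannot affect the pressure."""
    return N * math.log(V) - math.lgamma(N + 1) - N * log_lambda3

N, V, kT, h = 100, 50.0, 1.0, 1e-6
# P = kT * d(ln Z)/dV via a centered finite difference
P = kT * (log_Z_ideal(V + h, N) - log_Z_ideal(V - h, N)) / (2 * h)
print(P, N * kT / V)  # both ≈ 2.0: PV = N k_B T
```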

Harmonic oscillator states

A single 1D harmonic oscillator has the Hamiltonian H = p^2/2m + (1/2)mω^2 q^2. In classical phase space, constant-energy trajectories are ellipses. In quantum mechanics, the energy levels are discrete: E_n = ħω(n + 1/2).

The harmonic oscillator is a workhorse model. Solids are often modeled as collections of coupled harmonic oscillators (the Einstein and Debye models), and molecular vibrations are treated as harmonic oscillators at low energies.

Many-body systems

Real systems (liquids, plasmas, proteins, condensed matter) involve complex interactions and enormous phase spaces. Analytical solutions are rarely possible, so these systems are studied using the Monte Carlo and molecular dynamics methods described above. The concepts of phase space, microstates, and ensemble averages remain the theoretical foundation, even when the practical work is computational.