Definition of master equation
A master equation describes how the probability of finding a system in each of its possible states changes over time. It's the central equation for modeling stochastic (random) processes in non-equilibrium statistical mechanics, and it connects microscopic transition rules to the macroscopic evolution of probability distributions.
The core idea: at any moment, probability "flows" between states. Each state gains probability from transitions arriving from other states and loses probability from transitions leaving toward other states. The master equation keeps the books on all of these flows simultaneously.
Probability transition rates
Transition rates quantify how likely a system is to jump from one state to another per unit time. If W_{ij} is the transition rate from state j to state i, it has units of inverse time (e.g., s^{-1}).
- These rates depend on the physics of the system: energy barriers between states, interaction strengths, temperature, and external conditions.
- They can be constant (time-independent) or vary with time if the system is driven by a changing external field.
- Transition rates are not the same as probabilities. A rate of W_{ij} = 5 s^{-1} means that, on average, the transition would occur 5 times per second if the system were certainly in state j.
Time evolution of systems
The master equation tracks how the full probability distribution shifts as time progresses. It accounts for:
- Gain terms: probability flowing into state from all other states
- Loss terms: probability flowing out of state to all other states
This balance of gains and losses governs whether the system relaxes toward equilibrium, reaches a non-equilibrium steady state, or exhibits more complex dynamical behavior.
Components of master equation
State variables
State variables label the possible configurations of the system. They define the "space" over which the probability distribution lives.
- Discrete states: energy levels of an atom, number of molecules of a chemical species, spin configurations in a lattice. These lead to a countable set of probabilities P_i(t).
- Continuous states: position and momentum of a Brownian particle. These lead to a probability density P(x, t), and the master equation becomes an integro-differential equation.
The number of state variables determines the dimensionality of the problem. A two-level system has two states; a lattice of N spins has 2^N states, which grows extremely fast.
Transition probabilities
Transition probabilities (or rates) specify the rules for how the system hops between states.
- They can be symmetric (W_{ij} = W_{ji}), meaning forward and backward transitions are equally likely, or asymmetric, which is common when an external force or chemical potential gradient drives the system.
- They're typically derived from microscopic physics. For example, Fermi's golden rule gives transition rates between quantum states, and Arrhenius-type expressions give rates for thermally activated processes: W ∝ e^{-E_a / k_B T}, where E_a is the activation energy.
Time dependence
The probabilities P_i(t) always depend on time (that's what the master equation solves for). The transition rates may or may not depend on time:
- Autonomous systems: W_{ij} is constant. The system's rules don't change, and it typically relaxes toward a unique steady state.
- Non-autonomous systems: W_{ij}(t) varies, reflecting time-dependent driving (e.g., an oscillating external field). These systems can exhibit periodic steady states or more complex behavior.
Mathematical formulation
Differential equation form
The master equation in its standard form is:

dP_i(t)/dt = Σ_{j≠i} [ W_{ij} P_j(t) − W_{ji} P_i(t) ]
Here's how to read each piece:
- P_i(t) is the probability of being in state i at time t.
- W_{ij} P_j(t) is the rate of probability flowing into state i from state j (gain).
- W_{ji} P_i(t) is the rate of probability flowing out of state i to state j (loss).
- The sum runs over all states j ≠ i.
The right-hand side is the net probability current into state i. If gains exceed losses, P_i increases; if losses exceed gains, it decreases. Total probability is conserved: Σ_i P_i(t) = 1 at all times.
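To make the bookkeeping concrete, here is a minimal Python sketch (the three states and all rate values are invented for illustration) that evolves the gain/loss balance with a crude Euler step and confirms that total probability stays at 1:

```python
import numpy as np

# Hypothetical 3-state system; the rates are made-up numbers (units: 1/s).
# W[i][j] = transition rate from state j to state i; diagonal left at zero.
W = np.array([[0.0, 1.0, 0.5],
              [2.0, 0.0, 1.0],
              [0.5, 3.0, 0.0]])

def master_rhs(P):
    """Net probability current into each state: gains minus losses."""
    gains = W @ P                # sum_j W_ij * P_j
    losses = W.sum(axis=0) * P   # sum_j W_ji * P_i
    return gains - losses

P = np.array([1.0, 0.0, 0.0])    # start certainly in state 0

# Crude Euler integration; the gain/loss structure conserves sum(P) exactly.
dt = 0.001
for _ in range(5000):
    P = P + dt * master_rhs(P)

print(P.sum())   # ≈ 1.0: probability is conserved
```

A real solver would use an adaptive ODE integrator, but the conservation property holds for any scheme because the right-hand side sums to zero by construction.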
Matrix representation
For a system with N discrete states, you can collect all probabilities into a column vector P(t) = (P_1(t), …, P_N(t))^T and write:

dP/dt = W P
The transition rate matrix W (or generator) has the following structure:
- Off-diagonal elements: W_{ij} (transition rate from j to i)
- Diagonal elements: W_{ii} = −Σ_{j≠i} W_{ji}, ensuring each column sums to zero (probability conservation)
This matrix form is powerful because:
- The formal solution is P(t) = e^{Wt} P(0).
- Eigenvalue decomposition of W reveals the relaxation timescales. The eigenvalue λ = 0 corresponds to the steady state; all other eigenvalues have negative real parts (for ergodic systems), and their magnitudes give the decay rates of transient modes.
Continuous vs. discrete time
- Continuous-time master equations use the differential form above. They're natural for physical systems where transitions can happen at any instant.
- Discrete-time master equations use a difference equation: P(t + Δt) = T P(t), where T is a transition probability matrix (columns sum to 1, not 0). This is the standard Markov chain formulation, useful for systems with well-defined update steps like cellular automata or Monte Carlo simulations.
The two formulations are related: for small Δt, T ≈ I + W Δt.
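A quick numerical check of this correspondence, using a made-up 2-state generator: I + W Δt is a valid stochastic matrix for small Δt, and iterating it reproduces the continuous-time steady state:

```python
import numpy as np

# Made-up 2-state generator (columns sum to zero)
W = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])
dt = 0.01
T = np.eye(2) + W * dt   # discrete-time transition matrix for one step of dt

print(T.sum(axis=0))     # each column sums to 1
print(np.all(T >= 0))    # entries are valid probabilities for small enough dt

# Iterating T approximates continuous-time evolution; here the chain
# converges to the steady state [2/3, 1/3] of the generator W.
P = np.array([1.0, 0.0])
for _ in range(1000):
    P = T @ P
print(P)
```

Because T − I = W Δt, the discrete chain here shares the generator's stationary distribution exactly; for general schemes the agreement is only to first order in Δt.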
Applications in statistical mechanics
Equilibrium systems
Master equations describe how isolated systems relax toward thermal equilibrium. If the transition rates satisfy detailed balance (see below), the steady-state solution is the Boltzmann distribution: P_i^{eq} ∝ e^{-E_i / k_B T}. This provides a dynamical route to deriving equilibrium statistical mechanics, complementing the usual ensemble approach.
Applications include modeling spin relaxation in magnetic materials, energy redistribution in ideal gases, and equilibration of simple chemical reactions.

Non-equilibrium processes
Many of the most interesting applications involve systems away from equilibrium:
- Transport phenomena: heat conduction through a chain of oscillators, particle diffusion across a membrane
- Driven systems: a molecular motor consuming ATP, a semiconductor under voltage bias
- Biological systems: enzyme kinetics (Michaelis-Menten as a master equation), gene regulatory networks, epidemic spreading (SIR models)
In these cases, detailed balance is typically violated, and the steady state carries nonzero probability currents.
Stochastic dynamics
When systems are small (few molecules, few particles), random fluctuations become significant and deterministic rate equations break down. The master equation captures the full probability distribution, not just the mean.
- Chemical master equation: tracks the probability P(n_1, n_2, …, t) of having exactly n_1 molecules of species 1, n_2 of species 2, etc. Critical for gene expression, where copy numbers can be as low as a few molecules per cell.
- Noise-induced phenomena: stochastic resonance (noise actually improves signal detection) and noise-induced transitions between metastable states.
Solving master equations
Analytical methods
Exact solutions are possible for simple systems:
- Eigenvalue decomposition: For time-independent W, decompose W into eigenvalues and eigenvectors. The solution is a sum of exponentially decaying modes plus the steady state.
- Generating function methods: Particularly useful for birth-death processes. Define G(s, t) = Σ_n P_n(t) s^n, which converts the master equation into a PDE that may be solvable.
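As a worked instance of the generating-function method, take a birth-death process with production rate k and degradation rate γn (the same toy model used in the chemical-reactions example below). Multiplying the master equation by s^n and summing over n gives a first-order PDE:

```latex
\frac{\partial G}{\partial t} = (s-1)\left(k\,G - \gamma\,\frac{\partial G}{\partial s}\right),
\qquad G(s,t) = \sum_n P_n(t)\, s^n .
```

Setting ∂G/∂t = 0 and using the normalization G(1, t) = 1 yields the stationary solution G(s) = e^{(k/γ)(s−1)}, which is the generating function of a Poisson distribution with mean k/γ.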
- Detailed balance exploitation: If detailed balance holds, the steady state can be written down directly without solving the full dynamics.
Numerical techniques
For larger or more complex systems:
- Direct ODE integration: Runge-Kutta or implicit methods applied to the coupled ODEs. Works well for moderate state spaces (up to thousands of states).
- Matrix exponentiation: Compute using Padé approximants or Krylov subspace methods. Efficient for sparse matrices.
- Gillespie algorithm (stochastic simulation algorithm): Instead of tracking the full distribution, simulate individual stochastic trajectories. Each step picks the next transition and its timing from the correct probability distribution. Exact for any state space size, but you need many trajectories for good statistics.
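A minimal Gillespie sketch for a birth-death process (production at rate k, degradation at rate γn; the parameter values are illustrative, not from any particular system):

```python
import random

# Birth-death process: production n -> n+1 at rate k,
# degradation n -> n-1 at rate gamma * n. Illustrative parameters.
k, gamma = 10.0, 1.0

def gillespie(t_max, n0=0, seed=0):
    """Simulate one trajectory and return the copy number at time t_max."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        a_birth = k
        a_death = gamma * n
        a_total = a_birth + a_death
        # Waiting time to the next reaction is exponential with rate a_total
        t += rng.expovariate(a_total)
        if t > t_max:
            return n
        # Choose which reaction fires, weighted by its propensity
        if rng.random() * a_total < a_birth:
            n += 1
        else:
            n -= 1

# Average many trajectories; the mean should approach k / gamma = 10
samples = [gillespie(t_max=20.0, seed=s) for s in range(500)]
print(sum(samples) / len(samples))
```

Each trajectory is statistically exact; the cost is that moments and distributions must be estimated from many runs.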
- Kinetic Monte Carlo: Similar in spirit to Gillespie, widely used in surface science and materials modeling.
Approximation schemes
When exact or direct numerical solutions are impractical:
- System size expansion (van Kampen): Expand the master equation in powers of 1/√Ω, where Ω is a system size parameter (e.g., volume or total population). The leading order gives deterministic rate equations; the next order gives a linear Fokker-Planck equation for fluctuations.
- Moment closure: Write equations for ⟨n⟩, ⟨n²⟩, etc. Higher moments couple to lower ones, creating an infinite hierarchy. Truncate by approximating higher moments in terms of lower ones (e.g., assume Gaussian statistics).
- Adiabatic elimination: If some variables relax much faster than others, set their time derivatives to zero and solve for them in terms of the slow variables. This reduces the dimensionality of the problem.
- Mean-field approximation: Replace correlations between subsystems with average values. Exact for fully connected systems; an approximation for finite-dimensional ones.
Steady-state solutions
Detailed balance condition
Detailed balance means that, in the steady state, every individual transition is balanced by its reverse:

W_{ij} P_j^{ss} = W_{ji} P_i^{ss} for every pair of states i, j

This is a much stronger condition than simply requiring dP_i/dt = 0 (which only requires the net flow into each state to vanish). Detailed balance guarantees:
- The steady state is a true thermodynamic equilibrium (no net currents anywhere).
- Time-reversibility: a movie of the system's fluctuations looks the same played forward or backward.
- The steady-state distribution can be constructed by chaining pairwise ratios: P_i / P_j = W_{ij} / W_{ji}.
Systems driven out of equilibrium (e.g., by external forces or chemical gradients) violate detailed balance and sustain nonzero probability currents in their steady states.
Stationary distributions
A stationary distribution P^{ss} satisfies:

W P^{ss} = 0

This means P^{ss} is the eigenvector of W with eigenvalue zero. For an ergodic system (one where every state can be reached from every other state), this eigenvector is unique, and the system will reach it regardless of initial conditions.
Non-ergodic systems can have multiple stationary distributions. For example, a system with absorbing states (states with no outgoing transitions) will have a stationary distribution concentrated on those absorbing states, but the specific distribution depends on initial conditions.
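A small numerical illustration of this initial-condition dependence, with an invented 3-state chain in which states 0 and 2 are absorbing:

```python
import numpy as np
from scipy.linalg import expm

# States 0 and 2 are absorbing (no outgoing rates); state 1 leaks into
# both with illustrative rates 1 and 2 (so total escape rate 3).
W = np.array([[0.0,  1.0, 0.0],
              [0.0, -3.0, 0.0],
              [0.0,  2.0, 0.0]])

# Evolve two different initial conditions to long times
Pa = expm(W * 100.0) @ np.array([0.0, 1.0, 0.0])
Pb = expm(W * 100.0) @ np.array([0.5, 0.0, 0.5])

print(Pa)  # mass splits 1/3 : 2/3 between the absorbing states
print(Pb)  # stays [0.5, 0, 0.5]: a different stationary distribution
```

Both final vectors are stationary (W annihilates them), yet they differ: the zero eigenvalue of this non-ergodic generator is degenerate.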
Ergodicity
A system is ergodic if it can eventually visit all accessible states starting from any initial state. Ergodicity ensures:
- Time averages equal ensemble averages (the system "samples" its own steady-state distribution over long times).
- The stationary distribution is unique.
- The Perron-Frobenius theorem guarantees that the zero eigenvalue of W is non-degenerate.
Ergodicity breaks down in systems with disconnected state spaces, absorbing states, or certain symmetries that trap the system in subsets of states. Glassy systems exhibit effective ergodicity breaking: they're technically ergodic but the timescale to explore all states exceeds any practical observation time.
Connection to other concepts
Markov processes
The master equation is the evolution equation for a Markov process: a stochastic process where the future depends only on the present state, not on the history of how the system got there. This "memoryless" property is encoded in the fact that the transition rates W_{ij} depend only on the states i and j, not on the system's trajectory.
All the mathematical machinery of Markov chain theory (Chapman-Kolmogorov equation, classification of states, convergence theorems) applies directly to master equations.

Fokker-Planck equation
The Fokker-Planck equation is the continuous-state limit of the master equation. When transitions involve small jumps in a continuous variable x, you can Taylor-expand the master equation to second order in the jump size, yielding:

∂P(x, t)/∂t = −∂/∂x [A(x) P(x, t)] + (1/2) ∂²/∂x² [B(x) P(x, t)]

Here A(x) is the drift coefficient (deterministic tendency) and B(x) is the diffusion coefficient (strength of fluctuations). This is the Kramers-Moyal expansion truncated at second order.
Langevin equation
While the master equation and Fokker-Planck equation describe the evolution of probability distributions, the Langevin equation describes a single stochastic trajectory:

dx/dt = A(x) + √(B(x)) ξ(t)

where ξ(t) is Gaussian white noise with ⟨ξ(t)⟩ = 0 and ⟨ξ(t) ξ(t′)⟩ = δ(t − t′).
The Langevin and Fokker-Planck descriptions are equivalent: averaging over many Langevin trajectories reproduces the Fokker-Planck probability distribution. The Langevin approach is often more convenient for simulations, while the Fokker-Planck/master equation approach is better for analytical calculations.
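As a sketch of this equivalence, the following simulates many trajectories of an Ornstein-Uhlenbeck process (drift A(x) = −γx, constant diffusion B = 2D; the parameter values are invented) with the Euler-Maruyama scheme, and compares the empirical variance to the Fokker-Planck stationary value D/γ:

```python
import numpy as np

# Euler-Maruyama for dx/dt = -gamma*x + sqrt(2D) xi(t).
# The stationary Fokker-Planck solution is Gaussian with variance D/gamma.
gamma, D = 1.0, 0.5
dt, n_steps, n_traj = 0.01, 2000, 2000

rng = np.random.default_rng(0)
x = np.zeros(n_traj)                      # all trajectories start at x = 0
for _ in range(n_steps):
    # Each step: deterministic drift plus Gaussian noise of variance 2*D*dt
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), size=n_traj)
    x = x - gamma * x * dt + noise

print(x.var())   # should be close to D / gamma = 0.5
```

The agreement improves as dt → 0 and as the number of trajectories grows, which is exactly the sense in which the two descriptions are equivalent.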
Examples in physical systems
Chemical reactions
Consider a simple birth-death process for a molecular species with copy number n:
- Production at rate k: n → n + 1
- Degradation at rate γn: n → n − 1
The master equation is:

dP_n/dt = k P_{n−1} + γ(n + 1) P_{n+1} − (k + γn) P_n

The steady-state solution is a Poisson distribution with mean k/γ. For systems with very few molecules (e.g., transcription factors in a cell, where copy numbers are often small), the full stochastic description matters because fluctuations are comparable to the mean.
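A numerical check of the Poisson steady state, truncating the state space at an n_max well above the mean (k, γ, and n_max below are illustrative choices):

```python
import numpy as np
from math import exp, factorial

# Truncated generator for the birth-death master equation
# dP_n/dt = k P_{n-1} + gamma (n+1) P_{n+1} - (k + gamma n) P_n
k, gamma, n_max = 4.0, 1.0, 40

W = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n < n_max:
        W[n + 1, n] += k             # birth n -> n+1
        W[n, n]     -= k
    if n > 0:
        W[n - 1, n] += gamma * n     # death n -> n-1
        W[n, n]     -= gamma * n

# Steady state: normalized null vector of W
vals, vecs = np.linalg.eig(W)
P_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
P_ss /= P_ss.sum()

# Compare against the Poisson distribution with mean k/gamma
poisson = np.array([exp(-k / gamma) * (k / gamma) ** n / factorial(n)
                    for n in range(n_max + 1)])
print(np.abs(P_ss - poisson).max())   # near zero
```

The truncation error is negligible as long as the Poisson tail beyond n_max is tiny, which is why n_max is chosen ten times the mean here.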
Population dynamics
Birth-death master equations model populations where individuals are born and die stochastically. The SIR (Susceptible-Infected-Recovered) model for epidemics can be formulated as a master equation over the joint state space . For small populations, stochastic effects like random extinction of the infected class become important and are missed by deterministic ODE models.
Quantum systems
Open quantum systems (a quantum system coupled to an environment) are described by the Lindblad master equation for the density matrix ρ:

dρ/dt = −(i/ħ)[H, ρ] + Σ_k ( L_k ρ L_k† − (1/2){L_k† L_k, ρ} )

The first term is coherent (Hamiltonian) evolution; the second describes dissipation and decoherence through the Lindblad operators L_k. This is the quantum analog of the classical master equation and is central to quantum optics, quantum computing (modeling noise and errors), and condensed matter physics.
Limitations and extensions
Non-Markovian processes
The standard master equation assumes no memory: transition rates depend only on the current state. Real systems often have memory effects (e.g., a polymer whose future dynamics depend on its conformational history, or a system coupled to a structured environment).
Generalized master equations handle this by introducing a memory kernel K(t − t′):

dP_i(t)/dt = ∫_0^t dt′ Σ_j [ K_{ij}(t − t′) P_j(t′) − K_{ji}(t − t′) P_i(t′) ]

The Markovian master equation is recovered when K_{ij}(t − t′) = W_{ij} δ(t − t′). Non-Markovian dynamics appear in glassy systems, protein folding, and quantum systems with structured baths.
Quantum master equations
Beyond the Lindblad equation (which is Markovian), there are non-Markovian quantum master equations for systems strongly coupled to their environment. The Nakajima-Zwanzig projection operator formalism provides a systematic way to derive these, though the resulting equations are often difficult to solve.
Generalized master equations
Projection operator techniques (Mori-Zwanzig formalism) allow you to derive effective master equations for a subset of "relevant" variables by integrating out the remaining "irrelevant" degrees of freedom. The price is that the resulting equation is typically non-Markovian and involves memory kernels. This approach is widely used in complex fluids, polymer dynamics, and coarse-grained modeling.
Experimental relevance
Measurement of transition rates
Transition rates can be extracted from experiments in several ways:
- Single-molecule experiments: optical traps and fluorescence techniques track individual molecules switching between conformational states, giving direct access to transition rates.
- Spectroscopy: absorption and emission spectra reveal energy level spacings and transition rates between quantum states.
- Fluorescence correlation spectroscopy (FCS): measures intensity fluctuations from a small observation volume, yielding reaction rate constants and diffusion coefficients.
Validation of master equation models
Testing a master equation model against experiment involves comparing predicted probability distributions (or their moments) with measured histograms of system states. For equilibrium systems, you can verify that detailed balance holds by checking that the ratio of forward and backward transition rates matches the Boltzmann factor: W_{ij} / W_{ji} = e^{−(E_i − E_j)/k_B T}.
System identification techniques
Inferring the transition rate matrix from experimental time-series data is an active area of research. Methods include:
- Maximum likelihood estimation: find the W that makes the observed data most probable.
- Bayesian inference: estimate W along with uncertainty quantification.
- Hidden Markov models: handle cases where the observed signal doesn't directly correspond to the master equation states.
- Machine learning approaches: neural networks and related methods for extracting transition rates from high-dimensional data.