Boltzmann Distribution in Thermodynamics
Derivation and Significance
The Boltzmann distribution describes how particles spread across available energy states when a system is at thermal equilibrium. The derivation starts from a core question: given a fixed total energy and a fixed number of particles, what is the most probable way to distribute those particles among the available energy levels? The answer comes from maximizing the number of microstates (using Lagrange multipliers with constraints on total particle number and total energy).
The result is:

$$\frac{n_i}{N} = \frac{g_i e^{-\varepsilon_i / k_B T}}{\sum_j g_j e^{-\varepsilon_j / k_B T}}$$

where:
- $n_i$ = number of particles in energy state $i$
- $N$ = total number of particles
- $g_i$ = degeneracy of energy state $i$ (the number of distinct quantum states sharing the same energy)
- $\varepsilon_i$ = energy of state $i$
- $k_B$ = Boltzmann constant
- $T$ = absolute temperature
The exponential factor $e^{-\varepsilon_i / k_B T}$ is called the Boltzmann factor. It controls how population falls off with increasing energy. At low temperature, almost all particles sit in the lowest-energy states. As temperature rises, higher-energy states become increasingly populated.
This distribution is the foundation for connecting microscopic properties (energy levels, degeneracies) to macroscopic thermodynamic quantities (temperature, pressure, entropy). Every thermodynamic observable you'll calculate in this unit traces back to it.
Applications and Importance
- Predicts the population of each energy level at a given temperature, which is essential for spectroscopy (relative intensities of spectral lines depend directly on level populations).
- Underlies the Maxwell-Boltzmann speed distribution for gases: the familiar bell-shaped curve of molecular speeds is a consequence of applying the Boltzmann distribution to translational kinetic energy.
- Serves as the starting point for deriving entropy expressions like the Sackur-Tetrode equation for an ideal monatomic gas.
- Explains why chemical equilibria shift with temperature: changing $T$ reshuffles populations among reactant and product energy levels.
A useful limiting-case check: when $\varepsilon_i \gg k_B T$, the Boltzmann factor $e^{-\varepsilon_i / k_B T} \approx 0$, so that state is essentially unoccupied. When $\varepsilon_i \ll k_B T$, the factor approaches 1, and the population is determined mainly by the degeneracy $g_i$.
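The population formula and both limiting cases are easy to check numerically. Here is a minimal Python sketch; the three-level system, its energies, and its degeneracies are hypothetical values chosen purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def populations(energies, degeneracies, T):
    """Fractional populations n_i/N from the Boltzmann distribution."""
    factors = [g * math.exp(-e / (K_B * T)) for e, g in zip(energies, degeneracies)]
    q = sum(factors)  # the normalizing denominator (the partition function)
    return [f / q for f in factors]

# Hypothetical three-level system (energies in J per molecule)
energies = [0.0, 2.0e-21, 6.0e-21]
degeneracies = [1, 2, 1]

cold = populations(energies, degeneracies, 10.0)    # k_B*T << level spacings
hot = populations(energies, degeneracies, 10000.0)  # k_B*T >> level spacings
```

At the low temperature, essentially all population sits in the ground state; at the high temperature, the ratio of populations approaches the ratio of degeneracies, as the limiting-case check above predicts.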

Partition Functions in Thermodynamics
Definition and Role
The partition function $q$ is the denominator of the Boltzmann distribution. It sums the Boltzmann factors over every state in the system (grouping states of equal energy via the degeneracy $g_i$):

$$q = \sum_i g_i e^{-\varepsilon_i / k_B T}$$

Think of $q$ as a measure of how many energy states are thermally accessible at temperature $T$. At very low temperatures, only the ground state contributes significantly, so $q \approx g_0$. At very high temperatures, many states contribute, and $q$ becomes large.
Why does $q$ matter so much? Because once you have the partition function, you can extract every equilibrium thermodynamic property by taking appropriate derivatives. It's the single function that encodes all the statistical information about the system.
The form of $q$ depends on the system: an ideal gas, a collection of harmonic oscillators, a set of spins in a magnetic field, etc. For molecular systems, $q$ also depends on which degrees of freedom you include (translational, rotational, vibrational, electronic).
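The "number of thermally accessible states" picture can be made concrete by evaluating the sum directly. The sketch below uses a hypothetical ladder of evenly spaced, nondegenerate levels (the spacing is an arbitrary illustrative value):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def q_evenly_spaced(spacing, n_levels, T):
    """Partition function for n_levels nondegenerate states spaced by `spacing` (J)."""
    return sum(math.exp(-i * spacing / (K_B * T)) for i in range(n_levels))

spacing = 1.0e-21  # hypothetical level spacing, J
q_low = q_evenly_spaced(spacing, 100, 5.0)      # k_B*T << spacing: q -> 1
q_high = q_evenly_spaced(spacing, 100, 2000.0)  # k_B*T >> spacing: q is large
```

At the low temperature only the ground state contributes and $q$ is essentially 1; at the high temperature a few dozen levels contribute appreciably and $q$ grows accordingly.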

Thermodynamic Quantities from Partition Functions
Here $\beta \equiv 1/(k_B T)$, which is a convenient shorthand that simplifies many expressions.
- Internal energy: $U = -\left(\frac{\partial \ln Q}{\partial \beta}\right)_{N,V}$. Equivalently: $U = k_B T^2 \left(\frac{\partial \ln Q}{\partial T}\right)_{N,V}$. Both forms appear on exams; be comfortable with either.
- Helmholtz free energy: $A = -k_B T \ln Q$
- Entropy: $S = -\left(\frac{\partial A}{\partial T}\right)_{N,V}$. This can also be written as $S = k_B \ln Q + \frac{U}{T}$, which follows directly from $A = U - TS$.
- Pressure: $P = -\left(\frac{\partial A}{\partial V}\right)_{N,T} = k_B T \left(\frac{\partial \ln Q}{\partial V}\right)_{N,T}$
The pattern here is worth noticing: $\ln Q$ and its derivatives with respect to $T$, $V$, and $\beta$ give you everything. If you can write down $Q$ for a system, you can compute all its thermodynamic properties.
Note on distinguishable vs. indistinguishable particles: For indistinguishable, independent particles, the system partition function is $Q = q^N / N!$, where the $N!$ corrects for overcounting (this avoids the Gibbs paradox). The formulas above for $U$, $S$, etc. should use this corrected $Q$ when dealing with identical particles like gas molecules.
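As a sanity check on the derivative route to thermodynamic quantities, this sketch computes $U = -\partial \ln q / \partial \beta$ numerically for a single quantum harmonic oscillator and compares it with the known closed form $U = \hbar\omega/2 + \hbar\omega/(e^{\beta\hbar\omega} - 1)$. The frequency is an arbitrary illustrative value:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def ln_q_ho(beta, omega):
    """ln q for one quantum harmonic oscillator, zero-point energy included."""
    return -beta * HBAR * omega / 2 - math.log(1 - math.exp(-beta * HBAR * omega))

def internal_energy(ln_q, beta, db=1e-6):
    """U = -d(ln q)/d(beta), estimated with a central finite difference."""
    return -(ln_q(beta * (1 + db)) - ln_q(beta * (1 - db))) / (2 * beta * db)

omega = 2.0e13  # hypothetical angular frequency, rad/s
T = 300.0
beta = 1 / (K_B * T)

U_numeric = internal_energy(lambda b: ln_q_ho(b, omega), beta)
U_exact = HBAR * omega / 2 + HBAR * omega / (math.exp(beta * HBAR * omega) - 1)
```

The two values agree to high precision, which is a useful way to convince yourself that the derivative formulas really do recover $U$ from $\ln q$ alone.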
Calculating Partition Functions for Simple Systems
Ideal Gas
For a molecule in an ideal gas, the total single-molecule partition function factors into independent contributions from each type of motion:

$$q = q_{\text{trans}} \, q_{\text{rot}} \, q_{\text{vib}} \, q_{\text{elec}}$$
This factorization works because translational, rotational, vibrational, and electronic energies are approximately independent of each other (the Born-Oppenheimer approximation and similar separability assumptions).
Translational partition function:

$$q_{\text{trans}} = \left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} V$$

- $m$ = mass of the molecule
- $h$ = Planck's constant
- $V$ = volume of the container

This comes from replacing the sum over translational quantum states with an integral (valid because translational energy levels are extremely closely spaced). The quantity $\Lambda = h / \sqrt{2\pi m k_B T}$ is called the thermal de Broglie wavelength, so you can write $q_{\text{trans}} = V / \Lambda^3$.
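A quick order-of-magnitude evaluation shows how enormous $q_\text{trans}$ is for a real gas. The choice of argon at 298 K in a 1 L container is just an illustrative example:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def q_trans(m, T, V):
    """Translational partition function (2*pi*m*k_B*T / h^2)^(3/2) * V."""
    return (2 * math.pi * m * K_B * T / H**2) ** 1.5 * V

m_ar = 39.95 * 1.66054e-27     # mass of one argon atom, kg
q = q_trans(m_ar, 298.0, 1.0e-3)  # 1 L container
```

The result is on the order of $10^{29}$: a staggering number of translational states are thermally accessible, which is exactly why the sum-to-integral replacement is so accurate.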
Rotational partition function:
- Linear molecule: $q_{\text{rot}} = \frac{8\pi^2 I k_B T}{\sigma h^2}$, where $I$ is the moment of inertia and $\sigma$ is the symmetry number ($\sigma = 1$ for heteronuclear diatomics like HCl, $\sigma = 2$ for homonuclear diatomics like $\mathrm{N_2}$). This high-temperature limit is valid when $k_B T \gg$ the rotational energy spacing.
- Nonlinear molecule: $q_{\text{rot}} = \frac{\sqrt{\pi}}{\sigma} \left(\frac{8\pi^2 k_B T}{h^2}\right)^{3/2} \sqrt{I_A I_B I_C}$, where $I_A$, $I_B$, $I_C$ are the three principal moments of inertia.
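The linear-molecule formula can be evaluated in a few lines. The moment of inertia used below is an approximate value for HCl, included only as an illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def q_rot_linear(I, sigma, T):
    """High-temperature rotational partition function for a linear molecule."""
    return 8 * math.pi**2 * I * K_B * T / (sigma * H**2)

I_HCl = 2.64e-47  # approximate moment of inertia of HCl, kg*m^2
q = q_rot_linear(I_HCl, 1, 298.0)  # sigma = 1 for a heteronuclear diatomic
```

The answer is of order 20: far fewer rotational states are accessible than translational ones, but still enough that the high-temperature (classical) limit is a good approximation at room temperature.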
Vibrational partition function (harmonic oscillator model):

$$q_{\text{vib}} = \frac{1}{1 - e^{-h\nu / k_B T}}$$

where $\nu$ is the vibrational frequency. This expression assumes the zero of energy is set at the ground vibrational state ($v = 0$). If you include the zero-point energy $\frac{1}{2} h\nu$, the partition function becomes $q_{\text{vib}} = \frac{e^{-h\nu / 2 k_B T}}{1 - e^{-h\nu / k_B T}}$. Be careful about which convention a problem uses.
For a polyatomic molecule with multiple vibrational modes, the total vibrational partition function is the product over all normal modes: $q_{\text{vib}} = \prod_j \frac{1}{1 - e^{-h\nu_j / k_B T}}$.
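The mode product is straightforward to compute. The frequencies below are approximate values for the three normal modes of water (bend plus two stretches), used here only as an illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def q_vib(frequencies_hz, T):
    """Vibrational partition function; zero of energy at v = 0 for each mode."""
    q = 1.0
    for nu in frequencies_hz:
        q *= 1.0 / (1.0 - math.exp(-H * nu / (K_B * T)))
    return q

# Approximate normal-mode frequencies of water (bend, sym. and asym. stretch), Hz
modes = [4.78e13, 1.10e14, 1.13e14]
q_298 = q_vib(modes, 298.0)
q_3000 = q_vib(modes, 3000.0)
```

At room temperature $q_\text{vib}$ is barely above 1 because $h\nu \gg k_B T$ for all three modes; only at flame-like temperatures do excited vibrational states contribute appreciably.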
Other Simple Systems
Quantum harmonic oscillator:

$$q = \frac{e^{-\beta \hbar \omega / 2}}{1 - e^{-\beta \hbar \omega}}$$

where $\omega$ is the angular frequency. The geometric series converges because $e^{-\beta \hbar \omega} < 1$ for all finite temperatures. This is the same result as the vibrational partition function above, just written with $\omega$ instead of $\nu$ and with the zero-point energy included explicitly.
Two-level system:

$$q = g_0 + g_1 e^{-\varepsilon / k_B T}$$

where $g_0$ and $g_1$ are the degeneracies of the ground and excited states, and $\varepsilon$ is the energy gap. This is one of the few systems where the partition function is a simple closed-form expression with no infinite sum. It's useful for modeling electronic excitations, spin systems, and any situation with just two accessible states.
At low $T$ ($k_B T \ll \varepsilon$), $q \approx g_0$ and only the ground state is populated. At high $T$ ($k_B T \gg \varepsilon$), $q \approx g_0 + g_1$ and both levels are populated in proportion to their degeneracies.
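Both limits can be verified directly. The degeneracies and the energy gap below are arbitrary illustrative choices:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def two_level(g0, g1, gap, T):
    """Partition function and excited-state fraction for a two-level system."""
    boltz = g1 * math.exp(-gap / (K_B * T))
    q = g0 + boltz
    return q, boltz / q

gap = 4.0e-21  # hypothetical energy gap, J (roughly 2.4 kJ/mol)
q_cold, f_cold = two_level(1, 3, gap, 20.0)     # k_B*T << gap
q_hot, f_hot = two_level(1, 3, gap, 50000.0)    # k_B*T >> gap
```

In the cold limit $q \to g_0 = 1$ and the excited-state fraction vanishes; in the hot limit the excited-state fraction approaches $g_1 / (g_0 + g_1) = 3/4$, the degeneracy-weighted value.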
System of independent particles:
- Distinguishable particles: $Q = q^N$
- Indistinguishable particles: $Q = \frac{q^N}{N!}$

The $N!$ factor for indistinguishable particles is critical. Without it, the entropy you calculate won't be extensive (it won't scale properly with system size), which is the essence of the Gibbs paradox.
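The role of the $N!$ can be demonstrated numerically: compute the entropy per particle of an ideal monatomic gas from $S = k_B \ln Q + U/T$ (with $U = \frac{3}{2} N k_B T$), then double both $N$ and $V$. With the $N!$ correction the entropy per particle is unchanged; without it, it is not. All numbers here are illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s

def entropy_per_particle(N, V, T, m, indistinguishable=True):
    """S/N for an ideal monatomic gas via S = k_B*ln Q + U/T, U = (3/2)*N*k_B*T."""
    q = (2 * math.pi * m * K_B * T / H**2) ** 1.5 * V  # single-particle q_trans
    ln_Q = N * math.log(q)
    if indistinguishable:
        ln_Q -= math.lgamma(N + 1)  # subtract ln N! (lgamma handles huge N)
    return (K_B * ln_Q + 1.5 * N * K_B) / N

m_ar = 6.63e-26  # mass of an argon atom, kg (illustrative gas choice)
s1 = entropy_per_particle(1e22, 1.0e-3, 298.0, m_ar)
s2 = entropy_per_particle(2e22, 2.0e-3, 298.0, m_ar)         # double N and V
w1 = entropy_per_particle(1e22, 1.0e-3, 298.0, m_ar, False)  # no N! correction
w2 = entropy_per_particle(2e22, 2.0e-3, 298.0, m_ar, False)
```

With the correction, `s1` and `s2` agree to many digits (entropy per particle is intensive); without it, `w2` exceeds `w1` by $k_B \ln 2$ per particle, which is the Gibbs paradox in numerical form.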