Molecular Physics Unit 15 Review

15.1 Principles of molecular dynamics simulations

Written by the Fiveable Content Team • Last updated August 2025

Principles of Molecular Dynamics

Molecular dynamics (MD) simulations let you track how atoms and molecules move and interact over time by numerically solving Newton's equations of motion. They bridge the gap between microscopic particle behavior and the macroscopic properties you can measure in a lab, like temperature, pressure, and diffusion rates.

By carefully choosing force fields, boundary conditions, and integration algorithms, you can simulate everything from simple liquids to complex biomolecules. This section covers the core principles behind setting up and running these simulations.

Fundamental Concepts

At its core, an MD simulation does something conceptually simple: it calculates the force on every particle at a given instant, uses those forces to update positions and velocities, then repeats. The physics comes from Newton's second law:

$F_i = m_i a_i$

where $F_i$ is the net force on particle $i$, $m_i$ is its mass, and $a_i$ is its acceleration.

The forces between particles come from interatomic potentials (also called force fields). These are mathematical functions that describe how the potential energy of the system varies with particle positions. At each discrete time step, the simulation:

  1. Computes the force on every particle from the current configuration using the force field.
  2. Calculates the resulting acceleration for each particle.
  3. Updates positions and velocities using a numerical integration algorithm.
  4. Repeats for the desired number of time steps.

The trajectory you get (positions and velocities over time) is the raw output of the simulation, and it contains all the information you need to extract physical properties.
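The four-step loop above can be sketched in a few lines of Python. This is a minimal illustration, not any particular MD package's API; the `forces` callable, the array shapes, and the function names are assumptions:

```python
import numpy as np

def md_step(positions, velocities, masses, forces, dt):
    """Advance one time step using the velocity Verlet scheme.

    `forces(positions)` is assumed to return the force on every particle
    for the current configuration (step 1); the rest of the body carries
    out steps 2-3 of the loop described above.
    """
    a = forces(positions) / masses[:, None]           # acceleration from F = m a
    positions = positions + velocities * dt + 0.5 * a * dt**2
    a_new = forces(positions) / masses[:, None]       # forces at the new positions
    velocities = velocities + 0.5 * (a + a_new) * dt
    return positions, velocities

def run_md(positions, velocities, masses, forces, dt, n_steps):
    """Step 4: repeat, recording the trajectory (positions over time)."""
    trajectory = [positions.copy()]
    for _ in range(n_steps):
        positions, velocities = md_step(positions, velocities, masses, forces, dt)
        trajectory.append(positions.copy())
    return np.array(trajectory), velocities
```

The returned trajectory array is the raw output referred to above: every downstream analysis starts from it.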

Statistical Mechanics Connection

Statistical mechanics is what lets you translate particle-level trajectories into thermodynamic quantities. From the positions and velocities recorded during a simulation, you can calculate:

  • Equilibrium properties like radial distribution functions (which describe how particle density varies as a function of distance from a reference particle) and average energies.
  • Transport properties like diffusion coefficients and viscosities, which describe how matter or momentum moves through the system.
  • Non-equilibrium behavior by applying external perturbations such as shear flow or temperature gradients and observing the system's response.

The key assumption underlying most MD analysis is the ergodic hypothesis: that time averages over a sufficiently long trajectory are equivalent to ensemble averages. In practice, this means running your simulation long enough that the system samples a representative portion of phase space.
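As a concrete example of turning a trajectory into a thermodynamic quantity, the sketch below estimates the kinetic temperature of each recorded frame via equipartition and then time-averages it. Reduced units and function names are illustrative assumptions:

```python
import numpy as np

K_B = 1.0  # Boltzmann constant in reduced (simulation) units -- an assumption

def instantaneous_temperature(velocities, masses):
    """Kinetic temperature of one frame from equipartition:
    (1/2) sum m v^2 = (3N/2) kB T for N particles in 3D."""
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    n_dof = 3 * len(masses)          # translational degrees of freedom
    return 2.0 * kinetic / (n_dof * K_B)

def time_averaged_temperature(velocity_frames, masses):
    """Average the instantaneous temperature over the recorded frames.
    Under the ergodic hypothesis this estimates the ensemble-average T."""
    temps = [instantaneous_temperature(v, masses) for v in velocity_frames]
    return np.mean(temps)
```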

Molecular Dynamics Setup


Force Fields

The choice of force field is arguably the most important decision in setting up a simulation, because it determines the accuracy of every force calculation.

A force field typically includes terms for:

  • Bonded interactions: bond stretching, angle bending, and torsional (dihedral) rotations. These describe the energy cost of deforming the internal geometry of a molecule.
  • Non-bonded interactions: van der Waals forces (short-range attraction and repulsion) and electrostatic interactions (Coulomb forces between partial charges).

Different force fields are designed for different types of systems:

  • The Lennard-Jones potential is commonly used for simple liquids and noble gases. It models the van der Waals interaction with an $r^{-12}$ repulsive term and an $r^{-6}$ attractive term: $V(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$
  • Force fields like CHARMM, AMBER, and OPLS are parameterized for biomolecular systems (proteins, nucleic acids, lipids) and include detailed bonded and non-bonded terms.

Force field parameters are derived from a combination of experimental data (e.g., spectroscopic measurements, crystal structures) and quantum mechanical calculations. The quality of these parameters directly limits the reliability of your simulation results.
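The Lennard-Jones formula above translates directly into code. Here is a small sketch in reduced units (ε = σ = 1 by default); the function names are ours, not from any library:

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r)**6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lennard_jones_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of the pair force, F = -dV/dr
    = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r."""
    sr6 = (sigma / r)**6
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r
```

A quick sanity check: the minimum sits at $r = 2^{1/6}\sigma$, where the energy equals $-\epsilon$ and the force vanishes.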

System Configuration

Before running a simulation, you need to define the initial state of your system:

  • Initial positions are often taken from experimental structures (X-ray crystallography, NMR) or generated computationally through energy minimization or Monte Carlo methods.
  • Initial velocities are randomly assigned from a Maxwell-Boltzmann distribution at the target temperature. The total linear momentum is typically set to zero so the center of mass doesn't drift.
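The velocity-initialization step can be sketched as follows, assuming reduced units where $k_B = 1$ by default; the function name and signature are illustrative:

```python
import numpy as np

def initial_velocities(masses, temperature, k_b=1.0, rng=None):
    """Draw velocities from the Maxwell-Boltzmann distribution at the target
    temperature: each Cartesian component is Gaussian with variance kB*T/m.
    The total linear momentum is then removed so the center of mass
    does not drift."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(k_b * temperature / masses)        # per-particle std dev
    v = rng.normal(size=(len(masses), 3)) * sigma[:, None]
    # subtract the center-of-mass velocity: sum(m_i v_i) -> 0
    v -= np.sum(masses[:, None] * v, axis=0) / np.sum(masses)
    return v
```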

Thermodynamic Ensembles

The ensemble you choose determines which macroscopic quantities are held constant during the simulation:

| Ensemble | Held Constant | Typical Use |
| --- | --- | --- |
| NVE (microcanonical) | Number of particles, Volume, Energy | Testing energy conservation; validating integrators |
| NVT (canonical) | Number of particles, Volume, Temperature | Most common for equilibrium simulations |
| NPT (isothermal-isobaric) | Number of particles, Pressure, Temperature | Simulating systems at experimental conditions |

To maintain constant temperature or pressure, you need additional algorithms:

  • Thermostats (Nosé-Hoover, Langevin) couple the system to a heat bath to control temperature.
  • Barostats (Berendsen, Parrinello-Rahman) adjust the simulation box volume to maintain target pressure.
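As a minimal illustration of how a thermostat intervenes, here is a sketch of Berendsen-style weak-coupling velocity rescaling in reduced units (the function name and signature are ours, not from any package):

```python
import numpy as np

def berendsen_scale(velocities, masses, target_t, dt, tau, k_b=1.0):
    """Berendsen weak-coupling thermostat: rescale velocities so the
    instantaneous temperature relaxes toward target_t with time constant tau.
    Scale factor: lambda^2 = 1 + (dt/tau) * (T_target / T_current - 1)."""
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    t_current = 2.0 * kinetic / (3 * len(masses) * k_b)
    lam = np.sqrt(1.0 + (dt / tau) * (target_t / t_current - 1.0))
    return velocities * lam
```

Note that Berendsen coupling does not sample the canonical ensemble exactly; thermostats like Nosé-Hoover or Langevin are preferred when rigorous NVT statistics matter.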

Integration Algorithms

The equations of motion are solved numerically using integration algorithms. Two widely used options:

  • Verlet algorithm: Uses positions at the current and previous time steps to compute the next position. It's time-reversible and conserves energy well over long simulations.
  • Leapfrog algorithm: A variant of Verlet where velocities are calculated at half-integer time steps. It gives identical trajectories to Verlet but provides velocities more directly.

Both are symplectic integrators, meaning they preserve the geometric structure of the equations of motion, which helps prevent long-term energy drift in NVE simulations.
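A leapfrog step is only a few lines. This sketch assumes a hypothetical `forces` callable and stores velocities at half-integer time steps, as described above:

```python
import numpy as np

def leapfrog(x, v_half, masses, forces, dt, n_steps):
    """Leapfrog integration: velocities live at half-integer steps
    (t + dt/2), positions at integer steps. Time-reversible and symplectic."""
    for _ in range(n_steps):
        x = x + v_half * dt                                  # x(t+dt) from v(t+dt/2)
        v_half = v_half + forces(x) / masses[:, None] * dt   # v(t+3dt/2)
    return x, v_half
```

Because the scheme is symplectic, a harmonic oscillator integrated this way stays bounded for very long runs instead of spiraling outward as it would under naive forward Euler.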

Boundary Conditions

Boundary conditions define what happens at the edges of your simulation box. Without them, particles at the surface would experience unrealistic forces.

Periodic boundary conditions (PBCs) are the most common approach. The simulation box is effectively replicated infinitely in all directions. When a particle exits one face of the box, it re-enters through the opposite face. This:

  • Eliminates surface effects, so every particle experiences a bulk-like environment.
  • Maintains constant density throughout the simulation.
  • Requires careful treatment of long-range interactions (you need to account for interactions with periodic images, often using methods like Ewald summation for electrostatics).
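Position wrapping and the minimum-image convention (a standard companion to PBCs: use the nearest periodic image when computing a pair displacement) can be sketched as:

```python
import numpy as np

def wrap(positions, box):
    """Wrap coordinates back into the periodic box [0, box) along each axis:
    a particle leaving one face re-enters through the opposite face."""
    return positions % box

def minimum_image(r_ij, box):
    """Minimum-image convention: replace a displacement vector by its
    nearest periodic image, so no component exceeds half the box length."""
    return r_ij - box * np.round(r_ij / box)
```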

Fixed or reflective boundary conditions are used when you actually want surfaces or confinement, such as simulations of nanopores, thin films, or molecules adsorbed on a surface.

More specialized boundary conditions exist for specific applications. Grand canonical boundary conditions, for example, allow the number of particles to fluctuate, which is useful for studying adsorption or membrane permeation.

Time Step Selection

The time step ($\Delta t$) controls how finely you discretize the trajectory. Getting it right is a balancing act between accuracy and computational cost.

If the time step is too large, the integration algorithm can't accurately capture fast motions. Forces change significantly between updates, energy conservation breaks down, and the simulation can become unstable ("blow up" with unphysical energy increases).

If the time step is too small, you waste computational resources. Each step costs roughly the same amount of computation, so halving the time step doubles the cost of reaching the same total simulation time.

Choosing the Right Time Step

A standard rule of thumb: set $\Delta t$ to roughly one-tenth the period of the fastest motion in your system. For most atomistic simulations, the fastest motions are bond vibrations involving hydrogen atoms (period ~10 fs), which gives a typical time step of ~1 fs.
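The rule of thumb is easy to check numerically. Assuming a C-H stretch near 3000 cm⁻¹ (an illustrative, approximate value), the period and suggested time step work out as follows:

```python
C_CM_PER_S = 2.998e10          # speed of light in cm/s

def vibration_period_fs(wavenumber_cm):
    """Period of a vibration from its spectroscopic wavenumber (cm^-1):
    nu = c * wavenumber, T = 1/nu, converted to femtoseconds."""
    return 1.0 / (C_CM_PER_S * wavenumber_cm) * 1e15

def suggested_time_step_fs(wavenumber_cm, fraction=0.1):
    """Rule of thumb: dt ~ one-tenth of the fastest vibrational period."""
    return fraction * vibration_period_fs(wavenumber_cm)
```

For a 3000 cm⁻¹ stretch this gives a period of roughly 11 fs and a suggested step just over 1 fs, consistent with the ~1 fs default above.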

Techniques for Using Larger Time Steps

Several methods let you push the time step higher without sacrificing stability:

  1. Constraint algorithms (SHAKE, RATTLE, LINCS): These fix the lengths of bonds involving hydrogen atoms, removing the fastest vibrations from the system. With constraints, you can typically use a 2 fs time step instead of 1 fs, doubling your effective simulation speed.

  2. Multiple time step methods (r-RESPA): Different interactions are updated at different frequencies. Fast-varying short-range forces are computed every small time step, while slowly varying long-range forces are computed less frequently. This avoids recalculating expensive long-range electrostatics at every step.

  3. Adaptive time stepping: The time step is automatically adjusted based on how rapidly forces are changing. When the system is evolving quickly (e.g., during a collision), the step shrinks; when dynamics are slow, it grows. This is less common in standard MD but useful in certain specialized applications.
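As a minimal sketch of the multiple-time-step idea (not the full r-RESPA derivation): slow forces kick the velocities at the outer step, while fast forces are integrated on an inner loop with a smaller step. The splitting into `fast_forces` and `slow_forces` callables is an assumption for illustration:

```python
import numpy as np

def respa_step(x, v, masses, fast_forces, slow_forces, dt_outer, n_inner):
    """One outer step of a minimal r-RESPA-style scheme: half-kick with the
    slow (e.g. long-range) forces, integrate the fast (e.g. short-range)
    forces with velocity Verlet at dt_outer / n_inner, then half-kick again."""
    inv_m = 1.0 / masses[:, None]
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * slow_forces(x) * inv_m       # half kick, slow
    for _ in range(n_inner):                              # inner loop, fast
        v = v + 0.5 * dt_inner * fast_forces(x) * inv_m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * fast_forces(x) * inv_m
    v = v + 0.5 * dt_outer * slow_forces(x) * inv_m       # half kick, slow
    return x, v
```

The expensive `slow_forces` call runs once per outer step instead of once per inner step, which is the source of the speedup.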

The choice of time step, constraints, and integration algorithm are all interconnected. Changing one often requires reconsidering the others to maintain a stable, accurate simulation.