Differential equations describe how quantities change over time or space. They're the primary language for modeling dynamic systems in science and engineering, from population growth to heat flow to electrical circuits. This section covers the main types, solution techniques, and applications you'll need.
Fundamentals of Differential Equations
A differential equation is an equation that relates a function to its derivatives. Because derivatives describe rates of change, these equations naturally capture processes that evolve over time or vary across space.
The key to working with differential equations is recognizing what type you're dealing with, since the type determines which solution method to use.
Types of Differential Equations
Ordinary Differential Equations (ODEs) involve functions of a single independent variable (usually time or position) and their derivatives. Most of the equations you'll encounter first fall into this category.
Partial Differential Equations (PDEs) involve functions of multiple independent variables and their partial derivatives. These show up when a quantity depends on both time and space, like temperature varying across a metal plate as it cools.
Within each category, equations can be:
- Linear: the dependent variable and its derivatives appear only to the first power and aren't multiplied together. These are generally easier to solve.
- Nonlinear: contain terms like $y^2$, $\sin(y)$, or $y\,y'$. These often produce much more complex behavior and are harder to solve analytically.
Order and Degree
Order is the highest derivative that appears in the equation.
- A first-order equation contains only $y'$
- A second-order equation contains $y''$ (and possibly $y'$ too)
Degree is the power of the highest-order derivative, once the equation is written in polynomial form with respect to derivatives. For example, if $(y'')^3$ appears, the degree is 3.
Knowing the order and degree tells you which solution methods apply.
Solutions and Initial Conditions
A general solution contains arbitrary constants and represents a whole family of curves that satisfy the equation. For a second-order ODE, you'll typically have two arbitrary constants.
A particular solution pins down those constants using extra information:
- Initial value problems (IVPs) specify conditions at a single point (e.g., $y(0) = 1$, $y'(0) = 0$)
- Boundary value problems (BVPs) specify conditions at different points (e.g., $y(0) = 0$, $y(L) = 0$)
Existence and uniqueness theorems tell you when a given problem actually has a solution, and whether that solution is the only one. This matters because not every differential equation with initial conditions is guaranteed to have a unique answer.
Ordinary Differential Equations
ODEs are the starting point for differential equations. They model any situation where a quantity's rate of change depends on the quantity itself or on a single independent variable.
First-Order ODEs
These involve only the first derivative and take various forms:
- Linear first-order: $y' + p(x)\,y = q(x)$. Solved using an integrating factor.
- Separable: can be rearranged so all $y$ terms are on one side and all $x$ terms on the other, then integrated.
- Exact: written as $M(x,y)\,dx + N(x,y)\,dy = 0$, these satisfy a specific condition ($\partial M/\partial y = \partial N/\partial x$) that allows direct integration.
Common applications: exponential growth/decay (radioactive decay, compound interest), the logistic equation for population growth, and Newton's law of cooling.
Second-Order ODEs
These contain second derivatives and frequently describe systems that oscillate. The general linear form with constant coefficients is: $a\,y'' + b\,y' + c\,y = f(x)$
- When $f(x) = 0$ (homogeneous), you solve by finding roots of the characteristic equation $a r^2 + b r + c = 0$. The nature of the roots (real distinct, repeated, or complex) determines the solution form.
- When $f(x) \neq 0$ (non-homogeneous), you find the homogeneous solution first, then add a particular solution using methods like undetermined coefficients or variation of parameters.
These equations model spring-mass systems, pendulums, and RLC electrical circuits.
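The root classification for the constant-coefficient case can be checked mechanically. A minimal sketch in Python (the helper name `characteristic_roots` and the textual solution forms are illustrative, not from any library):

```python
import cmath

def characteristic_roots(a, b, c):
    """Roots of a*r**2 + b*r + c = 0 and the matching form of the
    homogeneous solution of a*y'' + b*y' + c*y = 0."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        form = "C1*exp(r1*t) + C2*exp(r2*t)"            # real distinct roots
    elif disc == 0:
        form = "(C1 + C2*t)*exp(r*t)"                   # repeated real root
    else:
        form = "exp(p*t)*(C1*cos(q*t) + C2*sin(q*t))"   # complex pair p ± qi
    return r1, r2, form

# y'' + y = 0 (simple harmonic motion) has roots ±i, so sinusoidal solutions
r1, r2, form = characteristic_roots(1, 0, 1)
```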
Higher-Order ODEs
Equations with third derivatives or higher arise in beam deflection, multi-body dynamics, and other advanced engineering problems. A useful technique: any $n$th-order ODE can be rewritten as a system of $n$ first-order ODEs, which makes both analysis and numerical solution more manageable.
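To illustrate the reduction, here is a small sketch for the linear homogeneous constant-coefficient case (`linear_ode_system` is an illustrative helper, not a library function):

```python
def linear_ode_system(coeffs):
    """Convert y^(n) + coeffs[n-1]*y^(n-1) + ... + coeffs[0]*y = 0 into a
    first-order system x' = f(x) with state x = (y, y', ..., y^(n-1))."""
    def f(x):
        dx = list(x[1:])                                     # x_i' = x_{i+1}
        dx.append(-sum(c * xi for c, xi in zip(coeffs, x)))  # from the ODE itself
        return dx
    return f

# y'' + y = 0 becomes x = (y, y'), x' = (y', -y)
f = linear_ode_system([1.0, 0.0])
dx = f([2.0, 3.0])   # -> [3.0, -2.0]
```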
Partial Differential Equations
PDEs describe phenomena that depend on multiple independent variables. Temperature in a rod depends on both position and time. A vibrating drumhead depends on two spatial coordinates and time.
Classification of PDEs
The three main types correspond to fundamentally different physical behaviors:
- Elliptic (e.g., Laplace's equation): describe steady-state or equilibrium problems. No time dependence.
- Parabolic (e.g., heat equation): model diffusion processes. Information spreads smoothly over time.
- Hyperbolic (e.g., wave equation): describe wave propagation. Information travels at finite speed.
This classification matters because each type requires different solution strategies and has different mathematical properties.
Common PDEs in Physics
- Wave equation: $\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$ — propagation of sound, light, and mechanical waves
- Heat equation: $\frac{\partial u}{\partial t} = \alpha \nabla^2 u$ — heat diffusion through materials
- Laplace's equation: $\nabla^2 u = 0$ — electrostatic potentials, steady-state temperature distributions
- Schrödinger equation: $i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi$ — quantum mechanics
- Navier-Stokes equations — fluid flow (these are notoriously difficult to solve in general)
Methods of Solving ODEs
Separation of Variables
This works when you can get all the $y$ terms on one side and all the $x$ terms on the other.
- Start with an equation like $\frac{dy}{dx} = g(x)\,h(y)$
- Rewrite as $\frac{dy}{h(y)} = g(x)\,dx$
- Integrate both sides independently
- Solve for $y$ if possible
This is the simplest technique and the first one to try. It won't work if the variables can't be fully separated.
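As a worked example (an equation chosen here for illustration), separating $\frac{dy}{dx} = x\,y$ gives $\int \frac{dy}{y} = \int x\,dx$, hence $y = A e^{x^2/2}$. A quick numerical check:

```python
import math

def y(x, A=1.0):
    # Candidate solution of dy/dx = x*y from separation of variables:
    # dy/y = x dx  =>  ln|y| = x**2/2 + C  =>  y = A*exp(x**2/2)
    return A * math.exp(x * x / 2.0)

# Central-difference check that dy/dx = x*y at a sample point
x0, h = 0.7, 1e-6
deriv = (y(x0 + h) - y(x0 - h)) / (2 * h)
residual = abs(deriv - x0 * y(x0))   # should be near zero
```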
Integrating Factor Method
For first-order linear ODEs of the form $y' + p(x)\,y = q(x)$:
- Compute the integrating factor: $\mu(x) = e^{\int p(x)\,dx}$
- Multiply every term in the equation by $\mu(x)$
- The left side becomes $\frac{d}{dx}\left[\mu(x)\,y\right]$
- Integrate both sides to find $y$
The key insight: multiplying by $\mu(x)$ turns the left side into an exact derivative, which you can integrate directly.
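For a concrete instance (an equation chosen here for illustration), take $y' + 2y = 1$: the integrating factor is $\mu = e^{2x}$, so $(e^{2x}y)' = e^{2x}$ and $y = \tfrac{1}{2} + C e^{-2x}$. Verifying numerically:

```python
import math

def y(x, C=1.0):
    # Solution of y' + 2y = 1 via the integrating factor mu(x) = exp(2x):
    # (exp(2x)*y)' = exp(2x)  =>  exp(2x)*y = exp(2x)/2 + C  =>  y = 1/2 + C*exp(-2x)
    return 0.5 + C * math.exp(-2.0 * x)

# Check that y' + 2y = 1 holds at a sample point (central difference for y')
x0, h = 0.3, 1e-6
lhs = (y(x0 + h) - y(x0 - h)) / (2 * h) + 2.0 * y(x0)
```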
Variation of Parameters
This technique finds particular solutions to non-homogeneous linear ODEs when other methods (like undetermined coefficients) don't apply.
- First solve the homogeneous equation to get the complementary solution
- Assume a particular solution with the same form, but replace the constants with unknown functions
- Set up and solve a system of equations to determine those functions
- The particular solution plus the complementary solution gives the general solution
This method is more general than undetermined coefficients but involves more computation.

Techniques for Solving PDEs
Method of Characteristics
Used primarily for first-order and hyperbolic PDEs. The idea is to find curves (called characteristics) along which the PDE reduces to an ODE.
- Write the PDE and identify the characteristic equations
- Solve the system of ODEs along the characteristic curves
- Use initial or boundary conditions to determine the full solution
This is especially effective for transport and wave-like problems.
Separation of Variables for PDEs
The most widely used technique for linear PDEs with simple boundary conditions.
- Assume the solution is a product of functions, each depending on only one variable: $u(x, t) = X(x)\,T(t)$
- Substitute into the PDE and separate so each side depends on only one variable
- Set each side equal to a separation constant
- Solve the resulting ODEs independently
- Combine solutions and apply boundary/initial conditions
This often leads to infinite series solutions involving Fourier series.
Fourier Series Solutions
When separation of variables produces periodic boundary conditions, the solution takes the form of an infinite sum of sines and cosines.
- Any reasonable periodic function can be decomposed into sinusoidal components
- The coefficients are found using the orthogonality of sine and cosine functions
- Convergence of the series depends on the smoothness of the function being represented
Fourier series are central to solving the heat equation, wave equation, and many vibration problems.
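To make the coefficient computation concrete, here is a sketch that recovers the sine coefficients of a square wave numerically (midpoint-rule integration; the known closed form is $b_n = 4/(n\pi)$ for odd $n$ and $0$ for even $n$):

```python
import math

def f(x):
    # Square wave on (-pi, pi): +1 for x > 0, -1 for x < 0 (an odd function,
    # so all cosine coefficients vanish)
    return 1.0 if x > 0 else -1.0

def b(n, samples=20000):
    # b_n = (1/pi) * integral_{-pi}^{pi} f(x)*sin(n*x) dx, via the midpoint rule
    h = 2 * math.pi / samples
    total = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        total += f(x) * math.sin(n * x) * h
    return total / math.pi
```

Orthogonality is what makes this work: multiplying by $\sin(nx)$ and integrating isolates the single coefficient $b_n$.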
Systems of Differential Equations
Real-world systems often involve multiple interacting quantities. A predator-prey model, for instance, tracks both predator and prey populations simultaneously, with each population's rate of change depending on the other.
Linear Systems
A linear system can be written in matrix form: $\mathbf{x}' = A\mathbf{x}$
The solution strategy relies on finding the eigenvalues and eigenvectors of the coefficient matrix $A$. The eigenvalues determine whether solutions grow, decay, or oscillate. Techniques include diagonalization and the matrix exponential.
Applications: coupled oscillators, electrical networks, and economic models with interacting sectors.
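For a 2×2 system, the eigenvalues follow directly from the trace and determinant, which is already enough to decide stability. A minimal sketch (helper names are illustrative):

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of A = [[a, b], [c, d]] from the characteristic
    polynomial lambda**2 - tr(A)*lambda + det(A) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(a, b, c, d):
    # Solutions of x' = A*x decay iff both eigenvalues have negative real part
    l1, l2 = eigenvalues_2x2(a, b, c, d)
    return l1.real < 0 and l2.real < 0

# Damped oscillator x'' + x' + x = 0 as a system: A = [[0, 1], [-1, -1]]
l1, l2 = eigenvalues_2x2(0, 1, -1, -1)   # eigenvalues (-1 ± i*sqrt(3))/2
```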
Nonlinear Systems
Nonlinear systems can exhibit behaviors that linear systems cannot, including limit cycles (sustained oscillations) and chaos (sensitive dependence on initial conditions).
Analytical solutions are rarely possible. Instead, you typically:
- Use linearization near equilibrium points to understand local behavior
- Apply numerical methods for global behavior
- Study qualitative features using phase plane analysis
Famous examples include the Lorenz system (a simplified model of atmospheric convection) and the van der Pol oscillator.
Phase Plane Analysis
For two-dimensional systems, the phase plane plots one variable against the other, with trajectories showing how the system evolves over time.
You can identify:
- Equilibrium points (where trajectories converge, diverge, or spiral)
- Limit cycles (closed trajectories representing periodic behavior)
- Separatrices (curves that divide the phase plane into regions with different behaviors)
This graphical approach gives you qualitative understanding of system dynamics without needing an explicit solution.
Applications of Differential Equations
Population Dynamics Models
The logistic growth model $\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)$ describes population growth with limited resources. Here $r$ is the intrinsic growth rate and $K$ is the carrying capacity. When $P$ is small relative to $K$, growth is nearly exponential; as $P$ approaches $K$, growth slows to zero.
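A quick numerical sketch of this behavior (parameter values chosen arbitrarily for illustration), stepping the logistic equation forward with simple Euler steps:

```python
def logistic_step(P, r, K, dt):
    # One explicit Euler step of dP/dt = r*P*(1 - P/K)
    return P + dt * r * P * (1.0 - P / K)

P, r, K, dt = 10.0, 0.5, 1000.0, 0.01
for _ in range(5000):        # simulate from t = 0 to t = 50
    P = logistic_step(P, r, K, dt)
# P has climbed from 10 toward the carrying capacity K = 1000
```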
Other important models:
- Lotka-Volterra equations: predator-prey interactions producing oscillating populations
- SIR model: tracks Susceptible, Infected, and Recovered populations during an epidemic
- Age-structured models: account for different birth and death rates across age groups
Mechanical Systems
Simple harmonic motion: $\ddot{x} + \omega^2 x = 0$ describes an idealized oscillator (like a mass on a spring with no friction). Solutions are sinusoidal with angular frequency $\omega = \sqrt{k/m}$.
Adding damping and an external driving force gives the forced damped oscillator: $\ddot{x} + 2\zeta\omega_0\,\dot{x} + \omega_0^2\,x = \frac{F(t)}{m}$
where $\zeta$ is the damping ratio and $\omega_0$ is the natural frequency. This equation appears throughout physics and engineering, from suspension systems to building vibration analysis.
Electrical Circuits
Differential equations naturally describe circuits with capacitors and inductors, since these components relate voltage and current through derivatives.
- RC circuit: $R\frac{dq}{dt} + \frac{q}{C} = V(t)$ models capacitor charging and discharging
- RLC circuit: $L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{q}{C} = V(t)$ describes current flow through a resistor, inductor, and capacitor in series
Notice that the RLC circuit equation has the same mathematical form as the forced damped oscillator. This is a powerful example of how differential equations reveal structural similarities between seemingly unrelated physical systems.
Numerical Methods
When analytical solutions aren't available (which is often the case for nonlinear or complex equations), numerical methods approximate the solution at discrete points.
Euler's Method
The simplest numerical approach for IVPs. Starting from a known point, you step forward using the derivative: $y_{n+1} = y_n + h\,f(t_n, y_n)$
where $h$ is the step size.
- Start at the initial condition $(t_0, y_0)$
- Compute the slope $f(t_n, y_n)$
- Step forward: $y_{n+1} = y_n + h\,f(t_n, y_n)$ and $t_{n+1} = t_n + h$
- Repeat from the new point
Euler's method is easy to understand and implement, but it's not very accurate. Errors accumulate with each step, and small step sizes are needed for reasonable results. It also struggles with stiff equations (equations where some components change much faster than others).
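The steps above fit in a few lines of Python; the test problem $y' = -y$, $y(0) = 1$ (exact solution $e^{-t}$) is chosen here to make the accumulated error visible:

```python
import math

def euler(f, t0, y0, h, steps):
    """Euler's method for y' = f(t, y): repeatedly apply
    y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = -y, y(0) = 1, integrated to t = 1; exact answer is exp(-1)
approx = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
error = abs(approx - math.exp(-1.0))   # noticeable even at h = 0.01
```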

Runge-Kutta Methods
The fourth-order Runge-Kutta method (RK4) is the workhorse of numerical ODE solving. Instead of using just one slope estimate per step (like Euler), RK4 evaluates the slope at four points within each step and takes a weighted average. This dramatically improves accuracy without requiring extremely small step sizes.
Adaptive versions automatically adjust the step size: smaller steps where the solution changes rapidly, larger steps where it's smooth.
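A sketch of a single RK4 step (the standard textbook formula), applied to $y' = -y$; at the same step size the error is orders of magnitude smaller than Euler's:

```python
import math

def rk4_step(f, t, y, h):
    # Classic fourth-order Runge-Kutta: four slope evaluations per step,
    # combined in a weighted average
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = -y, y(0) = 1, integrated to t = 1 with h = 0.01
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
error = abs(y - math.exp(-1.0))   # far below Euler's error at the same h
```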
Finite Difference Methods
These approximate derivatives using differences between function values at neighboring grid points. For example, the derivative at a point can be approximated as:
$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$ (central difference)
Finite differences are used for both ODEs and PDEs on discretized domains. Different schemes (forward, backward, central) offer trade-offs between accuracy and stability. Implicit methods like Crank-Nicolson provide better stability for diffusion-type problems.
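A minimal sketch of the central difference, checked against a derivative known in closed form:

```python
import math

def central_diff(f, x, h=1e-5):
    # Central difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h), error O(h**2)
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx sin(x) = cos(x)
approx = central_diff(math.sin, 1.0)
err = abs(approx - math.cos(1.0))
```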
Applications span computational fluid dynamics, heat transfer simulations, and financial modeling.
Stability and Qualitative Analysis
Even when you can't solve a differential equation explicitly, you can often determine the long-term behavior of solutions through qualitative analysis.
Equilibrium Points
An equilibrium point is where the system stays at rest: all derivatives equal zero. To find them, set $\frac{d\mathbf{x}}{dt} = \mathbf{0}$ and solve the resulting algebraic equations.
Equilibrium points are classified by what nearby solutions do:
- Stable (attracting): nearby solutions move toward the equilibrium
- Unstable (repelling): nearby solutions move away
- Saddle: solutions approach along some directions and diverge along others
For linear systems, the classification comes from the eigenvalues of the Jacobian matrix evaluated at the equilibrium. Negative real parts mean stability; positive real parts mean instability; purely imaginary parts mean oscillation.
Stability Criteria
Several tools help determine stability:
- Lyapunov stability theory: constructs an energy-like function to prove stability without solving the equation
- Asymptotic stability: solutions not only stay near equilibrium but actually converge to it as $t \to \infty$
- Routh-Hurwitz criterion: for linear systems, determines stability directly from the coefficients of the characteristic polynomial
Bifurcation Theory
As you change a parameter in a differential equation, the qualitative behavior of solutions can suddenly shift. These transitions are called bifurcations.
- Saddle-node bifurcation: two equilibrium points collide and annihilate each other (or appear from nothing)
- Hopf bifurcation: a stable equilibrium loses stability and a limit cycle (periodic orbit) is born
Bifurcation diagrams plot equilibrium values against the parameter, showing where these qualitative changes occur. These are used to study tipping points in climate models, population collapse thresholds, and transitions in fluid flow.
Boundary Value Problems
Unlike IVPs (where all conditions are at one point), BVPs specify conditions at two or more different points. This creates a fundamentally different mathematical challenge: you can't just march forward from initial conditions.
Sturm-Liouville Theory
A Sturm-Liouville problem has the form: $\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + \left[q(x) + \lambda w(x)\right]y = 0$
with boundary conditions at two endpoints. Solutions exist only for specific values of $\lambda$ (the eigenvalues), and the corresponding solutions (eigenfunctions) have a crucial property: they're orthogonal to each other. This orthogonality allows you to expand arbitrary functions as series of eigenfunctions, much like Fourier series.
Applications include quantum mechanics and vibration analysis.
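The simplest concrete case is $y'' + \lambda y = 0$ with $y(0) = y(L) = 0$, whose eigenfunctions are $\sin(n\pi x/L)$ with $\lambda_n = (n\pi/L)^2$. A numerical check of their orthogonality (midpoint-rule integration; $L = 1$ chosen for illustration):

```python
import math

L = 1.0

def phi(n, x):
    # Eigenfunctions of y'' + lam*y = 0, y(0) = y(L) = 0:
    # phi_n(x) = sin(n*pi*x/L), with eigenvalue lam_n = (n*pi/L)**2
    return math.sin(n * math.pi * x / L)

def inner(m, n, samples=10000):
    # <phi_m, phi_n> = integral_0^L phi_m(x)*phi_n(x) dx (midpoint rule)
    h = L / samples
    return sum(phi(m, (k + 0.5) * h) * phi(n, (k + 0.5) * h) * h
               for k in range(samples))

# Distinct eigenfunctions are orthogonal; each has norm squared L/2
```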
Green's Functions
A Green's function is the response of a system to a point source (a delta function). Once you know the Green's function $G(x, s)$ for a given differential operator and boundary conditions, you can find the solution for any forcing term $f$ by integration: $y(x) = \int G(x, s)\,f(s)\,ds$
Constructing the Green's function requires solving the homogeneous equation with specific jump conditions at the source point. This technique is widely used in electrostatics, heat conduction, and structural mechanics.
Eigenvalue Problems
The general form $L y = \lambda y$ (where $L$ is a differential operator) asks: for which values of $\lambda$ does a nontrivial solution exist?
The eigenvalues often correspond to physical quantities like natural frequencies of vibration or energy levels in quantum mechanics. The eigenfunctions represent the characteristic modes of the system. Spectral methods use eigenfunction expansions to solve complex PDEs efficiently.
Advanced Topics
Laplace Transforms
The Laplace transform converts a differential equation in $t$ into an algebraic equation in $s$: $\mathcal{L}\{y(t)\} = Y(s) = \int_0^\infty e^{-st}\,y(t)\,dt$
The solution process becomes:
- Transform the ODE (including initial conditions) into an algebraic equation in $s$
- Solve for $Y(s)$ using algebra
- Apply the inverse Laplace transform to get $y(t)$
This is especially useful for IVPs and for analyzing control systems and circuits, where transfer functions in the $s$-domain reveal system behavior directly.
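As a sanity check on the transform definition, the integral can be approximated numerically and compared against a standard table entry such as $\mathcal{L}\{e^{-2t}\} = \frac{1}{s+2}$ (the truncation point and sample count below are chosen for illustration):

```python
import math

def laplace_numeric(f, s, T=50.0, n=100000):
    # Approximate F(s) = integral_0^inf exp(-s*t)*f(t) dt by truncating
    # the integral at t = T and applying the midpoint rule
    h = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.exp(-s * t) * f(t) * h
    return total

# L{exp(-2t)}(s) = 1/(s+2); evaluate at s = 1, expecting 1/3
F = laplace_numeric(lambda t: math.exp(-2.0 * t), 1.0)
```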
Power Series Solutions
When standard methods don't work (particularly near singular points), you can assume a solution of the form $y = \sum_{n=0}^{\infty} a_n x^n$, substitute into the ODE, and solve for the coefficients by matching powers of $x$.
The Frobenius method extends this to equations with regular singular points, allowing solutions of the form $y = x^r \sum_{n=0}^{\infty} a_n x^n$. Many important special functions in physics (Bessel functions, Legendre polynomials) arise from power series solutions.
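For $y' = y$ with $y(0) = 1$, matching powers of $x$ gives the recurrence $a_{n+1} = a_n/(n+1)$, i.e. $a_n = 1/n!$, recovering $e^x$. A short sketch of the resulting partial sum:

```python
import math

def exp_series(x, terms=30):
    # Power series solution of y' = y, y(0) = 1: substituting
    # y = sum a_n x**n gives a_{n+1} = a_n / (n + 1), so a_n = 1/n!
    a, total = 1.0, 0.0
    for n in range(terms):
        total += a * x ** n
        a /= (n + 1)
    return total

# The partial sum at x = 1 reproduces e to near machine precision
approx = exp_series(1.0)
```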
Existence and Uniqueness Theorems
These theorems tell you whether a problem is well-posed before you try to solve it.
- Picard-Lindelöf theorem: for the IVP $y' = f(t, y)$, $y(t_0) = y_0$, if $f$ is continuous and Lipschitz continuous in $y$, then a unique solution exists in some neighborhood of $t_0$
- Cauchy-Kowalevski theorem: provides existence and uniqueness conditions for certain PDEs with analytic data
These results also underpin the validity of numerical methods: if a unique solution exists, there's something definite for your numerical scheme to converge toward.