Differential equations are the backbone of control theory. They describe how systems change over time, modeling everything from simple pendulums to complex spacecraft. Engineers use them to predict system behavior, design controllers that stabilize systems, and achieve desired performance.
This section covers the classification of differential equations, solution methods for first- and higher-order equations, Laplace transforms, systems of equations, stability analysis, and numerical methods.
Definition of differential equations
A differential equation is a mathematical equation that relates a function to its derivatives (rates of change) with respect to one or more variables, typically time or space. These equations capture the relationship between how something changes and what state it's currently in.
In control theory, differential equations represent physical laws like Newton's second law or Kirchhoff's voltage law. They let you write down the rules governing a system's behavior, then solve for how the system evolves over time given some starting conditions.
Classification of differential equations
Ordinary vs partial differential equations
- Ordinary differential equations (ODEs) involve derivatives with respect to a single independent variable, usually time.
- Example: $\frac{dy}{dt} = -2y$, where $y$ is a function of $t$
- Partial differential equations (PDEs) involve derivatives with respect to multiple independent variables, such as time and space.
- Example: $\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$ (the heat equation), where $u$ depends on both $t$ and $x$
Most of classical control theory focuses on ODEs, since you're typically tracking how a system's state evolves in time alone.
Linear vs nonlinear differential equations
- Linear differential equations have the dependent variable and its derivatives appearing only to the first power, never multiplied together. Coefficients can be functions of the independent variable.
- Example: $\frac{dy}{dt} + t^2 y = \cos t$
- Nonlinear differential equations have the dependent variable or its derivatives appearing in a nonlinear way (squared, inside a trig function, multiplied together, etc.).
- Example: $\frac{d^2\theta}{dt^2} + \frac{g}{L}\sin\theta = 0$ (the pendulum equation; $\theta$ appears inside a sine)
This distinction matters enormously in control theory. Linear equations have well-developed solution techniques and superposition applies. Nonlinear equations are generally much harder and often require approximation or numerical methods.
Homogeneous vs non-homogeneous equations
- Homogeneous equations have zero on the right-hand side. Every term involves the dependent variable or its derivatives.
- Example: $\frac{d^2y}{dt^2} + 3\frac{dy}{dt} + 2y = 0$
- Non-homogeneous equations have a forcing function on the right-hand side that doesn't depend on the unknown.
- Example: $\frac{d^2y}{dt^2} + 3\frac{dy}{dt} + 2y = \sin t$
In control terms, the homogeneous equation describes the system's natural (unforced) response, while the non-homogeneous equation describes the response to an external input.
Order of differential equations
The order of a differential equation is the highest derivative present.
- A first-order equation contains only $\frac{dy}{dt}$
- A second-order equation also contains $\frac{d^2y}{dt^2}$, and so on
The order determines how many initial conditions you need for a unique solution. A second-order ODE requires two initial conditions (e.g., initial position and initial velocity).
Solution methods for first-order equations
Separation of variables
This method works when you can rearrange a first-order ODE so that all $y$-terms are on one side and all $t$-terms are on the other.
- Start with an equation like $\frac{dy}{dt} = f(t)\,g(y)$
- Separate: $\frac{dy}{g(y)} = f(t)\,dt$
- Integrate both sides: $\int \frac{dy}{g(y)} = \int f(t)\,dt$
- Solve for $y$ if possible
Example: For $\frac{dy}{dt} = ky$, separate to get $\frac{dy}{y} = k\,dt$, which gives $y(t) = Ce^{kt}$.
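As a quick sanity check, the exponential solution can be verified numerically against the ODE. This is a sketch with hypothetical values $k = 0.5$ and $C = 2$ (not from the text):

```python
import math

# Sanity check for the separable example dy/dt = k*y, whose solution is
# y = C*e^(k*t). The values k = 0.5 and C = 2.0 are hypothetical.
k, C = 0.5, 2.0

def y(t):
    return C * math.exp(k * t)

# Central-difference approximation of dy/dt at t = 1.0
t, h = 1.0, 1e-6
dy_dt = (y(t + h) - y(t - h)) / (2 * h)

# If y solves the ODE, dy/dt - k*y(t) should be essentially zero
residual = abs(dy_dt - k * y(t))
```

The residual comes out at roughly floating-point noise, confirming the solution.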
Integrating factors
This is the go-to method for first-order linear ODEs of the form $\frac{dy}{dt} + p(t)y = q(t)$.
- Compute the integrating factor: $\mu(t) = e^{\int p(t)\,dt}$
- Multiply both sides of the equation by $\mu(t)$
- The left side becomes $\frac{d}{dt}\left[\mu(t)\,y\right]$
- Integrate both sides: $\mu(t)\,y = \int \mu(t)\,q(t)\,dt + C$
- Solve for $y$
The key idea is that the integrating factor turns the left side into an exact derivative, making integration straightforward.
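A small worked instance of the recipe, using the hypothetical equation $\frac{dy}{dt} + 2y = 1$ with $y(0) = 0$ (chosen for illustration, not from the text): the integrating factor is $e^{2t}$, and solving gives $y = \tfrac{1}{2}(1 - e^{-2t})$, which we can check numerically:

```python
import math

# Worked instance of the integrating-factor recipe for dy/dt + 2*y = 1 with
# y(0) = 0 (hypothetical coefficients). Here p(t) = 2, so mu(t) = e^(2t),
# d/dt(e^(2t)*y) = e^(2t), and solving gives y = (1 - e^(-2t))/2.

def y(t):
    return 0.5 * (1.0 - math.exp(-2.0 * t))

def residual(t, h=1e-6):
    dy_dt = (y(t + h) - y(t - h)) / (2 * h)   # central difference
    return abs(dy_dt + 2.0 * y(t) - 1.0)      # ~0 if y solves the ODE

max_res = max(residual(t) for t in (0.1, 0.5, 1.0, 3.0))
```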
Exact equations
A first-order ODE written as $M(x,y)\,dx + N(x,y)\,dy = 0$ is exact if $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$.
When this condition holds, there exists a function $F(x,y)$ such that $\frac{\partial F}{\partial x} = M$ and $\frac{\partial F}{\partial y} = N$. The solution is $F(x,y) = C$.
To find $F$:
- Integrate $M$ with respect to $x$ (treating $y$ as constant), adding an unknown function $g(y)$
- Differentiate the result with respect to $y$ and set it equal to $N$
- Solve for $g(y)$
Example: $(2xy + 1)\,dx + x^2\,dy = 0$ is exact, with solution $x^2 y + x = C$.
Bernoulli equations
A Bernoulli equation has the form $\frac{dy}{dt} + p(t)y = q(t)\,y^n$, where $n \neq 0, 1$. It's nonlinear, but a substitution converts it to a linear equation.
- Substitute $v = y^{1-n}$
- Differentiate: $\frac{dv}{dt} = (1-n)\,y^{-n}\,\frac{dy}{dt}$
- Rewrite the original equation in terms of $v$, which yields a first-order linear ODE
- Solve the linear ODE for $v$, then convert back to $y$
Example: For $\frac{dy}{dt} + y = y^2$, substitute $v = y^{-1}$ to obtain a linear equation in $v$.
Solution methods for higher-order equations

Reduction of order
When you already know one solution to a second-order linear homogeneous ODE, reduction of order finds a second, linearly independent solution.
- Assume $y_2 = v(t)\,y_1(t)$, where $v(t)$ is unknown
- Substitute into the original ODE
- The resulting equation for $v$ reduces to a first-order ODE in $v'$
- Solve for $v'$, integrate to get $v$, then form $y_2 = v\,y_1$
Example: If $y_1 = e^t$ solves $y'' - 2y' + y = 0$, set $y_2 = v(t)\,e^t$ and solve for $v$ (here $v = t$, giving $y_2 = t\,e^t$).
Method of undetermined coefficients
This method finds a particular solution to a non-homogeneous linear ODE when the forcing function is a polynomial, exponential, sine, cosine, or a combination of these.
- Look at the form of the forcing function (right-hand side)
- Guess a particular solution with the same form but unknown coefficients
- Substitute into the ODE
- Match coefficients on both sides to determine the unknowns
If your guess overlaps with a solution to the homogeneous equation, multiply it by $t$ (or $t^2$, etc.) until it no longer overlaps.
Example: For $y'' + y = \cos t$, the natural guess $A\cos t + B\sin t$ actually solves the homogeneous equation (since the characteristic roots are $\pm i$). You'd need to try $t(A\cos t + B\sin t)$ instead.
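The modified guess can be checked numerically. For $y'' + y = \cos t$, matching coefficients with $t(A\cos t + B\sin t)$ gives $A = 0$, $B = \tfrac{1}{2}$, so $y_p = \tfrac{t}{2}\sin t$; a finite-difference residual check (sketch):

```python
import math

# Numeric check that the modified guess works: for y'' + y = cos(t),
# matching coefficients with the guess t*(A*cos t + B*sin t) gives A = 0,
# B = 1/2, i.e. y_p = (t/2)*sin(t).
def yp(t):
    return 0.5 * t * math.sin(t)

def second_deriv(f, t, h=1e-4):
    # central second difference
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# Residual of y'' + y - cos(t); should be ~0 at every test point
max_res = max(abs(second_deriv(yp, t) + yp(t) - math.cos(t))
              for t in (0.3, 1.0, 2.5, 5.0))
```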
Variation of parameters
This is a more general method that works for any forcing function, not just the special forms required by undetermined coefficients.
Given the homogeneous solutions $y_1$ and $y_2$:
- Assume $y_p = u_1(t)\,y_1 + u_2(t)\,y_2$
- Impose the condition $u_1' y_1 + u_2' y_2 = 0$
- Substitute into the ODE to get $u_1' y_1' + u_2' y_2' = g(t)$, where $g(t)$ is the forcing function (divided by the leading coefficient if needed)
- Solve the two-equation system for $u_1'$ and $u_2'$
- Integrate to find $u_1$ and $u_2$
Example: For $y'' + y = \tan t$, the particular solution is $y_p = u_1 \cos t + u_2 \sin t$, where $u_1$ and $u_2$ are found by solving the system above.
Laplace transforms in differential equations
Definition of Laplace transform
The Laplace transform converts a time-domain function $f(t)$ into a complex-frequency-domain function $F(s)$: $F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt$
Why is this useful? It turns differential equations into algebraic equations, which are much easier to solve. You solve the algebra in the -domain, then transform back to get your time-domain answer.
Example: $\mathcal{L}\{e^{at}\} = \frac{1}{s-a}$ (valid for $s > a$)
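The defining integral can be checked numerically. A minimal sketch with hypothetical values $a = 1$ and $s = 3$, truncating the integral at $t = 40$ where the integrand is negligible:

```python
import numpy as np

# Numeric check of L{e^(a*t)} = 1/(s - a) for s > a, approximating the
# defining integral on [0, 40] (the integrand e^((a-s)t) is negligible
# beyond that). The values a = 1.0 and s = 3.0 are hypothetical.
a, s = 1.0, 3.0
t = np.linspace(0.0, 40.0, 400_001)
integrand = np.exp((a - s) * t)          # f(t)*e^(-s*t) with f(t) = e^(a*t)
dt = t[1] - t[0]
F = (integrand[:-1] + integrand[1:]).sum() * dt / 2.0   # trapezoidal rule
expected = 1.0 / (s - a)                 # = 0.5
```

The numerical value agrees with $\frac{1}{s-a}$ to well within the quadrature error.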
Properties of Laplace transform
These properties are what make the Laplace transform so powerful for solving ODEs:
- Linearity: $\mathcal{L}\{a f(t) + b g(t)\} = a F(s) + b G(s)$
- Differentiation: $\mathcal{L}\{f'(t)\} = s F(s) - f(0)$
- For second derivatives: $\mathcal{L}\{f''(t)\} = s^2 F(s) - s f(0) - f'(0)$
- Integration: $\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\} = \frac{F(s)}{s}$
- Frequency shifting: $\mathcal{L}\{e^{at} f(t)\} = F(s - a)$
The differentiation property is especially important: it automatically incorporates initial conditions, so you don't need to solve for them separately.
Inverse Laplace transform
The inverse Laplace transform converts back from the $s$-domain to the time domain: $f(t) = \mathcal{L}^{-1}\{F(s)\}$
In practice, you rarely compute the formal integral (the Bromwich integral). Instead, you use:
- Partial fraction decomposition to break $F(s)$ into simpler terms
- Transform tables to look up the inverse of each term
Example: $\mathcal{L}^{-1}\left\{\frac{1}{s(s+1)}\right\} = 1 - e^{-t}$, since $\frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1}$
Solving differential equations with Laplace transforms
Here's the step-by-step process:
- Take the Laplace transform of both sides of the ODE, applying the differentiation property to handle derivatives and substituting initial conditions
- Solve the resulting algebraic equation for $Y(s)$
- Use partial fractions (if needed) to decompose $Y(s)$ into recognizable forms
- Apply the inverse Laplace transform to obtain $y(t)$
Example: Solve $y'' + 3y' + 2y = 0$ with $y(0) = 1$, $y'(0) = 0$.
- Transform: $s^2 Y(s) - s + 3\left(s Y(s) - 1\right) + 2 Y(s) = 0$
- Solve: $(s^2 + 3s + 2)\,Y(s) = s + 3$, so $Y(s) = \frac{s+3}{(s+1)(s+2)} = \frac{2}{s+1} - \frac{1}{s+2}$
- Inverse transform: $y(t) = 2e^{-t} - e^{-2t}$
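The result of this procedure can always be checked by substituting back. For $y'' + 3y' + 2y = 0$ with $y(0) = 1$, $y'(0) = 0$, the transform method gives $y(t) = 2e^{-t} - e^{-2t}$; a direct verification:

```python
import math

# Direct check of the worked example: y(t) = 2*e^(-t) - e^(-2t) should
# satisfy y'' + 3y' + 2y = 0 with y(0) = 1 and y'(0) = 0.
def y(t):
    return 2.0 * math.exp(-t) - math.exp(-2.0 * t)

def dy(t):
    return -2.0 * math.exp(-t) + 2.0 * math.exp(-2.0 * t)

def d2y(t):
    return 2.0 * math.exp(-t) - 4.0 * math.exp(-2.0 * t)

ic_ok = abs(y(0.0) - 1.0) < 1e-12 and abs(dy(0.0)) < 1e-12
max_res = max(abs(d2y(t) + 3.0 * dy(t) + 2.0 * y(t)) for t in (0.0, 0.5, 1.0, 2.0))
```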
Systems of differential equations
Coupled equations
A system of differential equations involves multiple ODEs with multiple dependent variables that interact with each other. These arise whenever a system has more than one state variable.
In matrix form, a linear system looks like $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$, where $A$ is the coefficient matrix and $\mathbf{x}$ is the state vector.
Example: The Lotka-Volterra predator-prey model is a classic coupled system:
- $\frac{dx}{dt} = \alpha x - \beta xy$ (prey growth minus predation)
- $\frac{dy}{dt} = \delta xy - \gamma y$ (predator growth from feeding minus natural death)
Eigenvalues and eigenvectors
For a linear system $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$, the solution structure is determined by the eigenvalues and eigenvectors of $A$.
- An eigenvalue $\lambda$ and eigenvector $\mathbf{v}$ satisfy $A\mathbf{v} = \lambda\mathbf{v}$
- You find eigenvalues by solving the characteristic equation: $\det(A - \lambda I) = 0$
- Each eigenvalue gives a solution of the form $\mathbf{x}(t) = \mathbf{v}\,e^{\lambda t}$
The eigenvalues tell you the system's behavior:
- Negative real parts → solutions decay (stable)
- Positive real parts → solutions grow (unstable)
- Imaginary parts → oscillatory behavior
Example: For $\frac{dx_1}{dt} = x_2$, $\frac{dx_2}{dt} = -2x_1 - 3x_2$, the matrix $A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}$ has eigenvalues found from $\det(A - \lambda I) = \lambda^2 + 3\lambda + 2 = 0$, giving $\lambda = -1$ and $\lambda = -2$.
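For larger systems, eigenvalues are computed numerically rather than by hand. A sketch using the illustrative matrix $A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}$, whose characteristic equation is $\lambda^2 + 3\lambda + 2 = 0$:

```python
import numpy as np

# Eigenvalues of the illustrative system matrix A = [[0, 1], [-2, -3]],
# whose characteristic equation is det(A - lambda*I) = lambda^2 + 3*lambda + 2 = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)

# Both eigenvalues have negative real parts, so the system is stable.
stable = bool(np.all(eigvals < 0))
```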

Phase plane analysis
Phase plane analysis is a graphical tool for understanding two-dimensional systems. You plot one state variable against the other (not against time), and the resulting trajectories show how the system evolves.
- Equilibrium points occur where all derivatives equal zero
- The eigenvalues of the linearized system at each equilibrium classify its type:
- Both eigenvalues negative real → stable node
- Both positive real → unstable node
- One positive, one negative → saddle point
- Complex with negative real part → stable spiral
- Purely imaginary → center (closed orbits)
Example: For $\frac{dx}{dt} = y$, $\frac{dy}{dt} = -x$, the eigenvalues are $\pm i$ (purely imaginary), so the phase plane shows circular trajectories around the origin. This is a center, which is stable but not asymptotically stable.
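The center behavior can be seen in simulation. A sketch for $\frac{dx}{dt} = y$, $\frac{dy}{dt} = -x$ with forward Euler (which slightly inflates the radius, so the step is tiny and the tolerance loose):

```python
import math

# Simulate the center example dx/dt = y, dy/dt = -x for one revolution.
# Forward Euler inflates the radius slightly (a known artifact), so a very
# small step and a loose tolerance are used.
h = 1e-3
steps = int(2.0 * math.pi / h)     # roughly one full orbit
x, y = 1.0, 0.0
for _ in range(steps):
    x, y = x + h * y, y - h * x    # forward Euler update

radius = math.hypot(x, y)          # stays near 1 for a center
```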
Stability analysis of solutions
Equilibrium points
Equilibrium points (also called fixed points or steady states) are constant solutions where all derivatives are zero. To find them:
- Set the right-hand side of each equation to zero
- Solve the resulting algebraic system for the state variables
Example: For $\frac{dx}{dt} = x(1 - x)$, setting $x(1 - x) = 0$ gives equilibrium points at $x = 0$ and $x = 1$. You can check stability by examining the sign of $f'(x) = 1 - 2x$ at each point: $f'(0) = 1 > 0$ (unstable), $f'(1) = -1 < 0$ (stable).
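This one-dimensional stability test is easy to automate. A sketch for the logistic example $\frac{dx}{dt} = x(1 - x)$:

```python
# Automating the 1-D stability test for the logistic example dx/dt = x*(1 - x):
# an equilibrium x* is stable when f'(x*) < 0.
def f(x):
    return x * (1.0 - x)

def fprime(x):
    return 1.0 - 2.0 * x

equilibria = [0.0, 1.0]                       # roots of x*(1 - x) = 0
is_stable = {x: fprime(x) < 0 for x in equilibria}
```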
Linearization of nonlinear systems
Most real systems are nonlinear, but you can analyze their local behavior near equilibrium points by linearizing.
- Find the equilibrium point(s)
- Compute the Jacobian matrix $J$, which contains all partial derivatives: $J_{ij} = \frac{\partial f_i}{\partial x_j}$
- Evaluate $J$ at the equilibrium point
- The eigenvalues of $J$ determine local stability
This works because near an equilibrium, the nonlinear system behaves approximately like the linear system $\frac{d\mathbf{x}}{dt} = J\mathbf{x}$.
Example: For $\frac{dx}{dt} = -x$, $\frac{dy}{dt} = -y^3$, the Jacobian is $J = \begin{pmatrix} -1 & 0 \\ 0 & -3y^2 \end{pmatrix}$. At the origin $(0, 0)$, this becomes $\begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}$, and the eigenvalues ($-1$ and $0$) indicate that linearization alone is inconclusive for one direction.
Lyapunov stability theory
When linearization fails or you want a global stability result, Lyapunov's method provides an alternative. The idea is to find an energy-like function that decreases along system trajectories.
A Lyapunov function must satisfy:
- $V(\mathbf{x}) > 0$ for all $\mathbf{x} \neq 0$, with $V(0) = 0$ (positive definite)
- $\dot{V} \leq 0$ along trajectories (negative semidefinite)
If $\dot{V} \leq 0$, the equilibrium is stable. If $\dot{V} < 0$ strictly (negative definite), the equilibrium is asymptotically stable, meaning trajectories actually converge to it.
Example: For $\frac{dx}{dt} = -x^3$, try $V = x^2$. Then $\dot{V} = 2x\,\dot{x} = -2x^4 < 0$ for $x \neq 0$. This proves the origin is asymptotically stable.
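The decrease of $V$ along trajectories can also be observed numerically. A sketch for $\frac{dx}{dt} = -x^3$ with $V = x^2$, simulated by forward Euler (step size chosen for illustration):

```python
# Observe the Lyapunov function V = x^2 decreasing along trajectories of
# dx/dt = -x^3, simulated with forward Euler (step size chosen for illustration).
h, x = 1e-3, 1.0
V_values = []
for _ in range(5000):
    V_values.append(x * x)         # record V before each step
    x = x + h * (-x ** 3)          # Euler step of dx/dt = -x^3

# V should decrease strictly at every step (asymptotic stability)
monotone = all(later < earlier for earlier, later in zip(V_values, V_values[1:]))
```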
Numerical methods for differential equations
When analytical solutions aren't available (which is common for nonlinear or complex systems), numerical methods approximate the solution.
Euler's method
Euler's method is the simplest numerical approach. It uses the derivative at the current point to step forward: $y_{n+1} = y_n + h\,f(t_n, y_n)$, where $h$ is the step size.
- Straightforward to implement
- Only first-order accurate: the global error is proportional to $h$
- Can become unstable with large step sizes
Smaller $h$ gives better accuracy but requires more computation. For most practical problems, Euler's method is too inaccurate and is mainly used as a conceptual starting point.
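Despite its limits, Euler's method is only a few lines of code, and its first-order behavior is easy to observe: halving $h$ should roughly halve the error. A minimal sketch on the hypothetical test problem $\frac{dy}{dt} = -2y$, $y(0) = 1$:

```python
import math

# Minimal forward-Euler integrator, tried on the hypothetical test problem
# dy/dt = -2*y, y(0) = 1, whose exact solution is y(t) = e^(-2t).
def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: -2.0 * y
exact = math.exp(-2.0)                             # exact y(1)
err_h  = abs(euler(f, 0.0, 1.0, 0.01,  100) - exact)
err_h2 = abs(euler(f, 0.0, 1.0, 0.005, 200) - exact)
ratio = err_h / err_h2    # ~2: halving h halves the error (first order)
```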
Runge-Kutta methods
Runge-Kutta methods achieve much better accuracy by evaluating the derivative at multiple points within each step. The most widely used is the fourth-order Runge-Kutta (RK4) method: $y_{n+1} = y_n + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$, where:
- $k_1 = f(t_n, y_n)$
- $k_2 = f(t_n + \frac{h}{2},\ y_n + \frac{h}{2} k_1)$
- $k_3 = f(t_n + \frac{h}{2},\ y_n + \frac{h}{2} k_2)$
- $k_4 = f(t_n + h,\ y_n + h\,k_3)$
RK4 is fourth-order accurate, meaning the global error scales as $h^4$. This is a huge improvement over Euler's method and is the default choice for many engineering applications.
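A sketch of a single RK4 step, applied to the same hypothetical test problem $\frac{dy}{dt} = -2y$, $y(0) = 1$; even with only ten coarse steps, the error is orders of magnitude below Euler's:

```python
import math

# One classic RK4 step; applied to the hypothetical test problem dy/dt = -2*y,
# y(0) = 1, integrated to t = 1 with only 10 steps.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h

error = abs(y - math.exp(-2.0))   # tiny even at this coarse step size
```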
Finite difference methods
Finite difference methods extend numerical techniques to PDEs by discretizing both space and time on a grid.
- Divide the domain into a grid of discrete points
- Replace derivatives with finite difference approximations (forward, backward, or central differences)
- Solve the resulting system of algebraic equations
These methods can be explicit (solution at the next time step computed directly from the current step) or implicit (requires solving a system of equations at each step, but is more stable).
Example: The heat equation $\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$ can be solved using forward differences in time and central differences in space, giving an update rule for each grid point.
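The explicit update rule is a few lines with array slicing. A sketch for $u_t = \alpha u_{xx}$ on $[0, 1]$ with zero boundary values (grid and parameters are hypothetical; the explicit scheme is only stable when $\alpha\,\Delta t / \Delta x^2 \leq \tfrac{1}{2}$):

```python
import numpy as np

# Explicit finite-difference scheme for u_t = alpha * u_xx on [0, 1] with
# u = 0 at both ends. Grid and parameters are hypothetical; the scheme is
# stable only when r = alpha*dt/dx^2 <= 1/2.
alpha, nx = 1.0, 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha                 # respects the stability limit (r = 0.4)
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                    # initial temperature profile

n_steps = 500
for _ in range(n_steps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                   # boundary conditions

# For this initial profile the exact solution decays as e^(-pi^2*alpha*t)
t_final = n_steps * dt
peak = float(u.max())
```

The computed peak matches the exact decay of the $\sin(\pi x)$ mode closely, confirming the scheme.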
Applications of differential equations in control theory
Modeling of dynamic systems
Differential equations are the standard language for describing dynamic systems in control theory. A few standard examples:
- Mechanical systems: Newton's second law gives $m\ddot{x} + c\dot{x} + kx = F(t)$ for a mass-spring-damper
- Electrical circuits: Kirchhoff's laws yield equations like $L\frac{di}{dt} + Ri = V(t)$ for an RL circuit
- Thermal systems: Energy balance gives equations relating temperature change to heat flow
In each case, the differential equation relates the system's inputs (forces, voltages, heat sources) to its outputs (position, current, temperature) through the system's physical parameters. These models form the foundation for controller design, where the goal is to choose inputs that produce desired output behavior.
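As a closing illustration, the mass-spring-damper model can be rewritten as a first-order system and simulated directly. This sketch uses hypothetical parameters ($m = 1$, $c = 0.5$, $k = 4$, constant input $F = 4$); the position should settle at the steady-state value $F/k$:

```python
import numpy as np

# Mass-spring-damper m*x'' + c*x' + k*x = F, rewritten as a first-order
# system and simulated with forward Euler. Parameters are hypothetical:
# m = 1, c = 0.5, k = 4, constant input F = 4, so x should settle at F/k = 1.
m, c, k, F = 1.0, 0.5, 4.0, 4.0

def deriv(state):
    x, v = state
    return np.array([v, (F - c * v - k * x) / m])

state = np.array([0.0, 0.0])    # start at rest at the origin
h = 0.001
for _ in range(20_000):         # simulate 20 seconds
    state = state + h * deriv(state)

final_position = float(state[0])   # should be close to F/k
```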