🎛️Control Theory Unit 1 Review

1.2 Differential equations

Written by the Fiveable Content Team • Last updated August 2025

Differential equations are the backbone of control theory. They describe how systems change over time, modeling everything from simple pendulums to complex spacecraft. Engineers use them to predict system behavior, design controllers that stabilize systems, and achieve desired performance.

This section covers the classification of differential equations, solution methods for first- and higher-order equations, Laplace transforms, systems of equations, stability analysis, and numerical methods.

Definition of differential equations

A differential equation is a mathematical equation that relates a function to its derivatives (rates of change) with respect to one or more variables, typically time or space. These equations capture the relationship between how something changes and what state it's currently in.

In control theory, differential equations represent physical laws like Newton's second law or Kirchhoff's voltage law. They let you write down the rules governing a system's behavior, then solve for how the system evolves over time given some starting conditions.

Classification of differential equations

Ordinary vs partial differential equations

  • Ordinary differential equations (ODEs) involve derivatives with respect to a single independent variable, usually time.
    • Example: $\frac{dy}{dt} = f(t, y)$, where $y$ is a function of $t$
  • Partial differential equations (PDEs) involve derivatives with respect to multiple independent variables, such as time and space.
    • Example: $\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}$, where $u$ depends on both $t$ and $x$

Most of classical control theory focuses on ODEs, since you're typically tracking how a system's state evolves in time alone.

Linear vs nonlinear differential equations

  • Linear differential equations have the dependent variable and its derivatives appearing only to the first power, never multiplied together. Coefficients can be functions of the independent variable.
    • Example: $\frac{dy}{dt} + p(t)y = q(t)$
  • Nonlinear differential equations have the dependent variable or its derivatives appearing in a nonlinear way (squared, inside a trig function, multiplied together, etc.).
    • Example: $\frac{dy}{dt} = y^2 + \sin(t)$

This distinction matters enormously in control theory. Linear equations have well-developed solution techniques and superposition applies. Nonlinear equations are generally much harder and often require approximation or numerical methods.

Homogeneous vs non-homogeneous equations

  • Homogeneous equations have zero on the right-hand side. Every term involves the dependent variable or its derivatives.
    • Example: $\frac{d^2y}{dt^2} + 4\frac{dy}{dt} + 4y = 0$
  • Non-homogeneous equations have a forcing function on the right-hand side that doesn't depend on the unknown.
    • Example: $\frac{d^2y}{dt^2} + 4\frac{dy}{dt} + 4y = \cos(t)$

In control terms, the homogeneous equation describes the system's natural (unforced) response, while the non-homogeneous equation describes the response to an external input.

Order of differential equations

The order of a differential equation is the highest derivative present.

  • A first-order equation contains only $\frac{dy}{dt}$
  • A second-order equation contains $\frac{d^2y}{dt^2}$, and so on

The order determines how many initial conditions you need for a unique solution. A second-order ODE requires two initial conditions (e.g., initial position and initial velocity).

Solution methods for first-order equations

Separation of variables

This method works when you can rearrange a first-order ODE so that all $y$-terms are on one side and all $t$-terms are on the other.

  1. Start with an equation like $\frac{dy}{dt} = g(t) \cdot h(y)$
  2. Separate: $\frac{1}{h(y)} \, dy = g(t) \, dt$
  3. Integrate both sides: $\int \frac{1}{h(y)} \, dy = \int g(t) \, dt$
  4. Solve for $y(t)$ if possible

Example: For $\frac{dy}{dt} = ty^2$, separate to get $\int \frac{1}{y^2} \, dy = \int t \, dt$, which gives $-\frac{1}{y} = \frac{t^2}{2} + C$.
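The separated solution can be sanity-checked numerically. This sketch picks the integration constant $C = -1$ (an illustrative choice, so that $y(0) = 1$) and compares a finite-difference derivative of $y(t) = -1/(t^2/2 + C)$ against the right-hand side $ty^2$:

```python
# Sanity check (illustrative constants, not from the text): verify that
# y(t) = -1/(t^2/2 + C) solves dy/dt = t*y^2, with C = -1 so y(0) = 1.

def y(t):
    return -1.0 / (t**2 / 2 - 1.0)

def check(t, h=1e-6):
    # central-difference derivative vs. the right-hand side t*y^2
    lhs = (y(t + h) - y(t - h)) / (2 * h)
    rhs = t * y(t)**2
    return abs(lhs - rhs)

assert all(check(t) < 1e-6 for t in (0.0, 0.5, 1.0))
```

Note the solution blows up as $t \to \sqrt{2}$: separable equations can have solutions that exist only on a finite interval.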

Integrating factors

This is the go-to method for first-order linear ODEs of the form $\frac{dy}{dt} + P(t)y = Q(t)$.

  1. Compute the integrating factor: $\mu(t) = e^{\int P(t) \, dt}$
  2. Multiply both sides of the equation by $\mu(t)$
  3. The left side becomes $\frac{d}{dt}[\mu(t) \, y]$
  4. Integrate both sides: $\mu(t) \, y = \int \mu(t) \, Q(t) \, dt$
  5. Solve for $y(t)$

The key idea is that the integrating factor turns the left side into an exact derivative, making integration straightforward.
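As a concrete worked case (the equation $\frac{dy}{dt} + 2y = t$ is chosen here for illustration, not taken from the text): with $\mu(t) = e^{2t}$, integrating $(e^{2t}y)' = te^{2t}$ and applying $y(0) = 0$ gives $y(t) = \frac{t}{2} - \frac{1}{4} + \frac{1}{4}e^{-2t}$, which a quick numerical check confirms:

```python
import math

# For dy/dt + 2y = t with y(0) = 0, the integrating-factor method gives
# y(t) = t/2 - 1/4 + (1/4) e^{-2t}. Check that this really solves the ODE.

def y(t):
    return t / 2 - 0.25 + 0.25 * math.exp(-2 * t)

def residual(t, h=1e-6):
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    return abs(dydt + 2 * y(t) - t)   # should be ~0 if y solves the ODE

assert abs(y(0.0)) < 1e-12            # initial condition satisfied
assert all(residual(t) < 1e-6 for t in (0.2, 1.0, 3.0))
```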

Exact equations

A first-order ODE written as $M(x, y)\,dx + N(x, y)\,dy = 0$ is exact if $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$.

When this condition holds, there exists a function $F(x, y)$ such that $\frac{\partial F}{\partial x} = M$ and $\frac{\partial F}{\partial y} = N$. The solution is $F(x, y) = C$.

To find $F$:

  1. Integrate $M$ with respect to $x$ (treating $y$ as constant), adding an unknown function $g(y)$
  2. Differentiate the result with respect to $y$ and set it equal to $N$
  3. Solve for $g(y)$

Example: $2xy^3 \, dx + (3x^2y^2 - 1) \, dy = 0$ is exact, with solution $F(x, y) = x^2y^3 - y = C$.
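Both the exactness condition and the candidate potential from the example can be verified numerically with central differences:

```python
# Check the worked example: M = 2xy^3, N = 3x^2 y^2 - 1, and the
# candidate potential F(x, y) = x^2 y^3 - y.

M = lambda x, y: 2 * x * y**3
N = lambda x, y: 3 * x**2 * y**2 - 1
F = lambda x, y: x**2 * y**3 - y

def d(f, x, y, var, h=1e-6):
    # central-difference partial derivative with respect to 'x' or 'y'
    if var == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.3, 0.7
assert abs(d(M, x0, y0, "y") - d(N, x0, y0, "x")) < 1e-5   # exactness: M_y = N_x
assert abs(d(F, x0, y0, "x") - M(x0, y0)) < 1e-5           # F_x = M
assert abs(d(F, x0, y0, "y") - N(x0, y0)) < 1e-5           # F_y = N
```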

Bernoulli equations

A Bernoulli equation has the form $\frac{dy}{dt} + P(t)y = Q(t)y^n$, where $n \neq 0, 1$. It's nonlinear, but a substitution converts it to a linear equation.

  1. Substitute $v = y^{1-n}$
  2. Differentiate: $\frac{dv}{dt} = (1-n)y^{-n}\frac{dy}{dt}$
  3. Rewrite the original equation in terms of $v$, which yields a first-order linear ODE
  4. Solve the linear ODE for $v(t)$, then convert back to $y$

Example: For $\frac{dy}{dt} + 2ty = t^2 y^3$, substitute $v = y^{-2}$ to obtain a linear equation in $v$.

Solution methods for higher-order equations


Reduction of order

When you already know one solution $y_1(t)$ to a second-order linear homogeneous ODE, reduction of order finds a second, linearly independent solution.

  1. Assume $y_2(t) = v(t) \, y_1(t)$, where $v(t)$ is unknown
  2. Substitute $y_2$ into the original ODE
  3. The resulting equation for $v(t)$ reduces to a first-order ODE in $v'(t)$
  4. Solve for $v'(t)$, integrate to get $v(t)$, then form $y_2(t)$

Example: If $y_1(t) = t$ solves $t^2 y'' + 2t y' - 2y = 0$, set $y_2(t) = v(t) \, t$ and solve for $v(t)$.

Method of undetermined coefficients

This method finds a particular solution to a non-homogeneous linear ODE when the forcing function is a polynomial, exponential, sine, cosine, or a combination of these.

  1. Look at the form of the forcing function (right-hand side)
  2. Guess a particular solution $y_p(t)$ with the same form but unknown coefficients
  3. Substitute $y_p$ into the ODE
  4. Match coefficients on both sides to determine the unknowns

If your guess overlaps with a solution to the homogeneous equation, multiply it by $t$ (or $t^2$, etc.) until it no longer overlaps.

Example: For $y'' + 4y = 3\cos(2t)$, the natural guess $y_p = A\cos(2t) + B\sin(2t)$ actually solves the homogeneous equation (since the characteristic roots are $\pm 2i$). You'd need to try $y_p = t[A\cos(2t) + B\sin(2t)]$ instead.
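Carrying the coefficient matching through for the resonant guess gives $A = 0$, $B = \frac{3}{4}$ (values computed here, not stated above), so $y_p(t) = \frac{3}{4}t\sin(2t)$. A finite-difference check confirms it:

```python
import math

# Verify that y_p(t) = (3/4) t sin(2t) is a particular solution of
# y'' + 4y = 3 cos(2t), the resonant example above.

def yp(t):
    return 0.75 * t * math.sin(2 * t)

def residual(t, h=1e-4):
    ypp = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2   # second derivative
    return abs(ypp + 4 * yp(t) - 3 * math.cos(2 * t))

assert all(residual(t) < 1e-4 for t in (0.3, 1.0, 2.5))
```

The factor of $t$ produces the linearly growing oscillation characteristic of resonance: an undamped system forced at its natural frequency.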

Variation of parameters

This is a more general method that works for any forcing function, not just the special forms required by undetermined coefficients.

Given the homogeneous solutions $y_1(t)$ and $y_2(t)$:

  1. Assume $y_p(t) = u_1(t)y_1(t) + u_2(t)y_2(t)$
  2. Impose the condition $u_1'y_1 + u_2'y_2 = 0$
  3. Substitute into the ODE to get $u_1'y_1' + u_2'y_2' = g(t)$, where $g(t)$ is the forcing function (divided by the leading coefficient if needed)
  4. Solve the two-equation system for $u_1'$ and $u_2'$
  5. Integrate to find $u_1(t)$ and $u_2(t)$

Example: For $y'' + y = \sec(t)$, the particular solution is $y_p(t) = u_1(t)\cos(t) + u_2(t)\sin(t)$, where $u_1$ and $u_2$ are found by solving the system above.
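Completing that example (the explicit answer is worked out here, not given in the text): the system yields $u_1' = -\tan(t)$ and $u_2' = 1$, so $u_1 = \ln|\cos t|$, $u_2 = t$, and on $|t| < \pi/2$, $y_p(t) = \cos(t)\ln(\cos t) + t\sin(t)$. A numerical check:

```python
import math

# Verify y_p(t) = cos(t) ln(cos t) + t sin(t) solves y'' + y = sec(t)
# on the interval |t| < pi/2 where sec(t) is defined.

def yp(t):
    return math.cos(t) * math.log(math.cos(t)) + t * math.sin(t)

def residual(t, h=1e-4):
    ypp = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2   # second derivative
    return abs(ypp + yp(t) - 1 / math.cos(t))

assert all(residual(t) < 1e-4 for t in (0.1, 0.5, 1.0))
```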

Laplace transforms in differential equations

Definition of Laplace transform

The Laplace transform converts a time-domain function $f(t)$ into a complex-frequency-domain function $F(s)$:

$$\mathcal{L}\{f(t)\} = F(s) = \int_0^{\infty} e^{-st} f(t) \, dt$$

Why is this useful? It turns differential equations into algebraic equations, which are much easier to solve. You solve the algebra in the $s$-domain, then transform back to get your time-domain answer.

Example: $\mathcal{L}\{e^{at}\} = \frac{1}{s-a}$ (valid for $s > a$)
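This example can be checked by approximating the defining integral directly. The sketch below (with the arbitrary illustrative choices $a = 1$, $s = 3$) truncates the upper limit at $T = 20$, which is harmless because the integrand decays like $e^{-2t}$:

```python
import math

# Approximate L{e^{at}} at s = 3, a = 1 with the trapezoidal rule and
# compare against the table value 1/(s - a) = 0.5.

a, s, T, N = 1.0, 3.0, 20.0, 100000
h = T / N
integrand = lambda t: math.exp(-s * t) * math.exp(a * t)

# composite trapezoidal rule over [0, T]
F = h * (0.5 * integrand(0.0)
         + sum(integrand(i * h) for i in range(1, N))
         + 0.5 * integrand(T))

assert abs(F - 1 / (s - a)) < 1e-6
```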

Properties of Laplace transform

These properties are what make the Laplace transform so powerful for solving ODEs:

  • Linearity: $\mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)$
  • Differentiation: $\mathcal{L}\{f'(t)\} = sF(s) - f(0)$
    • For second derivatives: $\mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0)$
  • Integration: $\mathcal{L}\left\{\int_0^t f(\tau) \, d\tau\right\} = \frac{F(s)}{s}$
  • Frequency shifting: $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$

The differentiation property is especially important: it automatically incorporates initial conditions, so you don't need to solve for them separately.

Inverse Laplace transform

The inverse Laplace transform converts back from the $s$-domain to the time domain:

$$\mathcal{L}^{-1}\{F(s)\} = f(t)$$

In practice, you rarely compute the formal integral (the Bromwich integral). Instead, you use:

  • Partial fraction decomposition to break $F(s)$ into simpler terms
  • Transform tables to look up the inverse of each term

Example: $\mathcal{L}^{-1}\left\{\frac{1}{s-a}\right\} = e^{at}$

Solving differential equations with Laplace transforms

Here's the step-by-step process:

  1. Take the Laplace transform of both sides of the ODE, applying the differentiation property to handle derivatives and substitute initial conditions
  2. Solve the resulting algebraic equation for $Y(s)$
  3. Use partial fractions (if needed) to decompose $Y(s)$ into recognizable forms
  4. Apply the inverse Laplace transform to obtain $y(t)$

Example: Solve $y'' + 4y = 0$ with $y(0) = 1$, $y'(0) = 0$.

  1. Transform: $s^2Y(s) - s(1) - 0 + 4Y(s) = 0$

  2. Solve: $Y(s)(s^2 + 4) = s$, so $Y(s) = \frac{s}{s^2 + 4}$

  3. Inverse transform: $y(t) = \cos(2t)$

Systems of differential equations

Coupled equations

A system of differential equations involves multiple ODEs with multiple dependent variables that interact with each other. These arise whenever a system has more than one state variable.

In matrix form, a linear system looks like $\frac{d\vec{x}}{dt} = A\vec{x}$, where $A$ is the coefficient matrix and $\vec{x}$ is the state vector.

Example: The Lotka-Volterra predator-prey model is a classic coupled system:

  • $\frac{dx}{dt} = ax - bxy$ (prey growth minus predation)
  • $\frac{dy}{dt} = cxy - dy$ (predator growth from feeding minus natural death)

Eigenvalues and eigenvectors

For a linear system $\frac{d\vec{x}}{dt} = A\vec{x}$, the solution structure is determined by the eigenvalues and eigenvectors of $A$.

  • An eigenvalue $\lambda$ and eigenvector $\vec{v}$ satisfy $A\vec{v} = \lambda\vec{v}$
  • You find eigenvalues by solving the characteristic equation: $\det(A - \lambda I) = 0$
  • Each eigenvalue gives a solution of the form $\vec{x}(t) = \vec{v} \, e^{\lambda t}$

The eigenvalues tell you the system's behavior:

  • Negative real parts → solutions decay (stable)
  • Positive real parts → solutions grow (unstable)
  • Imaginary parts → oscillatory behavior

Example: For $\frac{dx}{dt} = 2x + 3y$, $\frac{dy}{dt} = x + 2y$, the matrix $A = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}$ has eigenvalues found from $\det(A - \lambda I) = 0$.
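Finishing that computation by hand: for a $2\times 2$ matrix the characteristic equation is $\lambda^2 - (\mathrm{tr}\,A)\lambda + \det A = 0$, here $\lambda^2 - 4\lambda + 1 = 0$, giving $\lambda = 2 \pm \sqrt{3}$ (a result worked out here, not stated above):

```python
import math

# Eigenvalues of A = [[2, 3], [1, 2]] from the trace/determinant form
# of the 2x2 characteristic equation.

A = [[2.0, 3.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]                          # trace = 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # determinant = 1
disc = math.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # 2 + sqrt(3), 2 - sqrt(3)

assert abs(lam1 - (2 + math.sqrt(3))) < 1e-12
assert abs(lam2 - (2 - math.sqrt(3))) < 1e-12
```

Both eigenvalues are real and positive, so every nonzero trajectory of this system grows: the origin is an unstable node.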


Phase plane analysis

Phase plane analysis is a graphical tool for understanding two-dimensional systems. You plot one state variable against the other (not against time), and the resulting trajectories show how the system evolves.

  • Equilibrium points occur where all derivatives equal zero
  • The eigenvalues of the linearized system at each equilibrium classify its type:
    • Both eigenvalues negative real → stable node
    • Both positive real → unstable node
    • One positive, one negative → saddle point
    • Complex with negative real part → stable spiral
    • Purely imaginary → center (closed orbits)

Example: For $\frac{dx}{dt} = y$, $\frac{dy}{dt} = -x$, the eigenvalues are $\pm i$ (purely imaginary), so the phase plane shows circular trajectories around the origin. This is a center, which is stable but not asymptotically stable.

Stability analysis of solutions

Equilibrium points

Equilibrium points (also called fixed points or steady states) are constant solutions where all derivatives are zero. To find them:

  1. Set the right-hand side of each equation to zero
  2. Solve the resulting algebraic system for the state variables

Example: For $\frac{dx}{dt} = x(1-x)$, setting $x(1-x) = 0$ gives equilibrium points at $x = 0$ and $x = 1$. You can check stability by examining the sign of $f'(x)$ at each point: $f'(0) = 1 > 0$ (unstable), $f'(1) = -1 < 0$ (stable).
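The sign test and a quick simulation agree, as this sketch of the logistic example shows:

```python
# f(x) = x(1 - x): equilibria at 0 and 1. Check the sign of f' at each,
# then confirm with a forward-Euler simulation that trajectories settle
# at the stable equilibrium x = 1.

f = lambda x: x * (1 - x)

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

assert fprime(0.0) > 0      # x = 0 is unstable
assert fprime(1.0) < 0      # x = 1 is stable

x, h = 0.1, 0.01            # start near (but not at) the unstable point
for _ in range(5000):       # integrate out to t = 50
    x += h * f(x)
assert abs(x - 1.0) < 1e-3  # trajectory converges to x = 1
```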

Linearization of nonlinear systems

Most real systems are nonlinear, but you can analyze their local behavior near equilibrium points by linearizing.

  1. Find the equilibrium point(s)
  2. Compute the Jacobian matrix $J$, which contains all partial derivatives: $J_{ij} = \frac{\partial f_i}{\partial x_j}$
  3. Evaluate $J$ at the equilibrium point
  4. The eigenvalues of $J$ determine local stability

This works because near an equilibrium, the nonlinear system behaves approximately like the linear system $\frac{d\vec{x}}{dt} = J\vec{x}$.

Example: For $\frac{dx}{dt} = xy$, $\frac{dy}{dt} = -y + x^2$, the Jacobian is $J = \begin{bmatrix} y & x \\ 2x & -1 \end{bmatrix}$. At the origin $(0,0)$, this becomes $J = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}$, and the eigenvalues ($0$ and $-1$) indicate that linearization alone is inconclusive for one direction.

Lyapunov stability theory

When linearization fails or you want a global stability result, Lyapunov's method provides an alternative. The idea is to find an energy-like function that decreases along system trajectories.

A Lyapunov function $V(\vec{x})$ must satisfy:

  • $V(\vec{x}) > 0$ for all $\vec{x} \neq 0$ (positive definite)
  • $V(0) = 0$
  • $\dot{V}(\vec{x}) = \frac{dV}{dt} \leq 0$ along trajectories (negative semidefinite)

If $\dot{V} \leq 0$, the equilibrium is stable. If $\dot{V} < 0$ strictly (negative definite), the equilibrium is asymptotically stable, meaning trajectories actually converge to it.

Example: For $\frac{dx}{dt} = -x^3$, try $V(x) = \frac{1}{2}x^2$. Then $\dot{V} = x \cdot (-x^3) = -x^4 < 0$ for $x \neq 0$. This proves the origin is asymptotically stable.
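A simulation of that example shows the Lyapunov function behaving exactly as the theory predicts, decreasing monotonically along the trajectory:

```python
# Simulate dx/dt = -x^3 with forward Euler and record V(x) = x^2/2
# at every step; V should never increase.

f = lambda x: -x**3
V = lambda x: 0.5 * x**2

x, h = 2.0, 0.001
values = [V(x)]
for _ in range(10000):          # integrate out to t = 10
    x += h * f(x)
    values.append(V(x))

assert all(b <= a for a, b in zip(values, values[1:]))  # V never increases
assert values[-1] < 0.05                                # trajectory shrinking
```

Note the convergence is only polynomial in time (the linearization at the origin is $\dot{x} = 0$), which is why the Lyapunov argument is needed where linearization is inconclusive.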

Numerical methods for differential equations

When analytical solutions aren't available (which is common for nonlinear or complex systems), numerical methods approximate the solution.

Euler's method

Euler's method is the simplest numerical approach. It uses the derivative at the current point to step forward:

$$y_{n+1} = y_n + h \, f(t_n, y_n)$$

where $h$ is the step size.

  • Straightforward to implement
  • Only first-order accurate: the global error is proportional to $h$
  • Can become unstable with large step sizes

Smaller $h$ gives better accuracy but requires more computation. For most practical problems, Euler's method is too inaccurate and is mainly used as a conceptual starting point.
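A minimal implementation makes the first-order behavior visible: on the test problem $\dot{y} = -y$, $y(0) = 1$ (chosen here because the exact answer $e^{-t}$ is known), halving the step size roughly halves the error at $t = 1$:

```python
import math

# Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n).

def euler(f, y0, t_end, h):
    y, t = y0, 0.0
    for _ in range(round(t_end / h)):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
err_h  = abs(euler(f, 1.0, 1.0, 0.01)  - exact)
err_h2 = abs(euler(f, 1.0, 1.0, 0.005) - exact)
ratio = err_h / err_h2
assert 1.8 < ratio < 2.2    # first order: error ~ h
```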

Runge-Kutta methods

Runge-Kutta methods achieve much better accuracy by evaluating the derivative at multiple points within each step. The most widely used is the fourth-order Runge-Kutta (RK4) method:

$$y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$

where:

  • $k_1 = f(t_n, y_n)$
  • $k_2 = f(t_n + h/2, \; y_n + hk_1/2)$
  • $k_3 = f(t_n + h/2, \; y_n + hk_2/2)$
  • $k_4 = f(t_n + h, \; y_n + hk_3)$

RK4 is fourth-order accurate, meaning the global error scales as $h^4$. This is a huge improvement over Euler's method and is the default choice for many engineering applications.
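The four-stage formula above translates directly to code. On the same test problem $\dot{y} = -y$ (an illustrative choice with known solution $e^{-t}$), halving $h$ should cut the error by roughly $2^4 = 16$:

```python
import math

# Classical fourth-order Runge-Kutta, transcribing the k1..k4 formulas.

def rk4(f, y0, t_end, h):
    y, t = y0, 0.0
    for _ in range(round(t_end / h)):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
err_h  = abs(rk4(f, 1.0, 1.0, 0.1)  - exact)
err_h2 = abs(rk4(f, 1.0, 1.0, 0.05) - exact)
assert 12 < err_h / err_h2 < 20     # fourth order: error ~ h^4
```

With $h = 0.1$ the RK4 error is already far smaller than Euler's at $h = 0.01$, which is why the extra derivative evaluations per step pay off.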

Finite difference methods

Finite difference methods extend numerical techniques to PDEs by discretizing both space and time on a grid.

  1. Divide the domain into a grid of discrete points
  2. Replace derivatives with finite difference approximations (forward, backward, or central differences)
  3. Solve the resulting system of algebraic equations

These methods can be explicit (solution at the next time step computed directly from the current step) or implicit (requires solving a system of equations at each step, but is more stable).

Example: The heat equation $\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$ can be solved using forward differences in time and central differences in space, giving an update rule for each grid point.
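A sketch of that explicit scheme on $[0, 1]$ with $u = 0$ at both boundaries (the initial condition $u(x,0) = \sin(\pi x)$ is chosen here because its exact solution, $e^{-\alpha\pi^2 t}\sin(\pi x)$, provides a check). The explicit scheme is only stable for $r = \alpha\,\Delta t/\Delta x^2 \leq 1/2$:

```python
import math

# Explicit finite differences for u_t = alpha * u_xx:
# u_i^{n+1} = u_i^n + r * (u_{i+1}^n - 2 u_i^n + u_{i-1}^n).

alpha, nx, nt = 1.0, 50, 2000
dx = 1.0 / nx
dt = 0.4 * dx**2 / alpha          # r = 0.4, inside the stability limit 1/2
r = alpha * dt / dx**2
u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]

for _ in range(nt):
    u = ([0.0]                    # boundary u(0, t) = 0
         + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx)]
         + [0.0])                 # boundary u(1, t) = 0

t_final = nt * dt
exact_mid = math.exp(-alpha * math.pi**2 * t_final) * math.sin(math.pi * 0.5)
assert abs(u[nx // 2] - exact_mid) < 1e-3
```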

Applications of differential equations in control theory

Modeling of dynamic systems

Differential equations are the standard language for describing dynamic systems in control theory. The general approach:

  • Mechanical systems: Newton's second law gives $m\ddot{x} + b\dot{x} + kx = F(t)$ for a mass-spring-damper
  • Electrical circuits: Kirchhoff's laws yield equations like $L\frac{di}{dt} + Ri = V(t)$ for an RL circuit
  • Thermal systems: Energy balance gives equations relating temperature change to heat flow

In each case, the differential equation relates the system's inputs (forces, voltages, heat sources) to its outputs (position, current, temperature) through the system's physical parameters. These models form the foundation for controller design, where the goal is to choose inputs that produce desired output behavior.
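To close the loop between model and simulation, the mass-spring-damper equation can be rewritten as a first-order system ($\dot{x} = v$, $\dot{v} = (-bv - kx + F)/m$) and integrated numerically. This sketch uses the illustrative parameters $m = 1$, $b = 2$, $k = 1$ (critical damping) with no input, where the exact free response from $x(0) = 1$, $v(0) = 0$ is $x(t) = (1 + t)e^{-t}$:

```python
import math

# Free response of m*x'' + b*x' + k*x = 0 via forward Euler on the
# equivalent first-order system.

m, b, k = 1.0, 2.0, 1.0
x, v, h = 1.0, 0.0, 1e-4
for _ in range(10000):               # integrate to t = 1
    a = (-b * v - k * x) / m         # acceleration from the ODE
    x, v = x + h * v, v + h * a

assert abs(x - 2 * math.exp(-1.0)) < 1e-3   # exact: x(1) = 2/e
```

In a control setting, the zero forcing term would be replaced by the controller output, and the same simulation loop would predict the closed-loop response.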