
🎛️Control Theory Unit 6 Review


6.3 Lyapunov stability


Written by the Fiveable Content Team • Last updated August 2025

Lyapunov stability is a central concept in control theory for analyzing whether dynamical systems will stay near or return to equilibrium after small disturbances. The power of this approach is that you don't need to solve the system's differential equations directly. Instead, you construct a scalar Lyapunov function and examine its properties to draw conclusions about stability, asymptotic stability, or exponential stability.

Lyapunov stability definition

Lyapunov stability provides a rigorous framework for analyzing the behavior of dynamical systems near their equilibrium points. Rather than asking "what does the trajectory look like?" it asks "does the trajectory stay close to (or converge to) the equilibrium?"

Equilibrium points

An equilibrium point is a state where the system remains at rest if no external disturbance is applied. Formally, for a system $\dot{x} = f(x)$, a point $x_e$ is an equilibrium if $f(x_e) = 0$.

Physical examples: the resting position of a pendulum hanging straight down, or the steady-state operating point of a power system where generation matches load.

Stable vs unstable equilibrium

  • A stable equilibrium (in the sense of Lyapunov) means that trajectories starting close to the equilibrium remain close for all future time. Formally: for any $\epsilon > 0$, there exists a $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$, then $\|x(t) - x_e\| < \epsilon$ for all $t \geq 0$.
  • An unstable equilibrium is one where at least some nearby trajectories diverge away, no matter how small the initial perturbation.

For linear systems, stability can be checked by examining the eigenvalues of the system matrix (or the Jacobian for nonlinear systems linearized around the equilibrium). But Lyapunov's methods go further, handling cases where linearization is inconclusive.

Asymptotic stability

Asymptotic stability is stronger than plain stability. It requires two things:

  1. The equilibrium is stable (trajectories stay close).
  2. Trajectories starting sufficiently close actually converge to the equilibrium as $t \to \infty$.

Asymptotic stability implies stability, but not the other way around. A simple harmonic oscillator with no damping is stable (orbits stay bounded) but not asymptotically stable (it never settles to rest).
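The oscillator remark can be checked symbolically; here is a small sympy sketch (the unit-parameter oscillator is my own choice of example, not from this guide): for $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1$, the energy-like function $V = x_1^2 + x_2^2$ is exactly conserved along trajectories.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Undamped harmonic oscillator (unit mass, unit stiffness).
f1, f2 = x2, -x1

# Energy-like function and its derivative along trajectories.
V = x1**2 + x2**2
V_dot = sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2
print(sp.simplify(V_dot))  # 0: orbits neither grow nor shrink
```

Because $\dot{V}$ is identically zero, trajectories stay on level sets of $V$: bounded (stable) but never settling to the origin (not asymptotically stable).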

Exponential stability

Exponential stability is the strongest of the three. Here, the state converges to equilibrium at an exponential rate: there exist constants $\alpha > 0$ and $M > 0$ such that

$$\|x(t) - x_e\| \leq M \|x(0) - x_e\| e^{-\alpha t}$$

This guarantees not just convergence, but a quantifiable speed of convergence and inherent robustness to small perturbations.
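As a quick numerical sanity check (the scalar system and constants here are my own toy example), $\dot{x} = -2x$ has exact solution $x(0)e^{-2t}$, so the bound holds with $M = 1$ and $\alpha = 2$:

```python
import numpy as np

# Toy system: x_dot = -2*x, equilibrium x_e = 0.
# Exponential bound: |x(t)| <= M * |x(0)| * exp(-alpha * t), M = 1, alpha = 2.
alpha, M, x0 = 2.0, 1.0, 5.0

# Forward-Euler simulation of the trajectory.
dt, T = 1e-4, 5.0
ts = np.arange(0.0, T, dt)
x, traj = x0, []
for t in ts:
    traj.append(x)
    x += dt * (-2.0 * x)
traj = np.array(traj)

bound = M * abs(x0) * np.exp(-alpha * ts)
print(np.all(np.abs(traj) <= bound + 1e-9))  # True: trajectory under the envelope
```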

Lyapunov stability theorems

The Lyapunov stability theorems let you determine stability without solving the system's differential equations. They rely on constructing or analyzing scalar functions that act like generalized energy measures.

Lyapunov's first method

Lyapunov's first (indirect) method works by linearizing the nonlinear system around the equilibrium and checking the eigenvalues of the resulting linear system.

  1. Compute the Jacobian matrix $A = \frac{\partial f}{\partial x}\big|_{x = x_e}$.
  2. Find the eigenvalues of $A$.
  3. If all eigenvalues have strictly negative real parts, the equilibrium is locally asymptotically stable for the original nonlinear system.
  4. If any eigenvalue has a strictly positive real part, the equilibrium is unstable.
  5. If eigenvalues are on the imaginary axis (zero real part), the method is inconclusive for the nonlinear system.
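The steps above can be sketched in a few lines of Python; the damped pendulum below ($\dot{x}_1 = x_2$, $\dot{x}_2 = -\frac{g}{l}\sin x_1 - b x_2$) is my own illustrative system, not one from this guide:

```python
import numpy as np

# Damped pendulum: x1 = angle, x2 = angular velocity (illustrative parameters).
g, l, b = 9.81, 1.0, 0.5

# Step 1: Jacobian of f(x) = [x2, -(g/l)*sin(x1) - b*x2] at x_e = (0, 0);
# d/dx1 of -(g/l)*sin(x1) is -(g/l)*cos(x1), which equals -(g/l) at x1 = 0.
A = np.array([[0.0, 1.0],
              [-g / l, -b]])

# Steps 2-3: eigenvalues and their real parts.
eigvals = np.linalg.eigvals(A)
print(eigvals)
print(np.all(eigvals.real < 0))  # True: locally asymptotically stable
```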

Linearization

Linearization approximates a nonlinear system $\dot{x} = f(x)$ by the linear system $\dot{x} = Ax$ near the equilibrium, where $A$ is the Jacobian evaluated at $x_e$. This approximation is valid only locally, in a small neighborhood of the equilibrium. The key limitation: when the Jacobian has eigenvalues on the imaginary axis, the nonlinear terms that linearization discards can determine stability, so you need other tools.

Local vs global stability

  • Local stability means the equilibrium is stable for initial conditions within some neighborhood around it.
  • Global stability means the equilibrium is stable for all initial conditions in the state space.

Lyapunov's first method only gives local results. To establish global stability, you typically need a Lyapunov function that is valid over the entire state space (radially unbounded), which brings us to the second method.

Lyapunov's second method

Lyapunov's second (direct) method is the more powerful approach. Instead of linearizing, you construct a scalar Lyapunov function $V(x)$ and analyze how it evolves along system trajectories. If you can find a function that behaves like "energy" and always decreases (or at least never increases), you can conclude stability without ever solving for $x(t)$.

Lyapunov function properties

A valid Lyapunov function $V(x)$ for the system $\dot{x} = f(x)$ with equilibrium at the origin must satisfy:

  1. Positive definiteness: $V(x) > 0$ for all $x \neq 0$, and $V(0) = 0$.
  2. Time derivative condition:
    • For stability: $\dot{V}(x) \leq 0$ (negative semi-definite) along system trajectories.
    • For asymptotic stability: $\dot{V}(x) < 0$ (negative definite) for all $x \neq 0$.

The time derivative is computed as $\dot{V}(x) = \nabla V(x) \cdot f(x)$, which avoids solving the differential equation. You just plug in the system dynamics.

For global asymptotic stability, you additionally need $V(x) \to \infty$ as $\|x\| \to \infty$ (radial unboundedness).
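To make the "plug in the dynamics" step concrete, here is a sympy sketch with $\dot{x} = -x^3$, my own choice of example, picked because its linearization (eigenvalue $0$) is exactly the inconclusive case for the first method:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# System: x_dot = -x**3. Linearization at 0 gives eigenvalue 0 (inconclusive).
f = -x**3

# Candidate: V(x) = x**2, positive definite and radially unbounded.
V = x**2

# V_dot = dV/dx * f(x); no need to solve the ODE.
V_dot = sp.diff(V, x) * f
print(sp.simplify(V_dot))  # -2*x**4: negative definite for x != 0
```

Since $\dot{V} = -2x^4 < 0$ for $x \neq 0$ and $V$ is radially unbounded, the origin is globally asymptotically stable even though linearization says nothing.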

Positive definite functions

A function $V(x)$ is positive definite if $V(0) = 0$ and $V(x) > 0$ for all $x \neq 0$. The most common example is a quadratic form $V(x) = x^T P x$, where $P$ is a symmetric positive definite matrix (all eigenvalues of $P$ are positive). Positive definiteness of $V$ is what makes it behave like a "distance" or "energy" measure from the equilibrium.

Decrescent functions

A function $V(x)$ is decrescent if it can be upper-bounded by a continuous positive definite function: $V(x) \leq W(x)$ for some positive definite $W$. For autonomous systems this condition is automatically satisfied by continuous $V$, but it becomes important for non-autonomous (time-varying) systems, where $V(x,t)$ might grow with time even as $x$ stays small. The decrescent condition prevents that pathology and is needed to establish uniform asymptotic stability.


Finding Lyapunov functions

Finding a suitable Lyapunov function is the main practical challenge of the direct method. There's no universal recipe, but several well-established strategies exist.

Quadratic Lyapunov functions

For linear systems $\dot{x} = Ax$, the standard approach is to try $V(x) = x^T P x$ where $P$ is a symmetric positive definite matrix. The time derivative becomes:

$$\dot{V}(x) = x^T (A^T P + P A) x$$

For this to be negative definite, you need $A^T P + P A = -Q$ for some positive definite $Q$. This is the Lyapunov equation. The steps:

  1. Choose any positive definite matrix $Q$ (often $Q = I$).
  2. Solve the linear matrix equation $A^T P + P A = -Q$ for $P$.
  3. If the resulting $P$ is positive definite, the system is asymptotically stable.

This equation has a unique positive definite solution $P$ if and only if $A$ is Hurwitz (all eigenvalues have negative real parts). Quadratic functions also serve as starting candidates for nonlinear systems.
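The three steps can be sketched with SciPy. Note that `scipy.linalg.solve_continuous_lyapunov(a, q)` solves $aX + Xa^T = q$, so we pass $A^T$ and $-Q$; the matrix $A$ below is my own stable example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Step 1: pick Q positive definite (Q = I is the usual default).
Q = np.eye(2)

# Step 2: solve A^T P + P A = -Q (hence the transpose in the call).
P = solve_continuous_lyapunov(A.T, -Q)

# Step 3: check that P is positive definite.
print(np.allclose(A.T @ P + P @ A, -Q))   # True: residual check
print(np.all(np.linalg.eigvalsh(P) > 0))  # True: asymptotically stable
```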

Energy-based Lyapunov functions

For physical systems, a natural candidate is the system's total energy (kinetic + potential for mechanical systems, stored energy in capacitors and inductors for electrical circuits). Since physical dissipation causes energy to decrease over time, the total energy often works directly as a Lyapunov function. Even when it doesn't work perfectly, it's usually a good starting point to modify.
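A hedged sketch of the energy route, using a damped pendulum of my own choosing: differentiating total mechanical energy along the dynamics leaves only the dissipation term.

```python
import sympy as sp

theta, omega = sp.symbols('theta omega', real=True)
m, l, g, b = sp.symbols('m l g b', positive=True)

# Damped pendulum: theta_dot = omega,
# omega_dot = -(g/l)*sin(theta) - (b/(m*l**2))*omega.
theta_dot = omega
omega_dot = -(g / l) * sp.sin(theta) - (b / (m * l**2)) * omega

# Total energy: kinetic + potential (zero at the downward equilibrium).
V = sp.Rational(1, 2) * m * l**2 * omega**2 + m * g * l * (1 - sp.cos(theta))

# Differentiate along trajectories.
V_dot = sp.diff(V, theta) * theta_dot + sp.diff(V, omega) * omega_dot
print(sp.simplify(V_dot))  # -b*omega**2: energy only dissipates
```

Since $\dot{V} = -b\omega^2$ vanishes whenever $\omega = 0$, this derivative is only negative semi-definite; LaSalle's invariance principle (covered later in this guide) is what upgrades the conclusion to asymptotic stability.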

Sum of squares methods

Sum of squares (SOS) methods offer a computational, systematic approach. The idea: represent the Lyapunov function as a polynomial and enforce that both $V(x)$ and $-\dot{V}(x)$ can be written as sums of squared polynomials (which guarantees non-negativity). This can be formulated as a semidefinite program (SDP) and solved numerically. SOS methods scale to moderate dimensions and polynomial degrees, making them practical for many engineering systems.

Constructing Lyapunov functions

Common strategies when no obvious candidate exists:

  1. Physics-based guessing: Use energy-like quantities from the system's physical interpretation.
  2. Conserved quantities: If the system has first integrals or conserved quantities, use them as building blocks.
  3. Structural exploitation: Systems with passivity properties or feedback structure often admit natural Lyapunov candidates.
  4. Computational search: Use SOS programming, linear matrix inequalities (LMIs), or even machine learning techniques to search for valid functions.

Trial-and-error is genuinely part of the process. If your first candidate doesn't work, modify it and check again.

Lyapunov function existence

The existence of a Lyapunov function is sufficient for stability but finding one isn't always possible in practice. Some stable systems (particularly those with non-smooth dynamics) may not admit smooth Lyapunov functions. However, converse Lyapunov theorems guarantee that if a system is asymptotically stable, a smooth Lyapunov function does exist (under mild regularity conditions). The gap is practical, not theoretical: the function exists, but constructing it explicitly can be very difficult.

Applications of Lyapunov stability

Lyapunov stability theory is used across control engineering wherever you need stability guarantees without closed-form solutions.

Stability analysis of nonlinear systems

This is the primary application. When linearization is inconclusive (e.g., eigenvalues on the imaginary axis) or when you need to estimate the region of attraction (the set of initial conditions from which the system converges to equilibrium), Lyapunov's direct method is the go-to tool. Typical examples include power system transient stability, robotic manipulator control, and chemical reactor dynamics.

Adaptive control

In adaptive control, system parameters are unknown or change over time. Lyapunov functions are constructed to capture both the tracking error and the parameter estimation error. The adaptive law is then designed so that $\dot{V} \leq 0$, guaranteeing that tracking errors remain bounded and (with additional arguments like Barbalat's lemma) converge to zero.

Robust control

Robust control designs controllers that maintain stability despite model uncertainties and disturbances. Lyapunov-based approaches derive conditions that must hold for an entire family of possible system models. Techniques like $H_\infty$ control, sliding mode control, and passivity-based control all rely on Lyapunov arguments to certify robustness.

Optimal control

In optimal control, the goal is to minimize a cost function subject to system dynamics. Lyapunov stability theory ensures the resulting closed-loop system is stable. The Lyapunov function can appear as a constraint in the optimization or can be related to the value function in dynamic programming (the value function of an optimal control problem is itself a Lyapunov function for the closed-loop system).

Stability of time-varying systems

For systems where the dynamics change with time ($\dot{x} = f(x, t)$), the Lyapunov function becomes time-dependent: $V(x, t)$. The stability conditions now involve:

$$\dot{V}(x,t) = \frac{\partial V}{\partial t} + \nabla_x V \cdot f(x,t)$$

Additional conditions like decrescence and uniform positive definiteness become important here. Applications include periodically varying systems, gain-scheduled controllers, and systems with switching dynamics.


Lyapunov stability extensions

Several extensions handle situations where the classical theorems don't directly apply.

Barbalat's lemma

Barbalat's lemma fills a common gap: you've shown $\dot{V} \leq 0$ and $V$ is bounded below, so $V$ converges to a limit, but you can't conclude that $\dot{V} \to 0$ directly. Barbalat's lemma states:

If $f(t)$ is uniformly continuous and $\int_0^\infty f(\tau)\, d\tau$ exists and is finite, then $f(t) \to 0$ as $t \to \infty$.

This is heavily used in adaptive control proofs, where $\dot{V}$ is only negative semi-definite but you need to show that certain error signals converge to zero.

Invariance principle

LaSalle's invariance principle relaxes the requirement that $\dot{V}$ be strictly negative definite. It states:

If $\dot{V}(x) \leq 0$ in a compact region and trajectories are bounded, then the system state converges to the largest invariant set contained in $\{x : \dot{V}(x) = 0\}$.

This is extremely useful because in many systems, $\dot{V} = 0$ only at the equilibrium itself, so you recover asymptotic stability even though $\dot{V}$ is only negative semi-definite. The invariance principle applies to autonomous systems; for non-autonomous systems, you need Barbalat's lemma or similar tools.

Stability of non-autonomous systems

Non-autonomous systems ($\dot{x} = f(x,t)$) require time-varying Lyapunov functions $V(x,t)$. The analysis is more delicate because you need:

  • Uniform positive definiteness: $V(x,t) \geq \alpha(\|x\|)$ for some class-$\mathcal{K}$ function $\alpha$, independent of $t$.
  • Decrescence: $V(x,t) \leq \beta(\|x\|)$ for some class-$\mathcal{K}$ function $\beta$.
  • Negative definiteness of $\dot{V}$ along trajectories.

Without these uniformity conditions, you can construct pathological examples where $V$ decreases but the state doesn't converge.

Input-to-state stability

Input-to-state stability (ISS) extends Lyapunov theory to systems with external inputs $\dot{x} = f(x, u)$. A system is ISS if:

  • For bounded inputs, the state remains bounded.
  • The state ultimately converges to a neighborhood of the origin whose size scales with the input magnitude; in particular, if the input decays to zero, the state converges to the origin.

ISS is characterized by the existence of an ISS-Lyapunov function satisfying a dissipation inequality of the form:

$$\dot{V}(x) \leq -\alpha(\|x\|) + \gamma(\|u\|)$$

where $\alpha$ and $\gamma$ are class-$\mathcal{K}$ functions. ISS provides a clean framework for analyzing interconnected systems and robustness to disturbances.
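A small simulation illustrates the ISS behavior (the system $\dot{x} = -x + u$ with $u = \sin t$ is my own toy example): starting far from the origin, the state decays into a ball whose radius is set by the input amplitude. Here $V = \frac{1}{2}x^2$ gives $\dot{V} = -x^2 + xu \leq -\frac{1}{2}x^2$ whenever $|x| \geq 2|u|$, which is exactly the dissipation-inequality pattern above.

```python
import numpy as np

# Toy ISS system: x_dot = -x + u, with bounded input u(t) = sin(t).
dt, T = 1e-3, 20.0
ts = np.arange(0.0, T, dt)
x, traj = 5.0, []            # start far from the origin
for t in ts:
    traj.append(x)
    u = np.sin(t)            # |u| <= 1
    x += dt * (-x + u)
traj = np.array(traj)

# After the transient dies out, |x(t)| settles near the steady response to
# sin(t), whose amplitude is 1/sqrt(2) for this system.
print(np.max(np.abs(traj[len(traj) // 2:])))  # about 0.71
```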

Converse Lyapunov theorems

Converse theorems answer the question: "If a system is stable, must a Lyapunov function exist?" The answer is generally yes:

  • If the origin is uniformly asymptotically stable, there exists a smooth Lyapunov function satisfying all the standard conditions.
  • For exponential stability, the converse Lyapunov function can be taken as quadratic.

These results are theoretically important because they establish that Lyapunov's method is not just sufficient but, in principle, also necessary. The practical challenge remains: the converse theorems guarantee existence but don't tell you how to construct the function.

Limitations of Lyapunov stability

Conservativeness of Lyapunov functions

Lyapunov conditions are sufficient but not necessary. A bad choice of Lyapunov function can fail to prove stability even when the system is perfectly stable. The estimated region of attraction also depends heavily on the chosen function and may be much smaller than the true region. There's no guarantee that any particular candidate will work, and tighter results require more sophisticated (and harder to find) Lyapunov functions.

Computational complexity

For high-dimensional or highly nonlinear systems, searching for Lyapunov functions becomes computationally expensive. SOS methods scale polynomially in the number of variables and the polynomial degree, but this can still be prohibitive for large systems. The Lyapunov equation for linear systems is straightforward, but nonlinear problems often require iterative numerical approaches.

Stability vs convergence

Lyapunov theory tells you whether a system is stable but often says little about how fast it converges or what the transient response looks like. Exponential stability gives a convergence rate bound, but for plain asymptotic stability, the convergence could be arbitrarily slow. Characterizing transient behavior typically requires additional analysis beyond the Lyapunov framework.

Stability under perturbations

Standard Lyapunov analysis assumes a perfect model. Real systems have parameter uncertainties, unmodeled dynamics, and external disturbances. Extending Lyapunov results to handle these requires robust Lyapunov functions, ISS analysis, or other tools that add complexity. A system proven stable under nominal conditions may lose stability when perturbations are present.

Stability of hybrid systems

Hybrid systems combine continuous dynamics with discrete switching or jumps. Lyapunov analysis for these systems is significantly more complex because:

  • Each continuous mode may be stable individually, but switching between them can destabilize the overall system (and vice versa).
  • The Lyapunov function may need to be piecewise or discontinuous.
  • You need to account for what happens at switching instants, not just during continuous flow.

Tools like multiple Lyapunov functions, common Lyapunov functions, and dwell-time conditions have been developed to address these challenges, but the analysis remains considerably harder than for purely continuous systems.
