🎛️Control Theory Unit 10 Review


10.4 Lyapunov-based control

Written by the Fiveable Content Team • Last updated August 2025

Lyapunov-based control lets you analyze and design controllers for nonlinear systems without having to solve the underlying differential equations directly. Instead, you construct a scalar energy-like function and show that it decreases over time, which proves stability. This makes it one of the most broadly applicable tools in nonlinear control.

Lyapunov stability theory

Lyapunov stability theory gives you a way to determine whether an equilibrium point of a nonlinear system is stable, all without explicitly solving the system's differential equations. For linear systems you can check eigenvalues, but nonlinear systems rarely have such clean solutions. Lyapunov's framework sidesteps this by asking a different question: can you find an "energy-like" function that always decreases along the system's trajectories?

Lyapunov functions

A Lyapunov function $V(x)$ is a scalar function that acts as a generalized measure of energy for the system. To qualify, it must be:

  • Positive definite: $V(x) > 0$ for all $x \neq 0$, and $V(0) = 0$
  • Continuously differentiable in a neighborhood of the equilibrium

You then examine its time derivative $\dot{V}(x)$ along the system trajectories. If $\dot{V}(x) \leq 0$, the "energy" never increases, which tells you the equilibrium is stable. If $\dot{V}(x) < 0$ strictly, the energy is always draining, and the system converges to the equilibrium.

The key insight: you never need to solve for $x(t)$ explicitly. The Lyapunov function does the work for you.
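
As a minimal sketch of this idea, take the scalar system $\dot{x} = -x^3$ (an illustrative choice, not from the text) with candidate $V(x) = \frac{1}{2}x^2$; the chain rule gives $\dot{V}(x) = x \cdot f(x) = -x^4$, which you can check numerically without ever integrating the ODE:

```python
# Sketch: verify dV/dt = (dV/dx) * f(x) <= 0 at sample states, without
# solving the ODE. System x' = -x**3 and V(x) = 0.5*x**2 are assumed
# illustrative choices, so Vdot(x) = x * (-x**3) = -x**4.

def f(x):
    return -x**3          # nonlinear dynamics

def V(x):
    return 0.5 * x**2     # Lyapunov candidate: positive definite

def Vdot(x):
    return x * f(x)       # chain rule: (dV/dx) * f(x)

# Vdot is strictly negative away from the origin -> asymptotic stability
samples = [-2.0, -0.5, 0.3, 1.7]
assert all(Vdot(x) < 0 for x in samples)
assert V(0.0) == 0.0 and all(V(x) > 0 for x in samples)
```

Since $\dot{V} < 0$ for every $x \neq 0$, the origin of this example system is asymptotically stable.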

Positive definite functions

A function $V(x)$ is positive definite if $V(x) > 0$ for all $x \neq 0$ and $V(0) = 0$. Common examples:

  • Quadratic forms: $V(x) = x^T P x$ where $P$ is a symmetric positive definite matrix. These are the most frequently used candidates because they're easy to differentiate and connect directly to linear matrix inequality (LMI) methods.
  • Squared Euclidean norm: $V(x) = \|x\|^2$, which is just the special case where $P = I$.
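
A quick way to check that a candidate $P$ is positive definite is Sylvester's criterion (all leading principal minors positive). A minimal sketch for the 2×2 case, with arbitrary illustrative matrices:

```python
# Sketch: positive-definiteness check for a symmetric 2x2 matrix P via
# Sylvester's criterion (all leading principal minors positive).
# The matrices below are illustrative choices, not from the text.

def is_positive_definite_2x2(P):
    a, b = P[0][0], P[0][1]
    c, d = P[1][0], P[1][1]
    assert b == c, "P must be symmetric"
    return a > 0 and (a * d - b * c) > 0   # leading minors: a, det(P)

P_good = [[2.0, 1.0], [1.0, 3.0]]   # minors: 2 > 0, 5 > 0
P_bad  = [[1.0, 2.0], [2.0, 1.0]]   # det = -3 < 0 -> indefinite
assert is_positive_definite_2x2(P_good)
assert not is_positive_definite_2x2(P_bad)
```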

Radially unbounded functions

A function $V(x)$ is radially unbounded if $V(x) \rightarrow \infty$ as $\|x\| \rightarrow \infty$. This property matters when you want to make global stability claims. Without it, your Lyapunov function might only "work" in a local region around the equilibrium. A radially unbounded Lyapunov function ensures that no matter how far the state starts from the origin, the energy landscape still captures it.

Equilibrium points

An equilibrium point is any state where $\dot{x} = f(x) = 0$. The system has no tendency to move away from (or toward) this point on its own. The origin $x = 0$ is the most common equilibrium studied, but nonlinear systems can have multiple equilibria, and Lyapunov analysis can be applied to each one individually (by shifting coordinates so the equilibrium of interest sits at the origin).

Lyapunov's direct method

Lyapunov's direct method (also called the "second method") is the core technique. You construct a candidate Lyapunov function, compute its time derivative along the system dynamics, and read off stability conclusions from the sign of that derivative. No linearization required.

Stability in the sense of Lyapunov

An equilibrium point is stable in the sense of Lyapunov if trajectories that start close to it stay close forever. Formally: for any $\epsilon > 0$, there exists $\delta > 0$ such that $\|x(0)\| < \delta$ implies $\|x(t)\| < \epsilon$ for all $t \geq 0$.

To prove this, find a Lyapunov function $V(x)$ with $\dot{V}(x) \leq 0$ in a neighborhood of the equilibrium. The non-increasing energy keeps trajectories confined.

Note that stability alone does not mean the state converges to the equilibrium. It just doesn't wander away.

Asymptotic stability

Asymptotic stability adds convergence on top of stability: the state not only stays close but actually returns to the equilibrium as $t \rightarrow \infty$. Formally, the equilibrium is asymptotically stable if it is stable and there exists $\delta > 0$ such that $\|x(0)\| < \delta$ implies $\lim_{t \rightarrow \infty} x(t) = 0$.

To prove asymptotic stability, you need a stricter condition: $\dot{V}(x) < 0$ for all $x \neq 0$ in a neighborhood of the equilibrium. The energy is strictly decreasing, so the state must converge.
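
For a concrete instance, take the linear system $\dot{x} = Ax$ with a Hurwitz $A$ (an illustrative choice) and $V(x) = x^T P x$, so $\dot{V}(x) = x^T(A^T P + P A)x$. A $P$ solving the Lyapunov equation $A^T P + P A = -I$ then makes $\dot{V}(x) = -\|x\|^2$, which a short pure-Python sketch can confirm:

```python
# Sketch: for the linear system x' = A x with a Hurwitz A (illustrative
# choice), V(x) = x^T P x gives Vdot(x) = x^T (A^T P + P A) x.
# P below solves A^T P + P A = -I, worked out by hand for this A.

A = [[0.0, 1.0], [-2.0, -3.0]]          # eigenvalues -1 and -2
P = [[1.25, 0.25], [0.25, 0.25]]        # solves A^T P + P A = -I

def quad(M, x):
    # x^T M x for a 2x2 matrix M
    return (x[0] * (M[0][0] * x[0] + M[0][1] * x[1])
            + x[1] * (M[1][0] * x[0] + M[1][1] * x[1]))

def Vdot(x):
    # form A^T P + P A entrywise, then evaluate the quadratic form
    M = [[sum(A[k][i] * P[k][j] + P[i][k] * A[k][j] for k in range(2))
          for j in range(2)] for i in range(2)]
    return quad(M, x)

# Vdot(x) = -||x||^2 < 0 for every x != 0 -> asymptotic stability
for x in ([1.0, 0.0], [0.3, -2.0], [-1.5, 0.7]):
    assert abs(Vdot(x) + (x[0]**2 + x[1]**2)) < 1e-12
```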

Global asymptotic stability

Global asymptotic stability (GAS) means asymptotic stability holds for every initial condition, not just those near the equilibrium. To prove GAS via Lyapunov's direct method:

  1. Find a positive definite function $V(x)$
  2. Show $V(x)$ is radially unbounded
  3. Show $\dot{V}(x) < 0$ for all $x \neq 0$

The radial unboundedness is what upgrades the result from local to global. Without it, you can only claim local asymptotic stability.

Instability

An equilibrium is unstable if it fails to be stable in the sense of Lyapunov. Lyapunov-style arguments can also prove instability: by Chetaev's theorem, if you can find a function $V(x)$ that takes positive values arbitrarily close to the equilibrium and satisfies $\dot{V}(x) > 0$ wherever $V(x) > 0$ in that neighborhood, trajectories are pushed away from the equilibrium.

Lyapunov's indirect method

Lyapunov's indirect method (the linearization method) takes a different approach: linearize the nonlinear system around the equilibrium and analyze the linear system's stability. This is often simpler, but it only gives you local information.


Linearization

To linearize a nonlinear system $\dot{x} = f(x)$ around an equilibrium $x_e$:

  1. Compute the Jacobian matrix $A = \left.\frac{\partial f}{\partial x}\right|_{x = x_e}$

  2. The linearized system is $\dot{\tilde{x}} = A\tilde{x}$, where $\tilde{x} = x - x_e$

  3. Higher-order terms are neglected

The linearized system approximates the nonlinear dynamics only near the equilibrium, so conclusions drawn from it are strictly local.

Jacobian matrix

The Jacobian matrix $A$ contains the partial derivatives $A_{ij} = \frac{\partial f_i}{\partial x_j}$ evaluated at the equilibrium. Its eigenvalues determine the local behavior:

  • All eigenvalues with negative real parts → the equilibrium is locally asymptotically stable
  • Any eigenvalue with a positive real part → the equilibrium is unstable
  • Eigenvalues on the imaginary axis → the linearization is inconclusive, and you need the direct method
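
The whole indirect method can be sketched numerically: estimate the Jacobian by central differences and inspect the eigenvalue real parts. The damped-pendulum dynamics and the damping value below are illustrative assumptions, not from the text:

```python
import math
import cmath

# Sketch: finite-difference Jacobian of a damped pendulum
# x1' = x2, x2' = -sin(x1) - 0.5*x2 (illustrative damping value),
# then eigenvalues of the 2x2 Jacobian at each equilibrium.

def f(x):
    return [x[1], -math.sin(x[0]) - 0.5 * x[1]]

def jacobian(x, h=1e-6):
    # central differences: J[i][j] ~ d f_i / d x_j
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def eig2(J):
    # closed-form eigenvalues of a 2x2 matrix via trace/determinant
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Hanging equilibrium (0, 0): both real parts negative -> locally stable
lam = eig2(jacobian([0.0, 0.0]))
assert all(l.real < 0 for l in lam)

# Inverted equilibrium (pi, 0): a positive real part -> unstable
lam = eig2(jacobian([math.pi, 0.0]))
assert any(l.real > 0 for l in lam)
```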

Hurwitz stability criterion

The Hurwitz criterion (in practice, usually applied as the Routh-Hurwitz test) checks whether all roots of the characteristic polynomial have strictly negative real parts using only the polynomial's coefficients, so you never have to compute the eigenvalues themselves. A matrix is Hurwitz stable if and only if all of its eigenvalues have strictly negative real parts.

In the context of the indirect method, you apply the Hurwitz criterion to the Jacobian AA. If AA is Hurwitz, the nonlinear equilibrium is locally asymptotically stable. If any eigenvalue of AA has a positive real part, the equilibrium is unstable. The critical limitation: if eigenvalues sit exactly on the imaginary axis, the indirect method cannot determine stability of the nonlinear system.
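
For a monic cubic characteristic polynomial $s^3 + a_2 s^2 + a_1 s + a_0$, the Routh-Hurwitz conditions reduce to three coefficient inequalities, which makes a compact sketch:

```python
# Sketch: Routh-Hurwitz test for a monic cubic characteristic polynomial
# s^3 + a2*s^2 + a1*s + a0, which is Hurwitz iff
# a2 > 0, a0 > 0, and a2*a1 > a0.

def cubic_is_hurwitz(a2, a1, a0):
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

# (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6: all roots in the left half-plane
assert cubic_is_hurwitz(6, 11, 6)
# s^3 + s^2 + s + 2: a2*a1 = 1 < 2 = a0 -> right-half-plane roots
assert not cubic_is_hurwitz(1, 1, 2)
```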

Lyapunov-based controller design

The real power of Lyapunov theory in control comes from flipping the analysis problem around: instead of checking whether a given system is stable, you design a control law $u$ that makes a chosen Lyapunov function decrease. This guarantees closed-loop stability by construction.

Control Lyapunov functions

A control Lyapunov function (CLF) is a positive definite, radially unbounded function $V(x)$ with the property that for every $x \neq 0$, there exists some control input $u$ that makes $\dot{V}(x, u) < 0$. In other words, no matter where the state is, you can always find an input that decreases the energy.

The formal condition is:

$$\inf_u \dot{V}(x, u) < 0 \quad \forall x \neq 0$$

If a CLF exists for your system, a stabilizing controller exists. The CLF doesn't tell you the best controller directly, but it guarantees one is out there.

Sontag's universal formula

Once you have a CLF, Sontag's formula gives you an explicit, closed-form control law. For an affine system $\dot{x} = f(x) + g(x)u$, the formula computes $u$ directly from the Lie derivatives of $V$ along $f$ and $g$:

$$u = \begin{cases} -\dfrac{L_f V + \sqrt{(L_f V)^2 + (L_g V)^4}}{L_g V} & \text{if } L_g V \neq 0 \\ 0 & \text{if } L_g V = 0 \end{cases}$$

where $L_f V = \nabla V \cdot f(x)$ and $L_g V = \nabla V \cdot g(x)$. This controller is smooth on $\mathbb{R}^n \setminus \{0\}$, is continuous at the origin whenever the CLF satisfies the small-control property, and guarantees asymptotic stability.
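
As a hedged example, the formula can be applied to the scalar affine system $\dot{x} = x^3 + u$ (an illustrative choice) with CLF $V = \frac{1}{2}x^2$, so that $L_f V = x^4$ and $L_g V = x$:

```python
import math

# Sketch: Sontag's formula for the scalar affine system x' = x^3 + u
# (illustrative), with CLF V(x) = x^2/2, so LfV = x^4 and LgV = x.

def sontag(x):
    LfV, LgV = x**4, x
    if LgV == 0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

def Vdot_closed(x):
    # closed-loop Vdot = LfV + LgV*u = -sqrt(LfV^2 + LgV^4) < 0 for x != 0
    return x**4 + x * sontag(x)

for x in (-1.5, -0.2, 0.4, 2.0):
    assert Vdot_closed(x) < 0

# Short Euler simulation: open-loop x' = x^3 escapes to infinity in
# finite time, but under u = sontag(x) the state decays to the origin.
x, dt = 1.0, 1e-3
for _ in range(5000):
    x += dt * (x**3 + sontag(x))
assert abs(x) < 0.1
```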

Artstein's theorem

Artstein's theorem establishes the fundamental equivalence: a nonlinear system admits a smooth stabilizing feedback law if and only if it admits a CLF. This result is the theoretical backbone of CLF-based design. It tells you that searching for a CLF is not just sufficient for finding a stabilizing controller; it's necessary too (for smooth feedback).

Inverse optimal control

Inverse optimal control works backward from stability to optimality. Instead of specifying a cost functional and solving a Hamilton-Jacobi-Bellman equation (which is typically intractable for nonlinear systems), you:

  1. Start with a CLF $V(x)$
  2. Design a stabilizing control law using the CLF
  3. Identify a meaningful cost functional that this control law happens to minimize

The result is a controller that is both stabilizing and optimal with respect to some cost. You get optimality "for free" without solving a hard optimization problem. The trade-off is that you don't get to choose the cost functional in advance.

Robust Lyapunov-based control

Real systems always have uncertainties: unmodeled dynamics, parameter variations, external disturbances. Robust Lyapunov-based control extends the standard framework to guarantee stability even when the system model isn't perfectly known.

Input-to-state stability

Input-to-state stability (ISS) characterizes how a system responds to external inputs or disturbances. A system is ISS if:

  • Bounded inputs produce bounded states
  • When the input goes to zero, the state converges to the origin

More precisely, the state satisfies a bound involving a class-$\mathcal{KL}$ function of the initial condition (which decays over time) plus a class-$\mathcal{K}$ function of the input magnitude. ISS is the nonlinear generalization of bounded-input bounded-output (BIBO) stability, but it's stronger because it also captures the transient behavior.
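
A sketch of such a bound for the scalar system $\dot{x} = -x + u$ (an illustrative choice): here the state obeys $|x(t)| \leq |x(0)|e^{-t} + \sup_s |u(s)|$, a decaying term in the initial condition plus a term in the input bound, which an Euler simulation can check against a particular bounded input:

```python
import math

# Sketch: the scalar system x' = -x + u is ISS. For |u| <= ubar the
# state satisfies |x(t)| <= |x0|*exp(-t) + ubar (a decaying KL term
# plus a K term of the input bound). Euler simulation with an
# illustrative input u(t) = 0.5*sin(3t).

dt, ubar = 1e-3, 0.5
x0 = 2.0
x = x0
worst_violation = 0.0
for k in range(10000):                  # t in [0, 10]
    t = k * dt
    u = ubar * math.sin(3 * t)
    bound = abs(x0) * math.exp(-t) + ubar
    worst_violation = max(worst_violation, abs(x) - bound)
    x += dt * (-x + u)

assert worst_violation <= 1e-6          # state never exceeds the ISS bound
```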


Integral input-to-state stability

Integral ISS (iISS) is a weaker but still useful robustness property. Instead of bounding the state by the supremum of the input (as in ISS), iISS bounds it by the integral of (a function of) the input over time. This is relevant for disturbances with cumulative effects, such as persistent biases or slowly drifting parameters. Every ISS system is also iISS, but iISS additionally gives meaningful guarantees for inputs that are unbounded yet have finite integral.

Robust control Lyapunov functions

A robust CLF (RCLF) extends the CLF concept to uncertain systems. The requirement is that for every $x \neq 0$ and for all admissible uncertainties, there exists a control input making $\dot{V} < 0$. If you can find an RCLF, you can design a single controller that stabilizes the system across the entire uncertainty set, not just for one nominal model.

Applications of Lyapunov-based control

Lyapunov-based techniques appear across robotics, aerospace, process control, and power systems. Their ability to provide formal stability guarantees makes them especially valuable in safety-critical applications.

Nonlinear systems

Nonlinear systems are the primary motivation for Lyapunov-based control. These systems can exhibit behaviors that linear theory simply cannot capture: multiple equilibria, limit cycles, bifurcations, and chaos. Lyapunov-based controllers exploit the nonlinear structure directly rather than trying to cancel it out, often leading to more effective and less energy-intensive control.

Adaptive control

Adaptive control handles systems with unknown or slowly varying parameters. The typical Lyapunov-based adaptive design augments the Lyapunov function with terms that penalize parameter estimation error. The update law for the parameter estimates is then chosen so that the overall Lyapunov function still decreases. This simultaneously drives the state to the equilibrium and improves the parameter estimates, all with a single stability proof.
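
A minimal sketch of this design for $\dot{x} = \theta x + u$ with unknown constant $\theta$ (the true value, gains, and adaptation rate below are illustrative assumptions):

```python
# Sketch: Lyapunov-based adaptive control of x' = theta*x + u with
# unknown theta (true value 2.0, illustrative). The certainty-equivalence
# law u = -theta_hat*x - k*x and update theta_hat' = gamma*x**2 make
# V = x**2/2 + (theta - theta_hat)**2/(2*gamma) satisfy Vdot = -k*x**2,
# a single proof covering both the state and the estimate.

theta, k, gamma, dt = 2.0, 1.0, 5.0, 1e-3
x, theta_hat = 1.0, 0.0

for _ in range(20000):                    # t in [0, 20], Euler steps
    u = -theta_hat * x - k * x            # control with current estimate
    x += dt * (theta * x + u)
    theta_hat += dt * gamma * x**2        # Lyapunov-derived update law

assert abs(x) < 1e-2                      # state driven to the origin
assert 1.0 < theta_hat < 5.5              # estimate stays bounded
```

Note that $\dot{V} = -kx^2$ only guarantees the state converges; the estimate $\hat{\theta}$ merely stays bounded and need not reach the true $\theta$.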

Sliding mode control

Sliding mode control (SMC) uses a discontinuous control law to force the state onto a lower-dimensional sliding surface, then keeps it there. Lyapunov analysis enters in two places:

  1. Reaching phase: A Lyapunov function proves that the state reaches the sliding surface in finite time
  2. Sliding phase: The dynamics on the surface are analyzed (often via a separate Lyapunov argument) to ensure convergence to the equilibrium

SMC is highly robust to matched uncertainties (those entering through the same channel as the control input), but it produces chattering (high-frequency switching) that can be problematic in practice.
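
Both phases show up in a minimal sketch for a double integrator with a matched disturbance (the surface slope, switching gain, and disturbance below are illustrative choices):

```python
import math

# Sketch: sliding mode control of a double integrator x1' = x2,
# x2' = u + d(t), with matched disturbance |d| <= 0.2 (illustrative).
# Sliding surface s = x2 + lam*x1; u = -lam*x2 - k*sgn(s) gives
# s' = -k*sgn(s) + d, so |s| shrinks at rate >= k - 0.2 (reaching
# phase), and on s = 0 the sliding dynamics are x1' = -lam*x1.

lam, k, dt = 1.0, 1.0, 1e-3
x1, x2 = 1.0, 0.0

def sgn(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

for i in range(10000):                    # t in [0, 10], Euler steps
    t = i * dt
    d = 0.2 * math.sin(5 * t)             # matched disturbance
    s = x2 + lam * x1
    u = -lam * x2 - k * sgn(s)            # discontinuous control
    x1 += dt * x2
    x2 += dt * (u + d)

assert abs(x2 + lam * x1) < 0.01          # state sits in a band around s = 0
assert abs(x1) < 0.01                     # sliding dynamics drove x1 to 0
```

The discrete-time switching leaves the state chattering in a thin band around $s = 0$ rather than exactly on it, which is the practical chattering issue mentioned above.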

Backstepping control

Backstepping is a recursive design method for systems in strict-feedback (triangular) form:

$$\dot{x}_1 = f_1(x_1) + g_1(x_1)\,x_2$$
$$\dot{x}_2 = f_2(x_1, x_2) + g_2(x_1, x_2)\,x_3$$
$$\vdots$$
$$\dot{x}_n = f_n(x) + g_n(x)\,u$$

The procedure works step by step:

  1. Treat $x_2$ as a "virtual control" for the $x_1$ subsystem. Design a stabilizing virtual control $\alpha_1(x_1)$ using a Lyapunov function $V_1(x_1)$.

  2. Define the error $z_2 = x_2 - \alpha_1(x_1)$. Augment the Lyapunov function to $V_2 = V_1 + \frac{1}{2}z_2^2$ and design the next virtual control $\alpha_2(x_1, x_2)$, which $x_3$ should track.

  3. Repeat until you reach the actual control input $u$ at the last step.

Each step "backs up" through the cascade, building the Lyapunov function and control law simultaneously. The final controller stabilizes the entire system with a single, composite Lyapunov function as proof.
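
A two-state sketch makes the recursion concrete; the system $\dot{x}_1 = x_1^2 + x_2$, $\dot{x}_2 = u$ and the gains below are illustrative assumptions:

```python
# Sketch: backstepping for the strict-feedback system
# x1' = x1**2 + x2, x2' = u (illustrative). Step 1: virtual control
# alpha1 = -x1**2 - k1*x1 stabilizes x1. Step 2: with z2 = x2 - alpha1
# and V2 = x1**2/2 + z2**2/2, the u below cancels the cross terms and
# gives V2' = -k1*x1**2 - k2*z2**2.

k1, k2, dt = 1.0, 1.0, 1e-3
x1, x2 = 0.5, -0.5

for _ in range(10000):                    # t in [0, 10], Euler steps
    alpha1 = -x1**2 - k1 * x1             # virtual control for x1
    z2 = x2 - alpha1                      # tracking error for step 2
    dalpha1 = -2 * x1 - k1                # d(alpha1)/dx1
    x1dot = x1**2 + x2
    u = dalpha1 * x1dot - x1 - k2 * z2    # final backstepping law
    x1 += dt * x1dot
    x2 += dt * u

assert abs(x1) < 1e-2 and abs(x2) < 1e-2  # both states converge
```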

Limitations of Lyapunov-based control

Conservativeness

Lyapunov conditions are sufficient for stability but not necessary. A system might be perfectly stable, yet your chosen Lyapunov function fails to prove it. This can lead to controllers that are more aggressive or restrictive than needed, sacrificing performance for a stability guarantee that could have been achieved with a less conservative design. The gap between what Lyapunov analysis proves and what is actually true depends heavily on how well-chosen the Lyapunov function is.

Computational complexity

For high-dimensional systems or complex nonlinearities, constructing Lyapunov functions and solving the associated conditions (often formulated as optimization problems like LMIs or sum-of-squares programs) can become computationally expensive. This is a real concern for real-time control applications where the controller must compute inputs at high rates.

Lyapunov function construction

Finding a suitable Lyapunov function remains the central challenge. There is no universal recipe. For mechanical systems, total energy is a natural starting point. For other systems, you often rely on quadratic candidates $V = x^T P x$ and solve for $P$, or use physical intuition. When these approaches fail, more advanced tools are available:

  • Sum-of-squares (SOS) programming: Searches for polynomial Lyapunov functions by converting the problem into a semidefinite program
  • Machine learning approaches: Neural networks trained to approximate Lyapunov functions, with verification steps to confirm validity
  • Computational tools: Software like SOSTOOLS that automates parts of the search, along with cheaper DSOS/SDSOS relaxations of the SOS conditions

These methods are active research areas and are steadily making Lyapunov-based design more accessible, but the problem remains fundamentally hard for general nonlinear systems.