6.1 Stability concepts

Written by the Fiveable Content Team • Last updated August 2025

Stability analysis answers a simple but critical question: if you start a system near a desired operating point, will it stay there? This unit introduces the mathematical tools for answering that question, starting with Lyapunov's framework for nonlinear systems and extending to input-output notions like BIBO stability. These concepts underpin nearly every controller design method you'll encounter.

Stability in control systems

Stability describes a system's ability to maintain a desired state or behavior when disturbances or uncertainties act on it. A system that isn't stable is, at best, useless for control purposes and, at worst, dangerous.

Several distinct notions of stability exist because different situations call for different guarantees:

  • Lyapunov stability addresses whether the internal state of a (possibly nonlinear) system stays near an equilibrium point.
  • BIBO stability asks whether bounded inputs always produce bounded outputs, focusing on the external input-output relationship of linear systems.
  • Input-output stability generalizes BIBO ideas to nonlinear systems.

Each notion captures something different, so knowing which one applies to your problem matters.

Lyapunov stability theory

Lyapunov stability theory lets you analyze nonlinear systems without solving their differential equations. The core idea: construct an "energy-like" scalar function (a Lyapunov function) and show that it doesn't increase along the system's trajectories. If the system's "energy" can't grow, the state can't run away.

Lyapunov theory gives sufficient conditions for stability. If you find a valid Lyapunov function, you've proven stability. If you can't find one, that doesn't necessarily mean the system is unstable; you may just need a better candidate.

Lyapunov stability vs instability

Consider an autonomous system $\dot{x} = f(x)$ with an equilibrium at $x = 0$.

  • Lyapunov stable: For every small neighborhood around the equilibrium, you can find an even smaller neighborhood of initial conditions such that trajectories starting there remain in the larger neighborhood for all time. The state stays bounded near the equilibrium but doesn't have to converge to it.
  • Lyapunov unstable: No matter how close you start to the equilibrium, some trajectories will eventually leave a given neighborhood. The equilibrium repels nearby states.

You determine which case holds by examining the Lyapunov function $V(x)$ and its time derivative $\dot{V}(x)$ along system trajectories. If $V(x) > 0$ and $\dot{V}(x) \leq 0$, the system is stable. If trajectories can make $V$ increase, instability may result.

Asymptotic stability

Asymptotic stability is stronger than Lyapunov stability. The system state must not only stay near the equilibrium but actually converge to it as $t \to \infty$.

For asymptotic stability, you need:

  1. $V(x)$ is positive definite ($V(x) > 0$ for $x \neq 0$, and $V(0) = 0$).
  2. $\dot{V}(x)$ is negative definite ($\dot{V}(x) < 0$ for $x \neq 0$).

When both conditions hold, the "energy" strictly decreases at every nonzero state, so the system must settle to the equilibrium. Small perturbations will die out over time.
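As a minimal sketch (an assumed example, not from the text), the scalar system $\dot{x} = -x^3$ with candidate $V(x) = x^2$ can be checked against both conditions numerically:

```python
import numpy as np

# Assumed example system: x_dot = -x**3, with candidate V(x) = x**2.
f = lambda x: -x**3
V = lambda x: x**2
Vdot = lambda x: 2 * x * f(x)   # chain rule: dV/dt = V'(x) * f(x) = -2*x**4

# Sample nonzero states and check the two definiteness conditions.
xs = np.linspace(-2, 2, 401)
xs = xs[xs != 0]
print(bool(np.all(V(xs) > 0)) and V(0.0) == 0)   # positive definite -> True
print(bool(np.all(Vdot(xs) < 0)))                # negative definite -> True
```

A grid check is evidence, not a proof; here both conditions also hold analytically, since $\dot{V} = -2x^4 < 0$ for every $x \neq 0$.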

Exponential stability

Exponential stability adds a guaranteed convergence rate. The state satisfies a bound of the form:

$\|x(t)\| \leq k \|x(0)\| e^{-\alpha t}$

for constants $k > 0$ and $\alpha > 0$. The decay rate $\alpha$ tells you quantitatively how fast the system returns to equilibrium.

Exponential stability implies asymptotic stability, which in turn implies Lyapunov stability. The hierarchy is:

Exponential stability $\Rightarrow$ Asymptotic stability $\Rightarrow$ Lyapunov stability

Exponentially stable systems tend to be more robust to disturbances because they actively pull the state back at a known rate.
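A quick numerical sanity check (with an assumed system and assumed constants $k$ and $\alpha$) propagates a stable linear system with the matrix exponential and compares the state norm against the exponential bound:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])      # eigenvalues -1 and -2: exponentially stable
x0 = np.array([1.0, 1.0])

k, alpha = 3.0, 0.9              # assumed constants, with alpha < min|Re(eig)|
ts = np.linspace(0.0, 10.0, 200)
norms = [np.linalg.norm(expm(A * t) @ x0) for t in ts]   # ||x(t)|| via x(t) = e^{At} x0
bound = [k * np.linalg.norm(x0) * np.exp(-alpha * t) for t in ts]

print(all(n <= b for n, b in zip(norms, bound)))   # -> True: bound holds at every sample
```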

BIBO stability

BIBO (Bounded-Input Bounded-Output) stability applies to systems described by their input-output relationship, most commonly linear systems. Instead of asking about internal states, it asks: if the input stays bounded, does the output stay bounded too?

BIBO stability definition

A system is BIBO stable if every bounded input produces a bounded output. Formally, if $|u(t)| \leq M_u$ for all $t \geq 0$, then there exists a constant $M_y$ such that $|y(t)| \leq M_y$ for all $t \geq 0$.

For a continuous-time LTI system with impulse response $h(t)$, BIBO stability holds if and only if the impulse response is absolutely integrable:

$\int_0^{\infty} |h(t)|\, dt < \infty$

This gives you a concrete test: if the impulse response decays fast enough, the system is BIBO stable.
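For instance, a sketch of this test for an assumed first-order system $G(s) = 1/(s+2)$, whose impulse response is $h(t) = e^{-2t}$:

```python
import numpy as np
from scipy.integrate import quad

# Assumed example: impulse response of G(s) = 1/(s+2).
h = lambda t: np.exp(-2.0 * t)

# Absolute integrability test: the integral converges to 1/2 -> BIBO stable.
I, _ = quad(lambda t: abs(h(t)), 0, np.inf)
print(round(I, 6))   # -> 0.5

# Contrast: an integrator G(s) = 1/s has h(t) = 1, whose |h| integral
# diverges; a bounded step input then produces an unbounded ramp output.
```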

BIBO stability vs Lyapunov stability

These two concepts address different things and don't imply each other in general:

  • BIBO stability concerns the input-output map of (typically linear) systems. It doesn't require a state-space model.
  • Lyapunov stability concerns the internal state trajectory near an equilibrium, and applies to nonlinear systems.

A system can be BIBO stable but have unstable internal modes that happen to be unobservable from the output. Conversely, a system can be Lyapunov stable at an equilibrium but produce unbounded outputs for certain bounded inputs if the input-output path has different characteristics. Always check which notion your problem requires.
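A small numerical illustration of the first case (an assumed example, not from the text): a state-space model with an unstable mode that neither the input excites nor the output sees.

```python
import numpy as np

# Assumed example: x2 is unstable, but B does not excite it and C does not
# observe it, so the input-output map hides it entirely.
A = np.array([[-1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

# Internal test fails: one eigenvalue is in the right-half plane.
print(np.linalg.eigvals(A))   # one eigenvalue at +2 -> not Lyapunov stable

# Input-output view: the transfer function C (sI - A)^{-1} B reduces to
# 1/(s+1), whose impulse response e^{-t} is absolutely integrable,
# so the system is BIBO stable despite the hidden unstable mode.
```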

Input-output stability

Input-output stability generalizes BIBO stability beyond linear systems. It characterizes how output signals relate to input signals without requiring an explicit state-space model.

Finite gain stability

A system has finite gain stability (in the $\mathcal{L}_p$ sense) if the output norm is bounded by a scaled version of the input norm:

$\|y\|_p \leq \gamma \|u\|_p + \beta$

where $\gamma > 0$ is the gain and $\beta \geq 0$ is a bias term. The gain $\gamma$ quantifies the worst-case amplification from input to output. A finite gain means the system can't amplify signals without bound, which is the input-output analog of "well-behaved."


Input-to-state stability (ISS)

ISS bridges the gap between input-output and Lyapunov stability. A system $\dot{x} = f(x, u)$ is ISS if the state satisfies:

$\|x(t)\| \leq \beta(\|x(0)\|, t) + \gamma(\|u\|_{\infty})$

where $\beta$ is a class $\mathcal{KL}$ function (decays in $t$) and $\gamma$ is a class $\mathcal{K}$ function.

This bound says two things at once:

  • The effect of the initial condition fades over time (the $\beta$ term decays).
  • The effect of the input is bounded proportionally to its size (the $\gamma$ term).

ISS is especially useful for analyzing interconnected nonlinear systems, since you can reason about how disturbances propagate through subsystems.

Stability of linear systems

Linear systems have the advantage that stability can be fully characterized by the system matrix. The analysis tools are well-developed and computationally straightforward.

Eigenvalue analysis

For a linear time-invariant system $\dot{x} = Ax$ (continuous-time) or $x_{k+1} = Ax_k$ (discrete-time), stability depends entirely on the eigenvalues of $A$:

  • Continuous-time: Stable if and only if all eigenvalues have strictly negative real parts (they lie in the open left-half of the complex plane).
  • Discrete-time: Stable if and only if all eigenvalues have magnitude less than one (they lie strictly inside the unit circle).

Even a single eigenvalue on the wrong side of the boundary makes the system unstable. Eigenvalues on the boundary (purely imaginary for continuous-time, on the unit circle for discrete-time) correspond to marginal stability, where the system neither grows nor decays.
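These two tests take only a few lines with NumPy (a sketch; the example matrix is assumed):

```python
import numpy as np

def ct_stable(A):
    """Continuous-time: all eigenvalues strictly in the open left-half plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def dt_stable(A):
    """Discrete-time: all eigenvalues strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2
print(ct_stable(A))            # -> True
print(dt_stable(A))            # -> False: |-2| >= 1
```

The same matrix passes one test and fails the other, which is why you must know whether your model is continuous- or discrete-time before checking eigenvalues.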

Routh-Hurwitz criterion

The Routh-Hurwitz criterion lets you check continuous-time stability directly from the coefficients of the characteristic polynomial, without computing eigenvalues.

The procedure:

  1. Write the characteristic polynomial $a_0 s^n + a_1 s^{n-1} + \cdots + a_n = 0$.
  2. Check that all coefficients $a_i$ are positive (a necessary condition).
  3. Construct the Routh table by arranging coefficients in two rows, then computing subsequent rows using cross-multiplication formulas.
  4. Count the sign changes in the first column of the completed table.

The number of sign changes in the first column equals the number of roots with positive real parts. Zero sign changes means all roots are in the left-half plane, so the system is stable.
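A compact sketch of the procedure for the non-degenerate case (assuming no zero ever appears in the first column, which would require the standard epsilon workaround):

```python
import numpy as np

def routh_rhp_roots(coeffs):
    """Count right-half-plane roots via the Routh table.
    `coeffs` lists a0..an from highest to lowest power; assumes no zero
    appears in the first column (the standard, non-degenerate case)."""
    n = len(coeffs) - 1
    cols = (n // 2) + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n + 1):
        for j in range(cols - 1):
            # Cross-multiplication on the two rows above, divided by the pivot.
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    first_col = table[:, 0]
    return int(np.sum(np.sign(first_col[:-1]) != np.sign(first_col[1:])))

# s^3 + 2s^2 + 3s + 1: all roots in the LHP -> zero sign changes.
print(routh_rhp_roots([1, 2, 3, 1]))   # -> 0
# s^3 - 4s^2 + s + 6 = (s+1)(s-2)(s-3): two RHP roots.
print(routh_rhp_roots([1, -4, 1, 6]))  # -> 2
```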

Stability of nonlinear systems

Nonlinear systems don't have a single, universal stability test. Instead, you combine several techniques depending on the problem.

Linearization around equilibrium points

Linearization approximates a nonlinear system $\dot{x} = f(x)$ near an equilibrium $x_e$ by computing the Jacobian:

$A = \left.\frac{\partial f}{\partial x}\right|_{x = x_e}$

The eigenvalues of $A$ then determine local stability of the nonlinear system:

  • If all eigenvalues of $A$ have negative real parts, the equilibrium is locally asymptotically stable.
  • If any eigenvalue has a positive real part, the equilibrium is unstable.
  • If eigenvalues lie on the imaginary axis (with none in the right-half plane), linearization is inconclusive. You need Lyapunov methods or other tools to settle the question.

Linearization only tells you about behavior near the equilibrium. It says nothing about large deviations from that point.
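A worked sketch with an assumed damped pendulum, $\ddot{\theta} = -\sin\theta - 0.5\,\dot{\theta}$, comparing its two equilibria:

```python
import numpy as np

# Assumed example: x1 = theta, x2 = theta_dot, so
# x1_dot = x2, x2_dot = -sin(x1) - 0.5*x2.
def jacobian(x_e):
    """Jacobian of f evaluated at the equilibrium x_e = (theta, theta_dot)."""
    return np.array([[0.0, 1.0],
                     [-np.cos(x_e[0]), -0.5]])

down = jacobian((0.0, 0.0))       # hanging equilibrium
up = jacobian((np.pi, 0.0))       # inverted equilibrium

print(np.linalg.eigvals(down).real)  # all negative -> locally asymptotically stable
print(np.linalg.eigvals(up).real)    # one positive -> unstable
```

Note that the "stable" verdict is only local: a large enough push still swings the pendulum over the top, which the Jacobian at the hanging position cannot predict.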

Lyapunov function candidates

To prove stability of a nonlinear system $\dot{x} = f(x)$ at an equilibrium $x = 0$, you look for a scalar function $V(x)$ satisfying:

  1. Positive definiteness: $V(x) > 0$ for all $x \neq 0$, and $V(0) = 0$.
  2. Negative semi-definite derivative: $\dot{V}(x) = \nabla V \cdot f(x) \leq 0$ along trajectories.
  3. Radially unbounded (for global results): $V(x) \to \infty$ as $\|x\| \to \infty$.

If conditions 1 and 2 hold, the system is Lyapunov stable. If $\dot{V}$ is strictly negative definite, you get asymptotic stability.

Finding a good Lyapunov function is the hard part. Common starting points include quadratic forms $V(x) = x^T P x$ (where $P$ is positive definite) and physical energy functions for mechanical or electrical systems.
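For linear systems the quadratic candidate can be computed rather than guessed: solving the Lyapunov equation $A^T P + P A = -Q$ for a Hurwitz $A$ yields a positive definite $P$. A sketch with SciPy (the matrix is an assumed example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # Hurwitz: eigenvalues -1 and -2

# Solve A^T P + P A = -Q with Q = I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# For a Hurwitz A, P is symmetric positive definite, so V(x) = x^T P x
# is a valid Lyapunov function with Vdot(x) = -x^T Q x < 0 for x != 0.
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))   # -> True
```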

LaSalle's invariance principle

LaSalle's principle is your tool when $\dot{V}(x) \leq 0$ but is not strictly negative definite everywhere. In that case, the standard Lyapunov theorem only gives you stability, not asymptotic stability. LaSalle's principle can often close the gap.

The idea: define the set $S = \{x : \dot{V}(x) = 0\}$. LaSalle's principle states that all trajectories converge to the largest invariant set contained in $S$. If the only invariant set in $S$ is the equilibrium itself, then the system is asymptotically stable.

This is particularly useful because many natural Lyapunov candidates have $\dot{V} = 0$ on some subspace (not just at the origin). LaSalle's principle lets you still conclude asymptotic stability by checking what can actually "live" in that subspace.

Stability robustness

Real systems never match their mathematical models exactly. Stability robustness asks: how much can the system deviate from the model before stability is lost?

Parametric uncertainties

Parametric uncertainties occur when system parameters (masses, resistances, gains, etc.) are not known exactly or vary during operation. For example, a robot arm's payload mass might change between tasks.

Robust stability analysis determines the range of parameter values for which the system remains stable. Techniques include:

  • Structured singular value ($\mu$-analysis): Handles structured uncertainties where you know which parameters are uncertain.
  • Kharitonov's theorem: For systems where uncertain parameters enter the characteristic polynomial coefficients, this checks stability by examining only four "corner" polynomials.

Unmodeled dynamics

Unmodeled dynamics are the discrepancies between your model and the real system. These often arise from neglected high-frequency modes, nonlinearities you linearized away, or time delays you ignored.

Techniques for ensuring stability despite unmodeled dynamics include:

  • Small-gain theorem: If the loop gain of the interconnection between the nominal system and the uncertainty is less than one, the closed-loop system is stable.
  • Passivity-based methods: If both the nominal system and the uncertainty are passive, their feedback interconnection is stable.

Stability margins

Stability margins give you a single number (or pair of numbers) that quantifies how close a stable system is to becoming unstable.

  • Gain margin: The factor by which the loop gain can increase before instability. Typically expressed in dB.
  • Phase margin: The additional phase lag the system can tolerate before instability. Measured in degrees.

Larger margins mean more tolerance for modeling errors and parameter changes. You can read these margins from Bode plots (look at the gain and phase crossover frequencies) or from Nyquist diagrams (measure the distance to the critical point $-1 + j0$).
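Both margins can also be read off numerically from a frequency sweep. A sketch for an assumed loop transfer function $G(s) = \frac{1}{s(s+1)(s+2)}$:

```python
import numpy as np

# Assumed example loop transfer function: G(s) = 1 / (s (s+1) (s+2)).
def G(w):
    s = 1j * w
    return 1.0 / (s * (s + 1) * (s + 2))

w = np.logspace(-2, 2, 200000)
mag = np.abs(G(w))
phase = np.unwrap(np.angle(G(w)))          # radians; starts near -pi/2

# Gain margin: 1/|G| at the frequency where the phase crosses -180 degrees.
i = np.argmin(np.abs(phase + np.pi))
gm_db = 20 * np.log10(1.0 / mag[i])

# Phase margin: 180 degrees plus the phase where |G| crosses 1.
j = np.argmin(np.abs(mag - 1.0))
pm_deg = 180.0 + np.degrees(phase[j])

print(round(gm_db, 1), round(pm_deg, 1))   # -> 15.6 53.4
```

For this example the phase crossover sits at $\omega = \sqrt{2}$, where $|G| = 1/6$, giving the $20\log_{10}6 \approx 15.6$ dB gain margin.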

Stability analysis techniques

Root locus method

The root locus plots the closed-loop pole locations as a parameter (usually the loop gain $K$) varies from $0$ to $\infty$. It gives you a visual map of how stability and transient behavior change with gain.

  • Poles in the left-half plane (continuous-time) or inside the unit circle (discrete-time) correspond to stable operation.
  • As $K$ increases, poles may migrate toward the imaginary axis or right-half plane, indicating a stability limit.
  • The root locus also reveals damping ratio and natural frequency trends, helping you pick a gain that balances speed and stability.

Nyquist stability criterion

The Nyquist criterion determines closed-loop stability from the open-loop frequency response. You plot the open-loop transfer function $G(j\omega)H(j\omega)$ as $\omega$ sweeps from $-\infty$ to $+\infty$ (the Nyquist plot).

The criterion states:

The closed-loop system is stable if and only if the number of counterclockwise encirclements of the point $-1 + j0$ equals the number of open-loop right-half-plane poles.

The Nyquist criterion handles situations that other methods struggle with, including systems with time delays and non-minimum phase zeros.

Circle criterion

The circle criterion applies to systems that can be decomposed into a linear subsystem in feedback with a static nonlinearity (like saturation or dead-zone). The nonlinearity must lie within a known sector $[a, b]$, meaning $a \leq \frac{N(x)}{x} \leq b$ for all $x \neq 0$.

The test: if the Nyquist plot of the linear part avoids a critical circle (determined by the sector bounds $a$ and $b$), the overall nonlinear system is stable. This provides a sufficient condition and is one of the few frequency-domain tools available for nonlinear stability analysis.

Stabilization methods

State feedback stabilization

With state feedback, the control law takes the form $u = -Kx$, where $K$ is a gain matrix and $x$ is the full state vector. The closed-loop system becomes $\dot{x} = (A - BK)x$, and you choose $K$ to place the eigenvalues of $(A - BK)$ at desired stable locations.

Common design approaches:

  • Pole placement (Ackermann's formula): Directly specify desired closed-loop pole locations.
  • Linear Quadratic Regulator (LQR): Minimize a cost function that balances state regulation and control effort, yielding an optimal $K$.

State feedback requires access to the full state. When some states aren't measured, you pair the controller with a state observer (e.g., Luenberger observer) that estimates the unmeasured states from the output.
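A sketch of pole placement with SciPy on an assumed double-integrator plant:

```python
import numpy as np
from scipy.signal import place_poles

# Assumed example plant, a double integrator: x_dot = A x + B u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Place the closed-loop poles of A - B K at -2 and -3.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

# Verify: eigenvalues of A - B K land at the requested locations.
cl_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(cl_eigs.real))
```

For a single-input plant like this one, the same $K$ could be found by matching the desired characteristic polynomial $s^2 + 5s + 6$; `place_poles` generalizes the idea to multi-input systems.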

Output feedback stabilization

When only the output $y = Cx$ is available (not the full state), you design the controller using output measurements alone. This is the more common practical scenario.

Output feedback approaches include:

  • PID control: The most widely used industrial controller. Combines proportional, integral, and derivative action on the error signal.
  • Lead-lag compensation: Shapes the frequency response to improve stability margins and transient response.
  • Observer-based output feedback: Combine a state observer with a state feedback law (separation principle for LTI systems).

Output feedback design typically involves trade-offs between performance, robustness, and complexity.

Adaptive stabilization

Adaptive controllers adjust their parameters online to handle systems with unknown or time-varying characteristics. The controller "learns" as it operates.

Key approaches:

  • Model Reference Adaptive Control (MRAC): Adjusts controller parameters so the closed-loop system tracks a desired reference model.
  • Self-tuning regulators: Identify system parameters in real time and update the controller accordingly.
  • Gain scheduling: Pre-computes controllers for different operating points and switches between them based on measured conditions.

Adaptive methods are powerful but require careful stability analysis of the adaptation mechanism itself. Poorly designed adaptive laws can lead to parameter drift or bursting phenomena.