Stability concepts
Stability analysis answers a simple but critical question: if you start a system near a desired operating point, will it stay there? This unit introduces the mathematical tools for answering that question, starting with Lyapunov's framework for nonlinear systems and extending to input-output notions like BIBO stability. These concepts underpin nearly every controller design method you'll encounter.
Stability in control systems
Stability describes a system's ability to maintain a desired state or behavior when disturbances or uncertainties act on it. A system that isn't stable is, at best, useless for control purposes and, at worst, dangerous.
Several distinct notions of stability exist because different situations call for different guarantees:
- Lyapunov stability addresses whether the internal state of a (possibly nonlinear) system stays near an equilibrium point.
- BIBO stability asks whether bounded inputs always produce bounded outputs, focusing on the external input-output relationship of linear systems.
- Input-output stability generalizes BIBO ideas to nonlinear systems.
Each notion captures something different, so knowing which one applies to your problem matters.
Lyapunov stability theory
Lyapunov stability theory lets you analyze nonlinear systems without solving their differential equations. The core idea: construct an "energy-like" scalar function (a Lyapunov function) and show that it doesn't increase along the system's trajectories. If the system's "energy" can't grow, the state can't run away.
Lyapunov theory gives sufficient conditions for stability. If you find a valid Lyapunov function, you've proven stability. If you can't find one, that doesn't necessarily mean the system is unstable; you may just need a better candidate.
Lyapunov stability vs instability
Consider an autonomous system ẋ = f(x) with an equilibrium at the origin, x = 0.
- Lyapunov stable: For every small neighborhood around the equilibrium, you can find an even smaller neighborhood of initial conditions such that trajectories starting there remain in the larger neighborhood for all time. The state stays bounded near the equilibrium but doesn't have to converge to it.
- Lyapunov unstable: No matter how close you start to the equilibrium, some trajectories will eventually leave a given neighborhood. The equilibrium repels nearby states.
You determine which case holds by examining the Lyapunov function V and its time derivative along system trajectories. If V(x) > 0 for x ≠ 0 and dV/dt ≤ 0, the equilibrium is stable. If trajectories can make V increase, instability may result.
Asymptotic stability
Asymptotic stability is stronger than Lyapunov stability. The system state must not only stay near the equilibrium but actually converge to it as t → ∞.
For asymptotic stability, you need:
- V(x) is positive definite (V(x) > 0 for x ≠ 0, and V(0) = 0).
- dV/dt is negative definite (dV/dt < 0 for x ≠ 0).
When both conditions hold, the "energy" strictly decreases at every nonzero state, so the system must settle to the equilibrium. Small perturbations will die out over time.
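As a numerical sketch of checking these two conditions, take the toy scalar system ẋ = −x³ (an assumed example, not from the text) with candidate V(x) = x²:

```python
import numpy as np

# Assumed toy system: x' = f(x) = -x^3, with candidate V(x) = x^2.
def f(x):
    return -x**3

def V(x):
    return x**2

def V_dot(x):
    # Chain rule along trajectories: dV/dt = (dV/dx) * f(x) = 2x * (-x^3) = -2x^4
    return 2 * x * f(x)

# Sample states away from the origin and confirm V > 0 and dV/dt < 0 there.
xs = np.linspace(-2, 2, 401)
xs = xs[xs != 0]
assert np.all(V(xs) > 0)       # positive definite on the sample
assert np.all(V_dot(xs) < 0)   # negative definite on the sample
print("both asymptotic-stability conditions hold on the sampled states")
```

Sampling is only a sanity check, not a proof; here the algebra (dV/dt = −2x⁴) confirms the conditions for all x.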
Exponential stability
Exponential stability adds a guaranteed convergence rate. The state satisfies a bound of the form:

‖x(t)‖ ≤ M e^(−αt) ‖x(0)‖

for constants M > 0 and α > 0. The decay rate α tells you quantitatively how fast the system returns to equilibrium.
Exponential stability implies asymptotic stability, which in turn implies Lyapunov stability. The hierarchy is:
Exponential stability ⇒ Asymptotic stability ⇒ Lyapunov stability
Exponentially stable systems tend to be more robust to disturbances because they actively pull the state back at a known rate.
BIBO stability
BIBO (Bounded-Input Bounded-Output) stability applies to systems described by their input-output relationship, most commonly linear systems. Instead of asking about internal states, it asks: if the input stays bounded, does the output stay bounded too?
BIBO stability definition
A system is BIBO stable if every bounded input produces a bounded output. Formally, if ‖u(t)‖ ≤ M_u for all t, then there exists a constant M_y such that ‖y(t)‖ ≤ M_y for all t.
For a continuous-time LTI system with impulse response h(t), BIBO stability holds if and only if the impulse response is absolutely integrable:

∫₀^∞ |h(t)| dt < ∞

This gives you a concrete test: if the impulse response decays fast enough, the system is BIBO stable.
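The absolute-integrability test can be evaluated numerically. A minimal sketch, assuming a first-order impulse response h(t) = e^(−2t) (an illustrative choice, not from the text):

```python
import numpy as np
from scipy.integrate import quad

# Assumed impulse response of a stable first-order system: h(t) = e^{-2t}, t >= 0.
def h(t):
    return np.exp(-2 * t)

# Integrate |h| over [0, inf); a finite value means BIBO stable.
val, _ = quad(lambda t: np.abs(h(t)), 0, np.inf)
print(f"integral of |h| = {val:.4f}")  # 1/2, finite => BIBO stable
```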
BIBO stability vs Lyapunov stability
These two concepts address different things and don't imply each other in general:
- BIBO stability concerns the input-output map of (typically linear) systems. It doesn't require a state-space model.
- Lyapunov stability concerns the internal state trajectory near an equilibrium, and applies to nonlinear systems.
A system can be BIBO stable but have unstable internal modes that happen to be unobservable from the output. Conversely, a system can be Lyapunov stable at an equilibrium but produce unbounded outputs for certain bounded inputs if the input-output path has different characteristics. Always check which notion your problem requires.
Input-output stability
Input-output stability generalizes BIBO stability beyond linear systems. It characterizes how output signals relate to input signals without requiring an explicit state-space model.
Finite gain stability
A system has finite gain stability (in the L2 sense) if the output norm is bounded by a scaled version of the input norm:

‖y‖ ≤ γ‖u‖ + β

where γ is the gain and β is a bias term. The gain γ quantifies the worst-case amplification from input to output. A finite gain means the system can't amplify signals without bound, which is the input-output analog of "well-behaved."

Input-to-state stability (ISS)
ISS bridges the gap between input-output and Lyapunov stability. A system is ISS if the state satisfies:

‖x(t)‖ ≤ β(‖x(0)‖, t) + γ(sup_{0≤s≤t} ‖u(s)‖)

where β is a class KL function (decays in t) and γ is a class K function.
This bound says two things at once:
- The effect of the initial condition fades over time (the β term decays).
- The effect of the input is bounded proportionally to its size (the γ term).
ISS is especially useful for analyzing interconnected nonlinear systems, since you can reason about how disturbances propagate through subsystems.
Stability of linear systems
Linear systems have the advantage that stability can be fully characterized by the system matrix. The analysis tools are well-developed and computationally straightforward.
Eigenvalue analysis
For a linear time-invariant system ẋ = Ax (continuous-time) or x[k+1] = Ax[k] (discrete-time), stability depends entirely on the eigenvalues of A:
- Continuous-time: Stable if and only if all eigenvalues have strictly negative real parts (they lie in the open left-half of the complex plane).
- Discrete-time: Stable if and only if all eigenvalues have magnitude less than one (they lie strictly inside the unit circle).
Even a single eigenvalue on the wrong side of the boundary makes the system unstable. Eigenvalues on the boundary (purely imaginary for continuous-time, on the unit circle for discrete-time) correspond to marginal stability, where the system neither grows nor decays.
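Both eigenvalue tests are one-liners in practice. A sketch using numpy (the matrix is an arbitrary illustrative example):

```python
import numpy as np

def is_stable_ct(A):
    """Continuous-time: all eigenvalues strictly in the open left-half plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def is_stable_dt(A):
    """Discrete-time: all eigenvalues strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2
print(is_stable_ct(A))  # True: both in the left-half plane
print(is_stable_dt(A))  # False: |-2| = 2 lies outside the unit circle
```

The same matrix can pass one test and fail the other, which is why the continuous/discrete distinction matters.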
Routh-Hurwitz criterion
The Routh-Hurwitz criterion lets you check continuous-time stability directly from the coefficients of the characteristic polynomial, without computing eigenvalues.
The procedure:
- Write the characteristic polynomial a_n s^n + a_{n-1} s^{n-1} + … + a_1 s + a_0 = 0.
- Check that all coefficients are present and positive (a necessary condition, assuming the leading coefficient is positive).
- Construct the Routh table by arranging coefficients in two rows, then computing subsequent rows using cross-multiplication formulas.
- Count the sign changes in the first column of the completed table.
The number of sign changes in the first column equals the number of roots with positive real parts. Zero sign changes means all roots are in the left-half plane, so the system is stable.
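The table construction above can be sketched in code. A minimal implementation, assuming no zeros appear in the first column (the classical special cases are omitted):

```python
import numpy as np

def routh_sign_changes(coeffs):
    """Count sign changes in the first column of the Routh table.

    coeffs: characteristic-polynomial coefficients, highest degree first.
    Assumes no exact zeros appear in the first column (no special cases).
    """
    n = len(coeffs)
    width = (n + 1) // 2
    table = np.zeros((n, width))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # even-position coefficients
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # odd-position coefficients
    for i in range(2, n):
        for j in range(width - 1):
            # Standard cross-multiplication formula for each new entry.
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    first_col = table[:, 0]
    return int(np.sum(np.diff(np.sign(first_col)) != 0))

# s^3 + 2s^2 + 3s + 1: no sign changes -> all roots in the left-half plane
print(routh_sign_changes([1, 2, 3, 1]))   # 0
# s^3 - s^2 + 2s + 1: two sign changes -> two right-half-plane roots
print(routh_sign_changes([1, -1, 2, 1]))  # 2
```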
Stability of nonlinear systems
Nonlinear systems don't have a single, universal stability test. Instead, you combine several techniques depending on the problem.
Linearization around equilibrium points
Linearization approximates a nonlinear system ẋ = f(x) near an equilibrium x* by computing the Jacobian:

A = ∂f/∂x evaluated at x = x*

The eigenvalues of A then determine local stability of the nonlinear system:
- If all eigenvalues of A have negative real parts, the equilibrium is locally asymptotically stable.
- If any eigenvalue has a positive real part, the equilibrium is unstable.
- If eigenvalues lie on the imaginary axis (with none in the right-half plane), linearization is inconclusive. You need Lyapunov methods or other tools to settle the question.
Linearization only tells you about behavior near the equilibrium. It says nothing about large deviations from that point.
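A quick numerical sketch: linearize an assumed damped-pendulum model at its downward equilibrium via a finite-difference Jacobian and inspect the eigenvalues (the model and damping value are illustrative):

```python
import numpy as np

# Assumed damped pendulum: x1' = x2, x2' = -sin(x1) - b*x2
b = 0.1
def f(x):
    return np.array([x[1], -np.sin(x[0]) - b * x[1]])

def jacobian(f, x0, eps=1e-6):
    """Central finite-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

A = jacobian(f, np.zeros(2))  # linearization at the downward equilibrium
evals = np.linalg.eigvals(A)
print(np.all(evals.real < 0))  # True: locally asymptotically stable
```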
Lyapunov function candidates
To prove stability of a nonlinear system at an equilibrium x = 0, you look for a scalar function V(x) satisfying:
- Positive definiteness: V(x) > 0 for all x ≠ 0, and V(0) = 0.
- Negative semi-definite derivative: dV/dt ≤ 0 along trajectories.
- Radially unbounded (for global results): V(x) → ∞ as ‖x‖ → ∞.
If the first two conditions hold, the system is Lyapunov stable. If dV/dt is strictly negative definite, you get asymptotic stability.
Finding a good Lyapunov function is the hard part. Common starting points include quadratic forms V(x) = xᵀPx (where P is positive definite) and physical energy functions for mechanical or electrical systems.
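For a stable linear system there is a systematic way to produce such a quadratic candidate: solve the Lyapunov equation AᵀP + PA = −Q for a chosen positive definite Q. A minimal sketch with scipy (the matrices are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# Solve A^T P + P A = -Q. For a Hurwitz A the solution P is positive
# definite, so V(x) = x^T P x is a valid quadratic Lyapunov function.
P = solve_continuous_lyapunov(A.T, -Q)
print(np.all(np.linalg.eigvals(P) > 0))  # True: P is positive definite
```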
LaSalle's invariance principle
LaSalle's principle is your tool when dV/dt ≤ 0 but dV/dt is not strictly negative definite everywhere. In that case, the standard Lyapunov theorem only gives you stability, not asymptotic stability. LaSalle's principle can often close the gap.
The idea: define the set E = {x : dV/dt = 0}. LaSalle's principle states that all trajectories converge to the largest invariant set contained in E. If the only invariant set in E is the equilibrium itself, then the system is asymptotically stable.
This is particularly useful because many natural Lyapunov candidates have dV/dt = 0 on some subspace (not just at the origin). LaSalle's principle lets you still conclude asymptotic stability by checking what can actually "live" in that subspace.
Stability robustness
Real systems never match their mathematical models exactly. Stability robustness asks: how much can the system deviate from the model before stability is lost?
Parametric uncertainties
Parametric uncertainties occur when system parameters (masses, resistances, gains, etc.) are not known exactly or vary during operation. For example, a robot arm's payload mass might change between tasks.
Robust stability analysis determines the range of parameter values for which the system remains stable. Techniques include:
- Structured singular value (μ-analysis): Handles structured uncertainties where you know which parameters are uncertain.
- Kharitonov's theorem: For systems where uncertain parameters enter the characteristic polynomial coefficients, this checks stability by examining only four "corner" polynomials.
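Kharitonov's four corner polynomials can be formed and checked directly. A sketch with made-up coefficient intervals, using numerical root-finding in place of a formal Hurwitz test:

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """Four Kharitonov corner polynomials.

    lo, hi: coefficient interval bounds in ascending order [a0, a1, ..., an].
    Returns four coefficient arrays, also in ascending order.
    """
    patterns = [  # bound choice for coefficient i, cycling with period 4
        "llhh",  # K1: lo, lo, hi, hi, lo, lo, ...
        "hhll",  # K2
        "lhhl",  # K3
        "hllh",  # K4
    ]
    return [np.array([lo[i] if p[i % 4] == "l" else hi[i]
                      for i in range(len(lo))]) for p in patterns]

def is_hurwitz(ascending):
    """All roots in the open left-half plane (numerical check)."""
    roots = np.roots(ascending[::-1])  # np.roots expects descending order
    return bool(np.all(roots.real < 0))

# Assumed interval polynomial: a0 in [1,2], a1 in [3,4], a2 in [2,3], a3 = 1
lo, hi = [1, 3, 2, 1], [2, 4, 3, 1]
print(all(is_hurwitz(k) for k in kharitonov_polys(lo, hi)))  # True
```

If all four corner polynomials are Hurwitz, the whole interval family is stable.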

Unmodeled dynamics
Unmodeled dynamics are the discrepancies between your model and the real system. These often arise from neglected high-frequency modes, nonlinearities you linearized away, or time delays you ignored.
Techniques for ensuring stability despite unmodeled dynamics include:
- Small-gain theorem: If the loop gain of the interconnection between the nominal system and the uncertainty is less than one, the closed-loop system is stable.
- Passivity-based methods: If both the nominal system and the uncertainty are passive, their feedback interconnection is stable.
Stability margins
Stability margins give you a single number (or pair of numbers) that quantifies how close a stable system is to becoming unstable.
- Gain margin: The factor by which the loop gain can increase before instability. Typically expressed in dB.
- Phase margin: The additional phase lag the system can tolerate before instability. Measured in degrees.
Larger margins mean more tolerance for modeling errors and parameter changes. You can read these margins from Bode plots (look at the gain and phase crossover frequencies) or from Nyquist diagrams (measure the distance to the critical point −1).
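Both margins can also be computed directly from a frequency response. A sketch for an assumed open loop L(s) = 1 / (s(s+1)(s+2)), a common textbook example:

```python
import numpy as np

# Assumed open-loop frequency response L(jw) = 1 / (jw (jw+1)(jw+2)).
w = np.logspace(-2, 2, 100000)
L = 1.0 / (1j * w * (1j * w + 1) * (1j * w + 2))
mag = np.abs(L)
phase = np.unwrap(np.angle(L))

# Gain margin: 1/|L| at the phase-crossover frequency (phase = -180 deg).
i_pc = np.argmin(np.abs(phase + np.pi))
gm_db = 20 * np.log10(1 / mag[i_pc])

# Phase margin: 180 deg plus the phase at the gain-crossover frequency (|L| = 1).
i_gc = np.argmin(np.abs(mag - 1))
pm_deg = np.degrees(phase[i_gc] + np.pi)

print(f"gain margin ~ {gm_db:.1f} dB, phase margin ~ {pm_deg:.1f} deg")
```

For this example the exact gain margin is 20·log10(6) ≈ 15.6 dB (phase crossover at ω = √2).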
Stability analysis techniques
Root locus method
The root locus plots the closed-loop pole locations as a parameter (usually the loop gain K) varies from 0 to ∞. It gives you a visual map of how stability and transient behavior change with gain.
- Poles in the left-half plane (continuous-time) or inside the unit circle (discrete-time) correspond to stable operation.
- As K increases, poles may migrate toward the imaginary axis or right-half plane, indicating a stability limit.
- The root locus also reveals damping ratio and natural frequency trends, helping you pick a gain that balances speed and stability.
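The gain sweep can be sketched numerically: for an assumed open loop K / (s(s+1)(s+2)), track the closed-loop poles as K grows (the plant is illustrative):

```python
import numpy as np

# Closed-loop characteristic polynomial for L(s) = K / (s (s+1)(s+2)):
# s^3 + 3 s^2 + 2 s + K  (assumed textbook example).
def closed_loop_poles(K):
    return np.roots([1, 3, 2, K])

for K in [1.0, 6.0, 10.0]:
    poles = closed_loop_poles(K)
    print(f"K={K}: max Re(pole) = {poles.real.max():+.3f}")
```

As K passes 6, a pole pair crosses the imaginary axis at ±j√2, which is this loop's stability limit.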
Nyquist stability criterion
The Nyquist criterion determines closed-loop stability from the open-loop frequency response. You plot the open-loop transfer function L(jω) as ω sweeps from −∞ to +∞ (the Nyquist plot).
The criterion states:
The closed-loop system is stable if and only if the number of counterclockwise encirclements of the point −1 equals the number of open-loop right-half-plane poles.
The Nyquist criterion handles situations that other methods struggle with, including systems with time delays and non-minimum phase zeros.
Circle criterion
The circle criterion applies to systems that can be decomposed into a linear subsystem in feedback with a static nonlinearity (like saturation or dead-zone). The nonlinearity φ must lie within a known sector [k1, k2], meaning k1·y² ≤ y·φ(y) ≤ k2·y² for all y.
The test: if the Nyquist plot of the linear part avoids a critical circle (determined by the sector bounds k1 and k2), the overall nonlinear system is stable. This provides a sufficient condition and is one of the few frequency-domain tools available for nonlinear stability analysis.
Stabilization methods
State feedback stabilization
With state feedback, the control law takes the form u = −Kx, where K is a gain matrix and x is the full state vector. The closed-loop system becomes ẋ = (A − BK)x, and you choose K to place the eigenvalues of A − BK at desired stable locations.
Common design approaches:
- Pole placement (Ackermann's formula): Directly specify desired closed-loop pole locations.
- Linear Quadratic Regulator (LQR): Minimize a cost function that balances state regulation and control effort, yielding an optimal .
State feedback requires access to the full state. When some states aren't measured, you pair the controller with a state observer (e.g., Luenberger observer) that estimates the unmeasured states from the output.
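Pole placement itself is a library call. A sketch for an assumed double-integrator plant using scipy.signal.place_poles:

```python
import numpy as np
from scipy.signal import place_poles

# Assumed double-integrator plant: x1' = x2, x2' = u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Choose K so the closed-loop poles of A - B K sit at -2 and -3.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
cl_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(cl_eigs.real))  # approximately [-3, -2]
```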
Output feedback stabilization
When only the output is available (not the full state), you design the controller using output measurements alone. This is the more common practical scenario.
Output feedback approaches include:
- PID control: The most widely used industrial controller. Combines proportional, integral, and derivative action on the error signal.
- Lead-lag compensation: Shapes the frequency response to improve stability margins and transient response.
- Observer-based output feedback: Combine a state observer with a state feedback law (separation principle for LTI systems).
Output feedback design typically involves trade-offs between performance, robustness, and complexity.
Adaptive stabilization
Adaptive controllers adjust their parameters online to handle systems with unknown or time-varying characteristics. The controller "learns" as it operates.
Key approaches:
- Model Reference Adaptive Control (MRAC): Adjusts controller parameters so the closed-loop system tracks a desired reference model.
- Self-tuning regulators: Identify system parameters in real time and update the controller accordingly.
- Gain scheduling: Pre-computes controllers for different operating points and switches between them based on measured conditions.
Adaptive methods are powerful but require careful stability analysis of the adaptation mechanism itself. Poorly designed adaptive laws can lead to parameter drift or bursting phenomena.