Linear Algebra and Differential Equations

Key Concepts in Stability Analysis


Why This Matters

Stability analysis gives you the mathematical tools to predict whether a system will settle down, blow up, or oscillate forever. That question sits at the heart of differential equations. Whether you're modeling population dynamics, electrical circuits, or mechanical vibrations, you need to determine what happens as time goes to infinity.

The concepts here (equilibrium classification, eigenvalue analysis, linearization, and phase portraits) appear repeatedly in both computational problems and conceptual questions. Every concept connects to one core question: How do we know if a system is stable? You need to understand why eigenvalues determine stability, how linearization lets us analyze nonlinear systems, and when graphical methods reveal behavior that algebra alone might miss.


Foundations: Equilibrium and Linear Stability

Before analyzing any system, you need to identify where it might "rest" and understand the basic criteria for stability. These foundational concepts appear in nearly every stability problem you'll encounter.

Equilibrium Points

An equilibrium point is where all derivatives equal zero: $\frac{d\mathbf{x}}{dt} = \mathbf{0}$. At these points, the system has no tendency to change. Finding equilibria is always your first step: set all derivatives to zero and solve the resulting system of equations before doing any further analysis.

Once you've found equilibria, you classify them by how nearby trajectories behave:

  • Stable equilibria attract trajectories (nearby solutions move toward the point)
  • Unstable equilibria repel trajectories (nearby solutions move away)
  • Semi-stable points attract from some directions and repel from others

Stability Criteria for Linear Systems

For a linear system $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$, stability depends entirely on the eigenvalues of the coefficient matrix $A$. The rules are clean:

  • All eigenvalues have negative real parts → asymptotically stable. Every trajectory eventually decays to the equilibrium as $t \to \infty$.
  • Any eigenvalue has a positive real part → unstable. Even a single eigenvalue with $\text{Re}(\lambda) > 0$ causes trajectories to escape to infinity along the corresponding direction.
  • Purely imaginary eigenvalues (with no positive real parts) → marginally stable. The system oscillates but neither grows nor decays. This is the borderline case where small perturbations or nonlinear terms can tip the balance.
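These rules reduce to a few lines of NumPy: compute the eigenvalues and inspect their real parts. A minimal sketch, using a coefficient matrix of my own choosing (not one from this guide):

```python
import numpy as np

# Hypothetical coefficient matrix for x' = Ax (a damped 2D system)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)   # here: -1 and -2
real_parts = eigenvalues.real

# Apply the three rules in order
if np.all(real_parts < 0):
    verdict = "asymptotically stable"
elif np.any(real_parts > 0):
    verdict = "unstable"
else:
    verdict = "marginally stable"
```

Both eigenvalues are negative, so this particular origin is asymptotically stable; making any eigenvalue's real part positive would flip the verdict to unstable.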

Eigenvalues and Eigenvectors

Eigenvalues encode two pieces of information. The real part determines whether solutions grow ($\text{Re}(\lambda) > 0$) or decay ($\text{Re}(\lambda) < 0$). The imaginary part creates oscillation, with its magnitude setting the oscillation frequency.

Eigenvectors define the directions along which these behaviors occur. The general solution for a 2D system looks like:

$$\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2$$

Each term evolves along its eigenvector direction, scaled by the exponential factor. When eigenvalues are complex, say $\lambda = \alpha \pm \beta i$, the trajectories spiral. The sign of $\alpha$ determines whether spirals wind inward (stable, $\alpha < 0$) or outward (unstable, $\alpha > 0$).

Compare: Stable node vs. stable spiral. Both have eigenvalues with negative real parts, but nodes have real eigenvalues (trajectories approach along straight lines) while spirals have complex eigenvalues (trajectories rotate as they approach). If a phase portrait shows a trajectory curving toward equilibrium, you're looking at complex eigenvalues.
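The node/spiral/saddle distinction can be automated directly from the eigenvalues. A sketch of a 2D classifier (the example matrices are my own illustrations, not from this guide):

```python
import numpy as np

def classify_equilibrium(A, tol=1e-9):
    """Classify the origin of x' = Ax from the eigenvalues of a 2x2 matrix A."""
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) < tol):          # real eigenvalues: node or saddle
        if re[0] * re[1] < 0:
            return "saddle"
        return "stable node" if np.all(re < 0) else "unstable node"
    if np.all(np.abs(re) < tol):          # purely imaginary: center
        return "center"
    return "stable spiral" if np.all(re < 0) else "unstable spiral"

node   = classify_equilibrium(np.array([[-1.0, 0.0], [0.0, -2.0]]))  # -1, -2
spiral = classify_equilibrium(np.array([[-1.0, -2.0], [2.0, -1.0]])) # -1 +/- 2i
saddle = classify_equilibrium(np.array([[1.0, 1.0], [0.0, -1.0]]))   # 1, -1
center = classify_equilibrium(np.array([[0.0, 1.0], [-1.0, 0.0]]))   # +/- i
```

Note the tolerance: numerically computed eigenvalues of a center may carry tiny nonzero real parts, so exact comparisons with zero would misclassify them.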


Handling Nonlinearity: Linearization Techniques

Real-world systems are rarely linear, but linearization lets you apply eigenvalue tools to nonlinear problems. The key idea: near an equilibrium point, a nonlinear system behaves approximately like its linear approximation.

Linearization of Nonlinear Systems

For a nonlinear system $\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x})$, the Jacobian matrix captures the local behavior near an equilibrium point $\mathbf{x}^*$:

$$J = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{pmatrix} \bigg|_{\mathbf{x} = \mathbf{x}^*}$$

You evaluate all partial derivatives at the equilibrium, then analyze the eigenvalues of $J$ just as you would for a linear system. If all eigenvalues have nonzero real parts (the equilibrium is called hyperbolic), the linearization correctly predicts the stability of the full nonlinear system.

Linearization fails at borderline cases: when eigenvalues are purely imaginary or zero, the nonlinear terms you dropped during linearization actually determine the outcome. You'll need other methods for those situations.
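As a concrete sketch, take a damped pendulum (my choice of example, not the guide's): $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin x_1 - 0.5\,x_2$. Its equilibria are $(0, 0)$ and $(\pi, 0)$, and the Jacobian at each tells the story:

```python
import numpy as np

# Damped pendulum: x1' = x2, x2' = -sin(x1) - 0.5*x2
# Jacobian of the right-hand side, evaluated at an equilibrium (x1*, 0)
def jacobian(x1):
    return np.array([[0.0, 1.0],
                     [-np.cos(x1), -0.5]])

eig_down = np.linalg.eigvals(jacobian(0.0))     # hanging position
eig_up = np.linalg.eigvals(jacobian(np.pi))     # inverted position

down_stable = bool(np.all(eig_down.real < 0))   # True: stable spiral
up_stable = bool(np.all(eig_up.real < 0))       # False: saddle point
```

The hanging position is hyperbolic with negative real parts (asymptotically stable), while the inverted position has one positive real eigenvalue (a saddle), matching physical intuition.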

Lyapunov Stability Theory

Lyapunov's method determines stability without solving the system. The idea is to find a scalar function $V(\mathbf{x})$ that acts like an "energy" measure and show that this energy decreases (or at least doesn't increase) along trajectories.

A valid Lyapunov function must satisfy:

  1. $V(\mathbf{0}) = 0$ (zero energy at equilibrium)
  2. $V(\mathbf{x}) > 0$ for all $\mathbf{x} \neq \mathbf{0}$ (positive energy everywhere else)
  3. $\dot{V} = \frac{dV}{dt} \leq 0$ along solutions (energy never increases)

If $\dot{V} < 0$ strictly, you get asymptotic stability. If only $\dot{V} \leq 0$, you get stability but can't guarantee trajectories actually converge to the equilibrium.

The challenge is that there's no general recipe for constructing $V$. For mechanical systems, total energy often works. For other systems, quadratic forms like $V = x_1^2 + x_2^2$ are a common starting guess.

Compare: Linearization vs. Lyapunov methods. Linearization requires computing a Jacobian and its eigenvalues (algebraic and systematic). Lyapunov requires constructing an energy-like function (creative and problem-specific). Use linearization first; reach for Lyapunov when eigenvalues are purely imaginary or zero.
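A classic textbook case where exactly this happens (my example, not the guide's): $\dot{x} = -y - x^3$, $\dot{y} = x - y^3$. The Jacobian at the origin has eigenvalues $\pm i$, so linearization is inconclusive, but $V = x^2 + y^2$ gives $\dot{V} = -2x^4 - 2y^4 < 0$ away from the origin. A quick numerical sanity check of that sign condition:

```python
import numpy as np

# System: x' = -y - x^3,  y' = x - y^3  (linearization gives +/- i: inconclusive)
# Candidate Lyapunov function V = x^2 + y^2, so along trajectories:
#   Vdot = 2x*x' + 2y*y' = -2x^4 - 2y^4
def vdot(x, y):
    return 2*x*(-y - x**3) + 2*y*(x - y**3)

# Verify Vdot < 0 on a grid of points away from the origin
grid = np.linspace(-2.0, 2.0, 41)
samples = [(x, y) for x in grid for y in grid if x*x + y*y > 1e-12]
all_negative = all(vdot(x, y) < 0 for x, y in samples)
```

A grid check is evidence, not a proof, but here the algebra confirms it: the cross terms $-2xy$ and $+2xy$ cancel exactly, leaving $\dot{V}$ strictly negative off the origin, so the origin is asymptotically stable.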


Visualizing Dynamics: Phase Plane Methods

Phase plane analysis transforms abstract equations into geometric pictures, revealing behaviors that algebra alone might miss. These graphical techniques are essential for understanding two-dimensional systems.

Phase Plane Analysis

A phase portrait plots trajectories in the $(x_1, x_2)$ plane. Each point represents a system state, and curves show how that state evolves over time. Time itself doesn't appear explicitly; it's encoded in the direction and spacing of the trajectories.

Phase portraits reveal global behavior that local eigenvalue analysis can't capture: basins of attraction (which initial conditions lead to which equilibrium), separatrices (boundaries between different long-term outcomes), and the overall flow structure of the system.

Equilibrium classification becomes visual in the phase plane:

  • Nodes: trajectories approach (or leave) along straight lines, tangent to eigenvectors
  • Spirals: trajectories rotate as they approach or recede
  • Saddle points: trajectories approach along one eigenvector direction and recede along the other, creating hyperbolic-shaped curves
  • Centers: closed elliptical orbits around the equilibrium (purely imaginary eigenvalues)

Nullclines are another useful graphical tool. These are curves where one derivative equals zero ($\dot{x}_1 = 0$ or $\dot{x}_2 = 0$). Equilibria occur where nullclines intersect, and the nullclines divide the phase plane into regions where you can determine the sign of each derivative, helping you sketch the overall flow direction.
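To make the nullcline idea concrete, here is a hypothetical two-species competition model (my illustration, not from this guide): $\dot{x} = x(3 - x - 2y)$, $\dot{y} = y(2 - x - y)$. Its $x$-nullclines are $x = 0$ and $x + 2y = 3$; its $y$-nullclines are $y = 0$ and $x + y = 2$. Intersecting them yields four equilibria, which we can verify directly:

```python
import numpy as np

# Competition model: x' = x(3 - x - 2y),  y' = y(2 - x - y)
def f(x, y):
    return np.array([x * (3 - x - 2*y), y * (2 - x - y)])

# Equilibria sit where an x-nullcline crosses a y-nullcline:
#   (0,0), (3,0), (0,2), and the interior crossing of x+2y=3 with x+y=2 at (1,1)
equilibria = [(0.0, 0.0), (3.0, 0.0), (0.0, 2.0), (1.0, 1.0)]
residuals = [np.linalg.norm(f(x, y)) for x, y in equilibria]
```

Each residual is zero, confirming the candidates; classifying each one would then proceed via the Jacobian as in the linearization section.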

Limit Cycles

A limit cycle is a closed loop in the phase plane representing a periodic solution where the system repeats its behavior indefinitely.

  • Stable limit cycles attract nearby trajectories: if you perturb the system slightly off the cycle, it returns. This makes them robust oscillators. Biological rhythms (heartbeats, circadian cycles) and predator-prey oscillations are classic examples.
  • Unstable limit cycles repel nearby trajectories.
  • Limit cycles cannot exist in linear systems. They are inherently nonlinear phenomena, so their presence tells you the system cannot be fully understood through linearization alone.

Compare: Stable equilibrium vs. stable limit cycle. Both are "attractors," but equilibria represent steady states (constant solutions) while limit cycles represent sustained oscillations (periodic solutions). A system that settles to a constant value has a stable equilibrium; a system that locks into a repeating pattern has a stable limit cycle.
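The Van der Pol oscillator is the standard example of a stable limit cycle (a classic choice, not one this guide names). A small simulation sketch with a hand-rolled RK4 integrator shows a trajectory starting near the origin being attracted onto the cycle:

```python
import numpy as np

# Van der Pol oscillator: x' = y,  y' = mu*(1 - x^2)*y - x,  with mu = 1
def f(state, mu=1.0):
    x, y = state
    return np.array([y, mu * (1 - x**2) * y - x])

def rk4_step(state, h):
    # One classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.1, 0.0])   # small perturbation off the unstable origin
h, xs = 0.01, []
for i in range(8000):          # integrate to t = 80
    state = rk4_step(state, h)
    if i >= 6000:              # keep only the tail, after transients die out
        xs.append(state[0])

amplitude = max(abs(v) for v in xs)   # settles close to 2 for mu = 1
```

Starting from a different initial condition (inside or outside the loop) produces the same final amplitude, which is exactly what makes the cycle an attractor.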


Advanced Topics: Bifurcations and Dimensional Reduction

When parameters change or systems are high-dimensional, these techniques become essential. They connect stability analysis to real-world questions about how systems respond to changing conditions.

Bifurcation Theory

Bifurcation theory studies how the equilibrium structure of a system changes as a parameter varies. As a parameter crosses a critical value (the bifurcation point), equilibria can appear, disappear, or exchange stability.

The most common bifurcation types:

  • Saddle-node: two equilibria (one stable, one unstable) collide and annihilate each other. Past the bifurcation, no equilibrium exists in that region.
  • Transcritical: two equilibria exchange stability as they pass through each other. Both exist before and after, but their stability swaps.
  • Pitchfork: one equilibrium splits into three (or three merge into one). Common in systems with symmetry.
  • Hopf: a stable equilibrium loses stability and a limit cycle is born (or vice versa). This is how steady states transition to oscillatory behavior.

Bifurcation analysis predicts qualitative changes like population collapse, onset of oscillations, or sudden shifts between different operating regimes.
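The saddle-node case has a simple normal form, $\dot{x} = r - x^2$ (a standard illustration, assumed here rather than taken from this guide). Equilibria solve $r - x^2 = 0$ and their stability follows from the sign of $f'(x) = -2x$:

```python
import math

# Saddle-node normal form: x' = r - x^2
def equilibria(r):
    """Return (location, stability) pairs for the parameter value r."""
    if r < 0:
        return []                          # past the bifurcation: no equilibria
    if r == 0:
        return [(0.0, "semi-stable")]      # the bifurcation point itself
    s = math.sqrt(r)
    # f'(x) = -2x: negative at +sqrt(r) (stable), positive at -sqrt(r) (unstable)
    return [(s, "stable"), (-s, "unstable")]

before = equilibria(1.0)    # two equilibria: one stable, one unstable
after = equilibria(-1.0)    # none: they collided and annihilated at r = 0
```

Sweeping `r` through zero shows the collision: the stable and unstable branches approach each other, merge, and vanish.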

Stability of Periodic Solutions

Floquet theory extends eigenvalue analysis from equilibrium points to periodic orbits. Instead of the Jacobian, you analyze the monodromy matrix, which describes how small perturbations evolve over one full period of the orbit.

Floquet multipliers play the role that eigenvalues play for equilibria:

  • All multipliers with $|\mu| < 1$ → the periodic orbit is stable
  • Any multiplier with $|\mu| > 1$ → the periodic orbit is unstable

One multiplier always equals 1 (corresponding to perturbations along the orbit itself). This theory is critical for determining whether rhythmic behavior persists under perturbation.
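The monodromy matrix can be computed by integrating the matrix ODE $\Phi' = A(t)\Phi$, $\Phi(0) = I$, over one period. A deliberately simple test case (my choice: the harmonic oscillator $x'' = -x$, a linear center rather than a true limit cycle, used only to illustrate the computation) has period $2\pi$ and monodromy equal to the identity, so both multipliers equal 1:

```python
import numpy as np

# Harmonic oscillator as a linear system: x' = y, y' = -x; all solutions
# are 2*pi-periodic, so the monodromy matrix over one period is the identity.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def rk4_matrix_step(Phi, h):
    # RK4 step for the matrix ODE Phi' = A @ Phi
    k1 = A @ Phi
    k2 = A @ (Phi + 0.5 * h * k1)
    k3 = A @ (Phi + 0.5 * h * k2)
    k4 = A @ (Phi + h * k3)
    return Phi + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

T, n = 2 * np.pi, 2000
Phi = np.eye(2)                        # fundamental matrix, Phi(0) = I
for _ in range(n):
    Phi = rk4_matrix_step(Phi, T / n)

multipliers = np.linalg.eigvals(Phi)   # both close to 1 for this system
```

For a genuine limit cycle, $A(t)$ would be the Jacobian evaluated along the periodic orbit (time-dependent), but the integration scheme is the same.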

Center Manifold Theory

When some eigenvalues have zero real parts and others don't, center manifold theory lets you reduce the problem's dimension. The idea is to separate the dynamics into three types:

  • Stable directions (eigenvalues with negative real parts): perturbations decay quickly
  • Unstable directions (eigenvalues with positive real parts): perturbations grow quickly
  • Center directions (eigenvalues with zero real parts): perturbations evolve slowly, and these determine the long-term behavior

The center manifold is a lower-dimensional surface tangent to the center eigenspace at the equilibrium. By restricting the dynamics to this surface, you get a simpler system that captures the essential behavior near the borderline equilibrium.

Compare: Bifurcation analysis vs. center manifold reduction. Bifurcation theory asks what happens when parameters change, while center manifold theory asks how the analysis can be simplified. They're often used together: reduce to the center manifold first, then study bifurcations on that reduced system.
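A standard worked example (textbook material, chosen for illustration rather than taken from this guide): consider

$$\dot{x} = xy, \qquad \dot{y} = -y - x^2$$

The linearization at the origin has eigenvalues $0$ and $-1$: one center direction (the $x$-axis) and one stable direction. Seek the center manifold as a graph $y = h(x) = ax^2 + O(x^4)$. Invariance requires $\dot{y} = h'(x)\,\dot{x}$, which here reads

$$-h(x) - x^2 = h'(x)\, x\, h(x)$$

Substituting $h(x) = ax^2$ and matching at order $x^2$ gives $-a - 1 = 0$, so $h(x) = -x^2 + O(x^4)$. The reduced dynamics on the manifold are

$$\dot{x} = x\,h(x) = -x^3 + O(x^5)$$

so the origin is asymptotically stable, a conclusion the zero eigenvalue prevented linearization from delivering.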


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Determining stability | Eigenvalue sign, Lyapunov functions, Floquet multipliers |
| Linear system analysis | Eigenvalues of coefficient matrix, phase portraits, node/spiral/saddle classification |
| Nonlinear techniques | Jacobian linearization, Lyapunov functions, center manifold reduction |
| Periodic behavior | Limit cycles, Floquet theory, Hopf bifurcations |
| Parameter dependence | Bifurcation theory, saddle-node/transcritical/pitchfork/Hopf bifurcations |
| Graphical methods | Phase plane analysis, trajectory sketching, nullcline plotting |
| Dimensional reduction | Center manifold theory, separation of fast/slow dynamics |

Self-Check Questions

  1. You compute the Jacobian at an equilibrium and find eigenvalues $\lambda = -2 \pm 3i$. What type of equilibrium is this, and is it stable? How would trajectories appear in the phase plane?

  2. When would you use linearization versus Lyapunov's method to determine stability? Describe a specific scenario where linearization is inconclusive but Lyapunov succeeds.

  3. A system has a stable equilibrium that becomes unstable as a parameter increases, while a stable limit cycle simultaneously appears. What type of bifurcation is this, and what does it predict about the system's long-term behavior?

  4. Both standard eigenvalue analysis and Floquet theory involve analyzing eigenvalues/multipliers, but they apply to different types of solutions. Explain how the mathematical setup differs between them.

  5. You're given a nonlinear system and asked to determine the stability of all equilibrium points. Outline the complete procedure, identifying which concepts from this guide you'd apply at each step.