Fiveable

🎛️Control Theory Unit 5 Review

5.4 State observers

Written by the Fiveable Content Team • Last updated August 2025

Definition of state observers

State observers estimate a system's internal states when you can't measure them directly. They're essential in modern control because most real systems have states that are physically inaccessible, too expensive to measure, or simply lack appropriate sensors.

An observer works by running a mathematical model of your system in parallel with the real system. It takes the known inputs and available sensor outputs, then uses the mismatch between predicted and actual outputs to correct its state estimates. Over time, the estimated state vector converges to the true state vector.

Role in control systems

State feedback controllers need access to the full state vector, but in practice you rarely have sensors on every state. Observers bridge that gap: they reconstruct the states you can't measure so you can still use powerful state feedback techniques.

Estimating unmeasurable states

Consider a motor where you can measure shaft position but not winding current. The current is part of the state vector and matters for control, but putting a current sensor on every winding may be impractical. An observer uses the position measurement plus the known voltage input to estimate the current in real time.

The quality of these estimates depends on three things:

  • How accurate your system model is
  • Which outputs you can actually measure
  • How well the observer gain is tuned

Reconstructing the full state vector

The observer combines measured outputs (typically y) and known inputs (u) with the system matrices A, B, and C to reconstruct the full state vector x. This reconstructed vector then feeds into the state feedback controller as if you had measured every state directly.

Types of state observers

Full-order observers

Full-order observers (also called Luenberger observers) estimate all states, including the ones you can already measure. They have the same number of states as the original system.

  • Simple to design and implement
  • Widely used as the default choice
  • Slightly more computation than necessary, since they re-estimate states you already know

Reduced-order observers

Reduced-order observers only estimate the states you can't measure. If your system has n states and you measure p of them, the reduced-order observer has only n - p states.

  • More computationally efficient
  • Require that measured states are directly available and reliable
  • More complex to derive and implement than full-order observers

Design of state observers

Observer gain matrix

The observer gain matrix L is the key design parameter. It determines how aggressively the observer corrects its estimates based on the output prediction error y - \hat{y}.

The observer dynamics for a continuous-time LTI system look like this:

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})

The term L(y - C\hat{x}) is the correction. When the estimation error e = x - \hat{x} is nonzero, this term pushes the estimate toward the true state. The error dynamics are governed by:

\dot{e} = (A - LC)e

So the eigenvalues of (A - LC) determine how fast the estimation error decays to zero.

Pole placement technique

Designing LL via pole placement follows these steps:

  1. Choose desired observer poles (eigenvalues of A - LC). These should be faster than the closed-loop controller poles, typically 2 to 5 times faster, so the observer converges before the controller needs accurate estimates.

  2. Verify that the system is observable (see next section).

  3. Compute L so that \det(sI - A + LC) matches your desired characteristic polynomial.

This is mathematically dual to the pole placement problem for the state feedback gain K. Many software tools (like MATLAB's place or acker) handle the computation directly.
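The duality with state feedback means L can be computed with the same pole-placement routine used for K. A minimal sketch using SciPy, where the double-integrator plant and the pole locations are illustrative assumptions, not values from the text:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative double-integrator plant: states are position and velocity
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])        # we measure position only

# Desired observer poles (eigenvalues of A - LC)
obs_poles = [-8.0, -10.0]

# Duality: placing eig(A - LC) is the state-feedback placement problem
# for the pair (A^T, C^T), since eig(A - LC) = eig(A^T - C^T L^T)
L = place_poles(A.T, C.T, obs_poles).gain_matrix.T   # shape (2, 1)

print(np.sort(np.linalg.eigvals(A - L @ C).real))
```

In MATLAB the equivalent computation is `L = place(A', C', obs_poles)'`.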

Separation principle

The separation principle says you can design the controller gain K and the observer gain L independently, then combine them. The closed-loop eigenvalues of the combined system are simply the union of the controller poles and the observer poles.

This holds for linear time-invariant systems that are both controllable and observable. It greatly simplifies design because you don't have to solve a coupled problem. However, for nonlinear systems or systems with significant model uncertainty, the separation principle does not generally hold.

Observability of systems

Before designing an observer, you need to confirm the system is observable: that the output measurements contain enough information to reconstruct all internal states.

Observability matrix

For an LTI system with state matrix A and output matrix C, the observability matrix is:

\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}

where n is the number of states.

Observability conditions

The system is observable if and only if \mathcal{O} has full rank (rank = n).

  • If \text{rank}(\mathcal{O}) = n: observable. You can place observer poles anywhere you want.
  • If \text{rank}(\mathcal{O}) < n: not fully observable. Some states cannot be reconstructed from the outputs, and a full-state observer cannot be designed.

An equivalent condition: the system is observable if and only if no eigenvector of A lies in the null space of C.
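The rank test is easy to carry out directly. A short NumPy sketch, using an illustrative double-integrator pair (an assumption for demonstration): measuring position makes the system observable, while measuring only velocity does not, because a constant position offset never shows up in the output.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack the row blocks C, CA, ..., CA^(n-1)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def is_observable(A, C):
    """Full-rank test on the observability matrix."""
    return bool(np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0])

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
C_pos = np.array([[1.0, 0.0]])           # measure position
C_vel = np.array([[0.0, 1.0]])           # measure velocity only

print(is_observable(A, C_pos))   # True: velocity appears in position's evolution
print(is_observable(A, C_vel))   # False: position never reaches the output
```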

Luenberger observers

Structure and design

The Luenberger observer runs a copy of the plant model with a correction term:

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})

The design process:

  1. Verify observability of (A, C).
  2. Select desired observer poles, faster than the controller's closed-loop poles.
  3. Compute LL using pole placement (or an equivalent method).
  4. Simulate or implement the observer equation alongside the real system.

The observer poles should be fast enough for quick convergence but not so fast that they amplify measurement noise.
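The design process above can be exercised in a small simulation that runs the observer equation alongside the plant. This sketch uses forward-Euler integration; the plant matrices, gain, and time step are illustrative assumptions (L = [18, 80]^T places the observer poles of this plant at -8 and -10):

```python
import numpy as np

# Illustrative double-integrator plant with position measurement
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[18.0], [80.0]])   # eig(A - LC) = {-8, -10}

dt = 1e-3
x = np.array([[1.0], [-0.5]])    # true state (unknown to the observer)
x_hat = np.zeros((2, 1))         # observer starts from zero

for k in range(5000):            # 5 seconds of simulated time
    u = np.array([[np.sin(0.002 * k)]])   # arbitrary known input
    y = C @ x                             # sensor measurement
    # Forward-Euler step of both the plant and the observer equation
    x = x + dt * (A @ x + B @ u)
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))

print(np.linalg.norm(x - x_hat))   # estimation error after convergence
```

Because the observer sees the same input u and the measurement y, the error obeys the (A - LC) dynamics and decays regardless of the input signal.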

Advantages and limitations

Advantages:

  • Straightforward to design and implement
  • Works well for linear systems with reasonably accurate models
  • Provides the full state vector for feedback

Limitations:

  • Designed for linear systems; performance degrades with significant nonlinearities
  • Assumes the model is known and accurate
  • Does not explicitly handle process noise or model uncertainty (unlike Kalman filters)

Kalman filters as observers

Stochastic systems

Real systems have noise: disturbances acting on the plant (process noise) and imperfect sensors (measurement noise). The Kalman filter treats both as random signals with known statistical properties (zero-mean, with known covariance matrices Q and R).

Optimal state estimation

The Kalman filter produces the state estimate that minimizes the mean-squared estimation error. It operates recursively in two steps:

  1. Prediction: Use the system model to propagate the state estimate and its error covariance forward in time.

    • \hat{x}_{k|k-1} = A\hat{x}_{k-1|k-1} + Bu_{k-1}
    • P_{k|k-1} = AP_{k-1|k-1}A^T + Q
  2. Update: Incorporate the new measurement to correct the prediction.

    • Kalman gain: K_k = P_{k|k-1}C^T(CP_{k|k-1}C^T + R)^{-1}
    • Updated estimate: \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k(y_k - C\hat{x}_{k|k-1})
    • Updated covariance: P_{k|k} = (I - K_kC)P_{k|k-1}

The Kalman gain K_k automatically balances trust between the model prediction and the measurement. If measurement noise is low (small R), the gain is large and the filter trusts the sensor more. If process noise is low (small Q), the filter trusts the model more.
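The predict/update cycle above translates directly into code. A minimal NumPy sketch; the scalar random-walk system and noise levels are illustrative assumptions:

```python
import numpy as np

def kalman_step(x_hat, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of the discrete-time Kalman filter."""
    # Predict: propagate the estimate and its covariance through the model
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: blend the prediction with the new measurement
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative 1-state random walk observed through a noisy sensor
A = np.array([[1.0]]); B = np.array([[0.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])

rng = np.random.default_rng(0)
x_true, x_hat, P = 0.0, np.array([[0.0]]), np.array([[10.0]])
for _ in range(200):
    x_true += rng.normal(0.0, 0.1)                   # process noise, std sqrt(Q)
    y = np.array([[x_true + rng.normal(0.0, 1.0)]])  # measurement noise, std sqrt(R)
    x_hat, P = kalman_step(x_hat, P, np.array([[0.0]]), y, A, B, C, Q, R)

print(float(P[0, 0]))   # steady-state error variance, well below R
```

Note how the final error variance settles far below the sensor variance R: averaging over many measurements lets the filter outperform any single reading.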

Observer-based controller design

State feedback using estimated states

The control law u = -K\hat{x} uses the observer's estimate \hat{x} in place of the true state x. During transients, the estimation error affects control performance, but as the observer converges, the behavior approaches that of full-state feedback.

Separation of observer and controller

Thanks to the separation principle, you design:

  • The controller gain K to place the closed-loop poles of (A - BK)
  • The observer gain L to place the observer poles of (A - LC)

The combined system has eigenvalues at both sets of poles. This modularity means you can tune the controller and observer independently, and swap out one without redesigning the other.

In practice, you should still verify the combined system's performance through simulation, since the separation principle guarantees stability but not necessarily good transient behavior in all cases.
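The "union of poles" claim follows from the block-triangular structure of the combined system in (x, e) coordinates, and it is easy to verify numerically. A sketch with an illustrative double-integrator plant (matrices and pole choices are assumptions for demonstration):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Design K and L independently, as the separation principle allows
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix           # controller poles
L = place_poles(A.T, C.T, [-8.0, -10.0]).gain_matrix.T    # observer poles

# In (x, e) coordinates the combined closed loop is block-triangular:
#   [x_dot]   [A - BK    BK   ] [x]
#   [e_dot] = [  0     A - LC ] [e]
Acl = np.block([[A - B @ K,        B @ K],
                [np.zeros((2, 2)), A - L @ C]])

print(np.sort(np.linalg.eigvals(Acl).real))   # union of both pole sets
```

The zero block in the lower-left corner is why the spectra simply combine: the estimation error evolves independently of the state.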

Robustness of observers

Sensitivity to model uncertainties

If your model of A, B, or C is inaccurate, the observer's estimates will be biased or slow to converge. Small parameter errors might be tolerable, but large modeling errors can cause the estimation error to grow or oscillate. This is why model validation matters before deploying an observer.

Robust observer design techniques

Several approaches handle uncertainty more gracefully:

  • H_\infty observers minimize the worst-case estimation error across all bounded disturbances and uncertainties
  • Sliding mode observers use a high-gain, discontinuous correction term to force the estimation error to zero in finite time, even with model mismatch
  • Adaptive observers estimate unknown system parameters alongside the states, adjusting as conditions change
  • Robust Kalman filters incorporate bounds on model uncertainty into the covariance matrices, producing more conservative but safer estimates

Applications of state observers

Process monitoring and fault detection

Observers can detect faults by comparing estimated states or outputs against expected values. A sudden divergence between the observer's prediction and the measured output signals that something has changed: a sensor failure, actuator malfunction, or process anomaly. This approach is widely used in aerospace, chemical plants, and power systems.

Sensor fusion and data reconciliation

When multiple sensors measure related quantities (e.g., GPS and accelerometers on a vehicle), an observer can fuse these measurements into a single, more accurate state estimate. The Kalman filter is the classic tool for this. It weights each sensor according to its noise characteristics and resolves inconsistencies between redundant measurements.

Implementation considerations

Discrete-time observers

Digital controllers operate in discrete time, so continuous-time observer equations must be discretized before implementation. Common discretization methods:

  • Zero-order hold (ZOH): Assumes the input is constant between samples. Generally preferred for control applications.
  • Forward Euler: Simple but can introduce instability if the sampling period is too large.
  • Bilinear (Tustin) transform: Preserves stability properties better than Euler methods.

The sampling time T_s should be small enough relative to the observer's fastest pole to avoid performance degradation. A common guideline is T_s \leq \frac{1}{10\,\omega_{max}}, where \omega_{max} is the fastest observer pole's natural frequency.
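For LTI systems the ZOH discretization can be computed once, offline. A sketch using SciPy; the plant and sampling time are illustrative assumptions, and in practice the observer gain would then be redesigned against the discrete pair (Ad, Cd):

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time model (illustrative double integrator, position output)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

Ts = 0.01   # sampling period, chosen well below the fastest observer pole
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method='zoh')

# ZOH of a double integrator gives the familiar kinematic form:
print(Ad)   # [[1, Ts], [0, 1]]
print(Bd)   # [[Ts^2/2], [Ts]]
```

The discrete observer update then runs as x̂[k+1] = Ad x̂[k] + Bd u[k] + Ld (y[k] - Cd x̂[k]) at each sample.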

Numerical issues and computational complexity

  • Covariance matrix symmetry: In Kalman filter implementations, round-off errors can cause the covariance matrix P to lose symmetry or become non-positive-definite. The Joseph form of the covariance update or square-root filters help prevent this.
  • Ill-conditioning: Large differences in state magnitudes can cause numerical problems. State scaling or balanced realizations can help.
  • Real-time constraints: For embedded systems, the matrix inversions and multiplications in each observer update must complete within one sampling period. Sparse matrix techniques or pre-computed gains (for time-invariant systems) reduce the computational load.
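The Joseph-form update mentioned above is algebraically identical to the short form (I - KC)P for the optimal gain, but it is a sum of two symmetric positive-semidefinite terms, so it stays well behaved under round-off. A sketch with illustrative numbers:

```python
import numpy as np

def joseph_update(P_pred, K, C, R):
    """Joseph-form covariance update: (I - KC) P (I - KC)^T + K R K^T.
    Equals (I - KC) P for the optimal K, but preserves symmetry and
    positive semi-definiteness in floating point."""
    n = P_pred.shape[0]
    IKC = np.eye(n) - K @ C
    return IKC @ P_pred @ IKC.T + K @ R @ K.T

# Illustrative 2-state filter with a single measurement
P_pred = np.array([[2.0, 0.5], [0.5, 1.0]])
C = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # optimal gain

P_short = (np.eye(2) - K @ C) @ P_pred
P_joseph = joseph_update(P_pred, K, C, R)

print(np.allclose(P_short, P_joseph))      # True: same result for optimal K
print(np.allclose(P_joseph, P_joseph.T))   # True: Joseph form stays symmetric
```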

Advanced topics in state estimation

Nonlinear observers

Linear observers fail when system dynamics are significantly nonlinear. Several extensions exist:

  • Extended Kalman Filter (EKF): Linearizes the nonlinear model around the current estimate at each time step, then applies the standard Kalman filter equations. Simple but can diverge if nonlinearities are strong.
  • Unscented Kalman Filter (UKF): Propagates a set of carefully chosen sample points (sigma points) through the nonlinear model, capturing mean and covariance more accurately than linearization.
  • Particle filters: Represent the state probability distribution with a large set of random samples (particles). Can handle arbitrary nonlinearities and non-Gaussian noise, but computationally expensive.
  • Sliding mode observers: Use discontinuous correction terms that are inherently robust to certain classes of nonlinearity and disturbance.
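As a concrete illustration of the EKF's relinearize-then-filter loop, here is a minimal scalar sketch. The nonlinear model, noise levels, and initial values are illustrative assumptions, not a system from the text:

```python
import numpy as np

# Illustrative scalar nonlinear system:
#   x[k+1] = 0.9 x[k] + 0.05 sin(x[k]) + w,    y[k] = sin(x[k]) + v
f = lambda x: 0.9 * x + 0.05 * np.sin(x)
F = lambda x: 0.9 + 0.05 * np.cos(x)   # df/dx, used to propagate P
h = np.sin
H = np.cos                             # dh/dx, relinearized each step
Q, R = 0.01, 0.1                       # process / measurement noise variances

rng = np.random.default_rng(1)
x_true, x_hat, P = 0.8, 0.0, 1.0
for _ in range(100):
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Q))
    y = h(x_true) + rng.normal(0.0, np.sqrt(R))
    # Predict through the nonlinear model; propagate P via the Jacobian
    x_pred = f(x_hat)
    P_pred = F(x_hat) ** 2 * P + Q
    # Update: linearize the measurement map around the predicted state
    Hk = H(x_pred)
    K = P_pred * Hk / (Hk ** 2 * P_pred + R)
    x_hat = x_pred + K * (y - h(x_pred))
    P = (1.0 - K * Hk) * P_pred

print(x_true, x_hat)
```

The only differences from the linear Kalman filter are the nonlinear prediction f(x̂) and measurement h(x̂), with Jacobians F and H standing in for A and C in the covariance equations; divergence risk enters precisely through those linearizations.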

Adaptive observers

When system parameters are unknown or drift over time, adaptive observers estimate both the states and the parameters simultaneously. They're useful in applications like robotics (where payload mass changes) or aerospace (where aerodynamic coefficients vary with flight conditions). Design approaches include Lyapunov-based methods and recursive least squares.

Distributed and decentralized observers

For large-scale or networked systems, a single centralized observer may be impractical due to communication bandwidth, latency, or reliability constraints.

  • Distributed observers: Each node estimates its local states and exchanges information with neighbors. The local estimates collectively converge to the global state. Network topology and communication delays are key design factors.
  • Decentralized observers: Each node uses only its own measurements with no inter-node communication. Simpler to implement but generally less accurate than distributed approaches.