🎛️Control Theory Unit 11 Review

11.4 Digital controller design

Written by the Fiveable Content Team • Last updated August 2025

Digital control systems overview

Digital controllers use computers or microcontrollers to sample continuous-time signals, process them with digital algorithms, and output control signals. They've largely replaced analog controllers in industry because they're programmable, more flexible, and capable of running advanced algorithms that analog hardware simply can't handle.

This section covers the full pipeline: sampling and discrete-time math, controller structures and discretization methods, design techniques in the z-domain, and the practical issues you'll face when implementing these controllers on real hardware.

Advantages vs analog systems

  • Programmability: You can change control algorithms and parameters through software alone, with no hardware modifications.
  • Noise immunity: Digital signals are far more resistant to environmental interference (temperature drift, electromagnetic noise, humidity) than analog signals.
  • Complex algorithms: Strategies like adaptive control, optimal control, and model predictive control become feasible when you have a processor doing the math.
  • Data handling: Digital systems can log data, communicate over networks, and support remote monitoring and diagnostics.

Limitations of digital controllers

  • Sampling and quantization artifacts: Converting between continuous and discrete signals introduces aliasing, resolution limits, and potential stability problems if the sampling rate or converter resolution is insufficient.
  • Bandwidth constraints: The sampling rate and processor speed set a hard ceiling on how fast the controller can respond. High-frequency or fast-acting plants may push these limits.
  • Additional hardware: You need A/D and D/A converters, which add cost, complexity, and their own error sources.
  • Mathematical complexity: Design and analysis rely on z-transforms and difference equations, which can feel less intuitive than the Laplace-domain methods used for analog systems.

Discrete-time systems

Discrete-time systems operate on signals sampled at regular intervals defined by the sampling period T. Instead of differential equations, you work with difference equations. Instead of the Laplace transform, you use the z-transform. The core ideas parallel continuous-time theory, but the details differ enough that you need to build separate intuition.

Sampling and reconstruction

Sampling converts a continuous-time signal into a sequence of values measured every T seconds.

The Nyquist-Shannon sampling theorem sets the fundamental rule: to perfectly reconstruct a continuous signal from its samples, the sampling frequency f_s = 1/T must be at least twice the highest frequency component in the signal. If you violate this, aliasing occurs, where high-frequency content folds down and masquerades as lower-frequency content, corrupting your data in a way you can't undo after sampling.

Reconstruction converts the discrete samples back to a continuous signal. The most common method in control systems is the zero-order hold (ZOH), which simply holds each sample value constant until the next sample arrives. A first-order hold linearly interpolates between samples for a smoother approximation.
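
As a quick sketch (the helper names here are illustrative, not from any particular library), sampling and zero-order-hold reconstruction look like this:

```python
# Sketch: sampling every T seconds, then zero-order-hold reconstruction.
def sample(x, T, n_samples):
    """Sample the continuous-time function x(t) at t = 0, T, 2T, ..."""
    return [x(k * T) for k in range(n_samples)]

def zoh(samples, T, t):
    """Zero-order hold: hold each sample value until the next one arrives."""
    k = min(int(t // T), len(samples) - 1)
    return samples[k]

T = 0.1
xs = sample(lambda t: t * t, T, 5)   # x(t) = t^2 sampled at t = 0, 0.1, ..., 0.4
print(xs[2])            # the sample taken at t = 0.2, i.e. 0.2^2
print(zoh(xs, T, 0.25)) # between samples, ZOH holds the t = 0.2 value
```

Note how the ZOH output at t = 0.25 is simply the t = 0.2 sample: nothing between sampling instants is interpolated.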

Z-transform

The z-transform is to discrete-time systems what the Laplace transform is to continuous-time systems. It converts a discrete-time sequence x[n] into a complex frequency-domain representation X(z), where z is a complex variable.

Key properties that carry over from the Laplace domain include linearity, time-shifting, scaling, and convolution. The z-transform lets you:

  • Analyze system stability (are all poles inside the unit circle?)
  • Derive transfer functions and frequency responses
  • Convert between difference equations and transfer function representations
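
The unit-circle stability check can be automated in a few lines of NumPy (`is_stable` is an illustrative helper, not a standard function):

```python
import numpy as np

# Sketch: stability test for a pulse transfer function H(z) = num/den
# using the unit-circle criterion (all poles strictly inside |z| = 1).
def is_stable(den):
    """den: denominator coefficients of H(z) in descending powers of z."""
    poles = np.roots(den)
    return bool(np.all(np.abs(poles) < 1.0))

# H(z) = 1 / (z - 0.5): pole at z = 0.5, inside the unit circle -> stable
print(is_stable([1.0, -0.5]))   # True
# H(z) = 1 / (z - 1.2): pole at z = 1.2, outside the unit circle -> unstable
print(is_stable([1.0, -1.2]))   # False
```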

Pulse transfer functions

A pulse transfer function H(z) describes the input-output relationship of a discrete-time system in the z-domain:

H(z) = \frac{Y(z)}{X(z)}

This assumes zero initial conditions, just like transfer functions in the s-domain. You can express H(z) as a ratio of polynomials in z, in factored form, or in pole-zero form.

The key difference from continuous-time: stability requires all poles to lie inside the unit circle (|z| < 1), not in the left-half plane. Poles on or outside the unit circle mean the system is marginally stable or unstable.

Difference equations

Difference equations are the discrete-time equivalent of differential equations. They relate the current output to previous outputs and current/previous inputs. For example, a first-order system might look like:

y[k] = a \cdot y[k-1] + b \cdot x[k]

The order of the difference equation equals the largest delay in the output terms. You can convert freely between difference equations and pulse transfer functions using the z-transform, which is useful because some analysis is easier in one domain than the other.

Solving a difference equation for a given input and initial conditions can be done either by iterating step-by-step (straightforward on a computer) or by using z-transform techniques (better for analytical insight).
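
A minimal sketch of the step-by-step approach, applied to the first-order example above (assuming a zero initial condition):

```python
# Sketch: iterate y[k] = a*y[k-1] + b*x[k] sample by sample.
def iterate(a, b, x_seq, y0=0.0):
    y, out = y0, []
    for x in x_seq:
        y = a * y + b * x
        out.append(y)
    return out

# Unit-step input with a = 0.5, b = 0.5: the output converges toward 1,
# following y[k] = 1 - 0.5^(k+1).
ys = iterate(0.5, 0.5, [1.0] * 10)
print(ys[0], ys[-1])
```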

Digital controller structures

How you arrive at your digital controller matters. There are two broad philosophies: design directly in discrete time, or design in continuous time and then convert. Each has trade-offs.

Direct design

Direct design means you work entirely in the discrete-time domain from the start. You use the plant's discrete-time model and apply z-domain techniques (pole placement, root locus, frequency response) to design the controller.

The advantage is that sampling effects, computational delays, and zero-order hold behavior are baked into the design from the beginning. This tends to produce better performance than emulation, especially when the sampling rate is relatively low compared to the plant dynamics or when performance requirements are tight.

Emulation of analog controllers

Emulation takes the opposite approach:

  1. Design an analog controller using familiar continuous-time methods (PID tuning, lead-lag compensation, state-space techniques).
  2. Discretize the resulting controller using one of the methods described in the next section.

This is practical when you already have a proven analog design or when the continuous-time design tools are more familiar. The downside is that discretization introduces approximation errors, and the resulting digital controller may not fully exploit what digital implementation can offer. Emulation works best when the sampling rate is much faster than the system dynamics.

PID controller implementation

PID controllers remain the workhorse of industrial control. The continuous-time PID law has three terms:

  • Proportional (P): Responds to the current error. Larger error produces larger output.
  • Integral (I): Accumulates past error over time, eliminating steady-state offset.
  • Derivative (D): Responds to the rate of change of error, improving transient response and damping.

To implement a PID digitally, you discretize the integral (using numerical integration like trapezoidal or rectangular rules) and the derivative (using finite differences). The controller runs at each sampling instant k, reading the error e[k] and computing the control output u[k].

Practical digital PID implementation requires attention to several details:

  • Sampling time selection: Too slow and you lose performance; too fast and you amplify noise in the derivative term.
  • Anti-windup: Prevents the integral term from accumulating excessively when the actuator saturates (more on this below).
  • Derivative filtering: A low-pass filter on the derivative term prevents it from amplifying high-frequency measurement noise.
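
Putting these pieces together, one possible digital PID update (an illustrative sketch, not a canonical implementation) with rectangular integration and a low-pass-filtered derivative:

```python
# Sketch of a discrete PID step: rectangular integration for the I term,
# finite-difference derivative smoothed by a first-order low-pass filter.
class DigitalPID:
    def __init__(self, kp, ki, kd, T, d_alpha=0.1):
        self.kp, self.ki, self.kd, self.T = kp, ki, kd, T
        self.d_alpha = d_alpha   # derivative filter coefficient in (0, 1]
        self.integral = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0

    def update(self, err):
        self.integral += err * self.T                        # rectangular rule
        raw_d = (err - self.prev_err) / self.T               # finite difference
        self.d_filt += self.d_alpha * (raw_d - self.d_filt)  # low-pass filter
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * self.d_filt

pid = DigitalPID(kp=2.0, ki=1.0, kd=0.1, T=0.01)
u = pid.update(1.0)   # control output for a unit error at the first sample
print(u)
```

Lowering `d_alpha` filters the derivative more aggressively, trading noise rejection against derivative responsiveness.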

Discretization methods

Discretization methods convert continuous-time transfer functions or controllers into discrete-time equivalents. Each method makes a different approximation, and the choice affects accuracy, stability preservation, and computational cost.

Forward difference

The forward difference (Euler forward) method approximates the derivative as:

s \approx \frac{z - 1}{T}

which corresponds to:

\frac{dx(t)}{dt} \approx \frac{x[k+1] - x[k]}{T}

It's the simplest method and computationally cheap. However, it can map stable continuous-time poles to unstable discrete-time poles, especially at larger sampling periods. This makes it unreliable for systems near the stability boundary.

Backward difference

The backward difference (Euler backward) method uses:

s \approx \frac{z - 1}{Tz}

which corresponds to:

\frac{dx(t)}{dt} \approx \frac{x[k] - x[k-1]}{T}

This method has a useful property: it always maps stable continuous-time poles to stable discrete-time poles. That built-in stability preservation makes it more robust than the forward method. The trade-off is that it tends to introduce extra phase lag and can slow down transient response.

Bilinear transformation (Tustin's method)

The bilinear transformation substitutes:

s = \frac{2}{T} \cdot \frac{z - 1}{z + 1}

This is the most commonly used discretization method in practice. It maps the entire left-half s-plane to the interior of the unit circle, so stability is always preserved. It also provides a good approximation of the continuous-time frequency response.

The one catch is frequency warping: the mapping compresses the frequency axis, so a specific continuous-time frequency ω_a maps to a slightly different discrete-time frequency. If you need an exact match at a particular frequency (like a notch filter's center frequency), apply pre-warping by adjusting ω_a before discretizing.
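
As a worked example, applying Tustin's substitution by hand to the first-order low-pass H(s) = a/(s + a) gives the difference equation y[k] = ((2 - aT)/(2 + aT))·y[k-1] + (aT/(2 + aT))·(x[k] + x[k-1]). A sketch (helper names are made up for this example):

```python
# Sketch: Tustin (bilinear) discretization of H(s) = a/(s + a),
# obtained by substituting s = (2/T)*(z - 1)/(z + 1) and simplifying.
def tustin_lowpass(a, T):
    den = 2.0 + a * T
    b0 = a * T / den          # weight on x[k] and x[k-1]
    a1 = (2.0 - a * T) / den  # weight on y[k-1]
    return b0, a1

def step_response(a, T, n):
    b0, a1 = tustin_lowpass(a, T)
    y, x_prev, out = 0.0, 0.0, []
    for _ in range(n):
        x = 1.0
        y = a1 * y + b0 * (x + x_prev)
        x_prev = x
        out.append(y)
    return out

# H(s) has unit DC gain, and Tustin preserves it exactly at z = 1,
# so the discrete step response settles near 1.
ys = step_response(a=5.0, T=0.01, n=2000)
print(ys[-1])
```

Evaluating the discrete transfer function at z = 1 confirms the DC gain is exactly 1, one reason Tustin tracks the continuous frequency response so well at low frequencies.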

Pole-zero mapping

Pole-zero mapping directly converts each pole and zero from the s-domain to the z-domain using:

z = e^{sT}

This preserves the pole and zero locations, giving a close match between continuous and discrete frequency responses. It works well for systems with clearly defined pole-zero structures, like lead-lag compensators or notch filters.

Be cautious, though: this method doesn't guarantee that the resulting discrete-time system is causal, and right-half-plane poles or zeros in the original system can cause problems.
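
A tiny sketch of the mapping itself (`s_to_z` is an illustrative name):

```python
import cmath

# Sketch: matched pole-zero mapping z = e^{sT} applied to s-plane locations.
def s_to_z(s, T):
    return cmath.exp(s * T)

T = 0.1
# A stable s-plane pole (negative real part) lands inside the unit circle:
z = s_to_z(complex(-2.0, 3.0), T)
print(abs(z))          # |z| = e^(Re(s)*T) = e^(-0.2), about 0.819
print(abs(z) < 1.0)    # True

# An unstable s-plane pole (positive real part) lands outside it:
print(abs(s_to_z(complex(0.5, 0.0), T)) > 1.0)   # True
```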

Digital controller design techniques

These are the core methods for shaping a digital controller's behavior. Most parallel their continuous-time counterparts but operate in the z-plane.

Root locus in the z-plane

The z-plane root locus works just like the s-plane version: it traces the closed-loop pole locations as a gain parameter varies. The critical difference is the stability boundary. In the z-plane, stable poles must lie inside the unit circle (not the left-half plane).

You use the root locus to:

  • Visualize how gain changes affect stability and transient response
  • Select gain values that place poles at locations corresponding to desired settling time, overshoot, and damping
  • Understand how sampling period and zero-order hold affect pole trajectories

Constant-damping and constant-frequency loci in the z-plane are curves (not straight lines and circles as in the s-plane), so interpreting z-plane root locus plots takes some practice.

Frequency response methods

Bode plots and Nyquist diagrams extend naturally to discrete-time systems. You evaluate the pulse transfer function on the unit circle (z = e^{jωT}) to get the frequency response.

These methods let you:

  • Assess gain margin and phase margin for robustness
  • Shape the open-loop response to meet bandwidth, disturbance rejection, and noise attenuation specs
  • Design lead, lag, and lead-lag compensators in the frequency domain

One thing to keep in mind: the discrete-time frequency response is periodic with period ω_s = 2π/T, so all frequency content above the Nyquist frequency ω_s/2 aliases back into the baseband.

Deadbeat control

Deadbeat control is unique to discrete-time systems. The goal is to drive the output to the reference value in the minimum number of sampling periods.

The controller is designed to cancel the plant's poles and place all closed-loop poles at z = 0. With all poles at the origin, the system's impulse response becomes finite, and the output settles exactly to the reference after at most n steps (where n is the system order).

The appeal is obvious: the fastest possible response. The downsides are equally clear:

  • The controller requires exact knowledge of the plant model. Model uncertainty degrades performance significantly.
  • Control effort between samples can be very large, potentially exceeding actuator limits.
  • The design amplifies measurement noise.

Deadbeat control is most practical in applications like robotics or power electronics where fast settling is critical and the plant model is well known.
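
For a first-order plant y[k+1] = a·y[k] + b·u[k] (an illustrative example, not from the text), the deadbeat law is simple enough to simulate directly:

```python
# Sketch: deadbeat control of the first-order plant y[k+1] = a*y[k] + b*u[k].
# Choosing u[k] = (r - a*y[k]) / b places the closed-loop pole at z = 0,
# so the output reaches the reference r in one step (n = 1 here).
def simulate_deadbeat(a, b, r, y0, steps):
    y, ys = y0, []
    for _ in range(steps):
        u = (r - a * y) / b   # requires exact knowledge of a and b
        y = a * y + b * u
        ys.append(y)
    return ys

ys = simulate_deadbeat(a=0.8, b=0.5, r=1.0, y0=0.0, steps=3)
print(ys)   # settles at r = 1.0 from the very first step
```

Notice the first control move u[0] = 2.0 is four times larger than the steady-state value 0.4, illustrating the large inter-sample control effort the text warns about.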

Pole placement

Pole placement lets you choose exactly where the closed-loop poles go in the z-plane. You pick pole locations based on your performance specs (settling time, damping ratio, overshoot), then compute the state feedback gains that achieve those locations.

The steps are:

  1. Obtain a discrete-time state-space model of the plant.
  2. Verify that the system is controllable (otherwise, you can't place all poles arbitrarily).
  3. Select desired closed-loop pole locations in the z-plane.
  4. Solve for the feedback gain vector K such that the eigenvalues of (A - BK) match the desired poles.

If you don't have access to all states directly, you'll need a state observer (like a discrete-time Luenberger observer) to estimate them. Pole placement can also be extended with integral action for reference tracking and zero steady-state error.
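
The four steps above can be sketched with Ackermann's formula for a small single-input plant (the matrices below are assumptions for the example, not taken from the text):

```python
import numpy as np

# Sketch: discrete-time pole placement for a 2-state plant via
# Ackermann's formula K = [0 ... 0 1] * C^{-1} * phi(A).
def ackermann(A, B, desired_poles):
    n = A.shape[0]
    # Step 2: controllability matrix C = [B, AB, ..., A^{n-1}B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    assert np.linalg.matrix_rank(C) == n, "plant must be controllable"
    # Step 3/4: desired characteristic polynomial evaluated at A
    coeffs = np.poly(desired_poles)   # [1, c1, ..., cn]
    phi = sum(c * np.linalg.matrix_power(A, n - i)
              for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(C) @ phi

# Step 1: an illustrative discrete double integrator, T = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = ackermann(A, B, [0.5, 0.6])
# The eigenvalues of A - B K should land at the chosen z-plane locations.
print(np.sort(np.linalg.eigvals(A - B @ K)))
```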

Linear quadratic regulator (LQR)

LQR is an optimal control method that finds the state feedback gains minimizing a quadratic cost function:

J = \sum_{k=0}^{\infty} \left[ x[k]^T Q \, x[k] + u[k]^T R \, u[k] \right]

The matrices Q and R are design parameters you choose. Increasing Q penalizes state deviations more heavily (tighter tracking), while increasing R penalizes control effort (smoother, less aggressive inputs). The balance between Q and R is the fundamental design trade-off.

To compute the optimal gains:

  1. Solve the discrete-time algebraic Riccati equation (DARE) for the matrix P.
  2. Compute the feedback gain: K = (R + B^T P B)^{-1} B^T P A.

LQR guarantees a stable closed-loop system with good robustness margins. It extends naturally to LQG (Linear Quadratic Gaussian) control when you combine it with a Kalman filter for state estimation, and you can add integral action for reference tracking and disturbance rejection.
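
One simple way to carry out the two steps above (a sketch, not the most numerically robust DARE solver) is to iterate the Riccati recursion to a fixed point; the plant matrices below are an illustrative example:

```python
import numpy as np

# Sketch: discrete-time LQR gains by iterating the Riccati recursion
# P <- Q + A^T P (A - B K) until it converges to the DARE solution.
def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        BT_P = B.T @ P
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)   # K = (R + B'PB)^-1 B'PA
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative discrete double integrator
B = np.array([[0.005], [0.1]])
Q = np.eye(2)            # equal penalty on both states
R = np.array([[1.0]])    # penalty on control effort
K, P = dlqr(A, B, Q, R)
# The closed-loop poles should all lie inside the unit circle.
print(np.abs(np.linalg.eigvals(A - B @ K)))
```

Raising R (say to 100) visibly shrinks the gains in K and slows the closed-loop response, which is the Q-versus-R trade-off in action.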

Practical considerations

Theory gets you a controller on paper. These practical issues determine whether it actually works on real hardware.

Sampling rate selection

Choosing the sampling rate involves balancing several factors:

  • Minimum requirement: The Nyquist criterion says f_s > 2 f_max, but for control (not just signal reconstruction), a common rule of thumb is to sample 6 to 20 times faster than the closed-loop bandwidth.
  • Too slow: You lose information about the plant dynamics, reduce stability margins, and degrade performance.
  • Too fast: You increase computational load, amplify quantization noise, and may stress the A/D and D/A converters without meaningful performance gains.

The sampling rate also affects your discretization. A faster rate generally makes the discrete-time model a closer approximation of the continuous-time plant, but there are diminishing returns.

Quantization effects

A/D and D/A converters have finite resolution, so they round continuous values to the nearest discrete level. This rounding error acts like noise injected into the control loop.

For an n-bit converter with range V_range, the quantization step size is:

q = \frac{V_{range}}{2^n}

The quantization error is bounded by ±q/2. In practice, this can cause:

  • Limit cycles: Small oscillations around the setpoint that never die out, even when the system should be at steady state.
  • Degraded precision: The controller can't distinguish between signal changes smaller than one quantization step.

Mitigation strategies include using higher-resolution converters, adding dither (small random noise that statistically improves effective resolution), and designing controllers that are robust to this noise floor.
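
A minimal sketch of the quantization model above (`quantize` is an illustrative helper):

```python
# Sketch: round-to-nearest quantizer with step size q = V_range / 2^n.
def quantize(v, n_bits, v_range):
    q = v_range / (2 ** n_bits)
    return round(v / q) * q   # rounding error bounded by +/- q/2

q12 = 10.0 / 2 ** 12          # 12-bit converter over a 10 V range
v = 3.3337
err = quantize(v, 12, 10.0) - v
print(q12)                    # roughly 2.44 mV per step
print(abs(err) <= q12 / 2)    # True: error stays within +/- q/2
```

Any signal change smaller than q12 is invisible to the controller, which is exactly the precision floor described above.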

Finite word length effects

Even after the signal is digitized, the arithmetic inside the controller has limited precision. Whether you use fixed-point or floating-point math, every multiplication and addition introduces small roundoff errors.

These errors matter because they can:

  • Shift pole and zero locations away from their designed values (coefficient quantization)
  • Alter the frequency response of digital filters
  • Accumulate over time, potentially causing drift or instability

Fixed-point implementations on low-cost microcontrollers are especially vulnerable. Mitigation approaches include careful scaling, using higher-precision data types for critical computations, and choosing controller realizations (like the delta operator form) that are less sensitive to coefficient quantization.

Anti-windup strategies

Windup happens when the controller's integral term keeps accumulating error while the actuator is saturated (at its physical limit). When the error finally changes sign, the integral has built up so much that the controller takes a long time to "unwind," causing large overshoot or oscillation.

Common anti-windup strategies:

  • Clamping (conditional integration): Stop updating the integral term when the actuator output is saturated.
  • Back-calculation: When saturation is detected, feed the difference between the desired and actual actuator output back to "drain" the integrator at a controlled rate.
  • Integrator limiting: Cap the integral term at a maximum value that corresponds to the actuator range.

Anti-windup is not optional for any real PID implementation. Without it, even a well-tuned controller can exhibit dangerous behavior whenever the actuator hits its limits.
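
A sketch of the clamping strategy inside a PI update (names and structure are illustrative):

```python
# Sketch: one PI step with clamping anti-windup (conditional integration).
# The integral stops accumulating whenever the unsaturated output would
# exceed the actuator limits.
def pi_step(err, state, kp, ki, T, u_min, u_max):
    integral = state + err * T
    u = kp * err + ki * integral
    if u > u_max or u < u_min:            # actuator would saturate:
        integral = state                  # freeze the integral (clamping)
        u = max(u_min, min(u_max, kp * err + ki * integral))
    return u, integral

# Large sustained error against a +/-1 actuator: without anti-windup the
# integral would grow every step; with clamping it stays frozen.
state = 0.0
for _ in range(100):
    u, state = pi_step(5.0, state, kp=1.0, ki=1.0, T=0.01,
                       u_min=-1.0, u_max=1.0)
print(u, state)   # output pinned at the limit, integral did not wind up
```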