Fiveable

🎛️Control Theory Unit 3 Review

3.5 Time-domain design specifications

Written by the Fiveable Content Team • Last updated August 2025

Time response specifications

Time-domain design specifications define how a control system should behave after a change in input. They give you concrete, measurable targets for things like how fast the system responds, how much it oscillates, and how accurately it tracks a reference. Without these specs, controller design is just guesswork.

The core specifications are rise time, settling time, overshoot, and steady-state error. Each one captures a different aspect of performance, and they often trade off against each other. Making a system faster (shorter rise time) tends to increase overshoot, for example. The design challenge is finding controller parameters that satisfy all specs simultaneously.

Transient response of second-order systems

The transient response is the system's behavior during the initial period after an input change, before it reaches steady state. Second-order systems are the workhorse of control theory because many real physical systems (motors, spring-mass-dampers, RLC circuits) behave approximately as second-order, and the math is tractable enough to derive closed-form expressions for every design spec.

Standard second-order transfer function

The standard form is:

$$G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

Two parameters control everything:

  • $\omega_n$ (natural frequency): Sets the speed of the response. Higher $\omega_n$ means a faster system.
  • $\zeta$ (damping ratio): Controls how oscillatory the response is. It's dimensionless and determines whether the system rings, settles smoothly, or sluggishly creeps to its final value.

Damping ratio and natural frequency

The damping ratio $\zeta$ describes how much energy the system dissipates relative to how much it stores. The natural frequency $\omega_n$ is the frequency at which the system would oscillate if there were zero damping ($\zeta = 0$). Together, they fully determine the locations of the closed-loop poles:

$$s = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2}$$

The real part ($-\zeta\omega_n$) governs how quickly oscillations decay. The imaginary part ($\omega_n\sqrt{1-\zeta^2}$) governs the frequency of oscillation.

Effect of damping ratio on system response

The value of $\zeta$ places the system into one of three regimes:

  • Underdamped ($0 < \zeta < 1$): The response oscillates before settling. This is the most common case in control design because it offers the best balance of speed and accuracy.
  • Critically damped ($\zeta = 1$): The fastest response with no oscillation at all. The two poles are real and equal.
  • Overdamped ($\zeta > 1$): No oscillation, but the response is slower than the critically damped case. The two poles are real and distinct.

Most control design targets an underdamped response with $\zeta$ somewhere between 0.4 and 0.8, depending on how much overshoot is acceptable.

Overshoot vs damping ratio

Percent overshoot ($\%OS$) measures how far the response exceeds its final value at the peak. For an underdamped second-order system with a step input:

$$\%OS = e^{-\frac{\zeta\pi}{\sqrt{1-\zeta^2}}} \times 100\%$$

As $\zeta$ increases, overshoot decreases. A damping ratio of $\zeta = 0.5$ gives about 16.3% overshoot, while $\zeta = 0.707$ gives about 4.3%. If you're given a maximum overshoot spec, you can invert this formula to find the minimum required $\zeta$.
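Both directions of this relationship are easy to check numerically. A minimal sketch; the inverse follows from solving the overshoot formula for $\zeta$, a standard manipulation:

```python
import numpy as np

def percent_overshoot(zeta):
    """Percent overshoot of an underdamped 2nd-order step response."""
    return 100.0 * np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))

def zeta_from_overshoot(pos):
    """Invert the overshoot formula: minimum damping ratio for a %OS spec."""
    # Solve %OS/100 = exp(-zeta*pi/sqrt(1-zeta^2)) for zeta.
    logr = np.log(pos / 100.0)
    return -logr / np.sqrt(np.pi**2 + logr**2)

print(round(percent_overshoot(0.5), 1))     # ~16.3
print(round(zeta_from_overshoot(10.0), 3))  # minimum zeta for a 10% overshoot spec
```

A 10% overshoot spec, for instance, works out to a minimum damping ratio of about 0.59.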

Settling time vs damping ratio

Settling time ($t_s$) is the time for the response to reach and stay within a tolerance band around the final value. For a 2% tolerance band:

$$t_s \approx \frac{4}{\zeta\omega_n}$$

For a 5% band, use $\frac{3}{\zeta\omega_n}$ instead. Notice that settling time depends on the product $\zeta\omega_n$, which is just the magnitude of the real part of the poles. To reduce settling time, you need the poles further left in the s-plane.

Rise time vs damping ratio

Rise time ($t_r$) is the time for the response to go from 10% to 90% of its final value. A common approximation for underdamped systems:

$$t_r \approx \frac{1.8}{\omega_n}$$

Rise time depends primarily on $\omega_n$, not $\zeta$. Higher natural frequency means faster rise time. However, increasing $\omega_n$ to get a faster rise time also affects settling time and overshoot, so you can't tune it in isolation.

Peak time vs damping ratio

Peak time ($t_p$) is the time at which the response hits its first (and largest) peak:

$$t_p = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}}$$

The denominator $\omega_n\sqrt{1-\zeta^2}$ is the damped frequency $\omega_d$. As $\zeta$ increases toward 1, the damped frequency decreases and peak time increases. As $\zeta$ decreases toward 0, the peak comes sooner but the overshoot gets worse.
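The four transient formulas above can be collected into one helper. A minimal sketch, with $\zeta = 0.5$ and $\omega_n = 4$ rad/s chosen purely for illustration:

```python
import numpy as np

def second_order_specs(zeta, wn):
    """Standard time-domain specs for an underdamped 2nd-order system."""
    wd = wn * np.sqrt(1.0 - zeta**2)          # damped frequency
    return {
        "rise_time": 1.8 / wn,                # 10-90% approximation
        "peak_time": np.pi / wd,
        "settling_time_2pct": 4.0 / (zeta * wn),
        "overshoot_pct": 100.0 * np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2)),
    }

specs = second_order_specs(zeta=0.5, wn=4.0)
for name, value in specs.items():
    print(f"{name}: {value:.3f}")
```

For these values the system rises in about 0.45 s, peaks near 0.91 s with roughly 16% overshoot, and settles (2% band) in about 2 s.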

Steady-state error

Steady-state error ($e_{ss}$) is the difference between the desired output and the actual output as $t \to \infty$. It tells you how accurately the system tracks a reference signal in the long run. A system might have great transient behavior but still fail to reach the correct final value.

The steady-state error depends on two things: the system type (number of integrators in the open-loop transfer function) and the type of input (step, ramp, or parabolic).

Position error constant

The position error constant $K_p$ applies to step inputs:

$$K_p = \lim_{s \to 0} G(s)$$

$$e_{ss} = \frac{1}{1 + K_p}$$

If $K_p = \infty$ (which happens when there's at least one integrator in $G(s)$), the steady-state error for a step input is zero.

Velocity error constant

The velocity error constant $K_v$ applies to ramp inputs:

$$K_v = \lim_{s \to 0} sG(s)$$

$$e_{ss} = \frac{1}{K_v}$$

A Type 0 system has $K_v = 0$, meaning infinite steady-state error for a ramp. You need at least a Type 1 system (one integrator) to get a finite ramp error.

Acceleration error constant

The acceleration error constant $K_a$ applies to parabolic inputs:

$$K_a = \lim_{s \to 0} s^2 G(s)$$

$$e_{ss} = \frac{1}{K_a}$$

You need at least a Type 2 system (two integrators) to track a parabolic input with finite error.

System type and steady-state error

The system type equals the number of pure integrators (poles at $s = 0$) in the open-loop transfer function $G(s)$. Here's how type relates to tracking ability:

  • Type 0: Finite $K_p$. Tracks step inputs with a constant (nonzero) error. Infinite error for ramp and parabolic inputs.
  • Type 1: $K_p = \infty$, finite $K_v$. Zero error for step inputs, constant error for ramp inputs, infinite error for parabolic inputs.
  • Type 2: $K_p = \infty$, $K_v = \infty$, finite $K_a$. Zero error for step and ramp inputs, constant error for parabolic inputs.

A common mistake is thinking you can just add integrators to eliminate all error. Each added integrator makes the system harder to stabilize.
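The error constants can be approximated numerically by evaluating each limit at a very small $s$. A sketch, assuming a hypothetical Type 1 plant $G(s) = 10/(s(s+2))$ chosen for illustration:

```python
import numpy as np

def error_constants(num, den, s_small=1e-9):
    """Numerically approximate Kp, Kv, Ka for G(s) = num/den,
    with num and den given as coefficient lists (highest power first)."""
    G = lambda s: np.polyval(num, s) / np.polyval(den, s)
    s = s_small
    return G(s), s * G(s), s**2 * G(s)

# Hypothetical Type 1 plant: G(s) = 10 / (s(s+2)), one integrator
Kp, Kv, Ka = error_constants([10.0], [1.0, 2.0, 0.0])
print(f"Kp ~ {Kp:.3g}")  # very large (-> infinity): zero step error
print(f"Kv ~ {Kv:.3g}")  # ~5: ramp error 1/Kv = 0.2
print(f"Ka ~ {Ka:.3g}")  # ~0: infinite parabolic error
```

The numbers match the Type 1 row of the table: zero step error, constant ramp error, unbounded parabolic error.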

Dominant poles and time-domain specifications

Real systems often have more than two poles, but the transient response is usually shaped most strongly by the poles closest to the imaginary axis. These are the dominant poles, and they let you use second-order approximations even for higher-order systems.

Dominant vs non-dominant poles

  • Dominant poles are the closed-loop poles closest to the imaginary axis. Their transient modes decay the slowest, so they dominate the visible response.
  • Non-dominant poles are further left in the s-plane. Their associated transient modes decay much faster and contribute little to the overall response.

A common rule of thumb: if a non-dominant pole is at least 5 to 10 times further from the imaginary axis than the dominant poles, its effect can be neglected.

Second-order approximation

When a higher-order system has a pair of complex conjugate dominant poles and all other poles are far to the left, you can approximate the system as second-order using only the dominant poles. This lets you apply all the standard second-order formulas ($t_s$, $t_r$, $t_p$, $\%OS$) directly.

The approximation works well when non-dominant poles are much further from the imaginary axis and when there are no zeros near the dominant poles. Zeros near the dominant poles can significantly alter the transient response and invalidate the approximation.

Dominant poles and transient response

The location of the dominant poles in the s-plane maps directly to transient specs:

  • The real part ($-\sigma$) controls the exponential decay rate, which determines settling time: $t_s \approx 4/\sigma$.
  • The imaginary part ($\pm j\omega_d$) controls the oscillation frequency, which determines peak time: $t_p = \pi/\omega_d$.
  • The angle from the negative real axis corresponds to $\zeta$, which determines overshoot.

By specifying desired values for overshoot and settling time, you can compute the required pole locations and then design a controller to place the closed-loop poles there.
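A minimal sketch of that computation, assuming hypothetical specs of at most 10% overshoot and a 2-second (2% band) settling time:

```python
import numpy as np

# Hypothetical specs, chosen for illustration
pos_spec = 10.0   # max percent overshoot
ts_spec = 2.0     # max 2% settling time, seconds

logr = np.log(pos_spec / 100.0)
zeta = -logr / np.sqrt(np.pi**2 + logr**2)   # min damping ratio from %OS
sigma = 4.0 / ts_spec                        # min |Re(pole)| from ts ~ 4/(zeta*wn)
wn = sigma / zeta
wd = wn * np.sqrt(1.0 - zeta**2)             # damped frequency

poles = np.array([-sigma + 1j * wd, -sigma - 1j * wd])
print(poles)  # desired dominant pole pair
```

For these specs the required dominant poles land near $-2 \pm 2.73j$, which a controller must then realize as closed-loop poles.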

Time-domain design using root locus

The root locus is a graphical tool that shows how closed-loop pole locations move as a parameter (typically the loop gain $K$) varies from 0 to $\infty$. It connects open-loop characteristics to closed-loop performance and is one of the most practical design methods in classical control.

Root locus review

  • The locus starts at the open-loop poles (when $K = 0$) and ends at the open-loop zeros or at infinity (as $K \to \infty$).
  • Branches on the real axis exist to the left of an odd number of real-axis open-loop poles and zeros.
  • The root locus tells you at a glance which gain values produce stable, underdamped, overdamped, or unstable closed-loop behavior.

Selecting closed-loop pole locations

Given your time-domain specs, convert them into a target region in the s-plane:

  1. From the overshoot spec, compute the minimum $\zeta$. This defines a pair of radial lines from the origin.
  2. From the settling time spec, compute the minimum $\sigma = \zeta\omega_n$. This defines a vertical line in the left half-plane.
  3. From the rise time or peak time spec, compute the minimum $\omega_n$. This defines a circle centered at the origin.

The intersection of these regions is where the desired dominant poles should be placed. You then look for a gain $K$ that puts the root locus branches through (or near) that target region.
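The three constraints amount to a simple membership test for candidate pole locations. A sketch with hypothetical spec values:

```python
import numpy as np

def in_target_region(pole, zeta_min, sigma_min, wn_min):
    """Check a candidate dominant pole against the three s-plane constraints."""
    wn = abs(pole)              # distance from the origin
    sigma = -pole.real          # decay rate
    zeta = sigma / wn           # cosine of the angle from the negative real axis
    return zeta >= zeta_min and sigma >= sigma_min and wn >= wn_min

# Hypothetical spec region and two candidate closed-loop poles
print(in_target_region(-3 + 3j, zeta_min=0.5, sigma_min=2.0, wn_min=3.0))  # True
print(in_target_region(-1 + 4j, zeta_min=0.5, sigma_min=2.0, wn_min=3.0))  # False
```

The second candidate fails only the settling-time constraint: it decays too slowly even though it is far enough from the origin.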

Designing controllers for time-domain specs

If simple gain adjustment can't place the poles where you need them (the root locus doesn't pass through the target region), you'll need to reshape the locus with a compensator:

  • Lead compensator: Adds a zero-pole pair that pulls the locus toward the left half-plane, increasing phase margin and improving transient response.
  • Lag compensator: Adds a zero-pole pair near the origin that boosts low-frequency gain, reducing steady-state error without significantly affecting transient behavior.
  • Lead-lag compensator: Combines both effects when you need to improve both transient response and steady-state accuracy.

The required gain $K$ at the desired pole location is found using the magnitude condition of the root locus.
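The magnitude condition $K = 1/|G(s_d)|$ can be evaluated numerically. A sketch, assuming a hypothetical open loop $G(s) = 1/(s(s+4))$ and a desired dominant pole at $s_d = -2 + 2j$:

```python
import numpy as np

# Hypothetical open loop G(s) = 1 / (s(s+4)) as polynomial coefficients
num = [1.0]
den = [1.0, 4.0, 0.0]
s_d = -2.0 + 2.0j   # desired dominant pole

# Magnitude condition: K = 1 / |G(s_d)|
K = 1.0 / abs(np.polyval(num, s_d) / np.polyval(den, s_d))
print(K)  # 8.0
```

As a check, with $K = 8$ the closed-loop characteristic polynomial is $s^2 + 4s + 8$, whose roots are exactly $-2 \pm 2j$.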

Time-domain design using frequency response

Frequency response methods (Bode plots, Nyquist plots) offer an alternative path to meeting time-domain specs. They're especially useful when you have experimental frequency response data or when you need to assess robustness to model uncertainty.

Frequency response review

The frequency response describes how a system amplifies or attenuates sinusoidal inputs at each frequency, and how much phase shift it introduces. Bode plots show magnitude (in dB) and phase (in degrees) versus frequency on a log scale. Nyquist plots show the same information as a polar curve in the complex plane.

Bandwidth and rise time

Bandwidth ($\omega_{BW}$) is the frequency at which the closed-loop magnitude falls 3 dB below its DC value. It's inversely related to rise time:

$$t_r \approx \frac{1.8}{\omega_{BW}}$$

A wider bandwidth means a faster system. If your rise time spec requires a certain speed, you can translate that into a minimum bandwidth requirement and then shape the frequency response accordingly.

Resonant peak and overshoot

The resonant peak ($M_r$) is the maximum of the closed-loop frequency response magnitude. A larger resonant peak corresponds to more overshoot in the step response. For a second-order system:

$$M_r = \frac{1}{2\zeta\sqrt{1-\zeta^2}}$$

A resonant peak only exists when $\zeta < 1/\sqrt{2} \approx 0.707$. Keeping $M_r$ below a target value is equivalent to enforcing an overshoot limit.

Phase margin and stability

Phase margin ($\phi_m$) is measured at the gain crossover frequency (where the open-loop magnitude equals 0 dB). It equals the open-loop phase at that frequency plus 180°:

$$\phi_m = \angle G(j\omega_{gc}) + 180°$$

A rough relationship between phase margin and damping ratio for second-order systems:

$$\zeta \approx \frac{\phi_m}{100}$$ (for $\phi_m$ in degrees, valid roughly for $\phi_m < 70°$)

Typical designs aim for a phase margin of 30° to 60°. More phase margin means less overshoot and more robustness to modeling errors.

Gain margin and stability

Gain margin ($GM$) is measured at the phase crossover frequency (where the open-loop phase equals $-180°$). It's the reciprocal of the open-loop magnitude at that frequency:

$$GM = \frac{1}{|G(j\omega_{pc})|}$$

Expressed in dB, a positive gain margin means the system is stable. Typical designs aim for at least 6 dB of gain margin, meaning the gain could increase by a factor of 2 before the system goes unstable.
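Both margins can be estimated directly from frequency-response samples. A numpy sketch, assuming a hypothetical open loop $G(s) = 4/(s+1)^3$ (chosen for illustration), which has roughly 27° of phase margin and 6 dB of gain margin:

```python
import numpy as np

# Dense frequency grid and open-loop frequency response of G(s) = 4/(s+1)^3
w = np.logspace(-2, 2, 200001)
G = 4.0 / (1j * w + 1.0) ** 3
mag = np.abs(G)
phase = np.degrees(np.unwrap(np.angle(G)))   # unwrap before converting to degrees

i_gc = np.argmin(np.abs(mag - 1.0))          # gain crossover: |G| = 1 (0 dB)
phase_margin = 180.0 + phase[i_gc]

i_pc = np.argmin(np.abs(phase + 180.0))      # phase crossover: phase = -180 deg
gain_margin_db = -20.0 * np.log10(mag[i_pc])

print(f"phase margin ~ {phase_margin:.1f} deg")
print(f"gain margin  ~ {gain_margin_db:.1f} dB")
```

The 6 dB gain margin here matches the rule of thumb in the text: the loop gain could double before instability.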

Designing controllers for time-domain specs

Frequency-domain controller design involves reshaping the open-loop Bode plot to hit your targets:

  • Lead compensator: Adds phase near the crossover frequency, increasing phase margin. This reduces overshoot and improves transient response.
  • Lag compensator: Boosts gain at low frequencies without affecting the crossover region much, reducing steady-state error.
  • Notch filter: Targets a specific resonant frequency to suppress the resonant peak and limit overshoot.

The key idea is that time-domain specs translate into frequency-domain requirements (bandwidth, phase margin, gain margin, resonant peak), and you shape the Bode plot to meet those requirements.

Time-domain design using state-space methods

State-space methods handle the same design problem from a different angle. Instead of working with transfer functions and frequency plots, you work directly with the system's differential equations. This approach scales naturally to multi-input, multi-output (MIMO) systems and gives you direct control over all closed-loop pole locations.

State-space representation review

The state-space form consists of:

  • State equation: $\dot{x} = Ax + Bu$
  • Output equation: $y = Cx + Du$

Here $x$ is the state vector, $u$ is the input, $y$ is the output, and $A$, $B$, $C$, $D$ are matrices that define the system. The eigenvalues of $A$ are the open-loop poles. The state-space form is equivalent to the transfer function but more general, since it can represent systems that aren't easily described by a single transfer function.

Controllability and observability

Before designing a state-feedback controller, you need to verify two properties:

  • Controllability: Can you drive the state from any initial condition to any final condition using the input? Check by computing the controllability matrix $\mathcal{C} = [B \;\; AB \;\; A^2B \;\; \cdots \;\; A^{n-1}B]$ and verifying it has full rank.
  • Observability: Can you determine the system's state from the output measurements? Check by computing the observability matrix $\mathcal{O} = [C \;\; CA \;\; CA^2 \;\; \cdots \;\; CA^{n-1}]^T$ and verifying it has full rank.

If the system isn't controllable, you can't place all poles arbitrarily. If it isn't observable, you can't build a full-state observer.
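The two rank tests take only a few lines of numpy. A minimal sketch with a hypothetical two-state plant (all matrices assumed for illustration):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^(n-1)B] horizontally."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def observability_matrix(A, C):
    """Stack [C; CA; ...; CA^(n-1)] vertically."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Hypothetical 2-state plant
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

ctrb_rank = np.linalg.matrix_rank(controllability_matrix(A, B))
obsv_rank = np.linalg.matrix_rank(observability_matrix(A, C))
print(ctrb_rank == A.shape[0], obsv_rank == A.shape[0])  # True True
```

Both ranks equal the state dimension here, so this plant admits both arbitrary pole placement and a full-state observer.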

Pole placement using state feedback

With full-state feedback, the control law is:

$$u = -Kx$$

This changes the closed-loop system matrix from $A$ to $A - BK$. The eigenvalues of $A - BK$ are the closed-loop poles. If the system is controllable, you can choose $K$ to place these poles anywhere you want.

The design process:

  1. Convert your time-domain specs (overshoot, settling time, etc.) into desired pole locations.
  2. Compute the gain matrix $K$ such that $\text{eig}(A - BK)$ matches those desired locations. Methods include Ackermann's formula or direct coefficient matching.
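SciPy's `scipy.signal.place_poles` carries out step 2 directly. A sketch, assuming a hypothetical two-state plant and desired poles derived from a $\zeta \approx 0.707$, $\omega_n = 2$ spec:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical two-state plant
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])

# Desired poles from assumed specs: zeta ~ 0.707, wn = 2 -> s ~ -1.414 +/- 1.414j
desired = np.array([-1.414 + 1.414j, -1.414 - 1.414j])

# Solve for K so that eig(A - BK) matches the desired locations
K = place_poles(A, B, desired).gain_matrix
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print(np.sort_complex(closed_loop_poles))
```

Checking `eigvals(A - B @ K)` after the fact, as done here, is a cheap sanity test worth keeping in any pole-placement script.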

Observer design for state estimation

In practice, you rarely have access to all state variables. An observer (or Luenberger observer) estimates the states from the measured output:

$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$$

The matrix $L$ is the observer gain. The estimation error dynamics are governed by $A - LC$, and if the system is observable, you can place the observer poles wherever you want by choosing $L$.

A standard guideline is to make the observer poles 3 to 5 times faster than the controller poles, so the state estimates converge quickly and don't degrade the closed-loop performance.
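One common way to compute $L$ (an approach assumed here, not spelled out above) is to reuse the pole-placement routine on the dual pair $(A^T, C^T)$, since $\text{eig}(A - LC) = \text{eig}(A^T - C^T L^T)$:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2-state plant with a single measured output
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observer poles ~4x faster than assumed controller poles at -1 +/- 1j
observer_poles = np.array([-4.0 + 4.0j, -4.0 - 4.0j])

# Duality: design "state feedback" for (A^T, C^T), then transpose the gain
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T
error_poles = np.linalg.eigvals(A - L @ C)
print(np.sort_complex(error_poles))
```

The printed eigenvalues of $A - LC$ confirm the estimation error decays at the chosen observer poles.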

Designing controllers for time-domain specs

The full state-space design procedure:

  1. Translate time-domain specs into desired closed-loop pole locations.
  2. Verify controllability and observability of the plant.
  3. Design the state feedback gain $K$ via pole placement to achieve the desired poles.
  4. If full state measurement isn't available, design an observer with gain $L$ to estimate the states (with observer poles faster than the controller poles).
  5. Combine the controller and observer. The separation principle guarantees that the controller and observer can be designed independently: the closed-loop poles are the union of the controller poles and the observer poles.

This approach is systematic and extends naturally to MIMO systems, which is a major advantage over classical transfer-function methods.