Fiveable

🎛️Control Theory Unit 1 Review


1.4 Fourier analysis


Written by the Fiveable Content Team • Last updated August 2025

Fourier analysis lets you decompose complex signals into sums of simple sinusoids, then work with those frequency components individually. In control theory, this is essential because it connects the time domain (where you observe signals) to the frequency domain (where you design controllers and analyze system behavior). This topic covers Fourier series, Fourier transforms, their discrete counterparts, and the related Laplace and Z-transforms that tie everything together.

Fourier series representation

A Fourier series represents a periodic function as an infinite sum of sinusoids at different frequencies and amplitudes. This is the starting point for all of Fourier analysis: if you can break a repeating signal into sine and cosine components, you can study each component separately.

Periodic functions

A function f(t) is periodic if there exists a positive constant T such that f(t + T) = f(t) for all t. The smallest such T is called the fundamental period.

Common examples include sine waves (smooth oscillation), square waves (abrupt transitions between two levels), and sawtooth waves (linear ramp that resets periodically). Each of these looks very different in the time domain, but all can be expressed as sums of sinusoids.

Trigonometric series

The general form of a Fourier series is:

f(t) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t) \right)

  • a_0 is the DC component (the average value of the function over one period)
  • a_n and b_n are the Fourier coefficients that set the amplitude of each harmonic
  • \omega_0 = \frac{2\pi}{T} is the fundamental frequency in radians per second

Each term n\omega_0 is called the nth harmonic. The n = 1 term oscillates at the fundamental frequency, n = 2 at twice that frequency, and so on.

Fourier coefficients

To find the coefficients, you integrate the signal multiplied by the corresponding sinusoid over one full period:

  • a_0 = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, dt
  • a_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(n\omega_0 t)\, dt
  • b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(n\omega_0 t)\, dt

The intuition: each integral "picks out" how much of that particular frequency is present in f(t). If the signal has no energy at frequency n\omega_0, the corresponding coefficient will be zero.
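To make the coefficient formulas concrete, here's a small numerical check (a sketch using plain NumPy; the odd unit square wave is an assumed test signal). For that wave, theory predicts b_n = 4/(n\pi) for odd n, with all a_n and the even b_n equal to zero:

```python
import numpy as np

# Numerically approximate Fourier coefficients for an odd unit square
# wave: f = +1 on (0, T/2), f = -1 on (-T/2, 0). Sketch only.
T = 2.0                                  # assumed period
w0 = 2 * np.pi / T                       # fundamental frequency
t = np.linspace(-T / 2, T / 2, 100_000, endpoint=False)
dt = t[1] - t[0]
f = np.sign(np.sin(w0 * t))              # odd square wave

def a(n):
    # a_n = (2/T) * integral over one period of f(t) cos(n w0 t)
    return 2 / T * np.sum(f * np.cos(n * w0 * t)) * dt

def b(n):
    # b_n = (2/T) * integral over one period of f(t) sin(n w0 t)
    return 2 / T * np.sum(f * np.sin(n * w0 * t)) * dt

print(b(1), 4 / np.pi)    # odd harmonic: close to 4/pi
print(b(2), a(1))         # both close to zero
```

Because the square wave is odd, only the sine terms survive; every cosine integral cancels over the period.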

Convergence of Fourier series

The Fourier series converges to the original function under the Dirichlet conditions: the function must have a finite number of discontinuities, a finite number of extrema, and be absolutely integrable over one period. Most signals you'll encounter in control theory satisfy these conditions.

Convergence can be pointwise, uniform, or in the mean-square sense. At points of discontinuity, the series converges to the midpoint of the left and right limits.

The Gibbs phenomenon is worth knowing: when you truncate a Fourier series to approximate a discontinuous function, you'll see overshoots of about 9% near the discontinuities. Adding more terms makes the overshoot narrower but doesn't eliminate it.
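You can watch the Gibbs phenomenon numerically by truncating the square-wave series (a sketch; it assumes the odd square wave with period 1, whose odd harmonics have amplitude 4/(n\pi)):

```python
import numpy as np

# Truncated Fourier series of a +/-1 square wave with period 1.
# The partial sum overshoots the level 1 near the jump at t = 0,
# and adding terms narrows the overshoot without removing it.
t = np.linspace(-0.5, 0.5, 200_001)

def partial_sum(N):
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):              # odd harmonics only
        s += 4 / (n * np.pi) * np.sin(2 * np.pi * n * t)
    return s

for N in (9, 99, 999):
    print(N, partial_sum(N).max())            # peak stays near 1.18
```

The limiting peak is (2/\pi)\,\text{Si}(\pi) \approx 1.179, i.e., an overshoot of roughly 9% of the total jump of 2.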

Fourier transforms

Fourier series handle periodic signals, but real-world signals are often non-periodic. The Fourier transform generalizes the Fourier series to handle signals that don't repeat, giving you a continuous frequency spectrum instead of discrete harmonics.

Fourier transform definition

The Fourier transform of a continuous-time signal x(t) is:

X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt

The inverse Fourier transform recovers the time-domain signal:

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t}\, d\omega

X(\omega) is generally complex-valued. Its magnitude |X(\omega)| tells you how much of each frequency is present, and its phase \angle X(\omega) tells you the timing offset of each frequency component.
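As a quick sanity check of the definition, here's a numerical approximation of the Fourier transform of x(t) = e^{-at}u(t), whose closed form is X(\omega) = 1/(a + j\omega). (A sketch with an assumed decay rate a = 2 and a truncated Riemann sum standing in for the infinite integral.)

```python
import numpy as np

# Approximate X(w) = integral of x(t) e^{-jwt} dt for x(t) = e^{-2t} u(t).
a = 2.0
t = np.linspace(0.0, 20.0, 400_001)      # tail beyond t = 20 is negligible
dt = t[1] - t[0]
x = np.exp(-a * t)

def X(w):
    return np.sum(x * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, 5.0):
    print(w, X(w), 1 / (a + 1j * w))     # numeric vs closed form
```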

Fourier transform properties

Several properties make Fourier transforms practical to work with. Rather than computing integrals from scratch every time, you can use these properties to build up transforms of complex signals from simpler ones.

Linearity and scaling

  • Linearity: If x_1(t) \leftrightarrow X_1(\omega) and x_2(t) \leftrightarrow X_2(\omega), then ax_1(t) + bx_2(t) \leftrightarrow aX_1(\omega) + bX_2(\omega)
  • Scaling: If x(t) \leftrightarrow X(\omega), then x(at) \leftrightarrow \frac{1}{|a|} X\!\left(\frac{\omega}{a}\right)

The scaling property has a useful physical interpretation: compressing a signal in time (a > 1) spreads its spectrum out in frequency, and vice versa. A short pulse has a wide bandwidth; a long pulse has a narrow bandwidth.

Time and frequency shifting

  • Time shifting: x(t - t_0) \leftrightarrow X(\omega)\, e^{-j\omega t_0}
  • Frequency shifting: x(t)\, e^{j\omega_0 t} \leftrightarrow X(\omega - \omega_0)

A delay in time doesn't change the magnitude spectrum; it only adds a linear phase shift. Frequency shifting (multiplying by a complex exponential) slides the entire spectrum along the frequency axis, which is the basis of modulation in communications.
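The same behavior is easy to verify in discrete time, where the DFT version of the shift property says a circular shift by k_0 samples multiplies X[k] by e^{-j 2\pi k k_0 / N}. A sketch with an arbitrary random test signal:

```python
import numpy as np

# DFT check: shifting in time leaves |X[k]| unchanged and adds a
# linear phase e^{-j 2 pi k k0 / N}.
N, k0 = 32, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
x_shifted = np.roll(x, k0)               # circular shift by k0 samples

X = np.fft.fft(x)
Xs = np.fft.fft(x_shifted)
k = np.arange(N)

print(np.allclose(np.abs(X), np.abs(Xs)))                        # True
print(np.allclose(Xs, X * np.exp(-2j * np.pi * k * k0 / N)))     # True
```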

Convolution and modulation

  • Convolution: x_1(t) * x_2(t) \leftrightarrow X_1(\omega)\, X_2(\omega)
  • Modulation: x(t) \cos(\omega_0 t) \leftrightarrow \frac{1}{2}\left[ X(\omega - \omega_0) + X(\omega + \omega_0) \right]

The convolution property is arguably the most important for control theory. It means that passing a signal through a linear time-invariant (LTI) system, which involves convolution with the impulse response in the time domain, becomes simple multiplication in the frequency domain.

Discrete Fourier transforms (DFT)

When you work with digital systems, signals are sampled and stored as finite sequences. The DFT is the tool for frequency analysis of these discrete, finite-length signals.


DFT definition and properties

For a discrete-time signal x[n] of length N:

X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{N}kn}, \quad k = 0, 1, \ldots, N-1

The inverse DFT recovers the time-domain sequence:

x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{j\frac{2\pi}{N}kn}, \quad n = 0, 1, \ldots, N-1

The DFT shares many properties with the continuous Fourier transform (linearity, shifting, convolution), but the shifting and convolution are circular because the DFT implicitly treats the sequence as periodic with period N.
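The definition translates directly into code. Here's a direct O(N^2) implementation as a matrix-vector product, checked against NumPy's built-in FFT (a teaching sketch, not for production use):

```python
import numpy as np

# Direct DFT from the definition: X[k] = sum_n x[n] e^{-j 2 pi k n / N}.
def dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(-2j * np.pi * k * n / N)  # N x N matrix of exponentials
    return W @ x

x = np.random.default_rng(0).standard_normal(64)
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```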

Fast Fourier transform (FFT) algorithms

Computing the DFT directly requires O(N^2) operations. The FFT is an algorithm that computes the same result in O(N \log N) operations by exploiting symmetry and periodicity in the complex exponentials.

The most common variant is the Cooley-Tukey radix-2 algorithm, which recursively splits an N-point DFT into two N/2-point DFTs (requiring N to be a power of 2). For a 1024-point signal, this reduces the operation count from about 1,000,000 to roughly 10,000. The FFT is what makes real-time spectral analysis practical.
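The recursive even/odd split can be written in a few lines (a teaching sketch assuming N is a power of 2; real FFT libraries use iterative, heavily optimized variants):

```python
import numpy as np

# Recursive radix-2 Cooley-Tukey FFT: split into even- and odd-indexed
# halves, combine with twiddle factors. Requires len(x) to be 2^m.
def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])           # N/2-point DFT of even samples
    odd = fft_radix2(x[1::2])            # N/2-point DFT of odd samples
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N) * odd
    return np.concatenate([even + tw, even - tw])

x = np.random.default_rng(1).standard_normal(1024)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```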

Circular convolution

Circular convolution is the DFT's version of convolution:

y[n] = x[n] \circledast h[n] = \sum_{m=0}^{N-1} x[m]\, h[(n-m) \bmod N]

In the DFT domain, this becomes pointwise multiplication: Y[k] = X[k]\, H[k].

If you actually want linear convolution (the standard kind), you need to zero-pad both sequences to length N \geq L_x + L_h - 1 before taking the DFT. Otherwise the circular wrap-around will corrupt your result.
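A short numerical example shows both the wrap-around and the zero-padding fix (a sketch with small assumed sequences):

```python
import numpy as np

# Circular convolution via length-4 DFTs wraps the tail of the linear
# result back onto the start; padding to Lx + Lh - 1 = 6 avoids this.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 1.0])

circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 4)).real   # wrapped
N = len(x) + len(h) - 1
lin = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(circ)                              # [8. 7. 6. 9.] -- corrupted
print(lin)                               # matches np.convolve(x, h)
```

The linear result is [1, 3, 6, 9, 7, 4]; without padding, the tail [7, 4] wraps onto the first two samples, giving [8, 7, 6, 9].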

Zero-padding and aliasing

  • Zero-padding means appending zeros to a sequence before computing the DFT. This doesn't add new information, but it interpolates between frequency bins, giving you a finer-grained view of the spectrum.
  • Aliasing occurs when the sampling rate is too low to capture the signal's highest frequency components. High frequencies get "folded" into lower frequency bins, producing a distorted spectrum that can't be corrected after the fact.

Applications of Fourier analysis

Signal processing and filtering

Fourier transforms let you design filters by specifying what to keep and what to remove in the frequency domain:

  • Low-pass filters pass frequencies below a cutoff and attenuate higher ones (useful for removing high-frequency noise)
  • High-pass filters do the opposite (useful for removing DC offset or slow drift)
  • Band-pass and band-stop filters target specific frequency ranges

In practice, you transform the signal, multiply by the filter's frequency response, and inverse-transform back. This is often more efficient than time-domain convolution for long signals.
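A minimal sketch of that workflow (the sample rate, tone frequencies, and 50 Hz cutoff are all assumed for illustration):

```python
import numpy as np

# FFT-based brick-wall low-pass: transform, zero the bins above the
# cutoff, inverse-transform.
fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)             # 5 Hz tone to keep
noise = 0.5 * np.sin(2 * np.pi * 200 * t)     # 200 Hz tone to remove
x = clean + noise

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
X[freqs > 50] = 0                             # zero bins above 50 Hz
y = np.fft.irfft(X, n=len(x))

# Both tones fall exactly on DFT bins here, so the filter removes the
# 200 Hz component essentially perfectly.
print(np.max(np.abs(y - clean)))
```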

Frequency response of systems

The frequency response H(\omega) describes how an LTI system amplifies or attenuates each frequency and how much phase shift it introduces. You obtain it by evaluating the transfer function on the imaginary axis:

H(\omega) = H(s)\big|_{s=j\omega}

The magnitude |H(\omega)| gives the gain at each frequency, and \angle H(\omega) gives the phase shift. This is the foundation for Bode plots and Nyquist plots, which you'll use extensively in control design.
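For a concrete example, take the first-order lag H(s) = 1/(s + 1) (an assumed example system) and evaluate gain and phase at a few frequencies:

```python
import numpy as np

# Frequency response of H(s) = 1/(s + 1), evaluated at s = j*w.
num = [1.0]                  # numerator coefficients of H(s)
den = [1.0, 1.0]             # denominator: s + 1

def freq_response(w):
    s = 1j * w
    return np.polyval(num, s) / np.polyval(den, s)

for w in (0.1, 1.0, 10.0):
    H = freq_response(w)
    print(w, abs(H), np.degrees(np.angle(H)))

# At the corner frequency w = 1: |H| = 1/sqrt(2) (about -3 dB), phase -45 deg.
```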

Spectral analysis and synthesis

Spectral analysis decomposes a signal into its frequency components to identify dominant frequencies, measure power spectral density, or detect periodicities hidden in noisy data.

Spectral synthesis goes the other direction: you construct a signal with specific frequency content by summing sinusoids with chosen amplitudes and phases. This is useful for generating test signals or simulating disturbances.

Control system design using frequency domain

Frequency-domain methods are central to classical control design:

  • Bode plots show gain and phase vs. frequency on logarithmic scales, making it easy to read off gain margin and phase margin
  • Nyquist plots map the open-loop frequency response onto the complex plane, providing a graphical stability criterion
  • Nichols charts combine gain and phase information for closed-loop performance analysis

Controllers (e.g., lead, lag, PID) are often designed by shaping the open-loop frequency response to achieve desired bandwidth, disturbance rejection, and robustness specifications.

Laplace transforms vs Fourier transforms

The Laplace transform is a generalization of the Fourier transform that can handle a broader class of signals, including growing exponentials and transient signals that don't have a Fourier transform in the classical sense.

Laplace transform definition and properties

The (unilateral) Laplace transform of x(t) is:

X(s) = \int_{0}^{\infty} x(t)\, e^{-st}\, dt

Here s = \sigma + j\omega is a complex variable. The real part \sigma provides an exponential weighting factor that can make otherwise non-convergent integrals converge. Laplace transforms share the same key properties as Fourier transforms: linearity, scaling, time-shifting, and the convolution-to-multiplication correspondence.


Relationship between Laplace and Fourier transforms

The Fourier transform is a special case of the Laplace transform evaluated on the imaginary axis:

X(\omega) = X(s)\big|_{s=j\omega}

This relationship holds when the region of convergence (ROC) of the Laplace transform includes the imaginary axis, which is the case for stable systems. For unstable systems, the Fourier transform may not exist, but the Laplace transform still works because the e^{-\sigma t} factor forces convergence.

Stability analysis using Laplace transforms

The poles of a system's transfer function H(s) directly determine stability:

  • Poles in the left-half plane (\text{Re}(s) < 0): stable, decaying transients
  • Poles on the imaginary axis (\text{Re}(s) = 0): marginally stable, sustained oscillations
  • Poles in the right-half plane (\text{Re}(s) > 0): unstable, growing transients

The pole locations also tell you about transient response characteristics. Poles farther left decay faster. Complex pole pairs produce oscillatory responses, with the imaginary part setting the oscillation frequency and the real part setting the decay rate.

Inverse Laplace transforms and partial fraction expansion

To go from X(s) back to x(t), you typically use partial fraction expansion:

  1. Factor the denominator of X(s) into its roots (the poles)
  2. Decompose X(s) into a sum of simpler fractions, each with one pole
  3. Look up or recognize the inverse transform of each fraction (e.g., \frac{1}{s+a} \leftrightarrow e^{-at}u(t))
  4. Sum the individual time-domain terms

This technique is the standard way to find step responses, impulse responses, and general transient behavior of LTI systems.
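The steps above can be sketched numerically for X(s) = 1/((s+1)(s+2)) (an assumed example with simple poles): the residue at each pole is the coefficient of its partial fraction, and the inverse transform is a sum of exponentials.

```python
import numpy as np

# Partial fractions for X(s) = 1/((s + 1)(s + 2)): residues at the
# simple poles give X(s) = 1/(s+1) - 1/(s+2), so
# x(t) = e^{-t} - e^{-2t} for t >= 0.
poles = np.array([-1.0, -2.0])

def residue_at(p):
    # residue of 1/prod_k (s - p_k) at the simple pole p
    others = poles[poles != p]
    return 1.0 / np.prod(p - others)

def x_of_t(t):
    return sum(residue_at(p) * np.exp(p * t) for p in poles)

print(residue_at(-1.0), residue_at(-2.0))        # 1.0 and -1.0
print(x_of_t(0.5), np.exp(-0.5) - np.exp(-1.0))  # should agree
```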

Z-transforms

The Z-transform is the discrete-time counterpart of the Laplace transform. It plays the same role for discrete-time (sampled) systems that the Laplace transform plays for continuous-time systems.

Z-transform definition and properties

The Z-transform of a discrete-time signal x[n] is:

X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}

Here z is a complex variable. The Z-transform maps sequences to functions of z, and it shares the familiar properties: linearity, time-shifting (x[n-k] \leftrightarrow z^{-k} X(z)), and convolution becomes multiplication.

Relationship between Z-transforms and Fourier transforms

The discrete-time Fourier transform (DTFT) is obtained by evaluating the Z-transform on the unit circle:

X(\omega) = X(z)\big|_{z=e^{j\omega}}

This parallels how the continuous Fourier transform is the Laplace transform evaluated on the imaginary axis. The region of convergence (ROC) of the Z-transform determines whether this evaluation is valid and also encodes stability and causality information about the system.

Discrete-time systems analysis using Z-transforms

Stability criteria in the z-domain mirror those in the s-domain, but the boundary is the unit circle instead of the imaginary axis:

  • Poles inside the unit circle (|z| < 1): stable
  • Poles on the unit circle (|z| = 1): marginally stable
  • Poles outside the unit circle (|z| > 1): unstable

The mapping between the s-plane and z-plane is z = e^{sT}, where T is the sampling period. The left-half s-plane maps to the interior of the unit circle, which is why the stability boundary shifts from the imaginary axis to the unit circle.
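Checking stability in code is just a root-finding problem. A sketch with an assumed second-order pulse transfer function, plus an example of the z = e^{sT} mapping:

```python
import numpy as np

# Poles of H(z) = 1 / (z^2 - 1.2 z + 0.35): both must satisfy |z| < 1.
den = [1.0, -1.2, 0.35]
poles = np.roots(den)                     # 0.7 and 0.5
stable = bool(np.all(np.abs(poles) < 1))
print(poles, stable)

# A stable continuous pole s = -2 maps inside the unit circle
# for sampling period T = 0.1 s: z = e^{sT} = e^{-0.2}.
T = 0.1
z = np.exp(-2 * T)
print(z, z < 1)                           # ~0.8187, True
```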

Inverse Z-transforms and partial fraction expansion

The process mirrors the continuous case:

  1. Express X(z) as a ratio of polynomials in z
  2. Perform partial fraction expansion
  3. Match each term to a known Z-transform pair (e.g., \frac{z}{z-a} \leftrightarrow a^n u[n])
  4. Sum the results

Power series expansion (long division) is another option: divide the numerator by the denominator to get coefficients directly, which are the values of x[n].
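A sketch of the long-division approach for X(z) = 1/(1 - 0.5 z^{-1}) (an assumed example whose sequence is x[n] = 0.5^n): treating numerator and denominator as polynomials in z^{-1}, each quotient coefficient is one sample of x[n].

```python
# Power-series inverse Z-transform by polynomial long division in z^{-1}.
# Example: X(z) = 1 / (1 - 0.5 z^{-1})  ->  x[n] = 0.5^n.
num = [1.0]              # numerator coefficients in powers of z^{-1}
den = [1.0, -0.5]        # denominator: 1 - 0.5 z^{-1}

def long_division(num, den, n_terms):
    rem = list(num) + [0.0] * (n_terms + len(den))  # working remainder
    out = []
    for _ in range(n_terms):
        q = rem[0] / den[0]              # next coefficient of x[n]
        out.append(q)
        for i, d in enumerate(den):      # subtract q * den
            rem[i] -= q * d
        rem.pop(0)                       # shift to the next power of z^{-1}
    return out

print(long_division(num, den, 6))        # [1.0, 0.5, 0.25, 0.125, ...]
```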

Sampling and reconstruction

Sampling converts continuous-time signals to discrete-time, and reconstruction does the reverse. Understanding the limits of this process is critical for any digital control system.

Nyquist-Shannon sampling theorem

The Nyquist-Shannon sampling theorem states that a band-limited signal with maximum frequency f_{max} can be perfectly reconstructed from its samples if the sampling rate satisfies:

f_s \geq 2f_{max}

The minimum rate 2f_{max} is called the Nyquist rate. For example, audio signals with content up to 20 kHz require a sampling rate of at least 40 kHz (CD audio uses 44.1 kHz, providing some margin).

Aliasing and anti-aliasing filters

When you sample below the Nyquist rate, frequencies above f_s/2 fold back into the range [0, f_s/2], appearing as lower-frequency components that weren't in the original signal. This is aliasing, and it's irreversible once the signal has been sampled.

Anti-aliasing filters are analog low-pass filters placed before the sampler. They attenuate all frequency content above f_s/2 so that aliasing doesn't occur. In practice, you need some guard band between the signal bandwidth and f_s/2 because real filters don't have infinitely sharp cutoffs.
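Aliasing is easy to demonstrate numerically (a sketch with assumed frequencies): a 7 Hz sine sampled at 10 Hz sits above f_s/2 = 5 Hz, so its samples are indistinguishable from those of a sine at -3 Hz, i.e., a phase-inverted 3 Hz tone folded about f_s/2.

```python
import numpy as np

# A 7 Hz sine sampled at fs = 10 Hz aliases: since 7 = fs - 3, its
# samples are identical to those of a sine at -3 Hz (a phase-inverted
# 3 Hz tone).
fs = 10.0
n = np.arange(20)
t = n / fs
samples_7hz = np.sin(2 * np.pi * 7 * t)
samples_alias = np.sin(2 * np.pi * (-3) * t)

print(np.allclose(samples_7hz, samples_alias))   # True
```

Once only the samples remain, no processing can tell the two signals apart, which is why the anti-aliasing filter must act before sampling.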

Ideal vs practical sampling

  • Ideal sampling assumes instantaneous samples (impulse sampling) and perfect sinc-function reconstruction. It's a mathematical model, not physically realizable.
  • Practical sampling uses sample-and-hold circuits that hold each sample value constant until the next sample. This introduces the aperture effect, which slightly attenuates high frequencies with a sinc-shaped roll-off.

The choice of sampling rate, quantization resolution (number of bits per sample), and reconstruction filter quality all affect the fidelity of the digitized signal.

Signal reconstruction from samples

Reconstruction converts discrete samples back to a continuous signal. The ideal reconstruction filter is a perfect low-pass filter with cutoff at f_s/2, which corresponds to sinc interpolation in the time domain. Since perfect sinc interpolation requires infinite-length filters, practical systems use approximations:

  • Zero-order hold (ZOH): holds each sample constant until the next one. Simple but introduces a staircase effect and sinc-shaped frequency distortion.
  • Linear interpolation (first-order hold): draws straight lines between samples. Smoother than ZOH but still imperfect.
  • Sinc interpolation: the theoretically ideal method, approximated in practice with windowed sinc filters.

Higher-order interpolation methods get closer to ideal reconstruction at the cost of increased computation and delay.